Sep 13, 2016

Would you please direct me to an example of MIRI's Red Team efforts other than the recent "Malevolent AI" paper [0]? Adherence to the belief that UFAI is a threshold-grade existential risk seems to compel a "define first, delay implementation" strategy, lest any step forward be the irrevocable wrong one.

[0] http://arxiv.org/abs/1605.02817