I'm not too sure how practical the suggested "social-systems analysis" approach is. It is summarized as:
"A practical and broadly applicable social-systems analysis thinks through all the possible effects of AI systems on all parties."
which seems extremely difficult to do exhaustively. I hope the authors will describe their approach in more detail in future publications.
Also, somewhat of a nitpick, but the article states, in reference to Google:
"The company has also proposed introducing a ‘red button’ into its AI systems that researchers could press should the system seem to be getting out of control."
but cites a paper whose contribution is mitigating the effects of interrupting reinforcement learning agents. The paper makes only a passing reference to a "big red button", since that is a common method for interrupting physically situated agents, but the button itself is certainly not the contribution or focus of the work.