Aug 21, 2019

The question becomes: how much understanding is enough?

Do we need AI engineers passing laws, or AI training courses for politicians?

Can the AI be treated like a black box [0] and the regulation still be effective?

I think in some cases the answer would be yes. A simple thought experiment demonstrates this.

Example: consider regulating AI in relation to job losses. Does one need to know the details of the AI technology to develop effective regulation? I would argue not.

The economic impacts can be analysed. At some tipping point, job losses lead to a cycle of:

reduced demand -> reduced revenue for companies -> reductions in jobs -> reduced demand, and so on.
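To make the loop concrete, here is a minimal sketch of the cascade in Python. Every number in it (the feedback factor, the buffer firms can absorb, the sizes of the initial shocks) is a made-up illustration, not an economic claim; the only point is that small shocks die out while shocks past some tipping point feed on themselves.

```python
# Toy sketch of the demand -> revenue -> jobs -> demand loop.
# All parameters are made up purely for illustration.

def cascade(initial_job_losses, feedback=1.2, buffer=10.0, rounds=30):
    """Each lost job removes one unit of demand; firms absorb small revenue
    drops (up to `buffer`) but cut `feedback` jobs per unit of loss beyond it,
    which removes more demand in the next round."""
    total = initial_job_losses
    this_round = initial_job_losses
    for _ in range(rounds):
        lost_revenue = this_round                      # reduced demand -> reduced revenue
        cuts = max(0.0, lost_revenue - buffer) * feedback
        if cuts < 0.01:
            break                                      # shock absorbed, cycle dies out
        total += cuts                                  # reductions in jobs ...
        this_round = cuts                              # ... -> reduced demand, and so on
    return total

for shock in (20, 50, 80):                             # below and above the tipping point
    print(f"initial losses {shock:3d} -> total losses {cascade(shock):10.1f}")
```

In this sketch the tipping point is simply the shock size beyond which each round of cuts is larger than the last; a real analysis would obviously need real data and a properly estimated model.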

To regulate this, one would have to model the economic impacts as best one could: understand which jobs are most likely to be lost, and in which industries; identify the key indicators to monitor (e.g. the replacement job rate, assuming jobs are created in other industries); and pass regulation to mitigate some of the risks, or to ensure the number of job losses never hits the tipping point. None of these tasks leads me to believe the person passing the regulation needs to understand AI at an engineer's level.
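As an illustration of what monitoring those indicators might look like, here is a hypothetical rule that takes purely economic readings (AI-related job losses and jobs created elsewhere) and flags when intervention might be warranted. The threshold values and the idea of a single tipping-point figure are assumptions made for the sake of the sketch; note that none of the inputs require knowing how the AI itself works.

```python
# Hypothetical monitoring rule, continuing the sketch above.
# Thresholds are illustrative assumptions, not recommendations.

def needs_intervention(jobs_lost_to_ai, jobs_created_elsewhere,
                       tipping_point=60.0, min_replacement_rate=0.8):
    """Flag when net losses approach the tipping point or too few
    replacement jobs are being created in other industries."""
    replacement_rate = jobs_created_elsewhere / max(jobs_lost_to_ai, 1e-9)
    net_losses = jobs_lost_to_ai - jobs_created_elsewhere
    return net_losses >= tipping_point or replacement_rate < min_replacement_rate

# Example readings from a hypothetical quarterly labour-market report.
print(needs_intervention(jobs_lost_to_ai=30, jobs_created_elsewhere=28))  # False
print(needs_intervention(jobs_lost_to_ai=30, jobs_created_elsewhere=15))  # True: low replacement rate
```

Everything the rule looks at is an economic measurement; the AI stays a black box.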

There is the Conant-Ashby theorem, or good regulator theorem [1], which states that "every good regulator of a system must be a model of that system" [2].

In this instance the regulation is meant to deal with ECONOMIC impacts. Therefore an economic model is needed; while that model is influenced by the technology, the technology itself does not need to be modelled. Hence my conclusion.

This poses an interesting question: could all eventualities be catered for in the regulation? There are other theories that could help us answer this question.

Happy to hear different or contradictory views.

[0] https://en.m.wikipedia.org/wiki/Black_box#Other_theories

[1] https://en.m.wikipedia.org/wiki/Good_regulator

[2] http://pespmc1.vub.ac.be/books/Conant_Ashby.pdf