Jun 04, 2016

Yes, very disappointing. Generally anything with "AI" in the title means the HN comments won't be worth reading. It's a big problem, and I'm not sure how solvable it is.

Basically, the paper discusses ways to ensure that a learning agent "will not learn to prevent (or seek!) being interrupted by the environment or a human operator. We provide a formal definition of safe interruptibility and exploit the off-policy learning property to prove that either some agents are already safely interruptible, like Q-learning, or can easily be made so, like Sarsa."[1]
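
For readers who don't do RL: the off-policy point comes down to what the update rule bootstraps on. Here's a rough tabular sketch of the two update rules (variable names like Q, alpha, gamma are mine, not the paper's):

    from collections import defaultdict

    # Tabular action values; hyperparameter names are illustrative.
    Q = defaultdict(float)
    ACTIONS = [0, 1]
    alpha, gamma = 0.1, 0.99

    def q_learning_update(s, a, r, s2):
        # Off-policy: bootstraps on the greedy next action, so it is
        # indifferent to whether an interruption overrides what the agent
        # actually does in s2. That's why Q-learning is already safely
        # interruptible.
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    def sarsa_update(s, a, r, s2, a2):
        # On-policy: bootstraps on the action a2 actually taken in s2.
        # If a2 was forced by an interruption, the update gets biased by
        # the interruptions; the paper's fix is, roughly, to update using
        # the action the policy would have taken on its own.
        target = r + gamma * Q[(s2, a2)]
        Q[(s, a)] += alpha * (target - Q[(s, a)])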

It's an interesting result, and can probably be extended to other less hype-worthy scenarios.

[1] http://intelligence.org/files/Interruptibility.pdf