Yes, very disappointing. Generally anything with "AI" in the title means the HN comments won't be worth reading. It's a big problem, and I'm not sure how solvable it is.
Basically, the paper discusses how to ensure that learning agents "will not learn to prevent (or seek!) being interrupted by the environment or a human operator. We provide a formal definition of safe interruptibility and exploit the off-policy learning property to prove that either some agents are already safely interruptible, like Q-learning, or can easily be made so, like Sarsa."
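To see why the off-policy property matters here, consider the two standard update rules (this is just a sketch of the textbook updates, not the paper's construction; all names are illustrative). Q-learning's target maxes over next actions, so an operator forcing a different action doesn't change what gets backed up; Sarsa's target uses the action actually executed, so forced interruptions leak into the learned values:

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Off-policy: target uses the greedy max over next actions,
    regardless of what the behavior policy (or an interruption) does."""
    target = r + gamma * max(Q[s_next].values())
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.5, gamma=0.9):
    """On-policy: target uses the next action actually taken,
    so a forced 'stop' action shows up in the learned value."""
    target = r + gamma * Q[s_next][a_next]
    Q[s][a] += alpha * (target - Q[s][a])

# Tiny two-state example: in s1 the greedy action is 'stay' (value 1.0),
# but suppose an operator interrupt forces 'stop' (value 0.0).
Q = {"s0": {"go": 0.0}, "s1": {"stay": 1.0, "stop": 0.0}}
q_learning_update(Q, "s0", "go", r=0.0, s_next="s1")
print(Q["s0"]["go"])  # 0.45: backs up the max, unaffected by the interruption

Q = {"s0": {"go": 0.0}, "s1": {"stay": 1.0, "stop": 0.0}}
sarsa_update(Q, "s0", "go", r=0.0, s_next="s1", a_next="stop")
print(Q["s0"]["go"])  # 0.0: backs up the forced action's value instead
```

This is the intuition behind Q-learning already being safely interruptible while Sarsa needs modification, per the quoted abstract.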
It's an interesting result, and can probably be extended to other less hype-worthy scenarios.