Feb 29, 2020

The actual paper seems to be this: https://arxiv.org/abs/1905.10615

PDF: https://arxiv.org/pdf/1905.10615.pdf

Website with videos: https://adversarialpolicies.github.io/ (that would make a better submission imho)

Github: https://github.com/HumanCompatibleAI/adversarial-policies

You have to stretch the definition of "new" somewhat to arrive at the title TR chose; adversarial effects in all kinds of learning settings certainly aren't new. The paper itself, though, seems to contain quite interesting thoughts on how to assess such attacks (as opposed to just using them to steer the training process).