Nov 27, 2019

Check out MuZero. It learns an embedding of the game's state space, potentially allowing AlphaGo-like dominance in more domains.

https://www.youtube.com/watch?v=We20YSAJZSE

https://arxiv.org/abs/1911.08265
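
To make that concrete, the learned model boils down to three functions: a representation network that embeds the raw observation into a latent state, a dynamics network that steps that latent state forward given an action, and a prediction network that outputs a policy and value to guide the search. Here's a rough numpy sketch of that structure -- the dimensions and "weights" below are random placeholders, greedy argmax stands in for the real MCTS, and the reward head of the dynamics function is omitted:

    import numpy as np

    rng = np.random.default_rng(0)
    OBS_DIM, LATENT_DIM, N_ACTIONS = 16, 8, 4

    # Random "weights" standing in for trained deep networks.
    W_repr = rng.normal(size=(OBS_DIM, LATENT_DIM))
    W_dyn  = rng.normal(size=(LATENT_DIM + N_ACTIONS, LATENT_DIM))
    W_pol  = rng.normal(size=(LATENT_DIM, N_ACTIONS))
    w_val  = rng.normal(size=LATENT_DIM)

    def representation(observation):
        """h: raw observation -> abstract latent state (the learned embedding)."""
        return np.tanh(observation @ W_repr)

    def dynamics(state, action):
        """g: (latent state, action) -> next latent state.
        Once trained, planning never touches the real simulator."""
        one_hot = np.eye(N_ACTIONS)[action]
        return np.tanh(np.concatenate([state, one_hot]) @ W_dyn)

    def prediction(state):
        """f: latent state -> (policy logits, value estimate) used to guide search."""
        return state @ W_pol, float(state @ w_val)

    # Plan entirely inside the learned latent space.
    s = representation(rng.normal(size=OBS_DIM))
    for _ in range(5):
        logits, value = prediction(s)
        a = int(np.argmax(logits))      # greedy stand-in for MCTS
        s = dynamics(s, a)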

(Actually, ML is still not very good at causal reasoning, so we have some time. I'm more excited and worried about CRISPR at this point; what happens when we can make people genetically superhuman?)

Nov 23, 2019

> no AI that I know of has at its disposal a full blown model of the world it operates in

This is a field called model-based reinforcement learning, and it's quite advanced already -- there are indeed models that maintain an internal state reflecting the state of the world.

A good recent example:

https://papers.nips.cc/paper/7512-recurrent-world-models-fac...
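
The setup in that paper is a vision model V that compresses each frame to a latent code, a recurrent memory model M that predicts the next code, and a small controller C that acts on the latent code plus M's hidden state. A minimal sketch of that loop, with random placeholder weights instead of the paper's trained VAE and MDN-RNN:

    import numpy as np

    rng = np.random.default_rng(1)
    FRAME_DIM, Z_DIM, H_DIM, N_ACTIONS = 64, 8, 16, 3

    W_v = rng.normal(size=(FRAME_DIM, Z_DIM))                   # V: frame -> z
    W_m = rng.normal(size=(Z_DIM + H_DIM + N_ACTIONS, H_DIM))   # M: recurrent state update
    W_c = rng.normal(size=(Z_DIM + H_DIM, N_ACTIONS))           # C: (z, h) -> action

    def encode(frame):          # V (stand-in for the paper's VAE encoder)
        return np.tanh(frame @ W_v)

    def step_memory(z, h, a):   # M (stand-in for the MDN-RNN)
        one_hot = np.eye(N_ACTIONS)[a]
        return np.tanh(np.concatenate([z, h, one_hot]) @ W_m)

    def act(z, h):              # C (a simple linear controller, as in the paper)
        return int(np.argmax(np.concatenate([z, h]) @ W_c))

    h = np.zeros(H_DIM)
    for _ in range(10):
        frame = rng.normal(size=FRAME_DIM)   # would be an env observation, or M's own prediction when "dreaming"
        z = encode(frame)
        a = act(z, h)
        h = step_memory(z, h, a)

The point is that (z, h) is exactly the "internal model of the world it operates in": the agent can even be trained inside rollouts of M, without touching the real environment.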

> deep learning model, however much we'd like to think they do, aren't capable of doing proper causal inference in a general setting

This is also addressed, somewhat, by recent models. Once you have an abstract world model, searching for a high-reward outcome can be just a matter of running Markovian simulations on it, guided by reward heuristics (given by a network, of course), the way AlphaGo does. This line of work is also very active right now; one example is the recent MuZero.

https://arxiv.org/abs/1911.08265
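
In code, "searching the learned model for high reward" is just simulated lookahead: step the learned dynamics forward, score trajectories with learned reward/value heuristics, and take the first action of the best one. A toy version -- MuZero uses MCTS with a policy prior rather than the brute-force search below, and all of these "learned" functions are random stand-ins:

    import numpy as np

    rng = np.random.default_rng(2)
    LATENT_DIM, N_ACTIONS, DEPTH, GAMMA = 6, 3, 3, 0.99

    W_dyn = rng.normal(size=(LATENT_DIM + N_ACTIONS, LATENT_DIM))
    W_rew = rng.normal(size=LATENT_DIM + N_ACTIONS)
    w_val = rng.normal(size=LATENT_DIM)

    def dynamics(s, a):
        """Learned transition: next latent state given (state, action)."""
        return np.tanh(np.concatenate([s, np.eye(N_ACTIONS)[a]]) @ W_dyn)

    def reward(s, a):
        """Learned reward head for a (state, action) pair."""
        return float(np.concatenate([s, np.eye(N_ACTIONS)[a]]) @ W_rew)

    def value(s):
        """Learned value head: estimated return from a leaf state."""
        return float(s @ w_val)

    def simulated_return(s, depth):
        """Best return achievable under the learned model, by exhaustive lookahead."""
        if depth == 0:
            return value(s)
        return max(reward(s, a) + GAMMA * simulated_return(dynamics(s, a), depth - 1)
                   for a in range(N_ACTIONS))

    def plan(s):
        """Pick the first action of the best simulated trajectory."""
        return max(range(N_ACTIONS),
                   key=lambda a: reward(s, a) + GAMMA * simulated_return(dynamics(s, a), DEPTH - 1))

    print("chosen action:", plan(rng.normal(size=LATENT_DIM)))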

Inference, at its core, really isn't much more than artful curve fitting (or artful model search, if you like), and it's one of the building blocks of intelligence.
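
Taking the curve-fitting framing literally: the "world" below is a hidden quadratic, the observations are noisy, and inference is just picking the coefficients that explain them best (the data and polynomial degree are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(3)
    x = np.linspace(-1, 1, 50)
    y = 2.0 * x**2 - 0.5 * x + rng.normal(scale=0.1, size=x.size)  # hidden "world" plus noise

    coeffs = np.polyfit(x, y, deg=2)   # fitted model of that world
    print(coeffs)                       # roughly [2.0, -0.5, 0.0]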

Nov 22, 2019

There are certainly challenges, but "incorporating a world model" has been going well recently: "Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model"

https://arxiv.org/abs/1911.08265