Nov 06, 2017

Yeah, I'm confused that this is the top comment; it's factually incorrect. NASNet is a result of applying AutoML, not the research behind it. To quote the Google blog post on NASNet:

>In Learning Transferable Architectures for Scalable Image Recognition, we apply AutoML to the ImageNet image classification and COCO object detection dataset... AutoML was able to find the best layers that work well on CIFAR-10 but [also] work well on ImageNet classification and COCO object detection. These two layers are combined to form a novel architecture, which we called “NASNet”.

[https://research.googleblog.com/2017/11/automl-for-large-sca..., November 2017]
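
To make "these two layers are combined to form a novel architecture" concrete, here's a rough sketch of the stacking pattern: a motif of resolution-preserving "normal" cells punctuated by resolution-halving "reduction" cells. The cell internals and widths below are placeholders of mine, not the actual searched structures; PyTorch is assumed:

    import torch.nn as nn

    class NormalCell(nn.Module):
        """Placeholder for the searched cell that keeps spatial size."""
        def __init__(self, ch):
            super().__init__()
            self.op = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1),
                nn.BatchNorm2d(ch),
                nn.ReLU())
        def forward(self, x):
            return x + self.op(x)

    class ReductionCell(nn.Module):
        """Placeholder for the searched cell that halves spatial size."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.op = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU())
        def forward(self, x):
            return self.op(x)

    def build_nasnet_like(num_classes=10, repeats=2, width=32):
        """Stack the two cells in the NASNet pattern: N normal cells,
        then one reduction cell, repeated."""
        layers, ch = [nn.Conv2d(3, width, 3, padding=1)], width
        for _ in range(3):
            layers += [NormalCell(ch) for _ in range(repeats)]
            layers.append(ReductionCell(ch, ch * 2))
            ch *= 2
        layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                   nn.Linear(ch, num_classes)]
        return nn.Sequential(*layers)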

In contrast, AutoML itself is, as the NYTimes article describes, "a machine-learning algorithm that learns to build other machine-learning algorithms". More specifically, from the Google blog post about AutoML:

>In our approach (which we call "AutoML"), a controller neural net can propose a “child” model architecture, which can then be trained and evaluated for quality on a particular task... Eventually the controller learns to assign high probability to areas of architecture space that achieve better accuracy on a held-out validation dataset, and low probability to areas of architecture space that score poorly.

[https://research.googleblog.com/2017/05/using-machine-learni..., May 2017]
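
To make that loop concrete, here's a toy REINFORCE-style controller over a discrete architecture space. The space, the reward function, and the hyperparameters are all made up for illustration; the real system uses an RNN controller and trains real child networks:

    import numpy as np

    rng = np.random.default_rng(0)

    # Architecture space: one discrete choice per decision
    # (e.g. filter size, width, skip pattern).
    CHOICES = [3, 4, 2]                          # options per decision
    logits = [np.zeros(n) for n in CHOICES]      # controller policy params
    lr, baseline = 0.1, 0.0

    def sample_architecture():
        """Controller proposes a "child" architecture: one index per decision."""
        probs = [np.exp(l) / np.exp(l).sum() for l in logits]
        return [rng.choice(len(p), p=p) for p in probs], probs

    def train_and_evaluate(arch):
        """Stand-in for training the child model and measuring held-out
        validation accuracy; here a fixed "best" architecture scores 1.0."""
        best = [2, 1, 0]
        return 1.0 - sum(a != b for a, b in zip(arch, best)) / len(best)

    for step in range(500):
        arch, probs = sample_architecture()
        reward = train_and_evaluate(arch)            # child model's accuracy
        baseline = 0.9 * baseline + 0.1 * reward     # variance reduction
        advantage = reward - baseline
        for i, a in enumerate(arch):
            # REINFORCE: grad of log prob w.r.t. logits is (one-hot - probs)
            grad = -probs[i]
            grad[a] += 1.0
            logits[i] += lr * advantage * grad

    print("most probable architecture:",
          [int(np.argmax(l)) for l in logits])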

Quoc, Barret, and others have been working on neural-architecture-design systems for a while now (see https://arxiv.org/abs/1611.01578), and the AutoML work predates the NASNet announcement. Saying that NASNet is "the actual research behind AutoML" draws the causal arrow backwards.

Jul 14, 2017

There has been some work related to what you are describing:

Learning to learn by gradient descent by gradient descent https://arxiv.org/abs/1606.04474
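
The core idea of that paper, in toy form: the optimizer itself has parameters, which are meta-trained on how well the models it trains come out. The paper learns an LSTM update rule by gradient descent through the unrolled optimization; the sketch below swaps in a two-parameter rule meta-trained by random search, just to show the two-level structure:

    import numpy as np

    rng = np.random.default_rng(0)

    def inner_train(meta_params, steps=50):
        """Optimizee: fit w to a random quadratic using the *learned* update
        rule instead of plain SGD; returns the final loss."""
        log_lr, momentum_coef = meta_params
        target = rng.normal(size=5)
        w = np.zeros(5)
        velocity = np.zeros(5)
        for _ in range(steps):
            grad = 2.0 * (w - target)              # d/dw ||w - target||^2
            velocity = momentum_coef * velocity + grad
            w = w - np.exp(log_lr) * velocity      # the learned update rule
        return float(np.sum((w - target) ** 2))

    # Meta-training: search for update-rule parameters that make the inner
    # training runs end up with low loss (the paper does this by gradient
    # descent through the unrolled inner loop).
    best, best_loss = None, float("inf")
    for _ in range(200):
        candidate = rng.uniform(low=[-5.0, 0.0], high=[0.0, 0.99])
        loss = np.mean([inner_train(candidate) for _ in range(5)])
        if loss < best_loss:
            best, best_loss = candidate, loss

    print("meta-learned (log lr, momentum):", best,
          "mean final loss:", best_loss)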

Related (but less so), there are also some papers about learning neural network architectures:

Designing Neural Network Architectures using Reinforcement Learning https://arxiv.org/abs/1611.02167

Neural Architecture Search with Reinforcement Learning https://arxiv.org/abs/1611.01578

Mar 31, 2017

Neural Architecture Search with Reinforcement Learning

https://arxiv.org/abs/1611.01578

Too bad, it is already there :P

Feb 05, 2017

Deep learning architectures built by machines (so we no longer have to design architectures by hand to solve problems) https://arxiv.org/abs/1611.01578

Transfer Learning (so we need less data to build models; a minimal sketch follows this list) http://ftp.cs.wisc.edu/machine-learning/shavlik-group/torrey...

Generative adversarial networks (so computers can get human-like abilities at generating content) https://papers.nips.cc/paper/5423-generative-adversarial-net...
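
For the transfer learning point, the standard recipe is to reuse a network pretrained on a big dataset and retrain only a small new head, which is why far less task-specific data is needed. A minimal sketch (PyTorch assumed; the dataset and class count are placeholders):

    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 10                        # placeholder for your task

    model = models.resnet18(pretrained=True)
    for param in model.parameters():        # freeze the pretrained features
        param.requires_grad = False

    # Replace the final layer; only these new weights get trained.
    model.fc = nn.Linear(model.fc.in_features, num_classes)

    optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """One fine-tuning step on a (small) labeled batch."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # e.g.: train_step(torch.randn(8, 3, 224, 224),
    #                  torch.randint(0, num_classes, (8,)))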

Jan 02, 2017

Other equally exciting papers that relate to learning to learn in DL:

"Neural Architecture Search with Reinforcement Learning"

https://arxiv.org/abs/1611.01578

"RL^2: Fast Reinforcement Learning via Slow Reinforcement Learning"

https://arxiv.org/abs/1611.02779

"Designing Neural Network Architectures using Reinforcement Learning"

https://arxiv.org/abs/1611.02167