Dec 30, 2016

yequalsx: yes, almost by definition, memory made up of bits should be modeled with a field of characteristic 2. That seems the most "natural" way to do it.
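To make that concrete (a minimal illustration of my own, not something from the paper): in GF(2), addition is XOR and multiplication is AND, so flipping a bit is just field addition.

    # Arithmetic in GF(2), the field of characteristic 2:
    # addition is XOR, multiplication is AND.
    def gf2_add(a, b):
        return a ^ b  # 1 + 1 = 0, because the field has characteristic 2

    def gf2_mul(a, b):
        return a & b

    # A 1-bit memory cell flipped by "adding" 1 to it:
    cell = 0
    cell = gf2_add(cell, 1)  # cell == 1
    cell = gf2_add(cell, 1)  # cell == 0 again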

However, the authors of this paper are thinking about "computers" in a broader sense.

Take a look at these two papers:

* "Neural Turing Machines:" https://arxiv.org/abs/1410.5401

* "Hybrid computing using a neural network with dynamic external memory:" http://www.nature.com/nature/journal/v538/n7626/full/nature2... and a related blog post at https://deepmind.com/blog/differentiable-neural-computers

The memory space of these "neural" computers is formally modeled as R^n (n being the total number of parameters and memory addresses), and approximated in practice with 32-bit floating-point numbers (i.e., 32n bits of underlying storage, giving (2^32)^n representable states). As you read the paper linked to by the OP, think about these neural computers instead of the one atop your desk.
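As a rough sketch of what modeling memory as R^n buys you, here is a content-based read in the spirit of the NTM paper (names and shapes are my own, not the paper's code): the address is a softmax over similarities rather than a discrete pointer, so reads are differentiable and the whole machine can be trained by gradient descent.

    import numpy as np

    def content_read(memory, key, beta=5.0):
        """Differentiable read from a real-valued memory matrix.

        memory: (N, M) array, N slots of M real numbers each
        key:    (M,) query vector
        beta:   sharpness of the addressing (higher -> closer to one-hot)
        """
        # Cosine similarity between the key and every memory slot.
        sims = memory @ key / (
            np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
        # Soft addressing: a probability distribution over slots,
        # not a discrete pointer -- this is what makes it differentiable.
        w = np.exp(beta * sims)
        w /= w.sum()
        # The read value is a weighted blend of all slots.
        return w @ memory

    rng = np.random.default_rng(0)
    M = rng.standard_normal((8, 4))   # 8 slots x 4 floats: a point in R^32
    r = content_read(M, M[3])         # querying with slot 3's own content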

EDITS: removed sloppy math and lightly edited some sentences.

Jul 31, 2016

The author proposes a lot of vague ideas in this article (for example, "I believe one of the biggest problems is the use of Error Propagation and Gradient Descent") without references or any solid argument for why these stand in the way of the stated goal (automating programming using ML?).

In fact, there is already a lot of solid work on exactly this subject:

* Learning algorithms from examples: http://arxiv.org/abs/1511.07275 https://arxiv.org/abs/1410.5401

* Generating source code from natural language descriptions: http://arxiv.org/abs/1510.07211

* And, closest to what the author probably wants, a way to write a program in Forth while leaving some functions as neural black boxes to be learned from examples (see the sketch after this list): http://arxiv.org/abs/1605.06640

* There is also a whole research program, by nothing less than Facebook AI Research, that explicitly aims at creating a conversational AI agent able to translate a user's natural-language instructions into programs (asking the user additional questions when necessary): http://arxiv.org/abs/1511.08130 (there is also a summary here: http://colinraffel.com/wiki/a_roadmap_towards_machine_intell... )

And DeepMind is also working on conversational agents: https://youtu.be/vQXAsdMa_8A?t=1265
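To illustrate the black-box idea from the Forth paper above in miniature (this shows only the general principle, not the paper's differentiable Forth machinery): a hand-written program with one hole, where the hole is fitted by gradient descent from input/output examples of the whole program's desired behavior.

    import numpy as np

    rng = np.random.default_rng(0)

    # A hand-written "program" with one hole: we know we must scale the
    # input somehow, but we leave that function as a trainable black box.
    w = rng.standard_normal()  # the black box's single parameter

    def blackbox(x):
        return w * x

    def program(x):
        # Fixed, hand-coded steps around the learned hole.
        return blackbox(x) + 1.0

    # Input/output examples of the whole program's desired behavior;
    # here the target behavior happens to be f(x) = 3x + 1.
    xs = rng.standard_normal(100)
    ys = 3.0 * xs + 1.0

    # Fit the hole by gradient descent on squared error, differentiating
    # through the fixed surrounding code (trivially, in this tiny case).
    lr = 0.1
    for _ in range(200):
        pred = program(xs)
        grad = np.mean(2.0 * (pred - ys) * xs)  # d(loss)/dw
        w -= lr * grad

    print(round(w, 3))  # ~3.0: the black box learned the missing scaling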

Given the current success of such models, automating simple programming tasks may be less a research problem than an engineering and scaling-up problem.

There is a lot of exciting machine learning research out there nowadays, and almost all of it is freely available as papers posted on arXiv. It is a really good idea to read up on the state of the art before coming up with new ideas.

Jul 22, 2016

There are signs, if you know where to look for them: http://arxiv.org/abs/1510.07211 https://arxiv.org/abs/1410.4615 http://arxiv.org/abs/1605.06640 https://arxiv.org/abs/1410.5401

May 20, 2016

Here's the original paper on Neural Turing machines: https://arxiv.org/abs/1410.5401

IIRC, that paper's results focus on the model's ability to generalize on sequential tasks, correctly handling sequences much longer than any it was trained on (e.g., on the copy task).
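For reference, the copy task used there looks roughly like this (a sketch of the data format under my own naming, not the authors' code): the model sees random binary vectors followed by a delimiter channel and must then emit the sequence again, with training on short sequences and evaluation on much longer ones.

    import numpy as np

    rng = np.random.default_rng(0)

    def copy_task_example(length, width=8):
        """One copy-task example in the style of the NTM paper.

        Input:  `length` random binary vectors, then a delimiter step.
        Target: the same `length` vectors, to be emitted afterwards.
        """
        seq = rng.integers(0, 2, size=(length, width)).astype(float)
        delimiter = np.zeros((1, width + 1))
        delimiter[0, -1] = 1.0  # extra channel flags "end of input"
        padded = np.hstack([seq, np.zeros((length, 1))])
        inputs = np.vstack([padded, delimiter])
        return inputs, seq

    # Train on short sequences, test extrapolation on much longer ones.
    train_example = copy_task_example(length=rng.integers(1, 21))
    test_example = copy_task_example(length=100)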