Dec 30, 2016

yequalsx: yes, almost by definition, memory made up of bits should be modeled with a field of characteristic 2. That seems the most "natural" way to do it.

However, the authors of this paper are thinking about "computers" in a broader sense.

Take a look at these two papers:

* "Neural Turing Machines:"

* "Hybrid computing using a neural network with dynamic external memory:" and a related blog post at

The memory space of these "neural" computers is formally modeled as R^n (n being the total number of parameters and memory addresses) and implemented approximately with 32-bit floating-point numbers in practice (i.e., 32n bits of memory, giving (2^32)^n representable states). As you read the paper linked by the OP, think about these neural computers instead of the one atop your desk.
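To make the "memory as R^n" idea concrete, here is a toy sketch of NTM-style differentiable memory in NumPy. This is illustrative, not the papers' actual implementation: all names (`beta`, slot counts, etc.) are assumptions, and the key point is only that reads and writes are soft, weighted over every slot, so gradients can flow through them.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_weights(M, key, beta):
    """Attention over memory slots by cosine similarity to a key vector."""
    sims = M @ key / (np.linalg.norm(M, axis=1) * np.linalg.norm(key) + 1e-8)
    return softmax(beta * sims)  # beta sharpens the focus

def read(M, w):
    """Soft read: a weighted average of all slots, not a single address."""
    return w @ M

def write(M, w, erase, add):
    """Soft write: every slot is updated in proportion to its weight."""
    return M * (1 - np.outer(w, erase)) + np.outer(w, add)

# Tiny demo: 4 slots of 3 floats each.
M = np.zeros((4, 3))
v = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 0.0, 0.0, 0.0])        # fully focused on slot 0
M = write(M, w, erase=np.ones(3), add=v)
r = read(M, w)                            # recovers v exactly
w2 = content_weights(M, key=v, beta=50.0)
r2 = read(M, w2)                          # content lookup also recovers ~v
```

Because every operation above is differentiable in the memory and the weights, the whole memory system can be trained end-to-end by gradient descent, which a field-of-characteristic-2 model of memory would not allow.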

EDITS: removed sloppy math and lightly edited some sentences.

Jul 31, 2016

The author proposes a lot of vague ideas in this article (for example, "I believe one of the biggest problems is the use of Error Propagation and Gradient Descent") without references or any solid argument for why they are necessary to solve the proposed problem (automating programming using ML?).

In fact, there is already a lot of solid work on exactly this subject:

* Learning algorithms from examples

* Generating source code from natural language description

* And, the work closest to what the author probably wants: a way to write a program in Forth while leaving some functions as neural black boxes to be learned from examples:

* There is also a whole research program, by no less than Facebook AI Research, that explicitly aims at creating a conversational AI agent able to translate a user's natural-language orders into programs (asking the user additional questions if necessary): (there is also a summary here)

And DeepMind is also working on conversational agents:

Given the current success of such models, automating simple programming tasks may be not so much a research problem as an engineering and scaling-up problem.

There is a lot of exciting machine learning research out there nowadays, and almost all of it is freely available as papers posted on arXiv. It is a really good idea to read up on the state of the art before coming up with new ideas.

Jul 22, 2016

There are signs, if you know where to look for them:

May 20, 2016

Here's the original paper on Neural Turing machines:

IIRC, that paper's results focus on the model's ability to extrapolate its predictions to sequences of arbitrary length after having been trained only on sequences of a fixed length.
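The copy task from that paper is a concrete example of this extrapolation test. Below is a sketch of the usual setup as I understand it (the exact encoding and lengths are assumptions, not taken from the paper): the model sees a random binary sequence followed by a delimiter, then must replay the sequence; training uses short sequences, and testing uses much longer ones to check whether the learned store-then-replay algorithm generalizes.

```python
import numpy as np

def copy_task_example(seq_len, width=8, rng=None):
    """Build one (input, target) pair for a copy-style task (illustrative)."""
    if rng is None:
        rng = np.random.default_rng(0)
    seq = rng.integers(0, 2, size=(seq_len, width)).astype(float)
    delim = np.zeros((1, width + 1))
    delim[0, -1] = 1.0                     # delimiter on its own channel
    inp = np.vstack([
        np.hstack([seq, np.zeros((seq_len, 1))]),  # the sequence itself
        delim,                                     # "now replay it" marker
        np.zeros((seq_len, width + 1)),            # blanks while replaying
    ])
    target = seq                           # model must emit seq after delim
    return inp, target

train_inp, train_tgt = copy_task_example(seq_len=10)  # trained short...
test_inp, test_tgt = copy_task_example(seq_len=50)    # ...tested long
```

A model that merely memorizes fixed-length patterns fails on the longer test inputs, while one that has learned an addressable store-and-replay procedure keeps working, which is what made the NTM results notable.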