Dec 04, 2016

Lamport has always been in favor of rigorous mathematical reasoning about programs, but has always warned against looking for a "new math", the "right kind of math", or a "silver bullet math" (in other talks he calls it "weird computer-science math"), as he believes that ordinary math is all you need. So yeah, Lamport is not a big fan of FP (or any PL theory, for that matter). He's been fighting the Robin Milner school of thought, or PL semantics in general, since at least the early nineties. He says here that once you deal with all the incidental complexity, you're left with the essential complexity, and that is the hard part; he's skeptical of the ability of "new math" to help solve that problem.

See: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

Oct 05, 2016

It's not playing fast and loose at all - at least not with the precise mathematical definition (perhaps it is playing fast and loose with the "literal every-day" definition, which might mean a state machine with few enough states to fit on a whiteboard). A pushdown automaton is defined as having an infinite stack, and a Turing machine is defined as having an infinite tape. A pushdown automaton with a finite stack admits an equivalent (much larger) finite state machine. A Turing machine with a finite tape likewise admits an equivalent (much, much larger) finite state machine.
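
To make the finite-stack case concrete, here's a toy Python sketch of my own (the PDA, its alphabet, and the depth bound are all hypothetical): once the stack depth is bounded, each pair (control state, stack contents) is one state of an ordinary finite state machine, and we can simply enumerate them.

    # Sketch: a bounded-stack PDA becomes a finite state machine whose states
    # are (control_state, stack_contents) pairs. Toy PDA that tracks balanced
    # parentheses up to a fixed depth (hypothetical example).
    MAX_DEPTH = 3
    ALPHABET = "()"

    def pda_step(state, ch):
        """One step of the toy PDA: push on '(', pop on ')'. Returns None if stuck."""
        control, stack = state
        if ch == "(":
            return (control, stack + "(") if len(stack) < MAX_DEPTH else None
        if ch == ")":
            return (control, stack[:-1]) if stack else None
        return None

    def build_fsm(start=("q0", "")):
        """Enumerate the equivalent FSM: every reachable (control, stack) pair is a state."""
        states, transitions, frontier = {start}, {}, [start]
        while frontier:
            s = frontier.pop()
            for ch in ALPHABET:
                t = pda_step(s, ch)
                if t is not None:
                    transitions[(s, ch)] = t
                    if t not in states:
                        states.add(t)
                        frontier.append(t)
        return states, transitions

    states, transitions = build_fsm()
    print(len(states), "FSM states for stack depth", MAX_DEPTH)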

Leslie Lamport makes the same observation on page 4 here: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

The comment I replied to was "My experience shows it is a state machine, but I have not see any formal study of it." - I was simply replying "of course it's a state machine".

Lots of situations in GUIs can be modelled as quite compact state machines - but it depends on the level you model them at. If you need to model the state of a text box, then your state machine will grow very large, since you need a new state for every possible input string. Given modelling simplifications, where we abstract away some parts with large state spaces by collapsing states, there are very few pieces of software that don't benefit from being modelled with state machines.
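
As a rough illustration of that kind of collapsing (a hypothetical login dialog of my own; the states and events are illustrative, not from any real GUI toolkit): abstract the text box to "empty" vs. "non-empty" instead of one state per possible string, and the dialog becomes a small, explicit state machine.

    # Sketch: a login dialog as a small state machine after abstracting the
    # text box to {empty, non-empty}. (Hypothetical example.)
    TRANSITIONS = {
        ("Idle", "focus_field"):        "Editing",
        ("Editing", "typed_char"):      "NonEmpty",
        ("NonEmpty", "cleared"):        "Editing",
        ("NonEmpty", "click_submit"):   "Submitting",
        ("Submitting", "server_ok"):    "Done",
        ("Submitting", "server_error"): "NonEmpty",
    }

    def step(state, event):
        # Unhandled (state, event) pairs leave the state unchanged,
        # like an event a widget simply ignores.
        return TRANSITIONS.get((state, event), state)

    state = "Idle"
    for event in ["focus_field", "typed_char", "click_submit", "server_ok"]:
        state = step(state, event)
    print(state)  # -> "Done"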

Personally, I would be a very frustrated programmer if I had to program user interfaces with Turing machines: writing a transition table would feel awfully low level, and since the Turing machine model has no facilities for user interaction, I'm not sure any users would be terribly happy with the result (so I'm not quite sure what you're trying to say here).

Oct 02, 2016

While not exactly a debate -- more of a fundamental difference in outlook -- these are interesting, completely opposite claims:

Bob Harper[1]:

> There is an alternative… without… reference to an underlying machine… [W]e adopt a linguistic model of computation, rather than a machine model, and life gets better! There is a wider range of options for expressing algorithms, and we simplify the story of how algorithms are to be analyzed.

Leslie Lamport[2]:

> Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. … State machines… provide a uniform way to describe computation with simple mathematics. The obsession with language is a strong obstacle to any attempt at unifying different parts of computer science.

[1]: https://existentialtype.wordpress.com/2011/03/16/languages-a...

[2]: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

Sep 01, 2016

Well, the 1994 paper does have about 2500 citations:

https://www.microsoft.com/en-us/research/wp-content/uploads/...

and an older paper covering an interesting theoretical property has over 1000. Also, the author did get a Turing award -- not primarily for this, but it got a mention. So yeah, we're talking about a very well-known formalism in software verification, and people were impressed. I don't know about "groundbreaking", because the community values different things: in software verification you get points for new proof methods and such, not necessarily for creating an elegant language. TLA+ is about finding the ideas that have been shown to work at scale and creating a very elegant, minimalist language to express them, based on simple, ordinary math as much as possible.

There is no shortage of good tutorials (especially considering how simple TLA+ is, so there's no need for 500 tutorials on monads), and the language was designed for, and is used by, engineers in industry working on large, complex systems. Companies that have started to use TLA+ (or use it extensively) on large systems -- Amazon, Oracle and Microsoft -- report that their engineers pick it up on their own from available materials in two weeks (that was my experience, too; it's easier than most programming languages). But those are tutorials and don't focus on the theory. Amazon published two excellent technical reports, one in CACM, about their experience, with pretty good data (that's what convinced me; I'm impressed by what's been shown to work in the field).

There is also no lack of more academic, theoretical material, but, as usual, it is mostly concerned with technical details.

What's missing is a brief overview of the theory. I'd read at least the introduction to the original paper above, this 4-page historical note[1], and this[2].

[1]: http://research.microsoft.com/en-us/um/people/lamport/pubs/c...

[2]: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

Sep 01, 2016

> their "native" tree form, which is cheap

It is not cheap at all: type checking can be arbitrarily hard, depending on the type system, and can perform arbitrary computation at the "validation" stage. This is real, significant (and useful, depending on the circumstances) computational work that no formalism or representation can reduce (only hide, by pushing it over to the collaborator).
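
To make the point concrete, here's a toy Python sketch of my own (nothing from the parent comment): a "type"/schema check whose validation itself has to run nontrivial code, so the work doesn't disappear just because we call it checking rather than computing.

    # Sketch: "validation" that performs real computation. Checking that a
    # value inhabits a rich "type" can be arbitrarily expensive, because the
    # check may have to run arbitrary code. (Hypothetical illustration.)
    def sorted_list_of_primes(xs):
        """The 'type' of sorted lists of primes: checking membership needs primality tests."""
        def is_prime(n):
            return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
        return xs == sorted(xs) and all(is_prime(n) for n in xs)

    print(sorted_list_of_primes([2, 3, 5, 104729]))  # True, and the check did real work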

> but because they are the richest lingua franca for every academic discipline to represent their ideas

I disagree. I strongly recommend you (and everyone) read this terrific essay by Lamport on a lingua franca for algorithms: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

Not only is TLA just as powerful as dependent types, it is far simpler[1]. Lamport says that the reason PLers aren't interested in TLA+ is that it's so simple there's not much to write about it. You see plenty of papers about embedding all kinds of things (separation logic, cost models, concurrency, etc.) in dependent types, things that are so trivial in TLA+ that they're barely worth a mention. Ideas from Hoare logic, behavioral refinement, and the differences between notions of program equivalence from process calculi (trace equivalence vs. bisimulation) simply collapse into simple, familiar logic in TLA (the only thing missing is probabilistic reasoning).

Lamport wrote about his experience trying to publish a paper showing that specifying real-time systems with TLA is trivial (it's a matter of defining a time variable): http://research.microsoft.com/en-us/um/people/lamport/pubs/p... You can only write a few papers on a simple approach that solves many problems, but lots and lots of papers that show how to use complicated approaches to solve them.
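
Roughly, the idea (rendered here as a toy Python sketch of my own, not Lamport's actual TLA formulation) is that time is just another state variable, advanced by a tick action, and the timing requirement is an ordinary predicate on states:

    # Sketch: real time as just another state variable. A watchdog must fire
    # within DEADLINE time units of being armed; "now" advances via tick().
    # (Toy rendering of the idea; the variable names are mine.)
    DEADLINE = 5

    def init():
        return {"now": 0, "armed_at": None, "fired": False}

    def arm(s):
        return {**s, "armed_at": s["now"]}

    def fire(s):
        return {**s, "fired": True, "armed_at": None}

    def tick(s, dt=1):
        return {**s, "now": s["now"] + dt}

    def timing_ok(s):
        # Safety property: an armed watchdog is never overdue.
        return s["armed_at"] is None or s["now"] - s["armed_at"] <= DEADLINE

    s = tick(arm(init()), dt=3)
    print(timing_ok(s))              # True: 3 <= 5
    print(timing_ok(tick(s, dt=4)))  # False: 7 > 5, this behavior violates the property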

A lingua franca must be simple. TLA+ uses notation and concepts that are familiar to all mathematicians and all computer scientists, plus a couple of new concepts that can be learned in a few days. Dependent types (and intuitionistic logic) are almost as arcane today as they were decades ago, virtually unknown outside the circles of logic and PLT, each of which is itself a rather isolated sub-discipline within math and computer science. That so many papers are written about dependent types is strong evidence that they cannot serve as the lingua franca, and pretty conclusive proof that they cannot do so just yet.

That PLers, as Lamport writes in his comment on my post, fail to see that Plotkin's SOS is an abstract state machine, and that some fail to see that computing a set-theory function and computing a type-theory function are two different computational problems with radically different computational complexity, is further evidence that language models obscure rather than reveal.

Of course, as Lamport also says, real-world programming is very complicated, and so programming languages are justifiably complex (and whether dependent types can help with that remains to be seen[2]). But that complexity is absolutely not required for a lingua franca with clear and simple semantics (TLA's semantics are far simpler than those of any programming language) whose purpose is specifying and analyzing algorithms.

Disclosure: I've contributed some code to the TLA+ project.

[1]: Not only do properties ("types") and algorithms share the same terms in TLA+, they are the same kind of object. This leads some people to believe that TLA+ doesn't support higher-order algorithms, when, in fact, the issue simply doesn't arise. Instead of a parameter of a certain type, you have a concrete program that is the type (e.g., the program that nondeterministically returns, say, any even integer).
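
Here's a loose rendering of that footnote in code (my own analogy, not TLA+ itself): the "type" is the maximally nondeterministic program, and "having the type" is refinement, i.e., containment of what the two programs may produce.

    # Sketch of "the program is the type": a nondeterministic program is
    # modelled as the predicate (set) of values it may produce; refinement is
    # simply containment/implication. (Illustrative analogy only.)
    def any_even(x):
        """The maximally nondeterministic 'even integer' program: may yield any even x."""
        return isinstance(x, int) and x % 2 == 0

    def doubler_outputs(x):
        """A concrete program that only ever yields 4, 8, 12, ... (doubling 2, 4, 6, ...)."""
        return isinstance(x, int) and x > 0 and x % 4 == 0

    def refines(impl, spec, samples):
        """impl refines spec if every value impl allows is also allowed by spec."""
        return all(spec(x) for x in samples if impl(x))

    print(refines(doubler_outputs, any_even, range(-100, 101)))  # True
    print(refines(any_even, doubler_outputs, range(-100, 101)))  # False: 2 is even but not a multiple of 4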

[2]: I'm skeptical because I believe we're close to the "Brooks limit": http://blog.paralleluniverse.co/2016/07/23/correctness-and-c...

Jun 14, 2016

> but for 99% of mathematics there is a far more direct link to pure functions.

But state transitions are pure! Deterministic state machines are nothing more than mathematical functions from one algorithm state to the next, and nondeterministic ones are binary relations! (See this for an overview: http://research.microsoft.com/en-us/um/people/lamport/pubs/s...) Those can then be mathematically manipulated, substituted and what-have-you, elegantly and simply. This is how formal verification has been reasoning about programs since the seventies (early on the formalization was a bit more cumbersome, but the concept was just as simple).
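
A toy Python sketch of that reading (my own example, not from the linked paper): a deterministic step is a pure function from state to state, and a nondeterministic step is a relation, here rendered as a function from a state to the list of its possible successors.

    # Sketch: deterministic step = pure function state -> state;
    # nondeterministic step = relation, here state -> possible next states.
    # (Toy counter example; the names are mine.)
    def det_step(state):
        """Deterministic: increment the counter."""
        return {"x": state["x"] + 1}

    def nondet_step(state):
        """Nondeterministic: either increment the counter or reset it."""
        return [{"x": state["x"] + 1}, {"x": 0}]

    # A (finite, deterministic) behavior is just repeated function application:
    s = {"x": 0}
    for _ in range(3):
        s = det_step(s)
    print(s)  # {'x': 3}

    # All states reachable in two nondeterministic steps:
    reachable = [t for s1 in nondet_step({"x": 0}) for t in nondet_step(s1)]
    print(reachable)  # [{'x': 2}, {'x': 0}, {'x': 1}, {'x': 0}]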

> For example, look at the various proof assistants like Coq.

For example, look at the specification language (and proof assistant) TLA+. It is far simpler than Coq, just as pure, and requires no more than high-school math.

Coq and other PFP formalizations are built on the core idea of denotational semantics, an approximation which at times only complicates the very simple math of computation, requiring exotic math like category theory.

> the manipulation of formulas and proofs fits very well with an immutable, functional approach.

The manipulation of formulas and proofs fits very well with the imperative approach, too, except that no category theory is required, and core concepts such as simulation/trace-inclusion arise naturally, without the need for complex embeddings. "PFP math" is what arises after you've decided to use denotational semantics. The most important thing is not to confuse language -- for which PFP has an elegant, albeit arcane, mathematical formalization -- with the very simple mathematical concepts underlying computation.
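
As a rough illustration of why trace inclusion falls out of this view (again, a toy sketch of my own, with bounded trace length so everything is enumerable): if a system's meaning is its set of behaviors, then "Impl implements Spec" is just containment of trace sets.

    # Sketch: trace inclusion as plain set containment. Each system is given by
    # an initial state and a nondeterministic next-state relation; its meaning
    # is the set of traces it can produce. (Toy example.)
    def traces(init, next_states, length):
        """All traces of the given length produced by the state machine."""
        result = [(init,)]
        for _ in range(length - 1):
            result = [t + (s,) for t in result for s in next_states(t[-1])]
        return set(result)

    # Spec: the counter may stay put or go up by one.
    spec_next = lambda x: [x, x + 1]
    # Impl: the counter always goes up by one (a deterministic refinement).
    impl_next = lambda x: [x + 1]

    impl_traces = traces(0, impl_next, 4)
    spec_traces = traces(0, spec_next, 4)
    print(impl_traces <= spec_traces)  # True: every behavior of Impl is a behavior of Spec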

PFP is an attempt to apply one specific mathematical formulation to programming languages (other similar, though perhaps less successful, attempts are the various process calculi). But it is by no means the natural mathematics of computation. Conflating language with algorithms is a form of deconstructionism: an interesting philosophical concept, but by no means a simple one, and perhaps not the most appropriate one for a science. Short of that, we have the basic, simple core math, and on top of it languages, some espousing various mathematical formalizations related to the core math in interesting ways, and some less mathematical. But the math of computation isn't created by the language!

Jun 14, 2016

But in a way, imperative is more natural, because it captures the notion of computation more precisely. Haskell -- like other pure FP languages -- is built around the approximation of denotational semantics, which does have a bit of a mismatch with computation (not to say it isn't extremely useful much of the time). Anyway, mathematical thinking about programs didn't start with PFP, nor is PFP the most common way of formalizing programs. See:

http://research.microsoft.com/en-us/um/people/lamport/pubs/s...

> I believe that the best way to get better programs is to teach programmers how to think better. Thinking is not the ability to manipulate language; it’s the ability to manipulate concepts. Computer science should be about concepts, not languages. But how does one teach concepts without getting distracted by the language in which those concepts are expressed? My answer is to use the same language as every other branch of science and engineering—namely, mathematics.

It makes me sad that some PFP enthusiasts enjoy the mathematical aspects of it -- as they should -- yet are unfamiliar with the more "classical" mathematical thinking. I think it's important to grasp the more precise mathematics first, and only then choose which approximations you'd make in the language of your choice. Otherwise you get what Lamport calls "Whorfian syndrome — the confusion of language with reality".