Jun 06, 2016

Some of the author's claims are correct, but point 3, for example, has no citations and is complete bollocks.

An immutable collection isn't a place to be mutated, a bucket to fill, so saying that immutable collections don't support concurrent updates is really stupid: not allowing in-place mutation is the whole point of immutable collections.

But then, I challenge the author to show me concurrent dictionary implementations that have the non-blocking, lock-free property. Such implementations exist, mind you, but they are an area of active research and not usually part of standard libraries. And there goes the author's argument about the established industry. One such is the TrieMap [1], a new and interesting lock-free Map that actually relies on techniques used in persistent data-structures ;-)
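For flavor, here's a minimal sketch against the Ctrie as it ships in Scala's standard library (scala.collection.concurrent.TrieMap); the bump helper and the names are mine, purely illustrative:

    import scala.collection.concurrent.TrieMap

    val hits = TrieMap.empty[String, Int]

    // Lock-free read-modify-write: CAS on the entry, retry on contention.
    @annotation.tailrec
    def bump(key: String): Unit = hits.get(key) match {
      case Some(n) => if (!hits.replace(key, n, n + 1)) bump(key)       // lost the race, retry
      case None    => if (hits.putIfAbsent(key, 1).isDefined) bump(key) // someone inserted first, retry
    }

    bump("home")
    val frozen = hits.snapshot() // constant-time consistent snapshot, courtesy of structural sharing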

On the other hand, shove a persistent data-structure into an AtomicReference and you get a non-blocking concurrent collection for free, which of course includes any kind of persistent Map implementation you want. And lo and behold, the layman can achieve non-blocking concurrent data-structures without a PhD.
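Something roughly like this sketch (the CasMap name and API are mine, just illustrating the trick with Scala's plain immutable Map):

    import java.util.concurrent.atomic.AtomicReference
    import scala.annotation.tailrec

    // A non-blocking "concurrent map": an immutable Map behind an AtomicReference.
    final class CasMap[K, V] {
      private val ref = new AtomicReference(Map.empty[K, V])

      @tailrec def put(k: K, v: V): Unit = {
        val cur = ref.get()
        // succeeds only if nobody swapped the root in the meantime; otherwise retry
        if (!ref.compareAndSet(cur, cur + (k -> v))) put(k, v)
      }

      def get(k: K): Option[V] = ref.get().get(k) // readers never block, never retry
    }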

There are of course gotchas. The real strength of mutable concurrent data-structures is that they can distribute contention over multiple nodes, whereas with an immutable data-structure you end up driving all of that contention to a single root. That can be really, really bad for writes. Then again, concurrent reads on immutable data-structures come basically for free. This makes such a data-structure bad for write-heavy storage, but really good for message passing (say, from producers to consumers). Hence actual databases that make use of FP techniques, like Cognitect's Datomic, are implemented in a mixed style; best tool for the job and all that.
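A rough illustration of that asymmetry (again, the Mailbox is mine, not from any library): producers all CAS-retry on the single root, while the consumer drains an entire batch in one swap, and pure readers never contend at all.

    import java.util.concurrent.atomic.AtomicReference
    import scala.annotation.tailrec

    // Immutable Vector as a mailbox: many producers, one consumer.
    final class Mailbox[A] {
      private val ref = new AtomicReference(Vector.empty[A])

      @tailrec def send(a: A): Unit = {             // writers contend on the single root
        val cur = ref.get()
        if (!ref.compareAndSet(cur, cur :+ a)) send(a)
      }

      def drain(): Vector[A] = ref.getAndSet(Vector.empty) // consumer grabs the whole batch at once
    }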

But instead the author's argument is just a rant that doesn't delve into any interesting details.

A similar weak point is number 7. First of all, what the author describes isn't parallelism. I can't really follow his ramblings about absolute performance, but absolute single-core performance isn't the goal of parallelism. The goal of parallelism is scalability, i.e. the ability to throw hardware at a problem in order to decrease processing time. That naturally taxes the number of operations per second a single core can achieve, which is acceptable if you can make up for it with hardware. And his linked benchmark uses at most 8 cores, far too few to support a sweeping generalization like that. The conclusion simply doesn't follow from the benchmark; what he's measuring is basically Amdahl's law in action, which he fails to recognize.
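To put numbers on the Amdahl's law point (my numbers, purely illustrative, not taken from his benchmark):

    // Amdahl's law: best-case speedup when a fraction p of the work parallelizes.
    def speedup(p: Double, n: Int): Double = 1.0 / ((1.0 - p) + p / n)

    speedup(0.95, 8)   // ≈ 5.9x, i.e. only ~0.74x per core: the "tax" mentioned above
    speedup(0.95, 64)  // ≈ 15.4x, throwing more hardware still helps, which is the point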

[1] http://lampwww.epfl.ch/~prokopec/ctries-snapshot.pdf