Nov 14, 2017

I like Pugh's paper on skip lists [1], Shannon's "A Mathematical Theory of Communication" [2], and, though these two might be a stretch, I also like Rong's "Word2vec Parameter Learning Explained" [3] and Levy & Goldberg's "Word2vec Explained" [4]. In any case, use the recommended papers to build paper-reading skills: e.g., look at the references when you don't understand a concept, find an introductory textbook to clarify a proof method, write a summary to make your understanding concrete. Good luck!





Nov 14, 2017

Shannon's "A Mathematical Theory of Communication" is one of the most accessible and profound papers in the field.

Oct 22, 2017

"information theory"

"A Mathematical Theory of Communication", Shannon

Sep 25, 2017

> I don't think this method has anything to do with Markov Chains.

Oh, it absolutely does. I think it's fair to say that Efros launched the field of nearest neighbor texture synthesis, and his abstract states: "The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods."

This is the same Markov model that all subsequent texture synthesis papers are implicitly using, including the paper at the top of this thread. Efros's approach, implemented directly, is really slow, so a huge number of subsequent papers use the same conceptual framework and only add techniques to make it performant and practical. (Sometimes at the cost of some quality -- many cannot synthesize text, for example.)

Note the inspiration for text synthesis, Shannon's paper, also describes the "Markoff Process" explicitly. (Efros referenced Shannon, and noted on his web page: "Special thanks goes to Prof. Joe Zachary who taught my undergrad data structures course and had us implement Shannon's text synthesis program which was the inspiration for this project.")

> Well, of course almost anything can be interpreted as a Markov process, I don't think it's a very useful abstraction here.

It's not an abstraction to build a conditional probability table and then sample from it repeatedly to synthesize a new output. That's what a Markov process is, and that's what the paper posted here is doing. I don't really understand why you feel it's distant and abstract, but if you want to elaborate, I am willing to listen!
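Concretely, the loop described above -- build a conditional probability table from a sample, then repeatedly sample from it to grow new output -- can be sketched in a few lines of Python. This is a character-level, Shannon-style text version (the function names, the sample string, and the context length are illustrative choices, not anything from the paper):

```python
import random
from collections import defaultdict, Counter

def build_table(text, order=2):
    """For each context of `order` characters, count which characters follow it.
    This is the conditional probability table of the Markov process."""
    table = defaultdict(Counter)
    for i in range(len(text) - order):
        context = text[i:i + order]
        table[context][text[i + order]] += 1
    return table

def synthesize(table, seed, length):
    """Grow new text outward from the seed, one character at a time,
    by sampling the conditional distribution of the current context."""
    out = seed
    order = len(seed)
    for _ in range(length):
        counts = table.get(out[-order:])
        if not counts:
            break  # context never seen in the sample
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

sample = "the cat sat on the mat and the cat ate the rat"
table = build_table(sample, order=3)
print(synthesize(table, "the", 40))
```

The pixel version is the same idea with image neighborhoods as contexts instead of character strings, which is exactly why the nearest-neighbor search over "all similar neighborhoods" is needed there.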

Aug 24, 2017

"A Mathematical Theory of Communication" - Claude E. Shannon

Jul 25, 2017

Thanks for sharing this article... I wasn't familiar with Betty Shannon, but her husband, Claude Shannon, was really a genius... His paper "A Mathematical Theory of Communication" is one of the most elegant papers I have read...

Sigh...the one hundred years before 1990 was really a period of giants, with Claude Shannon, Kurt Gödel, Alan Turing, John von Neumann, Emmy Noether, David Hilbert -- as well as Albert Einstein, Paul Dirac, Louis de Broglie and the other quantum guys...

I have to admit it was those guys who inspired me to try to pursue a career in academia... but after I grew up, I found they were all dead -- or it might be the other way around... and many academic papers these days are filled with words that do not resonate, and with bloated references that do not shine -- some even marked with the acceptance rate of their venue to show off their elitism...

Seems that the age when a 25-page PhD thesis, containing only 2 references, could still be well received is fading further and further away...


Mar 03, 2017

Maybe you would find information theory interesting:

Here in his famous tour de force Claude Shannon lays out the way we can estimate the amount of actual information in an act of communication (e.g. a literary work) and relates it to system entropy.

To your point, 1 TB of all 1's compresses to just a few bits of actual information.

But I suspect you are not actually speaking about the information contained in a literary work. You are probably thinking: how much expansive commentary and explanation could a literary work spawn? That is another question. The answer is always: unbounded.