Thank you, that is a very intuitive explanation.
You've probably read Scott Aaronson's "Why Philosophers Should Care About Computational Complexity". This seems like the perfect area to apply a lot of the questions he brings up. That's what I was looking for as I skimmed through the paper. Maybe that's what idlewords was talking about as well.
Aaronson has an interesting proposal to address the Chinese Room problem that I think makes a lot of sense. The idea is that the Chinese Room intuitively doesn't exhibit understanding because it's a constant-time, exponential-memory algorithm (a lookup table), whereas the algorithm that generated the entries in the Chinese Room table (a human) is a super-constant-time, sub-exponential-memory algorithm, which introduces a place for consciousness to emerge. So the only reason the Chinese Room problem is philosophically confounding is that it adds a layer of indirection (a cache table) over the algorithm that generates the table entries: the cache obviously can't be conscious, but the generating algorithm actually might be. http://www.scottaaronson.com/papers/philos.pdf
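A toy illustration of that indirection: the same input/output behavior implemented as a lookup table (constant-time, exponential-memory) versus the algorithm that generated the table. Everything here is a hypothetical stand-in, not anyone's actual model of the Room.

```python
# Sketch: the "Chinese Room" as a precomputed cache over a generating
# algorithm. All names and the placeholder computation are illustrative.
from itertools import product

def respond(prompt: str) -> str:
    # Stand-in for the table-generating process that actually does
    # the work (here, a trivial placeholder computation).
    return prompt[::-1]

def build_room(n: int, alphabet: str = "ab") -> dict:
    # Precompute replies for every prompt up to length n. Over a
    # 2-symbol alphabet the table has 2**(n+1) - 1 entries: memory
    # exponential in n, but each subsequent lookup is O(1).
    return {"".join(p): respond("".join(p))
            for k in range(n + 1)
            for p in product(alphabet, repeat=k)}

room = build_room(10)
print(room["abab"])   # "baba" -- constant-time lookup
print(len(room))      # 2047 entries -- exponential memory cost
```

The cache answers every prompt without "doing" anything; whatever interesting computation there is happened once, inside `respond`, when the table was built.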
This is most likely a false argument. You have to hedge such things because technically you can't disprove Descartes' evil demon, but beyond that, it can be shown to be false.
First, I pull in the paper "Why Philosophers Should Care About Computational Complexity": http://www.scottaaronson.com/papers/philos.pdf I particularly commend to your attention the discussion in section four and the subsection "'Reductions' That Do All The Work".
At which point, if you understood that paper, the remainder of my argument is probably obvious. But:
Given the complexity of our incoming sense input, in order for the universe to somehow be something completely different but still meaningfully causally connected to our sensory input, there must be a transform function from the real state of the universe to the sensory experience my consciousness experiences (or "what my brain appears to be processing" or whatever you like here; the "mysteries" of consciousness are not important to my argument, I merely need "the thing that is not experiencing the true state of the universe but is making decisions somehow"). Given the ready availability of the true (if wildly incomplete) state of the universe to these hypothetical organisms, the transform function must have been created by evolution, co-evolving with the organisms as they get more complicated.
When we say "all of reality is an illusion and it's wildly different than what we experience", we can (with a bit of handwaving) observe that if reality is supposed to be radically different than our experiences, yet the transform function somehow successfully keeps us alive as we act on our illusions, it is not unreasonable to expect the transform function to be exponential in complexity. I mean, I see a coherent "thing I think is my child", and if that's actually a three-toed sloth that can't speak or play video games, it's gonna be one heck of a transform function to maintain the illusion. However, evolution's speed can be characterized by the rate at which it can acquire bits. While there is some debate about what that speed may be, it is generally considered to be linear at best in the number of generations. There is no time for such a complicated transform function to be evolved.
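Here's the arithmetic behind that, as a back-of-the-envelope sketch. Every number below is an illustrative assumption I'm making up for the example, not a measurement:

```python
# Toy comparison: bits evolution can accumulate (linear in generations)
# versus the description length of a "wildly deceptive" transform.
SENSORY_BITS = 100           # assumed bits in one sensory snapshot (tiny!)
GENERATIONS = 10**9          # generous: a billion generations
BITS_PER_GENERATION = 10     # assumed linear-at-best acquisition rate

evolved_bits = GENERATIONS * BITS_PER_GENERATION      # 10**10 bits total

# A "simple" transform has a description length polynomial in the
# input size -- comfortably within evolution's budget.
simple_bits = SENSORY_BITS ** 2                       # 10**4

# A transform maintaining an arbitrary illusion behaves like a lookup
# table over all 2**SENSORY_BITS sensory states: exponential size.
deceptive_bits = 2 ** SENSORY_BITS                    # ~1.27e30

print(evolved_bits >= simple_bits)       # True
print(evolved_bits >= deceptive_bits)    # False, off by ~20 orders of magnitude
```

The gap only widens as the sensory input grows; with realistic input sizes the exponential side isn't even representable, which is the point.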
On the other hand, if the transform function is relatively simple, the argument degenerates to the rather pedestrian observation that humanity has known about for centuries if not millennia (depending on how you look at it): that our sense perceptions are not a completely accurate reflection of reality. But there are still significant ways in which they do reflect reality.
So I cannot help but think that this article is one of two things: an impossible claim about the disparity between our sense impressions and reality, or a pedestrian claim about that disparity dressed up in provocative but ultimately content-free language.
Incidentally, why do the experiments he ran seem to confirm his point? Scale, of course. In a tiny little simulation, the differences between exponential and simple transform functions are still quite small, and evolution has plenty of room to play with outright deceptive sense functions. But as you scale up the complexity of the simulation, evolution will not be able to sustain wild illusions, only relatively simple transforms between reality and sense impression, exactly as we see in the real world. ("Relatively simple" compared to what it would take to have wildly deceptive transform functions; still complicated, and we are still learning about what our real brains do, of course, but it's still the difference between "polynomial-but-large" and "exponential".)
Per Descartes' demon and/or brain-in-a-vat, etc., I can't prove very much about where our sense impressions are coming from, whether it's "real" or not. But I can say with some confidence that, wherever those sense impressions are coming from, be it a real universe or a simulation or whatever, I have no reason to believe that evolution is causing me to have those sense impressions so completely chewed on that "it's all an illusion". For that conclusion to be wrong would require so much about my world to be false that the very existence of the "evolution" his argument hinges on would not be a reliable fact to argue from.
No discussion of philosophy and complexity theory is complete without the thought-provoking paper by Aaronson.
Aaronson, Scott. "Why Philosophers Should Care About Computational Complexity." http://www.scottaaronson.com/papers/philos.pdf
Nah, P vs NP is a bigger factor here.
Basically, checking a proof for correctness takes time polynomial in the size of the proof. (Depending on how you formalize your proofs, that might even be linear.)
Coming up with a reasonably sized proof is much harder. But if we had P = NP, then checking and finding a proof would be about equally hard.
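The asymmetry is easy to see with SAT, the canonical NP problem: verifying a candidate "proof" (a satisfying assignment) takes time linear in the formula, while finding one by brute force takes exponential time in the worst case unless P = NP. A minimal sketch, with a made-up 3-variable formula:

```python
# Verifying vs. finding a satisfying assignment for a CNF formula.
from itertools import product

# Each clause is a list of (variable_index, is_positive) literals.
clauses = [[(0, True), (1, False)],
           [(1, True), (2, True)],
           [(0, False), (2, False)]]

def check(assignment):
    # Verification: one pass over the formula, linear time.
    return all(any(assignment[v] == pos for v, pos in clause)
               for clause in clauses)

def find(n_vars):
    # Search: up to 2**n_vars candidates in the worst case.
    for bits in product([False, True], repeat=n_vars):
        if check(bits):
            return bits
    return None

print(find(3))   # (False, False, True) satisfies all three clauses
```

`check` is the analogue of proof checking; `find` is the analogue of proof search. If P = NP, something as fast as `check` (up to a polynomial) would exist for the search side too.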
I'm incredibly surprised that out of both previous threads, the piece "Why Philosophers Should Care About Computational Complexity" by Scott Aaronson was only mentioned in one comment, and seemed to spark no further discussion.
Aaronson dedicates significant space specifically to the Chinese Room problem, and has a good literature review of different computer science objections (my favorite was probably the one estimating the entropy of English and coming up with space bounds on the size of all 5-minute long conversations in English).
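For flavor, here's a back-of-the-envelope version of that counting argument. The speaking rate and word length are my own illustrative assumptions; the ~1 bit of entropy per character is a Shannon-style ballpark estimate:

```python
# Rough bound on the number of distinct 5-minute English conversations.
BITS_PER_CHAR = 1.0       # assumed entropy of English per character
WORDS_PER_MINUTE = 150    # assumed speaking rate
CHARS_PER_WORD = 6        # assumed, counting the trailing space
MINUTES = 5

chars = WORDS_PER_MINUTE * MINUTES * CHARS_PER_WORD   # 4500 characters
log2_conversations = BITS_PER_CHAR * chars            # ~4500 bits

# A lookup table over every such conversation needs on the order of
# 2**4500 entries; the observable universe has only ~2**266 atoms.
print(log2_conversations)          # 4500.0
print(log2_conversations > 266)    # True: no physical table can hold it
```

The exact numbers don't matter; any reasonable estimate leaves the table astronomically larger than the universe, which is the force of the objection.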
It is one of the most comprehensive takedowns of the Chinese Room problem.
Along similar lines, Aaronson discusses later in the paper the problem "Does a rock implement every Finite-State Automaton?" popularized by Chalmers, along with many computer-science rebuttals.