Danger! Danger! Physics post ahead!
On the arXiv last week there was a paper by a well-known physicist, Dyakonov, criticizing the idea of fault-tolerant quantum computing. Now, the theory of fault tolerance is one of the biggest results in the field of quantum computing. Basically, it is this theory that tells us that a quantum computer can, in principle, be built even though quantum systems are very hard to control. Recently there has been a fair bit of discussion about this paper over at The Quantum Pontiff's place [see Laugh therapy and A PR battle worth fighting?].
The main idea of fault tolerance goes something like this: if we want to run an ideal quantum circuit of a certain size, then it is possible to simulate that circuit using imperfect components, by constructing a circuit that is somewhat more complex than the original, ideal one. The key thing is that there is a tradeoff between how complex this simulation circuit is and how bad its constituent components are. The threshold theorem tells us that if the imperfect components aren't too bad, then we can use them to simulate the desired circuit without the simulation getting so complicated that the whole exercise is a waste of time.
The big question in fault tolerance is: how bad can these components be before quantum computing becomes a waste of time? This is related to the really big question: are there any quantum systems we can manufacture that satisfy the requirements of the threshold theorem? At the moment, it seems there are no physical restrictions that say we cannot build a fault-tolerant quantum computer. However, the technology required to build one still seems to be quite a way off.
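To make that tradeoff concrete, here is a toy numerical sketch. The constants are made up, and the model - each level of encoding maps an error rate p to roughly c*p^2 - is just the standard back-of-the-envelope picture of concatenated codes, not anything from Dyakonov's paper or from a real code. Below the threshold p = 1/c, the logical error rate falls doubly exponentially with each level of concatenation while the circuit only grows by a constant factor per level; above it, encoding actively makes things worse:

```python
# Toy model of concatenated error correction (illustration only).
# Assumptions: each level of encoding maps a physical error rate p
# to roughly C * p**2, and multiplies the circuit size by a constant
# factor. Both C and the overhead factor are made-up numbers, not
# values for any real code.

C = 100.0          # hypothetical combinatorial constant of the code
THRESHOLD = 1 / C  # below this error rate, concatenation helps
OVERHEAD = 50      # hypothetical circuit-size blowup per level

def logical_error_rate(p: float, levels: int) -> float:
    """Error rate after `levels` rounds of concatenation."""
    for _ in range(levels):
        p = min(1.0, C * p * p)
    return p

for p in (0.5 * THRESHOLD, 2.0 * THRESHOLD):
    side = "below" if p < THRESHOLD else "above"
    print(f"physical error rate p = {p:.4f} ({side} threshold)")
    for k in range(5):
        print(f"  level {k}: logical error ~ {logical_error_rate(p, k):.3e},"
              f" circuit size x{OVERHEAD ** k}")
```

The punchline of the threshold theorem is hiding in that loop: for p below threshold the error rate squares (up to the constant) at every level, so a handful of levels buys an astronomically small logical error rate at only polynomial cost in circuit size.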
Now, everything I've said so far seems pretty easy. But there is a lot of devil in the detail. The math required to prove that we can build these simulation circuits - or, more correctly, error-correcting circuits - is pretty intense. What's more, I've simplified the language that is normally used to explain the theory of fault tolerance. The literature you have to work through to get a thorough understanding of the topic is also very intimidating.
What's more, the theory of fault tolerance seems pretty counter-intuitive to physicists who have spent a lot of time working with real quantum systems. Mostly, quantum systems don't behave nicely. Well, that isn't quite true: they behave nicely, just not in a way that looks immediately useful for quantum computing. You see, quantum computations are made up of a sequence of unitary operations, and in principle all ideal quantum mechanical systems evolve via unitary operations. In practice, no quantum system is ideal. The theory of error correction and fault tolerance was developed to show that there should exist quantum systems whose properties let us create ideal operations on a subsystem while dumping all the junk onto the rest of the system.
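To give a feel for what error correction buys you, here is a deliberately classical sketch of the simplest example: the three-bit repetition (bit-flip) code. Encode a bit three times, flip each copy independently with probability p, and decode by majority vote. The numbers here are mine, chosen just for illustration:

```python
import random

def bit_flip_code_trial(p: float) -> bool:
    """One round of the 3-bit repetition (bit-flip) code.

    Encode logical 0 as (0, 0, 0), flip each physical bit
    independently with probability p, then decode by majority
    vote. Returns True if a logical error occurred.
    """
    bits = [1 if random.random() < p else 0 for _ in range(3)]
    return sum(bits) >= 2  # majority vote fails if 2+ bits flipped

def estimate(p: float, trials: int = 200_000) -> float:
    """Monte Carlo estimate of the logical (encoded) error rate."""
    return sum(bit_flip_code_trial(p) for _ in range(trials)) / trials

for p in (0.2, 0.1, 0.01):
    p_logical = estimate(p)
    print(f"physical p = {p}: logical error ~ {p_logical:.4f} "
          f"(theory: {3 * p**2 - 2 * p**3:.4f})")
```

The encoded error rate is roughly 3p^2, which beats the bare rate p whenever p < 1/2. The quantum version is subtler - you measure parities (syndromes) rather than the bits themselves, so the encoded quantum information is never directly observed - but this is, very loosely, the "ideal operations on a subsystem, junk everywhere else" trick.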
There is a lot of physics underlying why it should be possible to build systems that can perform error correction. The problem is that much of it is buried under mountains of math. As a field we need to confront this problem, as Dave mentions here. Most of the language of quantum computing is, with good reason, phrased in the language of computer science. Maybe we need to reach out more to the rest of the physics community by doing a better job of explaining how quantum computing works, and also what things like the fault tolerance theorem imply for our understanding of physics?