Re: Quantum computing practically impossible

For people who are mildly interested in the topic of quantum error
correction, we have a short discussion and a Javascript app that shows
some of the basic ideas in our MOOC (which is now free).
https://www.futurelearn.com/courses/intro-to-quantum-computing

For people with serious interest, there are two topics to cover:
decoherence, and quantum error correction.  (We'll leave aside
fault-tolerant execution of algorithms, which requires yet more work.)

My favorite source on decoherence is "Introduction to decoherence and
noise in open quantum systems," by Todd Brun and Daniel Lidar, which
is the opening chapter in the compendium _Quantum Error Correction_
(Cambridge, 2013).  Unfortunately, AFAIK, it's not publicly available
in PDF.  Other sources include John Preskill's lecture notes (which
are freely available) at
http://www.theory.caltech.edu/~preskill/ph219/ph219_2018-19 and many
books on quantum computing, such as the classic Nielsen & Chuang
(Cambridge, 2000).  Be warned that this does take moderately
good linear algebra chops, and if you want to really understand
decoherence, some partial differential equations.

If you want to learn about quantum error correction, my favorite
source is Devitt, Nemoto & Munro, "Quantum Error Correction for
Beginners," arXiv:0905.2794v3 [quant-ph].  Already over a decade old,
but still outstanding.  But "for beginners" assumes you have a decent
quantum background, just nothing on QEC.

-----

Okay, a little on mohta-san's concerns...

There are multiple sources of errors and multiple types.

* bit flip: a symmetric channel, with 0-->1 and 1-->0 errors equally
  likely
* phase flip: ditto, but for a qubit's phase
* amplitude damping: In this mode, 1-->0 errors are more likely than
  0-->1 errors, because the former happens when a photon is emitted
  and the latter when a photon is absorbed; moving to the
  lower-energy state is more common.  This is the harder one to
  understand.
  https://en.wikipedia.org/wiki/Amplitude_damping_channel
  http://www.thphys.nuim.ie/staff/jvala/Lecture_12.pdf
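
For the curious, those three channels can be written as Kraus
operators and applied to a density matrix; here is a minimal numpy
sketch (the probabilities p and gamma are made-up illustrative values,
not from any particular hardware):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])    # bit flip
Z = np.array([[1, 0], [0, -1]])   # phase flip

def channel(ops):
    """Turn a list of Kraus operators into a map on density matrices."""
    return lambda rho: sum(K @ rho @ K.conj().T for K in ops)

p, gamma = 0.1, 0.2               # assumed error probabilities

bit_flip   = channel([np.sqrt(1 - p) * I2, np.sqrt(p) * X])
phase_flip = channel([np.sqrt(1 - p) * I2, np.sqrt(p) * Z])

# Amplitude damping: |1> decays to |0> with probability gamma, but not
# the reverse -- the asymmetric channel described above.
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
amp_damp = channel([K0, K1])

rho1 = np.array([[0, 0], [0, 1]], dtype=complex)   # the |1><1| state
print(amp_damp(rho1))             # a fraction gamma has moved to |0><0|
```

The first two channels are symmetric in the sense above; amplitude
damping is not, which is exactly what its Kraus operators encode.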

We can also talk about various gate errors, and errors in
measurement.  One that particularly worried me is, in fact, one of the
things that Peter Shor simplified away in his first QEC paper: what
happens when the errors on individual qubits aren't independent?

*ALL* of these are handled properly by quantum error correction.  In
fact, they can all be treated as types of memory errors, which
(thankfully) simplifies the math.

This is over-simplified, but QEC consists of several phases:

1. Encoding a logical state
1.5 <<errors happen>>
2. Extracting syndrome information
   a. calculating the syndromes in superposition
   b. measuring the syndrome qubits, which *forces the state into one
      with discrete errors*
3. Using that information to correct the state.
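
The steps above can be sketched with the 3-qubit bit-flip repetition
code, the simplest example (a minimal numpy sketch; reading the
syndrome directly off the state's support stands in for the ancilla
measurement of step 2b, which works here because a single bit flip
gives a deterministic syndrome):

```python
import numpy as np

# 3-qubit state vector, basis index = 4*q0 + 2*q1 + q2

def apply_x(state, k):
    """Flip qubit k (0 = leftmost) by permuting basis indices."""
    bit = 1 << (2 - k)
    out = np.empty_like(state)
    for i in range(8):
        out[i ^ bit] = state[i]
    return out

def encode(a, b):
    """Step 1: encode a|0> + b|1> as a|000> + b|111>."""
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = a, b
    return state

def syndrome(state):
    """Step 2: parities Z0Z1 and Z1Z2, read off the state's support."""
    i = int(np.argmax(np.abs(state)))        # any basis state in the support
    b0, b1, b2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    return (b0 ^ b1, b1 ^ b2)

def correct(state):
    """Step 3: look up which qubit (if any) to flip back."""
    fix = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[syndrome(state)]
    return state if fix is None else apply_x(state, fix)

a = b = 1 / np.sqrt(2)
damaged = apply_x(encode(a, b), 1)           # step 1.5: an error happens
recovered = correct(damaged)
assert np.allclose(recovered, encode(a, b))
```

A real code (Steane, surface, etc.) also catches phase flips, but the
shape of the encode / extract / correct cycle is the same.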

Steps 2 & 3 are repeated over the lifetime of the algorithm execution,
interleaved with the operations that execute logical gates.  (Well, in
codes known as topological codes, surface codes, toric codes and color
codes, that's not quite true.  Those codes also have no classical
equivalent, whereas other codes derive very directly from classical
error correction.)

Step 2a involves quantum operations, and so *adds* errors into the
state, meaning an interesting question is when the error correction
process as a whole *removes* more errors than the extraction adds.
Monroe's group claims to have exceeded that threshold.

Mohta's concern seems to be two-fold: first, that errors aren't
independent, and second, that the forcing of the state (actually, a
*partial* collapse of the quantum state) doesn't take you to a state
where applying a correction works.

As it happens, the first problem worried me enough that I wrote about
it in my Ph.D. thesis.  See Sec. 2.3.5 (p. 55) of
https://arxiv.org/abs/quant-ph/0607065
Fundamentally, coherent noise over a group of qubits (e.g., some
coherent radio noise from outside, or shared dependence on a slightly
incorrect classical oscillator, or something) results in all of
the errors being suppressed *to second order* -- to the *square* of
the induced rotation angle (since that angle is less than 1.0, the
square means reducing the error).

A similar argument applies to the amplitude damping that is the harder
error type mentioned above: the syndrome extraction forces us back to
a state where, *with high probability*, the correction we choose to
apply makes the state better (== closer to the original) rather than
worse.

Some QEC schemes operate by applying multiple recursive levels of the
same (or, possibly, different) encoding schemes.  Yes, that results in
a big growth in resources used, but a) we're only talking about 2 or 3
levels, not some huge number; and b) the errors are actually
suppressed at a rate that exceeds the resource consumption, so it all
works out in the end.  I won't go into the math of that here.
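
A toy version of that accounting (c and n are assumed illustrative
numbers, not taken from any particular code: each concatenation level
maps an error rate p to roughly c * p^2, while multiplying the qubit
count by the block size n):

```python
p, c, n = 1e-3, 100.0, 7   # assumed: physical error rate, code constant, qubits/block
qubits = 1
for level in range(1, 4):
    p = c * p * p          # errors fall doubly exponentially in the level count
    qubits *= n
    print(level, qubits, p)
# The qubit count grows as n^level, but p falls much faster, so 2 or 3
# levels suffice -- provided the starting p is below the 1/c threshold.
```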

Fundamentally, quantum error correction *works*, and over the course
of 2020-2025, we will see it demonstrated increasingly clearly in
experimental systems.

Barring something really extraordinary, this is all I will have to say
on the topic on ietf@ietf.  People who are interested are welcome to
attend the QIRG meeting (Monday afternoon Bangkok time during IETF)
and ask about this during the open mic.  Since we have other business
to attend to on the already-published agenda, I will defer discussion
of this draft and this topic to the very last thing, but I expect
there will be some time available.

See y'all online in a couple of weeks!

—Rod




