Harvard-QuEra atomic qubits just discard erroneous qubits post-selectively without error correction.


QuEra-Harvard-MIT atomic qubits are impractical, error-prone, and overhyped.

The Harvard-QuEra "fault-tolerant" quantum computer (= Not real error correction, but just discarding erroneous qubits post-selectively ) is disastrous: it is still error-prone (= because it cannot correct errors ) and far from practical use.

(Fig.1)  Harvard-QuEra used only 280 unstable atoms as qubits, with extremely high error rates and No ability to correct errors.  ← a practical quantum computer is hopeless

QuEra-Harvard's error-prone quantum computer with only 280 qubits is far from a practical computer.

The recent, overhyped "game-changing" QuEra-Harvard quantum computer uses only 280 neutral atoms trapped in laser light as physical qubits (= two energy levels of each atom serve as the bit states 0 and 1 ), divided into 48 logical qubits ( this 5th-paragraph ). It is still error-prone and far from the practical, low-error quantum computer, which will require millions of qubits.

The 8th and 12th paragraphs of this hyped news say
"But qubits can easily be disturbed, making them notoriously error-prone. Roughly 1 in 1,000 fail, versus 1 in 1 billion billion bits in conventional computers (= conventional classical computer has far less errors than the impractical error-prone quantum computer )."

QuEra-Harvard's new quantum computer can Not correct errors; instead, they just 'discarded' qubits in which errors were detected, post-selectively, which is useless.

The important point is that QuEra's latest allegedly fault-tolerant quantum computer is not only still error-prone but also unable to correct errors, just like Google's quantum computer ( this 3rd-paragraph ).

This or this 6~7th paragraphs say
"Instead of fixing mistakes (= Not error correction ) that occur during computations, the processor used by the Harvard team incorporates a post-processing error-detection phase. During this phase, erroneous outputs are discovered and discarded (= error-qubits were just discarded post-selectively without being corrected )."

This 4th-last paragraph says
"What the Harvard team’s processor does, rather than correct errors during calculations, is add a post-processing error-detection phase wherein erroneous results are identified and rejected (= No error correction )."

↑ This means that when we calculate with large numbers, errors will occur in many qubits, and all of those erroneous qubits must be discarded rather than corrected. As a result, almost all qubits become unusable for the final answer of the calculation because they are discarded, so this "fault-tolerant" quantum computer is useless.

Discarding all qubits that showed errors, without correcting them, is meaningless.
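To make the distinction concrete, here is a minimal toy sketch in Python (this is Not the authors' code; it uses a simple classical 3-bit repetition code as an illustrative stand-in for the 7-qubit logical encoding used in the paper): the parity checks can detect an error, and a decoder could even correct it, but the "detect-and-discard" approach simply throws away every run in which a check fires.

def parity_checks(bits):
    # two parity checks of a 3-bit repetition code; a nonzero syndrome flags an error
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def detect_and_discard(runs):
    # postselection: keep only runs with a trivial syndrome; never correct anything
    return [bits for bits in runs if parity_checks(bits) == (0, 0)]

def detect_and_correct(bits):
    # true error correction (majority vote): repair the flipped bit instead of discarding the run
    majority = 1 if sum(bits) >= 2 else 0
    return [majority] * 3

runs = [[0, 0, 0], [0, 1, 0], [1, 1, 1], [1, 0, 1]]
print(detect_and_discard(runs))               # only [0,0,0] and [1,1,1] survive; half the runs are thrown away
print([detect_and_correct(b) for b in runs])  # every run is kept and repaired

In the postselection approach, larger circuits simply mean more discarded runs, never repaired ones.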

This original paper (= p.4-left-2nd-3rd-paragraphs,  or this p.4-right ) says
"Averaged over the five computation logicals, we find that, by using the fault-tolerant initialization ( postselecting on the ancilla logical flag not detecting errors = discarded all qubits that showed errors without error-correction ).. physical two-qubit gate fidelity (= 99.5% = so error rate 0.5% in the last paragraph of this hyped news says about this only two-qubit-gate, Not 48 logical qubits )"

"Furthermore, we can postselect on all stabilizers of our computation logicals being correct; using this error-detection approach (= meaning discarding all qubits that showed errors detected by stabilizers, without error-correction ), the GHZ fidelity (= how correctly GHZ state = 0000 or 1111 expressed by four logical qubits can be generated ) increases to 99.85%, at the cost of postselection overhead"

"for example, discarding just 50% of the data improves GHZ fidelity to approximately 90%. (= by discarding 50% of all qubits, which showed errors, they could generate four-logical qubit state of 0000 or 1111 called GHZ state with the probability of 90% and 10% error rate. This Fig.3d shows GHZ fidelity = 0.9 in acceptance fraction of 0.5 meaning 50% qubits discarded without error correction )"

The error rate of the four-logical-qubit GHZ state (= 0000 or 1111 ) was still high, about 10%, even after they discarded the 50% of the measurement runs that showed errors.

↑ Postselecting on the ancilla logical flag or on the stabilizers being correct means that many of the qubits serve as error-detection qubits, called ancilla qubits or stabilizers (= each logical qubit consists of 7 physical qubits whose stabilizer measurements only detect errors, this Fig.3a ), and they kept only the runs in which those ancilla or stabilizer checks showed no errors, discarding all the remaining erroneous runs.

And even after discarding 50% of the measurement runs because errors were detected, the remaining runs still showed 10% errors (= 90% fidelity ) for only four logical qubits (= 0000 or 1111 ), which is far too high an error rate to be practical.
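A minimal toy sketch in Python of this "sliding-scale" postselection (again, Not the authors' code; the noise model and all numbers are invented for illustration): runs are ranked by how many stabilizer checks they fail, and keeping only the cleanest fraction raises the apparent fidelity while the accepted fraction shrinks.

import random

def simulate_runs(n_runs, n_checks=12, p_fail=0.08):
    # toy model: each run fails each stabilizer check with probability p_fail,
    # and runs with more failed checks are more likely to give a wrong GHZ outcome
    runs = []
    for _ in range(n_runs):
        fails = sum(random.random() < p_fail for _ in range(n_checks))
        correct = random.random() > min(1.0, 0.1 + 0.15 * fails)
        runs.append((fails, correct))
    return runs

def sliding_scale_fidelity(runs, accepted_fraction):
    # keep only the cleanest runs (fewest failed checks) and report their fidelity
    kept = sorted(runs, key=lambda r: r[0])[: int(len(runs) * accepted_fraction)]
    return sum(correct for _, correct in kept) / len(kept)

runs = simulate_runs(50_000)
for frac in (1.0, 0.5, 0.1):
    print(f"accepted fraction {frac}: fidelity ~ {sliding_scale_fidelity(runs, frac):.2f}")

The point of the toy model: the "improvement" comes entirely from throwing data away, not from correcting anything.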

This p.2-left-4th-paragraph says
"this code cannot be used to correct such errors. We thus perform state stabilization by post-selecting runs in which no error is detected by the stabilizer measurements (= when stabilizer or ancilla qubits detect errors, all relevant qubits are discarded without error correction )"

"Fidelity (= XEB )" is equal to "1 - error rate", so fidelity of 90% means error rate of 10%.

Fidelity is equal to 1 minus the error rate ( this p.59,  this p.3-left-2nd-paragraph ).
So a fidelity of 90% or 0.9 means an error rate of 10% or 0.1 (= 1 - 0.9 ).
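As a one-line sanity check of the relation (nothing more than the arithmetic above):

fidelity = 0.9
error_rate = 1 - fidelity    # = 0.1, i.e. a 10% error rate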

In this research, "XEB (= F_XEB )" was used as the fidelity ( this p.7,  this p.20-upper ), where XEB (= fidelity ) = 1 means no error, and XEB = 0 means a 100% error rate.

This p.5-right-4th-paragraph says
"To characterize the distribution overlap, we use the cross-entropy benchmark (XEB), which is a weighted sum between the measured probability distribution and the ideal calculated distribution, normalized such that XEB = 1 corresponds to perfectly reproducing the ideal distribution (= no error ), and XEB = 0 corresponds to the uniform distribution, which occurs when circuits are overwhelmed by noise (= error rate 100% )..
We note that the XEB should be a good fidelity benchmark"
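A minimal sketch in Python of one common linear-XEB estimator (this is only an illustration of the general idea; the paper's exact estimator and normalization may differ, and the ideal probabilities must come from a classical simulation of the circuit):

import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    # ~1 when the measured bitstrings follow the ideal output distribution of a random circuit,
    # ~0 when they are uniformly random (circuit overwhelmed by noise)
    dim = 2 ** n_qubits
    mean_p = np.mean([ideal_probs[s] for s in samples])  # ideal probability of each measured bitstring
    return dim * mean_p - 1.0

# toy example with 2 qubits and a made-up ideal distribution
ideal = {"00": 0.5, "01": 0.3, "10": 0.15, "11": 0.05}
print(linear_xeb(ideal, ["00", "00", "00", "01"], 2))  # 0.8, close to 1
print(linear_xeb(ideal, ["00", "01", "10", "11"], 2))  # 0.0, uniform-like samples

Note that computing the ideal probabilities requires a classical simulation of the same circuit.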

QuEra's 48-logical-qubit quantum computer is extremely error-prone, with a 90% error rate even after more than 99% of the runs that showed errors were discarded, so this fault-tolerant method is impractical.

This QuEra-Harvard "fault-tolerant" quantum computer, with 280 physical qubits divided into 48 logical qubits, is still error-prone, impractical, and unable to correct its errors.

This p.5-right-5th-paragraph says
"We obtain an XEB (= equal to fidelity or 1 minus error ) of approximately 0.1 for 48 logical qubits"

↑ This means the error rate of QuEra's 48-logical-qubit quantum computer is a terrible 90% (= 0.9 = 1 - 0.1 ), which is completely useless, and the 0.5% error rate attributed to QuEra's 48 qubits in this-3rd-last-paragraph is false and just hype.
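For scale, a rough back-of-the-envelope in Python (this assumes independent, uniform gate errors, which is an assumption for illustration, not a number from the paper) shows why a 99.5% per-gate fidelity by itself says little about the fidelity of a whole 48-logical-qubit computation:

per_gate_fidelity = 0.995            # the advertised 99.5% two-qubit gate fidelity
print(per_gate_fidelity ** 100)      # ~0.61 after 100 gates
print(per_gate_fidelity ** 1000)     # ~0.007 after 1000 gates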

Furthermore, this very bad fidelity of 0.1 (= a 90% error rate ) was obtained only after they discarded more than 99% of the measurement runs (= the accepted fraction is less than 0.01 for 48 qubits, this Fig.5e ), namely every run in which the ancilla or stabilizer qubits detected errors.

This p.12-left-3rd-paragraph says
"Typically, error detection refers to discarding (or postselecting) measurements in which any stabilizer errors occurred (= No mention of error correction )"

This Fig.7c says
"48-qubit XEB sliding-scale error-detection (= Not error-correction ) data. The point with full postselection on all stabilizers being perfect returned only eight samples (= only eight samples remained by the post-selection, and all other 138600 samples were artificially discarded due to showing errors, Fig.7a )"

Harvard-QuEra atomic qubits were too error-prone, with an error rate of about 90% even after postselection.

As a result, the error rate of QuEra's 48-logical-qubit quantum computer is about 90%, which is far too high to be practical, and they can Not correct errors (= instead, they just discarded the erroneous runs ).

These 13th-14th paragraphs also say
"they needed to postselect on the event that no errors occurred (= "postselecting no-error runs" means "discarding the erroneous runs" ), which happened about 0.1% of the time (= 99.9% of the runs showed errors and hence were discarded ) with their largest circuits. This just further underscores that they haven’t yet demonstrated a full error-correction cycle (= No error correction ).
They don’t claim to have demonstrated quantum supremacy with their logical qubits (= meaning an ordinary classical computer is still far faster and more useful than the current error-prone, impractical quantum computer )."

Quantum computers cannot correct errors (= only ordinary classical computers can correct errors )

This 17th paragraph says
"This isn't full error correction. "What is happening in the paper is that the errors are corrected only after the calculation is done (= which means the calculated results by their quantum computers are still error-prone even after discarding errors, without error-correction ). So, what we have Not demonstrated yet is mid-circuit correction, where during the calculation, we measure... an indication of whether there's an error, correct it, and move forward."

This p.16-right-2nd-paragraph says their (deceptive) "error correction" does Not mean correcting errors of the error-prone quantum computer hardware by physically flipping qubits. Instead, it just means software correction simulated on a classical computer ( this p.17-right-IX. ), because the quantum computer hardware is impractically error-prone.

This is because the present quantum error-correction operation itself increases the error rate instead of decreasing it.

 


2024/6/13 updated. Feel free to link to this site.