Top page ← 6/30/2024

Quantum supremacy is fake, not faster.

*(Fig.1) "Classically-mixed electric currents" are falsely treated as fictional unseen quantum mechanical superposition or parallel worlds, a deliberate misinterpretation that led to the false quantum supremacy claim. *

Google's quantum computer Sycamore, with 53 (or 54) quantum bits or qubits, is said to have performed a complicated task in 200 seconds that would allegedly take ordinary classical (super-)computers 10000 years to perform, a claim called "quantum supremacy".

But all Google's quantum computer did was just output random meaningless numbers, which is completely **useless** and impractical ( this 5-6th paragraphs, this 8th-paragraph ).

The fact that these dubious quantum computers cannot calculate any meaningful numbers faster than ordinary classical computers means the quantum computers are inherently Not faster in **any** tasks, including outputting and sampling random meaningless numbers, under fair comparison.

Actually, this Google quantum computer's supremacy claim was officially **disproved** later, which means traditional classical computers were proved to give more accurate results faster than these (dubious) quantum computers ( this 4th-last-paragraph, this p.2 fourth-paragraph, this 4~5th paragraphs ).

Google's team also admits the current classical supercomputer could outperform Google's 53-qubit Sycamore: the classical supercomputer called Frontier took only 6.18 seconds (= instead of the former wrong claim of 10000 years ! ) while Google's 53-qubit quantum computer took 200 seconds ( this 6th-paragraph, this 9-10 paragraphs ) in outputting random numbers.

So Google's team **retracted** the former wrong supremacy claim (= that a classical computer may ridiculously take 10000 years ), increased their number of qubits from 53 to 70 to output random meaningless numbers, and claimed baselessly again that they might outperform a classical computer which may take 47 years (= drastically decreased from the former 10000 years ! this p.5-Table1 ), a result on arXiv that is not peer-reviewed yet and still useless research ( this last-paragraph, this 3rd-last~4th-last paragraphs, this 9th-paragraph ).

But there is **No** real quantum computer supremacy or advantage in this Google's meaningless random sampling method, because ordinary classical computers can output various patterns of random numbers **more quickly** than quantum computers (= actually their quantum computers' qubits are manipulated by ordinary classical computers, Not true quantum ).

And unlike the practical classical computers with No errors, all the current quantum computers are too error-prone and always unable to give the right answers, which cannot prove quantum advantage, existence of quantum superposition or simultaneous-parallel-world calculations.

Google Sycamore quantum computers' error rates are extremely high = estimated to be more than 90% error rate when outputting random numbers (= which means they output only **illegitimate erroneous** random numbers, which can**not** prove quantum supremacy by true errorless quantum random numbers ) with very low fidelity (= fidelity F = 1 - error rate, this Two-qubit gate error ).

↑ Google 53-qubit (and recent 70-qubit ) Sycamore quantum computer's fidelity (= F_{XEB} is 1 when no error or noiseless ) is miserable, extremely low, its fidelity = only 0.0024 (= only 0.2% ), which means their **error** (= noise ) rate is much **higher** than 99% ( this p.2,4-5, this p.3 ), while the current precise classical computer with No errors or No noise is fidelity = 1. ← **No** quantum computer's advantage over classical computer in giving "right" answers.
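The arithmetic behind this fidelity figure can be checked directly; a minimal Python sketch (the 0.0024 value is the figure quoted in this section):

```python
# Relation used above: fidelity F = 1 - error rate.
# F_XEB = 1 would mean a noiseless circuit; the reported value was ~0.0024.
fidelity = 0.0024
error_rate = 1 - fidelity
print(f"error rate = {error_rate:.2%}")  # -> error rate = 99.76%
```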

This p.3(or p.2)-first-paragraph criticizes the current error-prone quantum computers,

"we
do **Not** have the quantum resources to implement error correction and so experiments are highly
**noisy** (= including many errors ). For example, Google’s recent quantum supremacy experiment estimated that their fidelity (= correctness )
was merely ∼ 0.2% (i.e., the experiment was ∼ 99.8% noise or error = too **high error** rate ! ) and that their fidelity will
further **decrease** as they scale their system. Therefore their quantum supremacy claim hinges on
whether or not random circuit sampling is intractable in the high noise regime, in which there is only
a **small** signal of the correct experiment in a **sea of noise**, and this signal diminishes with system size."

In order to compare the real speeds of computers, the first requirement is that both quantum and classical computers **must give the right** answers with No errors.

But the current error-prone quantum computers always give **wrong** answers whether they are random or non-random numbers (= quantum error correction is still impossible, this 3rd-paragraph ), which can neither achieve quantum supremacy nor outperform ordinary practical errorless classical computers. ← They try to **hide** a lot of errors given by quantum computers inside meaningless random numbers in this (fake) supremacy claim.

Another trick of these false claims of quantum computer's supremacy, speed-up or advantage is that they **falsely assume** the fictional quantum mechanical superposition or a dead and alive cat in unrealistic quantum parallel worlds may happen in each quantum computer's bit or qubit.

Major quantum computer companies such as Google and IBM use the superconducting electric circuits as quantum bits or (transmon) qubits ( this 6th-paragraph, this middle ).

Each superconducting transmon qubit is an electric circuit consisting of a classical capacitor and inductor (= where the inductor is replaced by a Josephson junction ) in which **many different electrons are mixed** and flowing with some frequencies.

↑ These electrons are oscillating, going back and forth as electric currents whose frequencies or energies are used to define each qubit's state 0 (= lower energy or frequency ) and 1 (= higher energy or frequency, this p.4-8, this 3rd-paragraph, this p.12-13 ).

The superconducting qubit is often said to use (fictitious) Cooper pair allegedly binding two negative electrons, but two negative electrons are Not really bound to each other, because negative charges repel each other, so we should forget this paradoxical and unrealistic Cooper pair, and focus only on real electrons.

The point is this superconducting qubit called "artificial atom" is Not a single atom but consisting of **many** atoms and electrons moving differently.

They apply some microwave pulses to this qubit or circuit for changing the qubit into the so-called quantum superposition ( this 7th-paragraph ) which simultaneously has two mixed electrons' oscillations with two different frequencies as 0 and 1 qubit states allegedly using fictional parallel worlds.

↑ Quantum computer physicists falsely treat these states, which just **mix** two **classically-different** electric currents or frequencies, as (fantasy) irrelevant quantum superposition, a dead-and-alive cat or parallel worlds, which misrepresentation is the main cause of the current false claim of quantum supremacy or advantage.

Each superconducting qubit state is set to be detected as 0 or 1 by the resonator connected to each superconducting qubit circuit ( this p.8-right ). ← So these so-called quantum computer bits are **Not** natural objects but artificially-designed **pseudo**-atoms which can be easily **manipulated** and intentionally **misinterpreted** as (fictional) quantum superposition of unobservable dead-alive cat or parallel worlds.

In the Google quantum computer supremacy, physicists first applied some (classical) microwave pulses to all 53 superconducting qubits and tried to change them into the (imaginary and unobservable) quantum mechanical superposition ( this 5th-paragraph, this 8th-paragraph ) or a dead-alive cat state where 0 and 1 qubit states allegedly exist simultaneously ( this p.1-right ) using fictional parallel worlds, which are just classically-mixed electric current states **irrelevant** to quantum mechanical superposition, as I said.

Quantum mechanical unrealistic superposition or a dead and alive cat living in parallel worlds are unobservable and baseless ( which is like in the unknown black-box, this p.7 ), because quantum mechanics says when we try to observe such an unrealistic dead-alive cat superposition qubit state in parallel worlds, these parallel-world superposition states suddenly collapse into only one qubit state 0 or 1 in a **single** world ( this 5th-paragraph ).

This 7th paragraph says

"Immediately upon looking at the cat, an observer would immediately know if the cat was alive or dead and the "superposition" of the cat—the idea that it was in both states—would **collapse** into either the knowledge that "the cat is alive" or "the cat is dead," but **not** both." ← **No** direct evidence of quantum superposition or parallel worlds.

Each superconducting transmon qubit is far bigger, bulkier and more energy-inefficient (= each qubit is as big as ~mm ) than the ordinary classical computer's very small and compact bit or transistor (= only 20 nm ), hence such a big superconducting qubit can naturally and easily contain **multiple different** (classical) electrons' currents, which classically-mixed states are easily and falsely treated as (imaginary and unseen) quantum mechanical superposition or parallel worlds ( this p.1-left-lower ).

This 6th-paragraph says

"In the quantum world, systems are often said to be in a combination of states — in a so-called “superposition.” When you measure the system, it will “collapse” into just one of those states... the particular result is always fundamentally random."

They **randomly** changed those 53 (or 70 ) qubits by (classical) microwave pulses 20 times and obtained random meaningless 53-bitstring numbers out of 2^{53} (or 2^{70} ) different possible bitstring patterns. ← Each quantum bit or qubit can allegedly take 0 and 1 states (= actually each very big artificial qubit is designed to take more than two states to express the **pseudo**-superposition state in the middle of 0 and 1 currents, which is the trick ), so the total possible states of 53 qubits are 2^{53} different bitstrings.

↑ They baselessly claim that quantum computer with 53 qubits can take 2^{53} different (unseen) superposition parallel-world states simultaneously ( this 8th-paragraph ), but when they try to observe such many superposition states, the 2^{53} superposition states magically and suddenly collapse into only one single state conveniently ( this middle ). ← This means No evidence of the quantum superposition or parallel worlds which are directly **unobservable**.

↑ Getting these random meaningless 53-bitstring numbers a million (= 10^{6} ) times takes about 200 seconds ( this 2nd-paragraph, this 7-8th paragraphs, this Fig.4, this 9-10th paragraphs ), which (still-useless) quantum computer's speed is Not faster than the ordinary classical computers at all.

The ordinary classical computers can easily obtain such random meaningless numbers of 53 bitstring **far faster** than 200 seconds of Google Sycamore quantum computer.
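As an illustration of how cheap this kind of task is classically, here is a minimal Python sketch that draws a million random 53-bit strings. This is my own illustrative example, not code from any cited source: it uses uniform sampling (not the particular output distribution targeted by the experiment), and timings are machine-dependent.

```python
import random
import time

t0 = time.perf_counter()
# Draw one million uniformly random 53-bit strings, as integers.
samples = [random.getrandbits(53) for _ in range(1_000_000)]
elapsed = time.perf_counter() - t0

# On typical desktop hardware this finishes in well under a second.
print(f"{len(samples):,} samples in {elapsed:.3f} s")
```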

But they falsely claim that only quantum computers can exploit the (illusory) parallel-world superposition, which means the 53-qubit quantum computer has the power of exploiting the simultaneous 2^{53} different superposition or parallel-world states (= each qubit can allegedly take two states 0 and 1 simultaneously, hence, the total is 2^{53} qubit states in 53 qubits ) to obtain random (meaningless) 53-bitstring numbers (= of course, 2^{53} quantum superposition states are unobservable, only one bitstring can be observed when measured ).

And they claim that only the ordinary classical computer, which cannot use (fantasy) quantum parallel worlds, has to prepare and perform as many as 2^{53} = 10^{16} different calculations just to obtain random meaningless 53-bitstring numbers, which would take 10000 years ( this p.2 ) !
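The number 2^{53} ≈ 10^{16} used throughout this comparison is easy to verify; a minimal Python check:

```python
import math

n_states = 2 ** 53  # total number of distinct 53-bit strings
print(f"2^53 = {n_states:,}")                        # -> 2^53 = 9,007,199,254,740,992
print(f"log10(2^53) = {math.log10(n_states):.2f}")   # -> log10(2^53) = 15.95, i.e. roughly 10^16
```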

This 6-7th-paragraphs say

"This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a **million times in 200 seconds**."

".. For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the **2^53 possible states**, where the 53 comes from the number of qubits -- the exponential scaling is why people are interested in quantum computing to begin with.."

↑ So the quantum computer's supremacy or advantage does **Not** mean the (still-useless) quantum computers can calculate faster (= because ordinary classical computer can output random numbers more quickly than quantum computers ).

These false quantum computer's supremacy claims are caused by the **false assumption** of the baseless quantum superposition parallel worlds and by **unfairly** forcing only the classical computer to perform as many as 2^{53} = 10^{16} different calculations, while the quantum computer is allowed to get random meaningless 53-bitstring numbers **only a million** (= only 10^{6} ) times ( this 6th-paragraph, this 9th-paragraph ) **without** performing any meaningful calculations.

↑ The huge difference in the numbers of times the classical (= 2^{53} = 10^{16} times ! ) and quantum (= only 10^{6} times ) computers were made to perform the bit-manipulating tasks to obtain the random meaningless numbers is the trick of (illusory) quantum computer's supremacy or pseudo-speed-up.

This p.13-last says

"a crucial aspect of this estimator and of Google’s statistical approach as a whole is that relatively small samples (of length m ∼ 10^{6} = output random numbers just a million times
) allows powerful confirmation for the behavior of an unknown
distribution on a huge probability space (of size 2^{n} or 2^{53}
that can be as large as 10^{12}–10^{16} )."

This Q3 says

"And even though we’ll only see a number of samples that’s **tiny** (= quantum computer is allowed to output random meaningless numbers only a million or 10^{6} times in 200s ) compared to 2^{n} (= only classical computer is unfairly forced to perform 2^{53} or 10^{16} calculations which would take 10000 years ), we can check whether the samples preferentially cluster among the strings that are predicted to be likelier, and thereby **build up our confidence** (← ? unscientific uncertain words ) that something classically intractable is being done."

Chinese team also claimed to have achieved quantum computer's advantage using 60 (out of 66 ) superconducting qubits which just output random meaningless numbers ( this 7th paragraph ) only 70 **million** times in 4.2 hours **instead of** performing 2^{60} (= 10^{18} ) different calculations, which unrealistically-time-consuming tasks were **unfairly** forced only on classical computers ( this Fig.4 = this also output random numbers with many errors, low-fidelity ).

Quantum computer physics often uses the nonphysical math or matrix representation to express the (fictional) quantum superposition of each qubit being 0 and 1 states simultaneously.

The microwave pulse applied for manipulating each qubit is also expressed as nonphysical 2×2 matrices, even using the unrealistic imaginary number (= i ) as the probability amplitude of each qubit 0 or 1 state, as shown in the above figure.

For example, the square root of the X matrix represents the microwave pulse that allegedly changes each qubit into (unseen) quantum superposition ( this middle, this middle ) or a dead-and-alive cat state where each qubit can allegedly take 0 and 1 states simultaneously using parallel worlds ( this Fig.3, this p.29, this Fig.3 ), which nonphysical quantum superposition states are just classically-mixed states, as I said, so No quantum speed-up, and quantum computers are still useless.
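For reference, the square root of the X matrix is the standard 2×2 matrix (1/2)[[1+i, 1−i],[1−i, 1+i]]; a minimal numpy sketch (my own illustration) verifying that applying it twice gives the X (bit-flip) matrix, and that one application to the 0 state yields 50/50 outcome probabilities:

```python
import numpy as np

# sqrt(X) = (1/2) [[1+i, 1-i], [1-i, 1+i]]
sqrt_x = 0.5 * np.array([[1 + 1j, 1 - 1j],
                         [1 - 1j, 1 + 1j]])
x_gate = np.array([[0, 1],
                   [1, 0]])

# Applying the pulse twice flips the bit: (sqrt(X))^2 = X.
assert np.allclose(sqrt_x @ sqrt_x, x_gate)

# One application to the 0 state gives equal 50/50 outcome probabilities.
state = sqrt_x @ np.array([1, 0])
print(np.abs(state) ** 2)   # -> [0.5 0.5]
```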

↑ If physicists really want to prove these baseless and unseen quantum superpositions or parallel worlds are really happening, they must perform some useful calculations such as factoring large numbers much faster than the classical computers using the quantum parallel-world calculations based on Shor's algorithm ( this 9th-paragraph ).

But so far, the quantum computers can factor only very small meaningless numbers such as 21 = 3 × 7, which is useless and **far slower** than the ordinary powerful classical computers ( this 6th-paragraph, this 3rd-paragraph, this 2-3rd paragraphs ). ← Calculations of middle~larger numbers can **Not** be performed by error-prone quantum computers that always give wrong answers. = No quantum advantage was proven.

↑ The fact that useful factoring using Shor's algorithm based on truly quantum mechanical parallel worlds is **impossible** means such occult quantum mechanical parallel worlds ( this last ) or superposition do **Not** exist.

As a result, there is **No** evidence of the quantum computer's supremacy or advantage based on (baseless and illusory) superposition or parallel worlds, which is why we can never utilize really-useful quantum computers.

*(Fig.2) Illusory photon's quantum computer's advantage is based on useless tasks called "boson sampling" just detecting random lights or photons with No actual speed-up. *

Fake news abounds in the current mainstream science especially in the (fictional) parallel-world quantum computer pseudo-science.

"Chinese photon's quantum computer could calculate some (useless) task in 200 seconds, which calculation would take classical (super-) computer 2.5 billion years ?" and "Canadian Xanadu's photon's quantum computer could solve a 9000-year problem in 36 microseconds ?" are all **wrong** claims based on **unfair** illegitimate comparison between classical and quantum computers.

All these dubious photon's quantum computer's advantage and speed-up are based on the meaningless task called "boson sampling" which is completely useless and impractical ( this last-paragraph, this 3rd-last-paragraph, this 1-3rd paragraphs, this 4th-last-paragraph ), which boson sampling is just **randomly** detecting photons or lights at photodetectors with No meaningful calculations, and their quantum advantage claims are illegitimate ( this 2-3rd paragraphs, this p.1-right-2nd-paragraph ).

This 8th-paragraph "Caveats" says

"While we may draw conclusions about the power of quantum computers as they are today, the problems they are solving are experiments – the results are, for lack of a better word, **useless**. They are proof of concepts, not actually calculating a function with a usable result."

Fictional photon's quantum computer's advantage is due to falsely treating the real divisible classical light wave as a fictitious indivisible (pseudo-classical) photon ball or bead like in Galton board.

First of all, the alleged photon's quantum computer consists only of (classical) beam splitters, polarizers and (classical) light phase shifters, which is Not a real computer or calculator at all.

A quantum mechanical photon is also just a classical light wave.

Quantum mechanics ridiculously claims a (quantum) photon can split at a beam splitter into two different paths (= one path is 0, the other path is 1 in photon quantum computer's bit ) simultaneously using (fictional) quantum superposition or parallel universes.

In the useless meaningless task called boson sampling, for example, in the upper figure, four photons or four weak lights enter four input ports, split, interfere with each other at beam splitters and are randomly detected as (fictitious) photons at four photo-detectors (= four modes ), which can detect only electrons ejected by light instead of directly detecting a fictitious photon. ← That's all ( this 3rd-paragraph ), No meaningful calculations have been done in these (fake) photon's quantum computer's advantage experiments ( this last-paragraph ) !

All quantum computers including this photon's quantum computer suffer from high error rates due to severe photons' loss, which means their quantum computers are always **unable** to obtain correct answers unlike the current practical classical computers with No error. ← No quantum computer's advantage.

This is why all that these error-prone quantum computers can do is output random meaningless numbers by randomly detecting photons in this useless "Boson sampling" ( actually this Chinese quantum computer detected uncertain numbers of photons, 43~76 photons, due to their severe photon loss = low efficiency or low fidelity, this p.2, p.5-upper, p.8-upper, this p.68-Analysis of noise, this p.8 ).

↑ They try to hide the quantum computer's high error rate ( this 2nd-last-paragraph ), caused by easy photon loss, inside random meaningless numbers expressed by random detection of multiple photons.

↑ Hence, there is No quantum advantage (= the latest quantum advantage by random photon detection in boson sampling is fake ) as long as the present (useless) quantum computers cannot correct errors.

In order to claim true quantum advantage, the quantum computer has to give right answers (= by correcting errors ) more quickly than the ordinary classical computer (= the classical computer is errorless ), but all the current quantum computers are too error-prone to give right answers (= this is why they could output only random meaningless numbers where errors are hidden ), so there is No quantum advantage.

Especially, photon's quantum computer is completely useless due to its extremely high error rate and photon's loss (= low photon's detection efficiency < 70%, this p.4-left-3rd-paragraph, this p.1-right-1st-paragraph ).

This 5th-paragraph says

"They (= photon quantum computer ) mainly suffered from their low efficiency of, at best, **eleven** percent. This means that a large fraction of the light particles, and thus also of the data, are **lost** while being processed in the quantum system - a shortcoming especially when numerous quantum gates are to be connected consecutively in a quantum network and **losses add up** as a result."

In the (fake) quantum advantage based on boson sampling, physicists just randomly detected multiple classical light waves wrongly treated as quantum photons split by multiple beam splitters (= BS ) at photodetectors with **No** useful quantum computation ( this p.2-Fig.1 ).

As a representative of a classical particle, they intentionally **ignored** the real divisible light wave, and instead picked up a **non-existent** indivisible pinball-like (classical) photon-ball model, which fictitious photon ball (like a ball dropping into a Galton board ) is supposed to be **unable** to split or interfere at beam splitters ( this p.32-Fig.1.9 ). ← This is an unreal classical photon.

↑ This fictitious indivisible photon ball model they chose as the classical representative object is **Not** a real (divisible) classical light wave (= ordinary classical computers use the divisible classical light wave which physicists call "quantum photons"; the non-existent classical photon-balls are Not used in any computers including classical computers ), so this quantum advantage experiment based on comparison with the indivisible **non-existent classical photon balls** is meaningless and **unreal**. = **No** quantum computer's advantage nor speed-up.

This 4th-paragraph says

"Juizhang (= Chinese photon's quantum computer ) is **far from** a general-purpose computer... You can picture it as a **pinball** machine, but without the flippers. You drop the ball from the top of the machine. It dribbles down towards the bottom, bouncing on the various bumpers, to finally arrive in one of several slots... Replace the **balls** by **photons**, the bumpers by prisms and mirrors, add detectors in each of the bottom slots to monitor the number of photons, and you have a rather good analogy. Simulating the **classical** system (with the real **balls**) is relatively easy with a classical computer. However, the bar is higher when we deal with **quantum** objects (= quantum photon ). This is due to the (photon's) **interference** between various possible paths."

↑ This means quantum mechanics wrongly considers the ordinary light's interference to be a "(nonphysical) quantum mechanical photon's interference" instead of classical mechanical light wave phenomena, which is the root cause of the illusory photon's quantum computer's advantage or supremacy's claims.

Quantum mechanics baselessly makes an unrealistic assumption that each photon (= just weak classical light wave ) could split into multiple paths or (imaginary) parallel-world superposition states ( this p.2-right-2nd-paragraph, this p.4-quantum superposition, this p.40(or p.28)-lower ), and interfere with each other (= this interference used in boson sampling is called Hong-Ou-Mandel effect that can be explained by classical light wave, hence quantum mechanics is unnecessary and its quantum advantage is illusion ).

This 4th paragraph says

"The essence of the boson sampling problem can be readily understood by examining the quantum machine which solves it. Our machine begins by injecting indistinguishable single photons into a linear optical network of single-mode waveguides. Due to quantum interference as the photons traverse the network, the photons emerge in a complicated entangled state. The positions of the photons are then measured by single-photon detectors..."

".. The machine bears some resemblance to the classical Galton board, illustrated below, in which balls **randomly** fall through an array of pegs. However, while these distinguishable classical balls take familiar, distinct paths down the board, **rolling either left or right** off of each peg they encounter (= a classical light was treated as a **fictitious indivisible** ball ), the (quantum) photons in some sense collectively take **all possible paths** (= they considered only a quantum photon could split into imaginary quantum superposition states, this 2nd-paragraph ) through their network."

They (wrongly) prepare (imaginary) pinball-like classical photon balls that can Not split or interfere at beam splitters, and assume the estimation of the photons' distribution or chance of detecting photons at photodetectors using those fictitious (indivisible) pinball-like classical photon-balls needs to calculate **all possible routes** taken by each (fictitious) classical photon-ball arriving at one of (four-upper-figure) detectors randomly ( this 6th-paragraph, this p.2-left-II ) for simulating (imaginary) quantum superposition.

↑ It takes too much time for a fictitious indivisible classical photon ball to mimic a quantum photon splitting into multiple paths simultaneously using fictional quantum superposition or parallel universes, which is the trick of this research's (fake) quantum advantage.

There is **No** evidence of quantum mechanical superposition = a dead and alive cat in parallel worlds (= unobservable ), which can be naturally explained by the normal **divisible classical** light wave (= unlike a fictitious classical photon ball, real classical light wave can split at a beam splitter, so No quantum advantage ).

Their quantum speed-up or advantage is illusion and useless.

Here we explain how (fictitious) classical photon balls can simulate or mimic (imaginary) quantum photon splitting into many routes (= many photodetectors ) using (fantasy) parallel world superposition in their (fake) quantum advantage experiments based on random boson sampling.

In boson sampling in the upper figure using four photons, they assume the chance of a (fictitious indivisible) photon-ball-1 hitting the photodetector-1 is supposed to be U11, the chance of photon-ball-2 (or 3 ) hitting the photodetector-2 (or 3 ) is U22 (or U33), and the chance of photon-ball-2 (or 4 ) hitting the photodetector-4 (or 2 ) is U24 (or U42, this p.5-6 ).

So the total chance summing all these different possible patterns of four photon-balls randomly hitting four photodetectors in this boson sampling is expressed as the "permanent ( this p.8-10, this p.3-left, this p.4-left )" whose number of terms is 4! ( this p.19-3.2,p.13 (= n photons, N detectors ), this p.22 ) = 4×3×2×1 = 24 pattern terms, which number is larger than 2^{4} = 16 ( this p.15, this p.15, this p.10-last-paragraph ).
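A brute-force sketch of this permanent, summing over all 4! pairings of photons with detectors, makes the term count concrete. This is my own illustrative Python example (not code from the cited papers); the all-ones matrix is used only so the permanent equals the number of terms.

```python
from itertools import permutations
import math

def permanent(U):
    """Brute-force permanent: sum over all n! pairings of rows (photons)
    with columns (detectors) of the product of the matrix entries."""
    n = len(U)
    return sum(math.prod(U[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# For a 4x4 all-ones matrix, every product is 1, so the permanent
# just counts the terms: 4! = 24.
U = [[1] * 4 for _ in range(4)]
print(permanent(U))                   # -> 24
print(math.factorial(4), 2 ** 4)      # -> 24 16  (4! exceeds 2^4, as stated above)
```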

↑ Calculation of this permanent (= calculating probabilities summing all possible paths taken by fictitious indivisible classical photons ) takes an exponentially-enormous amount of time proportional to 2^{n} where n is the photon number ( this p.2-right-3~5th paragraphs, this p.1-right-last-paragraph~p.2-left, this Fig.1, speed performance ).
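The standard way to do better than the brute-force n! sum is Ryser's inclusion-exclusion formula, which still scales exponentially (it iterates over all 2^n column subsets, matching the 2^{n} scaling stated above); a minimal sketch for illustration, my own code:

```python
from itertools import combinations

def permanent_ryser(U):
    """Ryser's formula: perm(U) = (-1)^n * sum over column subsets S of
    (-1)^|S| * prod_i (sum of row i restricted to S).
    Exponential: 2^n subsets are visited."""
    n = len(U)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(U[i][j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total

# Same 4x4 all-ones matrix as before: permanent = 4! = 24.
print(permanent_ryser([[1] * 4 for _ in range(4)]))   # -> 24
```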

In Chinese team's photon quantum computer advantage, they used 100 input photons (= 50 photons × 2 with horizontal H and vertical V polarizations, this p.6-upper ) and 100 output detectors or modes ( this 3rd-paragraph, this 8th-paragraph ), so the total number of terms summing all possible routes and patterns of detecting 100 photons by 100 different photodetectors is as many as 100! which is bigger than 2^{100} = 10^{30} patterns.
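The claim that 100! exceeds 2^{100} is quickly checked:

```python
import math

print(math.factorial(100) > 2 ** 100)            # -> True
print(round(math.log10(math.factorial(100))))    # -> 158  (100! ~ 10^158)
print(round(math.log10(2 ** 100)))               # -> 30   (2^100 ~ 10^30)
```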

↑ Ordinary classical (super-)computer would take billions of years to calculate as many as 2^{100} or 10^{30} different routes or patterns taken by 100 (fictitious indivisible) photon-balls, they claim. ← But this estimation is completely wrong.

When there are n incident photons or n detectors (= modes ), physicists are supposed (= forced ) to calculate as many as 2^{n} different probability patterns of detecting the fictitious indivisible classical photon-balls at different photodetectors ( this 3rd-last-paragraph, this p.3-(13), this p.3-(2) ), which needs an unrealistically large exponential amount of time.

So if 144 photons are used, only classical computers (= considering photons as fictitious indivisible balls in Galton-board ) are **unfairly** forced to calculate as many as 2^{144} = 10^{43} different photons' detection patterns taking unrealistically too much time ( this p.8 5-6th-paragraphs ) while quantum photons, which allegedly could split into multiple superposition parallel worlds, are allowed to be sampled only a few times without any computation, which unfair comparison cannot prove (fake) quantum computer's speed-up or advantage at all.

↑ The point is (still-useless) quantum computers can**not** calculate these unrealistically-large number = 2^{n} or 2^{100} of photons' detection patterns (= permanent, this Eq.4 ), either, because the alleged quantum computer's boring task of boson-sampling is just sampling or randomly detecting the photons going through beam splitters without directly calculating any photons' detection probabilities.

A (fictitious) quantum mechanical photon is supposed to be able to split into multiple photons going down different paths using fictional quantum mechanical superposition ( this 2nd-paragraph, this 4th-paragraph ) or parallel worlds, and to interfere with each other to arrive at all 100 photodetectors simultaneously ( this Scenario, this 7th-paragraph ).

↑ Quantum mechanics often emphasizes the existence of (fantasy) superposition or parallel worlds, but physicists can detect only one photon or one qubit state in a single world at once (= all other superposition states suddenly disappear when measured, they insist ), so the quantum mechanical superposition is an unobservable and **baseless** phenomenon.

This 9th-paragraph says

"Photons are first sent into a network of channels. There, each photon encounters a series of beam splitters, each of which sends the photon down two paths simultaneously, in what’s called a quantum superposition. Paths also merge together, and the repeated splitting and merging causes the **photons** to **interfere** with one another according to **quantum** rules."

So based on this **baseless** quantum mechanical (unseen) superposition or parallel worlds, Chinese team just randomly detected (or sampled ) multiple photons at 100 photodetectors only about a **million** times (= 10^{6}, or they actually sampled photons 3097810 or 3 million times, this p.3-middle-right, this p.8-upper ) in 200 seconds **without** performing any meaningful computations.

↑ This is the trick behind the (illusory) quantum computer's speed-up or supremacy: they **unfairly** forced the unrealistically time-consuming task of calculating as many as 10^{30} different photons' detection patterns only on the classical computer, while the (still-useless) quantum computer was allowed to detect about 100 photons only a million (= 10^{6} ) times.

↑ This (boson) sampling number is **far smaller** than 10^{30} calculations forced only on classical computers (= their impractical photon's quantum computer just randomly detected or sampled photons only 3 million or 3 × 10^{6} times in 200 s, instead of 10^{30} times unfairly forced only on classical computers, this p.3-left-4th paragraph ) .

As a result, the (illusory) photon quantum computer's advantage or speed-up is caused by the **unfair comparison**, which forced only the classical computer to do the unrealistically many (= 10^{30} ) calculations while allowing the quantum computer to randomly sample 100 photons only a far smaller number (= 10^{6} ) of times. This comparison baselessly assumes that the quantum mechanical photon can split into multiple different detectors simultaneously using (fictitious) superposition or parallel worlds, which cannot be simulated by the indivisible (fictitious classical) photon-balls in a Galton board.
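The mismatch in workload is simple arithmetic. A tiny Python check, using only the round figures quoted in the text above, makes the disparity explicit:

```python
# Round figures quoted in the text above (assumptions, not new measurements):
patterns = 2 ** 100        # ~10^30 possible detection patterns for ~100 photons
samples  = 3_097_810       # photon samples actually taken in 200 seconds

ratio = patterns // samples
print(f"{patterns:.2e} patterns vs {samples:.2e} samples (ratio ~{ratio:.0e})")
```

So the classical side was assigned on the order of 10^{23} times more work than the sampling side actually performed.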

If we replace this fictitiously-indivisible (classical) photon-ball model in Galton board by the realistically-**divisible** and interfering **classical light** wave, we can naturally obtain the same results as (fictitious) quantum mechanical photons allegedly splitting into fantasy superposition or parallel worlds. ← Quantum computer's advantage turned out to be **illusion**.

Canadian Xanadu's photon quantum computer's advantage is also based on this (useless) boson sampling task unreasonably treating the real divisible light wave as the unreal indivisible photon-balls or rigid objects ( this 17-20th-paragraphs ), so it's also **fake** quantum computer's supremacy, and **useless**, impractical ( this last, this 12th-paragraph, this 6th-paragraph, this 5th-last paragraph ).

This or this 10-11th paragraphs say

" So far, **No** one has been able to demonstrate quantum advantage for a “useful” computational task – the **random**-sampling problem first tackled by Google essentially has **No** applications beyond demonstrating quantum advantage... While ( Canadian Xanadu ) Borealis is an impressive jump forward in scale over Jiuzhang, it **falls short** of being a fully programmable quantum computer like Sycamore or Zuchongzhi."

↑ Xanadu just randomly detected about 100~200 photons ( this 3rd-paragraph ) one time in 36 microseconds with No meaningful calculations. This photons' distribution or detection pattern is hard to simulate using the fictitious indivisible (classical) photon-ball model, which would allegedly take the classical (super-)computer 9000 years ( this Fig.1, p.6-left ).

↑ If we assume the real divisible classical light wave can also split and interfere, the false quantum computer's advantage based on the (illusory) quantum mechanical photons **disappears**.

There is **No** such thing as a (fictitious indivisible) classical photon-ball in this real world. There is only the realistically-divisible classical light wave in this world.

The current classical (super-)computer, which can perform as many as 200000 trillion calculations per second, is much faster than the (still-useless) quantum computer, which can obtain only one photon sample per 36 microseconds.

So all the dubious photon quantum computer's advantage experiments are based on the (illusory) comparison between the **non-existent** (indivisible) classical photon-ball (like in a Galton board ) and the non-existent quantum photon splitting into parallel worlds, ignoring the real divisible classical light wave. This means their experiments, which compared against unreal non-existent objects or models (= the non-existent indivisible classical photon-ball in a Galton board ), are meaningless, and this is why all the current quantum computers are still useless despite an incredible amount of media-hype ( this last ).

The so-called "programmable" photon's quantum computer does **Not** mean the real computer's programming (= because the current hopeless quantum computers cannot program or calculate any meaningful numbers due to their high error rates and severe photons' loss ). Their (misleadingly-used) programming just means the mere adjustment of (classical) beam splitters and light wave shifters with **No** meaningful computations.

↑ In Xanadu's photon quantum computer advantage experiment, they just detected an average of 125 photons at a time ( out of a maximum of 219 photons ), which **single detection took 36 μs without** performing any calculations.

They **unfairly** forced only the classical supercomputer to calculate the very time-consuming photons' distribution probabilities many, many times (= the photon quantum computer was allowed to just detect or sample photons only **one time**, while only the classical supercomputer was forced to calculate different photons' detection probabilities **many, many times** ). ← **No** quantum computer's advantage was proven by this unfair comparison.

In order to calculate all possible combinations of photons detected at different detectors, they used the generalized version of the permanent called the "hafnian", whose calculation algorithm would take time proportional to 2^{N/2}, where N is the photons' number ( this p.2-left-upper, this p.2,p.4, this p.1-2, this p.4 ).
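The hafnian mentioned here is a sum over all perfect matchings (= pairings) of the detected modes. A naive recursive Python sketch, shown only to make the combinatorics concrete (the optimized algorithms the papers cite are faster, but still scale like 2^{N/2}):

```python
def hafnian(A):
    """Naive hafnian of a symmetric 2m x 2m matrix: sum of products of
    entries over all perfect matchings of the indices. Illustrative
    only -- the matching count grows as (2m-1)!! = 1*3*5*...*(2m-1)."""
    n = len(A)
    if n == 0:
        return 1
    total = 0
    for j in range(1, n):                         # pair index 0 with each j
        rest = [i for i in range(1, n) if i != j]
        sub = [[A[r][c] for c in rest] for r in rest]
        total += A[0][j] * hafnian(sub)
    return total

# A 4x4 all-ones matrix has 3 perfect matchings of {0,1,2,3}:
# (01)(23), (02)(13), (03)(12)
print(hafnian([[1] * 4 for _ in range(4)]))
```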

The estimated time that the classical supercomputer (= Fugaku ) would take is based on the equation: Fugaku's single-operation time (= 4.41×10^{-17} s ) × 216 (= photon detectors' or modes' number ) × N^{3} (where N = 125 is the mean number of detected photons ) × 2^{125/2} (= the hafnian's exponential factor, or the illusory parallel-world number ) ≈ 10^{11} seconds = about **9000 years** ( this p.6 ). ← ridiculous, **unscientific** estimation.
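This estimate can be checked by plugging the quoted constants into the formula. A minimal Python evaluation, using only the numbers given above, lands on the order of 10^{11} seconds (thousands of years; the paper's own accounting quotes about 9000 years):

```python
# Constants quoted in the text above (from the Fugaku estimate):
t_op  = 4.41e-17          # Fugaku's time for one elementary calculation (s)
modes = 216               # number of photon detectors (= modes)
N     = 125               # mean number of detected photons

t_total = t_op * modes * N ** 3 * 2 ** (N / 2)   # estimated classical runtime (s)
years   = t_total / (365.25 * 24 * 3600)
print(f"{t_total:.2e} s  (~{years:.0f} years)")
```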

The recent Chinese research paper's ridiculous quantum advantage claim that their random (meaningless boson) sampling of photons was 10 quadrillion times faster than a classical supercomputer is also based on the **wrong** assumption that (imaginary indivisible) classical photon balls (= corresponding to the classical computer ) have to take an enormous amount of time to calculate all possible routes taken by an (imaginary) divisible quantum photon splitting into fictional parallel worlds. These phenomena can be explained by the realistic divisible classical light wave, so there is No quantum speed-up or advantage ( this-p.5-(3) ).

In these dubious photon quantum computer's advantage experiments, physicists often made the unreasonable claim that two photons could stick to each other at each beam splitter by (fictitious) quantum interference ( this 16th-paragraph, this 3rd-paragraph ), which phenomenon is called "Hong-Ou-Mandel (= HOM )" interference effect ( this 5th-paragraph, this 2nd-paragraph, this p.12-13 ).

↑ They **falsely** think this Hong-Ou-Mandel interference of two photons at a beam splitter uses (fictional) quantum mechanical superposition states ( this quantum beam splitter~Hong-Ou-Mandel, this p.2-Fig.1 ) where each photon could magically take different paths simultaneously using (fictional) parallel worlds; hence, this is the reason supporting their (fake) photon quantum computer advantage. However, this HOM interference can be naturally explained by **classical** light wave interference, so there is **No** quantum advantage or superposition.

It is **impossible** that two photons with No electric charges magically stick to each other, hence, this phenomenon should be explained by the realistic classical light wave interference, and the so-called "quantum interference" is illusion and unnecessary.

Physicists often used the special beam splitter where only the reflected photon or light wave from one direction is set to change its wave phase by π (= half wavelength, this p.6 ).

↑ It means that when two photons or light waves enter this same beam splitter from two directions, the light wave exiting from one side interferes destructively (= due to the phase-change setting ) and cannot be detected as a photon, because its destructively-weakened light intensity does not exceed the photodetector's threshold. The light wave exiting from the other side of the beam splitter interferes constructively, and its constructively-strengthened light intensity can be detected as a photon. This looks as if two photons entering the same beam splitter stick to each other when they exit from the beam splitter.

The wave phase of the photon or light entering the beam splitter is random ( this p.6-right-upper ), so the side of the beam splitter from which two photons or lights exit is random.

↑ Lights exiting from one side of the beam splitter (= the side whose reflection causes the light phase shift ) are (partially) constructively interfered and detected as (fictitious) photons by the photodetector, because their constructively-strengthened light intensity (partially) exceeds the photon-detection threshold. Lights exiting from the other side of the beam splitter are (partially) destructively interfered and go undetected as photons, because their destructively-weakened light intensity falls short of the photon-detection threshold. This is the realistic **classical** mechanism behind two photons (magically) sticking to each other, or Hong-Ou-Mandel interference. ← **No** quantum mechanical superposition nor advantage.
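The classical-wave mechanism described above can be written down directly. A minimal stdlib-Python sketch (an illustration of the text's argument, not any team's code) models a 50/50 beam splitter whose one-side reflection adds a π phase shift, so two equal in-phase waves exit entirely from one port:

```python
import cmath
import math
import random

def beam_splitter(E1, E2):
    """50/50 beam splitter whose reflection from one side carries a
    pi phase shift, giving output ports (E1+E2)/sqrt(2), (E1-E2)/sqrt(2)."""
    s = math.sqrt(2)
    return (E1 + E2) / s, (E1 - E2) / s

# Two classical light waves of equal amplitude and a random relative phase:
phi = 2 * math.pi * random.random()
out_a, out_b = beam_splitter(1.0, cmath.exp(1j * phi))

I_a, I_b = abs(out_a) ** 2, abs(out_b) ** 2   # output intensities
# I_a = 1 + cos(phi) and I_b = 1 - cos(phi): one port brightens (above a
# photodetector threshold) while the other dims, and the total energy
# I_a + I_b = 2 is conserved -- which *looks like* two "photons" exiting
# together from one side (the Hong-Ou-Mandel effect), with no superposition.
```

Which port is the bright one depends only on the random phase phi, matching the random exit side described above.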

As a result, if we use the realistic (classical) light wave interference from the beginning, all these illusory concepts such as the (fake) quantum computer's advantage or magically-sticking photons wouldn't have appeared.

Feel free to link to this site.