Category Archives: Quantum Physics

Hunting for quantum-classical crossover in condensed matter problems | npj Quantum Information – Nature.com

Our argument on the quantum-classical crossover is based on the runtime analysis needed to compute the ground state energy within a desired total energy accuracy, denoted as ε. The primary objective in this section is to provide a framework that elucidates the quantum-classical crosspoint for systems whose spectral gap is constant or polynomially shrinking. In this work, we choose two models that are widely known for their rich physics despite their simplicity: the 2d J1-J2 Heisenberg model and the 2d Fermi-Hubbard model on a square lattice (see the Method section for their definitions). Meanwhile, it is totally unclear whether a feasible crosspoint exists at all when the gap closes exponentially.

It is important to keep in mind that condensed matter physics often entails extracting physical properties beyond merely the energy, such as magnetization, correlation functions, or dynamical responses. Therefore, in order to assure that expectation value estimations can be done consistently (i.e., satisfy N-representability), we demand the option to measure physical observables after the computation of the ground state energy is done. In other words, in the classical algorithm, for instance, we perform the variational optimization up to the desired target accuracy ε; we exclude the case where one calculates less precise quantum states with energy errors ε_i and subsequently performs extrapolation. A similar requirement is imposed on the quantum algorithm as well.

Among the numerous powerful classical methods available, we have opted to utilize the DMRG algorithm, which has been established as one of the most powerful and reliable numerical tools to study strongly-correlated quantum lattice models, especially in one dimension (1d)18,19. In brief, the DMRG algorithm performs variational optimization on a tensor-network-based ansatz named the Matrix Product State (MPS)20,21. Although the MPS is designed to efficiently capture 1d area-law entangled quantum states22, the efficacy of the DMRG algorithm allows one to explore quantum many-body physics beyond 1d, including quasi-1d and 2d systems, and even all-to-all connected models, as considered in quantum chemistry23,24.

A remarkable characteristic of the DMRG algorithm is its ability to perform systematic error analysis. This is intrinsically connected to the construction of the ansatz, the MPS, which compresses the quantum state by performing site-by-site truncation of the full Hilbert space. The compression process explicitly yields a metric called the "truncation error," from which we can extrapolate the truncation-free energy E0 to estimate the ground truth. By tracking the deviation from the zero-truncation result, δE = E − E0, we find that the computation time and error typically obey a scaling law (see Fig. 2 for an example of such scaling behavior in the 2d J1-J2 Heisenberg model). The resource estimate is completed by combining the actual simulation results with the estimation from the scaling law. [See Supplementary Note 2 for detailed analysis.]
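To illustrate how such an extrapolation works in practice, the following minimal Python sketch fits the linear relation between energy and truncation error and a power law between runtime and energy error; the numbers are made-up placeholders, not the DMRG data from the paper.

```python
import numpy as np

# Hypothetical DMRG data for increasing bond dimensions (placeholders, not from the paper):
# truncation error, variational energy, and wall time in seconds.
trunc_err = np.array([1e-4, 3e-5, 1e-5, 3e-6, 1e-6])
energy    = np.array([-49.82, -49.89, -49.91, -49.917, -49.919])
walltime  = np.array([1e2, 1e3, 1e4, 1e5, 1e6])

# 1) Extrapolate the truncation-free energy E0 from the leading-order linear
#    behaviour E = E0 + a * (truncation error).
a, E0 = np.polyfit(trunc_err, energy, 1)
print(f"estimated E0 = {E0:.4f}")

# 2) Fit a scaling law t ~ (E - E0)^gamma on a log-log scale and use it to
#    predict the runtime needed to reach a target accuracy epsilon.
err = energy - E0
gamma, log_c = np.polyfit(np.log(err), np.log(walltime), 1)
epsilon = 0.01
t_pred = np.exp(log_c) * epsilon**gamma
print(f"exponent gamma = {gamma:.2f}, predicted runtime at eps = {epsilon}: {t_pred:.1e} s")
```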

Although the simulation itself does not reach ε=0.01, learning curves for different bond dimensions ranging from D=600 to D=3000 collapse onto a single curve, which implies that it is adequate to estimate the runtime according to the obtained scaling law. All DMRG simulations are executed using the ITensor library61.

We remark that it is judicious to select the DMRG algorithm for 2d models, even though the number of parameters in the MPS is formally expected to increase exponentially with system size N, owing to its intrinsic 1d-oriented structure. Indeed, one may consider other tensor network states that are designed for 2d systems, such as the Projected Entangled Pair States (PEPS)25,26. When one uses PEPS, the bond dimension is anticipated to scale as D=O(log(N)) for gapped or gapless non-critical systems and D=O(poly(N)) for critical systems27,28,29 in order to represent the ground state with a fixed total energy accuracy of ε=O(1) (it is important to note that the former would be D=O(1) if one considers a fixed energy density). Therefore, in the asymptotic limit, the scaling of the number of parameters of the PEPS is exponentially better than that of the MPS. Nonetheless, in actual calculations, the overhead involved in simulating the ground state with PEPS is substantially high, to the extent that there are practically no scenarios where the runtime of the variational PEPS algorithm outperforms that of the DMRG algorithm for our target models.
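A back-of-the-envelope comparison of parameter counts makes the asymptotic statement concrete; in the sketch below the bond-dimension growth rates follow the scalings quoted above, while the prefactors and sizes are arbitrary illustrative choices rather than values from the paper.

```python
import numpy as np

d = 2  # local (spin-1/2) Hilbert space dimension

def mps_params(N, D):
    """Rough parameter count of an open-boundary MPS: ~ N * d * D^2."""
    return N * d * D**2

def peps_params(N, D):
    """Rough parameter count of a square-lattice PEPS: ~ N * d * D^4."""
    return N * d * D**4

for L in (6, 10, 20):
    N = L * L
    # Illustrative bond dimensions only: an MPS wrapped onto an LxL cylinder needs D
    # growing exponentially with the linear size L, while PEPS needs D = O(log N)
    # for fixed total energy accuracy (prefactors below are arbitrary).
    D_mps = 2 ** L
    D_peps = max(2, int(np.log(N)))
    print(L, f"MPS ~ {mps_params(N, D_mps):.2e} params", f"PEPS ~ {peps_params(N, D_peps):.2e} params")
```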

Quantum phase estimation (QPE) is a quantum algorithm designed to extract the eigenphase of a given unitary U by utilizing ancilla qubits to indirectly read out the complex phase of the target system. More concretely, given a trial state |ψ⟩ whose fidelity with the k-th eigenstate |k⟩ of the unitary is given as fk = |⟨k|ψ⟩|², a single run of QPE projects the state to |k⟩ with probability fk and yields a random variable φ̂ which corresponds to an m-digit readout of the eigenphase φk.
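A minimal numerical caricature of this sampling behavior (the statistics of idealized QPE readouts, not the circuit itself) might look as follows; the eigenphases and overlaps are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical eigenphases phi_k in [0, 1) and trial-state overlaps f_k = |<k|psi>|^2.
phis = np.array([0.137, 0.402, 0.750])
f    = np.array([0.80, 0.15, 0.05])   # must sum to 1

m = 8  # number of readout (ancilla) qubits

def qpe_shot():
    """One idealized QPE run: project onto |k> with probability f_k, then return an
    m-bit reading of the corresponding phase (ignoring the finite-width readout tails)."""
    k = rng.choice(len(phis), p=f)
    return np.round(phis[k] * 2**m) / 2**m, k

shots = [qpe_shot() for _ in range(1000)]
# With f_0 = 0.8, roughly 80% of shots collapse onto the ground state and return its phase.
ground_shots = [phi for phi, k in shots if k == 0]
print(f"fraction projected onto ground state: {len(ground_shots) / len(shots):.2f}")
print(f"m-bit estimate of phi_0: {ground_shots[0]}")
```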

It was originally proposed in ref. 30 that eigenenergies of a given Hamiltonian can be computed efficiently via QPE by taking advantage of quantum computers to perform Hamiltonian simulation, e.g., U = exp(−iHτ). To elucidate this concept, it is beneficial to express the gate complexity of the QPE algorithm, schematically shown in Fig. 3, as

$$C \sim C_{\mathrm{SP}} + C_{\mathrm{HS}} + C_{\mathrm{QFT}^{\dagger}},$$

(1)

where we have defined CSP as the cost for state preparation, CHS as the cost for the controlled Hamiltonian simulation, and CQFT† as the cost for the inverse quantum Fourier transformation (see Supplementary Note 4 for details). The third term is expected to be the least problematic, with CQFT† = O(log(N)), while the second term is typically evaluated as CHS = O(poly(N)) when the Hamiltonian is, for instance, sparse, local, or constituted from polynomially many Pauli terms. Conversely, the scaling of the first term, CSP, is markedly nontrivial. In fact, the ground state preparation of a local Hamiltonian generally necessitates exponential cost, which is related to the fact that the ground state energy calculation of a local Hamiltonian is QMA-complete31,32.
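The decomposition in Eq. (1) can be turned into a toy cost model; in the sketch below the functional forms follow the scalings quoted in this section (including the ASP-based estimate for CSP derived later), while the prefactors are placeholders rather than compiled gate counts.

```python
import numpy as np

def qpe_gate_count(N, eps, c_sp=1.0, c_hs=1.0, c_qft=1.0):
    """Toy version of Eq. (1): C ~ C_SP + C_HS + C_QFT^dagger.
    Functional forms follow the scalings quoted in the text; the prefactors c_* are
    arbitrary placeholders, not values from the paper."""
    C_sp  = c_sp * N**1.75            # assumes C_SP = O(N^(1 + beta/2)) with beta = 1.5 (see below)
    C_hs  = c_hs * N**2 / eps         # poly(N) block-encoding combined with Heisenberg-limited 1/eps queries
    C_qft = c_qft * np.log2(N / eps)  # inverse QFT on the ancilla register
    return C_sp + C_hs + C_qft

for N in (36, 100, 400):
    print(N, f"toy gate count ~ {qpe_gate_count(N, eps=0.01):.2e}")
```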

Here, ancilla qubits are projected to |x1⋯xm⟩, which gives an m-digit readout of the ground state |ψGS⟩ of an N-qubit system.

Although the aforementioned argument may seem rather formidable, it is important to note that QMA-completeness pertains to the worst-case scenario. Meanwhile, the average-case hardness for translationally invariant lattice Hamiltonians remains an open problem, and furthermore we have no means to predict the complexity of specific problem instances. In this context, it is widely believed that a significant number of ground states that are of substantial interest in condensed matter problems can be readily prepared with a polynomial cost33. In this work, we take a further step and argue that the state preparation cost can be considered negligible, CSP ≪ CHS, for our specific target models, namely the gapless spin liquid state in the J1-J2 Heisenberg model and the antiferromagnetic state in the Fermi-Hubbard model. Our argument is based on numerical findings combined with upper bounds on the complexity, while we leave the theoretical derivation of the scaling (e.g., Eq. (4)) as an open problem.

For concreteness, we focus on the scheme of Adiabatic State Preparation (ASP) as a deterministic method to prepare the ground state through a time evolution of duration tASP. We introduce a time-dependent interpolating function s(t): ℝ → [0,1] with s(0)=0 and s(tASP)=1, such that the ground state is prepared via the time-dependent Schrödinger equation given by

$$i\frac{\partial}{\partial t}\vert\psi(t)\rangle = H(t)\vert\psi(t)\rangle,$$

(2)

where H(t) = H(s(t)) = s Hf + (1−s) H0 for the target Hamiltonian Hf and the initial Hamiltonian H0. We assume that the ground state of H0 can be prepared efficiently and take it as the initial state of the ASP. Early studies suggested that a sufficient (but not necessary) condition for preparing the target ground state scales as tASP = O(1/(εf Δ³))34,35,36, where εf = 1 − FGS(tASP) is the target infidelity (with FGS the fidelity with the ground state) and Δ is the spectral gap. This has been refined in recent research as

$$t_{\mathrm{ASP}}=\left\{\begin{array}{ll} O\!\left(\dfrac{1}{\epsilon_f^{2}\Delta^{2}}\,|\log(\Delta)|^{\zeta}\right) & (\zeta>1)\\[6pt] O\!\left(\dfrac{1}{\Delta^{3}}\log(1/\epsilon_f)\right) & \end{array}\right.$$

(3)

The two conditions independently achieve optimality with respect to Δ and εf, respectively. Evidently, the ASP algorithm can prepare the ground state efficiently if the spectral gap is constant or only polynomially small, Δ = O(1/poly(N)).
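A minimal numerical sketch of ASP in the sense of Eq. (2), applied to a small random toy Hamiltonian rather than the actual J1-J2 or Fermi-Hubbard models, is shown below; the linear interpolation and all parameters are illustrative only.

```python
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(1)
n = 3                      # toy system: 3 qubits (8-dimensional Hilbert space)
dim = 2**n

# Initial Hamiltonian H0 with a trivially preparable ground state, and a random
# Hermitian target Hamiltonian Hf as a stand-in for the models in the text.
H0 = np.diag(np.arange(dim, dtype=float))
A  = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Hf = (A + A.conj().T) / 2

def asp_infidelity(t_asp, steps=400):
    """Evolve under H(s) = (1 - s) H0 + s Hf with linear s(t) = t / t_asp,
    then return the infidelity with the true ground state of Hf."""
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                       # ground state of H0
    dt = t_asp / steps
    for j in range(steps):
        s = (j + 0.5) / steps
        H = (1 - s) * H0 + s * Hf
        psi = expm(-1j * H * dt) @ psi
    gs = eigh(Hf)[1][:, 0]
    return 1.0 - abs(gs.conj() @ psi)**2

# Longer sweeps should give smaller infidelity, in line with the adiabatic bounds above.
for t_asp in (1, 10, 100, 1000):
    print(t_asp, f"infidelity = {asp_infidelity(t_asp):.3e}")
```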

For both of our target models, numerous works suggest that Δ ∝ N^(−1/2)37,38,39, which is one of the most typical scalings in 2d gapless/critical systems, such as spontaneous symmetry-broken phases with Goldstone modes and critical phenomena described by 2d conformal field theory. With the polynomial scaling of Δ granted, we now ask what the scaling of CSP is and how it compares to the other constituents, namely CHS and CQFT†.

In order to estimate the actual cost, we have numerically calculated the tASP required to achieve the target fidelity (see Supplementary Note 3 for details) for systems of up to 48 qubits. With the aim of providing a quantitative way to estimate the scaling of tASP at larger sizes, we reasonably consider the combination of the upper bounds provided in Eq. (3) as

$$t_{\mathrm{ASP}}=O\!\left(\frac{1}{\Delta^{\beta}}\log(1/\epsilon_f)\right).$$

(4)

Figure 4a, b illustrates the scaling of tASP with respect to εf and Δ, respectively. Remarkably, we find that Eq. (4) with β=1.5 gives an accurate prediction for the 2d J1-J2 Heisenberg model. This implies that the ASP time scales as tASP = O(N^(β/2) log(1/εf)), which yields a gate complexity of O(N^(1+β/2) polylog(N/εf)) under optimal simulation of time-dependent Hamiltonians40,41. Thus, CSP proves to be subdominant in comparison to CHS if β<2, which is suggested by our simulation. Furthermore, under the assumption of Eq. (4), we estimate tASP to be at most a few tens for a practical system size of N~100 with infidelity εf~0.1. This is fairly negligible compared to the controlled Hamiltonian simulation, which requires the dynamics duration to be of the order of tens of thousands in our target models. (One must note that there is a slight difference between the two schemes. Namely, the time-dependent Hamiltonian simulation involves quantum signal processing using the block-encoding of H(t), while the qubitization for the phase estimation only requires the block-encoding itself. This implies that the T-count of the former would incur overhead, as seen in the Taylorization technique. However, we confirm that this overhead, determined by the degree of the polynomial in the quantum signal processing, is of the order of tens41, so that the required T-count for state preparation is still suppressed by orders of magnitude compared to the qubitization.) This outcome stems from the fact that the controlled Hamiltonian simulation for the purpose of eigenenergy extraction obeys the Heisenberg limit CHS = O(1/ε), a consequence of the time-energy uncertainty relation. This is in contrast to the state preparation, which is not related to any quantum measurement, so that no such polynomial lower bound exists.
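To see roughly where the "tens of thousands" figure for the Hamiltonian-simulation duration comes from, one can do a Heisenberg-limit estimate by hand; the script below is such a back-of-the-envelope calculation, with the Hamiltonian 1-norm convention and the unit ASP prefactor being our own assumptions rather than numbers quoted from the paper.

```python
import numpy as np

# Rough Heisenberg-limit estimate for the 10x10 J1-J2 Heisenberg model (J1 = 1, J2 = 0.5).
L, J1, J2 = 10, 1.0, 0.5
nn_bonds  = 2 * L * (L - 1)      # nearest-neighbour bonds on an open LxL lattice
nnn_bonds = 2 * (L - 1) ** 2     # next-nearest-neighbour (diagonal) bonds

# 1-norm of the Pauli coefficients (3 Pauli terms per bond, each with weight J/4).
lam = 3 * 0.25 * (J1 * nn_bonds + J2 * nnn_bonds)

eps = 0.01
queries = lam / eps              # Heisenberg-limited query count ~ effective evolution time
print(f"lambda ~ {lam:.0f},  lambda/eps ~ {queries:.0f}")   # lands at a few times 10^4

# ASP time from Eq. (4), with Delta ~ N^(-1/2), beta = 1.5, and a unit prefactor (placeholder).
N, beta, eps_f = L * L, 1.5, 0.1
delta = N ** -0.5
t_asp = delta ** -beta * np.log(1 / eps_f)
print(f"t_ASP ~ {t_asp:.0f}  (orders of magnitude below lambda/eps)")
```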

(a) Scaling with the target infidelity εf for a system size of a 4×4 lattice. The interpolation function is taken so that its derivatives up to κ-th order vanish at t=0 and t=tASP. Here we consider the linear interpolation for κ=0, and for smoother ones we take Sκ and Bκ, which are defined from sinusoidal and incomplete Beta functions, respectively (see Supplementary Note 3). While smoothness to higher κ ensures logarithmic scaling down to smaller εf, for the current target model we find that it suffices to take s(t) whose derivatives vanish up to κ=2 at t=0 and t=tASP. (b) Scaling with the spectral gap Δ. Here we perform the ASP using the MPS state for system sizes of Lx×Ly, where results for Lx=2, 4, 6 are shown as cyan, blue, and green data points. We find that the scaling exhibits tASP ∝ 1/Δ^β with β ~ 1.5.

As we have seen in the previous sections, the dominant contribution to the quantum resource is CHS, namely the controlled Hamiltonian simulation from which the eigenenergy phase is extracted into the ancilla qubits. Fortunately, with the scope of performing quantum resource estimation for the QPE and digital quantum simulation, numerous works have been devoted to analyzing the error scaling of various Hamiltonian simulation techniques, in particular the Trotter-based methods42,43,44. Nevertheless, we point out that crucial questions remain unclear: (A) which technique is the best practice for achieving the earliest quantum advantage for condensed matter problems, and (B) at which point does the crossover occur?

Here we perform resource estimation under the following common assumptions: (1) logical qubits are encoded using the formalism of surface codes45; (2) quantum gates are implemented in the Clifford+T formalism. We first address question (A) by comparing the total number of T-gates, or the T-count, across various Hamiltonian simulation algorithms, since the application of a T-gate involves a time-consuming procedure known as magic-state distillation. Although not always the case, this procedure is considered to dominate the runtime in many realistic setups. Therefore, we argue that the T-count provides sufficient information to determine the best Hamiltonian simulation technique. Then, with the aim of addressing question (B), we further perform a high-resolution analysis of the runtime. In particular, we consider concrete quantum circuit compilation with a specific physical/logical qubit configuration compatible with the surface code implemented on a square lattice.

Let us first compute the T-counts to compare the state-of-the-art Hamiltonian simulation techniques: the (randomized) Trotter product formula46,47, qDRIFT44, Taylorization48,49,50, and qubitization40. The former two rely on the Trotter decomposition to approximate the unitary time evolution with sequential applications of (controlled) Pauli rotations, while the latter two, dubbed "post-Trotter methods," are based on the technique called block-encoding, which utilizes ancillary qubits to encode desired (non-unitary) operations on target systems (see Supplementary Note 5). While post-Trotter methods are known to be exponentially more efficient in terms of gate complexity with respect to the simulation accuracy48, it is nontrivial to ask which is the best practice in the crossover regime, where the prefactor plays a significant role.

We have compiled quantum circuits based on existing error analysis to reveal the required T-counts (see Supplementary Notes 4, 6, and 7). From the results presented in Table 1, we find that the qubitization algorithm provides the most efficient implementation for reaching the target energy accuracy ε=0.01. Although the post-Trotter methods, i.e., the Taylorization and qubitization algorithms, require additional ancillary qubits of O(log(N)) to perform the block-encoding, we regard this overhead as not a serious roadblock, since the target system itself and the quantum Fourier transformation require O(N) and O(log(N/ε)) qubits, respectively. In fact, as we show in Fig. 5, the qubitization algorithms are efficient in the near-crosspoint regime in terms of physical qubit count as well, due to the suppressed code distance (see Supplementary Note 9 for details).

Here, we estimate the ground state energy up to target accuracy ε=0.01 for the 2d J1-J2 Heisenberg model (J2/J1=0.5) and the 2d Fermi-Hubbard model (U/t=4), both with a lattice size of 10×10. The blue, orange, green, and red points indicate the results that employ qDRIFT, 2nd-order random Trotter, Taylorization, and qubitization, respectively, where the circle and star markers denote the spin and fermionic models. Two flavors of the qubitization, the sequential and the newly proposed product-wise construction (see Supplementary Note 5 for details), are discriminated by filled and unfilled markers. Note that Nph here does not account for the magic state factories, which are incorporated in Fig. 7.

We also mention that, for the 2d Fermi-Hubbard model, there exist specialized Trotter-based methods that improve the performance significantly16,17. For instance, the T-count of the QPE based on the state-of-the-art PLAQ method proposed in ref. 17 can be estimated to be approximately 4×10^8 for a 10×10 system under ε=0.01, which is slightly higher than the T-count required for the qubitization technique. Since the scaling of PLAQ is similar to that of the 2nd-order Trotter method, we expect that the qubitization remains the best for all system sizes N.

The above results motivate us to study the quantum-classical crossover entirely using the qubitization technique as the subroutine for the QPE. As is detailed in Supplementary Note 8, our runtime analysis involves the following steps:

(I) Hardware configuration. Determine the architecture of the quantum computer (e.g., number of magic state factories, qubit connectivity, etc.).

(II) Circuit synthesis and transpilation. Translate the high-level description of quantum circuits into the Clifford+T formalism with the provided optimization level.

(III) Compilation to executable instructions. Decompose logical gates into the sequence of executable instruction sets based on lattice surgery.

It should be noted that ordinary runtime estimation involves only step (II): one simply multiplies the T-count by the execution time of a T-gate, as NT × tT. However, we emphasize that this estimation method misses several vital factors in the time analysis, which may eventually lead to deviations of one or two orders of magnitude. In sharp contrast, our runtime analysis comprehensively takes all steps into account to yield reliable estimation under realistic quantum computing platforms.
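For reference, the naive step-(II)-only estimate amounts to the following one-line arithmetic; the T-count is a placeholder of the order discussed in the previous subsection, and the consumption rates are the 1 kHz and 1 MHz figures assumed in Fig. 6.

```python
# Naive runtime estimate that uses only step (II): t ~ N_T * t_T.
N_T = 3e8                     # placeholder T-count of the order discussed in the text
for rate_hz in (1e3, 1e6):    # T-gate consumption rates assumed in Fig. 6
    t_T = 1.0 / rate_hz
    print(f"rate = {rate_hz:.0e} Hz  ->  runtime ~ {N_T * t_T:.1e} s")
```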

Figure 6 shows the runtime of the classical/quantum algorithms for computing the ground state energy of the 2d J1-J2 Heisenberg model and the 2d Fermi-Hubbard model. In both figures, we observe clear evidence of a quantum-classical crosspoint below a hundred-qubit system (at lattice sizes of 10×10 and 6×6, respectively) within a plausible runtime. Furthermore, a significant difference from ab initio quantum chemistry calculations is highlighted by the feasibility of simulations at system size N~1000 logical qubits, especially in the simulation of the 2d Heisenberg model, which utilizes the parallelization technique for the oracles (see Supplementary Note 8 for details).

Here we show the results for (a) the 2d J1-J2 Heisenberg model with J2/J1=0.5 and (b) the 2d Fermi-Hubbard model with U/t=4. The blue and red circles are the runtime estimates for the quantum phase estimation using the qubitization technique as a subroutine, whose analysis involves quantum circuit compilation through all of steps (I), (II), and (III). All the gates are compiled in the Clifford+T formalism, with each logical qubit encoded by the surface code with code distance d around 17 to 25, assuming a physical error rate of p=10^-3 (see Supplementary Note 9). Here, the number of magic state factories nF and the number of parallelization threads nth are taken as (nF, nth)=(1,1) and (16,16) for "Single" and "Parallel," respectively. The dotted and dash-dotted lines are estimates that involve only the analysis of step (II); the calculation is based solely on the T-count of the algorithm with realistic T-gate consumption rates of 1 kHz and 1 MHz, respectively. The green stars and purple triangles are data obtained from actual simulation results of the classical DMRG and variational PEPS algorithms, respectively, with the shaded region denoting the potential room for improvement by using the most advanced computational resources (see Supplementary Note 2). Note that the system size is related to the lattice size M×M as N=2M² in the Fermi-Hubbard model.

For concreteness, let us focus on the simulation for systems with a lattice size of 10×10, where we find the quantum algorithm to outperform the classical one. Using the error scaling, we estimate that the DMRG simulation takes about 10^5 and 10^9 seconds for the 2d Heisenberg and 2d Fermi-Hubbard models, respectively. On the other hand, the estimation based on the dedicated quantum circuit compilation with the most pessimistic equipment (denoted as "Single" in Fig. 6) achieves a runtime below 10^5 seconds in both models. This is further improved by an order of magnitude when we assume more abundant quantum resources. Concretely, using a quantum computer with multiple magic state factories (nF=16) that performs multi-threaded execution of the qubitization algorithm (nth=16), the quantum advantage can be achieved within a computational time frame of several hours. We find it informative to also display the usual T-count-based estimation; it is indeed reasonable to assume a clock rate of 1-10 kHz for single-thread execution, while its precise value fluctuates depending on the problem instance.

We note that the classical algorithm (DMRG) experiences an exponential increase in the runtime to reach the desired total energy accuracy ε=0.01. This outcome is somewhat expected, since one must force the MPS to represent 2d quantum correlations in 1d via a cylindrical boundary condition38,51. Meanwhile, the prefactor is significantly lower than that of other tensor-network-based methods, enabling its practical use in discussing the quantum-classical crossover. For instance, although the formal scaling of the variational PEPS algorithm is exponentially better, its runtime for the 2d J1-J2 Heisenberg model already exceeds 10^4 seconds for the 6×6 model, while the DMRG algorithm consumes only 10^2 seconds (see Fig. 6a). Even if we assume that the bond dimension of PEPS can be kept constant for larger N, the crossover between DMRG and variational PEPS occurs only above a size of 12×12. As we have discussed previously, we reasonably expect D=O(log(N)) for simulation at fixed total accuracy, and furthermore expect that the number of variational optimization steps also scales polynomially with N. This implies that the scaling is much worse than O(N); in fact, we have used a constant value of D for L=4, 6, 8 and observe that the scaling is already worse than cubic in our setup. Given such a scaling, we conclude that DMRG is better suited than the variational PEPS for investigating the quantum-classical crossover, and also that quantum algorithms with quadratic scaling in N run faster in the asymptotic limit.

It is informative to vary the hardware/algorithmic requirements and explore how the quantum-classical crosspoint shifts. For instance, the code distance of the surface code depends on p and ε as (see Supplementary Note 9)

$$d=O\!\left(\frac{\log(N/\epsilon)}{\log(1/p)}\right).$$

(5)

Note that this also affects the number of physical qubits via the number of physical qubits per logical qubit, ~2d². We visualize the above relationship explicitly in Fig. 7, which considers the near-crosspoint regime of the 2d J1-J2 Heisenberg model and the 2d Fermi-Hubbard model. It can be seen from Fig. 7a, b, d, e that an improvement of the error rate directly triggers a reduction of the required code distance, which results in a significant suppression of the number of physical qubits. This is even better captured by Fig. 7c, f. By achieving a physical error rate of p=10^-4 or 10^-5, for instance, one may realize a 4-fold or 10-fold reduction in the number of physical qubits.
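This kind of estimate can be reproduced with the standard surface-code heuristic pL ≈ A(p/pth)^((d+1)/2); in the sketch below the threshold pth, the prefactor A, the target logical error rate, and the logical qubit count are conventional placeholder values, not the ones used in Supplementary Note 9.

```python
def code_distance(p, p_th=1e-2, A=0.1, target_logical=1e-12):
    """Smallest odd d with A * (p / p_th)**((d + 1) / 2) at or below the target logical error rate.
    The threshold, prefactor, and target are common rule-of-thumb values (placeholders)."""
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > target_logical:
        d += 2
    return d

n_logical = 120   # placeholder: ~N system qubits plus ancilla/readout registers

for p in (1e-3, 1e-4, 1e-5):
    d = code_distance(p)
    n_physical = n_logical * 2 * d ** 2     # ~2d^2 physical qubits per logical qubit
    print(f"p = {p:.0e}:  d = {d},  N_ph ~ {n_physical:.1e}")
# With these placeholder numbers the physical-qubit count drops by roughly 4x (10x)
# when going from p = 1e-3 to p = 1e-4 (1e-5), in line with the reduction quoted above.
```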

The panels denote (a) the code distance d and (b) the number of physical qubits Nph required to simulate the ground state of the 2d J1-J2 Heisenberg model with a lattice size of 10×10 and J2=0.5. Here, the qubit plane is assumed to be organized as (nF, #thread)=(1,1). The setup used in the main text, ε=0.01 and p=10^-3, is indicated by the orange stars. (c) Focused plot at ε=0.01. Blue and red points show the results for the code distance d and Nph, respectively, where the filled and empty markers correspond to floor plans with (nF, #thread)=(1,1) and (16,16), respectively. (d-f) Plots for the 2d Fermi-Hubbard model with lattice size 6×6 and U=4, corresponding to (a-c) for the Heisenberg model.

The logarithmic dependence on ε in Eq. (5) implies that the target accuracy does not significantly affect the qubit counts; it is rather associated with the runtime, since the total runtime scaling is given as

$$t=O\!\left(\frac{N^{2}\log(N/\epsilon)}{\epsilon\log(1/p)}\right),$$

(6)

which now shows a polynomial dependence on ε. Note that this scaling is obtained by multiplying the gate complexity by a factor of d, since we assume that the runtime is dominated by the magic state generation, whose time is proportional to the code distance d, rather than by the classical postprocessing (see Supplementary Notes 8 and 9). As highlighted in Fig. 8, we observe that in the regime of higher ε, the computation is completed within minutes. However, we do not regard such a regime as an optimal field for quantum advantage. The runtime of classical algorithms typically shows a higher-power dependence on ε, denoted as O(1/ε^α), with α~2 for the J1-J2 Heisenberg model and α~4 for the Fermi-Hubbard model (see Supplementary Note 2), which implies that classical algorithms are likely to run even faster than quantum algorithms at large ε. We thus argue that the setup of ε=0.01 provides a platform that is both plausible for the quantum algorithm and challenging for the classical algorithm.
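A small script makes this ε-dependence argument explicit; the prefactors are calibrated only loosely to the 10×10 Heisenberg numbers quoted above (roughly 10^5 seconds for both algorithms at ε=0.01) and are otherwise arbitrary.

```python
import numpy as np

eps = np.array([0.1, 0.03, 0.01, 0.003])

# Placeholder prefactors, loosely matched to the 10x10 Heisenberg figures in the text:
# quantum runtime ~ 1/eps, classical (DMRG-like) runtime ~ 1/eps^alpha with alpha ~ 2.
t_quantum   = 1e3 / eps        # ~1e5 s at eps = 0.01
t_classical = 10.0 / eps**2    # ~1e5 s at eps = 0.01

for e, tq, tc in zip(eps, t_quantum, t_classical):
    winner = "quantum" if tq < tc else "classical"
    print(f"eps = {e:5.3f}:  quantum ~ {tq:.1e} s,  classical ~ {tc:.1e} s  ->  {winner}")
```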

Panels (a) and (c) show results for the 2d J1-J2 Heisenberg model with lattice size 10×10 and J2=0.5, while (b) and (d) show results for the 2d Fermi-Hubbard model with lattice size 6×6 and U=4. The floor plan of the qubit plane is assumed to be (nF, #thread)=(1,1) for (a, b) and (16,16) for (c, d). The setup ε=0.01 and p=10^-3, employed in Fig. 6, is shown by the black open stars.


Tweak to Schrödinger's cat equation could unite Einstein's relativity and quantum mechanics, study hints – Livescience.com

Theoretical physicists have proposed a new solution to the Schrödinger's cat paradox, which may allow the theories of quantum mechanics and Einstein's relativity to live in better harmony.

The bizarre laws of quantum physics postulate that physical objects can exist in a combination of multiple states, like being in two places at once or possessing various velocities simultaneously. According to this theory, a system remains in such a "superposition" until it interacts with a measuring device, only acquiring definite values as a result of the measurement. Such an abrupt change in the state of the system is called a collapse.

Physicist Erwin Schrödinger summarized this theory in 1935 with his famous feline paradox using the metaphor of a cat in a sealed box being simultaneously dead and alive until the box is opened, thus collapsing the cat's state and revealing its fate.

However, applying these rules to real-world scenarios faces challenges and that's where the true paradox arises. While quantum laws hold true for the realm of elementary particles, larger objects behave in accordance with classical physics as predicted by Einstein's theory of general relativity, and are never observed in a superposition of states. Describing the entire universe using quantum principles poses even greater hurdles, as the cosmos appears entirely classical and lacks any external observer to serve as a measuring device for its state.

"The question is can the Universe, which does not have a surrounding environment, be in such a superposition?" lead author Matteo Carlesso, a theoretical physicist at the University of Trieste in Italy, told Live Science in an email. "Observations say no: everything goes along the classical predictions of General Relativity. Then, what is breaking such a superposition?"


To tackle this question, Carlesso and his colleagues proposed modifications to the Schrödinger equation, which governs how all states, including those in superposition, evolve over time.


"Specific modifications of the Schrdinger equation can solve the problem," Carlesso said. In particular, the team added terms to the equation that captured how the system interacts with itself, as well as adding some other specific terms. This in turn leads to superposition breaking down.

"Such effects are stronger the larger the system," Carlesso added.

Crucially, these modifications have little impact on microscopic quantum systems, such as atoms and molecules, but allow larger systems like the universe itself to collapse at frequent intervals, giving them definite values that fit with our observations of the cosmos. The team described their modified Schrdinger equation in February in the Journal of High Energy Physics.

In their tweaked version of quantum physics, the researchers eliminated the distinction between objects subject to measurement and measuring devices. Instead, they proposed that each system's state undergoes spontaneous collapse at regular intervals, leading to the acquisition of definite values for some of their attributes.

For large systems, spontaneous collapse occurs frequently, rendering them classical in appearance. Subatomic objects interacting with these systems become part of them, leading to rapid collapse of their state and the acquisition of definite coordinates, akin to measurement.

"With no action from external entities, any system localizes (or collapses) spontaneously in a particular state. In place of having a cat being dead AND alive, one finds it dead OR alive," Carlesso said.

The new model may explain why our universe's space-time geometry doesn't exist in a superposition of states and obeys the classical equations of Einstein's relativity.

"Our model describes a quantum Universe, which eventually collapsed thus becoming effectively classical," Carlesso said. "We show that spontaneous collapse models can explain the emergence of a classical Universe from a quantum superposition of Universes, where each of these Universes has a different space-time geometry."

While this theory may explain why the universe seems to be governed by classical laws of physics, it doesn't make new predictions about large-scale physical processes.

However, it does make predictions about how atoms and molecules will behave, albeit with minimal deviations from conventional quantum mechanics.

As a result, testing their modified quantum model won't be so simple. Future work will be aimed at coming up with such tests.

"Together with experimental collaborators, we are trying to test the effects of the collapse modifications or derive bounds on their parameters. This is completely equivalent to testing the limits of quantum theory."


A new kind of experiment at the LHC could unravel quantum reality – New Scientist

For Alan Barr, it started during the covid-19 lockdowns. "I had a bit more time. I could sit and think," he says.

He had enjoyed being part of the success at CERN's Large Hadron Collider (LHC) near Geneva, Switzerland, the particle collider that discovered the Higgs boson. But now, he wondered, were they missing a trick? "I had spent long hours screwing bits of it together. And I thought, well, we've built this beautiful piece of apparatus, but maybe we could be doing more with it," he says.

The LHC is typically seen as a machine for finding new particles. But now Barr and a slew of other physicists are asking if it can also be used to probe the underlying meaning of quantum theory and why it paints reality as being so deeply weird.

That's exactly what Barr and his colleagues are now investigating in earnest. Last year, they published the results of an experiment in which they showed that pairs of fundamental particles called top quarks could be put into the quantum state known as entanglement.

This was just the first of many entanglement experiments at particle colliders that could open up a whole new way of studying the nature of the universe. We can now ask why reality in quantum mechanics is so hard to pin down and what this has to do with experimenters, or even particles, having free will. Doing so could reveal whether space-time is fundamental, or perhaps unveil a deeper reality that is even stranger than quantum mechanics. "We can do really different things with this collider," says Barr.


Australia just made a billion-dollar bet on building the world’s first ‘useful’ quantum computer in Brisbane. Will it pay off? – The Conversation

The Australian government has announced a pledge of approximately A$940 million (US$617 million) to PsiQuantum, a quantum computing start-up company based in Silicon Valley.

Half of the funding will come from the Queensland government, and in exchange, PsiQuantum will locate its planned quantum computer in Brisbane, with a regional headquarters at Brisbane Airport.

PsiQuantum claims it will build the world's first "useful" quantum computer. Such a device could be enormously helpful for applications like cracking codes, discovering new materials and drugs, modelling climate and weather, and solving other tough computational problems.

Companies around the world and several national governments are racing to be the first to solve the quantum computing puzzle. How likely is it that Australia's bet on PsiQuantum will pay off?

Quantum computers are computers that run quantum algorithms. These are step-by-step sets of instructions that change data encoded with quantum information. (Ordinary computers run digital algorithms, step-by-step sets of instructions that change digital information.)

Digital computers represent information as long strings of 1s and 0s. Quantum computers represent information as long lists of numbers. Over the past century, scientists have discovered these numbers are naturally encoded in fine details of energy and matter.


Quantum computing operates fundamentally differently from traditional computing. It uses principles of quantum physics and may be able to perform calculations that are not feasible for digital computers.

We know that quantum algorithms can solve some problems with far fewer steps than digital algorithms. However, to date nobody has built a quantum computer that can run quantum algorithms in a reliable way.

Researchers around the world are trying to build quantum computers using different kinds of technology.

PsiQuantum's approach uses individual particles of light called photons to process quantum data. Photon-based quantum computers are expected to be less prone to errors than other kinds.

The Australian government has also invested around A$40 million in Sydney-based Silicon Quantum Computing. This company aims to encode quantum data in tiny particles trapped in silicon and other familiar materials used in current electronics.

A third approach is trapped ions: individually captured, electrically charged atomic particles, which have the advantage of being inherently stable and all identical. A company called IonQ is one taking this track.

However, many believe the current leading approach is artificial atoms based on superconducting circuits. These can be customised with different properties. This is the approach taken by Google, IBM, and Rigetti.

There is no clear winning technology. It's likely that a hybrid approach will eventually prevail.

The timeline set by PsiQuantum and supported by federal endorsements aims for an operational quantum computer by 2029. Some see this projected timeline as overly optimistic, since three years ago PsiQuantum was planning to meet a deadline of 2025.

Progress in quantum technology has been steady since its inception nearly three decades ago. But there are many challenges yet to overcome in creating a device that is both large enough to be useful and not prone to errors.

The announcement represents a significant commitment to advancing quantum computing technology both within Australian borders and worldwide. It falls under the Albanese government's Future Made in Australia policy.

However, the investment risks being overshadowed by a debate over transparency and the selection process.

Criticisms have pointed to a lack of detailed public disclosure about why PsiQuantum was chosen over local competitors.


These concerns underscore the need for a more open dialogue about government spending and partnership selections to maintain public trust in such large-scale technological investments.

Public trust is difficult to establish when little to no effort has been made to educate people in quantum technology. Some claim that quantum literacy will be a 21st-century skill on par with digital literacy.

Australia has made its quantum hardware bet. But even if the hardware works as planned, it will only be useful if we have people who know how to use it, and that means training in quantum theory and software.

The Australian Quantum Software Network, a collaboration of more than 130 of the nation's leading researchers in quantum algorithms, software, and theory, including myself, was launched in late 2022 to achieve this.

The government says the PsiQuantum project is expected to create up to 400 specialised jobs, retaining and attracting highly skilled talent to both the state and the country. The media release also contains the dramatic forecast that success could lead to up to an additional $48 billion in GDP and 240,000 new jobs in Australia by 2040.

Efforts like the Sydney Quantum Academy, the Australian Centre for Quantum Growth, and my own quantum education startup Eigensystems, which recently launched the Quokka personal quantum computing and quantum literacy platform, will help to meet this goal.

In the coming decade, education and training will be crucial, not only to support this investment but also to expand Australia's expertise so that it may become a net exporter in the quantum industry and a substantial player in the global race for a quantum computer.


Australia bets on US startup that aims to build the first massive quantum computer – Semafor

Quantum computers do not work like traditional computers. Instead of using microscopic transistors, which can represent either ones or zeros, they use particles known as qubits.

Unlike transistors, qubits can exist in multiple states at a time, allowing them to perform different types of calculations. The theory of quantum entanglement allows many qubits to be linked, allowing for an even larger number of computations.

Traditional computers are more or less limited by the laws of classical physics; quantum computers are not.

There are several ways to make qubits, and popular methods include using trapped ions or particles within superconductors.

PsiQuantum believes the best approach is using individual photons as qubits, by manipulating single particles of light. While this is among the most difficult methods of quantum computing, PsiQuantum made a bet that it was ultimately the most practical for large-scale quantum computers because of the existing infrastructure built around photonics.

It has partnered with one of the biggest semiconductor manufacturers in the world, GlobalFoundries, to produce photonic chips with enough fidelity to work with individual photons.

Another major advantage of using photons as qubits is that photons can operate at room temperature. Most other quantum computers require extremely cold temperatures, making them impractical at scale.

PsiQuantum's method still requires refrigeration, but not nearly as much as other methods. As a result, it plans to build its quantum computers inside cryogenic cabinets built by a company that makes meat lockers.

Those units are then networked together to increase the total number of qubits. By the end of 2027, PsiQuantum plans to have a quantum computer with 1 million qubits. The largest quantum computers today have about 1,000.

With 1 million qubits, PsiQuantum believes it can perform error correction, essentially making up for mistakes made by the qubits. Traditional computers also require error correction, but in the case of quantum computers, the majority of qubits are used for this task. PsiQuantum co-founder Pete Shadbolt said that "sucks, but that's tough luck."

Networking the refrigerated units together was another hurdle for PsiQuantum. It needed to achieve a breakthrough in photonic switching, essentially sending photons back and forth with unprecedented fidelity, allowing very few photons to escape.

PsiQuantum revealed some of how it has achieved this in a paper that appeared online Friday.


Quantum forces used to automatically assemble tiny device – New Scientist

Triangular gold flakes can be manipulated using mysterious quantum forces

George Zograf/CC BY-NC 4.0

Tiny gold devices for controlling light have been built using strange quantum effects that hide in seemingly empty space.

In 1948, physicist Hendrik Casimir theorised that some objects experience a very weak attraction when they are held close to one another in space because of the imperceptible flickers of quantum fields in the gap between them. Researchers have since confirmed this Casimir effect in the lab. Betül Küçüköz at Chalmers University of Technology in Sweden and her colleagues have now found a way to make it useful.

They wanted to build a light-trapping cavity using two pieces of gold positioned parallel to one another, between which light would bounce back and forth, unable to escape. First, they created the lower end of the cavity by imprinting a triangular gold flake between 4 and 10 microns in size onto a small piece of glass. The upper end of the cavity also comprised a triangular gold flake, but instead of holding it in place with some implement, the researchers immersed the glass-mounted gold flake in a solution of salty water containing additional triangular gold flakes, then let forces that arose naturally do the job instead.

One of those forces was the electrostatic force caused by electrical charges associated with the dissolved salt. The other was the Casimir effect. Küçüköz says that she watched many runs of this experiment under the microscope and could always see the Casimir effect in action. It caused one of the free-floating gold flakes to move towards the one imprinted on glass, and then made it rotate above the imprinted flake until the triangular footprints of the two flakes matched.

This completed the assembly of the cavity, which could then trap light. The researchers had lots of control over the cavity-forming process, says Küçüköz. For instance, by using different concentrations of salt, they could tailor the strength of the electrostatic force to create cavities with slightly different dimensions, with distances between the flakes ranging between 100 and 200 nanometres, each of which could then trap light of a different colour.

Raúl Esquivel-Sirvent at the National Autonomous University of Mexico says the idea of self-assembly, which he compares to throwing a Lego set into a pot and having a structure emerge without ever manually pressing any pieces together, is not new. But he says the team's experiment is more detailed and controlled than previous attempts to use the Casimir effect for similar purposes. However, the Casimir effect can be so subtle, says Esquivel-Sirvent, that it is possible that there are still other, undetected effects at play here as well.

Going forward, Küçüköz and her colleagues want to use their cavities as part of more complex experiments with light, including some that involve placing objects inside the cavity between the two gold flakes.



Physicists Simulated a Black Hole in The Lab. Then It Began to Glow. – ScienceAlert

A black hole analog could tell us a thing or two about an elusive radiation theoretically emitted by the real thing.

Using a chain of atoms in single file to simulate the event horizon of a black hole, a team of physicists in 2022 observed the equivalent of what we call Hawking radiation: particles born from disturbances in the quantum fluctuations caused by the black hole's break in spacetime.

This, they say, could help resolve the tension between two currently irreconcilable frameworks for describing the Universe: the general theory of relativity, which describes the behavior of gravity as a continuous field known as spacetime; and quantum mechanics, which describes the behavior of discrete particles using the mathematics of probability.

For a unified theory of quantum gravity that can be applied universally, these two immiscible theories need to find a way to somehow get along.

This is where black holes come into the picture: possibly the weirdest, most extreme objects in the Universe. These massive objects are so incredibly dense that, within a certain distance of the black hole's center of mass, no velocity in the Universe is sufficient for escape. Not even light speed.

That distance, varying depending on the mass of the black hole, is called the event horizon. Once an object crosses its boundary we can only imagine what happens, since nothing returns with vital information on its fate. But in 1974, Stephen Hawking proposed that interruptions to quantum fluctuations caused by the event horizon result in a type of radiation very similar to thermal radiation.

If this Hawking radiation exists, it's way too faint for us to detect yet. It's possible we'll never sift it out of the hissing static of the Universe. But we can probe its properties by creating black hole analogs in laboratory settings.

This had been done before, but in November 2022 a team led by Lotte Mertens of the University of Amsterdam in the Netherlands tried something new.

A one-dimensional chain of atoms served as a path for electrons to 'hop' from one position to another. By tuning the ease with which this hopping can occur, the physicists could cause certain properties to vanish, effectively creating a kind of event horizon that interfered with the wave-like nature of the electrons.
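As a rough illustration of the kind of lattice model involved (not the authors' exact setup), a one-dimensional tight-binding chain whose hopping amplitude is switched off beyond a chosen site can be written down and diagonalized in a few lines of Python.

```python
import numpy as np

n_sites = 40
horizon = 20   # site beyond which hopping is switched off, mimicking an event horizon

# Position-dependent hopping amplitudes: uniform inside, vanishing outside the "horizon".
hop = np.array([1.0 if i < horizon else 0.0 for i in range(n_sites - 1)])

# Single-particle tight-binding Hamiltonian H = -sum_i t_i (|i><i+1| + h.c.)
H = np.zeros((n_sites, n_sites))
for i, t in enumerate(hop):
    H[i, i + 1] = H[i + 1, i] = -t

energies, modes = np.linalg.eigh(H)
# Sites beyond the cut are dynamically disconnected from the inner segment; the actual
# experiment tunes the hopping profile smoothly rather than cutting it off, and probes
# wave packets crossing the resulting effective horizon.
print("lowest single-particle energies:", np.round(energies[:5], 3))
```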

The effect of this fake event horizon produced a rise in temperature that matched theoretical expectations of an equivalent black hole system, the team said, but only when part of the chain extended beyond the event horizon.

This could mean the entanglement of particles that straddle the event horizon is instrumental in generating Hawking radiation.

The simulated Hawking radiation was only thermal for a certain range of hop amplitudes, and under simulations that began by mimicking a kind of spacetime considered to be 'flat'. This suggests that Hawking radiation may only be thermal within a range of situations, and when there is a change in the warp of space-time due to gravity.

It's unclear what this means for quantum gravity, but the model offers a way to study the emergence of Hawking radiation in an environment that isn't influenced by the wild dynamics of the formation of a black hole. And, because it's so simple, it can be put to work in a wide range of experimental set-ups, the researchers said.

"This, can open a venue for exploring fundamental quantum-mechanical aspects alongside gravity and curved spacetimes in various condensed matter settings," the researchers wrote.

The research has been published in Physical Review Research.

A version of this article was first published in November 2022.


Harnessing quantum information to advance computing – Nature.com

We highlight the vibrant discussions on quantum computing and quantum algorithms that took place at the 2024 American Physical Society March Meeting and invite submissions that notably drive the field of quantum information science forward.

The American Physical Society (APS) March Meeting is arguably one of the largest annual physics conferences in the world, and this year's edition, held in Minneapolis, USA on 3-8 March, hosted over 10,000 scientists and students from around the globe, offering a rich platform to exchange novel ideas and breakthroughs that advance the field of physics. The meeting undoubtedly covered a comprehensive range of topics, many of which are of particular interest to our computational science community, such as the electronic structure of materials, the dynamics of complex systems, and self-driving materials labs. Here, we focus on the stimulating discussions on quantum information science and its applications to various domains, given the growing interest and the multitude of avenues for future research in this area.


While quantum information science1 has recently seen myriad relevant advancements, many challenges still persist. A pressing issue in the field is the high level of noise in quantum bits (qubits), resulting in an error rate of about 10^-2 to 10^-3, which is much larger than the ideal error rate (10^-15) required for the successful implementation of large-scale quantum algorithms in practical applications. As such, overcoming the effects of noise remains the foremost challenge for advancing the field. At the APS meeting, a total of 14 sessions, possibly the most attended ones in the event, at least to the eye of our editor in attendance, were devoted to quantum error correction (QEC) and quantum error mitigation. For instance, the discussions surrounding QEC primarily focused on reducing time and qubit overheads. Among the numerous candidates, low-density parity-check codes emerged as one of the popular protocols for achieving low-overhead error correction2. During the Kavli Foundation Special Symposium, Mikhail Lukin, a professor of physics at Harvard University, emphasized the importance of optimized error-correction codes and highlighted the need for co-designing these codes with quantum algorithms and native hardware capabilities in order to achieve fault-tolerant quantum computation.

Another important and well-received focus at the conference was the application of quantum algorithms on noisy quantum computers, with the goal of demonstrating advantages of quantum computing in practical applications prior to achieving fault tolerance. One such algorithm is quantum machine learning (QML)3, which embeds machine learning within the framework of quantum mechanics. A pivotal point of discussion at the conference revolved around how to practically harness QML's strengths, such as its low training cost and efficient scalability. While QML has the potential to accelerate data analysis, especially when applied to quantum data from sources such as quantum sensors3, understanding its limitations and developing theoretically sound approaches are imperative tasks for achieving advantage in practical problems. In addition, proper consideration of practical constraints, such as bottlenecks in quantum data loading and the effects of noise, is equally important for algorithm design.

Efforts from industry to advance quantum information technology did not go unnoticed during the 2024 APS March Meeting either. Companies such as Google Quantum AI, the AWS Center for Quantum Computing, IBM Quantum, Quantinuum, and QuEra Computing Inc., among others, have been making substantial contributions to various aspects of quantum computing, from software and algorithm design to hardware advancements, such as the logical quantum processor based on a neutral atom array4 and the 32-qubit trapped-ion system5. Furthermore, industrial partners play a crucial role in helping to identify pertinent problems for quantum algorithms, including, but not limited to, the domains of the physical sciences6,7, biological sciences8, and finance9.

At Nature Computational Science, we are keen on publishing studies that span a wide range of topics within quantum information science. Our interest extends from fundamental research aimed at the realization of quantum computing, including the development of codes such as QEC, to studies that deepen our understanding of quantum algorithms and contribute to the broader theoretical framework of quantum computing10,11. Furthermore, we are interested in well-motivated studies that apply quantum algorithms on real quantum computers for solving real-world, practical problems, showcasing clear advantages derived from quantum effects12,13. By fostering an ongoing dialogue on quantum computing and its implications in diverse fields, Nature Computational Science strives to contribute to the advancement of quantum information science and its transformative impact on society.


Researchers Discover Protective Quantum Effect in the Brain – ScienceBlog.com

Researchers have discovered a quantum effect in biological systems that could protect the brain from degenerative diseases like Alzheimer's and enable ultra-fast information processing. The finding, published in The Journal of Physical Chemistry and selected as an Editors' Choice by Science magazine, represents a significant advancement in the field of quantum biology.

The study focused on tryptophan, an amino acid found in many biological structures, including neurons in the brain. When arranged in large, symmetrical networks, tryptophan molecules exhibit a quantum property called superradiance, where they fluoresce stronger and faster than they would independently. This collective behavior is typically not expected in larger, warm, and noisy biological environments.

"This publication is the fruit of a decade of work thinking of these networks as key drivers for important quantum effects at the cellular level," said Philip Kurian, Ph.D., principal investigator and founding director of the Quantum Biology Laboratory at Howard University.

The presence of quantum superradiance in neurons has two potential implications. First, it may protect the brain from degenerative diseases like Alzheimer's, which are associated with oxidative stress. Tryptophan networks can efficiently absorb damaging UV light and re-emit it at a safer energy level, thanks to their powerful quantum effects.

Second, these tryptophan networks could function as quantum fiber optics, allowing the brain to process information hundreds of millions of times faster than chemical processes alone. This challenges the standard model of neuronal signaling and opens up new avenues for understanding information processing in the brain.

The study has also drawn the attention of quantum technology researchers, as the survival of fragile quantum effects in a messy environment is of great interest for making quantum information technology more resilient.

"These new results will be of interest to the large community of researchers in open quantum systems and quantum computation," said Professor Nicolò Defenu of the Federal Institute of Technology (ETH) Zurich in Switzerland.

The discovery of this quantum effect in biology represents a significant step forward in understanding the relationship between life and quantum mechanics, with potential applications in neuroscience, quantum computing, and the development of new therapeutic approaches for complex diseases.




New Mechanism of Order Formation in Quantum Systems – AZoQuantum

Apr 29, 2024. Reviewed by Lexie Corner.

According to a study published in the journal Physical Review Research, researchers Kazuaki Takasan and Kyogo Kawaguchi of the University of Tokyo, along with Kyosuke Adachi of RIKEN, Japan's largest comprehensive research institution, have demonstrated that increasing particle motility can induce ferromagnetism (an ordered state of atomic spins) and that repulsive forces between atoms are sufficient to maintain it.

The finding not only extends the idea of active matter to quantum systems but also adds to the creation of new technologies based on particle magnetic characteristics, such as magnetic memory and quantum computing.

Flocking birds, swarming bacteria, and cellular flows are all instances of active matter, which is the condition in which individual agents, such as birds, bacteria, or cells, arrange themselves. During a phase transition, the agents go from disordered to ordered. As a result, they move in an organized way without using an external controller.

Previous studies have shown that the concept of active matter can apply to a wide range of scales, from nanometers (biomolecules) to meters (animals). However, it has not been known whether the physics of active matter can be applied usefully in the quantum regime. We wanted to fill in that gap.

Kazuaki Takasan, Assistant Professor, Department of Physics, University of Tokyo

To close the gap, the scientists had to present a potential mechanism for inducing an ordered state in a quantum system and maintaining it. It was a joint effort between biophysics and physics. The researchers were inspired by the phenomenon of flocking birds because, due to the activity of each agent, the ordered state is more easily created than in other forms of active matter.

They developed a theoretical model in which atoms mimicked the behavior of birds. In this concept, increasing atom motility caused the repulsive interactions between atoms to reorganize themselves into an ordered state known as ferromagnetism. Spins, or angular momentum of subatomic particles and nuclei, align in one direction in the ferromagnetic state, exactly as flocking birds do while flying.

"It was surprising at first to find that the ordering can appear without elaborate interactions between the agents in the quantum model. It was different from what was expected based on biophysical models," Takasan added.

The researchers used a multifaceted method to guarantee their discovery was not a fluke. Fortunately, the findings of computer simulations, mean-field theory (a statistical theory of particles), and mathematical proofs based on linear algebra were consistent. This increased the credibility of their discovery, the first step in a new line of investigation.

Takasan concluded, "The extension of active matter to the quantum world has only recently begun, and many aspects are still open. We would like to further develop the theory of quantum active matter and reveal its universal properties."

Takasan, K., et al. (2024). Activity-induced ferromagnetism in one-dimensional quantum many-body systems. Physical Review Research. doi:10.1103/PhysRevResearch.6.023096

Source: https://www.u-tokyo.ac.jp/en/index.html
