
Binance: DOJ Recommends 3-Year Sentence for CZ’s Violations – Watcher Guru

A recent recommendation by the U.S. Department of Justice [DOJ] for Changpeng Zhao, the founder of Binance, has caused uproar in the market. CZ pleaded guilty to violating the Bank Secrecy Act last November, which prompted his resignation as Binance's chief executive. The exchange also admitted to the violations, which resulted in a substantial penalty of $4.32 billion. Now, as Zhao faces sentencing on April 30, the DOJ is pushing for 36 months in prison alongside a $50 million fine.

JUST IN: US Government seeks 3 year prison sentence for ex-#Binance CEO Changpeng Zhao (CZ).

The severity of the recommended sentence is notable for several reasons. One of them is that federal sentencing guidelines typically cap the maximum prison term at 18 months for violations of this degree. However, the DOJ argues that Zhao's misconduct was not only willful but also had wide-ranging ramifications. The latest filing read,

The sentence in this case will not just send a message to Zhao but also to the world. Zhao reaped vast rewards for his violation of U.S. law, and the price of that violation must be significant to effectively punish Zhao for his criminal acts and to deter others who are tempted to build fortunes and business empires by breaking U.S. law.

Also Read: Philippines SEC Orders Apple & Google To Remove Binance

While the DOJ puts forth the magnitude of Zhao's violations, several critics of the regulator have been highlighting CZ's contributions to the industry. Binance, for instance, emerged under CZ's leadership as one of the largest and most influential exchanges across the globe.

Beyond the potential sentence, Zhao's personal circumstances add complexity to the whole scenario. This includes his inability to return to Dubai, where his family resides. The postponement of his sentencing hearing from late February to April 30 further prolongs the uncertainty around his fate.

Just last week, Binance's co-founder He Yi instilled hope in the market by suggesting that CZ's situation in the U.S. is perceived to be relatively stable. However, the looming prospect of a lengthy sentence casts a shadow over Binance and the entire cryptocurrency market.

Also Read: Binance $1 Billion Emergency Fund Converted to USDC

See the original post here:

Binance: DOJ Recommends 3-Year Sentence for CZ's Violations - Watcher Guru

Read More..

Process To Extradite Fleeing Binance Chief Nadeem Anjarwalla Ongoing – INTERPOL VP – Channels Television

The International Criminal Police Organisation (INTERPOL) says fleeing crypto chief Nadeem Anjarwalla will be "smoked out" and extradited to Nigeria to face the tax evasion charges leveled against him and his colleague, Tigran Gambaryan, who is still in the custody of security agents in Abuja, Nigeria's political capital.

Garba Umar, Vice President of the INTERPOL (Africa) Executive Committee, made this known on Channels Television's Sunrise Daily programme on Tuesday.

He said the paperwork for Anjarwalla to be returned to Nigeria has commenced.

According to the INTERPOL boss, if a fugitive escapes, there are processes that the country will follow. "INTERPOL will only give information, assist them and inform them about the bilateral agreements and conventions that were signed to extradite a fleeing fugitive, and the process is ongoing and Nadeem is not an exception."

"Be rest assured, we have all the cooperation and we are working on it and definitely, one day, he will be brought to justice, it's just a matter of time. We have done all the paperwork," Umar stated.

In March, the Office of the National Security Adviser (NSA) detained Anjarwalla and Gambaryan in relation to alleged speculation against the naira, and for allegedly causing the value of the Nigerian currency to nosedive against the US dollar.

Subsequently, the government approached the court to compel Binance to publish the names of its Nigerian traders, but the platform would not budge.

The government also filed tax evasion charges against the platform, but the NSA office said Anjarwalla escaped from custody after being taken to observe Jumat prayers on Friday, March 22, 2024, in Abuja. The NSA office said Anjarwalla fled Nigeria using a smuggled passport.

The Nigerian Government subsequently launched a manhunt for him, contacting INTERPOL.

Commenting on the matter on Tuesday, the INTERPOL chief said, "As soon as this individual escaped, there was a massive manhunt for this individual. Many countries we believed he must have boarded a plane or gone by road to, we contacted them and we got some certain information which is not possible to share on this platform."

"Be rest assured, we located where he was, how he boarded, all information about him and how he landed. We have done that to make sure that he doesn't escape justice."

Until his escape, Anjarwalla, who holds British and Kenyan citizenship, served as Binance's Africa Regional Manager. There have been reports that he fled to the East African country.

On whether Anjarwalla has been captured in Kenya or not, the INTERPOL chief said, "I'm not aware but what I can tell you is that the last destination I know on my record of this guy when he fled (Nigeria) was Kenya. That I can confirm to you."

Asked whether the fleeing Binance executive would be returned to Nigeria to face trial, he said, "Yeah, once a red notice is issued, we circulate it to all 196 member countries, including where the fugitive is suspected to be hiding."

"Now, it is not only morally right but it is legally right for the country to get him apprehended, inform the requesting country that the fugitive you are looking for has been apprehended and is in our custody. Can you come and take him over?"

"This is the process. He may be in Kenya, he may be in hiding, he might have even left Kenya but because of the notices we have given, wherever he is, he will be smoked out."

Read more:

Process To Extradite Fleeing Binance Chief Nadeem Anjarwalla Ongoing INTERPOL VP - Channels Television

Read More..

Binance exec will remain in Nigerian custody until May 17 bail hearing: Report – Cointelegraph

Tigran Gambaryan, a Binance executive detained in Nigeria since February, will reportedly remain in custody until a bail hearing on May 17.

According to April 23 reports from local news outlets, Gambaryan will remain in Nigeria's Kuje prison until at least May 17, when a judge will decide whether to grant the Binance executive bail. He initially traveled to Nigeria in February with fellow Binance executive Nadeem Anjarwalla to address claims that the exchange manipulated the country's fiat currency, the naira. Nigerian authorities detained both Binance executives as the crypto exchange announced that it intended to cease all naira transactions.

Gambaryan was expected to return to court on April 19 following an initial postponement, and the question of bail was to be addressed on April 22. He has pleaded not guilty to tax evasion and money laundering charges brought by Nigeria's Economic and Financial Crimes Commission, with a trial scheduled for May 2.

Anjarwalla reportedly escaped Nigerian custody in March, using his Kenyan passport (he is both a British and Kenyan national) to fly out of Abuja. Reports from April 22 suggested that Kenya's police arrested Anjarwalla and may extradite him to Nigeria to face criminal charges.

Related: Nigeria launches first multilingual large language model in Africa

Many have criticized the government's charges as lacking merit, as Binance said Gambaryan had no decision-making power at the crypto firm. On March 30, Yuki Gambaryan, Tigran's wife, launched a petition calling on the U.S. State Department, Nigeria's Economic and Financial Crimes Commission, the Nigerian government and U.S. President Joe Biden to return him to the United States. As of April 23, the petition had 3,960 signatures.

In a separate case in the United States, former Binance CEO Changpeng Zhao is expected to be sentenced on April 30 following his guilty plea for failure to maintain an Anti-Money Laundering program while leading the exchange. He could face up to 10 years in prison.

Magazine: South Africa's digital-nomad crypto hub: Cape Town, Crypto City Guide

See more here:

Binance exec will remain in Nigerian custody until May 17 bail hearing: Report - Cointelegraph

Read More..

Hunting for quantum-classical crossover in condensed matter problems | npj Quantum Information – Nature.com

Our argument on the quantum-classical crossover is based on the runtime analysis needed to compute the ground state energy within a desired total energy accuracy, denoted as ε. The primary objective in this section is to provide a framework that elucidates the quantum-classical crosspoint for systems whose spectral gap is constant or polynomially shrinking. In this work, we choose two models that are widely known for their profoundness despite their simplicity: the 2d J1-J2 Heisenberg model and the 2d Fermi-Hubbard model on a square lattice (see the Method section for their definitions). Meanwhile, it is totally unclear whether a feasible crosspoint exists at all when the gap closes exponentially.

It is important to keep in mind that condensed matter physics often entails extracting physical properties beyond merely the energy, such as magnetization, correlation functions, or dynamical responses. Therefore, in order to assure that expectation value estimations can be done consistently (i.e., satisfy N-representability), we demand the option to measure physical observables after the computation of the ground state energy is done. In other words, in the classical algorithm, for instance, we perform the variational optimization up to the desired target accuracy ε; we exclude the case where one calculates less precise quantum states with energy errors ε_i and subsequently performs extrapolation. A similar requirement is imposed on the quantum algorithm as well.

Among the numerous powerful classical methods available, we have opted to utilize the DMRG algorithm, which has been established as one of the most powerful and reliable numerical tools to study strongly-correlated quantum lattice models, especially in one dimension (1d)18,19. In brief, the DMRG algorithm performs variational optimization on a tensor-network-based ansatz named the Matrix Product State (MPS)20,21. Although the MPS is designed to efficiently capture 1d area-law entangled quantum states22, the efficacy of the DMRG algorithm allows one to explore quantum many-body physics beyond 1d, including quasi-1d and 2d systems, and even all-to-all connected models, as considered in quantum chemistry23,24.

A remarkable characteristic of the DMRG algorithm is its ability to perform systematic error analysis. This is intrinsically connected to the construction of the ansatz, or the MPS, which compresses the quantum state by performing site-by-site truncation of the full Hilbert space. The compression process explicitly yields a metric called the "truncation error," from which we can extrapolate to the truncation-free energy E0 to estimate the ground truth. By tracking the deviation from the zero-truncation result, E − E0, we find that the computation time and error typically obey a scaling law (see Fig. 2 for an example of such scaling behavior in the 2d J1-J2 Heisenberg model). The resource estimate is completed by combining the actual simulation results and the estimation from the scaling law. [See Supplementary Note 2 for detailed analysis.]
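As a concrete illustration of this extrapolation step, here is a minimal sketch (our own, with invented numbers, not the authors' analysis code) that fits energies measured at several truncation errors to a linear law and reads off the truncation-free intercept E0:

```python
# Sketch of the DMRG extrapolation described above: energies E(delta) at
# several truncation errors delta are fit to E ~ E0 + a*delta, and the
# intercept E0 estimates the truncation-free ground state energy.
import numpy as np

# Hypothetical data: truncation errors and DMRG energies at increasing
# bond dimension (values are illustrative, not from the paper).
trunc_err = np.array([1e-5, 5e-6, 2e-6, 1e-6, 5e-7])
energies = np.array([-23.961, -23.9655, -23.9682, -23.9691, -23.96955])

# Linear fit E = E0 + a * delta; E0 is the zero-truncation extrapolation.
a, E0 = np.polyfit(trunc_err, energies, 1)
deviation = energies - E0  # the quantity E - E0 tracked in the text

print(f"extrapolated E0 = {E0:.5f}")
print("deviations from zero-truncation result:", deviation)
```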

Although the simulation itself does not reach ε = 0.01, learning curves for different bond dimensions ranging from D = 600 to D = 3000 collapse into a single curve, which implies that it is adequate to estimate the runtime according to the obtained scaling law. All DMRG simulations are executed using the ITensor library61.

We remark that it is judicious to select the DMRG algorithm for 2d models, even though the number of parameters in the MPS is expected to increase exponentially with system size N, owing to its intrinsically 1d-oriented structure. Indeed, one may consider other tensor network states that are designed for 2d systems, such as the Projected Entangled Pair States (PEPS)25,26. When one uses the PEPS, the bond dimension is anticipated to scale as D = O(log N) for gapped or gapless non-critical systems and D = O(poly(N)) for critical systems27,28,29 to represent the ground state with a fixed total energy accuracy of ε = O(1) (it is important to note that the former would be D = O(1) if considering a fixed energy density). Therefore, in the asymptotic limit, the scaling of the number of parameters of the PEPS is exponentially better than that of the MPS. Nonetheless, in actual calculations, the overhead involved in simulating the ground state with PEPS is substantially high, to the extent that there are practically no scenarios where the runtime of the variational PEPS algorithm outperforms that of the DMRG algorithm for our target models.

Quantum phase estimation (QPE) is a quantum algorithm designed to extract the eigenphase of a given unitary U by utilizing ancilla qubits to indirectly read out the complex phase of the target system. More concretely, given a trial state |ψ⟩ whose fidelity with the k-th eigenstate |k⟩ of the unitary is given as f_k = |⟨k|ψ⟩|², a single run of QPE projects the state onto |k⟩ with probability f_k, and yields a random variable φ̂ which corresponds to an m-digit readout of the eigenphase φ_k.
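The following toy sketch (ours, not from the paper) mimics the statistics of a single QPE run just described: sample an eigenstate index k with probability f_k, then return an m-bit rounding of the corresponding eigenphase. The fidelities and phases below are made-up inputs:

```python
# Toy model of one QPE run: project onto eigenstate |k> with probability
# f_k = |<k|psi>|^2, then read out the eigenphase phi_k to m binary digits.
import numpy as np

rng = np.random.default_rng(0)

def qpe_single_run(fidelities, phases, m):
    """Sample one QPE outcome: eigenstate index k and m-digit phase estimate."""
    k = rng.choice(len(fidelities), p=fidelities)
    # m-bit readout: the phase is resolved to the nearest multiple of 2**-m
    phi_hat = np.round(phases[k] * 2**m) / 2**m
    return k, phi_hat

# Hypothetical 3-level example: trial state has 90% overlap with the ground state.
f = np.array([0.9, 0.07, 0.03])           # f_k = |<k|psi>|^2
phi = np.array([0.1372, 0.4210, 0.7777])  # eigenphases in units of 2*pi
samples = [qpe_single_run(f, phi, m=10) for _ in range(5)]
print(samples)  # mostly (k=0, phi_hat ~ 0.137), occurring with probability 0.9
```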

It was originally proposed in ref. 30 that eigenenergies of a given Hamiltonian can be computed efficiently via QPE by taking advantage of quantum computers to perform Hamiltonian simulation, e.g., U = exp(−iHτ). To elucidate this concept, it is beneficial to express the gate complexity for the QPE algorithm, as schematically shown in Fig. 3, as

$$C \sim C_{\mathrm{SP}} + C_{\mathrm{HS}} + C_{\mathrm{QFT}^{\dagger}},$$

(1)

where we have defined C_SP as the cost for state preparation, C_HS for the controlled Hamiltonian simulation, and C_QFT† for the inverse quantum Fourier transformation, respectively (see Supplementary Note 4 for details). The third term C_QFT† is expected to be the least problematic, with C_QFT† = O(log N), while the second term is typically evaluated as C_HS = O(poly(N)) when the Hamiltonian is, for instance, sparse, local, or constituted from polynomially many Pauli terms. Conversely, the scaling of the first term, C_SP, is markedly nontrivial. In fact, the ground state preparation of a local Hamiltonian generally necessitates exponential cost, which is also related to the fact that the ground state energy calculation of a local Hamiltonian is QMA-complete31,32.

Here, ancilla qubits are projected to |x1⋯xm⟩, which gives an m-digit readout for the ground state |ψ_GS⟩ of an N-qubit system.

Although the aforementioned argument seems rather formidable, it is important to note that the QMA-completeness pertains to the worst-case scenario. Meanwhile, the average-case hardness in translationally invariant lattice Hamiltonians remains an open problem, and furthermore we have no means to predict the complexity under specific problem instances. In this context, it is widely believed that a significant number of ground states that are of substantial interest in condensed matter problems can be readily prepared with a polynomial cost33. In this work, we take a further step to argue that the state preparation cost can be considered negligible, C_SP ≪ C_HS, for our specific target models, namely the gapless spin liquid state in the J1-J2 Heisenberg model and the antiferromagnetic state in the Fermi-Hubbard model. Our argument is based on numerical findings combined with upper bounds on the complexity, while we leave the theoretical derivation of the scaling (e.g., Eq. (4)) as an open problem.

For concreteness, we focus on the scheme of Adiabatic State Preparation (ASP) as a deterministic method to prepare the ground state through a time evolution of duration t_ASP. We introduce a time-dependent interpolating function s(t): ℝ → [0, 1] with s(0) = 0 and s(t_ASP) = 1, such that the ground state is prepared via the time-dependent Schrödinger equation given by

$$i\frac{\partial}{\partial t}\left\vert \psi(t)\right\rangle = H(t)\left\vert \psi(t)\right\rangle,$$

(2)

where H(t) = H(s(t)) = sH_f + (1 − s)H_0 for the target Hamiltonian H_f and the initial Hamiltonian H_0. We assume that the ground state of H_0 can be prepared efficiently, and take it as the initial state of the ASP. Early studies suggested that a sufficient (but not necessary) condition for preparing the target ground state scales as t_ASP = O(1/(ε_f Δ³))34,35,36, where ε_f = 1 − |⟨ψ_GS|ψ(t_ASP)⟩|² is the target infidelity and Δ is the spectral gap. This has been refined in recent research as

$$t_{\mathrm{ASP}} = \begin{cases} O\!\left(\dfrac{1}{\epsilon_f^2 \Delta^2}\,\left|\log(\Delta)\right|^{\zeta}\right) & (\zeta > 1) \\[6pt] O\!\left(\dfrac{1}{\Delta^3}\log(1/\epsilon_f)\right) & \end{cases}$$

(3)

The two conditions independently achieve optimality with respect to Δ and ε_f. Evidently, the ASP algorithm can prepare the ground state efficiently if the spectral gap is constant or polynomially small, Δ = O(1/N).
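To make the ASP procedure concrete, here is a minimal sketch that integrates the time-dependent Schrödinger equation, Eq. (2), for a toy two-level problem; the Hamiltonians and schedule are our own illustrative choices, not the paper's 48-qubit MPS simulations:

```python
# Toy ASP: evolve under H(t) = s(t)*Hf + (1 - s(t))*H0 and monitor the
# final infidelity epsilon_f with the target ground state.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = -sx  # initial Hamiltonian; ground state is (|0> + |1>)/sqrt(2)
Hf = -sz  # target Hamiltonian; ground state is |0>
t_asp, n_steps = 50.0, 5000
dt = t_asp / n_steps

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # ground state of H0
for j in range(n_steps):
    s = np.sin(0.5 * np.pi * j * dt / t_asp) ** 2   # smooth schedule s(t)
    H = s * Hf + (1 - s) * H0
    psi = expm(-1j * H * dt) @ psi  # piecewise-constant propagation over dt

gs = np.array([1, 0], dtype=complex)       # ground state of Hf
eps_f = 1 - abs(gs.conj() @ psi) ** 2      # target infidelity
print(f"epsilon_f = {eps_f:.2e}")          # decreases as t_asp is increased
```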

For both of our target models, numerous works suggest that Δ ∝ N^(−1/2)37,38,39, which is one of the most typical scalings in 2d gapless/critical systems, such as the spontaneously symmetry-broken phase with a Goldstone mode and critical phenomena described by 2d conformal field theory. With the polynomial scaling of Δ granted, we now ask what the scaling of C_SP is, and how it compares to the other constituents, namely C_HS and C_QFT†.

In order to estimate the actual cost, we have numerically calculated the t_ASP required to achieve the target fidelity (see Supplementary Note 3 for details) up to 48 qubits. With the aim of providing a quantitative way to estimate the scaling of t_ASP at larger sizes, we reasonably consider the combination of the upper bounds provided in Eq. (3) as

$$t_{\mathrm{ASP}} = O\!\left(\frac{1}{\Delta^{\beta}}\log(1/\epsilon_f)\right).$$

(4)

Figure 4a, b illustrates the scaling of t_ASP with respect to ε_f and Δ, respectively. Remarkably, we find that Eq. (4) with β = 1.5 gives an accurate prediction for the 2d J1-J2 Heisenberg model. This implies that the ASP time scaling is t_ASP = O(N^(β/2) log(1/ε_f)), which yields a gate complexity of O(N^(1+β/2) polylog(N/ε_f)) under optimal simulation for time-dependent Hamiltonians40,41. Thus, C_SP proves to be subdominant in comparison to C_HS if β < 2, as suggested by our simulation. Furthermore, under the assumption of Eq. (4), we can estimate t_ASP to be at most a few tens for a practical system size of N ~ 100 under an infidelity of ε_f ~ 0.1. This is fairly negligible compared to the controlled Hamiltonian simulation, which requires the dynamics duration to be of the order of tens of thousands in our target models. (One must note that there is a slight difference between the two schemes. Namely, the time-dependent Hamiltonian simulation involves quantum signal processing using the block-encoding of H(t), while the qubitization for the phase estimation only requires the block-encoding. This implies that the T-count in the former would encounter overhead, as seen in the Taylorization technique. However, we confirm that this overhead, determined by the degree of the polynomial in the quantum signal processing, is of the order of tens41, so that the required T-count for state preparation is still suppressed by orders of magnitude compared to the qubitization.) This outcome stems from the fact that the controlled Hamiltonian simulation for the purpose of eigenenergy extraction obeys the Heisenberg limit, C_HS = O(1/ε), as a consequence of the time-energy uncertainty relation. This is in contrast to the state preparation, which is not related to any quantum measurement, and thus no such polynomial lower bound exists.

(a) Scaling with the target infidelity ε_f for a system size of 4×4 lattice. The interpolation function is taken so that its derivatives up to κ-th order are zero at t = 0, t_ASP. Here we consider the linear interpolation for κ = 0, and for smoother ones we take S_κ and B_κ, which are defined from sinusoidal and incomplete Beta functions, respectively (see Supplementary Note 3). While smoothness for higher κ ensures logarithmic scaling down to smaller ε_f, for the current target model we find that it suffices to take s(t) whose derivatives vanish up to κ = 2 at t = 0, t_ASP. (b) Scaling with the spectral gap Δ. Here we perform the ASP using the MPS state for system size L_x × L_y, where results for L_x = 2, 4, 6 are shown as cyan, blue, and green data points. We find that the scaling exhibits t_ASP ∝ 1/Δ^β with β ~ 1.5.

As we have seen in the previous sections, the dominant contribution to the quantum resource is C_HS, namely the controlled Hamiltonian simulation from which the eigenenergy phase is extracted into the ancilla qubits. Fortunately, with the scope of performing quantum resource estimation for the QPE and digital quantum simulation, numerous works have been devoted to analyzing the error scaling of various Hamiltonian simulation techniques, in particular the Trotter-based methods42,43,44. Nevertheless, we point out that crucial questions remain unclear: (A) which technique is the best practice to achieve the earliest quantum advantage for condensed matter problems, and (B) at which point does the crossover occur?

Here we perform resource estimation under the following common assumptions: (1) logical qubits are encoded using the formalism of surface codes45; (2) quantum gate implementation is based on the Clifford+T formalism. Initially, we address the first question (A) by comparing the total number of T-gates, or T-count, across various Hamiltonian simulation algorithms, as the application of a T-gate involves a time-consuming procedure known as magic-state distillation. Although not necessarily the case, this procedure is considered to dominate the runtime in many realistic setups. Therefore, we argue that the T-count shall provide sufficient information to determine the best Hamiltonian simulation technique. Then, with the aim of addressing the second question (B), we further perform a high-resolution analysis of the runtime. We in particular consider concrete quantum circuit compilation with a specific physical/logical qubit configuration compatible with the surface code implemented on a square lattice.

Let us first compute the T-counts to compare the state-of-the-art Hamiltonian simulation techniques: the (randomized) Trotter product formula46,47, qDRIFT44, Taylorization48,49,50, and qubitization40. The former two commonly rely on the Trotter decomposition to approximate the unitary time evolution with sequential applications of (controlled) Pauli rotations, while the latter two, dubbed "post-Trotter methods," are based on a technique called block-encoding, which utilizes ancillary qubits to encode desired (non-unitary) operations on target systems (see Supplementary Note 5). While post-Trotter methods are known to be exponentially more efficient in terms of gate complexity with respect to the simulation accuracy48, it is nontrivial to ask which is the best practice in the crossover regime, where the prefactor plays a significant role.
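As a minimal numerical illustration of the Trotter idea (our sketch, unrelated to the paper's compiled circuits), the second-order product formula approximates exp(−i(A + B)t) by symmetric alternation, with error falling as 1/r² in the number of steps r:

```python
# Second-order (symmetric) Trotter: exp(-i(A+B)t) ~ [e^{-iA t/2r} e^{-iB t/r}
# e^{-iA t/2r}]^r, demonstrated on a small two-qubit example.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(sx, sx)          # two-qubit XX coupling
B = np.kron(sz, np.eye(2))   # single-qubit Z field; [A, B] != 0
t = 1.0

exact = expm(-1j * (A + B) * t)
for r in (1, 4, 16, 64):                     # number of Trotter steps
    half_a = expm(-1j * A * t / (2 * r))
    full_b = expm(-1j * B * t / r)
    approx = np.linalg.matrix_power(half_a @ full_b @ half_a, r)
    err = np.linalg.norm(approx - exact, 2)  # spectral-norm error ~ 1/r^2
    print(r, err)
```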

We have compiled quantum circuits based on existing error analysis to reveal the required T-counts (see Supplementary Notes 4, 6, and 7). From the results presented in Table 1, we find that the qubitization algorithm provides the most efficient implementation for reaching the target energy accuracy ε = 0.01. Although the post-Trotter methods, i.e., the Taylorization and qubitization algorithms, require additional ancillary qubits of O(log N) to perform the block-encoding, we regard this overhead as not a serious roadblock, since the target system itself and the quantum Fourier transformation require O(N) and O(log(N/ε)) qubits, respectively. In fact, as we show in Fig. 5, the qubitization algorithms are efficient in the near-crosspoint regime in physical qubit count as well, due to the suppressed code distance (see Supplementary Note 9 for details).

Here, we estimate the ground state energy up to target accuracy ε = 0.01 for the 2d J1-J2 Heisenberg model (J2/J1 = 0.5) and the 2d Fermi-Hubbard model (U/t = 4), both with lattice size of 10×10. The blue, orange, green, and red points indicate the results that employ qDRIFT, 2nd-order random Trotter, Taylorization, and qubitization, where the circle and star markers denote the spin and fermionic models, respectively. Two flavors of the qubitization, the sequential and the newly proposed product-wise construction (see Supplementary Note 5 for details), are discriminated by filled and unfilled markers. Note that N_ph here does not account for the magic state factories, which are incorporated in Fig. 7.

We also mention that, for the 2d Fermi-Hubbard model, there exist some specialized Trotter-based methods that improve the performance significantly16,17. For instance, the T-count of the QPE based on the state-of-the-art PLAQ method proposed in ref. 17 can be estimated to be approximately 4×10^8 for the 10×10 system under ε = 0.01, which is slightly higher than the T-count required for the qubitization technique. Since the scaling of PLAQ is similar to the 2nd-order Trotter method, we expect that the qubitization remains the best for all system sizes N.

The above results motivate us to study the quantum-classical crossover entirely using the qubitization technique as the subroutine for the QPE. As is detailed in Supplementary Note 8, our runtime analysis involves the following steps:

(I) Hardware configuration. Determine the architecture of quantum computers (e.g., number of magic state factories, qubit connectivity, etc.).

(II) Circuit synthesis and transpilation. Translate the high-level description of quantum circuits into the Clifford+T formalism with the provided optimization level.

(III) Compilation to executable instructions. Decompose logical gates into the sequence of executable instruction sets based on lattice surgery.

It should be noted that ordinary runtime estimation involves only step (II): simply multiplying the execution time of a T-gate by the T-count, as N_T × t_T. However, we emphasize that this estimation method misses several vital factors in the time analysis, which may eventually lead to deviations of one or two orders of magnitude. In sharp contrast, our runtime analysis comprehensively takes all steps into account to yield reliable estimates under realistic quantum computing platforms.
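For reference, the step-(II)-only estimate referred to here is just the T-count divided by the T-gate consumption rate; a one-function sketch with illustrative numbers (the actual T-counts are in Table 1 and the Supplements):

```python
# Naive runtime estimate from T-count alone: t = N_T * t_T = N_T / rate.
def naive_runtime_seconds(t_count: float, t_gate_rate_hz: float) -> float:
    """Step-(II)-only estimate: total T-count over T-gate consumption rate."""
    return t_count / t_gate_rate_hz

# e.g. ~1e9 T-gates at 1 kHz vs 1 MHz consumption rates (rates are assumptions)
print(naive_runtime_seconds(1e9, 1e3))  # 1e6 s
print(naive_runtime_seconds(1e9, 1e6))  # 1e3 s
```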

Figure 6 shows the runtime of classical/quantum algorithms simulating the ground state energy of the 2d J1-J2 Heisenberg model and the 2d Fermi-Hubbard model. In both figures, we observe clear evidence of a quantum-classical crosspoint below a hundred-qubit system (at lattice sizes of 10×10 and 6×6, respectively) within plausible runtime. Furthermore, a significant difference from ab initio quantum chemistry calculations is highlighted in the feasibility of system size N ~ 1000 logical-qubit simulations, especially the simulation of the 2d Heisenberg model that utilizes the parallelization technique for the oracles (see Supplementary Note 8 for details).

Here we show the results for (a) the 2d J1-J2 Heisenberg model with J2/J1 = 0.5 and (b) the 2d Fermi-Hubbard model with U/t = 4. The blue and red circles are the runtime estimates for the quantum phase estimation using the qubitization technique as a subroutine, whose analysis involves quantum circuit compilation over all of steps (I), (II), and (III). All the gates are compiled under the Clifford+T formalism, with each logical qubit encoded by the surface code with code distance d around 17 to 25, assuming a physical error rate of p = 10^-3 (see Supplementary Note 9). Here, the number of magic state factories nF and the number of parallelization threads nth are taken as (nF, nth) = (1, 1) and (16, 16) for "Single" and "Parallel," respectively. The dotted and dash-dotted lines are estimates that only involve the analysis of step (II); the calculation is based solely on the T-count of the algorithm with realistic T-gate consumption rates of 1 kHz and 1 MHz, respectively. The green stars and purple triangles are data obtained from actual simulation results of the classical DMRG and variational PEPS algorithms, respectively, with the shaded region denoting the potential room for improvement by using the most advanced computational resources (see Supplementary Note 2). Note that the system size is related to the lattice size M×M as N = 2M² in the Fermi-Hubbard model.

For concreteness, let us focus on the simulation for systems with lattice size of 10×10, where we find the quantum algorithm to outperform the classical one. Using the error scaling, we find that the DMRG simulation is estimated to take about 10^5 and 10^9 seconds for the 2d Heisenberg and 2d Fermi-Hubbard models, respectively. On the other hand, the estimation based on the dedicated quantum circuit compilation with the most pessimistic equipment (denoted as "Single" in Fig. 6) achieves runtime below 10^5 seconds in both models. This improves further by an order of magnitude when we assume a more abundant quantum resource. Concretely, using a quantum computer with multiple magic state factories (nF = 16) that performs multi-thread execution of the qubitization algorithm (nth = 16), the quantum advantage can be achieved within a computational time frame of several hours. We find it informative to also display the usual T-count-based estimation; it is indeed reasonable to assume a clock rate of 1-10 kHz for single-thread execution, while its precise value fluctuates depending on the problem instance.

We note that the classical algorithm (DMRG) experiences an exponential increase in the runtime to reach the desired total energy accuracy ε = 0.01. This outcome is somewhat expected, since one must force the MPS to represent 2d quantum correlations in 1d via a cylindrical boundary condition38,51. Meanwhile, the prefactor is significantly lower than that of other tensor-network-based methods, enabling its practical use in discussing the quantum-classical crossover. For instance, although the formal scaling is exponentially better for the variational PEPS algorithm, its runtime for the 2d J1-J2 Heisenberg model already exceeds 10^4 seconds for the 6×6 model, while the DMRG algorithm consumes only 10^2 seconds (see Fig. 6a). Even if we assume that the bond dimension of the PEPS can be kept constant for larger N, the crossover between DMRG and variational PEPS occurs only above the size of 12×12. As we have discussed previously, we reasonably expect D = O(log N) for simulation at fixed total accuracy, and furthermore expect that the number of variational optimization steps also scales polynomially with N. This implies that the scaling is much worse than O(N); in fact, we have used a constant value of D for L = 4, 6, 8 and observe that the scaling is already worse than cubic in our setup. Given such a scaling, we conclude that DMRG is better suited than variational PEPS for investigating the quantum-classical crossover, and also that quantum algorithms with quadratic scaling in N run faster in the asymptotic limit.

It is informative to modify the hardware/algorithmic requirements to explore the variation of the quantum-classical crosspoint. For instance, the code distance of the surface code depends on p and ε as (see Supplementary Note 9)

$$d = O\!\left(\frac{\log(N/\epsilon)}{\log(1/p)}\right).$$

(5)

Note that this also affects the number of physical qubits via the number of physical qubits per logical qubit, 2d². We visualize the above relationship explicitly in Fig. 7, which considers the near-crosspoint regime of the 2d J1-J2 Heisenberg model and the 2d Fermi-Hubbard model. It can be seen from Fig. 7a, b, d, e that improving the error rate directly triggers a reduction of the required code distance, which results in a significant suppression of the number of physical qubits. This is even better captured by Fig. 7c, f. By achieving a physical error rate of p = 10^-4 or 10^-5, for instance, one may realize a 4-fold or 10-fold reduction in the number of physical qubits.
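A small sketch of how a relation like Eq. (5) arises, under the textbook surface-code assumption (not the paper's exact noise model) that the logical error rate per code cycle scales as p_L ≈ A(p/p_th)^((d+1)/2); all constants below are assumptions chosen for illustration:

```python
# Pick the smallest odd code distance d whose accumulated logical error,
# budgeted over ~(logical qubits) x (code cycles), stays below a target.
import math

def required_code_distance(n_logical, n_cycles, p, p_th=1e-2, A=0.1,
                           target_fail=0.01):
    """Smallest odd d with total logical failure below target_fail."""
    budget = target_fail / (n_logical * n_cycles)  # per-qubit, per-cycle
    d = 3
    while A * (p / p_th) ** ((d + 1) / 2) > budget:
        d += 2
    return d

# Illustrative numbers near the crosspoint regime discussed in the text:
d = required_code_distance(n_logical=200, n_cycles=1e9, p=1e-3)
print(d, "-> physical qubits per logical ~", 2 * d**2)
# Improving p to 1e-4 shrinks d, and hence the 2*d^2 qubit overhead:
print(required_code_distance(n_logical=200, n_cycles=1e9, p=1e-4))
```

With these (assumed) constants, p = 10^-3 gives d = 25 while p = 10^-4 gives d = 13, roughly the 4-fold reduction in 2d² quoted above.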

The panels denote (a) the code distance d and (b) the number of physical qubits N_ph required to simulate the ground state of the 2d J1-J2 Heisenberg model with lattice size of 10×10 and J2 = 0.5. Here, the qubit plane is assumed to be organized as (nF, #thread) = (1, 1). The setup used in the main text, ε = 0.01 and p = 10^-3, is indicated by the orange stars. (c) Focused plot at ε = 0.01. Blue and red points show the results for code distance d and N_ph, respectively, where the filled and empty markers correspond to floor plans with (nF, #thread) = (1, 1) and (16, 16), respectively. (d-f) Plots for the 2d Fermi-Hubbard model of lattice size 6×6 with U = 4, corresponding to (a-c) for the Heisenberg model.

The logarithmic dependence on ε in Eq. (5) implies that the target accuracy does not significantly affect the qubit counts; it is rather associated with the runtime, since the total runtime scaling is given as

$$t = O\!\left(\frac{N^2\log(N/\epsilon)}{\epsilon\log(1/p)}\right),$$

(6)

which now shows polynomial dependence on ε. Note that this scaling is based on multiplying a factor of d into the gate complexity, since we assumed that the runtime is dominated by magic state generation, whose time is proportional to the code distance d, rather than by the classical postprocessing (see Supplementary Notes 8 and 9). As highlighted in Fig. 8, we observe that in the regime of larger ε, the computation is completed within minutes. However, we do not regard such a regime as an optimal field for the quantum advantage. The runtime of classical algorithms typically shows a higher-power dependence on ε, denoted as O(1/ε^γ), with γ ~ 2 for the J1-J2 Heisenberg model and γ ~ 4 for the Fermi-Hubbard model (see Supplementary Note 2), which implies that classical algorithms are likely to run even faster than quantum algorithms under large values of ε. We thus argue that the setup of ε = 0.01 provides a platform that is both plausible for the quantum algorithm and challenging for the classical algorithm.
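A quick way to see why small ε favors the quantum side is to compare the ε dependence of Eq. (6), roughly 1/ε, against the classical O(1/ε^γ) scaling quoted above. The prefactors below are arbitrary normalizations chosen only to anchor both curves at ε = 0.1; they are not the paper's fitted values:

```python
# Compare quantum (~1/eps, Heisenberg-limited QPE) vs classical (~1/eps^gamma)
# runtime scaling as the target accuracy eps is tightened.
import numpy as np

eps = np.array([0.1, 0.05, 0.02, 0.01, 0.005])
t_quantum = 1e3 * (0.1 / eps)         # t ~ 1/eps from Eq. (6)
t_classical = 1e3 * (0.1 / eps)**4    # gamma ~ 4 (Fermi-Hubbard DMRG case)

for e, tq, tc in zip(eps, t_quantum, t_classical):
    print(f"eps={e:<6} quantum ~{tq:9.0f} s   classical ~{tc:12.0f} s")
# At large eps the classical side is competitive; the ratio flips rapidly
# as eps decreases, which is why eps = 0.01 is a challenging classical regime.
```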

Panels (a) and (c) show results for the 2d J1-J2 Heisenberg model of lattice size 10×10 with J2 = 0.5, while (b) and (d) show results for the 2d Fermi-Hubbard model of lattice size 6×6 with U = 4. The floor plan of the qubit plane is assumed as (nF, #thread) = (1, 1) and (16, 16) for (a, b) and (c, d), respectively. The setup ε = 0.01 and p = 10^-3, employed in Fig. 6, is shown by the black open stars.

See the rest here:

Hunting for quantum-classical crossover in condensed matter problems | npj Quantum Information - Nature.com

Read More..

Tweak to Schrödinger’s cat equation could unite Einstein’s relativity and quantum mechanics, study hints – Livescience.com

Theoretical physicists have proposed a new solution to the Schrödinger's cat paradox, which may allow the theories of quantum mechanics and Einstein's relativity to live in better harmony.

The bizarre laws of quantum physics postulate that physical objects can exist in a combination of multiple states, like being in two places at once or possessing various velocities simultaneously. According to this theory, a system remains in such a "superposition" until it interacts with a measuring device, only acquiring definite values as a result of the measurement. Such an abrupt change in the state of the system is called a collapse.

Physicist Erwin Schrödinger summarized this theory in 1935 with his famous feline paradox, using the metaphor of a cat in a sealed box being simultaneously dead and alive until the box is opened, thus collapsing the cat's state and revealing its fate.

However, applying these rules to real-world scenarios faces challenges and that's where the true paradox arises. While quantum laws hold true for the realm of elementary particles, larger objects behave in accordance with classical physics as predicted by Einstein's theory of general relativity, and are never observed in a superposition of states. Describing the entire universe using quantum principles poses even greater hurdles, as the cosmos appears entirely classical and lacks any external observer to serve as a measuring device for its state.

"The question is can the Universe, which does not have a surrounding environment, be in such a superposition?" lead author Matteo Carlesso, a theoretical physicist at the University of Trieste in Italy, told Live Science in an email. "Observations say no: everything goes along the classical predictions of General Relativity. Then, what is breaking such a superposition?"

Related: Quantum 'yin-yang' shows two photons being entangled in real-time

To tackle this question, Carlesso and his colleagues proposed modifications to the Schrödinger equation, which governs how all states, including those in superposition, evolve over time.


"Specific modifications of the Schrdinger equation can solve the problem," Carlesso said. In particular, the team added terms to the equation that captured how the system interacts with itself, as well as adding some other specific terms. This in turn leads to superposition breaking down.

"Such effects are stronger the larger the system," Carlesso added.

Crucially, these modifications have little impact on microscopic quantum systems, such as atoms and molecules, but allow larger systems like the universe itself to collapse at frequent intervals, giving them definite values that fit with our observations of the cosmos. The team described their modified Schrödinger equation in February in the Journal of High Energy Physics.

In their tweaked version of quantum physics, the researchers eliminated the distinction between objects subject to measurement and measuring devices. Instead, they proposed that each system's state undergoes spontaneous collapse at regular intervals, leading to the acquisition of definite values for some of their attributes.

For large systems, spontaneous collapse occurs frequently, rendering them classical in appearance. Subatomic objects interacting with these systems become part of them, leading to rapid collapse of their state and the acquisition of definite coordinates, akin to measurement.

"With no action from external entities, any system localizes (or collapses) spontaneously in a particular state. In place of having a cat being dead AND alive, one finds it dead OR alive," Carlesso said.

The new model may explain why our universe's space-time geometry doesn't exist in a superposition of states and obeys the classical equations of Einstein's relativity.

"Our model describes a quantum Universe, which eventually collapsed thus becoming effectively classical," Carlesso said. "We show that spontaneous collapse models can explain the emergence of a classical Universe from a quantum superposition of Universes, where each of these Universes has a different space-time geometry."

While this theory may explain why the universe seems to be governed by classical laws of physics, it doesn't make new predictions about large-scale physical processes.

However, it does make predictions about how atoms and molecules will behave, albeit with minimal deviations from conventional quantum mechanics.

As a result, testing their modified quantum model won't be so simple. Future work will be aimed at coming up with such tests.

"Together with experimental collaborators, we are trying to test the effects of the collapse modifications or derive bounds on their parameters. This is completely equivalent to testing the limits of quantum theory."

The rest is here:

Tweak to Schrödinger's cat equation could unite Einstein's relativity and quantum mechanics, study hints - Livescience.com

Read More..

A new kind of experiment at the LHC could unravel quantum reality – New Scientist

For Alan Barr, it started during the covid-19 lockdowns. "I had a bit more time. I could sit and think," he says.

He had enjoyed being part of the success at CERN's Large Hadron Collider (LHC) near Geneva, Switzerland, the particle collider that discovered the Higgs boson. But now, he wondered, were they missing a trick? "I had spent long hours screwing bits of it together. And I thought, 'Well, we've built this beautiful piece of apparatus, but maybe we could be doing more with it,'" he says.

The LHC is typically seen as a machine for finding new particles. But now Barr and a slew of other physicists are asking if it can also be used to probe the underlying meaning of quantum theory and why it paints reality as being so deeply weird.

That's exactly what Barr and his colleagues are now investigating in earnest. Last year, they published the results of an experiment in which they showed that pairs of fundamental particles called top quarks could be put into the quantum state known as entanglement.

This was just the first of many entanglement experiments at particle colliders that could open up a whole new way of studying the nature of the universe. We can now ask why reality in quantum mechanics is so hard to pin down and what this has to do with experimenters, or even particles, having free will. Doing so could reveal whether space-time is fundamental, or perhaps unveil a deeper reality that is even stranger than quantum mechanics. "We can do really different things with this collider," says Barr.

Link:

A new kind of experiment at the LHC could unravel quantum reality - New Scientist

Read More..

Physicists Simulated a Black Hole in The Lab. Then It Began to Glow. – ScienceAlert

A black hole analog could tell us a thing or two about an elusive radiation theoretically emitted by the real thing.

Using a chain of atoms in single file to simulate the event horizon of a black hole, a team of physicists in 2022 observed the equivalent of what we call Hawking radiation: particles born from disturbances in the quantum fluctuations caused by the black hole's break in spacetime.

This, they say, could help resolve the tension between two currently irreconcilable frameworks for describing the Universe: the general theory of relativity, which describes the behavior of gravity as a continuous field known as spacetime; and quantum mechanics, which describes the behavior of discrete particles using the mathematics of probability.

For a unified theory of quantum gravity that can be applied universally, these two immiscible theories need to find a way to somehow get along.

This is where black holes come into the picture: possibly the weirdest, most extreme objects in the Universe. These massive objects are so incredibly dense that, within a certain distance of the black hole's center of mass, no velocity in the Universe is sufficient for escape. Not even light speed.

That distance, varying depending on the mass of the black hole, is called the event horizon. Once an object crosses its boundary we can only imagine what happens, since nothing returns with vital information on its fate. But in 1974, Stephen Hawking proposed that interruptions to quantum fluctuations caused by the event horizon result in a type of radiation very similar to thermal radiation.

If this Hawking radiation exists, it's way too faint for us to detect yet. It's possible we'll never sift it out of the hissing static of the Universe. But we can probe its properties by creating black hole analogs in laboratory settings.

This had been done before, but in November 2022 a team led by Lotte Mertens of the University of Amsterdam in the Netherlands tried something new.

A one-dimensional chain of atoms served as a path for electrons to 'hop' from one position to another. By tuning the ease with which this hopping can occur, the physicists could cause certain properties to vanish, effectively creating a kind of event horizon that interfered with the wave-like nature of the electrons.
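A rough sketch of that kind of setup (our reconstruction, not the study's actual model): a 1d tight-binding chain whose hopping amplitude drops beyond a chosen site, creating an effective "horizon" for the electron waves:

```python
# Single-particle tight-binding chain with a position-dependent hopping
# amplitude: H[i, i+1] = H[i+1, i] = -t_i. Suppressing t_i beyond one site
# mimics the kind of tunable "event horizon" described in the article.
import numpy as np

n_sites, horizon = 60, 30
hop = np.ones(n_sites - 1)
hop[horizon:] = 0.2   # suppressed hopping beyond the "event horizon"

H = np.zeros((n_sites, n_sites))
for i, t in enumerate(hop):
    H[i, i + 1] = H[i + 1, i] = -t

# The two sides of the horizon see different effective band structures,
# which interferes with the wave-like propagation of the electrons.
energies = np.linalg.eigvalsh(H)
print(energies[:5])
```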

The effect of this fake event horizon produced a rise in temperature that matched theoretical expectations of an equivalent black hole system, the team said, but only when part of the chain extended beyond the event horizon.

This could mean the entanglement of particles that straddle the event horizon is instrumental in generating Hawking radiation.

The simulated Hawking radiation was only thermal for a certain range of hop amplitudes, and under simulations that began by mimicking a kind of spacetime considered to be 'flat'. This suggests that Hawking radiation may only be thermal within a range of situations, and when there is a change in the warp of space-time due to gravity.

It's unclear what this means for quantum gravity, but the model offers a way to study the emergence of Hawking radiation in an environment that isn't influenced by the wild dynamics of the formation of a black hole. And, because it's so simple, it can be put to work in a wide range of experimental set-ups, the researchers said.

"This, can open a venue for exploring fundamental quantum-mechanical aspects alongside gravity and curved spacetimes in various condensed matter settings," the researchers wrote.

The research has been published in Physical Review Research.

A version of this article was first published in November 2022.

Read more here:

Physicists Simulated a Black Hole in The Lab. Then It Began to Glow. - ScienceAlert

Read More..

Australia bets on US startup that aims to build the first massive quantum computer – Semafor

Quantum computers do not work like traditional computers. Instead of using microscopic transistors, which can represent either ones or zeros, they use particles known as qubits.

Unlike transistors, qubits can exist in multiple states at a time, allowing them to perform different types of calculations. Quantum entanglement allows many qubits to be linked, enabling an even larger number of computations.

Traditional computers are more or less limited by the laws of classical physics; quantum computers are not.

There are several ways to make qubits, and popular methods include using trapped ions or particles within superconductors.

PsiQuantum believes the best approach is using individual photons as qubits, by manipulating single particles of light. While this is among the most difficult methods of quantum computing, PsiQuantum made a bet that it was ultimately the most practical for large-scale quantum computers because of the existing infrastructure built around photonics.

It has partnered with one of the biggest semiconductor manufacturers in the world, GlobalFoundries, to produce photonic chips with enough fidelity to work with individual photons.

Another major advantage of using photons as qubits is that photons can operate at room temperature. Most other quantum computers require extremely cold temperatures, making them impractical at scale.

PsiQuantum's method still requires refrigeration, but not nearly as much as other methods. As a result, it plans to build its quantum computers inside cryogenic cabinets built by a company that makes meat lockers.

Those units are then networked together to increase the total number of qubits. By the end of 2027, PsiQuantum plans to have a quantum computer with 1 million qubits. The largest quantum computers today have about 1,000.

With 1 million qubits, PsiQuantum believes it can perform error correction, essentially making up for mistakes made by the qubits. Traditional computers also require error correction, but in the case of quantum computers, the majority of qubits are used for this task. PsiQuantum co-founder Pete Shadbolt said that "sucks, but that's tough luck."

Networking the refrigerated units together was another hurdle for PsiQuantum. It needed to achieve a breakthrough in photonic switching, essentially sending photons back and forth with an unprecedented amount of fidelity, allowing very few photons to escape.

PsiQuantum revealed some of how it has achieved this in a paper that appeared online Friday.

Read the original here:

Australia bets on US startup that aims to build the first massive quantum computer - Semafor

Read More..

Australia just made a billion-dollar bet on building the world’s first ‘useful’ quantum computer in Brisbane. Will it pay off? – The Conversation

The Australian government has announced a pledge of approximately A$940 million (US$617 million) to PsiQuantum, a quantum computing start-up company based in Silicon Valley.

Half of the funding will come from the Queensland government, and in exchange, PsiQuantum will locate its planned quantum computer in Brisbane, with a regional headquarters at Brisbane Airport.

PsiQuantum claims it will build the world's first "useful" quantum computer. Such a device could be enormously helpful for applications like cracking codes, discovering new materials and drugs, modelling climate and weather, and solving other tough computational problems.

Companies around the world and several national governments are racing to be the first to solve the quantum computing puzzle. How likely is it that Australia's bet on PsiQuantum will pay off?

Quantum computers are computers that run quantum algorithms. These are step-by-step sets of instructions that change data encoded with quantum information. (Ordinary computers run digital algorithms, step-by-step sets of instructions that change digital information.)

Digital computers represent information as long strings of 1s and 0s. Quantum computers represent information as long lists of numbers. Over the past century, scientists have discovered these numbers are naturally encoded in fine details of energy and matter.
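As a minimal illustration of those "long lists of numbers" (our example, not from the article): the state of n qubits is a list of 2^n complex amplitudes, here built for two qubits with NumPy:

```python
# A two-qubit state as a list of 2**2 = 4 complex amplitudes.
import numpy as np

zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition

state = np.kron(plus, zero)  # combine two qubits into one state vector
print(state)                 # [0.707+0j, 0+0j, 0.707+0j, 0+0j]

# A digital computer holding n bits stores one string of 1s and 0s; a general
# n-qubit state needs all 2**n amplitudes, which is why classically simulating
# quantum algorithms quickly becomes infeasible as n grows.
```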

Read more: Hype and cash are muddying public understanding of quantum computing

Quantum computing operates fundamentally differently from traditional computing. It uses principles of quantum physics and may be able to perform calculations that are not feasible for digital computers.

We know that quantum algorithms can solve some problems with far fewer steps than digital algorithms. However, to date nobody has built a quantum computer that can run quantum algorithms in a reliable way.

Researchers around the world are trying to build quantum computers using different kinds of technology.

PsiQuantum's approach uses individual particles of light, called photons, to process quantum data. Photon-based quantum computers are expected to be less prone to errors than other kinds.

The Australian government has also invested around A$40 million in Sydney-based Silicon Quantum Computing. This company aims to encode quantum data in tiny particles trapped in silicon and other familiar materials used in current electronics.

A third approach is trapped ions: individually captured, electrically charged atomic particles, which have the advantage of being inherently stable and all identical. A company called IonQ is one taking this track.

However, many believe the current leading approach is artificial atoms based on superconducting circuits. These can be customised with different properties. This is the approach taken by Google, IBM, and Rigetti.

There is no clear winning technology. It's likely that a hybrid approach will eventually prevail.

The timeline set by PsiQuantum and supported by federal endorsements aims for an operational quantum computer by 2029. Some see this projected timeline as overly optimistic, since three years ago PsiQuantum was planning to meet a deadline of 2025.

Progress in quantum technology has been steady since its inception nearly three decades ago. But there are many challenges yet to overcome in creating a device that is both large enough to be useful and not prone to errors.

The announcement represents a significant commitment to advancing quantum computing technology, both within Australian borders and worldwide. It falls under the Albanese government's Future Made in Australia policy.

However, the investment risks being overshadowed by a debate over transparency and the selection process.

Criticisms have pointed to a lack of detailed public disclosure about why PsiQuantum was chosen over local competitors.

Read more: Australia may spend hundreds of millions of dollars on quantum computing research. Are we chasing a mirage?

These concerns underscore the need for a more open dialogue about government spending and partnership selections to maintain public trust in such large-scale technological investments.

Public trust is difficult to establish when little to no effort has been made to educate people in quantum technology. Some claim that quantum literacy will be a 21st-century skill on par with digital literacy.

Australia has made its quantum hardware bet. But even if the hardware works as planned, it will only be useful if we have people who know how to use it, and that means training in quantum theory and software.

The Australian Quantum Software Network, a collaboration of more than 130 of the nation's leading researchers in quantum algorithms, software and theory, including myself, was launched in late 2022 to achieve this.

The government says the PsiQuantum project is expected to create up to 400 specialised jobs, retaining and attracting new highly skilled talent to both the state and country. The media release also contains the dramatic forecast that success could lead to up to an additional $48 billion in GDP and 240,000 new jobs in Australia by 2040.

Efforts like the Sydney Quantum Academy, the Australian Centre for Quantum Growth, and my own quantum education startup Eigensystems, which recently launched the Quokka personal quantum computing and quantum literacy platform, will help to meet this goal.

In the coming decade, education and training will be crucial, not only to support this investment but also to expand Australias expertise so that it may become a net exporter in the quantum industry and a substantial player in the global race for a quantum computer.

Read more:

Australia just made a billion-dollar bet on building the world's first 'useful' quantum computer in Brisbane. Will it pay off? - The Conversation

Read More..

Quantum forces used to automatically assemble tiny device – New Scientist

Triangular gold flakes can be manipulated using mysterious quantum forces

George Zograf/CC BY-NC 4.0

Tiny gold devices for controlling light have been built using strange quantum effects that hide in seemingly empty space.

In 1948, physicist Hendrik Casimir theorised that some objects experience a very weak attraction when they are held close to one another in space because of the imperceptible flickers of quantum fields in the gap between them. Researchers have since confirmed this Casimir effect in the lab. Betül Küçüköz at Chalmers University of Technology in Sweden and her colleagues have now found a way to make it useful.

They wanted to build a light-trapping cavity using two pieces of gold positioned parallel to one another, between which light would bounce back and forth, unable to escape. First, they created the lower end of the cavity by imprinting a triangular gold flake between 4 and 10 microns in size onto a small piece of glass. The upper end of the cavity also comprised a triangular gold flake, but instead of holding it in place with some implement, the researchers immersed the glass-mounted gold flake in a solution of salty water containing additional triangular gold flakes, then let forces that arose naturally do the job instead.

One of those forces was the electrostatic force caused by electrical charges associated with the dissolved salt. The other was the Casimir effect. Küçüköz says that she watched many runs of this experiment under the microscope and could always see the Casimir effect in action. It caused one of the free-floating gold flakes to move towards the one imprinted on glass, and then made it rotate above the imprinted flake until the triangular footprints of the two flakes matched.

This completed the assembly of the cavity, which could then trap light. The researchers had lots of control over the cavity-forming process, says Küçüköz. For instance, by using different concentrations of salt, they could tailor the strength of the electrostatic force to create cavities with slightly different dimensions, with distances between the flakes ranging between 100 and 200 nanometres, that could each then trap light of a different colour.

Raúl Esquivel-Sirvent at the National Autonomous University of Mexico says the idea of self-assembly, which he compares to throwing a Lego set into a pot and having a structure emerge without ever manually pressing any pieces together, is not new. But he says the team's experiment is more detailed and controlled than previous attempts to use the Casimir effect for similar purposes. However, the Casimir effect can be so subtle, says Esquivel-Sirvent, that it is possible there are still other, undetected effects at play here as well.

Going forward, Küçüköz and her colleagues want to use their cavities as part of more complex experiments with light, including some that involve placing objects inside the cavity between the two gold flakes.


See the rest here:

Quantum forces used to automatically assemble tiny device - New Scientist

Read More..