
Realization of higher-order topological lattices on a quantum computer – Nature.com

Mapping higher-dimensional lattices to 1D quantum chains

While small quasi-1D and 2D systems have been simulated on digital quantum computers27,28, the explicit simulation of higher-dimensional lattices remains elusive. Directly simulating a d-dimensional lattice of width L along each dimension requires ~$L^d$ qubits. For large dimensionality d or lattice size L, this quickly becomes infeasible on NISQ devices, which are significantly limited by the number of usable qubits, qubit connectivity, gate errors, and decoherence times.

To overcome these hardware limitations, we devise an approach to exploit the exponentially large many-body Hilbert space of an interacting qubit chain. The key inspiration is that most local lattice models only access a small portion of the full Hilbert space (particularly non-interacting models and models with symmetries), and an $L^d$-site lattice can be consistently represented with far fewer than $L^d$ qubits. To do so, we introduce an exact mapping that reduces d-dimensional lattices to 1D chains hosting d-particle interactions, which is naturally simulable on a quantum computer that accesses and operates on the many-body Hilbert space of a register of qubits.

At a general level, we consider a generic d-dimensional n-band model $\mathcal{H}=\sum_{\mathbf{k}}\mathbf{c}_{\mathbf{k}}^{\dagger}\mathcal{H}(\mathbf{k})\mathbf{c}_{\mathbf{k}}$ on an arbitrary lattice. In real space,

$$\mathcal{H}=\sum_{\mathbf{r}\mathbf{r}^{\prime}}\sum_{\gamma\gamma^{\prime}}h_{\mathbf{r}\mathbf{r}^{\prime}}^{\gamma\gamma^{\prime}}\,c_{\mathbf{r}\gamma}^{\dagger}c_{\mathbf{r}^{\prime}\gamma^{\prime}},\tag{1}$$

where we have associated the band degrees of freedom to a sublattice structure $\gamma$, and $h_{\mathbf{r}\mathbf{r}^{\prime}}^{\gamma\gamma^{\prime}}=0$ for $|\mathbf{r}-\mathbf{r}^{\prime}|$ outside the coupling range of the model, i.e., adjacent sites for a nearest-neighbor (NN) model, next-adjacent for next-NN, etc. The operator $c_{\mathbf{r}\gamma}$ annihilates particle excitations on sublattice $\gamma$ of site $\mathbf{r}$.

To take advantage of the degrees of freedom in the many-body Hilbert space, our mapping is defined such that the hopping of a single particle on the original d-dimensional lattice from $(\mathbf{r}^{\prime},\gamma^{\prime})$ to $(\mathbf{r},\gamma)$ becomes the simultaneous hopping of d particles, each of a distinct species, from locations $(r_{1}^{\prime},\ldots,r_{d}^{\prime})$ to $(r_{1},\ldots,r_{d})$ and sublattice $\gamma^{\prime}$ to $\gamma$ on a 1D interacting chain. Explicitly, this map is given by

$$c_{\mathbf{r}\gamma}^{\dagger}\mapsto\prod_{\alpha=1}^{d}\left[\omega_{r_{\alpha}\gamma}^{\alpha}\right]^{\dagger},\qquad c_{\mathbf{r}\gamma}\mapsto\prod_{\alpha=1}^{d}\omega_{r_{\alpha}\gamma}^{\alpha},\tag{2}$$

where $r_{\alpha}$ is the $\alpha$-th component of $\mathbf{r}$, and $\{\omega_{\ell\gamma}^{\alpha}\}_{\alpha=1}^{d}$ represents the d excitation species hosted on sublattice $\gamma$ of site $\ell$ on the interacting chain, yielding

$$\mathcal{H}\mapsto\mathcal{H}_{\mathrm{1D}}=\sum_{\mathbf{r}\mathbf{r}^{\prime}}\sum_{\gamma\gamma^{\prime}}h_{\mathbf{r}\mathbf{r}^{\prime}}^{\gamma\gamma^{\prime}}\prod_{\alpha=1}^{d}\left[\omega_{r_{\alpha}\gamma}^{\alpha}\right]^{\dagger}\omega_{r_{\alpha}^{\prime}\gamma^{\prime}}^{\alpha}.\tag{3}$$

In the single-particle context, exchange statistics is unimportant, and $\{\omega\}$ can be taken to be commuting. This mapping framework accommodates any lattice dimension and geometry, and any number of bands or sublattice degrees of freedom. As the mapping is performed at the second-quantized level, any one-body Hamiltonian expressed in second-quantized form can be treated, which encompasses a wide variety of single-body topological phenomena of interest. We refer readers to Supplementary Note 1 for a more expansive technical discussion. With slight modifications, this mapping can also be extended to admit interaction terms in the original d-dimensional lattice Hamiltonian, although we do not explore them further in this work.
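
To make the relabeling concrete, the following minimal Python sketch captures the bookkeeping behind Eq. (2); the function name and data layout are ours, chosen purely for illustration.

```python
# Sketch of the mode relabeling in Eq. (2): a single-particle mode (r, gamma)
# on a d-dimensional lattice is encoded by d co-occupied single-species modes
# on a 1D chain. Names and representation are ours, not the paper's.
from typing import List, Tuple

def map_mode(r: Tuple[int, ...], gamma: int) -> List[Tuple[int, int, int]]:
    """Return the chain modes [(species alpha, chain site r_alpha, sublattice gamma)]
    whose simultaneous occupation encodes the lattice mode (r, gamma)."""
    return [(alpha, r_alpha, gamma) for alpha, r_alpha in enumerate(r, start=1)]

# A lattice hop (r', gamma') -> (r, gamma) then becomes d simultaneous hops,
# one per species, exactly as in Eq. (3):
print(map_mode((3, 1, 2), 0))   # [(1, 3, 0), (2, 1, 0), (3, 2, 0)]
```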

For concreteness, we specialize our Hamiltonian to HOT systems henceforth and shall detail how our mapping enables them to be encoded on quantum processors. The simplest square lattice with HOT corner modes21 may be constructed from the paradigmatic 1D Su-Schrieffer-Heeger (SSH) model29. To allow for sufficient degrees of freedom for topological localization, we minimally require a 2D mesh of two different types of SSH chains in each direction, arranged in an alternating fashion:

$$\mathcal{H}_{\mathrm{lattice}}^{\mathrm{2D}}=\sum_{(x,y)\in[1,L]^{2}}\left[u_{xy}^{x}\,c_{(x+1)y}^{\dagger}+u_{yx}^{y}\,c_{x(y+1)}^{\dagger}\right]c_{xy}+\mathrm{h.c.},\tag{4}$$

where $c_{xy}$ is the annihilation operator acting on site (x, y) of the lattice and $u_{r_{1}r_{2}}^{\alpha}$ takes values of either $v_{r_{1}r_{2}}^{\alpha}$ for intra-cell hopping (odd $r_{2}$) or $w_{r_{1}r_{2}}^{\alpha}$ for inter-cell hopping (even $r_{2}$), $\alpha\in\{x, y\}$. Conceptually, we recognize that the 2D lattice momentum space can be equivalently interpreted as the joint configuration momentum space of two particles, specifically, the (1+1)-body sector of a corresponding 1D interacting chain. We map $c_{xy}\mapsto\mu_{x}\nu_{y}$, where $\mu_{\ell}$ and $\nu_{\ell}$ annihilate hardcore bosons of two different species at site $\ell$ on the chain. In the notation of Eq. (2), we identify $\omega_{\ell}^{1}=\omega_{\ell}^{x}=\mu_{\ell}$ and $\omega_{\ell}^{2}=\omega_{\ell}^{y}=\nu_{\ell}$, and the sublattice structure has been absorbed into the (parity of the) spatial coordinates. This yields an effective 1D, two-boson chain described by

$$\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}=\sum_{x=1}^{L}\sum_{y=1}^{L}\left[u_{xy}^{x}\,\mu_{x+1}^{\dagger}\mu_{x}n_{y}^{\nu}+u_{yx}^{y}\,\nu_{y+1}^{\dagger}\nu_{y}n_{x}^{\mu}\right]+\mathrm{h.c.},\tag{5}$$

where $n_{\ell}^{\omega}$ is the number operator for species $\omega$ at site $\ell$ of the chain. As written, each term in $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ represents an effective SSH model for one particular species $\mu$ or $\nu$, with the other species not participating in hopping but merely present (hence its number operator). These two-body interactions arising in $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ appear convoluted, but can be readily accommodated on a quantum computer, taking advantage of the quantum nature of the platform. To realize $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ on a quantum computer, we utilize 2 qubits to represent each site of the chain, associating the unoccupied, $\mu$-occupied, $\nu$-occupied, and both $\mu$- and $\nu$-occupied boson states to qubit states $|00\rangle$, $|01\rangle$, $|10\rangle$, and $|11\rangle$ respectively. Thus 2L qubits are needed for the simulation, a significant reduction from the $L^{2}$ qubits without the mapping, especially for large lattice sizes. We present simulation results on IBM quantum computers for lattice size $L\sim\mathcal{O}(10)$ in the Two-dimensional HOT square lattice section.
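
The exactness of this construction is easy to check numerically on a small instance. The sketch below (our own code, not the authors') builds Eq. (4) as a single-particle matrix and Eq. (5) as a many-body operator on 2L qubits in a species-major layout, then verifies that the one-$\mu$-one-$\nu$ sector reproduces the lattice spectrum. For brevity it assumes a single intra-cell amplitude v and inter-cell amplitude w shared by both directions, alternating along the hopping coordinate; the paper's C0/C2/C4 parameter sets are given in its Supplementary Table 1.

```python
# Verify that the (1+1)-particle sector of Eq. (5) reproduces the
# single-particle spectrum of Eq. (4). Simplified couplings (assumption):
# amplitude v on hops from odd coordinates, w from even coordinates.
import numpy as np
from functools import reduce

L, v, w = 4, 0.5, 1.0
u = lambda r: v if r % 2 == 1 else w        # odd coordinate -> intra-cell

# Eq. (4): single-particle Hamiltonian on the L x L lattice (open boundaries).
idx = lambda x, y: (x - 1) * L + (y - 1)    # coordinates are 1-indexed
H_lat = np.zeros((L * L, L * L))
for x in range(1, L + 1):
    for y in range(1, L + 1):
        if x < L:
            H_lat[idx(x + 1, y), idx(x, y)] = u(x)
        if y < L:
            H_lat[idx(x, y + 1), idx(x, y)] = u(y)
H_lat += H_lat.T

# Eq. (5): two hardcore-boson species on an L-site chain, i.e. 2L qubits,
# laid out here as [mu register] x [nu register].
b = np.array([[0., 1.], [0., 0.]])          # hardcore-boson annihilation
n = b.T @ b                                 # on-site number operator
I2 = np.eye(2)

def embed(op, site):
    """Embed a single-site operator at 0-indexed `site` of an L-site register."""
    return reduce(np.kron, [op if s == site else I2 for s in range(L)])

H_chain = np.zeros((4 ** L, 4 ** L))
for x in range(1, L):                       # mu hops x -> x+1, gated by n^nu_y
    for y in range(1, L + 1):
        H_chain += u(x) * np.kron(embed(b, x).T @ embed(b, x - 1), embed(n, y - 1))
for y in range(1, L):                       # nu hops y -> y+1, gated by n^mu_x
    for x in range(1, L + 1):
        H_chain += u(y) * np.kron(embed(n, x - 1), embed(b, y).T @ embed(b, y - 1))
H_chain += H_chain.T

def one_hot(site):                          # single boson at `site` (0-indexed)
    e = np.zeros(2 ** L)
    e[2 ** (L - 1 - site)] = 1.0
    return e

# Isometry onto the sector with exactly one particle of each species.
P = np.array([np.kron(one_hot(x - 1), one_hot(y - 1))
              for x in range(1, L + 1) for y in range(1, L + 1)])
assert np.allclose(np.linalg.eigvalsh(P @ H_chain @ P.T),
                   np.linalg.eigvalsh(H_lat))
```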

Our methodology naturally generalizes to higher dimensions. Specifically, a d-dimensional HOT lattice maps onto a d-species interacting 1D chain, and d qubits are employed to represent each site of the chain, providing sufficient many-body degrees of freedom to encode the $2^{d}$ occupancy basis states of each site. We write

$$\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}=\sum_{\mathbf{r}\in[1,L]^{d}}\sum_{\alpha=1}^{d}u_{\mathbf{r}}^{\alpha}\,c_{\mathbf{r}+\hat{\mathbf{e}}_{\alpha}}^{\dagger}c_{\mathbf{r}}+\mathrm{h.c.},\tag{6}$$

where $\alpha$ enumerates the directions along which hoppings occur and $\hat{\mathbf{e}}_{\alpha}$ is the unit vector along $\alpha$. As before, the hopping coefficients alternate between inter- and intra-cell values that can be different in each direction. Compactly, $u_{\mathbf{r}}^{\alpha}=[1-\pi(r_{\alpha})]\,v_{\boldsymbol{\pi}(\mathbf{r}_{\bar{\alpha}})}^{\alpha}+\pi(r_{\alpha})\,w_{\boldsymbol{\pi}(\mathbf{r}_{\bar{\alpha}})}^{\alpha}$ for parity function $\pi$, intra- and inter-cell hopping coefficients $v_{\boldsymbol{\pi}(\mathbf{r}_{\bar{\alpha}})}^{\alpha}$ and $w_{\boldsymbol{\pi}(\mathbf{r}_{\bar{\alpha}})}^{\alpha}$, where $\mathbf{r}_{\bar{\alpha}}$ denotes the spatial coordinates in the non-$\alpha$ directions; see Supplementary Table 1 for details of the hopping parameter values used in this work. Using d hardcore boson species $\{\omega^{\alpha}\}$ to represent the d dimensions, we map onto an interacting chain via $c_{\mathbf{r}}\mapsto\prod_{\alpha=1}^{d}\omega_{r_{\alpha}}^{\alpha}$, giving

$$\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}=\sum_{\mathbf{r}\in[1,L]^{d}}\sum_{\alpha=1}^{d}u_{\mathbf{r}}^{\alpha}\left[\left(\omega_{r_{\alpha}+1}^{\alpha}\right)^{\dagger}\omega_{r_{\alpha}}^{\alpha}\prod_{\beta=1,\,\beta\neq\alpha}^{d}n_{r_{\beta}}^{\beta}\right]+\mathrm{h.c.},\tag{7}$$

where $\omega_{\ell}^{\alpha}$ annihilates a hardcore boson of species $\alpha$ at site $\ell$ of the chain and $n_{\ell}^{\alpha}$ is the number operator of species $\alpha$. In the d=2 square lattice above, we had $\mathbf{r}=(x, y)$ and $\{\omega\}=\{\mu, \nu\}$. The highest-dimensional HOT lattice we shall examine is the d=4 tesseract, for which $\mathbf{r}=(x, y, z, w)$ and $\{\omega\}$ comprises four distinct boson species. In total, a d-dimensional HOT lattice Hamiltonian has $d\cdot 2^{d}$ distinct hopping coefficients, since there are d different lattice directions and $2^{d-1}$ distinct edges along each direction, each comprising two distinct hopping amplitudes for inter- and intra-cell hopping. Appropriately tuning these coefficients allows the manifestation of robust HOT modes along the boundaries (corners, edges, etc.) of the lattices; schematics of the various lattice configurations investigated in our experiments are shown in later sections.
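
In code, the coefficient rule just quoted is compact. The helper below is a hypothetical sketch (our naming); the v and w containers would each hold $d\cdot 2^{d-1}$ amplitudes, matching the $d\cdot 2^{d}$ total, with the actual values as tabulated in Supplementary Table 1.

```python
# Hypothetical helper implementing the coefficient rule for Eq. (6):
# u_r^alpha = [1 - pi(r_alpha)] v^alpha_{pi(r_perp)} + pi(r_alpha) w^alpha_{pi(r_perp)}.
def u(r, alpha, v, w):
    """Amplitude for the hop r -> r + e_alpha on the staggered lattice.

    r     : 1-indexed coordinate tuple, e.g. (x, y, z, w)
    alpha : hopping direction, 0-indexed
    v, w  : dicts keyed by (alpha, transverse parities), holding the intra-
            and inter-cell amplitudes (d * 2**(d - 1) entries each)."""
    intra = r[alpha] % 2 == 1                        # odd coordinate: intra-cell
    perp = tuple(c % 2 for i, c in enumerate(r) if i != alpha)
    return v[(alpha, perp)] if intra else w[(alpha, perp)]

# Toy d = 2 example with direction-independent amplitudes (0.5 intra, 1.0 inter):
v2 = {(a, (p,)): 0.5 for a in range(2) for p in range(2)}
w2 = {(a, (p,)): 1.0 for a in range(2) for p in range(2)}
print(u((1, 2), 0, v2, w2), u((2, 2), 0, v2, w2))    # 0.5 1.0
```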

Accordingly, the equivalent interacting 1D chain requires dL qubits to realize, an overwhelming reduction from the $L^{d}$ otherwise needed in a direct simulation of $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ without the mapping. We remark that such a significant compression is possible because HOT is inherently a single-particle phenomenon. See Methods for further details and optimizations of our mapping scheme on the HOT lattices considered, and Supplementary Note 1 for an extended general discussion, including examples of other lattices and models.

With our mapping, a d-dimensional HOT lattice $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ with $L^{d}$ sites is mapped onto an interacting 1D chain $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ requiring dL qubits, which can be feasibly realized on existing NISQ devices for $L\sim\mathcal{O}(10)$ and $d\le 4$. While the resultant interactions in $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ are inevitably complicated, below we describe how $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ can be viably simulated on quantum hardware.
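
The savings are easy to tabulate; the lines below reproduce the qubit counts of the hardware experiments described later (32, 18, and 24 qubits for the 2D, 3D, and 4D lattices).

```python
# Qubit counts with and without the mapping, for the lattice sizes used in
# the experiments below (2D 16x16, 3D 6x6x6, 4D 6x6x6x6).
for d, L in [(2, 16), (3, 6), (4, 6)]:
    print(f"d={d}, L={L}: direct ~ L^d = {L**d:4d} qubits, mapped dL = {d*L} qubits")
```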

A high-level overview of our general framework for simulating HOT time-evolution is illustrated in Fig. 2. To evolve an initial state $|\psi_{0}\rangle$, it is necessary to implement the unitary propagator $U(t)=\exp(-i\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}t)$ as a quantum circuit, such that the circuit yields $|\psi(t)\rangle=U(t)|\psi_{0}\rangle$ and desired observables can be measured upon termination. A standard method to implement U(t) is Trotterization, which decomposes $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ in the spin-1/2 basis and splits time-evolution into small steps (see Methods for details). However, while straightforward, such an approach yields deep circuits unsuitable for present-generation NISQ hardware. To compress the circuits, we utilize a tensor network-aided recompilation technique30,31,32,33. We exploit the number-conserving symmetries of $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ in each boson species, arising from $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ and the nature of our mapping (see Methods), to enhance circuit construction performance and quality at large circuit breadths (up to 32 qubits). Moreover, to improve data quality amidst hardware noise, we employ a suite of error mitigation techniques, in particular, readout error mitigation (RO) that approximately corrects bit-flip errors during measurement34, a post-selection (PS) technique that discards results in unphysical Fock-space sectors30,35, and averaging across machines and qubit chains (see Methods).
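
As an illustration of the PS step, the sketch below (our code) keeps only bitstrings in the physical Fock sector, in which each species register contains exactly one boson. For simplicity it assumes a species-major qubit layout, with all qubits of a species grouped contiguously; this differs from the per-site qubit pairing described earlier but carries the same information.

```python
# Sketch of Fock-sector post-selection: with d species registers of L qubits
# each and one hardcore boson per species, every physical bitstring has
# Hamming weight exactly 1 within each register. Layout is an assumption.
def post_select(counts: dict, d: int, L: int) -> dict:
    kept = {}
    for bits, c in counts.items():
        registers = [bits[i * L:(i + 1) * L] for i in range(d)]
        if all(reg.count("1") == 1 for reg in registers):
            kept[bits] = c
    return kept

raw = {"100010": 480, "100000": 30, "110010": 12, "001001": 502}  # toy d=2, L=3 counts
print(post_select(raw, d=2, L=3))  # {'100010': 480, '001001': 502}
```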

Fig. 2: a, b Mapping of a higher-dimensional lattice to a 1D interacting chain to facilitate quantum simulation on near-term devices. Concretely, a two-dimensional single-particle lattice can be represented by a two-species interacting chain; a three-dimensional lattice can be represented by a three-species chain with three-body interactions. c Overview of quantum simulation methodology: higher-dimensional lattices are first mapped onto interacting chains, then onto qubits; various techniques, such as d Trotterization and e ansatz-based recompilation, enable the construction of quantum circuits for dynamical time-evolution, or IQPE for probing the spectrum. The quantum circuits are executed on the quantum processor, and results are post-processed with RO and PS error mitigations to reduce the effects of hardware noise. See Methods for elaborations on the mapping procedure, and quantum circuit construction and optimization.

After acting on $|\psi_{0}\rangle$ with the quantum circuit that effects U(t), terminal computational-basis measurements are performed on the simulation qubits. We retrieve the site-resolved occupancy densities $\rho(\mathbf{r})=\langle c_{\mathbf{r}}^{\dagger}c_{\mathbf{r}}\rangle=\langle\prod_{\alpha=1}^{d}n_{r_{\alpha}}^{\alpha}\rangle$ on the d-dimensional lattice, and the extent of evolution of $|\psi(t)\rangle$ away from $|\psi_{0}\rangle$, whose occupancy densities are $\rho_{0}(\mathbf{r})$, is assessed via the occupancy fidelity

$$0\le\mathcal{F}_{\rho}=\frac{\left[\sum_{\mathbf{r}}\rho(\mathbf{r})\rho_{0}(\mathbf{r})\right]^{2}}{\left[\sum_{\mathbf{r}}\rho(\mathbf{r})^{2}\right]\left[\sum_{\mathbf{r}}\rho_{0}(\mathbf{r})^{2}\right]}\le 1.\tag{8}$$

Compared to the state fidelity $\mathcal{F}=|\langle\psi_{0}|\psi\rangle|^{2}$, the occupancy fidelity $\mathcal{F}_{\rho}$ is considerably more resource-efficient to measure on quantum hardware.
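
Both quantities are straightforward to evaluate from post-selected counts. The sketch below (our helpers, using the same species-major layout assumed in the post-selection example) estimates $\rho(x, y)$ for the d=2 case and evaluates Eq. (8) directly.

```python
# Estimate site-resolved densities rho(x, y) and the occupancy fidelity of
# Eq. (8) from post-selected counts. For a valid d = 2 bitstring, the product
# n^mu_x n^nu_y equals 1 at exactly one lattice site (x, y).
import numpy as np

def densities(counts: dict, L: int) -> np.ndarray:
    rho, total = np.zeros((L, L)), sum(counts.values())
    for bits, c in counts.items():
        x = bits[:L].index("1")            # position of the mu boson
        y = bits[L:].index("1")            # position of the nu boson
        rho[x, y] += c / total
    return rho

def occupancy_fidelity(rho: np.ndarray, rho0: np.ndarray) -> float:
    return float(np.sum(rho * rho0) ** 2 / (np.sum(rho**2) * np.sum(rho0**2)))

rho0 = densities({"100100": 1}, L=3)                # corner-localized initial state
rho_t = densities({"100100": 9, "010100": 1}, L=3)  # slightly diffused later state
print(occupancy_fidelity(rho_t, rho0))              # ~0.99, i.e. near unity
```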

In addition to time evolution, we can also directly probe the energy spectrum of our simulated Hamiltonian $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ through iterative quantum phase estimation (IQPE)36; see Methods. Specifically, to characterize the topology of HOT systems, we use IQPE to probe the existence of midgap HOT modes at exponentially suppressed (effectively zero for $L\gg 1$) energies. In contrast to quantum phase estimation37,38, IQPE circuits are shallower and require fewer qubits, and are thus preferable for implementation on NISQ hardware. As our interest is in HOT modes, we initiate IQPE with maximally localized boundary states that are easily constructed a priori, which exhibit good overlap (>80% state fidelity) with HOT eigenstates, and examine whether IQPE converges consistently towards zero energy. These states are listed in Supplementary Table 2.
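
For readers unfamiliar with IQPE, the following bare-bones classical emulation (our sketch, with noise-free "measurements" and an exact eigenstate) shows its bit-by-bit feedback loop; on hardware, each controlled power of U is a circuit acting on the simulation register plus the single ancilla.

```python
# Classical emulation of ideal IQPE: recover m bits of the phase phi, where
# U|psi> = exp(2*pi*i*phi)|psi>, measuring the least significant bit first
# and feeding measured bits back as a phase offset on the ancilla.
import numpy as np
from scipy.linalg import expm

def iqpe(U: np.ndarray, psi: np.ndarray, m: int = 12) -> float:
    bits = [0] * (m + 1)                     # bits[k] = k-th binary digit of phi
    for k in range(m, 0, -1):                # least significant digit first
        Uk = np.linalg.matrix_power(U, 2 ** (k - 1))
        omega = -2 * np.pi * sum(bits[j] * 2.0 ** (k - 1 - j)
                                 for j in range(k + 1, m + 1))
        theta = np.angle(psi.conj() @ (Uk @ psi)) + omega  # net ancilla phase
        bits[k] = int(np.cos(theta) < 0)     # ideal ancilla measurement outcome
    return sum(bits[k] * 2.0 ** (-k) for k in range(1, m + 1))

# Demo: estimate an eigenphase of U = exp(-i H t) for a random 4x4 Hamiltonian.
rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
E, V = np.linalg.eigh(H)
t = 0.1                                      # keep |E t| < pi to avoid phase aliasing
print(iqpe(expm(-1j * H * t), V[:, 0]),      # estimate ...
      (-E[0] * t / (2 * np.pi)) % 1)         # ... vs exact, agreeing to ~2**-12
```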

As the lowest-dimensional incarnation of HOT lattices, the d=2 staggered square lattice harbors only one type of HOT mode: zero-dimensional corner modes (Fig. 1a). Previously, such HOT corner modes on 2D lattices have been realized in various metamaterials39,40 and photonic waveguides41, but not in a purely quantum setting to date. Our equivalent 1D hardcore boson chain can be interpreted as possessing interaction-induced topology that manifests in the joint configuration space of the d bosons hosted on the many-body chain. Here, the topological localization is mediated not by physical SSH-like couplings or band polarization but by the combined exclusion effects of all its interaction terms. We emphasize that our physically realized 1D chain contains highly non-trivial interaction terms involving multiple sites; the illustrative example in Fig. 3f for an L=6 chain already contains a multitude of interactions, even though it is much smaller than the L=10 and L=16 systems we simulated on quantum hardware. As evident, the $d\cdot 2^{d}=8$ unique types of interactions, corresponding to the 8 different couplings on the lattice, are mostly non-local; but this does not prohibit their implementation on quantum circuits. Indeed, the versatility of digital quantum simulators in realizing effectively arbitrary interactions allows the implementation of complex interacting Hamiltonian terms, and is critical in enabling our quantum device simulations.

Fig. 3: a Ordered eigenenergies on a 10×10 lattice for the topologically trivial C0 and nontrivial C2 and C4 configurations. They correspond to 0, 2, and 4 midgap zero modes (red diamonds), as measured via IQPE on a 20-qubit quantum chain plus an additional ancillary qubit; the shaded red band indicates the IQPE energy resolution. The corner state profiles (right insets) and other eigenenergies (black and gray dots) are numerically obtained via ED. Time-evolution of four initial states on a 16×16 lattice mapped onto a 32-qubit chain: b, c localized at corners to highlight topological distinction, d localized along an edge, and e delocalized in the vicinity of a corner. Left plots show occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density $\rho(x, y)$ of the initial states (darker shading represents higher density). The right grid shows occupancy density measured on hardware at two later times. States with good overlap with robust corners exhibit minimal evolution. Error bars represent standard deviation across repetitions on different qubit chains and devices. In general, the heavy overlap between an initial state and a HOT eigenstate confers topological robustness, resulting in significantly slowed decay. f Schematic of the interacting chain Hamiltonian, mapped from the parent 2D lattice, illustrated for a smaller 6×6 square lattice. The physical sites of the interacting boson chain are colored black, with their many-body interactions represented by colored vertices. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted $v_{\boldsymbol{\pi}}^{\alpha}$ and $w_{\boldsymbol{\pi}}^{\alpha}$ for axes $\alpha\in\{x, y\}$ and parities $\boldsymbol{\pi}\in\mathbb{Z}_{2}^{1}$.

In our experiments, we consider three different scenarios: C0, having no topological corner modes; C2, having two corner modes at corners (x, y)=(1, 1) and (L, 1); and C4, having corner modes on all four corners. These scenarios can be obtained by appropriately tuning the eight coupling parameters in the Hamiltonian (Eq. (4)); see Supplementary Table 1 for parameter values42.

We first show that the correct degeneracy of midgap HOT modes can be measured for each of the configurations C0, C2, and C4 on IBM transmon-based quantum computers, as presented in Fig. 3a. For a start, we used a 20-qubit chain, which logically encodes a 10×10 HOT lattice, with an additional ancillary qubit for IQPE readout. The number of topological corner modes in each case is accurately obtained through the degeneracy of midgap states of exponentially suppressed energy (red), as measured through IQPE executed on quantum hardware; see Methods for details. That these midgap modes are indeed corner-localized is verified via numerical (classical) diagonalization, as in the insets of Fig. 3a.

Next, we demonstrate highly accurate dynamical state evolution on larger 32-qubit chains on quantum hardware. We time-evolve various initial states on 16×16 HOT lattices in the C0, C2, and C4 configurations and measure their site-resolved occupancy densities $\rho(x, y)$, up to a final time t=0.8 when fidelity trends become unambiguous. The resultant occupancy fidelity plots (Fig. 3b-e) conform to the expectation that states localized on topological corners survive the longest, and are also in excellent agreement with reference data from ED. For instance, a localized state at the corner (x0, y0)=(1, 1) is robust on the C2 and C4 lattice configurations (Fig. 3b), whereas one localized on the (x0, y0)=(1, L) corner is robust only on the C4 configuration (Fig. 3c). These fidelity decay trends are corroborated by the measured site-resolved occupancy density $\rho(x, y)$: low occupancy fidelity is always accompanied by a $\rho(x, y)$ diffused away from the initial state, whereas strongly localized states have high occupancy fidelity. In general, heavy overlap between an initial state and a HOT eigenstate confers topological robustness, resulting in significantly slowed decay; this is apparent from the occupancy fidelities, which remain near unity over time. In comparison, states that do not enjoy topological protection, such as the (1, L)-localized state on the C2 configuration and all initial states on the C0 configuration, rapidly delocalize and decay quickly.

Our experimental runs remain accurate even for initial states that are situated away from the lattice corners, such that they cannot enjoy full topological protection. In Fig. 3d, the initial state at (x0, y0)=(2, 1), which neighbors the corner (1, 1), loses its fidelity much sooner than the corner initial state of Fig. 3b, even for the C2 and C4 topological corner configurations. That said, its fidelity evolution still agrees well with ED reference data. In a similar vein, an initial state that is somewhat delocalized at a corner (Fig. 3e) is still conferred a degree of stability when the corner is topological.

Next, we extend our investigation to the staggered cubic lattice in 3D, which hosts third-order HOT corner modes (Fig. 1a). These elusive corner modes have to date only been realized in classical platforms43 or in synthetic electronic lattices44. Compared to the 2D cases, the implementation of the 3D HOT lattice (Eq. (6)) as a 1D interacting chain (Eq. (7)) on quantum hardware is more sophisticated. The larger dimensionality of the staggered cubic lattice, in comparison to the square lattice, is reflected by a larger density of multi-site interaction terms on the interacting chain. This is illustrated in Fig. 4b for the minimal 4×4×4 lattice, where the combination of the various d=3-body interactions gives rise to emergent corner robustness (which appears as up to 3-body boundary clustering as seen on the 1D chain).

Fig. 4: a The header row displays energy spectra for the topologically trivial C0 and inequivalent nontrivial C4a, C4b, and C8 configurations. The configurations host 0, 4, 4, and 8 midgap zero modes (red diamonds), as measured via IQPE on an 18-qubit chain plus an ancillary qubit; the shaded red band indicates the IQPE energy resolution. Schematics illustrating the locations of topologically robust corners are shown on the right. Subsequent rows depict the time-evolution of five initial states on a 6×6×6 lattice mapped onto an 18-qubit chain: localized at a corner, on an edge, on a face, and in the bulk of the cube, and delocalized in the vicinity of a corner. The leftmost column plots occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density $\rho(x, y, z)$ of the initial state (darker shading represents higher density). The central grid shows occupancy density measured on hardware at a later time (t=0.6), for the corresponding initial state (row) and lattice configuration (column). Error bars represent standard deviation across repetitions on different qubit chains and devices. Again, initial states localized close to topological corners exhibit higher occupancy fidelity. b Hamiltonian schematic of the interacting chain realizing a minimal 4×4×4 cubic lattice. Sites on the chain are colored black; colored vertices connecting to multiple sites on the chain denote interaction terms. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted $v_{\boldsymbol{\pi}}^{\alpha}$ and $w_{\boldsymbol{\pi}}^{\alpha}$ for axes $\alpha\in\{x, y, z\}$ and parities $\boldsymbol{\pi}\in\mathbb{Z}_{2}^{2}$.

On quantum hardware, we implemented 18-qubit chains representing 6×6×6 cubic lattices in four configurations, specifically, the trivial lattice (C0), two geometrically inequivalent configurations hosting four topological corners (C4a, C4b), and a configuration with all $2^{3}=8$ topological corners (C8). Similar to the 2D HOT lattice, we first present the degeneracy of zero-energy topological modes (header row of Fig. 4a) with low-energy spectral data (red diamonds) accurately obtained via IQPE.

From the first row of Fig. 4a, it is apparent that initial states localized on topological corners enjoy significant robustness. Namely, the measured site-resolved occupancy densities $\rho(x, y, z)$ (four right columns) indicate that the localization of (x0, y0, z0)=(1, 1, 1) corner initial states on the C4a, C4b, and C8 configurations is maintained, and measured occupancy fidelities remain near unity. In comparison, an initial corner-localized state on the C0 configuration, which hosts no topological corner modes, delocalizes quickly. Moving away from the corners, an edge-localized state adjacent to a topological corner is conferred slight, but nonetheless present, stability (second row of Fig. 4a), as observed from the slower decay of the (x0, y0, z0)=(2, 1, 1) state on the C4a, C4b, and C8 configurations in comparison to the topologically trivial C0 lattice. This conferred robustness is diminished for states localized further from topological corners, for instance, surface-localized states (third row), and is virtually unnoticeable for states localized in the bulk (fourth row), which decay rapidly for all topological configurations. Initial states that are slightly delocalized near a corner enjoy some protection when the corner is topological, but are unstable when the corner is trivial (fifth row of Fig. 4a). We again highlight the quantitative agreement of our quantum hardware simulation results with theoretical ED predictions.

We now turn to our key results: the NISQ quantum hardware simulation of four-dimensional staggered tesseract HOT lattices. A true 4D lattice is difficult to simulate on most experimental platforms, and with a few exceptions45, most works to date have relied on using synthetic dimensions18,46. In comparison, utilizing our exact mapping (Eqs. (6) and (7)) that exploits the exponentially large many-body Hilbert space accessible by a quantum computer, a tesseract lattice can be directly simulated on a physical 1D spin (qubit) chain, with the number of spatial dimensions limited only by the number of qubits. The tesseract unit cell can be visualized as two interlinked three-dimensional cubes (spanned by the x, y, z axes) living in adjacent w-slices (Fig. 5). The full tesseract lattice of side length L is then represented as successive cubes with different w coordinates, stacked from inside out, with the inner and outer wireframe cubes being the w=1 and w=L slices. Being more sophisticated, the 4D HOT lattice features various types of HOT corner, edge, and surface modes (Fig. 1a); we presently focus on the fourth-order (hexadecapolar) HOT corner modes, as well as the third-order (octopolar) HOT edge modes.

Fig. 5: An L=6 tesseract lattice is illustrated as six cube slices indexed by w and highlighted on a color map. The header row displays energy spectra computed numerically for the topologically trivial C0 and nontrivial C4, C8, and C16 configurations. The configurations host 0, 4, 8, and 16 midgap zero modes (black circles). Schematics on the right illustrate the locations of the topologically robust corners. Subsequent rows depict the time-evolution of three initial states on a 6×6×6×6 lattice mapped onto a 24-qubit chain: localized on a a corner, b an edge, and c a face. The leftmost column plots occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density $\rho(x, y, z, w)$ of the initial state. The central grid shows occupancy density measured on hardware at the final simulation time (t=0.6), for the corresponding initial state (row) and lattice configuration (column). The color of individual sites (spheres) denotes their w-coordinate and color saturation denotes occupancy of the site; unoccupied sites are translucent. Error bars represent standard deviation across repetitions on different qubit chains and devices. Initial states with less overlap with topological corners exhibit slightly lower stability than their lower-dimensional counterparts, as these states diffuse into the more spacious 4D configuration space. d Hamiltonian schematic of the interacting chain realizing a minimal 4×4×4×4 tesseract lattice. Sites on the chain are colored black; colored vertices connecting to multiple sites on the chain denote interaction terms. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted $v_{\boldsymbol{\pi}}^{\alpha}$ and $w_{\boldsymbol{\pi}}^{\alpha}$ for axes $\alpha\in\{x, y, z, w\}$ and parities $\boldsymbol{\pi}\in\mathbb{Z}_{2}^{3}$. To limit visual clutter, only the $v_{\boldsymbol{\pi}}^{\alpha}$ intra-cell couplings are shown; a corresponding set of $w_{\boldsymbol{\pi}}^{\alpha}$ inter-cell couplings are present in the Hamiltonian but have been omitted from the diagram.

To start, we realized a dL = 4×6 = 24-qubit chain on the quantum processor, which encodes a 6×6×6×6 HOT tesseract. The 4-body (8-operator) interactions now come in $d\cdot 2^{d}=64$ types; half of them are illustrated in Fig. 5d, which depicts only the minimal L=4 case. As discussed in the Mapping higher-dimensional lattices to 1D quantum chains section, these interactions are each a product of d−1 density terms and a hopping process, the latter acting on the particle species that encodes the coupling direction on the HOT tesseract. In generic models with non-axially aligned hopping, these interactions could be a product of up to d hopping processes. As we shortly illustrate, despite the complexity of the interactions, the signal-to-noise ratio in our hardware simulations (Fig. 5a) remains reasonably good.

In Fig. 5, we consider the configurations C0, C4, C8, and C16, which correspond respectively to the topologically trivial scenario and lattice configurations hosting four, eight, and all sixteen HOT corner modes, as schematically sketched in the header row. Similar to the 2D and 3D HOT lattices, the site-resolved occupancy density $\rho(x, y, z, w)$ and occupancy fidelities measured on quantum hardware reveal strong robustness for initial states localized at topological corners, as illustrated by the strongly localized final states in the C4, C8, and C16 cases (Fig. 5a). However, their stability is now slightly lower, partly due to the more spacious 4D configuration space into which the state can diffuse, as seen from the colored clouds of partly occupied sites after time evolution. Evidently, the stability diminishes as we proceed to the edge- and surface-localized initial states (Fig. 5b and c).

Next, we investigate a lattice configuration that supports HOT edge modes (commonly referred to as topological hinge states in the literature22). So far we have seen topological robustness only from topological corner sites (Fig. 5); but with appropriate parameter tuning (see Supplementary Table 1), topological modes can be made to lie along entire edges. This is illustrated in the header row of Fig. 6, where topological modes lie along the y-edges. As our HOT lattices are constructed from a mesh of alternating SSH chains, we expect the topological edges to have wavefunction support (nonzero occupancy) only on alternate sites, consistent with the cumulative occupancy densities of the midgap zero-energy modes. This is corroborated by site-resolved occupancy densities and occupancy fidelities measured on quantum hardware, which demonstrate that initial states localized on sites with topological wavefunction support are significantly more robust (Fig. 6a, b); i.e., (x0, y0, z0, w0)=(1, 3, 1, L) overlaps with the topological mode on the (1, y, 1, L), y ∈ {1, 3, 5} sites and is hence robust, but (1, 2, 1, L) is not. The stability of the initial state is reduced as we move farther from the corner, as can be seen, for instance, by comparing occupancy fidelities and the size of the final occupancy cloud for (1, 1, 1, L) and (1, 3, 1, L) in Fig. 6a, b, which is expected from the decaying y-profile of the topological edge mode. Finally, our measurements verify that surface-localized states do not enjoy topological protection (Fig. 6c), as they are localized far away from the topological edges. It is noteworthy that such measurements into the interior of the 4D lattice can be made without additional difficulty on our 1D qubit chain, but doing so can present significant challenges on other platforms, even electrical (topolectrical) circuits.

Fig. 6: Our mapping facilitates the realization of any desired HOT modes, beyond the aforementioned corner mode examples. The header row on the left displays the energy spectrum for a configuration of the tesseract harboring topologically non-trivial edges (midgap mode energies in black). The accompanying schematic highlights alternating sites with topological edge wavefunction support. Subsequent columns present the site-resolved occupancy density $\rho(x, y, z, w)$ for a 6×6×6×6 lattice mapped onto a 24-qubit chain, measured on quantum hardware at t=0 (first row) and the final simulation time t=0.6 (second row), for three different experiments. a A corner-localized state along a topological edge is robust, compared to one along a non-topological edge. b On a topologically non-trivial edge, a state localized on a site with topological wavefunction support is robust, compared to one localized on a site without support. c A surface-localized state far away from the topological edges diffuses into a large occupancy cloud. The bottom leftmost panel summarizes occupancy fidelities for the various initial states, obtained from ED and hardware (labeled HW). Error bars represent standard deviation across repetitions on different qubit chains and devices.

Our approach of mapping a d-dimensional HOT lattice onto an interacting 1D chain enabled a drastic reduction in the number of qubits required for simulation, and served a pivotal role in enabling the hardware realizations presented in this work. Here, we further illustrate that employing this mapping for simulation on quantum computers can provide a resource advantage over ED on classical computers, particularly at large lattice dimensionality d or linear size L. For this discussion, we largely leave aside tensor network methods, as their advantage over ED is unclear in the generic setting of lattice dimensionality d>1, with arbitrary initial states and evolution time (which may generate large entanglement).

To be concrete, we consider simulation tasks of the following broad type: given an initial state $|\psi_{0}\rangle$, we wish to perform time-evolution to $|\psi(t)\rangle$ and extract the expectation value of an observable O that is local, that is, O depends on $\mathcal{O}(l^{d})$ sites of the lattice for a fixed neighborhood of radius l independent of L. State preparation or initialization resources for $|\psi_{0}\rangle$ are excluded from our considerations, as there can be significant variations in costs depending on the choice of specification of the state for both classical and quantum methods. Measurement costs for computing O, however, are considered. To ensure a meaningful comparison, we assume first-order Pauli-basis Trotterization for the construction of quantum circuits, such that circuit preparation is algorithmically straightforward given a lattice Hamiltonian. As a baseline, classical ED of a d-dimensional, length-L system with a single particle generally requires $\mathcal{O}(L^{3d})$ run-time and $\mathcal{O}(L^{2d})$ dense classical storage to complete a task of this type47.

A direct implementation of a generic Hamiltonian using our mapping gives $\mathcal{O}(dL^{d}\cdot 2^{d})$ Pauli strings per Trotter step (see Methods), where the hoppings along each edge of the lattice, extensive in number, are allowed to be independently tuned. However, physically relevant lattices typically host only a systematic subset of hopping processes, described by a sub-extensive number of parameters. In particular, in the HOT lattices we considered, the hopping amplitude $u_{\mathbf{r}}^{\alpha}$ along each axis depends only on $\alpha$ and the parities of the coordinates $\mathbf{r}$. Noting the sub-extensive number of distinct hoppings, the lattice Hamiltonian can be written in a more favorable factorized form, yielding $\mathcal{O}(dL\cdot 2^{2d})$ Pauli strings per Trotter step (see Methods). Decomposing into a hardware gate set, the total number of gates in a time-evolution circuit scales as $\mathcal{O}(d^{2}L^{2}\cdot 2^{2d}/\epsilon)$ in the worst case for simulation precision $\epsilon$, assuming all-to-all connectivity between qubits. Imposing linear NN connectivity on the qubit chain does not alter this bound. Crucially, there is no scaling of the form ~$L^{d}$, exponential in d, unlike classical ED.

For large L and d, the circuit preparation and execution time can be lower than the $\mathcal{O}(L^{3d})$ run-time of classical ED. We illustrate this in Fig. 7, which shows a qualitative comparison of run-time scaling between the quantum simulation approach and ED. We have assumed the execution time on hardware to scale as the number of gates in the circuit, $\mathcal{O}(d^{2}L^{2}\cdot 2^{2d}/\epsilon)$, which neglects speed-ups afforded by parallelization of single- or two-qubit gates acting on disjoint qubits48. The difference in asymptotic complexities implies a crossover at large L or d beyond which quantum simulation exhibits a growing advantage. The exact crossover boundary is sensitive to platform-specific details such as gate times and control capabilities; given the large spread in gate timescales (3 orders of magnitude) across present-day platforms49,50, and uncertain overheads from quantum error correction or mitigation, we avoid giving definite numerical promises on breakeven L and d values. Classical memory usage is similarly bounded during circuit construction, straightforwardly reducible to $\mathcal{O}(dL)$ by constructing and executing gates in a streaming fashion51, and worst-case $\mathcal{O}(2^{ld})$ during readout to compute O, reducible to a constant supposing basis changes mapping the components of O onto the computational basis of a fixed number of measured qubits can be implemented on the quantum circuits52.
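
The crossover can be eyeballed with a toy cost model; the snippet below simply evaluates the two scalings quoted above with all prefactors set to one, so it indicates trends rather than wall-clock times.

```python
# Toy comparison of the asymptotic cost models discussed above (prefactors
# and hardware gate times omitted; trends only, not wall-clock predictions).
def ed_cost(L, d):
    return L ** (3 * d)                      # classical ED run-time ~ L^(3d)

def qc_cost(L, d, eps=1e-2):
    return d**2 * L**2 * 4**d / eps          # Trotter gate count ~ d^2 L^2 4^d / eps

for L in (4, 8, 16, 32):
    print(f"L={L:2d}, d=3: ED ~ {ed_cost(L, 3):.1e}, QC ~ {qc_cost(L, 3):.1e}")
```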

Fig. 7: Comparison of asymptotic computational time required for the dynamical simulation of d-dimensional, size-L lattice Hamiltonians of similar complexity to our HOT lattices. a With fixed lattice dimension d and increasing lattice size L, the time taken with our approach on a quantum computer (labeled QC) scales as $L^{2}$, rather than the higher power $L^{3d}$ of classical ED. b For fixed L and varying d, our approach scales promisingly, growing like $4^{d}$ instead of the $(L^{3})^{d}$ of ED. We assume conventional Trotterization for circuit construction; at large L and d, our mapping and quantum simulation approach can provide a resource advantage over classical numerical methods (e.g., ED).

The favorable resource scaling (run-time and memory), in combination with the modest dL qubits required, suggests promising scalability of our mapped quantum simulation approach, especially for realizing larger and higher-dimensional HOT lattices. We re-iterate, however, that Trotterized circuits without additional optimization remain largely too deep for present-generation NISQ hardware to execute feasibly. The use of qudit hardware architectures in place of qubits can allow shallower circuits53; in particular, using a qudit of local Hilbert space dimension $2^{d}$ instead of a group of d qubits avoids, to a degree, the decomposition of long-range multi-site gates, assuming the ability to efficiently and accurately perform single- and two-qudit operations54. Nonetheless, for the quantum simulation of sophisticated topological lattices as described to reach its full potential, fault-tolerant quantum computation, or at the least quantum devices with vastly improved error characteristics and decoherence times, will likely be needed.


With spin centers, quantum computing takes a step forward – UC Riverside

Quantum computing, which uses the laws of quantum mechanics, can solve pressing problems in a broad range of fields, from medicine to machine learning, that are too complex for classical computers. Quantum simulators are devices made of interacting quantum units that can be programmed to simulate complex models of the physical world. Scientists can then obtain information about these models and, by extension, about the real world by varying the interactions in a controlled way and measuring the resulting behavior of the quantum simulators.

In a paper published in Physical Review B and selected by the journal as an editors' suggestion, a UC Riverside-led research team has proposed a chain of quantum magnetic objects, called spin centers, that, in the presence of an external magnetic field, can quantum simulate a variety of magnetic phases of matter as well as the transitions between these phases.

"We are designing new devices that house the spin centers and can be used to simulate and learn about interesting physical phenomena that cannot be fully studied with classical computers," said Shan-Wen Tsai, a professor of physics and astronomy, who led the research team. "Spin centers in solid state materials are localized quantum objects with great untapped potential for the design of new quantum simulators."

According to Troy Losey, Tsai's graduate student and first author of the paper, advances with these devices could make it possible to study more efficient ways of storing and transferring information, while also developing methods needed to create room-temperature quantum computers.

"We have many ideas for how to make improvements to spin-center-based quantum simulators compared to this initial proposed device," he said. "Employing these new ideas and considering more complex arrangements of spin centers could help create quantum simulators that are easy to build and operate, while still being able to simulate novel and meaningful physics."

Below, Tsai and Losey answer a couple of questions about the research:

Tsai: It is a device that exploits the unusual behaviors of quantum mechanics to simulate interesting physics that is too difficult for a regular computer to calculate. Unlike quantum computers that operate with qubits and universal gate operations, quantum simulators are individually designed to simulate or solve specific problems. By trading off the universal programmability of quantum computers in favor of exploiting the richness of different quantum interactions and geometrical arrangements, quantum simulators may be easier to implement and provide new applications for quantum devices, which is relevant because quantum computers aren't yet universally useful.

A spin center is a roughly atom-sized quantum magnetic object that can be placed in a crystal. It can store quantum information, communicate with other spin centers, and be controlled with lasers.

Losey: We can build the proposed quantum simulator to simulate exotic magnetic phases of matter and the phase transitions between them. These phase transitions are of great interest because at these transitions the behaviors of very different systems become identical, which implies that there are underlying physical phenomena connecting these different systems.

The techniques used to build this device can also be used for spin-center-based quantum computers, which are a leading candidate for the development of room temperature quantum computers, whereas most quantum computers require extremely cold temperatures to function. Furthermore, our device assumes that the spin centers are placed in a straight line, but it is possible to place the spin centers in up to 3-dimensional arrangements. This could allow for the study of spin-based information devices that are more efficient than methods that are currently used by computers.

As quantum simulators are easier to build and operate than quantum computers, we can currently use quantum simulators to solve certain problems that regular computers don't have the ability to address, while we wait for quantum computers to become more refined. However, this doesn't mean that quantum simulators can be built without challenge, as we are just now getting close to being good enough at manipulating spin centers, growing pure crystals, and working at low temperatures to build the quantum simulator that we propose.


Guest Post Controlling the Qubits: Overcoming DC Bias and Size Challenges in Quantum – The Quantum Insider

Guest Post by Gobinath Tamil Vanan (bio below)

Quantum computing, with its promise of efficient calculations in challenging applications, is rapidly advancing in research and development. The pivotal technology for quantum computing lies in the control and evaluation of qubits.

Quantum computing is gaining attention for its ability to solve complex problems that prove difficult for regular computers. In this journey, instruments like the DC bias source play a crucial role, especially for flux-tunable superconducting and silicon spin qubits. The DC bias source adjusts the flux that sets the resonance frequency of a superconducting qubit, and applies a DC bias voltage to each gate terminal of a silicon spin qubit. In addition, as the number of qubits in a quantum computer grows, the physical size of the machine grows with the number of DC bias sources needed to control them.

Figure 1. Single qubit control and evaluation system for flux-tunable superconducting and silicon spin qubits. The instruments and lines indicated in red represent the DC voltage bias source and wiring. For the flux-tunable superconducting qubit, the DC voltage bias source helps tune the resonance frequency using the magnetic flux generated in the coil. For the silicon spin qubit, the DC voltage bias source works by tuning the electric potential of gate terminals.

An engineer can initialize, control, and read the qubit states by using control and evaluation systems, as depicted in Figure 1. Such a system enables the characterization of qubit properties like coherence time and fidelity, and the execution of benchmark tests, thereby advancing the research and development of quantum computers.

Challenges in DC biasing of qubits

There are two significant challenges when using DC power supplies: 1) voltage fluctuations due to DC power supply noise and environmental interference through long cables induce qubit decoherence, and 2) DC power supplies that may number several hundred require substantial storage space and can introduce significant qubit decoherence.

Qubits are highly susceptible to noise, and even minor fluctuations in the DC bias voltage can quickly induce unintended changes in the quantum state. These changes can lead to the loss of information stored in the qubit, a phenomenon known as decoherence. This results in a decline in the precision of qubit control and evaluation. Moreover, quantum computers have now reached a stage where they can exceed 100 qubits. Supplying an independent DC bias to each qubit then requires securing substantial space to house several hundred general-purpose power supplies.

Voltage fluctuations induce qubit decoherence

In the quantum world, qubits exist in a superposition of states, representing both 0 and 1 simultaneously. This unique property makes them exceptionally powerful for certain computations. However, it also makes them incredibly sensitive to external influences. The challenge arises when DC power supply noise and environmental interference introduce voltage fluctuations that disturb the delicate balance of the qubit's superposition.

Even the slightest variation in voltage can cause the qubit's quantum state to waver, leading to decoherence and making qubits less reliable for computations. This is a significant challenge in quantum computing because maintaining the integrity of qubit states is crucial for accurate and reliable quantum information processing.

Figure 2. Fluctuation in DC voltage bias propagated to qubits

Fluctuations in DC bias voltage primarily contribute to the DC power supplys output voltage noise. Furthermore, environmental interference, such as electromagnetic interference and physical vibrations of the cables, can contribute to voltage instability. The qubits sensitivity to noise necessitates continual monitoring of the potential noise source. Figure 2 illustrates how this effect becomes more pronounced when extending the cables because the DC power supply rack is at a distance from the entrance of the cryostat or if the power supply is in the lower sections of the rack.

The occurrence of voltage fluctuations disrupting qubit coherence is rooted in the fundamental nature of quantum systems. The challenge is not just about preventing external disturbances but also about developing tools and technologies that can shield qubits from these disturbances, ensuring stable and coherent quantum states for reliable computational processes.

Larger quantum computers can introduce more qubit decoherence

One important direction in the practical application of quantum computers is the increase in the number of qubits to run more complex quantum algorithms. For example, the current noisy intermediate-scale quantum (NISQ) machines under development require the implementation of tens to hundreds of qubits. This creates a need for a significant number of DC power supplies, which both require physical storage space and can introduce a lot of noise into the system.

This proliferation of DC bias sources introduces additional noise into the system. The noise from DC bias sources can stem from the following three factors:

1. Power Supply Imperfections: Not all power supplies are made for precision, and even small fluctuations or imperfections in the DC bias source can translate to noise in the qubit's operation.

2. Crosstalk: In a setup with numerous DC bias sources in close proximity, crosstalk can occur. This means that adjustments made to one qubit's bias source can affect neighboring qubits, leading to unwanted noise.

3. Electromagnetic Interference (EMI): The operation of multiple DC bias sources in a confined space can generate electromagnetic fields that interfere with each other. This interference can manifest as noise that disrupts the qubits' quantum states.

As the number of DC bias sources increases to accommodate a larger number of qubits, the overall size of the system increases, and the cumulative effect of these noise sources becomes more pronounced. Each additional DC bias source adds another layer of potential noise, making it challenging to maintain the precision and coherence of the qubits' states.

Figure 3. Configuration of 100-channel conventional precision power sources with a large footprint

Taking the example of a quantum computer that uses 100 qubits, providing a DC bias voltage to each qubit presents an additional challenge. Figure 3 illustrates that each qubit requires at least one DC power supply. The test rack must fit a minimum of 100 power supply channels to bias all qubits. Even with power supplies of a typical 2U half-rack size housed in a maximum-sized rack, a single rack can only store 40 channels.

Consequently, securing ample space measuring 180 cm in width, 90.5 cm in depth, and 182 cm in height for three racks becomes necessary in a laboratory already filled with various other instruments, such as arbitrary waveform generators (AWGs). This creates a logistical challenge regarding physical space within quantum computing laboratories. The spatial challenge not only impacts the physical layout of the lab but also raises practical concerns about efficient management, accessibility, and equipment maintenance. To address this challenge, there is a growing emphasis on developing compact and efficient power supply solutions that can cater to the individual requirements of each qubit while minimizing the overall footprint. Streamlining the power supply infrastructure is crucial for the scalability of quantum computing projects, enabling researchers to expand their quantum systems without space constraints.
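
The rack arithmetic above is trivial to reproduce; a quick check using only the figures quoted in this article:

```python
# Back-of-envelope rack count: 100 bias channels using conventional 2U
# half-rack supplies, with 40 channels fitting in a maximum-sized rack.
import math

channels_needed = 100
channels_per_rack = 40
print(math.ceil(channels_needed / channels_per_rack))   # 3 racks
```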

Enabling an Effective and Efficient Quantum Computing Development

Solving spatial constraints helps scale quantum computing efforts, enabling researchers to explore larger quantum systems. Managing voltage fluctuations and spatial limitations in DC bias sources for quantum computing is crucial for progress.

Figure 4. Voltage source noise density with the combination of a source meter and a low-noise filter adapter

To achieve this, it is essential to use low-noise power supplies or source meters / source measure units (SMUs) that provide a clean bias voltage, positioned as close as possible to the cryostat. This approach significantly reduces unnecessary environmental interference picked up along exposed cable lengths.

You can attach optional accessories, such as a low-noise filter adapter (LNF), to precision source meters to further improve the stability of the bias voltage. In some cases, the noise level can be reduced to 25 µV rms (10 Hz to 20 MHz, 6 V range), as illustrated in Figure 4.

Figure 5 shows, from a rack setup perspective, that using source meters that are compact and high in channel density allows for placement directly at the entrance of the cryostat, even at elevated positions. This approach significantly helps to minimize DC bias voltage fluctuations, enabling ideal quantum control and precise qubit characterization through long coherence times.

Figure 5. A 100-channel configuration with high-density source meters set close to the cryostat, providing clean DC bias voltage

Tips to minimize the fluctuations in the bias voltage

Given the significant impact of the surrounding environment and experimental setup on DC bias fluctuations, achieving a clean bias voltage requires proper setup and usage. When constructing your DC bias line, you can minimize voltage noise and effectively utilize a high-precision source measure unit and a low-noise filter by paying attention to certain aspects.

Using different grounds for each instrument can create a circuit known as a ground loop. Ground loops can be a source of noise. Figure 6 illustrates the steps to stabilize the DC bias voltage. Avoiding ground loops using techniques such as single-point grounding is necessary.

Figure 6. Examples of wiring that creates a ground loop (left) and avoids a ground loop (right)

You can either short the LF terminal to the frame ground or leave it floating; this choice can affect the noise level of the DC bias voltage. If your system design does not impose specific requirements on the LF terminal's potential, experiment with both configurations and choose the one that yields better results.

According to Faraday's law, electromagnetic induction contributes noise when the HF and LF cables are spatially separated, because the loop they enclose picks up any changing magnetic flux. To prevent this, keep the HF and LF cables as close together as possible or use a twisted-pair configuration.
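To see why cable routing matters, consider a small estimate under assumed numbers: the EMF induced in a wiring loop of area A by a sinusoidal stray field of amplitude B0 at frequency f peaks at 2πf·B0·A, so shrinking the enclosed loop area (cables run together, or twisted) shrinks the pickup proportionally. The field amplitude and loop areas below are hypothetical lab values, not measurements.

```python
# Faraday's-law estimate of mains pickup in a bias line:
#   peak EMF = 2*pi*f * B0 * A   for B(t) = B0*sin(2*pi*f*t).
# B0 and the loop areas are hypothetical, for illustration only.
import math

f = 50.0        # Hz, mains frequency
B0 = 1e-7       # T, assumed stray field amplitude near the rack

for area_cm2, layout in [(100.0, "separated HF/LF cables"),
                         (1.0, "twisted pair")]:
    emf = 2 * math.pi * f * B0 * (area_cm2 * 1e-4)   # loop area in m^2
    print(f"{layout}: ~{emf * 1e9:.0f} nV peak pickup")
```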

To solve these challenges, use a combination of source meter options that provides high channel density, low noise, and precision voltage sourcing, delivering a stable and clean bias voltage to more than 100 qubits. Place the source meters as close as possible to the cryostat to minimize electromagnetic interference from long cables. Always consider the potential configuration of the LF terminal, avoid ground loops, and use twisted-pair wiring to further reduce the impact of electromagnetic induction.

Gobinath Tamil Vanan, Keysight Technologies

Gobinath graduated from the Swinburne University of Technology with a degree in Electrical and Electronics Engineering and has more than nine years of experience in the semiconductor, aerospace and defense, and automotive industries, as well as the field of automated testing. At Keysight, he works closely with field engineers, product managers, and R&D engineers to ensure that all relevant customer needs in the industry are surfaced well and early, enabling customer success and solving the grand challenges of test and measurement.

The rest is here:
Guest Post Controlling the Qubits: Overcoming DC Bias and Size Challenges in Quantum - The Quantum Insider

NIST will fire the starting gun in the race to quantum encryption – Nextgov/FCW

As the National Institute of Standards and Technology is slated to soon debut the first round of encryption algorithms it has deemed suited for the potential arrival of a viable quantum computer, experts have advice for organizations: know your code.

The need for strong cryptographic governance ahead of migrating digital networks to a post-quantum standard will be a major component of updated cybersecurity best practices, as both public and private sectors begin to reconcile their network security with new algorithmic needs.

Matthew Scholl, the chief of the computer security division in the National Institute of Standards and Technology's Information Technology Laboratory, said that understanding a given organization's security capabilities will offer insight into which aspects of a network should transition first.

Deep understanding of what current encryption methods do and precisely where they are will be a fundamental aspect of correctly implementing the three forthcoming quantum-resistant algorithms.

"With that information, you should then be able to prioritize what to change and when, and you should plan for the long-term changes and updates going forward," Scholl told Nextgov/FCW.

Scott Crowder, vice president for IBM Quantum Adoption and Business Development, echoed Scholl's points on creating a cryptographic inventory to ensure the algorithms are properly configured. Crowder said that while overhauling encryption code is a comprehensive transition, understanding what needs to change can be difficult based on who wrote the code in the first place.

"It's a pain because it's actually at two levels," Crowder told Nextgov/FCW. "First you get all the code that you've written, but then you've got all the rest of your IT supply chain that vendors provide."

Based on client conversations, Crowder estimates that 20% of the transformation problem hinges on an entity's internal code, while the remaining 80% is ensuring that the vendors in their supply chains have correctly implemented NIST's new algorithms.
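As a loose illustration of what a first pass over internal code might look like, here is a minimal sketch that walks a source tree and flags files mentioning common cryptographic primitives or libraries. The pattern list is illustrative and far from exhaustive; a real inventory would also need to cover binaries, configuration, certificates, and vendor-supplied components.

```python
# Minimal sketch of a first-pass cryptographic inventory: walk a
# source tree and flag files that reference common crypto primitives
# or libraries. Illustrative only; not a complete discovery tool.
import re
from pathlib import Path

CRYPTO_PATTERNS = re.compile(
    r"\b(RSA|ECDSA|ECDH|DSA|Diffie.?Hellman|AES|SHA-?\d|"
    r"openssl|bouncycastle|cryptography|pycryptodome)\b",
    re.IGNORECASE,
)

def inventory(root: str) -> dict[str, list[str]]:
    """Map each source file to the crypto identifiers it mentions."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".java", ".c", ".go", ".ts"}:
            continue
        text = path.read_text(errors="ignore")
        found = sorted({m.group(0) for m in CRYPTO_PATTERNS.finditer(text)})
        if found:
            hits[str(path)] = found
    return hits

if __name__ == "__main__":
    for file, algos in inventory(".").items():
        print(file, "->", ", ".join(algos))
```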

"From our experience, and doing some work with clients, typically for one application area, it's like three to six months to discover the environment and do some of the basic remediation," he said. "But, you know, that's like a small part of the elephant."

In addition to creating a comprehensive cryptographic inventory that can determine which code should be updated, Scholl said that cybersecurity in a quantum-ready era needs to be versatile.

"You need to build your systems with flexibility so that they can change," he said. "Don't put in something that's [going] to be the next generation's legacy. Build something that is agile and flexible."
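One common reading of that advice is "crypto agility": route every key-establishment call through a single interface so the algorithm becomes a configuration choice rather than a codebase-wide rewrite. The sketch below is one hypothetical shape for such an interface; the class and registry names are invented, and a real system would wrap vetted library implementations rather than these placeholders.

```python
# Sketch of crypto agility: one interface for key encapsulation, so
# the algorithm is swapped by configuration, not a code rewrite.
# The classes are placeholders, not real library bindings.
from abc import ABC, abstractmethod

class KeyEncapsulation(ABC):
    @abstractmethod
    def encapsulate(self, public_key: bytes) -> tuple[bytes, bytes]:
        """Return (ciphertext, shared_secret)."""

class ClassicalKEM(KeyEncapsulation):
    def encapsulate(self, public_key: bytes) -> tuple[bytes, bytes]:
        raise NotImplementedError("wrap the incumbent RSA/ECDH code here")

class PostQuantumKEM(KeyEncapsulation):
    def encapsulate(self, public_key: bytes) -> tuple[bytes, bytes]:
        raise NotImplementedError("wrap an ML-KEM implementation here")

# One configuration switch, not a codebase-wide rewrite:
REGISTRY = {"classical": ClassicalKEM, "pqc": PostQuantumKEM}

def get_kem(name: str) -> KeyEncapsulation:
    return REGISTRY[name]()
```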

The debut of the three standardized post-quantum algorithms, ML-KEM, CRYSTALS-Dilithium, and SPHINCS+, will enable classical computers to keep data encrypted against a future fault-tolerant, quantum-powered computer. During their implementation processes, Scholl said that organizations need to both continue monitoring the configuration of the newly implemented algorithms and consistently test for vulnerabilities.

Scholl said that the fourth algorithm, Falcon, which was selected as a winning algorithm in 2022 along with the other three, will be released for implementation later this year.

Despite the milestone in quantum cryptography readiness, Crowder notes that this is just the beginning of a new era of cybersecurity hygiene.

"You can think of the NIST standardization as basically the starting gun," he said. "But there's a lot of work to be done on taking those standards, making sure that all the open source implementations, all the proprietary implementations get done, and then rippling through and doing all the hard work in terms of doing the transformation upgrade."

See the original post:
NIST will fire the starting gun in the race to quantum encryption - Nextgov/FCW

Quantinuum and Science and Technology Facilities Council (STFC) Hartree Center Partner to Advance Quantum Innovation and Development in the UK -…

Quantinuum, the world's largest integrated quantum computing company, has signed a Joint Statement of Endeavour with the STFC Hartree Center, one of Europe's largest supercomputing centers dedicated to industry engagement. The partnership will provide UK industrial and scientific users access to Quantinuum's H-Series, the world's highest-performing trapped-ion quantum computers, via the cloud and on-premise.

"Research and scientific discovery are central to our culture at Quantinuum, and we are proud to support the pioneers at the Hartree Center," said Raj Hazra, CEO of Quantinuum. "As we accelerate quantum computing, the Hartree Center and the UK quantum ecosystem will be on the forefront of building solutions powered by quantum computers at scale."

Both organizations aim to support UK businesses and research organizations in exploring quantum advantage in quantum chemistry, computational biology, quantum artificial intelligence, and quantum-augmented cybersecurity. The UK has a strong global reputation in each domain, and quantum computing is expected to accelerate development in the coming years.

"Quantinuum's H-Series hardware will benefit scientists across various areas of research, including exascale computing algorithms, fusion energy development, climate resilience and more," said Kate Royse, Director of the STFC Hartree Center. "This partnership also furthers our five-year plan to unlock the high growth potential of advanced digital technologies for UK industry."

The Hartree Center is part of the Science and Technology Facilities Council (STFC) within UK Research and Innovation, building on a wealth of established scientific heritage and a network of international expertise. The center's experts collaborate with industry and the research community to explore the latest technologies, upskill teams, and apply practical digital solutions across supercomputing, data science, and AI.

Quantinuum's H-Series quantum computers are the highest-performing in the world, having consistently held the world record for quantum volume, a widely used benchmark of quantum computing performance, for over three years; the record currently stands at 2^20.
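For context on that number: quantum volume is conventionally reported as a power of two, QV = 2^n, where n is the width and depth of the largest square random circuit the machine reliably passes under the IBM-style benchmark definition. A quick check:

```python
# Quantum volume is reported as QV = 2**n, where n is the width and
# depth of the largest square random circuit the machine passes
# (per the IBM-style benchmark definition).
import math

qv = 2 ** 20
n = int(math.log2(qv))
print(f"QV = {qv:,} -> passes {n}-qubit, depth-{n} square circuits")
```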

In April 2024, Quantinuum and Microsoft reported a breakthrough demonstration of four reliable logical qubits using quantum error correction, an important technology necessary for practical quantum computing. During the same month, Quantinuum extended its industry leadership when its H-Series computer became the first to achieve "three 9s" (99.9%) two-qubit gate fidelity across all qubit pairs in a production device, a critical milestone that enables fault-tolerant quantum computing.

This achievement was made immediately available to Quantinuum's customers, who depend on the very best quantum hardware and software to push the boundaries of new solutions in areas such as materials development, drug discovery, machine learning, cybersecurity, and financial services.

Quantinuum, formerly known as Cambridge Quantum prior to its 2021 combination with Honeywell Quantum Solutions, was one of the UK government's delivery partners following the 2014 launch of the National Quantum Technologies Programme. Cambridge Quantum ran the Quantum Readiness Programme for several years to inspire UK business and industry to invest in quantum computing and explore the potential use cases of this revolutionary technology.

Earlier this year, Quantinuum was selected as a winner in the £15m SBRI Quantum Catalyst Fund to support the UK Government in delivering the benefits of quantum technologies, with an initial focus on simulating actinide chemistry using quantum computers.

Read more:
Quantinuum and Science and Technology Facilities Council (SFTC) Hartree Center Partner to Advance Quantum Innovation and Development in the UK -...

Mysterious quantum computing restrictions spread across multiple nations – UK cites national security risks and … – Tom’s Hardware

Quantum computers are apparently a "national security risk" for some countries, which have mysteriously issued identical restrictions on exports of quantum computing systems. France, Spain, the United Kingdom, the Netherlands, and Canada have all restricted the sale of quantum computers containing more than 34 qubits and with error rates below a certain threshold.

The quantum computing export bans across all these countries have matching specific qualifications for what makes a quantum computer "dangerous enough" to deserve a ban. New Scientist, which first broke the news, contacted dozens of countries asking about the bans and was rebuffed, with the UK claiming that explaining the rationale behind the numbers would be a national security risk.

The countries that have issued the mysterious identical bans are all participants in the Wassenaar Arrangement, an export control regime through which its 42 member nations set limits on dual-use technologies, i.e., tech that can be used for both civilian and military applications. New Scientist also wrote to many other Wassenaar states that haven't yet set matching bans, asking about the source of the 34-qubit figure and the potential for other nations to join the bans.

Milan Godin, a Belgian advisor to the EU, responded, "We are obviously closely following Wassenaar discussions on the exact technical control parameters relating to quantum," pointing to some level of international cooperation on the research behind the quantum restrictions. Experts in the quantum computing field have no clue where the numbers may have come from, with Christopher Monroe of IonQ saying, "I have no idea who determined the logic behind these numbers."

Quantum computers work fundamentally differently from standard computers. A qubit is analogous to a bit (or, more accurately, a transistor) in a classical computer, with higher qubit counts meaning more computational power. While classical computers work through deterministic calculations, quantum computers can tackle multi-variable problems of a complexity that would stall today's most powerful supercomputers. Our quantum computing explainer goes in-depth into how quantum computing works and what it can do.
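The scaling at issue is easiest to see in the state space: describing n qubits generally requires 2^n complex amplitudes, which is why brute-force classical simulation hits a wall so quickly. A tiny sketch (34 qubits is included only because it is the threshold quoted in the bans; whether this scaling motivated that figure is unknown):

```python
# State-space growth: n qubits generally need 2**n complex amplitudes.
# 34 qubits appears only because it is the export-ban threshold quoted
# above; whether that motivated the figure is not known.
for n in (10, 34, 50):
    amps = 2 ** n
    gib = amps * 16 / 2**30    # complex128 = 16 bytes per amplitude
    print(f"{n} qubits: 2^{n} = {amps:.2e} amplitudes (~{gib:.3g} GiB dense)")
```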

There is an immense amount of excitement but also fear around the tech. Governments are concerned about potential military applications, such as designing new nuclear or biological weapons, and quantum computers will eventually be able to crack the best cryptographic encryption in minutes. The only problem is that quantum computers today are not that good: they have high error rates and require cooling solutions that take the qubits down to -269 Celsius to function efficiently. Barring a massive breakthrough in the tech, economics dictate that quantum will not pose any serious risk to anyone for years to come.

Other Wassenaar Arrangement nations are likely to come out with similar trade restrictions on quantum computers in the coming days. However, the "national security risk" behind the bans is likely nothing to worry about, given the low capabilities of today's quantum systems. What is more likely in the short term is greater national isolation of quantum computing research, as U.S.-based companies will no longer be able to expand to the UK, or vice versa.


Originally posted here:
Mysterious quantum computing restrictions spread across multiple nations UK cites national security risks and ... - Tom's Hardware

Quantum computer researchers have achieved a cooling system that’s colder than space – XDA Developers

Key Takeaways

Quantum computing has the power to revolutionize how quickly we get computations done, but it's not like you can just swap out your processor for a quantum one and call it a day. With the new technology comes new challenges, which researchers are trying to overcome to unlock this new power. Now, they've managed to get a cooling system running that can chill a quantum processor to a point that's colder than space itself.


As reported by Tom's Hardware, researchers at the Swiss Federal Institute of Technology Lausanne have managed to chill a quantum processor to 100 mK. That's 100 millikelvins, or roughly -273°C/-459°F. As a point of reference, the temperature of outer space clocks in at about 2.7 kelvin, so the processor is a good deal colder than space.
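For readers who want to verify the conversions quoted here, a trivial sketch:

```python
# Sanity check on the temperatures quoted in the article.
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    return k * 9 / 5 - 459.67

for label, k in [("processor (100 mK)", 0.100), ("outer space", 2.7)]:
    print(f"{label}: {kelvin_to_celsius(k):.2f} C / "
          f"{kelvin_to_fahrenheit(k):.2f} F")
```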

So, why are researchers going this far to cool the system? Like current computers, quantum processors need to be consistently cooled to keep running. However, you can't just slap a PC cooler on one and call it done. The parts that do the quantum computing, called "qubits," need to be kept as close to zero kelvin as possible, or else heat will disturb them. As such, a lot of work has gone into making it feasible to run a quantum computer as accurately and easily as possible.

The team achieved this by taking the heat energy produced by the components and converting it into electricity to keep things icy cool. And the best part is, not only are these coolers possible to build using existing components, but they're as efficient as regular PC coolers.

It's a huge win for sure, and it'll likely spark a revolution in quantum computing and make it a lot easier for researchers to keep their components cool. However, we may have to wait a little longer until we can use these amazing coolers to max out Cyberpunk 2077's settings.

Original post:
Quantum computer researchers have achieved a cooling system that's colder than space - XDA Developers

2D quantum cooling system reaches temperatures colder than outer space by converting heat into electrical voltage – Tom’s Hardware

A research team at the Swiss Federal Institute of Technology Lausanne (EPFL) developed a 2D quantum cooling system that allowed it to reduce temperatures to 100 millikelvins by converting heat into electrical voltage. Very low temperatures are crucial for quantum computing, as quantum bits (qubits) are sensitive to heat and must be cooled to below 1 K. Even the thermal energy generated by the electronics needed to run the quantum computer has been known to impact the performance of qubits.

"If you think of a laptop in a cold office, the laptop will still heat up as it operates, causing the temperature of the room to increase as well. In quantum computing systems, there is currently no mechanism to prevent this heat from disturbing the qubits," LANES PhD student Gabriele Pasquale explained.

However, most conventional cooling solutions no longer work efficiently (or don't work at all) at these temperatures. Because of this, heat-generating electronics must be separated from quantum circuits. This, in turn, adds noise and inefficiencies to the quantum computer, making it difficult to create larger systems that would run outside of lab conditions.

The headlining 2D cooling system was fabricated by a research team led by Andras Kis at EPFL's Laboratory of Nanoscale Electronics and Structures (LANES). Aside from its capability to cool down to 100mK, the more astounding innovation is that it does so at the same efficiency as current cooling technologies running at room temperature.

Pasquale said, "We are the first to create a device that matches the conversion efficiency of current technologies, but that operates at the low magnetic fields and ultra-low temperatures required for quantum systems. This work is truly a step ahead."

The LANES team called their technological advance a 2D quantum cooling system because of how it was built: at just a few atoms thick, the new material behaves like a two-dimensional object, and the combination of graphene and the 2D-thin structure allows it to achieve highly efficient performance. The device operates using the Nernst effect, a thermomagnetic phenomenon in which an electric field is generated in a conductor subjected to both a magnetic field and a temperature gradient across the material.
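To make the Nernst relation concrete: the transverse electric field obeys E_y = N·B_z·(dT/dx), where N is the material's Nernst coefficient, B_z the applied field, and dT/dx the temperature gradient. The numbers in the sketch below are hypothetical, chosen only to show the scaling; they are not the device values from the EPFL paper.

```python
# Toy estimate of the Nernst effect: E_y = N * B_z * dT/dx.
# All values are hypothetical, for illustration of the scaling only.
N = 1e-6      # V/(K*T), hypothetical Nernst coefficient
B_z = 0.1     # T, a low magnetic field, as the article emphasizes
dT = 0.05     # K, assumed temperature difference across the sample
dx = 10e-6    # m, assumed sample length

E_y = N * B_z * (dT / dx)          # transverse field, V/m
width = 5e-6                       # m, assumed sample width
print(f"transverse voltage ~ {E_y * width * 1e9:.2f} nV")
```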

Aside from its performance and efficiency, the 2D quantum cooling system is made from readily manufactured electronics, meaning it could easily be added to quantum computers in other labs that require such low temperatures. Pasquale adds, "These findings represent a major advancement in nanotechnology and hold promise for developing advanced cooling technologies essential for quantum computing at millikelvin temperatures. We believe this achievement could revolutionize cooling systems for future technologies."


But even if some manufacturer mass-produces this 2D cooling system capable of sub-1K temperatures in the near future, don't expect to find it on Newegg for overclocking your CPU, unless you plan to run a quantum computer in your living-room lab.

View original post here:
2D quantum cooling system reaches temperatures colder than outer space by converting heat into electrical voltage - Tom's Hardware

Quantum Computing is Becoming More Accessible as Costs Drop & Cloud Access Expands – Dr. Mark Jackson – The Quantum Insider

Dr. Mark Jackson, a leading expert in quantum computing and Senior Quantum Evangelist at Quantinuum, recently shared his views on the imminent impact of quantum technology. With a PhD in superstring theory and cosmology, Jackson has an extensive background that positions him as a crucial voice in the quantum revolution. Here, he offered his vision for the future and the necessity of early investment in quantum computing.

"The market potential for quantum computing isn't just in the billions; it's believed that it will be in the trillions in the next one to two years," Jackson said. This staggering projection underscores the vast economic impact expected from quantum advancements.

Jackson stressed the importance of early adoption.

"It takes time to write the software, to understand how this works, to understand how it affects your industry. It's not simply a matter of turning on a dime once you see the headlines about quantum being relevant," he said. The complexity and novelty of quantum computing demand a proactive approach to ensure organizations are ready to leverage its capabilities.

Explaining the fundamental difference between quantum and classical computers, Jackson noted: "A normal computer is based on bits, which are zero or one. A quantum computer is based on quantum bits, or qubits, which can be zero and one at the same time." This property enables quantum computers to consider multiple solutions simultaneously, vastly increasing computational power. "You get this exponential scaling of possible solutions that a quantum computer would consider," he added.

Jackson called attention to some key applications where quantum computing excels.

"One thing that quantum computers are very good at is chemistry, being able to do material science calculations, trying to simulate molecules and understand how they'll behave," he said. Personalized medicine is another promising field, as quantum computing could significantly reduce the time and cost required to develop new drugs. "With a quantum computer, we think that we could speed this up and make it much more efficient," Jackson explained.

Cybersecurity is also a critical area of concern and opportunity. Jackson pointed out: "Quantum computing is relevant to hacking or cybersecurity. Now that quantum is becoming pretty powerful, a lot of governments and communications companies are very concerned about this." Companies like Apple and Zoom have already started upgrading their cybersecurity measures to protect against potential quantum threats.

Despite its potential, Jackson acknowledged the current limitations of quantum technology.

"Unfortunately, it's very expensive to build a quantum computer right now, and so it's only really very developed countries that are investing in this," he said. However, he remains optimistic about the future accessibility of quantum computing: "The price of quantum computing is coming down, and a lot of people have access to it over the cloud."

Jackson dispelled the common misconception that quantum computing is still decades away.

"By far the biggest misconception that I come across is that people think that quantum computing might be relevant in 20 years," he said. He stressed that significant breakthroughs have occurred in the past decade, rapidly advancing the field. "Quantum has increased its performance by about a factor of ten every year," Jackson noted, while predicting that practical applications of quantum computing will emerge within the next two years.

Jackson urges organizations to begin investing in quantum technology now to stay ahead.

"The organizations which will take most advantage of this are those who have already begun. It really is essential that if you're not already investing in quantum, you start developing expertise and investing in this now," he advised. The future of quantum computing promises to revolutionize various industries, and early preparation will be key to capitalizing on its transformative potential.

Featured image: Credit: PNNL

Read more from the original source:
Quantum Computing is Becoming More Accessible as Costs Drop & Cloud Access Expands Dr. Mark Jackson - The Quantum Insider

Time Crystals Could be the Circuit Boards of Future Quantum Computers – The Debrief

Scientists from Swinburne University of Technology in Australia and Jagiellonian University in Poland have proposed using time crystals as a core component of a quantum computer. In the preprint paper, the scientists propose using time crystals as a type of circuit to keep the quantum components within the computer from interfering with each other and causing errors. While more research is required to check the feasibility of the idea, it could have significant implications for the future of quantum technology.

The concept of a time crystal was first proposed in 2012. The idea is that, just as an ordinary crystal has a repeating structure in space (with multiple faces and sides), a time crystal has a repeating structure in time. Though difficult to picture, a time crystal can be likened to a perpetual motion machine in which atomic or particle arrangements transform and recur over repeated time segments, cycle after cycle.

While the time crystal began as a theoretical concept, it has since been constructed using high-powered lasers and ultracold atoms. The laser produces discrete patterns of light at specific time intervals, causing the particles to be excited or change quantum states repeatedly.
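The tell-tale signature of such a discrete time crystal is a subharmonic response: the system is driven once per period, yet an observable repeats only every two (or more) periods. The toy sketch below is a cartoon of that signature, not the actual experimental dynamics:

```python
# Cartoon of a discrete time crystal's subharmonic response: an ideal
# pi-pulse flips the spin once per drive period, so the magnetization
# repeats every TWO periods. Illustrative only, not the experiments.
periods = 8
spin = 1                    # magnetization observable, +1 or -1
history = []
for _ in range(periods):
    history.append(spin)
    spin = -spin            # the periodic "kick" (ideal pi-pulse)

print("magnetization per drive period:", history)
# -> [1, -1, 1, -1, 1, -1, 1, -1]  (period-2 response to a period-1 drive)
```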

Because of their discrete timing patterns, physicists believe that time crystals may be able to help isolate individual quantum bits or qubits that make up the processing units of a quantum computer.

Quantum computers utilize quantum mechanical phenomena, such as superposition and entanglement, to solve complex problems that a traditional or classical computer is unable to solve. Their power comes from their ability to transform and change the qubits inside them, which can be individual atoms, photons, ions, or other physical systems. Companies like Google, IBM, and Quantinuum, along with many smaller start-ups, each use different physical systems as qubits, reflecting the many types of quantum computers.

One of the challenges in creating a working quantum computer is the fragility of the qubits. Qubits are susceptible to environmental noise, which can cause them to change quantum states or become unentangled from other qubits in a process known as decoherence. The qubits within a quantum computer can also interfere with one another, which makes scaling quantum computers up from a few qubits to a few hundred a big challenge: not only do more qubits interfere with each other, they also add to the environmental noise that can affect the entire system.

While scientists and engineers are working to overcome these challenges, time crystals could be a potential avenue to explore as a solution to these issues.

In this new preprint paper, the scientists propose integrating time crystals into a quantum computer as a "time-tronic" circuit board. In this circuit board, the time crystals would regulate the timing of operations and information moving through the qubits, isolating them from each other and mitigating some of the potential errors that could occur.

"The elements of these devices can correspond to structures of dimensions higher than three and can be arbitrarily connected and reconfigured at any moment," the researchers write of the time-tronic circuit in their paper. They add that these circuit boards could be used for other quantum devices, with quantum computing being the most prominent application.

While experiments are needed to validate the researchers' theory, the team simulated using a time crystal to control a group of ultracold potassium ions directed by a laser pulse, showing that the time crystal could create a steady rhythm for the ions to move to.

Combining quantum computing and time crystals is not a new idea. Australian physicists simulated a time crystal using a quantum computer in 2022, creating one with 57 particles, the biggest time crystal thus far. Before this, Google's quantum computing team created a 20-qubit time crystal using Google's Sycamore quantum computer.

While quantum computers have previously been used to create time crystals, the future of quantum computing innovation may depend on time crystals being integrated into bigger quantum computers and other devices.

Kenna Hughes-Castleberry is the Science Communicator at JILA (a world-leading physics research institute) and a science writer at The Debrief. Follow and connect with her on X or contact her via email at kenna@thedebrief.org

Follow this link:
Time Crystals Could be the Circuit Boards of Future Quantum Computers - The Debrief