
Why Enterprises and Governments Must Prepare for Q-Day Now – Infosecurity Magazine

In today's hyperconnected world, enterprises and governments are accelerating digital transformation to revolutionize the way businesses operate: improving efficiency and productivity, creating new revenue streams and delivering value to customers. As part of this effort, governments and businesses alike are investing in quantum computing to help address societal challenges and improve efficiency and insight.

While quantum computing has huge potential for good, in the wrong hands it also has the potential to cause tremendous harm. So what are the quantum threats facing enterprises and governments, and why should they start preparing for the impending Q-Day now?

Across the world, we are seeing governments and enterprises increase investments in quantum computing to tackle issues around sustainability, defense and climate change. Organizations that invest early in quantum computing are more likely to reap significant benefits.

For example, we are already seeing quantum computing help banks run more advanced financial computations and help companies like Mercedes-Benz shape the future of electric cars. As the potential to use quantum computing for good appears limitless, we can expect companies and countries to continue leveraging its capabilities.

Just as we are able to leverage quantum-speed problem solving for good, it can also be used to wield quantum-speed cyber-attacks. However, this will require a cryptographically relevant quantum computer (CRQC), which does not yet exist. So, if quantum computers haven't yet arrived, why do we need to worry now?

As technology advances, Q-Day, the day a viable CRQC will be able to break most of today's public-key encryption algorithms, is moving ever closer. Some experts predict it could arrive as soon as 2030, so the sooner we prepare ourselves the better. In fact, governments and enterprises are already at risk: even today, bad actors are employing "harvest now, decrypt later" tactics, stealing encrypted data so it can be decrypted en masse once Q-Day arrives.

In the hands of bad actors, quantum computers carry the potential to impact economies, disrupt critical research or, worse, endanger lives. Human-critical networks such as power grids, healthcare networks, utilities, public safety and financial systems are particularly vulnerable, given the potential for financial gain associated with interference in these sectors. The diversity of cyber-attacks that we're currently seeing across industries indicates that cybercriminals are targeting multiple sectors to find vulnerable systems and victims: no sector is exempt.

To cause disruption, bad actors will use a CRQC to unravel current data encryption protocols that protect sensitive data, making current public key cryptography methods obsolete. They could hijack millions of connected IoT devices to create distributed denial of service (DDoS) botnets that flood IP and optical networks with terabits of data and hundreds of millions of packets per second.

With citizen data, national security, financial records, intellectual property and critical infrastructure all at risk, we must prepare our enterprise and critical networks for the possibility of quantum computing threats now. This will involve network modernization, including the updating and upgrading of network infrastructure and protocols, as well as implementing security measures to ensure the safety of communications. A multi-layer approach, from the optical core to the IP edge and application layer, will be essential to effectively encrypt in-flight network data, tailored to the transmission and network infrastructure.

Quantum computers are here and becoming accessible around the globe, so now is the time to build quantum-safe networks with advanced cybersecurity protection and post-quantum era encryption. Critical enterprises and governments need to protect themselves and their critical infrastructure against these attacks, so they are ready for Q-Day.


Best practices for portfolio optimization by quantum computing … – Nature.com

Dataset

The data are collected from Yahoo! Finance29 using yfinance30, an open-source tool that uses Yahoo's publicly available APIs. This tool, according to its creator, is intended for research and educational purposes.

To explore the efficiency of the proposed approach, small-sized examples are considered by extracting at most \(N=4\) different assets: Apple, IBM, Netflix and Tesla. These are representative global assets with interesting dynamics influenced by financial and social events. For each asset i, with \(1\le i\le N\), the temporal range between 2011/12/23 and 2022/10/21 is considered. For each day t in this range (\(0\le t\le T\)), the performance of an asset is well represented by its closing price \(p^t_i\). A sub-interval of the dates considered is shown in Table 1. Additional experiments, performed on a different dataset falling within the same time interval considered here, are available in the supplementary information.

The first information extracted from this data set consists of the list P of current prices \(P_i\) of the considered assets:

$$\begin{aligned} P_{i}=p^T_i. \end{aligned}$$

(1)

Moreover, for each asset, the return \(r^t_i\) between the days \(t-1\) and t can be calculated:

$$\begin{aligned} r^t_i=\frac{p^t_i-p^{t-1}_i}{p^{t-1}_i} \end{aligned}$$

(2)

These returns, calculated for days when the initial and final prices are known, cannot be used directly for inference. Instead, it is convenient to define the expected return of an asset as an educated guess of its future performance. Assuming a normal distribution of the returns, the average of their values over the set of historical observations is a good estimator of the expected return. Therefore, given the entire historical data set, the expected return \(\mu_i\) of each asset is calculated by:

$$\begin{aligned} \mu_i=E[r_i]=\frac{1}{T}\sum_{t=1}^{T}r^t_i. \end{aligned}$$

(3)
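As a concrete illustration of Eqs. (2) and (3), the daily returns and their historical average can be computed from a closing-price series; the sketch below uses a short made-up price list rather than the paper's Yahoo! Finance data:

```python
# Sketch of Eqs. (2) and (3): daily returns and expected return
# from a closing-price series. Prices are illustrative, not real data.
prices = [100.0, 102.0, 101.0, 104.0, 103.0]  # p^0 .. p^T for one asset

# Eq. (2): r^t = (p^t - p^{t-1}) / p^{t-1}, defined for t = 1..T
returns = [(prices[t] - prices[t - 1]) / prices[t - 1]
           for t in range(1, len(prices))]

# Eq. (3): expected return as the historical average of the returns
T = len(returns)
mu = sum(returns) / T

print(returns)
print(mu)
```

Note that T here counts the return observations, one fewer than the number of prices.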

Following the same principle, the variance of each asset return and the covariance between returns of different assets over the historical series can be calculated as follows:

$$\begin{aligned}&\sigma^{2}_{i}=E[(r_{i}-\mu_{i})^2]=\frac{1}{T-1}\sum_{t=1}^{T}(r^{t}_{i}-\mu_{i})^{2}, \\&\sigma_{ij}=E[(r_{i}-\mu_{i})(r_{j}-\mu_{j})]=\frac{1}{T-1}\sum_{t=1}^{T}(r^{t}_{i}-\mu_{i})(r^{t}_{j}-\mu_{j}). \end{aligned}$$

(4)
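The sample variance and covariance of Eq. (4) follow the same pattern; this sketch uses two toy return series and the unbiased 1/(T-1) normalization:

```python
# Sketch of Eq. (4): sample variance and covariance of two assets'
# return series (toy numbers), with the unbiased 1/(T-1) normalization.
r_a = [0.010, -0.005, 0.020, 0.000]
r_b = [0.008, -0.002, 0.015, 0.001]
T = len(r_a)

mu_a = sum(r_a) / T
mu_b = sum(r_b) / T

# Variance of asset a (Eq. 4, first line)
var_a = sum((r - mu_a) ** 2 for r in r_a) / (T - 1)

# Covariance between assets a and b (Eq. 4, second line)
cov_ab = sum((ra - mu_a) * (rb - mu_b)
             for ra, rb in zip(r_a, r_b)) / (T - 1)

print(var_a, cov_ab)
```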

The traditional theory of PO was initially formulated by Markowitz1. There are multiple possible formulations of PO, each embodying a different degree of approximation of the real-life problem. This work deals with multi-objective portfolio optimization: this approach tries to simultaneously maximize the return and minimize the risk while investing the available budget. Even if other formulations include more objectives, the aim is still the solution of a constrained quadratic optimization problem; therefore, the formulation considered here is general enough to test the performance of the proposed approach.

A portfolio is defined as the set of investments \(x_{i}\) (measured as fractions of the budget or numbers of asset units) allocated to each ith asset of the market. Therefore, the portfolio consists of a vector of real or integer numbers with dimension equal to the number of assets considered. An optimal strategy for portfolio allocation aims to achieve the maximum portfolio return \(\mu^{\text{T}}x\) while minimizing the risk, defined as the portfolio variance \(x^{\text{T}}\Sigma x\) (whose square root is the portfolio volatility), where \(\mu\) is the vector of mean asset returns calculated by (3), \(\Sigma\) is the covariance matrix calculated by (4), and x is the vector of investments measured as fractions of the budget. Hence, finding the optimal portfolio amounts to finding the vector x that maximizes the following objective function:

$$\begin{aligned} {\mathscr{L}}(x): \mu^{\text{T}} x - qx^{\text{T}}\Sigma x, \end{aligned}$$

(5)

where the risk aversion parameter q expresses the propensity to risk of the investor (a trade-off weight between the risk and the return).
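The trade-off expressed by the objective (5) can be evaluated directly; the sketch below scores a toy two-asset portfolio (all numbers illustrative, not from the paper's dataset):

```python
# Sketch of the objective (5): L(x) = mu^T x - q * x^T Sigma x,
# evaluated for a toy two-asset portfolio (all numbers illustrative).
mu = [0.01, 0.02]                     # mean returns, Eq. (3)
Sigma = [[0.0004, 0.0001],            # covariance matrix, Eq. (4)
         [0.0001, 0.0009]]
q = 0.5                               # risk-aversion parameter
x = [0.6, 0.4]                        # budget fractions, summing to 1

ret = sum(m * xi for m, xi in zip(mu, x))            # mu^T x
risk = sum(x[i] * Sigma[i][j] * x[j]                 # x^T Sigma x
           for i in range(2) for j in range(2))
L = ret - q * risk
print(L)
```

A larger q shrinks L for riskier allocations, steering the maximizer toward lower-variance portfolios.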

In a realistic scenario, the available budget B is fixed. Therefore, the constraint that the fractions \(x_i\) sum to 1 must hold. Moreover, if only buying is allowed, each \(x_i\ge 0\); this constraint does not hold if both buying and selling are possible. As a consequence, in the general case, the problem can be stated as follows:

$$\begin{aligned}&\underset{x}{\max}\,{\mathscr{L}}(x): \underset{x}{\max}(\mu^{\text{T}} x - qx^{\text{T}}\Sigma x),\\&\text{s.t.} \quad \sum^{N}_{i=1}x_i=1 \end{aligned}$$

(6)

However, if x is a feasible solution to the problem with continuous variables, each product \(x_iB\) must be an integer multiple of the corresponding price \(P_i\) calculated by (1), since only an integer number of units of each asset can be exchanged. Therefore, only the subset of solutions corresponding to integer units is acceptable, and the problem is better stated as follows:

$$\begin{aligned}&\underset{n}{\max}\,{\mathscr{L}}(n): \underset{n}{\max}(\mu'^{\text{T}} n - qn^{\text{T}}\Sigma' n),\\&\text{s.t.} \quad P'^{\text{T}} n = 1 \end{aligned}$$

(7)

where n is the vector of \(n_i\) integer units of each asset, while \(P'=P/B\), \(\mu'=P'\circ\mu\) and \(\Sigma'=(P'\circ\Sigma)^{\text{T}}\circ P'\) are appropriate transformations of \(\mu\) and \(\Sigma\). The latter formulation (7) is an integer constrained quadratic optimization problem.

Feasible solutions to problem (6) are those satisfying the constraint. Among them, some correspond to feasible solutions to problem (7). The collection of feasible solutions corresponding to portfolios with maximum return for any given risk is called the Markowitz efficient frontier. The solution of the constrained quadratic optimization problem lies on the efficient frontier, and its distance from the minimum-risk point depends on q.

The general problem, if regarded in terms of continuous variables, can be solved exactly by Lagrange multipliers in the case of equality constraints, or by Karush-Kuhn-Tucker conditions, which generalize the method of Lagrange multipliers to include inequality constraints31, as the covariance matrix is positive semi-definite32. Optimizing a quadratic function subject to linear constraints leads to a linear system of equations, solvable by Cholesky decomposition33 of the symmetric covariance matrix. The exact solution involves the computation of the inverse of an \(N\times N\) matrix, where N is the number of assets, thus requiring about \(O(N^3)\) floating-point operations34.

As soon as integer or binary variables are considered, the problem turns into combinatorial optimization. The computational complexity is known to be high, since the optimization problem is NP-hard35,36, while the decision version is NP-complete37. Indeed, a search approach must find the optimum among a set of feasible solutions whose size increases exponentially with the number of assets (e.g., for b binary variables there are \(2^b\) possible solutions, while for N integer variables ranging from 0 to \(n_{max}\) there are \((n_{max}+1)^N\) possible solutions).

In practice, various methods are currently employed, either based on geometric assumptions, such as the branch-and-bound method2,3, or on heuristic algorithms4,5,6, such as particle swarms, genetic algorithms, and simulated annealing. These have some limitations but allow one to obtain approximate solutions. However, in all cases, the exact or approximate solution is feasible only for a few hundred assets on current classical computers.
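The exponential growth of the integer search space can be made concrete with a small brute-force solver; a sketch over toy prices and returns, enumerating all \((n_{max}+1)^N\) candidate portfolios under a simple budget-cap constraint (the paper's formulation uses an equality constraint instead):

```python
# Brute-force illustration of the combinatorial search space: enumerate
# all integer portfolios with n_i in 0..n_max and keep the best one
# whose cost stays within the budget. Toy data; cost is (n_max+1)^N.
from itertools import product

P = [30.0, 20.0]          # unit prices (illustrative)
B = 100.0                 # budget
mu = [0.01, 0.02]
Sigma = [[0.0004, 0.0001], [0.0001, 0.0009]]
q = 0.5

n_max = [int(B // p) for p in P]      # max affordable units per asset
best, best_val = None, float("-inf")
for n in product(*(range(m + 1) for m in n_max)):
    if sum(ni * pi for ni, pi in zip(n, P)) > B:   # budget cap
        continue
    x = [ni * pi / B for ni, pi in zip(n, P)]      # budget fractions
    ret = sum(m * xi for m, xi in zip(mu, x))
    risk = sum(x[i] * Sigma[i][j] * x[j]
               for i in range(2) for j in range(2))
    val = ret - q * risk                           # objective (5)
    if val > best_val:
        best, best_val = n, val
print(best, best_val)
```

For N assets with comparable \(n_{max}\), the loop body runs \(\prod_i(n_{max,i}+1)\) times, which is exactly the exponential blow-up the text describes.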

Using quantum mechanical effects, like interference and entanglement, quantum computers can perform computational operations within the Bounded-error Quantum Polynomial (BQP) complexity class, which is the quantum analogue of the Bounded-error Probabilistic Polynomial (BPP) class. Even if there is no NP problem for which there is a provable quantum/classical separation, it is widely believed that \(\text{BQP}\not\subseteq\text{BPP}\); hence, when considering time complexity, quantum computers are more powerful than classical computers. More generally, it is conjectured that P is a proper subset of BQP. Therefore, while all problems that can be efficiently solved classically are efficiently solvable by quantum computers as well, some problems exist that are still considered intractable by classical computers and yet can be solved with quantum machines. These facts are still a matter of investigation, but there are good reasons to believe that some problems are solvable by quantum computers more efficiently than by classical ones; thus quantum computing has disruptive potential over some hard problems38, among which are constrained quadratic optimization problems, including PO.

The branch-and-bound method3,7 is used in this work as a classical benchmark to compare the results of the proposed approach. It is based on the Lagrangian dual relaxation and continuous relaxation for discrete multi-factor portfolio selection model, which leads to an integer quadratic programming problem. The separable structure of the model is investigated by using Lagrangian relaxation and dual search. This algorithm is capable of solving portfolio problems with up to 120 assets.

Specifically, the CPLEX library, freely available in Python, provides a robust implementation of the aforementioned classical solving scheme.

As formulated in Eq. (7), the PO problem lies within the class of quadratic optimization problems. To be quantum-native, it has to be converted into a Quadratic Unconstrained Binary Optimization (QUBO) problem, i.e., the target vector to be found has to be expressed as a vector of zeros and ones, and constraints have to be avoided.

Therefore, the binary conversion matrix C is constructed with a number of binarizing elements \(d_i\) for each asset i depending on the price \(P_i\). Hence

$$\begin{aligned} n^{max}_{i}=\text{Int}\left(\frac{B}{P_i}\right), \end{aligned}$$

(8)

where the operation Int stands for the integer part, and

$$\begin{aligned} d_i=\text{Int}\left(\log_{2}{n^{max}_i}\right), \end{aligned}$$

(9)

such that

$$\begin{aligned} n_{i}=\sum_{j=0}^{d_i}2^{j}b_{i,j}. \end{aligned}$$

(10)
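Equations (8)-(10) can be sketched in a few lines; the budget, prices and bit-string below are illustrative, and the helper `decode` is a hypothetical name for the mapping of Eq. (10):

```python
# Sketch of the binary encoding (8)-(10): each asset's unit count n_i
# is represented with d_i + 1 bits. Budget and prices are illustrative.
import math

B = 100.0
P = [30.0, 7.0]

n_max = [int(B // p) for p in P]            # Eq. (8): integer part of B/P_i
d = [int(math.log2(m)) for m in n_max]      # Eq. (9): bits needed per asset

def decode(bits, d):
    """Eq. (10): n_i = sum_j 2^j b_{i,j}, over per-asset slices of bits."""
    n, k = [], 0
    for di in d:
        width = di + 1
        n.append(sum(2 ** j * bits[k + j] for j in range(width)))
        k += width
    return n

# Example bit-string: asset 1 uses 2 bits, asset 2 uses 4 bits
bits = [1, 1,  0, 0, 1, 1]
print(n_max, d, decode(bits, d))
```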

In this way, the overall dimension of the binarized target vector, \(b=[b_{1,0},\dots,b_{1,d_1},\dots,b_{N,0},\dots,b_{N,d_N}]\), is \(\dim(b)=\sum_{i=1}^{N}(d_i+1)\), which is lower than that used in the implementation available in Qiskit15. Conveniently, the encoding matrix C is defined as follows:

$$\begin{aligned} C= \begin{pmatrix} 2^{0} & \dots & 2^{d_1} & 0 & \dots & 0 & \dots & 0 & \dots & 0 \\ 0 & \dots & 0 & 2^{0} & \dots & 2^{d_2} & \dots & 0 & \dots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots \\ 0 & \dots & 0 & 0 & \dots & 0 & \dots & 2^{0} & \dots & 2^{d_N} \end{pmatrix}, \end{aligned}$$

(11)

and thus, the conversion can be written in short notation as \(n=Cb\). It is possible to redefine problem (7) in terms of the binary vector b, applying the encoding matrix by \(\mu''=C^{\text{T}}\mu'\), \(\Sigma''=C^{\text{T}}\Sigma'C\) and \(P''=C^{\text{T}}P'\):

$$\begin{aligned}&\underset{b}{\max}\,{\mathscr{L}}(b): \underset{b}{\max}\left(\mu''^{\text{T}} b - qb^{\text{T}}\Sigma'' b\right),\\&\text{s.t.} \quad P''^{\text{T}} b = 1,\\&\qquad\; b_{i}\in\{0,1\} \quad \forall i\in[1,\dots,\dim(b)]. \end{aligned}$$

(12)
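The block-diagonal structure of C in Eq. (11) is easy to build programmatically; a sketch, with illustrative \(d_i\) values, verifying the short notation \(n=Cb\):

```python
# Sketch of the encoding matrix C of Eq. (11) and the conversion n = Cb.
# Row i holds the powers of two for asset i's bit block; all else is zero.
d = [1, 3]                            # illustrative d_i values from Eq. (9)

dim_b = sum(di + 1 for di in d)       # total number of binary variables
C = []
offset = 0
for di in d:
    row = [0] * dim_b
    for j in range(di + 1):
        row[offset + j] = 2 ** j      # 2^0 .. 2^{d_i} in asset i's block
    C.append(row)
    offset += di + 1

b = [1, 1, 0, 0, 1, 1]                # an example binarized target vector
n = [sum(C[i][k] * b[k] for k in range(dim_b)) for i in range(len(d))]
print(C)
print(n)
```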

The problem (12) falls into the wide set of binary quadratic optimization problems with a constraint, given by the total budget. In this form, the problem cannot be cast directly into a suitable set of quantum operators that run on quantum hardware: the constraint, in particular, is troublesome, as it poses a hard limitation on the sector of the Hilbert space that the algorithm needs to explore to find a solution. It is thus necessary to convert the problem into a QUBO by transforming the constraint into a penalty term in the objective function. Each kind of constraint can be converted into a specific penalty term39; the one considered in (12), an equality linear in the target variable, maps into \(\lambda(P''^{\text{T}}b-1)^{2}\), such that (12) can be written in terms of the following QUBO problem:

$$\begin{aligned} \underset{b}{\max}\,{\mathscr{L}}(b): \underset{b}{\max}\left(\mu''^{\text{T}} b - qb^{\text{T}}\Sigma'' b - \lambda(P''^{\text{T}} b - 1)^{2}\right). \end{aligned}$$

(13)

The penalty coefficient \(\lambda\) is a key hyperparameter in stating the problem as the QUBO objective function (13).
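The effect of the penalty term in (13) can be checked numerically: a feasible bit-string pays no penalty, while one violating the budget constraint is strongly suppressed for a sufficiently large \(\lambda\). All coefficients below are illustrative:

```python
# Sketch of the penalty conversion (12) -> (13): the budget constraint
# P''^T b = 1 is moved into the objective as -lambda * (P''^T b - 1)^2.
# All coefficients are illustrative.
Ppp = [0.25, 0.5, 0.25]               # transformed prices P''
mu_pp = [0.01, 0.02, 0.015]           # transformed returns mu''
Sigma_pp = [[0.0004, 0.0001, 0.0],    # transformed covariance Sigma''
            [0.0001, 0.0009, 0.0002],
            [0.0, 0.0002, 0.0006]]
q, lam = 0.5, 10.0

def qubo_objective(b):
    ret = sum(m * bi for m, bi in zip(mu_pp, b))
    risk = sum(b[i] * Sigma_pp[i][j] * b[j]
               for i in range(3) for j in range(3))
    penalty = (sum(p * bi for p, bi in zip(Ppp, b)) - 1.0) ** 2
    return ret - q * risk - lam * penalty

feasible = qubo_objective([1, 1, 1])      # P''^T b = 1: no penalty
infeasible = qubo_objective([0, 1, 0])    # P''^T b = 0.5: penalized
print(feasible, infeasible)
```

Choosing \(\lambda\) too small leaves constraint-violating bitstrings competitive; too large, and the penalty dominates the return/risk landscape, which is why the text calls it a key hyperparameter.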

There is a strong connection, technically an isomorphism, between QUBO problems and the Ising Hamiltonian40: the Ising Hamiltonian was originally constructed to understand the microscopic behavior of magnetic materials, particularly to grasp the conditions that lead to a phase transition. However, its relative simplicity and natural mapping into QUBO have made the Ising model a fundamental benchmark well beyond the field of quantum physics. To convert (13) into an Ising form, it is convenient to expand it in its components:

$$\begin{aligned} {\mathscr{L}}(b): \sum_{i}\mu''_{i}b_{i} - q\sum_{i,j}\Sigma''_{i,j}b_{i}b_{j} - \lambda\left(\sum_{i}P''_{i}b_{i}-1\right)^{2}, \end{aligned}$$

(14)

where \(\mu''_{i}\), \(\Sigma''_{i,j}\), \(P''_{i}\) are the components of the transformed return, covariance, and price, respectively, and \(i,j\in[1,\dim(b)]\). Since the Ising model represents spin variables \(s_{i}\), which take values \(\{-1,1\}\), the transformation \(b_{i}\rightarrow\frac{1+s_{i}}{2}\) is applied and coefficients are re-arranged to obtain the Ising objective function to minimize:

$$\begin{aligned}&\underset{s}{\min}\,{\mathscr{L}}(s): \underset{s}{\min}\left(\sum_{i}h_{i}s_{i}+\sum_{i,j}J_{i,j}s_{i}s_{j}+\lambda\left(\sum_{i}\pi_{i}s_{i}-\beta\right)^{2}\right),\\&\text{s.t.} \quad s_{i}\in\{-1,1\} \quad \forall i, \end{aligned}$$

(15)

with \(J_{i,j}\) being the coupling term between two spin variables. It is now straightforward to obtain the corresponding quantum Hamiltonian, whose eigenvector corresponding to the minimum eigenvalue is the solution: in fact, the eigenvalues of the Pauli operator Z are \(\pm 1\), so it is suitable for describing the classical spin variables \(s_{i}\). Furthermore, the two-body interaction term can be modeled with the tensor product of two Pauli operators, i.e., \(Z_{i}\otimes Z_{j}\). The quantum Ising Hamiltonian reads:

$$\begin{aligned} H= \sum_{i}h_{i}Z_{i} + \sum_{i,j}J_{i,j}Z_{i}\otimes Z_{j}+\lambda\left(\sum_{i}\pi_{i}Z_{i}-\beta\right)^{2}. \end{aligned}$$

(16)

With the procedure described above, the integer quadratic optimization problem of portfolio allocation with a budget constraint is expressed first as a binary problem via the binary encoding, then translated into a QUBO by transforming the constraint into a penalty term weighted by the chosen penalty coefficient, and finally into a quantum Hamiltonian written in terms of Pauli gates. Hence, the PO problem (7) is now formulated as the search for the ground state, i.e., the minimum-energy eigenstate, of the Hamiltonian (16), which corresponds to the optimal portfolio. Therefore, it is possible to use the VQE, employing real quantum hardware, to iteratively approximate such a state, as described in the following section.
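The change of variables \(b_i=(1+s_i)/2\) leading to (15) amounts to a re-arrangement of coefficients; a generic sketch (toy QUBO coefficients, converter verified against direct evaluation):

```python
# Sketch of the QUBO -> Ising change of variables b_i = (1 + s_i) / 2:
# expand a generic quadratic form and collect the constant, linear (h_i)
# and quadratic (J_ij) coefficients. Toy QUBO coefficients below.
def qubo_to_ising(a, Q):
    """a: linear terms; Q: symmetric matrix of pairwise terms (each pair
    counted once in the QUBO, i < j). Returns (const, h, J) such that
    f(b) = const + sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, s_i = 2b_i - 1."""
    n = len(a)
    const = sum(a) / 2 + sum(Q[i][j] for i in range(n)
                             for j in range(i + 1, n)) / 4
    h = [a[i] / 2 + sum(Q[i][j] for j in range(n) if j != i) / 4
         for i in range(n)]
    J = [[Q[i][j] / 4 for j in range(n)] for i in range(n)]
    return const, h, J

a = [1.0, -2.0]
Q = [[0.0, 0.5], [0.5, 0.0]]
const, h, J = qubo_to_ising(a, Q)

# Numerical check: both forms agree on every bit assignment
for b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    s = [2 * bi - 1 for bi in b]
    f_qubo = a[0] * b[0] + a[1] * b[1] + Q[0][1] * b[0] * b[1]
    f_ising = const + h[0] * s[0] + h[1] * s[1] + J[0][1] * s[0] * s[1]
    assert abs(f_qubo - f_ising) < 1e-12
print(const, h, J)
```

The constant term can be dropped when only the minimizer matters, which is why it does not appear in (15).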

The VQE is a hybrid quantum-classical algorithm41 based on the variational principle: it consists in the estimation of an upper bound on the lowest eigenvalue of a given observable with respect to a parameterized wave-function (ansatz). Specifically, given a Hamiltonian H representing the observable, and a parameterized wave-function \(|\psi(\theta)\rangle\), the ground-state energy \(E_{0}\) satisfies

$$\begin{aligned} E_{0}\le\frac{\langle\psi(\theta)|H|\psi(\theta)\rangle}{\langle\psi(\theta)|\psi(\theta)\rangle}, \quad \forall\,\theta. \end{aligned}$$

(17)

Hence, the task of the VQE is finding the optimal set of parameters, such that the energy associated with the state is nearly indistinguishable from the ground-state energy, i.e., finding the set of parameters \(\theta\), corresponding to energy \(E_{min}\), for which \(|E_{min}-E_{0}|\) falls below the desired tolerance:

$$\begin{aligned} E_{\text{min}}=\underset{\theta}{\min}\,\langle\textbf{0}|U^{\dagger}(\theta)HU(\theta)|\textbf{0}\rangle, \end{aligned}$$

(18)

where \(U(\theta)\) is the parametrized unitary operator that gives the ansatz wave-function when applied to the initial state, and \(E_{min}\) is the energy associated with the parametrized ansatz. The Hamiltonian H, defined for the specific problem and in this case corresponding to (16), can be written in a specific operator basis that makes it naturally measurable on a quantum computer: this choice depends on the architecture considered. In this work, given the extensive use of the IBM Quantum Experience42, it is convenient to map the Hamiltonian into the spin-operator basis. This basis is formed by tensor products of Pauli operators (Pauli strings): \(P_{l}\in\{I,X,Y,Z\}^{\otimes N}\). In this basis the Hamiltonian can always be written in the general form \(H=\sum_{l}^{D}c_{l}P_{l}\), where D is the number of Pauli strings that define the Hamiltonian and \(c_{l}\) is a suitable set of weights. It follows that the VQE in Eq. (18) can be written as:

$$\begin{aligned} E_{\text{min}}=\underset{\theta}{\min}\sum_{l}^{D}c_{l}\langle\textbf{0}|U^{\dagger}(\theta)P_{l}U(\theta)|\textbf{0}\rangle. \end{aligned}$$

(19)

Each term in Eq. (19) corresponds to the expectation value of the string \(P_{l}\) and is computed on quantum hardware (or a simulator). The summation and the optimization of the parameters are computed on a classical computer, choosing an ad-hoc optimizer. The eigenvector corresponding to the ground state corresponds to the solution of problem (13), thus to the optimal portfolio.
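For a Hamiltonian built only from I and Z operators, as in (16), each Pauli string is diagonal, and its expectation value in Eq. (19) reduces to a signed sum over measured bitstring probabilities; a sketch with a hypothetical two-qubit measurement distribution and illustrative weights \(c_l\):

```python
# Sketch of evaluating Eq. (19) terms classically for diagonal Pauli
# strings (only I and Z): the expectation is a weighted sum of the
# Z-eigenvalues (+1/-1) over bitstring probabilities. Toy data below.
def z_string_expectation(probs, pauli):
    """probs: dict bitstring -> probability; pauli: e.g. 'ZI' or 'ZZ'.
    Each 'Z' contributes -1 for a measured 1 and +1 for a 0; 'I' is +1."""
    exp = 0.0
    for bits, p in probs.items():
        eig = 1
        for ch, bit in zip(pauli, bits):
            if ch == "Z" and bit == "1":
                eig *= -1
        exp += eig * p
    return exp

# Hypothetical measurement statistics of a 2-qubit ansatz state
probs = {"00": 0.5, "11": 0.5}

# H = 0.3 * ZI + 0.3 * IZ + 0.4 * ZZ  (illustrative weights c_l)
energy = (0.3 * z_string_expectation(probs, "ZI")
          + 0.3 * z_string_expectation(probs, "IZ")
          + 0.4 * z_string_expectation(probs, "ZZ"))
print(energy)
```

On hardware, `probs` would come from repeated measurements of the ansatz circuit; the classical outer loop then adjusts \(\theta\) to lower this energy.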

Figure 1: Schematic of the VQE algorithm. The ansatz wave-function \(|\psi(\theta)\rangle\) is initialized with random parameters and encoded in a given set of quantum gates. The PO problem is translated into an Ising Hamiltonian and encoded into a set of Pauli gates. The collection of output measurements allows the reconstruction of the expectation value of the Hamiltonian H, which is the energy that needs to be minimized. A classical optimization algorithm provides an update rule for the parameters of the wave-function, which ideally moves iteratively towards the ground state of the problem, thus providing an estimation of the corresponding eigenstate. This corresponds to the solution of the original PO problem.

In light of what is stated above, the complete VQE estimation process can be decomposed into a series of steps, as depicted in Fig. 1. First, it is necessary to prepare a trial wave-function (ansatz), realized via a parameterized quantum circuit, on which the expectation value is evaluated. Then, it is necessary to define the Hamiltonian (16), whose ground state is the solution to the problem to be addressed, and convert it into the Pauli basis so that the observable can be measured on the quantum computer. Finally, the parameters are trained using a classical optimizer. This hybrid system ideally converges to a form that produces a state compatible with the ground state of the Hamiltonian.

This procedure includes two hyperparameters that have to be set, i.e., the type of ansatz and the optimizer. When defining the ansatz, two main features have to be taken into account: its expressivity, i.e., the set of states that can be spanned by the ansatz itself, and its trainability, i.e., the ability of the ansatz to be optimized efficiently with available techniques. It is worth pointing out the problem of the barren plateau43, i.e., the possibility of cost-function gradients vanishing exponentially, as a function of the specific characteristics of the problem to be solved. The barren plateau depends on the number of qubits, the expressivity of the ansatz wave-function, the degree of entanglement, and the quantum noise44. There are several methods to avoid or mitigate the effect of the barren plateau, especially in the context of VQE, most of which consist in finding a trade-off between the expressivity of the ansatz and its trainability, and in reducing the effective size of the Hilbert space of the problem formulation45.

The following ansatzes are available in Qiskit and are analyzed in this work: the Two Local ansatz, where qubits are coupled in pairs; the Real Amplitudes ansatz, which assumes real-valued amplitudes for each basis element of the wave-function; and the Pauli Two-Design ansatz, used mainly in quantum machine learning for the mitigation of barren plateaus46. Although other ansatzes are provided in Qiskit, they are generally unsuitable for a PO problem. For instance, the Excitation Preserving ansatz preserves the ratio between basis-vector components, hence it does not allow, in principle, any weight imbalance in the output distribution while moving towards the solution of the problem.

For all the ansatzes considered, convergence is checked under four different assumptions on the entanglement structure of the wave-function, namely full, linear, circular and pairwise entanglement. In the full case, every qubit is entangled pairwise with all the others. In the linear case, entanglement is built between consecutive pairs of qubits. The circular case is equivalent to the linear one but with an additional entanglement layer connecting the first and the last qubit before the linear sector. Finally, in the pairwise construction, in one layer the ith qubit is entangled with qubit \(i+1\) for all even i, and in a second layer qubit i is entangled with qubit \(i+1\) for odd values of i.

Once the ansatz is defined, its parameters must be optimized classically until convergence is reached. The choice of the optimizer is crucial because it impacts the number of measurements necessary to complete the optimization cycle: when properly chosen, it can mitigate the barren plateau problem and minimize the number of iterations required to reach convergence. In this work, dealing with the PO problem, different optimizers are tested to select the one that fulfills this task fastest, among those available in Qiskit, i.e., COBYLA, SPSA, and NFT47.

The experimental results presented in this work are obtained on real quantum hardware, specifically using the platforms provided by IBM superconducting quantum computers. These quantum machines belong to the class of NISQ (Noisy Intermediate-Scale Quantum) devices, i.e., a class of hardware with a limited number of qubits and where noise is not suppressed. Noise in quantum computers comes from various sources: decoherence, gate fidelities, and measurement calibration. Decoherence is the process that most quantum mechanical systems undergo when interacting with an external environment48. It causes the loss of virtually all the quantum properties of the qubits, which then collapse into classical bits. Gate fidelities measure the ability to physically implement the desired quantum gates: in IBM superconducting-qubit hardware, gates are constructed via pulses, which are shaped and designed to control the superconductors. Given the limited ability to strictly control these pulses, a perfect gate implementation is highly non-trivial and subject to imperfections. Last, measurement errors are caused by the limits of the measurement apparatus, improper calibration, and imperfect readout techniques. Hence, NISQ devices do not always provide reliable results due to the lack of fault tolerance. However, they provide a good benchmark for testing the possibilities of quantum computing. Furthermore, research is ongoing into the use of NISQ devices in practical applications, such as machine learning and optimization problems.

In this work, both simulators and real quantum computers are used. Even though error mitigation techniques49 could be applied, the main goal of this paper is to test the performance of quantum computers on a QUBO problem such as PO, without error mitigation, with the binary encoding strategy and the budget constraint described in the previous sections. Therefore, in all computations there is no error mitigation, the aim being to build an indirect but comprehensive analysis of the hardware limitations and to improve the quality of the results through a proper selection of the hyperparameters. This will provide a solid benchmark for the subsequent experimental stages, which will be enabled in the coming years by large and nearly fault-tolerant quantum computers.

Hence, the experiments run on simulators (without noise) are also executed by adding noise mimicking real hardware: this operation can be readily implemented on Qiskit by inserting a noise model containing the decoherence parameters and the gate error rate from real quantum hardware.

Moreover, experiments are run on IBM NISQ devices with up to 27 qubits. Specifically, a substantial subset of the quantum computers available in the IBM Quantum Experience was employed: IBM Guadalupe, Toronto, Geneva, Cairo, Auckland, Montreal, Mumbai, Kolkata, and Hanoi. These machines have either 16 or 27 qubits, but different quantum volumes (QV) and Circuit Layer Operations Per Second (CLOPS). QV and CLOPS are useful metrics for characterizing the performance of a quantum computation pipeline50. Generally, a bigger QV means that the hardware can sustain deeper circuits at a relatively small cost in performance, while CLOPS quantifies the number of operations the hardware can handle per unit of time. Altogether, they qualify the quality and speed of quantum computation.


Cleveland Clinic makes quantum computing available to startups – Healthcare IT News

With a new accelerator program offered by Cleveland Clinic, early- and growth-stage healthcare innovation companies could gain access to the massively powerful data and analytics insights promised by its new quantum computer, billed as the first such machine dedicated to healthcare research.

Through the Clinic Quantum Innovation Catalyzer Program, these startups will also be able to collaborate with Cleveland Clinic researchers and investigators, and participate in the Quantum Working Group on Healthcare and Life Sciences established with IBM Quantum.

WHY IT MATTERS

Cleveland Clinic launched the program to foster collaboration with the Cleveland Clinic-IBM Discovery Accelerator, which is the Cleveland Innovation District's technology foundation of biomedical research, according to Friday's announcement.

The State of Ohio, JobsOhio and Cleveland's healthcare and higher education institutions formed the district to create jobs, accelerate research and educate the future workforce.

"Through the Discovery Accelerator, Cleveland Clinic and our partners at IBM are exploring quantum computing's vast potential to transform medicine, from drug discovery to digital health and biomarker analysis," said Dr. Lara Jehi, Cleveland Clinic's chief research information officer, in a statement.

The program seeks those building quantum computing algorithms and exploring the utility of quantum computing for computational biomedicine in order "to address some of healthcare's most intractable problems," according to Cleveland Clinic's Office of Research Development in the competition program description.

In addition to access to Cleveland Clinic's IBM quantum computer and opportunities to discuss research ideas and projects with clinicians and investigators, selected start-ups can take advantage of technical seminars, healthcare programs and quantum education.

At the end of the program, Cleveland Clinic said it would host a demo day spotlighting the cohort of startups to a curated audience of investors and clinical, corporate and ecosystem partners.

Proposal submissions are due by January 15, 2024, and teams will be selected by March 15.

THE LARGER TREND

In March, Cleveland Clinic and IBM announced the first quantum computer dedicated to healthcare research. Cleveland Clinic plans to use System One's computing power to speed biomedical discoveries across an array of clinical and pharmaceutical needs as part of its 10-year agreement with IBM.

Near-term prospects for quantum computing in healthcare and medicine include genomic sequence analysis, virtual screening in drug discovery, medical-image classification, disease-risk prediction and adaptive radiotherapy, according to Frederik Flöther, lead quantum and deputy CEO at QuantumBasel, uptownBasel Infinity Corp.

He told Healthcare IT News in April that health system IT leaders should also be exploring quantum-safe solutions for data security.

"It is imperative that organizations, particularly those dealing with sensitive data that need to be kept secure for a long time (as is common in the medical space), start developing roadmaps for the transition to quantum-safe cryptographic standards."

ON THE RECORD

"We look forward to welcoming the first class of start-ups to our new Cleveland Clinic Quantum Innovation Catalyzer Program and helping them to leverage quantum to make breakthroughs in healthcare as we grow an ecosystem of advanced computation for healthcare and life sciences," Jehi said in the announcement.

Andrea Fox is senior editor of Healthcare IT News. Email: afox@himss.org. Healthcare IT News is a HIMSS Media publication.

See the original post:
Cleveland Clinic makes quantum computing available to startups - Healthcare IT News

Read More..

GENCI/CEA, FZJ, and PASQAL Announce Significant Milestone in … – HPCwire

DENVER, Nov. 9, 2023 In the context of the SuperComputing 2023 conference in Denver (SC23), Grand Équipement National de Calcul Intensif (GENCI), Commissariat à l'énergie atomique et aux énergies alternatives (CEA), Forschungszentrum Jülich (FZJ), and PASQAL are demonstrating progress in the framework of the European project High-Performance Computer and Quantum Simulator hybrid (HPCQS).

HPC-Quantum Computing applications in finance, pharma, and energy are leveraging the quantum computers currently being installed at the supercomputing centers CEA/TGCC (France) and FZJ/JSC (Germany), and are already providing concrete results.

Now, PASQAL is delivering two 100+-qubit quantum computers to its first customers in France (GENCI/CEA) and Germany (FZJ). These devices, acquired in the framework of the European project HPCQS, and co-funded by the EuroHPC Joint Undertaking, France and Germany, will be coupled respectively with the Joliot-Curie and JURECA DC supercomputers.

Over the past months, several HPC-Quantum Computing and Simulation (HPC-QCS) applications have been studied on the targeted 100+-qubit quantum computing platform based on neutral atoms. These explorations have involved several industrial partners from various fields who provided practical use cases that, with the support of the PASQAL team, were ported to the quantum system, enabling progress toward more efficient drugs, more efficient electricity consumption, and competitive advantage in risk management.

A significant illustration of this is the development of a novel quantum algorithm to accelerate drug discovery. A joint collaboration between PASQAL and the startup Qubit Pharmaceuticals was launched at the end of 2021 as an 18-month project, co-funded by the Pack Quantique (PAQ) initiative of the Région Île-de-France. This collaboration aims at improving the understanding of protein hydration, a crucial element in determining how a medicine candidate can inhibit the toxic behavior of the targeted protein. A preliminary version of the algorithm for identifying the presence of water molecules in the pockets of a protein has been implemented on PASQAL's analog quantum computer, validating theoretical predictions with an impressive match. The follow-up of this project is being co-funded by the Wellcome Trust Quantum for Bio program.

PASQAL will showcase these explorations and their commercial and strategic advantages at the booths of both CEA and FZJ/JSC at the SuperComputing 2023 conference in Denver through live demos.

The two PASQAL quantum computers will be accessible to a wide range of European users in 2024. They are the first building blocks of a federated European HPC-QCS infrastructure that will also consist of the six quantum computers acquired by the EuroHPC JU and hosted in France (GENCI/CEA), Germany (LRZ), Czech Republic (IT4I @ VSB), Poland (PSNC), Spain (BSC-CNS) and Italy (CINECA).

HPCQS users are already able to validate their use cases through various entry points, such as the Pulser environment deployed on the Joliot-Curie and JURECA DC environments, as well as through remote access to a 100+-qubit device hosted on PASQAL's premises in Massy, France. Currently, some HPCQS users from JSC are performing remote simulations on this device to benchmark it and to demonstrate quantum many-body scarring, a phenomenon that has recently attracted a lot of interest in the foundations of quantum statistical physics and in potential quantum information processing applications. European end-users will also soon have access to a more scalable, tensor network-based emulator from PASQAL, called EMU-TN, which will also be deployed in both the French and German environments.

About HPCQS

HPCQS is an open and evolutionary infrastructure that aims at expanding in the future by including a diversity of quantum computing platforms at different technology readiness levels and by allowing the integration of other European quantum nodes. The HPCQS infrastructure realizes, after the Jülich UNified Infrastructure for Quantum computing (JUNIQ), a second step towards a European Quantum Computing and Simulation Infrastructure (EuroQCS), as advocated for in the Strategic Research Agenda of the European Quantum Flagship of 2020. At FZJ, HPCQS is fully integrated in JUNIQ. During the preparations for the Strategic Research and Industry Agenda (SRIA 2030) for Quantum Technologies in the European Union, the name of the EuroQCS infrastructure was changed to EuroHPC-QCS to emphasize the involvement of HPC as well.

Source: Grand Équipement National de Calcul Intensif

Read the rest here:
GENCI/CEA, FZJ, and PASQAL Announce Significant Milestone in ... - HPCwire

Read More..

IQM Quantum Computers launches IQM Radiance – a 150 qubit system paving the way to quantum advantage – Yahoo Finance

IQM Radiance comes in two variants: a 54-qubit system targeted for availability in Q3 2024, and a 150-qubit system targeted from Q1 2025.

Aiming to pave the way to quantum advantage, using the 150-qubit system as a stepping-stone focusing on high-quality qubits and gates.

IQM Radiance is designed for businesses, high performance computing centers, data centers and governments.

ESPOO, Finland, Nov. 9, 2023 /PRNewswire/ -- IQM Quantum Computers (IQM), a global leader in building quantum computers, today unveiled its quantum computing platform, "IQM Radiance", aiming to pave the way to quantum advantage in the coming years with a 150-qubit quantum system.

IQM Radiance

IQM Radiance offers quantum computing capabilities to businesses and governments and can be deployed in high-performance computing and data centres.

"This is the right moment for businesses to invest and harness quantum advantage as early as possible to gain a competitive edge. IQM Radiance allows enterprises to target real-life use cases, testing applications with the most business potential. High-potential areas include machine learning, cybersecurity, system control, energy grid and route optimisation, drug and chemical research and carbon capture," says Dr. Jan Goetz, CEOand Co-founder of IQM Quantum Computers.

Charting out the path to quantum advantage

IQM Radiance follows the launch of IQM Spark, a quantum computer with a pre-installed 5-qubit quantum processing unit tailored for universities and research institutions for learning and giving users full control of experiments.

Radiance starts as a 54-qubit system, and IQM plans for it to be available in 2024 to provide early adopters with the opportunity to master system operations, integrate systems into existing environments, explore algorithm behaviour, and perform quantum advantage experiments.

In addition, IQM will provide customers the opportunity to upgrade the 54-qubit system to a 150-qubit system in 2025. IQM will continue to support customers on their path to quantum advantage by replacing the initial 150-qubit chips with higher-performance chips as soon as these are available. This will enable customers to bring added value to end users, solving real-life problems with less computing time or less power, or achieving more accurate results, compared to the best classical device of similar size, weight, and cost.


"Radiance will be an enterprise-graded system for which we are optimistic that it will bring quantum utility to some applications even with a relatively modest number of quality qubits. Through the acquisition of IQM Radiance, businesses will gain a significant head start on practical applications and system integration. Our upgrade path allows early adopters to start with a smaller system while receiving a larger system with a significant leap in computing power later," explains Dr. Bjrn Ptter, Head of Product at IQM Quantum Computers.

IQM has already demonstrated its technical capabilities in developing technologies to scale up quantum computers in a successful partnership with institutions such as the VTT Technical Research Centre of Finland, where it delivered a remarkable 20-qubit quantum computer, achieving outstanding results. IQM plans to pilot the delivery of a 54-qubit system to VTT in the second quarter of 2024.

"To meet the needs of our customers, we have a product portfolio with offerings that cover the low- to high-end segment of the market," adds Ptter.

About IQM Quantum Computers:

IQM is a global leader in building quantum computers. IQM provides on-premises quantum computers for supercomputing centres and research labs and offers full access to its hardware. For industrial customers, IQM delivers quantum advantage through a unique application-specific, co-design approach. IQM's commercial quantum computers include Finland's first commercial 50-qubit quantum computer with VTT, IQM-led consortium's (Q-Exa) HPC quantum accelerator in Germany, and IQM processors will also be used in the first quantum accelerator in Spain. IQM has over 280 employees with offices in Paris, Madrid, Munich, Singapore, and Espoo.

http://www.meetiqm.com


View original content:https://www.prnewswire.com/apac/news-releases/iqm-quantum-computers-launches-iqm-radiance--a-150-qubit-system-paving-the-way-to-quantum-advantage-301981384.html

SOURCE IQM Quantum Computers

View post:
IQM Quantum Computers launches IQM Radiance - a 150 qubit system paving the way to quantum advantage - Yahoo Finance

Read More..

Australia to buy quantum computer from US | Information Age | ACS – ACS

The Commonwealth is planning to build a quantum computer. Image: Shutterstock

EXCLUSIVE: The Commonwealth government is looking to buy a quantum computing system through a secret procurement process that is rumoured to favour a US-based company, leaving Australia's quantum sector annoyed by the apparent snub.

Sources told Information Age the government has been looking to buy its first quantum computer from PsiQuantum, a California-based firm with a stated mission to build and deploy the world's first useful quantum computer.

The Department of Industry and Science did not respond to Information Age's request for comment.

Australia has a wealth of local expertise in quantum technologies and has, for decades, been a world leader in the nascent field's research and development.

When Industry and Science Minister Ed Husic took office last year, he publicly expressed a desire to take advantage of local talent, knowledge, and manufacturing capabilities to make Australia "the quantum capital of the globe".

Indeed, Husic's department led the development of Australia's first quantum strategy.

But the government's apparent move to go overseas for what one insider described as Australia's biggest ever investment in quantum has been seen by many in the industry as a slap in the face.

Husic's office did not respond to Information Age's request for comment.

One industry source, who wished to remain anonymous, questioned why there wasn't an open tender process and said they would have liked the opportunity to form a consortium of Australian companies to apply.

While they didn't disagree in principle with the idea of the Commonwealth buying a quantum computer, the quantum expert said a government decision to buy technology from a US-based company could negatively impact how the local industry is perceived by international investors and buyers.

The government has not previously stated an intention to buy a quantum computer. In this year's budget the Department of Industry and Science added around $20 million for a quantum commercialisation centre and $40 million for the Critical Technologies Challenges Program.

Internationally, government-funded quantum computing projects have proved expensive. The Finnish government last month committed $116 million (€70 million) to scale up its 20-qubit system, while Germany announced in May that it will pour around $5 billion (€3 billion) into building a 100-qubit system by 2026.

Simon Devitt, a senior lecturer at the University of Technology Sydney and member of the government's National Quantum Advisory Committee, was willing to publicly state that he thinks the government buying as-yet-unproven technology is a ludicrous waste of money that would be better spent on funding to shore up local academic research.

"These systems are often extremely expensive and their value is questionable at the very least," he told Information Age.

"They do not provide any kind of commercial utility for HPC [high-performance computing], and the utility for developing quantum algorithms or in education is essentially non-existent."

Devitt could not speak to anything discussed in the National Quantum Advisory Committee.

Why quantum?

Quantum computers are probabilistic and can theoretically solve problems that would take a classical computer thousands of years to compute.

They have potential applications in areas like cryptography, finance, and pharmaceutical development, although quantum advantage (the ability for one of these systems to outperform classical supercomputers) has yet to be proven outside niche experimental settings.

Companies around the world are exploring different ways to create and maintain systems of sufficiently large, error-corrected quantum bits (qubits).

PsiQuantum is pursuing photonic quantum computing technology which involves storing and processing information using individual quanta of light.

The company claims its chips can be rigorously tested using industrial-scale facilities at room temperature, which gives them an edge over technologies that must remain cryogenically cooled for longer parts of the testing phase.

Photonic quantum computing is not entirely room temperature, however, since photon detectors still need to be cooled to near absolute zero.

Individual quantum photonic chips may have fewer qubits than competing technologies, but using light as a foundation may allow a cluster of connected chips to pass quantum information between one another via fibre optic cables, scaling up systems with existing technology.

PsiQuantum has an Australian link through its CEO and co-founder Professor Jeremy O'Brien, who studied in Queensland and Western Australia and completed his PhD at the University of New South Wales.

The company is partnered with US semiconductor firm GlobalFoundries, which produces PsiQuantum's photonic chip wafers at an industrial scale.

PsiQuantum did not respond to Information Age's request for comment.

View post:
Australia to buy quantum computer from US | Information Age | ACS - ACS

Read More..

Optimizing quantum noise-induced reservoir computing for … – Nature.com

Theoretical framework

We develop QNIR theory starting from general RC theory. RC is a computational paradigm and class of machine learning algorithms that derives from RNNs. RC involves mapping input signals, or time series sequences, into higher dimensional feature spaces provided by the dynamics of a non-linear system with fixed coupling constants, called a reservoir. Having a smaller number of trainable weights confined to a single output layer is a core benefit of RC because it makes training fast and efficient compared to RNNs. RC has a number of properties that should be met [28,29], including adequate reservoir dimensionality, nonlinearity, fading memory/echo state property (ESP) and response separability.

For the univariate case, a reservoir, \(f\), is a recurrent function of an input sequence, \(u_t\), and prior reservoir states, \(\bar{x}_{t-1}\), as

$$\begin{aligned} \bar{x}_t = f(\bar{x}_{t-1}, u_t). \end{aligned}$$

(1)

As output sequences, \(\bar{x}_t\), training sequences are selected between time-steps \(t=t_i\) and \(t=t_f\), and form a training design matrix, \(\textbf{X}_{tr}\). The initial sequence, \(t < t_i\), is discarded as a washout period. A linear model,

$$\begin{aligned} \textbf{y} = W^T \textbf{X}_{tr}, \end{aligned}$$

(2)

is trained based on least squares, where \(\textbf{y}\) is the target vector and \(W\) is an initial weight vector. The trained model has the form:

$$\begin{aligned} \hat{\textbf{y}} = W^T_{opt}\textbf{X}, \end{aligned}$$

(3)

with an optimized weight vector, \(W^T_{opt}\), to give a predicted sequence, \(\hat{\textbf{y}}\), from new sequences, \(\textbf{X}\).
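The readout in Eqs. (1)-(3) is the only trained part of the model: a least-squares fit of a weight vector over the reservoir's feature sequences. A minimal sketch in Python/NumPy (the array names and synthetic data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reservoir design matrix X_tr: rows = feature sequences,
# columns = time steps, plus a target sequence y.
n_features, n_steps = 8, 200
X_tr = rng.standard_normal((n_features, n_steps))
w_true = rng.standard_normal(n_features)
y = w_true @ X_tr  # synthetic target so the least-squares fit is exact

# Train the readout: solve y = W^T X_tr for W in the least-squares sense.
W_opt, *_ = np.linalg.lstsq(X_tr.T, y, rcond=None)

# Predict from new reservoir outputs X with the fixed, trained readout.
X_new = rng.standard_normal((n_features, 50))
y_hat = W_opt @ X_new
```

Because only this output layer is trained, the reservoir itself stays fixed, which is the efficiency advantage over RNNs noted above.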

Circuit channel diagrams of the QNIR computer in the unrolled view, composed using [30]. The initial state of the quantum reservoir is \(|+\rangle^{\otimes n}\) and the quantum channels labeled \(\mathscr{T}_{u_i}\) evolve the density operator as in Eq. (4), where \(N\) quantum circuits are required for \(N\) time steps. A number of output sequences, \(n\), are concatenated from sequential, single-qubit expectation value measurements \(\langle Z_{i} \rangle\) on \(n\) qubits.

For QNIR with artificial noise channels, the RC framework that has been developed is now instantiated in the following way. The density operator evolves in time steps as

$$\begin{aligned} \rho_t = \mathscr{T}_{u_t}(\rho_{t-1}), \end{aligned}$$

(4)

where the reservoir map \(\mathscr{T}_{u_t}\) is composed of a sequence of unitary quantum gates, \(U_i\), and associated artificial noise channels, \(\mathscr{E}_i\), that are completely positive and trace preserving (CPTP). The reservoir map can be represented as a composition of quantum channels

$$\begin{aligned} \mathscr{T}_{u_t}(\rho_{t-1}) = \mathscr{E}_{U_K} \circ \ldots \circ \mathscr{E}_{U_2} \circ \mathscr{E}_{U_1}(\rho_{t-1}), \end{aligned}$$

(5)

where the notation \(\mathscr{E}_{U_i} = \mathscr{E}_i(U_i \rho U_i^{\dagger})\) is used for clarity and to emphasize that each quantum gate is acted on by a noisy channel, and \(K\) is the number of noise channels in the time step. We will refer to \(\mathscr{T}_{u_t}\) as a noisy quantum circuit. QNIR requires an initial washout phase, \(t < t_i\).

The unitary, noiseless part of the quantum circuit is composed of an initial layer of \(RX\) gates followed by an entanglement scheme of \(RZZ_{i,j}\) gates, which are 2-qubit entangling gates

$$\begin{aligned} (CX_{i,j} RZ_j(\theta) CX_{i,j}) RX^{\otimes n}(\theta) = RZZ_{i,j}(\theta) RX^{\otimes n}(\theta), \end{aligned}$$

(6)

where all \(RX(\theta)\) and \(RZ(\theta)\) rotation gates encode the time series data with a scaling map, \(\theta = \phi(u)\). The purpose and structure of the unitary encoding gates is detailed in subsection: Reservoir circuit designs.

Single-qubit expectation values, \(\langle Z_{i} \rangle = Tr(Z_i \rho)\), are measured for all \(n\) qubits at each time-step,

$$\begin{aligned} h_t = [\langle Z_{1} \rangle, \langle Z_{2} \rangle, \ldots, \langle Z_{n} \rangle]^T, \end{aligned}$$

(7)

as shown in a circuit diagram in Fig. 1. Figure 2 depicts that time series values are encoded to all reservoir qubits and \(\langle Z_{i} \rangle\) are measured on all qubits, which are concatenated for each time step to give \(n\) reservoir feature sequences \(q_i = \{\langle Z_{i} \rangle\}_{t=0}^N\), where \(N\) is the number of time steps. In turn, the \(q_i\) form a design matrix \(\textbf{X}\) and the QNIR model is trained as in Eq. (3). A schematic of the full QNIR computer is shown in Fig. 3.

This drawing represents many repeats of data encoding of a single value, \(u_i\), to all reservoir qubits (left) and measurements of single-qubit \(Z\) expectation values (right). This two-part process occurs at each time step \(i\) to build feature signals by concatenation. Noisy quantum circuits are shown for each time step in Fig. 1. This drawing shows an example of a four-qubit reservoir with fixed, pair-separable dynamics.

In this graphic the first layer contains an array of duplicates of a single time series value. Each value in the input array is encoded to all qubits of the reservoir as in Eq. (6). The second layer is a quantum reservoir with an arbitrary entanglement scheme, represented by connecting lines between qubit nodes. The \(Z\) observable expectation value, \(\langle Z_{i}\rangle\), is measured for all qubits. These measurements are repeated and concatenated to build output signals, \(q_i\). In the final layer, these signals are used in multiple linear regression for time series prediction, as in Eq. (3).

It is important in RC, and by extension QRC, that the reservoir system can capture the temporal dynamics of the target system. To ensure this we implement a reservoir optimization scheme for QNIR. The artificial noise channels, \(\mathscr{E}_i\), of the quantum reservoir circuit are iteratively updated by an optimization routine with an MSE cost function based on the time series prediction performance. This serves to optimize the quantum reservoir for time series prediction. Details of the optimization approach are in subsection: Reservoir noise parameterization.

This section is concerned with the architecture and purpose of the unitary gates of the quantum circuit, the high-level structure of the noisy quantum circuits and entanglement scheme. The details of the noise scheme are covered in subsection: Reservoir noise parameterization.

The initial state of the quantum reservoir, \(|+\rangle^{\otimes n}\), is prepared by an initial Hadamard gate layer. Continuing with Eq. (6), an \(n\)-qubit QNIR circuit has a fixed sequence of quantum gates

$$\begin{aligned} U_{b}(u) &= (CX_{i,j} RZ_j(\theta) CX_{i,j}) RX^{\otimes n}(\theta) \\ &= RZZ_{i,j}(\theta) RX^{\otimes n}(\theta), \end{aligned}$$

(8)

where \(i, j\) are indices for two qubits that denote the placement of multiple 2-qubit \(RZZ\) entangling gates. The decomposed form of the circuit with \(CX\) and \(RZ\) gates [23] is implemented with noise channels (see subsection: Reservoir noise parameterization). A time series data value, \(u\), is encoded to all \(RX(\theta)\) and \(RZZ(\theta)\) gates by angle \(\theta = \phi(u)\), where \(\phi\) is a scaling map.

To implement the recurrent architecture of QNIR, a set of \(N\) quantum circuits are executed for a time series \(\{u_t\}^N_{t=0}\). The first circuit encodes \(\{u_0\}\), the second circuit encodes \(\{u_0, u_1\}\), and the \(N\)th circuit encodes \(\{u_t\}^N_{t=0}\) as

$$\begin{aligned} \text{U}_{t=N} = U_{b}(u_N) \ldots U_{b}(u_1) U_{b}(u_0). \end{aligned}$$

(9)

All unitaries \(\text{U}_t\) for arbitrary \(t\) constrain the \(\langle Z_i \rangle\) expectation values to a zero bitstring

$$\begin{aligned} \langle Z_{i} \rangle_{t} = \langle \Phi_0 | \text{U}^{\dagger}_t Z_i \text{U}_t | \Phi_0 \rangle = 000\ldots, \end{aligned}$$

(10)

where \(|\Phi_0\rangle = |+\rangle^{\otimes n}\) is the initial reservoir state and \(Z_i\) represents the \(n\) single-qubit \(Z\) measurement operators. It is the action of the noise that ensures the qubit signals are non-zero feature sequences, \(q_i\). Now considering the full QNIR circuits with artificial noise, the noisy quantum circuit for the final iteration, encoding \(\{u_t\}^N_{t=0}\), is the quantum channel

$$\begin{aligned} \mathscr{T}_{N} = \mathscr{T}_{u_N} \circ \ldots \circ \mathscr{T}_{u_2} \circ \mathscr{T}_{u_1}. \end{aligned}$$

(11)

The noisy quantum circuit with artificial noise scheme will be detailed in the next subsection: Reservoir noise parameterization. This scheme may further reduce resources by circuit truncation based on a memory criterion [29,31,32,33].

For \(RZZ_{i,j}\) gates, the degree of entanglement between qubits \(i\) and \(j\) is a function of \(u_t\). It is important that the range of magnitudes of the data values is constrained, and we observe that values much larger than \(2\pi\) cause undesirable effects. We consider benchmarks that do not require re-scaling.

Drawing from the close connection with quantum feature maps [23,34,35,36], entanglement schemes are defined by the number and placement, i.e. the architecture, of \(RZZ\) gates in Eq. (6). Common entanglement schemes that could be trialed are full, linear, pair-wise, and what we call pair-separable, used in Suzuki et al. [11]. The pair-separable (PS) and linear entanglement (LE) schemes explored in this work have \(RZZ\) gates indexed as \(i,j \in \{(0,1),(2,3),(4,5),\ldots,(N-1,N)\}\) and \(i,j \in \{(0,1),(1,2),(2,3),\ldots,(N-1,N)\}\) respectively. To clarify, for an LE scheme, every additional \(RZZ\) gate is in a new circuit layer, increasing the circuit depth each time. The LE scheme creates whole-circuit entangled states [23]. The state vector for a PS entanglement scheme evolves in a product state of qubit pairs, \(|\psi\rangle = \bigotimes_{i=1}^{n/2} |\phi\rangle_i\), where \(|\phi\rangle_i\) are two-qubit entangled states. The state \(|\psi\rangle\) can be efficiently classically simulated and can be parallelized in classical simulation or on quantum computers [37,38].
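The indexing difference between the two schemes is easy to state in code. A small sketch (function names are hypothetical) that generates the \(RZZ\) pair lists for an \(n\)-qubit reservoir, using 0-based qubit indices:

```python
def pair_separable_pairs(n):
    # PS scheme: disjoint pairs (0,1), (2,3), ..., so the state remains a
    # product of two-qubit entangled pairs.
    return [(i, i + 1) for i in range(0, n - 1, 2)]

def linear_entanglement_pairs(n):
    # LE scheme: a nearest-neighbour chain (0,1), (1,2), ..., entangling the
    # whole register; each extra RZZ gate adds a circuit layer.
    return [(i, i + 1) for i in range(n - 1)]

# Example for the four-qubit reservoir of Fig. 2:
ps = pair_separable_pairs(4)       # disjoint pairs
le = linear_entanglement_pairs(4)  # nearest-neighbour chain
```

The PS list has \(n/2\) pairs while the LE list has \(n-1\), which is why LE circuits are deeper and produce whole-circuit entanglement.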

QNIR uses noise as a necessary resource to generate non-trivial feature sequences. We use artificial noise that can be programmed into a quantum computer. Within this scheme, many such artificial noise models can be implemented to produce different effects. To implement a noise scheme, we associate parameterized, single-qubit noise channels with each unitary gate in the quantum circuit, Eq. (6), as shown in Fig. 4. Note that this differs from Kubota et al. [12], where noise channels were situated at the end of every time step. In the following, we assume each noise channel depends on a single noise parameter.

A 2-qubit quantum circuit channel diagram of a reservoir noise parameterization. Each unitary gate has an associated noise channel represented by \(\mathscr{E}(p_i)\). This represents the novel quantum circuit parameterization approach proposed in this work.

This graphic shows the QNIR noise optimization scheme. The quantum model is trained and tested iteratively in a classical optimization loop, where dual annealing or evolutionary optimization are used. The quantum reservoir circuits have a number of gate-associated noise channels, each of which has a single error probability parameter that is iteratively updated.

Noise channels are associated with all quantum gates in the reservoir circuit in Fig. 4. Each noise channel \(\mathscr{E}(p)\) is a function of a probability for the noise effect to occur. We use the probabilities, \(p_i\), to parameterize the reservoir for optimization. The number of probability parameters scales linearly with the number of qubits. For a pair-separable entanglement reservoir, the number of parameters is \(n_{p_i} = \frac{7}{2} n\), where \(n = 2, 4, 6, \ldots\), and for a linear entanglement reservoir \(n_{p_i} = 6n - 5\), where \(n = 2, 3, 4, \ldots\).

QNIR resource-noise optimization is performed through iterative training (Eq. 2) and testing (Eq. 3) of QNIR, giving optimized noise probability parameters, \(p_i \in \textbf{p}\) (see Fig. 5). The parameters in the initial parameter vector, \(\textbf{p}\), are probabilities randomly selected from a uniform distribution, \(p_i \sim U(0,1), \forall i\).

Two optimization approaches were trialed in this work, evolutionary optimization [27] and dual annealing [39], where the latter is available in the SciPy optimization package [40]. The mean squared error (MSE) was used as a suitable cost function to measure prediction performance, which is minimized as

$$\begin{aligned} \min_{\textbf{p}} \; \{ \text{MSE}(\hat{\textbf{y}}(\textbf{p}), \textbf{y}) : p_i \in [0,1], \forall i \}, \end{aligned}$$

(12)

where \(\hat{\textbf{y}} = W^T_{opt} \textbf{X}(\textbf{p})\) is the QNIR test set prediction and \(\textbf{X}(\textbf{p})\) is the reservoir signal matrix dependent on the noise probabilities \(\textbf{p}\).
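The loop of Eq. (12) can be sketched end to end with SciPy's dual_annealing. Here a toy classical function stands in for the noisy quantum reservoir \(\textbf{X}(\textbf{p})\); only the structure (train a readout, score the prediction by MSE, update \(\textbf{p}\) within [0, 1]) follows the paper, and everything else is illustrative:

```python
import numpy as np
from scipy.optimize import dual_annealing

t = np.arange(100)
y_target = np.sin(0.05 * t)  # toy target series

def reservoir_features(p):
    # Stand-in for the quantum reservoir: maps noise probabilities p to a
    # feature matrix X(p). The real X(p) comes from the noisy circuits.
    return np.vstack([np.sin(0.1 * (p_i + 0.01) * t) for p_i in p])

def mse_cost(p):
    X = reservoir_features(p)
    W, *_ = np.linalg.lstsq(X.T, y_target, rcond=None)  # train readout, Eq. (2)
    y_hat = W @ X                                       # predict, Eq. (3)
    return np.mean((y_hat - y_target) ** 2)             # score, Eq. (12)

# Minimize the MSE over noise probabilities p_i in [0, 1].
result = dual_annealing(mse_cost, bounds=[(0.0, 1.0)] * 3, maxiter=30)
```

Each cost evaluation retrains the cheap linear readout, so the expensive part in practice is generating \(\textbf{X}(\textbf{p})\) from circuit runs, not the fit itself.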

In this work, we use only reset noise channels that can be simply implemented with a classical ancilla system (see next subsection: Reset noise).

We propose a simple hybrid quantum-classical algorithm for a reset noise channel that consists of probabilistically triggering a reset instruction using a classical ancillary system. A deterministic reset instruction is an important element of a quantum instruction set, given the need to reset qubit states. A quantum instruction set is an abstract quantum computer model [41,42]. In this work we consider a reset-to-\(|0\rangle\) noise channel given by \(\mathscr{E}_{PR}(\rho) = p|0\rangle\langle 0| + (1-p)\rho\), where \(p\) is the reset probability [43]. \(\mathscr{E}_{PR}(\rho)\) is trace-preserving, \(Tr(\mathscr{E}_{PR}(\rho)) = 1\).
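On density matrices, the reset channel is just a convex mixture of the input state and \(|0\rangle\langle 0|\). A minimal NumPy sketch of its action on one qubit (illustrative, not the paper's implementation):

```python
import numpy as np

def reset_noise(rho, p):
    # E_PR(rho) = p |0><0| + (1 - p) rho
    zero_proj = np.zeros_like(rho)
    zero_proj[0, 0] = 1.0
    return p * zero_proj + (1 - p) * rho

rho = np.eye(2) / 2          # maximally mixed single-qubit state
out = reset_noise(rho, 0.3)  # biased toward |0>, trace preserved
```

Since the output is a probability-weighted average of two density matrices, the trace stays 1 for any \(p \in [0,1]\), matching the CPTP requirement above.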

Using dynamic circuits, quantum computers can implement a reset instruction with a mid-circuit measurement followed by a classically controlled quantum \(X\) gate that depends on the measurement outcome [44] (see Fig. 6). For example, this is how a reset is now implemented on IBM quantum computers supported by OpenQASM 3 [41].

A deterministic RESET instruction (left) is executed with this dynamic circuit. This can be used as a basis for a reset noise channel, \(\mathscr{E}_{PR}\). A single line represents a qubit and a double line represents a classical bit. A model classical ancillary system (right) would be executed on a classical computer. The classical NOT gate, \(X_p\), is executed with probability \(p\), which in turn triggers a classically controlled RESET instruction with probability \(p\).

In classical computing, execution of a probabilistic instruction is triggered using a random number generator (RNG), such as those widely available in software as PRNGs or in hardware as HRNGs. Here we employ a classical RNG to probabilistically activate a reset, which is identical to reset noise. In this way, artificial reset noise is implemented without ancilla qubits. Ancilla qubits would be an undesirable overhead in the larger scheme presented in this work, in which unitary gates require potentially many corresponding noise channels. This hybrid approach may be viable for other noise channels. For example, reset noise can approximate amplitude damping noise to high precision [43].
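The classical-RNG implementation can be checked by simulation: firing a reset with probability \(p\) per shot reproduces the channel's statistics without any ancilla qubit. A sketch (the parameters and shot count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
p, n_shots = 0.3, 100_000

# Each shot starts in |1>. A classical RNG triggers a RESET to |0> with
# probability p, so a shot ends in |1> only if no reset fired.
final_in_one = rng.random(n_shots) >= p
frac_in_one = final_in_one.mean()  # estimates (1 - p), as E_PR predicts
```

Averaged over shots, the population left in \(|1\rangle\) converges to \(1 - p\), which is exactly the \(\langle 1|\mathscr{E}_{PR}(|1\rangle\langle 1|)|1\rangle\) element of the channel.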

View post:
Optimizing quantum noise-induced reservoir computing for ... - Nature.com

Read More..

Where Will IonQ Stock Be in 1 Year? – The Motley Fool

IonQ (IONQ -1.58%) has taken investors on a wild ride since its public debut. The quantum computing company merged with a special purpose acquisition company (SPAC) and started trading at $10.60 per share on Oct. 1, 2021. IonQ's stock then nearly tripled to its all-time high of $31 on Nov. 17, 2021, but plummeted over the next 13 months to a low of just $3.04 per share on Dec. 28, 2022.

Like many other hyper-growth stocks, IonQ lost its luster as rising interest rates popped its bubbly valuations, highlighted its losses, and drove investors toward more conservative investments. But after bottoming out, IonQ's stock bounced back to about $12 again. Let's see if this volatile stock can stay ahead of the market over the next 12 months.

Image source: Getty Images.

Traditional computers use binary "bits" of zeros and ones to process data. Quantum computers, meanwhile, use "qubits," which can hold zeros and ones simultaneously and can therefore process certain workloads at much faster rates. That sounds like a generational leap forward in computing technology, but quantum computers are still much larger than traditional computers.
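The "simultaneously" part comes from superposition. A minimal NumPy sketch (illustrative only, not tied to any vendor's hardware) shows a Hadamard gate putting a qubit into an equal mix of 0 and 1:

```python
import numpy as np

# State vectors: |0> = [1, 0], |1> = [0, 1]; a qubit can be any
# normalized complex combination of the two.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
zero = np.array([1.0, 0.0])

superposed = H @ zero              # equal superposition of |0> and |1>
probs = np.abs(superposed) ** 2    # measurement probabilities
print(probs)  # [0.5 0.5] -- equal chance of reading 0 or 1
```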

For example, IBM's casing for a single quantum processing unit (QPU) is about six feet wide. Alphabet's Google has been developing a quantum processing system that is about 20 feet wide.

IonQ is tackling that problem with a newer type of QPU system that is only two inches wide. It built that system with a "trapped ion" architecture, which makes it smaller and easier to scale. That technology enabled it to build the "world's most powerful trapped-ion quantum computer," and it serves up that computing power as a cloud-based service through Amazon's quantum cloud computing service Braket, Microsoft's Azure, and Google Cloud.

IonQ measures its quantum processing power in algorithmic qubits (AQs). During its pre-merger presentation, it claimed it could grow from AQ 22 in 2021 to AQ 29 in 2023. However, it actually hit AQ 29 seven months ahead of schedule in the first quarter of 2023 -- and it's now set on reaching its next milestones of AQ 35 in 2024 and AQ 64 in 2025. After that, it expects to achieve exponential growth and achieve AQ 1,024 by 2028.

IonQ initially predicted its revenue would reach $5 million in 2021, triple to $15 million in 2022, and then reach $34 million in 2023 as more companies used its services. But in reality, it only generated $3 million in revenue in 2022 and $11 million in 2023. It expects its revenue to grow about 70% to 73% to roughly $19 million in 2024.

IonQ's failure to meet its pre-merger targets caused many investors to lump it together with other SPAC-backed companies that overpromised and underdelivered. Its red ink made it even less appealing: It narrowed its net loss from $106 million in 2021 to $49 million in 2022, but racked up an even wider loss of $71 million in the first half of 2023.

IonQ's broken promises caused its stock to sink to its all-time low last December, but the growing interest in artificial intelligence (AI) stocks over the past year drove the bulls back to its stock. The continued expansion of the AI market will likely drive the growth of the quantum computing market as companies explore even faster ways to process data.

IonQ still has room to grow. IDC expects the quantum computing market to grow at a compound annual growth rate (CAGR) of 48% from 2022 to 2027, and IonQ could keep pace with the market if it continues to increase its computing power.

With an enterprise value of $2.18 billion, IonQ might seem overpriced at 115 times this year's sales. However, analysts expect its revenue to rise from $19 million in 2023 to $88 million in 2025, which would represent a CAGR of 115%.
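Those multiples and the growth rate follow from simple arithmetic on the figures quoted in the article; a quick sanity check:

```python
# Sanity-checking the article's figures (all values taken from the text).
enterprise_value = 2180.0           # $ millions
rev_2023, rev_2025 = 19.0, 88.0     # $ millions

ps_now = enterprise_value / rev_2023    # ~115x this year's sales
ps_2025 = enterprise_value / rev_2025   # ~25x projected 2025 sales
cagr = (rev_2025 / rev_2023) ** (1 / 2) - 1  # two years of growth

print(round(ps_now), round(ps_2025), f"{cagr:.0%}")  # 115 25 115%
```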

If you think IonQ can successfully scale up its business and hit those targets, then its stock might not seem too expensive at 25 times its 2025 sales. It could also become a compelling takeover target for a larger tech company if it proves its trapped-ion technology is superior to other quantum computing technologies. However, the recent departure of its co-founder and chief science officer Chris Monroe -- who co-developed the trapped-ion process -- raises a few red flags.

I believe IonQ's stock will remain volatile as high interest rates and other macro pressures create headwinds for unprofitable hyper-growth stocks. It might still be a good speculative play for investors who can afford to tune out the noise for a couple more years, but I'm not too confident it can outperform the market over the next 12 months.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Leo Sun has positions in Alphabet and Amazon. The Motley Fool has positions in and recommends Alphabet, Amazon, and Microsoft. The Motley Fool recommends International Business Machines. The Motley Fool has a disclosure policy.

Read the original here:
Where Will IonQ Stock Be in 1 Year? - The Motley Fool


Reader in Quantum Computing job with KINGS COLLEGE LONDON … – Times Higher Education

The Department of Informatics is looking to recruit a Reader in Quantum Computing.

This is an exciting time to join us as we continue to grow our department and realise our vision of a diverse, inclusive and innovative department of Informatics at one of the most prestigious universities in the UK. We seek applicants who will support us in our ambitions in inclusivity and diversity.

As we continue to grow and strengthen our department, and given the increased research activity in quantum computing across the department and faculty, we are seeking to appoint a Reader in quantum computing. The aim of this position is to expand the departmental research in quantum computing, provide leadership in this area, and coordinate efforts in a strategic manner. A wide range of research topics is of interest, including, but not limited to, quantum information science, quantum algorithms, quantum software engineering, quantum machine learning, and quantum communication. (Candidates with a focus on cryptography are invited to apply for a dedicated position that we advertise in parallel.) Outstanding candidates engaged in research and teaching which complements that of the existing members of the Department will be considered favourably.

To realise our mission, we look at computer science and quantum computing challenges with a broad perspective and regularly sit on the program committees of, and publish in, top-tier and well-known venues of computer science. Top-quality research establishes members of the Department as leaders in their fields, but it is its transformative aspect that provides the opportunity to serve society while supporting King's as an outstanding institution in science and technology. As such, the Department has strong links with industry, which engages with us in collaborative research projects.

We offer undergraduate and postgraduate education (in both computer science and artificial intelligence), catering for the needs of our students and the industries in which they will work. It is essential that applicants have the enthusiasm and commitment needed to ensure the success of these programmes. The successful applicant for this post will be involved in delivering teaching in core areas of computer science.

The successful candidate will be invited to join a research group aligned with their research activity and will have the opportunity to contribute to departmental hubs. Research collaboration across research groups, with departmental hubs and with other Departments in the Faculty and across the College is strongly encouraged.

Applicants must have a PhD, an excellent publication record, and an established record of research funding. It is essential that applicants have the enthusiasm and commitment required to contribute to the further development of the research standing of the Department of Informatics, and to make a full contribution to teaching and administrative activities.

Diversity is positively encouraged with a number of family-friendly policies, including the operation of a core hours policy, the right to apply for flexible working and support for staff returning from periods of extended absence, for example maternity leave. The Department of Informatics is committed to ensuring an inclusive interview process and will reimburse up to £250 towards any additional care costs (for a dependent child or adult) incurred as a result of attending an interview for this position.

For further information about the Department of Informatics at Kings, please see https://nms.kcl.ac.uk/luc.moreau/informatics/overview.pdf.

This post will be offered on an indefinite contract.

This is a full-time post (100% full-time equivalent).

Read more here:
Reader in Quantum Computing job with KINGS COLLEGE LONDON ... - Times Higher Education


NEC and Gurobi Optimization Sign System Integration Partnership … – NEC

Mathematical optimization (e.g., linear programming and mixed-integer programming) is a problem-solving method that involves defining real-world objectives, constraints, and decision variables and then using a mathematical optimization solver (Gurobi Optimizer) to quickly identify the optimal decision out of trillions of possibilities.
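To give a sense of the problem shape, here is a toy integer program solved by brute-force enumeration. This is only an illustrative sketch: a real solver such as the Gurobi Optimizer uses branch-and-bound and linear-programming relaxations to handle models with millions of variables, where enumeration is hopeless.

```python
# Toy mixed-integer problem:
#   maximize  3x + 2y
#   subject to  x + y <= 4,  0 <= x <= 3,  y >= 0,  x and y integer.
# Brute force works only because the search space here is tiny.
best = None
for x in range(4):           # x in 0..3
    for y in range(5):       # y in 0..4 (y <= 4 forced by x + y <= 4)
        if x + y <= 4:
            value = 3 * x + 2 * y
            if best is None or value > best[0]:
                best = (value, x, y)

print(best)  # (11, 3, 1): objective 11 at x=3, y=1
```

With thousands of variables the number of integer combinations explodes combinatorially, which is exactly the "trillions of possibilities" a commercial solver is built to prune.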

"Through our alliance with NEC, we're not just integrating technologies; we're creating a future where optimal decision-making is more accessible," said Duke Perrucci, CEO of Gurobi Optimization. "Combining Gurobi's decision-intelligence technology with NEC's quantum computing solutions represents a paradigm shift in the world of optimization. Our commitment is to ensure businesses navigate the ever-complex landscape of decision-making with unparalleled efficiency and accuracy."

NEC will provide Gurobi Optimizer application services, with technical support from October Sky Corporation, a branch office of Gurobi in Japan. NEC will also train its employees in Gurobi Optimizer application skills.

NEC has helped many customers to optimize their planning operations, such as production planning and delivery planning, using quantum computing technology through its NEC Vector Annealing Service. Going forward, NEC will leverage its track record of providing optimization solutions in various business fields, delivered by its highly skilled optimization experts, to help customers identify optimal solutions to complex problems by combining the Gurobi Optimizer with NEC's quantum computing technology and AI.

"By integrating Gurobi into our solutions, we aim to empower our customers to make optimal business decisions in an ever-evolving, complex landscape," said Shigeki Wada, Corporate SVP of the Global Innovation Business Unit, NEC Corporation. "This collaboration is a testament to NEC's dedication to innovation and providing comprehensive solutions that address the diverse challenges our customers face."

To learn more about NEC's Management and Business Optimization Consulting Service, visit https://www.nec.com/en/global/quantum-computing/index.html.

See more here:
NEC and Gurobi Optimization Sign System Integration Partnership ... - NEC
