
JPMorgan Chase and AWS study the prospects for quantum speedups with near-term Rydberg atom arrays | Amazon … – AWS Blog

This post was contributed by Martin Schuetz, Ruben Andrist, Grant Salton, and Helmut Katzgraber from the Amazon Quantum Solutions Lab, and Pierre Minssen, Romina Yalovetzky, Shouvanik Chakrabarti, Dylan Herman, Niraj Kumar, Ruslan Shaydulin, Yue Sun, and Marco Pistoia from JPMorgan Chase

Many companies face combinatorial optimization problems, with prominent examples across both science and industry in areas like transportation and logistics, telecommunications, manufacturing, and finance. Analog neutral-atom quantum machines provide a novel platform to design and implement quantum optimization algorithms, and scientists in both industry and academia are searching for the most promising types of problems for which an early quantum advantage could be demonstrated.

Over the past year, the Amazon Quantum Solutions Lab (QSL) has worked together with the Global Technology Applied Research team at JPMorgan Chase to conduct a systematic study of the hardness of certain optimization problems, inspired by real-world use cases in finance.

In this post, we'll describe our project and summarize the key results. Motivated by recent experiments reporting a potential super-linear quantum speedup [2], we studied the problem native to Rydberg atom arrays (the maximum independent set problem on unit-disk graphs). We identified a class of problem instances with controllable hardness and minimal overhead for neutral atom quantum hardware.

We think this work sets the stage for potentially impactful future experiments towards quantum advantage. For further technical details, feel free to check out our original work published in Physical Review Research [1].

Given its potentially far-reaching impact, the demonstration of quantum speedups for practically relevant, computationally hard problems has emerged as one of the greatest milestones in quantum information science. Over the last few years, Rydberg atom arrays have established themselves among the leading contenders for the demonstration of such quantum speedups; see this blog post for more details. In particular, in Ref. [2] a potential super-linear quantum speedup over classical simulated annealing was reported for the maximum independent set problem (MIS) on unit-disk graphs (MIS-UD), based on variational quantum algorithms run on Rydberg atom arrays with up to 289 qubits arranged in two spatial dimensions.

This work focused on benchmarking quantum variational algorithms against a specific implementation of simulated annealing (representing a classical analogue of the adiabatic algorithm), yet it left open the question of benchmarking against other state-of-the-art classical solvers. In our work, we study the MIS-UD problem in detail (see Figure 1 for a schematic illustration), benchmarking against a broader range of classical solvers than considered in the original paper. Our main goal is to empirically assess the hardness of the MIS-UD problem, to help zero in on problem instances and system sizes where quantum algorithms could ultimately be useful, and thus to identify the most promising directions for future experiments with Rydberg atoms.

Figure 1. Schematic illustration of the problem. (a) We consider unit-disk graphs with nodes arranged on a two-dimensional square lattice with side length L and ~80% of all lattice sites filled, and edges connecting all pairs of nodes within a unit distance (illustrated by the circle). (b) Our goal is to solve the MIS problem on this family of Union-Jack-like instances (as depicted here with nodes colored in red in the right panel) and assess the hardness thereof using both exact and heuristic algorithms.
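As a concrete illustration of the instance construction described in Figure 1, here is a minimal sketch in Python. The radius of roughly √2 lattice spacings (giving Union-Jack-like connectivity) and the 80% fill fraction are assumptions matching the caption; the study's exact sampling procedure may differ in details:

```python
import random
from itertools import combinations

def union_jack_instance(L, fill=0.8, radius=1.5, seed=0):
    """Sample a unit-disk graph as in Figure 1: nodes on an L x L square
    lattice with ~80% of sites filled, and edges between every pair of
    nodes within `radius` lattice spacings (radius ~ sqrt(2) connects
    nearest and diagonal neighbors, i.e. Union-Jack connectivity)."""
    rng = random.Random(seed)
    nodes = [(x, y) for x in range(L) for y in range(L) if rng.random() < fill]
    edges = [(i, j) for (i, p), (j, q) in combinations(enumerate(nodes), 2)
             if (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2]
    return nodes, edges

nodes, edges = union_jack_instance(5)
print(len(nodes))  # roughly 80% of the 25 lattice sites are occupied
```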

The MIS problem is an important combinatorial optimization problem with practical applications in network design, vehicle routing, and finance, among others, and is closely related to the maximum clique, minimum vertex cover, and set packing problems [3]. Here we provide two complementary problem formulations, one based on an integer linear program and one based on an Ising-type Hamiltonian compatible with Rydberg atom arrays. We discuss important figures of merit to assess problem hardness.

Consider a graph G = (V, E) with vertex set V and edge set E. An independent set is a subset of vertices, no two of which are connected by an edge; the MIS problem is the task of finding the largest independent set. Introducing a binary variable xi for every node (with xi = 1 if node i is in the independent set, and xi = 0 otherwise), the MIS problem can formally be expressed as a compact integer linear program of the form:

maximize Σi xi   subject to   xi + xj ≤ 1 for every edge (i, j) ∈ E,   xi ∈ {0, 1},

with the objective to maximize the number of marked vertices while adhering to the independence constraint.
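As a sanity check, this integer program can be solved by brute force for tiny graphs. The following minimal sketch enumerates all binary assignments (purely illustrative; it is not one of the solvers used in the study):

```python
from itertools import product

def maximum_independent_set(n, edges):
    """Brute-force the MIS integer program: maximize sum(x_i) subject to
    x_i + x_j <= 1 for every edge (i, j), with x_i in {0, 1}."""
    best = ()
    for x in product((0, 1), repeat=n):
        # keep only assignments satisfying the independence constraint
        if all(x[i] + x[j] <= 1 for i, j in edges):
            if sum(x) > sum(best or (0,) * n):
                best = x
    return [i for i, v in enumerate(best) if v]

# 4-cycle 0-1-2-3-0: the maximum independent sets are {0, 2} and {1, 3}
print(maximum_independent_set(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # [1, 3]
```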

Alternatively, a problem formulation commonly used in the physics literature expresses this program in terms of a Hamiltonian that includes a soft penalty for non-independent configurations (that is, when two vertices in the set are connected by an edge) to model the hard independence constraint. This Hamiltonian is given by

H = −Σi xi + V Σ(i,j)∈E xi xj,

with a negative sign in front of the first term because the largest independent set is searched for via an energy minimization, and where the penalty parameter V > 1 enforces the constraints.

Energetically, this Hamiltonian favors having each variable in the state xi = 1 unless a pair of vertices is connected by an edge. This second, unconstrained formulation provides a straightforward connection to Rydberg atom arrays. Specifically, by mapping the binary variables xi to two-level Rydberg atoms, MIS-UD problems can be encoded efficiently with Rydberg atoms placed at the vertices of the target problem graph. Strong Rydberg interactions between atoms (as described by the second term in the Hamiltonian) then prevent two neighboring atoms from being simultaneously in the excited Rydberg state. Using a coherent drive with Rabi frequency Ω and detuning Δ, one can then search for the ground state of the Hamiltonian H (encoding the MIS) via, for example, quantum-annealing-type approaches. See this blog post for more details on annealing-type quantum optimization algorithms running on Rydberg hardware.
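The unconstrained formulation is easy to evaluate directly. The sketch below scores bitstrings under H = −Σ xi + V·Σ xi·xj with an illustrative penalty value V = 2 (any V > 1 works); the lowest-energy bitstring is then a maximum independent set:

```python
def ising_energy(x, edges, V=2.0):
    """Energy of bitstring x under H = -sum_i x_i + V * sum_{(i,j) in E} x_i x_j.
    With V > 1, every edge violation costs more than the reward of one extra
    vertex, so the ground state encodes the MIS."""
    return -sum(x) + V * sum(x[i] * x[j] for i, j in edges)

edges = [(0, 1), (1, 2)]               # path graph 0-1-2
print(ising_energy((1, 0, 1), edges))  # MIS {0, 2}: -2.0
print(ising_energy((1, 1, 1), edges))  # violates both edges: -3 + 2*2 = 1.0
```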

To assess the practical hardness of the MIS-UD problem and compare the performance of various algorithms, we consider two key figures of merit: the time-to-solution (TTS), i.e., the time a solver needs to reach the optimal solution with a target (99%) success probability, and the hardness parameter HP, which accounts for the degeneracy of the ground state as well as the first excited states.

In our work, we studied the MIS-UD problem described earlier using both exact and heuristic algorithms, including exact branch-and-bound (B&B) and sweeping-line (SLA) solvers as well as heuristic simulated annealing (SA). Here we provide a brief overview of our tool suite; for more technical details we refer to Ref. [1].

We now turn to our numerical results. Here we will highlight just a few selected results; for more details we refer to Ref. [1].

TTS as a function of system size. First, we study TTS as a function of system size (given by the number of nodes in the graph). Our results are displayed in Fig. 2. We find that typical quasi-planar instances with Union-Jack-like connectivity (as studied in Ref. [2]) can be solved to optimality for up to thousands of nodes within minutes, with both custom and generic commercial solvers on commodity hardware, without any instance-specific fine-tuning. For most instances (taken as the 98th percentile) we can upper-bound the TTS needed by classical B&B or SA solvers through a runtime scaling of the form TTS = O(2^(aN)); we find a = 0.0045 and a = 0.0128 for our B&B and SA solvers, respectively. These results set an interesting, putative bar for quantum algorithms to beat. In addition, we observe a relatively large spread spanning several orders of magnitude in TTS, displaying significant instance-to-instance variations even for fixed system size, thus motivating a more detailed analysis of problem hardness, as discussed next.

Figure 2. TTS as a function of system size. (Left) B&B solver: Problems with hundreds (thousands) of nodes can be solved to optimality in subsecond (minute) timescales. The solid line is the linear regression over instances whose TTS are in the highest 2%. (Right) SA solver: Time required to reach 99% success probability for the heuristic SA solver as a function of system size (how long the solver should run for a 99% chance of finding the optimal solution). For every system size, 1000 random UD instances have been considered.

Hardness parameter. We now consider algorithmic performance in terms of the hardness parameter HP, which accounts for the degeneracy of the ground state as well as the first excited states. Our results for both the exact SLA and the heuristic SA solvers are displayed in Fig. 3, showing remarkably different behavior. The SA solver displays a strong dependence on the hardness parameter. Conversely, virtually no dependence is observed for the exact SLA solver, demonstrating that the conductance-like hardness parameter HP successfully captures hardness for algorithms undergoing Markov-chain dynamics, while alternative algorithmic paradigms (like sweeping line) may require a different notion of hardness.

In particular, we find that for the SA solver the success probability PMIS fits well to the functional form PMIS = 1 − exp(−C·HP^(−a)), where C is a positive fitted constant and smaller values of a yield larger success rates. We find a ≈ 0.66 for our implementation of SA, competitive with a ≈ 0.63 as reported for the optimized quantum algorithm demonstrated in Ref. [2] (for smaller systems than studied here, and when restricting the analysis to graphs with minimum energy gaps sufficiently large to be resolved in the duration of the noisy quantum evolution). This is much better than the SA baseline results in Ref. [2] with a ≈ 1.03. As such, the quantum speedup reported in Ref. [2] could be classified as a limited sequential quantum speedup, based on comparing a quantum-annealing-type algorithm with a particular implementation of classical SA, while our analysis points at a potential next milestone, in the form of the experimental demonstration of a (more general) limited non-tailored quantum speedup, obtained by comparing the performance of the quantum algorithm to the best-known generic classical optimization algorithm.
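To see how the fitted exponent changes the scaling, the model success probability can be evaluated for both exponents. The constant C below is an illustrative placeholder (the fitted value is not quoted in this post); only the exponents a = 0.66 and a = 1.03 come from the text:

```python
import math

def p_mis(hp, a, C=1.0):
    """Model success probability P_MIS = 1 - exp(-C * HP^(-a)).
    Larger hardness HP lowers the success probability; a smaller
    exponent a means slower degradation with hardness."""
    return 1.0 - math.exp(-C * hp ** (-a))

for hp in (1e2, 1e4, 1e6):
    print(f"HP={hp:.0e}  a=0.66: {p_mis(hp, 0.66):.5f}  a=1.03: {p_mis(hp, 1.03):.5f}")
```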

Figure 3. Dependence on hardness parameter HP (for different system sizes, from lattices with side length L=13 and about 135 nodes up to lattices with L=33 and about 870 nodes). (Left) Time-to-solution (TTS) for the exact SLA solver as a function of the hardness parameter HP. Virtually no dependence on HP is observed, showing that TTS is fully determined by the system size N~L^2. (Right) Conversely, for the Markov-chain based SA solver, we observe a strong correlation between algorithmic performance and the hardness parameter HP. Here we plot log(1 − P_MIS), for UD graphs selected from the top two percentiles of hardness parameter for each system size. Power-law fits to the form ~HP^(-a) are used to extract scaling performance with graph hardness.

Tuning problem hardness. We now study hardness as we gradually change the topology of the problem instances. Specifically, we analyze TTS following two protocols by either (i) systematically tuning the blockade radius or (ii) randomly rewiring edges of the graph. While protocol (i) prepares UD graphs (with varying connectivity), protocol (ii) explicitly breaks the UD structure via random (potentially long-range) interactions, ultimately preparing random structure-less Erdős–Rényi (ER) graphs. The results of this analysis are shown in Fig. 4. We find that TTS for the established B&B solver can be tuned systematically over several orders of magnitude. As such, these two protocols suggest a potential recipe to benchmark quantum algorithms on instances orders of magnitude harder for established classical solvers than previously studied, and motivate interesting future experiments towards quantum advantage; in particular, our protocols help identify small, but hard instances, as needed for thorough scaling analyses.
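Protocol (ii) can be sketched in a few lines: select a fraction of edges and reconnect each to a uniformly random pair not already in the graph, gradually interpolating from the UD instance toward an Erdős–Rényi graph. This is a simplified sketch; the study's exact rewiring procedure may differ in details:

```python
import random

def rewire(n, edges, fraction, seed=0):
    """Randomly rewire `fraction` of the edges of an n-node graph,
    breaking unit-disk structure and interpolating toward an ER graph."""
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    existing = set(map(frozenset, edges))
    for idx in rng.sample(range(len(edges)), int(fraction * len(edges))):
        while True:
            u, v = rng.sample(range(n), 2)   # candidate random (long-range) edge
            if frozenset((u, v)) not in existing:
                existing.discard(frozenset(edges[idx]))
                edges[idx] = (u, v)
                existing.add(frozenset((u, v)))
                break
    return edges

path = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(rewire(5, path, 0.5))  # half the edges replaced by random pairs
```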

Figure 4. Hardness transition. (Left) Hardness transition as a function of the disk radius (in units of the lattice spacing), as given by the time-to-solution (TTS) for the B&B solver, shown here for systems with about 350 nodes, with 100 random seeds per radius. (Right) Hardness transition from unit-disk to random Erdős–Rényi (ER) graphs (denoted by the red shaded bands). Here TTS is given as a function of the fraction of edges rewired. Starting from Union-Jack-type UD graphs (left), edges are randomly selected and rewired, thereby gradually breaking the UD connectivity, and ultimately generating random ER graphs (right). While the original UJ graphs can be solved to optimality in ~10^(-2)s, we observe TTS potentially orders of magnitude larger in both plots.

Our work provides an in-depth look into the hardness of the maximum independent set problem on unit-disk graphs, the problem native to Rydberg atom arrays. Our results establish well-defined goal posts for quantum algorithms to beat. In particular, we have shown that the hardness parameter put forward in Ref. [2] captures problem hardness for a certain class of Markov chain Monte Carlo solvers, while virtually no dependence between time-to-solution and this parameter is observed for alternative solvers. Finally, we have identified protocols to systematically tune time-to-solution over several orders of magnitude, pinpointing problem instances orders of magnitude harder for established classical solvers than previously studied.

These results should help identify the most promising directions for applications of Rydberg devices and direct the community's ongoing efforts towards quantum advantage, hopefully inspiring many interesting future experiments further exploring the hardness of the MIS problem with Rydberg atom arrays.

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this blog.

This blog post is for informational purposes only and is not intended as legal, tax, financial, investment, accounting or regulatory advice. Opinions expressed herein are the personal views of the individual(s) and do not represent the views of JPMorgan Chase & Co. The accuracy of any statements, linked resources, reported findings or quotations is not the responsibility of JPMorgan Chase & Co.

[1]R. S. Andrist, M. J. A. Schuetz, P. Minssen, R. Yalovetzky, S. Chakrabarti, D. Herman, N. Kumar, G. Salton, R. Shaydulin, Y. Sun, M. Pistoia, and H. G. Katzgraber, Hardness of the Maximum Independent Set Problem on Unit-Disk Graphs and Prospects for Quantum Speedups, Phys. Rev. Research 5, 043277 (2023); arXiv:2307.09442.

[2]S. Ebadi, A. Keesling, M. Cain, T. T. Wang, H. Levine, D. Bluvstein, G. Semeghini, A. Omran, J.-G. Liu, R. Samajdar, et al., Quantum optimization of Maximum Independent Set using Rydberg atom arrays, Science 376, 1209 (2022).

[3]J. Wurtz, P. L. S. Lopes, N. Gemelke, A. Keesling, and S. Wang, Industry applications of neutral-atom quantum computing solving Independent Set problems, arXiv:2205.08500 (2022).


Japan to expand export restrictions on semiconductor and quantum computing technology – DatacenterDynamics

The Japanese government has announced plans to expand export restrictions on technologies related to semiconductors and quantum computing.

According to a Bloomberg report, impacted technologies include scanning electron microscopes and gate-all-around transistors, which companies including Samsung Electronics have been using to improve semiconductor design.

The report added that the Japanese government will also start requiring licenses for the shipment of quantum computers and cryogenic CMOS circuits, which are used to control the input and output signals of qubits in quantum computers.

Favored trading partners of Japan, including South Korea, Singapore, and Taiwan, will not be exempt from the new rules, which are expected to come into force in July following a period of public consultation.

At the start of 2023, it was reported that Japan, alongside the Netherlands, had agreed to comply with a number of US-led restrictions relating to the exportation of high-tech chipmaking technology to China.


Quix checks off another condition to build universal quantum computer - Bits&Chips


Researchers using Quix Quantum's technology have successfully demonstrated the on-chip generation of so-called Greenberger-Horne-Zeilinger (GHZ) states, a critical component for the advancement of photonic quantum computing. The Dutch startup focusing on photonics-based quantum computing hails the result as a breakthrough that validates the company's roadmap towards building a scalable universal quantum computer.

The creation of GHZ states is necessary for photonic quantum computers. In a matter-based quantum computer, qubits are stationary, typically positioned on a specialized chip. By contrast, a photonic quantum computer uses flying qubits of light to process and transmit information. This information is constantly passed from one state to another through a process called quantum teleportation. The GHZ states, entanglements across three photonic qubits, are the crucial resource enabling the computer to maintain this information.
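For reference, a GHZ state is an equal superposition of the all-zeros and all-ones basis states. A small sketch of its amplitude vector (plain state-vector bookkeeping, nothing specific to Quix's photonic hardware):

```python
import math

def ghz(n):
    """Amplitude vector of the n-qubit GHZ state (|00...0> + |11...1>)/sqrt(2)."""
    state = [0.0] * 2 ** n
    state[0] = state[-1] = 1 / math.sqrt(2)  # equal weight on |0...0> and |1...1>
    return state

state = ghz(3)
print(state[0b000], state[0b111])  # only |000> and |111> carry amplitude
```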

This milestone demonstrates the capability of photonic quantum computers to generate multi-photon entanglement in a way that advances the roadmap toward large-scale quantum computation. The generation of GHZ states is evidence of the transformative potential of Quix Quantums photonic quantum computing technology, commented CEO Stefan Hengesbach of Quix.

Quix's next challenge is now making many of these devices. When comparing one GHZ state to a million GHZ states, think of it as the spark needed to create a blazing fire. The more GHZ states a photonic quantum computer contains, the more powerful it becomes, added Chief Scientist Jelmer Renema.


Enhancing Quantum Error Correction Effectiveness – AZoQuantum

Apr 30 2024Reviewed by Lexie Corner

In a study published in the journal Nature Physics, a team of scientists led by researchers from the University of Chicago's Pritzker School of Molecular Engineering (PME) created the blueprint for a quantum computer that can fix errors more efficiently.

Although quantum computers are an extremely potent computational tool, their delicate qubits challenge engineers: how can they design useful, functional quantum systems using bits that are easily disrupted and erased of data by minute changes in their environment?

Engineers have long grappled with how to make quantum computers less error-prone, frequently creating methods to identify and rectify problems rather than preventing them in the first place. However, many of these error-correction systems entail replicating information over hundreds or thousands of physical qubits simultaneously, making it difficult to scale up efficiently.

The system makes use of a new framework based on quantum low-density parity-check (qLDPC) codes, which can detect errors by examining the relationships between bits, together with reconfigurable atom array hardware, which enables qubits to communicate with more neighbors and consequently allows the qLDPC data to be encoded in fewer qubits.

With this proposed blueprint, we have reduced the overhead required for quantum error correction, which opens new avenues for scaling up quantum computers.

Liang Jiang, Study Senior Author and Professor, Pritzker School of Molecular Engineering, University of Chicago

While standard computers rely on digital bits, in an on or off position, to encode data, qubits can exist in states of superposition, giving them the ability to tackle new computational problems. However, qubits' unique properties also make them incredibly sensitive to their environment; they change states based on the surrounding temperature and electromagnetism.

Quantum systems are intrinsically noisy. There's really no way to build a quantum machine that won't have error. You need to have a way of doing active error correction if you want to scale up your quantum system and make it useful for practical tasks.

Qian Xu, Graduate Student, Pritzker School of Molecular Engineering, University of Chicago

For the past few decades, scientists have primarily relied on one type of error correction, known as surface codes, for quantum systems. In these systems, users encode the same logical information into several physical bits grouped in a wide two-dimensional grid. Errors can be detected by comparing qubits to their immediate neighbors: a mismatch indicates that one qubit misfired.

Xu added, The problem with this is that you need a huge resource overhead. In some of these systems, you need one thousand physical qubits for every logical qubit, so in the long run, we don't think we can scale this up to very large computers.

Jiang, Xu, and colleagues from Harvard University, Caltech, the University of Arizona, and QuEra Computing designed a novel method to fix errors using qLDPC codes. This type of error correction had long been contemplated but not included in a realistic plan.

With qLDPC codes, data in qubits is compared to both direct neighbors and more distant qubits. It enables a smaller grid of qubits to do the same number of comparisons for error correction. However, long-distance communication between qubits has always been a challenge when implementing qLDPC.
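The comparison idea can be illustrated classically. In the toy sketch below, each row of the check matrix is a sparse (low-density) parity check, and a check may touch distant bits rather than only immediate neighbors. This is only an analogy for how checks flag errors; it is not the actual quantum qLDPC code construction from the paper:

```python
def syndrome(H, x):
    """Syndrome of bitstring x under parity-check matrix H (mod 2).
    A 1 in the syndrome flags a violated check."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in H]

# Toy LDPC-style check matrix: each check is sparse but long-range,
# e.g. check 0 compares bits 0, 3 and 6.
H = [
    [1, 0, 0, 1, 0, 0, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
]
codeword = [0, 0, 0, 0, 0, 0, 0]
flipped = codeword[:]
flipped[3] = 1                                    # single bit-flip error
print(syndrome(H, codeword), syndrome(H, flipped))  # [0,0,0] vs [1,0,0]
```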

The researchers devised a solution in the form of new hardware: reconfigurable atoms that can be relocated using lasers to enable qubits to communicate with new partners.

With today's reconfigurable atom array systems, we can control and manipulate more than a thousand physical qubits with high fidelity and connect qubits separated by a large distance. By matching the structure of quantum codes and these hardware capabilities, we can implement these more advanced qLDPC codes with only a few control lines, putting the realization of them within reach with today's experimental systems.

Harry Zhou, Ph.D. Student, Harvard University

When researchers paired qLDPC codes with reconfigurable neutral-atom arrays, they achieved a lower error rate than surface codes using only a few hundred physical qubits. When scaled up, quantum algorithms requiring thousands of logical qubits might be completed with fewer than 100,000 physical qubits, vastly outperforming the gold-standard surface codes.

There's still redundancy in terms of encoding the data in multiple physical qubits, but the idea is that we have reduced that redundancy by a lot, Xu added.

Though scientists are developing atom-array platforms quickly, the framework is still theoretical and represents a step toward the real-world use of error-corrected quantum computation. The PME team is now striving to improve its design even more and ensure that reconfigurable atom arrays and logical qubits relying on qLDPC codes can be employed in computation.

Xu concluded, We think in the long run, this will allow us to build very large quantum computers with lower error rates.

Xu, Q., et. al. (2024) Constant-overhead fault-tolerant quantum computation with reconfigurable atom arrays. Nature Physics. doi:10.1038/s41567-024-02479-z

Source: https://www.uchicago.edu/en


Global Quantum Processors Industry Research 2024: A $5+ Billion Market by 2033 – Collaborations and Partnerships … – GlobeNewswire

Dublin, May 01, 2024 (GLOBE NEWSWIRE) -- The "Global Quantum Processors Market - A Global and Regional Analysis: Focus on Application, Type, Business Model, and Regional and Country-Level Analysis - Analysis and Forecast, 2023-2033" report has been added to ResearchAndMarkets.com's offering.

The global quantum processors market is projected to grow from $1.07 billion in 2023 to $5.02 billion by 2033, at a CAGR of 16.7%.
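The quoted growth rate follows directly from the two market sizes. A quick check of the arithmetic:

```python
# Compound annual growth rate implied by the report's figures
start, end, years = 1.07, 5.02, 10   # $B in 2023, $B in 2033
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # matches the report's 16.7% CAGR
```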

The global quantum processors market is experiencing rapid growth, driven by advancements in quantum computing technology and increasing investments from both the public and private sectors. Quantum processors, the core components of quantum computers, offer the potential to solve complex problems at speeds far beyond traditional computing systems.

This has led to heightened interest from industries such as healthcare, finance, and cybersecurity, where quantum computing promises groundbreaking solutions. Key players in the quantum processors market are continuously striving to enhance processor performance, scalability, and reliability to meet the evolving demands of various applications.

Additionally, collaborations between technology companies, research institutions, and government agencies are fostering innovation and accelerating the commercialization of quantum processors. Despite these advancements, challenges such as maintaining qubit coherence and error correction remain significant barriers to widespread adoption. However, ongoing research efforts and investments in quantum computing infrastructure are expected to drive the market forward, unlocking new possibilities across industries and reshaping the computing landscape in the years to come.

Market Lifecycle Stage

The global quantum processors market is undergoing rapid evolution, characterized by distinct phases of introduction, growth, maturity, and potential decline. In the introductory phase, pioneering companies and research institutions are driving innovation, developing prototypes, and exploring potential applications. As technological advancements and investments surge, the market enters a phase of rapid growth, marked by increasing demand from various sectors such as finance, healthcare, and cybersecurity.

This growth phase sees the emergence of new players, intensified competition, and acceleration of commercialization efforts. In the maturity phase, quantum processors become more mainstream, with established use cases and a growing customer base. Market saturation may occur as competition reaches its peak, leading to price stabilization and consolidation among key players. However, innovation remains crucial to sustaining market momentum and staying ahead of competitors.

The future trajectory of the quantum processors market depends on factors such as technological breakthroughs, regulatory environment, and market acceptance. While the potential for transformative impact is immense, challenges such as scalability, error correction, and cost-effectiveness need to be addressed to ensure sustained growth and market relevance.

Industrial Impact

The advent of quantum processors marks a revolutionary stride in computing technology, promising unprecedented capabilities that could redefine various industries. In the realm of finance, quantum processors hold the potential to revolutionize complex calculations, optimizing trading strategies, risk assessment, and portfolio management.

Additionally, quantum computing can enhance data encryption techniques, crucial for safeguarding sensitive financial information in the banking and cybersecurity sectors. In healthcare, quantum processors promise to accelerate drug discovery processes by simulating molecular interactions and predicting compound behaviors with unparalleled accuracy.

Furthermore, industries reliant on optimization problems, such as logistics and supply chain management, stand to benefit from quantum computing's ability to solve complex logistical challenges efficiently. As quantum computing continues to advance, its impact across industries is poised to reshape business operations, drive innovation, and unlock new avenues for growth and development.

Key Market Players and Competition Synopsis

The global quantum processors market has been segmented by type. In terms of value in 2022, superconducting qubits accounted for around 43.05% of the total quantum processors market, photonic qubits around 20.94%, trapped-ion qubits around 20.29%, quantum dots around 6.15%, cold atom processors around 4.69%, topological qubits around 2.76%, and cell assembly around 2.14%.

Key Attributes:

Market Dynamics Overview

Photonics: The Next Big Quantum Computing Technology

Trends: Current and Future Impact Assessment

Market Drivers

Market Challenges

Market Opportunities

Company Profiles

Superconducting Qubits

Trapped-Ion Qubits

Topological Qubits

Quantum Dots

Photonic Qubits

Cell Assembly

Cold Atom Quantum Processors

Supply Chain Overview

Research and Development Review

Snapshot of the Quantum Computing Market

For more information about this report visit https://www.researchandmarkets.com/r/tb6y6k

About ResearchAndMarkets.com ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.


Intel Research Opens the Door for Mass Production of Silicon-based Quantum Processors, a Requirement for Making … – IndianWeb2.com

Intel has made a significant advancement in quantum computing by demonstrating high fidelity and uniformity in single-electron control on spin qubit wafers. This achievement, as reported in a recent research paper, published in Nature, indicates a major step towards the scalability of silicon-based quantum processors, which are essential for the development of fault-tolerant quantum computers.

Quantum computing researchers at Intel Foundry Technology Research developed a 300-millimeter (mm) cryogenic probing process to collect high-volume data on the performance of spin qubit devices across full wafers, resulting in state-of-the-art uniformity, fidelity, and measurement statistics of spin qubits.

For the uninitiated, spin qubits are a type of quantum bit, or qubit, the fundamental building block of quantum computers. They are based on the quantum property of electron spin. In classical computing, a bit can be in one of two states: 0 or 1. However, in quantum computing, due to the principle of superposition, a qubit such as a spin qubit can be in a state that is a complex combination of both 0 and 1 simultaneously.
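Concretely, a qubit state a|0> + b|1> is just a normalized pair of complex amplitudes, and the measurement probabilities are |a|² and |b|². A minimal sketch of this bookkeeping (generic single-qubit math, nothing specific to Intel's devices):

```python
import math

# An equal superposition of 0 and 1 (a relative phase on b is allowed)
a, b = 1 / math.sqrt(2), 1j / math.sqrt(2)

# Born rule: measurement probabilities are the squared amplitude magnitudes
prob_0, prob_1 = abs(a) ** 2, abs(b) ** 2
print(prob_0, prob_1)  # each outcome occurs with probability 1/2
```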

With this work, Intel has demonstrated control of single-electron spins with high fidelity and uniformity across a full wafer. This is significant because it suggests the possibility of scaling up the production of spin qubits using established semiconductor fabrication methods, a crucial step towards building practical quantum computers.

Intel is taking steps toward building fault-tolerant quantum computers by improving three factors: (1) qubit density, (2) reproducibility of uniform qubits, and (3) measurement statistics from high-volume testing.

This research, conducted by Samuel Neyens and colleagues, demonstrates the application of CMOS industry techniques to the fabrication and measurement of spin qubits. The researchers successfully automated measurements of the operating point of spin qubits and probed the transitions of single electrons across full wafers. Their analysis of the random variation in single-electron operating voltages indicated that this fabrication process leads to low levels of disorder at the 300 mm scale.

This breakthrough is a key step towards scalable quantum computers capable of tackling real-world applications, as it leverages the mature chipmaking industry's methods for fabricating and testing conventional computer chips. The ability to probe single electrons with such precision is essential for the development of fault-tolerant quantum computers that require vast numbers of physical qubits.

The practical applications of probing single electrons in spin qubit wafers are still largely in the developmental stage, but the technology holds significant promise for the future of quantum computing. The ability to probe single electrons with high precision is crucial for creating scalable quantum computers, which could revolutionize various fields by performing complex computations much faster than traditional computers.

More here:
Intel Research Opens the Door for Mass Production of Silicon-based Quantum Processors, a Requirement for Making ... - IndianWeb2.com

Read More..

3 Machine Learning Stocks with the Potential to Make You an Overnight Millionaire – InvestorPlace

Keep an eye on machine learning stocks. Companies are tripping over each other for the technology, which involves feeding data to a machine so it can learn and even make human-like decisions. It could be a $503.4 billion market by 2030. Two years later, it could be worth $771.38 billion, according to Precedence Research.

Even better, we're seeing substantial machine learning demand from just about every major industry, including healthcare, finance, retail, entertainment and manufacturing, as they adopt the technology to boost revenue, cut costs and automate operations, as noted by Learn.G2.com. Even more impressive, about 48% of global businesses are already using machine learning, with 44% of them seeing lower business costs.

While we can always jump into Nvidia (NASDAQ:NVDA), it's now an $880 stock that's already made quite a few investors very wealthy. In fact, the last time I mentioned NVDA in a machine learning article, it was only a $700 stock. If you missed its run, don't worry. There are plenty of other machine learning stocks with similar potential.

Source: Phonlamai Photo / Shutterstock.com

Let's start with Lantern Pharma (NASDAQ:LTRN), a $52.95 million company trading at less than $5.

An artificial intelligence company, it's helping to transform the cost and speed of oncology drug discovery and development with its AI and machine learning platform, RADR. With the help of machine learning, AI and advanced genomics, its platform can scan billions of data points to help identify compounds that could help cancer patients.

Typically, the traditional approach to early-stage discovery and development can take three to five years. With companies like Lantern, however, the process can be as short as two years.

Most recently, the company announced: "Multiple clinical trials across three AI-guided drug candidates are active with first expected data and readouts for LP-184 (for use across multiple cancer indications) in the second half of 2024; with additional next-generation drug development programs approaching IND studies."

Source: Al Serov / Shutterstock.com

There's also Rekor Systems (NASDAQ:REKR), a $156.57 million company that's leveraging AI and machine learning to identify infrastructure concerns for transportation, public safety and urban mobility. One of its key solutions is Rekor One, an AI-powered roadway intelligence platform.

Most recently, the company announced substantial growth throughout 2023. Gross 2023 revenue of $34.9 million, for example, was 75% better than year-ago numbers. Fourth-quarter gross revenue jumped 71% year over year.

Its total contract value jumped 124% year over year to $49.1 million. And it narrowed its adjusted EBITDA loss from $37.4 million to $28.7 million for 2023.

Even better, as noted by InvestorPlace contributor Josh Enomoto: "For the current fiscal year, experts are calling for revenue of $66.07 million. That's up a staggering 89.1% from last year's tally of $34.93 million. In the following year, sales could jump to $88.74 million, implying a 34.3% gain over projected 2024 revenue."

Source: Owlie Productions / Shutterstock.com

Or, we can diversify at a lower cost with an exchange-traded fund such as the Invesco AI and Next Gen Software ETF (NYSEARCA:IGPT). With an expense ratio of 0.61%, the ETF holds some of the top AI and machine learning stocks on the market, including Nvidia, Alphabet (NASDAQ:GOOG), Meta Platforms (NASDAQ:META), Adobe (NASDAQ:ADBE), Advanced Micro Devices (NASDAQ:AMD) and Qualcomm (NASDAQ:QCOM), to name a few.

What's nice about this ETF is that we can buy 100 shares of it for just under $4,300, which allows us to diversify across its 98 holdings. Or, we can just buy NVDA, forgo that diversification, and pay about $87,700 for 100 shares.

Since bottoming out at around $30.27 in October, the IGPT ETF hit a high of $47.03 in March. Now back to $42.78, I'd like to see it initially retest its prior high. Even better, the ETF is technically oversold on RSI, MACD and Williams %R, all of which are pivoting higher.

On Penny Stocks and Low-Volume Stocks: With only the rarest exceptions, InvestorPlace does not publish commentary about companies that have a market cap of less than $100 million or trade less than 100,000 shares each day. That's because these penny stocks are frequently the playground for scam artists and market manipulators. If we ever do publish commentary on a low-volume stock that may be affected by our commentary, we demand that InvestorPlace.com's writers disclose this fact and warn readers of the risks.

Read More: Penny Stocks - How to Profit Without Getting Scammed

On the date of publication, Ian Cooper did not have (either directly or indirectly) any positions in the securities mentioned. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Ian Cooper, a contributor to InvestorPlace.com, has been analyzing stocks and options for web-based advisories since 1999.

Read the original:
3 Machine Learning Stocks with the Potential to Make You an Overnight Millionaire - InvestorPlace

Read More..

Art-focused university using AI in admissions – Inside Higher Ed

In 2019, Kyle O'Connell had a vision of leveraging technology to boost in-person relationships with students at the School of the Art Institute of Chicago. He set out to create a machine learning-enabled system that could help with the admissions process, ultimately meant to direct employees' energy and resources toward students in an earlier and more effective way.

"We deal with the technology, but ultimately we want to bring it back to getting more in-person time with who we can make the most impact on," said O'Connell, director of enrollment analytics and forecasting at the Chicago institution, known as SAIC. "And there's more information we have about students than you're able to assess as an individual person."

He admitted that the machine-learning attempt didn't knock it out of the park on the first try, undermined by data that was not very robust. He worked to adjust the data-gathering process over the next couple of years, and his timeline coincided with an opportunity to work with the Chicago technology consulting firm SPR to use machine-learning models during the application process.


At the start of 2023, SPR asked organizations to send in pitches on how to better the local community, with the winner getting $50,000 in honor of the company's 50th anniversary. The criteria were broad (SPR received pitches on topics as diverse as drones and deforestation), but the firm ultimately chose SAIC because it "fit best with our mission of boosting the local community," according to Steven Devoe, SPR's data specialty director.

Stacks of data from applicants with offers to attend SAIC are entered into the model, which parses more than 100 factors, including the number of SAIC events the applicants attended, the types of programs they are interested in, and where they went to high school. It then spits back two outcomes: the likelihood a student would accept the admissions offer (say, a 50 percent chance they would say yes), and then a further yes or no on whether the student would actually end up attending the university. Oftentimes, institutions see summer melt from students who accept an enrollment offer but do not end up attending.

Both O'Connell and Devoe were quick to point out that the technology is not being used to dictate which students should and should not be accepted into the institution. Instead, the data illuminate the likelihood of already-accepted students choosing to attend the university.
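Neither SAIC nor SPR has published the model's internals, but yield models of this kind are often simple probabilistic classifiers over applicant features. The sketch below is purely hypothetical; the feature names, weights, and threshold are invented for illustration and are not drawn from SAIC's system. A logistic function maps engagement signals to an enrollment probability, and a threshold turns that probability into the model's yes/no second output:

```python
import math

# Hypothetical feature weights -- invented for illustration only.
WEIGHTS = {
    "events_attended": 0.4,  # number of SAIC events the applicant joined
    "visited_campus": 1.2,   # 1 if the applicant toured campus, else 0
    "out_of_state": -0.6,    # 1 if the applicant lives out of state
}
BIAS = -1.0

def enrollment_probability(applicant):
    """Logistic model: map applicant features to a 0-1 yield probability."""
    score = BIAS + sum(w * applicant.get(name, 0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))

applicant = {"events_attended": 3, "visited_campus": 1, "out_of_state": 0}
p = enrollment_probability(applicant)
will_enroll = p >= 0.5  # the further yes/no output described above

print(f"Predicted yield probability: {p:.2f}")  # Predicted yield probability: 0.80
```

A real system would learn its weights from historical admit/enroll data rather than hard-coding them, and would draw on far more features (SAIC's model reportedly parses over 100).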

"There are certainly things you could do with AI that are terrible, but the things a school would do with data is want to know more about directing resources and energy toward people we can help the most," O'Connell said. "It's, 'How can we find them better and earlier?'"

SPR and SAIC began working on the model in the first half of 2023 and began to utilize it in the latter half of the year. The results of using the model are largely unknown, as SAIC is still in its admissions cycle.

While O'Connell said the institution needs to spend the next several months gathering the data before ultimately deciding on more uses, Devoe believes this could ultimately lend itself to budget and time savings.

If, for example, the art school determines that students from a specific country do not have a high likelihood of accepting an offer from SAIC, its officials may spend less on marketing in that country. It also helps with planning for class size and sections, with SAIC officials having a more accurate outlook on which students are likely to end up on campus.

"We created this focused on how to get more students access to higher ed, help the institute plan better and maybe spend dollars more effectively in terms of where it's investing," Devoe said.

He added other higher education institutions have begun reaching out to ask for similar tools or models that could be used for other purposes, such as predicting the likelihood of students dropping out in their first academic term.

This is the first time SAIC is using AI and machine learning in admissions, but many institutions across the nation have turned toward the increasingly pervasive technology.

According to a September survey from higher education-focused magazine Intelligent, half of universities were using AI in their admissions process. This year, that number is expected to jump to more than 80 percent. Institutions reported most often using AI to review transcripts and recommendation letters. Many of them stated that they used it to review personal essays as well, with some going as far as to conduct preliminary interviews with applicants using AI.

"Application readers have been mechanically doing at least the first screen of applications for decades now, based on some uniform criteria given to them by the institution," Diane Gayeski, a professor of strategic communications at Ithaca College and a higher ed adviser for Intelligent, said in a previous interview with Inside Higher Ed.

"Some of that can easily be done by a machine," she said. "These are all algorithms. Whether a person uses them or a machine does, it doesn't make much difference."

However, SAIC does seem to be the first among art- and design-focused institutions to utilize the technology in admissions. While art students typically have to submit a portfolio in the application process, Devoe stressed that the machine-learning technology is not judging the portfolio in any way.

"The art portfolio, it didn't find that interesting," he said, except for clocking which type of program a student would be interested in, such as painting or sculpture.

Many schools of art and design, while harboring some concerns, are leaning into the technology after the launch of ChatGPT late in 2022. "Even the most angry illustration faculty have said, 'I hate it, I wish we could go traditional, but if you're a student today you would be an idiot if you didn't learn this before you go into the world,'" said Rick Dakan, chair of the AI Task Force at the Ringling College of Art and Design in Sarasota, Fla. "It will be part of your career."

SAIC, upon receiving the machine-learning model for free, can utilize it as long as it sees fit. It may upgrade eventually, but for now, O'Connell is content with taking things slowly, in contrast to the normal rhythm of the quick-moving tech world.

"It's, 'Let's not try to do too much; let's start with a single thing we're trying to look at,'" he said. "Which is, can we use the tool along with other reporting and assessments? How does this fit into our workflow? And then, what are its possibilities from there?"

View post:
Art-focused university using AI in admissions - Inside Higher Ed

Read More..

Google upgrades the Chrome Omnibox address bar with machine learning – Android Central

Google is making an under-the-hood change in the latest version of Chrome that is designed to improve the suggested webpage results that appear in the address bar, also known as the Omnibox.

In Chrome 124, these suggestions are now made with the help of machine learning models, which the company says are replacing "hand-built and hand-tuned formulas." Now that the address bar is powered by ML models, results should be more accurate and personalized to each user.

Justin Donnelly, a Chrome engineering lead working on the Omnibox, explains in a blog post that the old scoring system could not be adapted or changed over time. The engineer described it as "inflexible," and due to the lack of flexibility, "the scoring system went largely untouched for a long time." So, when looking at ways to improve the address bar and its suggestions, the Chrome team saw machine learning as the obvious solution.

ML models can often detect trends and insights that get past the human eye, and that was the case with the models powering the Omnibox. One tangible change in address bar behavior due to the switch to ML is a shift in how the "time since navigation" signal is perceived. Previously, the manual formula would give a higher relevance score to URLs that were recently accessed. However, the ML models found that this was not, in fact, what users were looking for.

"It turns out that the training data reflected a pattern where users sometimes navigate to a URL that was not what they really wanted and then immediately return to the Chrome omnibox and try again," Donnelly explains. "In that case, the URL they just navigated to is almost certainly not what they want, so it should receive a low relevance score during this second attempt."
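Chrome's actual model is not public, but the behavior Donnelly describes can be sketched with a toy scoring function. Everything below is hypothetical; the 10-second "immediately returned" window and the decay formula are invented for illustration. The point is the contrast: a fixed recency formula boosts a just-visited URL, while the learned behavior demotes it as a likely mis-navigation:

```python
# Toy relevance scoring -- a sketch of the pattern described in the post,
# not Chrome's implementation.

def hand_tuned_score(seconds_since_visit):
    """Old-style fixed formula: the more recent the visit, the higher the score."""
    return 1.0 / (1.0 + seconds_since_visit / 60.0)

def learned_score(seconds_since_visit):
    """Mimics the ML finding: a URL visited seconds ago was probably a
    mis-navigation, so demote it instead of boosting it."""
    if seconds_since_visit < 10.0:  # hypothetical "immediately returned" window
        return 0.05
    return 1.0 / (1.0 + seconds_since_visit / 60.0)

# A URL visited 3 seconds ago: the fixed formula ranks it near the top,
# while the learned behavior pushes it to the bottom of the suggestions.
print(f"{hand_tuned_score(3):.2f}")  # 0.95
print(f"{learned_score(3):.2f}")     # 0.05
```

The advantage of the ML approach is that rules like this are discovered from training data and can be retrained as usage patterns shift, rather than being hand-coded and left untouched for years.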

Aside from altering the way results are scored by relevance, Google will use ML models in the address bar to make webpage suggestions "more precise and relevant to you." Presumably, your browsing habits and other data Google collects will be used to tweak the Omnibox's behavior to best suit your needs. In other words, the way that people use the Chrome address bar can be used to retrain ML models that power it over time.

The new address bar is included in Chrome 124 for desktops, though you won't notice any visual differences. In the future, Google wants to add more signals to factor into relevance scores, such as time of day and environment.


Read more here:
Google upgrades the Chrome Omnibox address bar with machine learning - Android Central

Read More..

Random robots are more reliable – EurekAlert

video:

Researchers tested the new AI algorithm's performance with simulated robots, such as NoodleBot.

Credit: Northwestern University

Northwestern University engineers have developed a new artificial intelligence (AI) algorithm designed specifically for smart robotics. By helping robots rapidly and reliably learn complex skills, the new method could significantly improve the practicality and safety of robots for a range of applications, including self-driving cars, delivery drones, household assistants and automation.

Called Maximum Diffusion Reinforcement Learning (MaxDiff RL), the algorithm's success lies in its ability to encourage robots to explore their environments as randomly as possible in order to gain a diverse set of experiences. This designed randomness improves the quality of data that robots collect regarding their own surroundings. And, by using higher-quality data, simulated robots demonstrated faster and more efficient learning, improving their overall reliability and performance.

When tested against other AI platforms, simulated robots using Northwestern's new algorithm consistently outperformed state-of-the-art models. The new algorithm works so well, in fact, that robots learned new tasks and then successfully performed them within a single attempt, getting it right the first time. This starkly contrasts with current AI models, which enable slower learning through trial and error.

The research will be published on Thursday (May 2) in the journal Nature Machine Intelligence.

"Other AI frameworks can be somewhat unreliable," said Northwestern's Thomas Berrueta, who led the study. "Sometimes they will totally nail a task, but, other times, they will fail completely. With our framework, as long as the robot is capable of solving the task at all, every time you turn on your robot you can expect it to do exactly what it's been asked to do. This makes it easier to interpret robot successes and failures, which is crucial in a world increasingly dependent on AI."

Berrueta is a Presidential Fellow at Northwestern and a Ph.D. candidate in mechanical engineering at the McCormick School of Engineering. Robotics expert Todd Murphey, a professor of mechanical engineering at McCormick and Berrueta's adviser, is the paper's senior author. Berrueta and Murphey co-authored the paper with Allison Pinosky, also a Ph.D. candidate in Murphey's lab.

The disembodied disconnect

To train machine-learning algorithms, researchers and developers use large quantities of big data, which humans carefully filter and curate. AI learns from this training data, using trial and error until it reaches optimal results. While this process works well for disembodied systems, like ChatGPT and Google Gemini (formerly Bard), it does not work for embodied AI systems like robots. Robots, instead, collect data by themselves without the luxury of human curators.

"Traditional algorithms are not compatible with robotics in two distinct ways," Murphey said. "First, disembodied systems can take advantage of a world where physical laws do not apply. Second, individual failures have no consequences. For computer science applications, the only thing that matters is that it succeeds most of the time. In robotics, one failure could be catastrophic."

To solve this disconnect, Berrueta, Murphey and Pinosky aimed to develop a novel algorithm that ensures robots will collect high-quality data on the go. At its core, MaxDiff RL commands robots to move more randomly in order to collect thorough, diverse data about their environments. By learning through self-curated random experiences, robots acquire the necessary skills to accomplish useful tasks.
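The published algorithm is considerably more sophisticated, but the underlying intuition, that randomized exploration visits a more diverse set of states than repetitive behavior, can be seen in a toy example. The sketch below is not MaxDiff RL itself; the one-dimensional world and both policies are invented for illustration. It compares how many distinct states a deterministic, repetitive policy reaches versus an unbiased random walk:

```python
import random

def coverage(policy, steps=1000, size=50, seed=0):
    """Count the distinct states a policy visits on a bounded 1-D line."""
    rng = random.Random(seed)
    pos = size // 2
    visited = {pos}
    for t in range(steps):
        pos = max(0, min(size - 1, pos + policy(rng, t)))
        visited.add(pos)
    return len(visited)

def repetitive(rng, t):
    # Deterministic back-and-forth: lots of data, almost no diversity.
    return 1 if t % 2 == 0 else -1

def diffusive(rng, t):
    # Unbiased random walk: each step is as random as possible.
    return rng.choice([-1, 1])

print(coverage(repetitive))  # 2 -- bounces between two states forever
print(coverage(diffusive))   # many more distinct states explored
```

In MaxDiff RL proper, this drive toward diverse experience is built into the reinforcement-learning objective itself, so the robot's "self-curated" data stays informative while it is still learning the task.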

Getting it right the first time

To test the new algorithm, the researchers compared it against current, state-of-the-art models. Using computer simulations, the researchers asked simulated robots to perform a series of standard tasks. Across the board, robots using MaxDiff RL learned faster than the other models. They also correctly performed tasks much more consistently and reliably than others.

Perhaps even more impressive: Robots using the MaxDiff RL method often succeeded at correctly performing a task in a single attempt. And thats even when they started with no knowledge.

"Our robots were faster and more agile, capable of effectively generalizing what they learned and applying it to new situations," Berrueta said. "For real-world applications where robots can't afford endless time for trial and error, this is a huge benefit."

Because MaxDiff RL is a general algorithm, it can be used for a variety of applications. The researchers hope it addresses foundational issues holding back the field, ultimately paving the way for reliable decision-making in smart robotics.

"This doesn't have to be used only for robotic vehicles that move around," Pinosky said. "It also could be used for stationary robots, such as a robotic arm in a kitchen that learns how to load the dishwasher. As tasks and physical environments become more complicated, the role of embodiment becomes even more crucial to consider during the learning process. This is an important step toward real systems that do more complicated, more interesting tasks."

The study, Maximum diffusion reinforcement learning, was supported by the U.S. Army Research Office (grant number W911NF-19-1-0233) and the U.S. Office of Naval Research (grant number N00014-21-1-2706).

Journal: Nature Machine Intelligence
Method of research: Computational simulation/modeling
Article title: "Maximum diffusion reinforcement learning"
Article publication date: 2-May-2024

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

See more here:
Random robots are more reliable - EurekAlert

Read More..