
Quantum Computing Enters the Atomic Realm – Optics & Photonics News

Atom-based architectures may have a scalability advantage over other platforms in the quest to build more powerful quantum processors.

An experimental scheme demonstrated by researchers at Princeton and Yale universities is able to convert physical noise into errors that can be corrected more easily. [F. Wojciechowski / Princeton University]

Quantum computers built from arrays of ultracold atoms have recently emerged as a serious contender in the quest to create qubit-powered machines that can outperform their classical counterparts. While other hardware architectures have yielded the first fully functioning quantum processors to be available for programming through the cloud, recent developments suggest that atom-based platforms might have the edge when it comes to future scalability.


That scalability advantage stems from the exclusive use of photonic technologies to cool, trap and manipulate the atomic qubits. Side-stepping the need for complex cryogenic systems or the intricacies of chip fabrication, neutral-atom quantum computers can largely be built from existing optical components and systems that have already been optimized for precision and reliability.

"The traps are optical tweezers, the atoms are controlled with laser beams and the imaging is done with a camera," says Jeff Thompson, a physicist at Princeton University, USA, whose team has been working to build a quantum computer based on arrays of ytterbium atoms. "The scalability of the platform is limited only by the engineering that can be done with the optical system, and there is a whole industry of optical components and megapixel devices where much of that work has already been done."

Jeff Thompson and his team at Princeton University, USA, have pioneered the use of ytterbium atoms to encode and manipulate quantum information. [S.A. Khan / Fotobuddy]

Such ready availability of critical components and systems has enabled both academic groups and commercial companies to scale their quantum processors from tens of atomic qubits to several hundred in the space of just a few years. Then, in November 2023, the California-based startup Atom Computing announced that it had populated a revamped version of its commercial system with almost 1,200 qubits, more than had yet been reported for any hardware platform. "It's exciting to be able to showcase the solutions we have been developing for the past several years," says Ben Bloom, who founded the company in 2018 and is now its chief technology officer. "We have demonstrated a few firsts along the way, but while we have been building, the field has been getting more and more amazing."

Neutral atoms offer many appealing characteristics for encoding quantum information. For a start, they are all identical, completely free of any imperfections that may be introduced through fabrication, which means that they can be controlled and manipulated without the need to tune or calibrate individual qubits. Their quantum states and interactions are also well understood and characterized, while crucial quantum properties such as superposition and entanglement are maintained over long enough timescales to perform computational tasks.

However, early attempts to build quantum computers from neutral atoms met with two main difficulties. The first was the need to extend existing methods for trapping single atoms in optical tweezers to create large-scale atomic arrays. Although technologies such as spatial light modulators enable laser beams to be used to produce a regular pattern of microtraps, loading the atoms into the tweezers is a stochastic process, which means that the probability of each trap being occupied is 50%. As a result, the chances of creating a defect-free array containing large numbers of atoms become vanishingly small.
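
With independent 50% loading, the probability that all $N$ traps are filled at the same time falls off exponentially:

$$P(\text{defect-free}) = (1/2)^{N}, \qquad (1/2)^{100} \approx 8 \times 10^{-31},$$

so even a 100-site array is essentially never fully loaded by chance alone.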

The solution came in 2016, when three separate groups, based at the Institut d'Optique, France; Harvard University, USA; and the Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea, demonstrated a concept called rearrangement. In this scheme, an image is taken of the atoms when they are first loaded into the tweezers, which identifies which sites are occupied and which are empty. All the vacant traps are switched off, and then the loaded ones are moved to fill the gaps in the array. This shuffling procedure can be achieved, for example, by using acousto-optic deflectors to alter the positions of the trapping laser beams, creating dynamic optical tweezers that can be combined with real-time control to assemble large arrays of single atoms in less than a second.
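
At its core, the rearrangement step is a matching problem between loaded traps and target sites. The Python sketch below is a minimal, illustrative model of that logic only; the site numbering, fill probability and greedy pairing are assumptions for illustration, not details taken from the cited experiments.

```python
import random

def load_traps(n_sites, p_fill=0.5):
    """Stochastic loading: each tweezer site is filled with probability ~0.5."""
    return [random.random() < p_fill for _ in range(n_sites)]

def plan_rearrangement(occupied, target_sites):
    """Greedy plan: pair spare loaded traps with empty sites in the target region."""
    target_set = set(target_sites)
    empty_targets = [i for i in target_sites if not occupied[i]]
    spares = [i for i, filled in enumerate(occupied) if filled and i not in target_set]
    # zip() truncates, so moves are planned only while spare atoms remain.
    return list(zip(spares, empty_targets))

random.seed(1)
occupied = load_traps(625)            # e.g. a 25 x 25 grid of microtraps
target = list(range(324))             # aim for a defect-free 18 x 18 sub-array
moves = plan_rearrangement(occupied, target)
print(f"{sum(occupied)} atoms loaded, {len(moves)} moves planned")
```

In a real experiment the camera image plays the role of `occupied` and the moves are executed by steering the acousto-optic deflectors; the code only captures the bookkeeping.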

Large defect-free arrays of single atoms can be created through the process of rearrangement. In this example, demonstrated by a team led by Antoine Browaeys of the Institut d'Optique, France, an ordered array of 324 atoms was created from 625 randomly filled traps. [Reprinted with permission from K.-N. Schymik, Phys. Rev. A 106, 022611 (2022); © 2022 by the American Physical Society]

"Before that, there were lots of complicated ideas for generating single-atom states in optical tweezers," remembers Thompson. This rearrangement technique enabled the creation of large arrays containing one hundred or so single atoms without defects, and that has since been extended to much higher numbers.

In these atomic arrays, the qubits are encoded in two long-lived energy states that are controlled with laser light. In rubidium, for example, which is often used because its well-understood atomic transitions can be manipulated relatively easily, the single outermost electron occupies one of two distinct energy levels in the ground state, caused by the coupling between the electron spin and the nuclear spin. The atoms are easily switched between these two energy states by flipping the spins relative to each other, which is achieved with microwave pulses tuned to 6.8 GHz.
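
For concreteness, and assuming the rubidium-87 isotope implied by the 6.8 GHz figure, the two qubit levels are the hyperfine ground states, and the microwave drive bridges their energy splitting:

$$|0\rangle \equiv |5S_{1/2},\, F=1\rangle, \qquad |1\rangle \equiv |5S_{1/2},\, F=2\rangle, \qquad \frac{E_1 - E_0}{h} \approx 6.8\ \text{GHz}.$$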

While atoms in these stable low-energy levels offer excellent single-qubit properties, the gate operations that form the basis of digital computation require the qubits to interact and form entangled states. Since the atoms in a tweezer array are too far apart for them to interact while remaining in the ground state, a focused laser beam is used to excite the outermost electron into a much higher energy state. In these highly excited Rydberg states, the atom becomes physically much larger, generating strong interatomic interactions on sub-microsecond timescales.

One important effect of these interactions is that the presence of a Rydberg atom shifts the energy levels in its nearest neighbors, preventing them from being excited into the same high-energy state. This phenomenon, called the Rydberg blockade, means that only one of the atoms excited by the laser will form a Rydberg state, but it's impossible to know which one. Such shared excitations are the characteristic feature of entanglement, providing an effective mechanism for controlling two-qubit operations between adjacent atoms in the array.
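
In the simplest two-atom picture, a pulse that would excite either atom from the ground state $|g\rangle$ to the Rydberg state $|r\rangle$ instead drives the pair into the shared, entangled excitation

$$\frac{1}{\sqrt{2}}\left(|gr\rangle + |rg\rangle\right),$$

because the blockade forbids the doubly excited state $|rr\rangle$. This is the textbook idealization; the experiments add controlled phases on top of it to realize two-qubit gates.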

Until recently, however, the logic gates created through two-atom entanglement were prone to errors. "For a long time, the fidelity of two-qubit operations hovered at around 80%, much lower than could be achieved with superconducting or trapped-ion platforms," says Thompson. "That meant that neutral atoms were not really taken seriously for gate-based quantum computing."

The sources of these errors were not fully understood until 2018, when breakthrough work by Antoine Browaeys and colleagues at the Institut d'Optique and Mikhail Lukin's team at Harvard University analyzed the effects of laser noise on the gate fidelities. "People had been using very simple models of the laser noise," says Thompson. "With this work, they figured out that phase fluctuations were the major contributor to the high error rates."

At a stroke, these two groups showed that suppressing the laser phase noise could extend the lifetime of the Rydberg states and boost the fidelity of preparing two-qubit entangled states to 97%. Further enhancements since then have yielded two-qubit gate fidelities of more than 99%, the minimum threshold for fault-tolerant quantum computing.


That fundamental advance established atomic qubits as a competitive platform for digital quantum computing, catalyzing academic groups and quantum startups to explore and optimize the performance of different atomic systems. While rubidium continues to be a popular choice, several groups believe that ytterbium could offer some crucial benefits for large-scale quantum computing. "Ytterbium has a nuclear spin of one half, which means that the qubit can be encoded purely in the nuclear spin," explains Thompson. "While all qubits based on atoms or ions have good coherence by default, we have found that pure nuclear-spin qubits can maintain coherence times of many seconds without needing any special measures."

Pioneering experiments in 2022 by Thompson's Princeton group, as well as by a team led by Adam Kaufman at JILA in Boulder, CO, USA, first showed the potential of the ytterbium-171 isotope for producing long-lived atomic qubits. Others have followed their lead, with Atom Computing replacing the strontium atoms in its original prototype with ytterbium-171 in the upgraded 1,200-qubit platform. "Strontium also supports nuclear qubits, but we found that we needed to do lots of quantum engineering to achieve long coherence times," says Bloom. "With ytterbium, we can achieve coherence times of tens of seconds without the need for any of those extra tricks."

Atom Computing's first-generation quantum computer exploited around 100 qubits of single strontium atoms, while its next-generation platform can accommodate around 1,200 ytterbium atoms. [Atom Computing]

The rich energy-level structure of ytterbium also provides access to a greater range of atomic transitions from the ground state, offering new ways to manipulate and measure the quantum states. Early experiments have shown, for example, that this additional flexibility can be exploited to measure some of the qubits while a quantum circuit is being run but without disturbing the qubits that are still being used for logical operations.

Indeed, the ability to perform these mid-circuit measurements is a critical requirement for emerging schemes to locate and correct physical errors in the system, which have so far compromised the ability of quantum computers to perform complex computations. These physical errors are caused by noise and environmental factors that perturb the delicate quantum states, with early estimates suggesting that millions of physical qubits might be needed to provide the redundancy needed to achieve fault-tolerant quantum processing.

More recently, however, it has become clear that fewer qubits may be needed if the physical system can be engineered to limit the impact of the errors. One promising approach is the concept of erasure conversion, demonstrated in late 2023 by a team led by Thompson and Shruti Puri at Yale University, USA, in which the physical noise is converted into errors with known locations, also called erasures.

In their scheme, the qubits are encoded in two metastable states of ytterbium, for which most errors will cause them to decay back to the ground state. Importantly, those transitions can easily be detected without disturbing the qubits that are still in the metastable state, allowing failures to be spotted while the quantum processor is still being operated. "We just flash the atomic array with light after a few gate operations, and any light that comes back illuminates the position of the error," explains Thompson. "Just being able to see where they are could ultimately reduce the number of qubits needed for error correction by a factor of ten."

Experiments by the Princeton researchers show that their method can currently locate 56% of the errors in single-qubit gates and 33% of those in two-qubit operations, which can then be discarded to reduce the effects of physical noise. The team is now working to increase the fidelity that can be achieved when using these metastable states for two-qubit operations, which currently stands at 98%.
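
As a rough classical analogy for why flagging error locations helps, the Python sketch below compares post-selected fidelity when a given fraction of errors is heralded as erasures and those shots are discarded. The underlying error rate used here is an illustrative placeholder, not a measured value from the Princeton experiments; only the 33% and 56% detection fractions come from the text above.

```python
import random

def post_selected_fidelity(n_trials, p_error, detect_frac):
    """Monte Carlo: each trial may suffer an error; a fraction of errors is heralded."""
    kept = kept_good = 0
    for _ in range(n_trials):
        error = random.random() < p_error
        heralded = error and (random.random() < detect_frac)
        if heralded:
            continue                    # discard shots with a flagged (erasure) error
        kept += 1
        kept_good += not error
    return kept_good / kept

random.seed(0)
for frac in (0.00, 0.33, 0.56):
    fid = post_selected_fidelity(100_000, p_error=0.02, detect_frac=frac)
    print(f"{frac:.0%} of errors flagged -> post-selected fidelity {fid:.4f}")
```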

A team led by Mikhail Lukin (right) at Harvard University, USA, pictured with lab member Dolev Bluvstein, created the first programmable logical quantum processor, capable of encoding up to 48 logical qubits. [J. Chase / Harvard Staff Photographer]

Meanwhile, Lukin's Harvard group, working with several academic collaborators and Boston-based startup QuEra Computing, has arguably made the closest approach yet to error-corrected quantum computing. One crucial step forward is the use of so-called logical qubits, which mitigate the effects of errors by sharing the quantum information among multiple physical qubits.

Previous demonstrations with other hardware platforms have yielded one or two logical qubits, but Lukin and his colleagues showed at the end of 2023 that they could create 48 logical qubits from 280 atomic qubits. They used optical multiplexing to illuminate all the rubidium atoms within a logical qubit with identical light beams, allowing each logical block to be moved and manipulated as a single unit. Since each atom in the logical block is addressed independently, this hardware-efficient control mechanism prevents any errors in the physical qubits from escalating into a logical fault.

For more-scalable processing of these logical qubits, the researchers also divided their architecture into three functional zones. The first is used to store and manipulate the logical qubitsalong with a reservoir of physical qubits that can be mobilized on demandensuring that these stable quantum states are isolated from processing errors in other parts of the hardware. Pairs of logical qubits can then be moved, or shuttled, into the second entangling zone, where a single excitation laser drives two-qubit gate operations with a fidelity of more than 99.5%. In the final readout zone, the outcome of each gate operation is measured without affecting the ongoing processing tasks.
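
A minimal conceptual sketch of that three-zone dataflow is given below in Python. The class and method names are hypothetical and only illustrate the sequencing described above; they do not correspond to the Harvard team's or QuEra's actual control software.

```python
from dataclasses import dataclass, field

@dataclass
class LogicalQubit:
    name: str
    zone: str = "storage"               # storage -> entangling -> readout

@dataclass
class ZonedProcessor:
    log: list = field(default_factory=list)

    def shuttle(self, q: LogicalQubit, zone: str):
        q.zone = zone
        self.log.append(f"move {q.name} to {zone} zone")

    def entangle(self, a: LogicalQubit, b: LogicalQubit):
        # Pairs of logical qubits are driven by a single global excitation laser.
        assert a.zone == b.zone == "entangling"
        self.log.append(f"two-qubit gate on ({a.name}, {b.name})")

    def readout(self, q: LogicalQubit):
        assert q.zone == "readout"
        self.log.append(f"measure {q.name} without disturbing the storage zone")

proc, q1, q2 = ZonedProcessor(), LogicalQubit("L1"), LogicalQubit("L2")
for q in (q1, q2):
    proc.shuttle(q, "entangling")
proc.entangle(q1, q2)
proc.shuttle(q1, "readout")
proc.readout(q1)
print("\n".join(proc.log))
```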

Schematic of the logical processor, split into three zones: storage, entangling and readout. Logical single-qubit and two-qubit operations are realized transversally with efficient, parallel operations. [D. Bluvstein et al., Nature 626, 58 (2024); CC-BY-NC 4.0]

The team also configured error-resistant quantum circuits to run on the logical processor, which in one example yielded a fidelity of 72% when operating on 10 logical qubits, increasing to 99% when the gate errors detected in the readout zone at the end of each operation were discarded. When running more complex quantum algorithms requiring hundreds of logical gates, the performance was up to 10 times better when logical qubits were used instead of their single-atom counterparts.

While this is not yet full error correction, which would require the faults to be detected and reset in real time, this demonstration shows how a logical processor can work in tandem with error-resistant software to improve the accuracy of quantum computations. The fidelities that can be achieved could be improved still further by sharing the quantum information among more physical qubits, with QuEra's technology roadmap suggesting that by 2026 it will be using as many as 10,000 single atoms to generate 100 logical qubits. "This is a truly exciting time in our field as the fundamental ideas of quantum error correction and fault tolerance start to bear fruit," Lukin commented. "Although there are still challenges ahead, we expect that this new advance will greatly accelerate the progress toward large-scale, useful quantum computers."

In another notable development, QuEra has also won a multimillion-dollar contract to build a version of this logical processor at the UK's National Quantum Computing Centre (NQCC). The QuEra system will be one of seven prototype quantum computers to be installed at the national lab by March 2025, with others including a cesium-based neutral-atom system from Infleqtion (formerly ColdQuanta) and platforms exploiting superconducting qubits and trapped ions.

Once built, these development platforms will be used to understand and benchmark the capabilities of different hardware architectures, explore the types of applications that suit each one, and address the key scaling challenges that stand in the way of fault-tolerant quantum computing. "We know that much more practical R&D will be needed to bridge the gap between currently available platforms and a fully error-corrected neutral-atom quantum computer with hundreds of logical qubits," says Nicholas Spong, who leads the NQCC's activities in tweezer-array quantum computing. "For neutral-atom architectures, the ability to scale really depends on engineering the optics, lasers and control systems."

Researchers at the Boston-based startup QuEra, which collaborates on neutral-atom quantum computing with Mikhail Lukin's group at Harvard University, USA. [Courtesy of QuEra]

One key goal for hardware developers will be to achieve the precision needed to control the spin rotations of individual atoms as they become more closely packed into the array. While global light fields and qubit shuttling provide efficient and precise control mechanisms for bulk operations, single-atom processes must typically be driven by focused laser beams operating on the scale of tens of nanometers.

To relax the strict performance criteria for these local laser beams, Thompson's group has demonstrated an alternative solution that works for divalent atoms such as ytterbium. "We still have a global gate beam, but then we choose which atoms experience that gate by using a focused laser beam to shift specific atoms out of resonance with the global light field," he explains. "It doesn't really matter how big the light shift is, which means that this approach is more robust to variations in the laser. Being able to control small groups of atoms in this way is a lot faster than moving them around."

Another key issue is the number of single atoms that can be held securely in the tweezer array. Current roadmaps suggest that arrays containing 10,000 atoms could be realized by increasing the laser power, but scaling to higher numbers could prove tricky. "It's a challenge to get hundreds of watts of laser power into the traps while maintaining coherence across the array," explains Spong. "The entire array of traps should be identical, but imperfect optics makes it hard to make the traps around the edge work as well as those in the center."

With that in mind, the team at Atom Computing has deployed additional optical technologies in its updated platform to provide a pathway to larger-scale machines. "If we wanted to go from 100 to 1,000 qubits, we could have just bought some really big lasers," says Bloom. "But we wanted to get on a track where we can continue to expand the array to hundreds of thousands of atoms, or even a million, without running into issues with the laser power."

A quantum engineer measures the optical power of a laser beam at Atom Computing's research and development facility in Boulder, CO, USA. [Atom Computing]

The solution for Atom Computing has been to combine the atomic control provided by optical tweezers with the trapping ability of optical lattices, which are most commonly found in the world's most precise atomic clocks. These optical lattices exploit the interference of laser beams to create a grid of potential wells on the subwavelength scale, and their performance can be further enhanced by adding an optical buildup cavity to generate constructive interference between many reflected laser beams. "With these in-vacuum optics, we can create a huge array of deep traps with only a moderate amount of laser power," says Bloom. "We chose to demonstrate an array that can trap 1,225 ytterbium atoms, but there's no reason why we couldn't go much higher."
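
In the simplest one-dimensional case, a retro-reflected beam of wavelength $\lambda$ produces the standard lattice potential

$$V(x) = V_0 \cos^2\!\left(\frac{2\pi x}{\lambda}\right),$$

whose wells repeat every $\lambda/2$, which is the subwavelength grid referred to above; the build-up cavity simply increases the usable trap depth $V_0$ for a given input laser power.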

Importantly, in a modification of the usual rearrangement approach, this design also allows the atomic array to be continuously reloaded while the processor is being operated. Atoms held in a magneto-optical trap are first loaded into a small reservoir array, from which they are transferred into the target array that will be used for computation. The atoms in both arrays are then moved into the deep trapping potential of the optical lattice, where rapid and low-loss fluorescence imaging determines which of the sites are occupied. Returning the atoms to the optical tweezers then allows empty sites within the target array to be filled from the reservoir, with multiple loading cycles yielding an occupancy of 99%.
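
To see why a few reloading cycles are enough, here is a toy Python model in which each cycle fills a fixed fraction of the remaining empty target sites from the reservoir. The per-cycle transfer efficiency and starting occupancy are assumed placeholders, not measured figures.

```python
def occupancy_after_cycles(n_cycles, transfer_eff=0.8, start=0.5):
    """Fraction of target sites filled after repeated reservoir-loading cycles."""
    filled = start
    for _ in range(n_cycles):
        filled += (1.0 - filled) * transfer_eff   # refill a share of the empty sites
    return filled

for cycles in (1, 2, 3, 4):
    print(cycles, "cycles ->", f"{occupancy_after_cycles(cycles):.1%} occupancy")
```

Each cycle shrinks the number of empty sites geometrically, which is why the measured occupancy quickly approaches the 99% quoted above.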


Repeatedly replenishing the reservoir with fresh atoms ensures that the target array is always full of qubits, which is essential to prevent atom loss during the execution of complex quantum algorithms. "Large-scale error-corrected computations will require quantum information to survive long past the lifetime of a single qubit," Bloom says. "It's all about keeping that calculation going when you have hundreds of thousands of qubits."

While many challenges remain, researchers working in the field believe the pace of progress in recent years is already propelling the technology toward the day when a neutral-atom quantum computer will be able to outperform a classical machine. "Neutral atoms allow us to reach large numbers of qubits, achieve incredibly long coherence times and access novel error-correction codes," says Bloom. "As an engineering firm, we are focused on improving the performance still further, since all that's really going to matter is whether you have enough logical qubits and sufficiently high gate fidelities to address problems that are interesting for real-world use cases."

Susan Curtis is a freelance science and technology writer based in Bristol, UK.


Ripple publishes math prof’s warning: ‘public-key cryptosystems should be replaced’ – Cointelegraph

Professor Massimiliano Sala, of the University of Trento in Italy, recently discussed the future of blockchain technology, as it relates to encryption and quantum computing, with the crew at Ripple as part of the company's ongoing university lecture series.

Sala's discussion focused on the potential threat posed by quantum computers as the technology matures. According to the professor, current encryption methods could be easy for tomorrow's quantum computers to solve, thus putting entire blockchains at risk.

Per Sala:

What the professor is referring to is a hypothetical paradigm called Q-day, a point at which quantum computers become sufficiently powerful and available for bad actors to break classical encryption methods.

While this would have far-reaching implications for any field where data security is important, including emergency services, infrastructure, banking, and defense, it could theoretically devastate the world of cryptocurrency and blockchain.

Specifically, Sala warns that all classical public-key cryptosystems should be replaced with counterparts secure against quantum attacks. The idea here is that a future quantum computer or quantum attack algorithm could crack the encryption on these keys using mathematical brute force.

It bears mention that Bitcoin, the world's most popular cryptocurrency and blockchain, would fall under this category.

While there currently exists no practical quantum computer capable of such a feat, governments and science institutions around the globe have been preparing for Q-day as if it's an eventuality. For his part, Sala says that such an event may not be imminent. However, physicists at dozens of academic and commercial laboratories have demonstrated breakthroughs that have led many in the field to believe such systems could arrive within a matter of years.

Ultimately, Sala says he's satisfied with the progress being made in the sector and recommends that blockchain developers continue to work with encryption experts who understand the standards and innovations surrounding quantum-proofing modern systems.



The 3 Best Quantum Computing Stocks to Buy in May 2024 – InvestorPlace

Despite taking a hit in April, quantum computing stocks have a long runway ahead

Quantum computing stocks on the whole took a generous hit in April, mostly due to the market's realization that interest rates will not be going down any time soon. Moreover, the U.S. Labor Department's most recent report on inflation has stymied any hopes of a near-term rate cut, with the most likely date for cuts now occurring in September of 2024 if inflation continues to cool. According to the report, year-over-year inflation numbers decreased from 3.5% to 3.4%. Most consumers won't notice, but it shows the right trend and might motivate larger corporations to rethink expenditures.

After all, the world of quantum computing relies heavily on expenditures in the esoteric. Many of the investments made today to progress the technology may not see maturation for decades to come. As such, investors should be very picky about which quantum computing stocks to buy. Not all are created equal, and not all will hold an equal share of the future of this revolutionary technology.


I've been bullish before on International Business Machines (NYSE:IBM) due to its contributions to artificial intelligence and nanomaterials. Many of the company's advantages in the world of technology come from its mastery of computer design and manufacturing. Nowhere is this more evident than in IBM's quantum computing projects, which offer a return to the massive supercomputers of the 20th century, yet at exponentially more powerful computing speeds.

From its Quantum System Two to the new Heron Processor, IBM is constantly on the cutting edge of quantum technology and shows no signs of slowing down. For investors, this means investing in a company with both the capital and reputation to lead in new quantum technologies.

Furthermore, IBM is incredibly well-diversified, touching several industries from consumer and corporate computing to AI models and beyond. Thus, even if its quantum computing projects underperform, IBM has diverse ways to make it up to investors.


Branding itself as "the practical quantum computing company," D-Wave Quantum (NYSE:QBTS) still sits at a significant discount from its special purpose acquisition company (SPAC) merger price of around $10. Two years after going public via the SPAC, the company has lost 87% of its value. This means trouble for anyone who bought into it then. However, the tides are turning for D-Wave Quantum's stock, as the company has one of the most pertinent business models on this list of quantum computing stocks to buy.

By leveraging its resources in quantum computing data centers, the company offers the Leap quantum cloud service, which makes the power of quantum computers available for a fraction of the price. Subscribers can use quantum cloud computing to solve mathematical problems in a fraction of the time required by traditional computing.

Ultimately, this business model has resulted in the company's Q1 2024 revenue rising 56% year-over-year (YOY), with Q1 bookings up 54% YOY and gross profit up 294% YOY. As such, QBTS could easily be a quantum computing stock to buy due to sheer value for money.


From a quantum computing technology standpoint, IonQ (NYSE:IONQ) still offers some of the most compelling computers on the market. That's because IONQ's proprietary design relies on electromagnetism and atomic interactions to perform sustained calculations. From a quantum mechanics standpoint, this allows for solving far more complex and time-consuming algorithms. These computers are also exceptionally scalable to customer needs, making them versatile.

This is why I recommended the stock back in March 2024. Now I'm doubling down on my recommendation thanks to its recent Q1 2024 report showing decent revenue growth. Though somewhat meager, its $7.6 million in revenue for the quarter represents 77% growth year-over-year. Moreover, the company is keeping generous cash reserves at $434.4 million to maintain operations and research around its projects.

Bearing all this in mind, analysts have awarded the stock strong buy ratings for now. As a result, investors looking for a pure-play in quantum computing should not pass up IONQ.

On the date of publication, Viktor Zarev did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Viktor Zarev is a scientist, researcher, and writer specializing in explaining the complex world of technology stocks through dedication to accuracy and understanding.


Glimpse of next-generation internet – Harvard Gazette

It's one thing to dream up a next-generation quantum internet capable of sending highly complex, hacker-proof information around the world at ultra-fast speeds. It's quite another to physically show it's possible.

That's exactly what Harvard physicists have done, using existing Boston-area telecommunication fiber, in a demonstration of the world's longest fiber distance between two quantum memory nodes. Think of it as a simple, closed internet carrying a signal encoded not by classical bits like the existing internet, but by perfectly secure, individual particles of light.

The groundbreaking work, published in Nature, was led by Mikhail Lukin, the Joshua and Beth Friedman University Professor in the Department of Physics, in collaboration with Harvard professors Marko Lončar and Hongkun Park, who are all members of the Harvard Quantum Initiative. The Nature work was carried out with researchers at Amazon Web Services.

The Harvard team established the practical makings of the first quantum internet by entangling two quantum memory nodes separated by an optical fiber link deployed over a roughly 22-mile loop through Cambridge, Somerville, Watertown, and Boston. The two nodes were located a floor apart in Harvard's Laboratory for Integrated Science and Engineering.

Map showing path of two-node quantum network through Boston and Cambridge.

Credit: Can Knaut via OpenStreetMap

Quantum memory, analogous to classical computer memory, is an important component of a quantum computing future because it allows for complex network operations and information storage and retrieval. While other quantum networks have been created in the past, the Harvard team's is the longest fiber network between devices that can store, process, and move information.

Each node is a very small quantum computer, made out of a sliver of diamond that has a defect in its atomic structure called a silicon-vacancy center. Inside the diamond, carved structures smaller than a hundredth the width of a human hair enhance the interaction between the silicon-vacancy center and light.


The silicon-vacancy center contains two qubits, or bits of quantum information: one in the form of an electron spin used for communication, and the other in a longer-lived nuclear spin used as a memory qubit to store entanglement, the quantum-mechanical property that allows information to be perfectly correlated across any distance.

(In classical computing, information is stored and transmitted as a series of discrete binary signals, say on/off, that form a kind of decision tree. Quantum computing is more fluid, as information can exist in stages between on and off, and is stored and transferred as shifting patterns of particle movement across two entangled points.)

Using silicon-vacancy centers as quantum memory devices for single photons has been a multiyear research program at Harvard. The technology solves a major problem in the theorized quantum internet: signal loss that can't be boosted in traditional ways.

A quantum network cannot use standard optical-fiber signal repeaters because simple copying of quantum information as discrete bits is impossible, which makes the information secure but also very hard to transport over long distances.

Silicon-vacancy-center-based network nodes can catch, store, and entangle bits of quantum information while correcting for signal loss. After cooling the nodes to close to absolute zero, light is sent through the first node and, by nature of the silicon-vacancy center's atomic structure, becomes entangled with it and so is able to carry the information.

"Since the light is already entangled with the first node, it can transfer this entanglement to the second node," explained first author Can Knaut, a Kenneth C. Griffin Graduate School of Arts and Sciences student in Lukin's lab. "We call this photon-mediated entanglement."
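
Schematically, and as a simplified textbook picture rather than the exact protocol reported in Nature, the photon first becomes entangled with the spin in node A; after it interacts with node B, measuring the photon leaves the two distant spins entangled:

$$\frac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_A |e\rangle + |{\downarrow}\rangle_A |l\rangle\bigr) \;\longrightarrow\; \frac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_A |{\uparrow}\rangle_B + |{\downarrow}\rangle_A |{\downarrow}\rangle_B\bigr),$$

where $|e\rangle$ and $|l\rangle$ denote two distinguishable photon states (for example, early and late time bins) and the photon itself is measured out in the final, heralding step.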

Over the last several years, the researchers have leased optical fiber from a company in Boston to run their experiments, fitting their demonstration network on top of the existing fiber to indicate that creating a quantum internet with similar network lines would be possible.

"Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step toward practical networking between quantum computers," Lukin said.

A two-node quantum network is only the beginning. The researchers are working diligently to extend the performance of their network by adding nodes and experimenting with more networking protocols.

The paper is titled "Entanglement of Nanophotonic Quantum Memory Nodes in a Telecom Network." The work was supported by the AWS Center for Quantum Networking's research alliance with the Harvard Quantum Initiative, the National Science Foundation, the Center for Ultracold Atoms (an NSF Physics Frontiers Center), the Center for Quantum Networks (an NSF Engineering Research Center), the Air Force Office of Scientific Research, and other sources.


Europe’s Race towards Quantum-HPC Integration and Quantum Advantage – HPCwire

What an interesting panel, "Quantum Advantage: Where are We and What is Needed?" While the panelists looked slightly weary (theirs was, after all, one of the last panels at ISC 2024), the discussion was fascinating and the panelists knowledgeable. No such panel would be complete without also asking when QA will be achieved. The broad, unsurprising answer to that question is not especially soon.

The panel included: Thomas Lippert, head of Jülich Supercomputing Centre (JSC) and director at the Institute for Advanced Simulation; Laura Schulz, acting head of quantum computing and technologies, Leibniz Supercomputing Centre; Stefano Mensa, advanced computing and emerging technologies group leader, STFC Hartree Centre; and Sabrina Maniscalco, CEO and co-founder, Algorithmiq Ltd. The moderator was Heike Riel, IBM Fellow, head of science & technology and lead of IBM Research Quantum Europe.

Missing from the panel was a pure-play quantum computer developer that might have added a different perspective. Maybe next year. Topics included quantum-HPC integration, the need for benchmarks (though when and how was not clear), the likely role for hybrid quantum-HPC applications in the NISQ world, familiar discussion around error mitigation and error correction, and more.

Of the many points made, perhaps the strongest was around the idea that Europe has mobilized to rapidly integrate quantum computers into its advanced HPC centers.

Schulz said, "The reason that our work in the Munich Quantum Valley (MQV) is so important is because when we look at the European level, we have the EuroHPC Joint Undertaking. We have the six quantum systems that are going to be placed in hosting centers European-wide, and we all [have] different modalities, and we all have to integrate. We have to think about this at the European level for how we're going to bring these systems together. We do not want multiple schedulers. We do not want multiple solutions that could then clash with one another. We want to try to find unity where it makes sense and be able to amplify the user experience and smooth the user experience European-wide for them."

The idea is to connect all of these EuroHPC JU systems and make them widely available to academia and industry. LRZ and JSC, for example, have already fielded or are about to field several quantum computers in their facilities (see slides below).

Lippert emphasized that, at least for this session, the focus is on how to achieve quantum advantage: "When we talk about quantum utility, when this becomes useful, then the quantum computer is able to solve problems of practical usage significantly faster than any classical computer [based on] CPUs, GPUs, of comparable size, weight and power in similar environments. We think this is the first step to be made with quantum-HPC hybrid type of simulation, optimization, machine learning algorithms. Now, how do you realize such quantum advantage? You build HPC-hybrid compute systems. We have the approach that we talk about the modular supercomputing architecture."

"Our mission is to establish a vendor-agnostic, comprehensive, public quantum computer user infrastructure integrated into our modular complex of supercomputers. [It] is a user-friendly and peer-reviewed access. So like we do with supercomputing."

Schulz drilled down into the software stack being developed at LRZ in collaboration with many partners. "On the left side of the slide below are traditional parts: co-scheduling, co-resource management, all those components that we need to think of, and that we do think of with things like disaggregated acceleration," said Schulz.

When you get to the right side, she noted, "we have to deal with the new physics environment or the new quantum computing environment. So we have a quantum compiler that we are developing, we have a quantum representation moving between them. We've got a robust, customized, comprehensive toolkit with things like the debuggers, the optimizers, all of those components that's built with our partners in the ecosystem. Then we have an interface, this QBMI (quantum back-end manager interface), and this is what connects the systems individually into our whole framework."

"Now, this is really important. And this is part of the evolution. We've been working on this for two years, actively building this up, and we're already starting to see the fruits of our labor. In our Quantum Integration Centre (QIC), we are already able to go from our HPC environment, so our HPC testbed that we have, using our Munich Quantum Software Stack, we are able to go to an access node on the HPC system, the same hardware, and call to the quantum system. We have that on prem, it is co-located with these systems, and it is an integrated effort with our own software stack. So we are making great strides," Schulz said.
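
To make the hybrid workflow concrete, here is a purely hypothetical Python sketch of what "an HPC access node calling out to a quantum back end through a common interface" might look like. None of the class or function names below correspond to the actual Munich Quantum Software Stack or QBMI APIs; they are placeholders for illustration only.

```python
from dataclasses import dataclass

@dataclass
class QuantumBackend:
    """Stand-in for a hardware back end registered with a QBMI-like manager."""
    name: str
    modality: str            # e.g. "superconducting", "neutral-atom", "trapped-ion"

    def run(self, circuit: str, shots: int) -> dict:
        # A real interface would submit the compiled circuit to hardware;
        # here we just return a placeholder result.
        return {"backend": self.name, "circuit": circuit, "shots": shots}

def hybrid_job(classical_input: list, backend: QuantumBackend) -> dict:
    # Classical pre-processing on the HPC side (placeholder).
    prepared = sorted(classical_input)
    # Hand the quantum part of the workload to the back end.
    result = backend.run(circuit="bell_pair_test", shots=1000)
    # Classical post-processing would combine both results.
    return {"prepared": prepared, "quantum": result}

backend = QuantumBackend(name="demo-qpu", modality="superconducting")
print(hybrid_job([3, 1, 2], backend))
```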

The pan-European effort to integrate quantum computing into HPC centers is impressive and perhaps the furthest along worldwide. Its emphasis is on handling multiple quantum modalities (superconducting, trapped ion, photonic, neutral atom) and approaches (gate-based and annealing), and on trying to develop, relatively speaking, a common, easy-to-use software stack connecting HPC and quantum systems.

Mensa of the U.K.'s STFC zeroed in on benchmarking. Currently there are many efforts but few widely agreed-upon benchmarks. Roughly, the quantum community talks about system benchmarks (low and middle level) that evaluate a system's basic attributes (fidelity, speed, connectivity, etc.) and application-oriented benchmarks intended to look more at time-to-solution, quantum resources needed, and accuracy.

No one disputes the need for quantum benchmarks. Mensa argued for a coordinated effort and suggested the SPEC model as something to look at. "The SPEC Consortium for HPC is a great example, because it's a nonprofit and it establishes and maintains and endorses standardized benchmarks. We need to seek something like that," he said.

He took a light shot at the Top500 metric not being the best approach, noting it didn't represent practical workloads today, and added, "You know that your car can go up to 260. But on a normal road, we never do that." Others noted that the Top500, based on Linpack, does at least show you can actually get your system up and running correctly. Moreover, noted Lippert and Schulz, the truth is that the Top500 score is not on the criteria lists they use to evaluate advanced systems procurements.

Opinions on benchmarking varied, but it seems that the flurry of separate benchmark initiatives is likely to continue and remain disparate for now. One issue folks agree on is that quantum technology is moving so fast that it's hard to keep up with, and maybe it's too early to settle on just a few benchmarks. Moreover, benchmarking hybrid quantum-HPC systems becomes even more confusing. All seem to favor use of a suite of benchmarks over a single metric. This is definitely a stay-tuned topic.

Turning to efforts to achieve practical uses, Maniscalco presented two use cases that demonstrate the ability to combine quantum and HPC resources by using classical computing to mitigate errors. Her company, Algorithmiq Ltd, is developing algorithms for use in bioscience. She provided a snapshot of a technique that Algorithmiq has developed to use tensor processing in post-processing on classical systems to mitigate errors on the quantum computer.

"HPC and quantum computers are seen almost as antagonists in the sense that we can use, for example, tensor network methods to simulate quantum systems, and this is, of course, very important for benchmarking," said Maniscalco. "But what we are interested in is bringing these two together, and the quantum-centric supercomputing idea brought forward by IBM is important for us, and what we do is specifically focused on this interface between the quantum computer and the HPC."

"We develop techniques that are able to measure or extract information from the quantum computers in a way that allows [you] to optimize the efficiency in terms of number of measurements. This eventually corresponds to shorter wall-time overhead overall, and also allows [you] to optimize the information that you extract from the quantum computer and, importantly, allows post-processing," she said. (Best to read the associated papers for details.)

At the end of Q&A, moderator Heike Riel asked the panel, "Where will we be in five years?" Here are their brief answers, in the order given:


‘Quantum-inspired’ laser computing is more effective than both supercomputing and quantum computing, startup claims – Livescience.com

Engineers have developed an optical computer, about the size of a desktop PC, that can purportedly execute complex artificial intelligence (AI) calculations in nanoseconds, rivaling the performance of both quantum and classical supercomputers.

The computer, dubbed the LPU100, uses an array of 100 lasers to perform calculations through a process called laser interference, LightSolver representatives said in a March 19 statement.

In this process, an optimization problem that requires solving is encoded onto physical obstacles on the lasers' paths using a device called a programmable spatial light modulator. These obstacles prompt the lasers to adjust their behavior to minimize energy loss, similar to how water naturally finds the easiest route downhill by following the path of least resistance.

By quickly altering their state to minimize energy waste, the lasers achieve a state of minimal energy loss. This directly corresponds to the problem's solution.

The LPU100 then uses conventional cameras to detect and interpret these laser states, translating them into a mathematical solution to the original optimization problem.
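
As a rough classical analogy of "solution = lowest-energy configuration" (not LightSolver's actual algorithm, and using a made-up toy problem), the Python sketch below brute-forces a tiny Ising-style energy function and reads off the minimizing configuration:

```python
from itertools import product

# Toy coupling matrix for a 4-variable Ising-style optimization problem (made up).
J = {(0, 1): 1.0, (1, 2): -2.0, (2, 3): 1.5, (0, 3): -1.0}

def energy(spins):
    """Energy of a configuration; the optimum is the configuration minimizing this."""
    return sum(weight * spins[i] * spins[j] for (i, j), weight in J.items())

best = min(product([-1, 1], repeat=4), key=energy)
print("lowest-energy configuration:", best, "energy:", energy(best))
```

The LPU100 performs the analogous minimization physically, with the laser field settling into its least-lossy state instead of a program enumerating configurations.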

According to the company, the LPU100 can perform complex operations such as vector-matrix multiplications, a demanding computational workload, in just 10 nanoseconds. That is hundreds of times quicker than the fastest graphics processing units (GPUs) can perform the same task.

Bob Sorensen, senior vice president of research and chief analyst for quantum computing at Hyperion Research, said in a statement that LightSolver's technology presented "a low barrier to entry for a wide range of advanced computing users."


Vector matrix multiplication is key to handling complex tasks involving a large number of potential outcomes. One example is the vehicle routing problem, a logistics challenge used in the transportation and delivery sector to determine the most efficient routes for vehicle fleets.

In benchmark tests published by LightSolver, the LPU100 identified the most efficient route for vehicle fleets in less than a tenth of a second, outperforming Gurobi, a commonly used optimization solver, which often failed to find a solution within 10 seconds.

Previous studies published by researchers at Cornell University found that the LPU100 outperformed traditional GPUs in Max-2-SAT challenges, which are used for testing the efficiency of logic-solving algorithms, as well as in the 3-Regular 3-XORSAT problem, a test for evaluating the performance of algorithms used for handling difficult problems that involve sorting through numerous combinations to find the best solution.

While the LPU100 employs what LightSolver dubs "quantum-inspired" technology, it relies on neither qubits nor the laws of quantum mechanics. Instead, it borrows the principle of processing multiple operations simultaneously at very high speeds, something classical computers cannot do.

According to LightSolver, the LPU100's laser array can handle 100 continuous variables, theoretically allowing it to address computational problems involving an astronomically large number of variable combinations (120 to the power of 100).

This makes it particularly well suited for industries such as finance, aerospace, logistics and manufacturing, all of which have resource-intensive data demands, the company said.

Quantum computers require extremely cold temperatures to operate and remain highly experimental, whereas supercomputers typically consume large amounts of energy and need to be housed in purpose-built facilities. By contrast, because the LPU100 lacks electronics, it can operate efficiently at room temperature and maintain a compact size similar to a desktop computer.

It is also built entirely from "well-understood laser technology and commercially available components." This makes it a more practical alternative to resource-intensive quantum computers and supercomputers, LightSolver representatives said.

LightSolver now offers select enterprise customers the ability to use the LPU100 through its cloud platform for problems involving up to 1 million variables.


Researchers at the SQMS Center achieve leading performance in transmon qubits – Fermi National Accelerator Laboratory

Scientists and engineers at the Superconducting Quantum Materials and Systems Center, hosted by the U.S. Department of Energy's Fermi National Accelerator Laboratory, have achieved reproducible improvements in superconducting transmon qubit lifetimes with record values of 0.6 milliseconds. The result was achieved through an innovative materials technique that eliminated a major loss source in the devices.

These results have been published in Nature Partner Journal Quantum Information.

Quantum devices such as qubits are critical for storing and manipulating quantum information. The qubit lifespan, known as coherence time, determines how long data can be stored and processed before an error occurs. This phenomenon, called quantum decoherence, is a key obstacle to operating quantum processors and sensors.

Electron microscopy images show the surface of the various superconducting transmon qubits fabricated at SQMS with the novel encapsulation technique. The qubit with the native niobium oxide is compared to the tantalum and gold capping layers that prevent the re-growth of the niobium oxide. Graphic: SQMS Center, Fermilab

The novel process called surface encapsulation protects key layers of the qubit during fabrication and prevents the formation of problematic, lossy oxides at the surfaces and interfaces of these devices. By carefully investigating and comparing various materials and deposition techniques, SQMS researchers have studied different oxides that lead to longer qubit lifetimes and fewer losses.

"SQMS is pushing the envelope of qubit performance," said Alexander Romanenko, a senior scientist at Fermilab and the SQMS Center's quantum technology thrust leader. "These efforts show that undergoing a systematic review of processes and materials and attacking what matters most first is the key to pushing qubit coherence. Pursuing device fabrication and characterization, hand in hand with materials science, is the right recipe to deepen our scientific understanding of loss mechanisms and improve quantum devices in the future."

Anna Grassellino, Fermilab senior scientist and SQMS Center director, and Akshay Murthy, SQMS Materials Focus area leader and Materials Characterization group leader, apply state-of-the-art characterization techniques in the Materials Science Lab, such as X-ray photoelectron spectroscopy and time-of-flight secondary ion mass spectrometry, to examine the effectiveness of niobium surface capping. Photo: Ryan Postel, Fermilab

There are many types of qubits. These basic building blocks of quantum computers process information differently and potentially faster than classical computers. The longer a qubit can store quantum information, the better its potential for use in a quantum computer.

Since its inception in 2020, the SQMS research team has focused on understanding the source of errors and decoherence in transmon qubits. This type of qubit is patterned on a chip consisting of a metallic niobium layer on top of a substrate, such as silicon or sapphire. Many consider these superconducting qubits to be the most advanced platform for quantum computers. Tech companies in the United States and around the world are also exploring them.

Mustafa Bal, nanofabrication group leader at the Fermilab SQMS division and leader of the SQMS Center national nanofabrication taskforce (left) and graduate student Francesco Crisa hold transmon chips of leading performance they produced at the Pritzker Nanofabrication Facility. Photo: Dan Svoboda, Fermilab

However, scientists must still overcome some challenges before quantum computers can fulfill their promise of solving previously unsolvable problems. Specific properties of the materials used to create these qubits can lead to the decoherence of quantum information. At SQMS, developing a deeper scientific understanding of these properties and loss mitigation strategies is an active area of research.

Shaojiang Zhu, qubit design and simulation group leader at the Fermilab SQMS Division, holds transmon qubits prepared with the surface encapsulation technique ready to be measured at the SQMS Quantum Garage at Fermilab. Photo: Dan Svoboda, Fermilab

SQMS scientists studying the losses in transmon qubits pointed to the niobium surface as the primary culprit. These qubits are fabricated in a vacuum, but when exposed to air, an oxide forms on the surface of niobium. Though this oxide layer is thin, only about 5 nanometers, it is a major source of energy loss and leads to shorter coherence times.

"Our prior measurements indicate that niobium is the best superconductor for these qubits. While the metal losses are near zero, the niobium surface oxide is problematic and the main driver of losses in these circuits," Romanenko said.

SQMS scientists proposed encapsulating the niobium during fabrication so it would never be exposed to air and, therefore, its oxide would not form. While they had a hypothesis on which materials would work best for capping, determining the optimal material required a detailed study. So, they systematically tested this technique with different materials, including aluminum, tantalum, titanium nitride, and gold.

With each capping layer attempt, SQMS scientists analyzed the materials using several advanced characterization techniques at materials science labs at Fermilab, Ames National Laboratory, Northwestern University, and Temple University. Qubit performances were measured inside a dilution refrigerator at the SQMS Quantum Garage at Fermilab. This cryogenic device cools qubits to just a tick above absolute zero. The results demonstrated that the researchers could prepare qubits with a 2- to 5-fold coherence improvement compared with samples prepared without a capping layer (and therefore containing the niobium oxide layer).

The team found that the capping process improved coherence times for all materials explored in the study. Of these materials, tantalum and gold proved to be the most effective for enabling a higher coherence time, with an average of 0.3 milliseconds and maximum values as high as 0.6 milliseconds. These results shed further light on the nature, hierarchy and mechanism of losses in these qubits, which are found to be driven by the presence of amorphous oxides and interfaces.

"When fabricating a qubit, there are many variables, more or less hidden, that can impact performance," said Mustafa Bal, a scientist at Fermilab and head of the SQMS nanofabrication group and task force. "This is a first-of-its-kind study that very carefully compares one material change and one process change at a time, on a chip of a fixed geometry, across different fabrication facilities. This approach ensures that we develop reproducible techniques for improvement in qubit performance."

The teams fabricated and tested qubits in different facilities as part of the SQMS Center's National Nanofabrication Taskforce. Fermilab led the way with the SQMS nanofabrication group headed by Bal, making qubits at the Pritzker Nanofabrication Facility at the University of Chicago. Other facilities included Rigetti Computing, a quantum computing company with a quantum foundry, and the National Institute of Standards and Technology Boulder Laboratories. Both are flagship partners at the SQMS Center. Fabricating the chip at Rigetti's commercial foundry proved that the technique is easily reproducible and scalable for the industry.

"At Rigetti Computing, we want to make the best possible superconducting qubits to make the best possible quantum computers, and extending the lifetimes of qubits in a reproducible way has been one of the hardest problems," said Andrew Bestwick, senior vice president of quantum systems at Rigetti. "These are among the leading transmon coherence times that the field has been able to achieve on a two-dimensional chip. Most importantly, the study has been guided by the scientific understanding of qubit loss, leading to reproducibility across different labs and in our fabrication facility."

Rigetti's Fab-1 is the industry's first dedicated and integrated quantum device manufacturing facility, located in Fremont, California. The qubit surface encapsulation technique was easily reproduced at the Rigetti facilities. Photo: Rigetti Computing

At NIST, scientists are interested in using quantum technology to make fundamental measurements of photons, microwave radiation, and voltage. "This has been a great team effort and a good planting of a flag that shows both how far we have come and the challenges that remain to be faced," said Peter Hopkins, a physicist at NIST who leads the superconductive electronics group and is a lead member of the SQMS Center National Nanofabrication Taskforce.

Following this work, SQMS researchers continue to push the qubit performance frontier further. The next steps include engineering creative and robust nanofabrication solutions for applying this technique to other transmon qubit surfaces, with the goal of eliminating all lossy interfaces present in these devices. The underlying substrate on which these qubits are prepared represents the next major source of losses, and SQMS researchers are already hard at work characterizing and developing better silicon wafers and other lower-loss substrates suitable for quantum applications.

Moreover, SQMS scientists are working to ensure that these coherence gains can be preserved in more complex chip architectures with several interconnected qubits.

Given the breadth of the SQMS Center collaboration, the Center's vision and mission are multifold. The researchers seek to improve the performance of the building blocks of a quantum computer and to apply these innovations in mid-scale prototypes of quantum processors.

At SQMS, two main superconducting quantum computing platforms are under exploration: 2D transmon qubit chip-based and 3D cavity-based architectures. For the chip-based processors, SQMS researchers work hand in hand with industry partners such as Rigetti to advance the performance and scalability of these platforms.

SQMS researchers from Fermilab and Rigetti have now co-developed a 9-qubit processor incorporating these surface encapsulation advances. The chip is being installed in the SQMS Quantum Garage at Fermilab, where its performance will be evaluated and benchmarked in the coming weeks.

This timeline shows a roadmap for the SQMS Center's development of 2D transmon qubit and 3D cavity-based platforms. Graphic: Samantha Koch, Fermilab

For the 3D cavity-based platforms, Fermilab scientists have been working to integrate these qubits with superconducting radio-frequency (SRF) cavities. Scientists initially developed these cavities for particle accelerators, and Fermilab builds on decades of experience in making the world's best SRF cavities, which have demonstrated photon lifetimes of up to 2 seconds. When combined with transmon qubits, these cavities can serve as building blocks of quantum computing platforms, an approach that promises potentially better coherence, scalability and qubit connectivity. To date, Fermilab scientists have achieved up to several milliseconds of coherence in these combined cavity-qubit systems.
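
For a sense of scale, a photon lifetime can be converted into a cavity quality factor via Q = 2πfτ. The snippet below is a back-of-the-envelope sketch, not a figure from the article; the 1.3 GHz frequency is an assumed, representative SRF cavity value.

```python
# Back-of-the-envelope conversion from photon lifetime to quality factor.
# Q = 2*pi*f*tau. The frequency is an assumed, representative SRF value.
import math

frequency_hz = 1.3e9        # assumed cavity frequency (not from the article)
lifetime_s = 2.0            # photon lifetime quoted in the article

quality_factor = 2 * math.pi * frequency_hz * lifetime_s
print(f"Q ~ {quality_factor:.2e}")   # on the order of 10^10
```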

"We know how to make the world's best cavities, but the success of the 3D platforms under construction at Fermilab also heavily depends on how far we can keep pushing the performance of these transmon qubits used to control and manipulate the quantum states in the cavities," said Romanenko. "So, it's kind of two birds with one stone. As we push to advance our transformational 3D technologies, we also work alongside industry to enable important advances in 2D chip-based quantum computing platforms."

The Superconducting Quantum Materials and Systems Center at Fermilab is supported by the DOE Office of Science.

The Superconducting Quantum Materials and Systems Center is one of the five U.S. Department of Energy National Quantum Information Science Research Centers. Led by Fermi National Accelerator Laboratory, SQMS is a collaboration of more than 30 partner institutions, spanning national labs, academia and industry, working together to bring transformational advances in the field of quantum information science. The center leverages Fermilab's expertise in building complex particle accelerators to engineer multiqubit quantum processor platforms based on state-of-the-art qubits and superconducting technologies. Working hand in hand with embedded industry partners, SQMS will build a quantum computer and new quantum sensors at Fermilab, which will open unprecedented computational opportunities. For more information, please visit sqmscenter.fnal.gov.

Fermi National Accelerator Laboratory is America's premier national laboratory for particle physics research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance LLC. Visit Fermilab's website at https://www.fnal.gov and follow us on Twitter @Fermilab.

See the original post here:
Researchers at the SQMS Center achieve leading performance in transmon qubits - Fermi National Accelerator Laboratory


USask partners with PINQ to access Canada’s only IBM Quantum System One – USask News

Scientists at USask are at the forefront of groundbreaking research thanks to a partnership with PINQ (Québec Digital and Quantum Innovation Platform), the sole administrator of Canada's only IBM Quantum System One, a utility-scale quantum computer located at IBM's research facility in Bromont, Quebec.

USask's three-year agreement with PINQ enables faculty and students affiliated with USask's Centre for Quantum Topology and Its Applications (quanTA) to have access to the machine via PINQ's quantum computing platform. This collaboration significantly enhances the existing quantum computing research activities at USask.

IBM Quantum System One is powered by a 127-qubit processor that has achieved utility-scale performance, the point at which quantum computers can serve as scientific tools to explore a new scale of problems that classical systems may never be able to solve. Under ideal circumstances, qubits can be astoundingly powerful compared with the ordinary bits in conventional computers, since each added qubit doubles the size of the state space the machine can explore.
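
The sketch below makes that doubling concrete by tallying how many complex amplitudes a classical simulator would have to track as the qubit count grows; the byte count assumes double-precision complex numbers and is illustrative only.

```python
# Illustrative tally of classical simulation cost: an n-qubit state vector has
# 2**n complex amplitudes. Byte count assumes complex128 (16 bytes/amplitude).
for n_qubits in (1, 10, 50, 127):
    amplitudes = 2 ** n_qubits
    memory_gb = amplitudes * 16 / 1e9
    print(f"{n_qubits:>3} qubits -> {amplitudes:.3e} amplitudes "
          f"(~{memory_gb:.3e} GB to store classically)")
```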

One of the partnership's first projects will be a study of complex health data in children suffering from chronic diseases, including juvenile arthritis. Using patient-derived data, researchers will deploy quantum-enhanced data analysis and machine learning techniques to uncover and understand hidden factors that may lead to such diseases, potentially pointing toward future preventatives and therapies. This work augments what is possible using traditional computing methods.
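
As a purely hypothetical sketch of what a quantum-enhanced machine learning workflow can look like, classical features from a dataset are often encoded as rotation angles in a parameterized circuit (a "quantum feature map") before being run on hardware such as IBM Quantum System One. The example below uses Qiskit (assuming a recent version) and invented feature values; it does not represent the USask team's actual methods.

```python
# Hypothetical sketch: encode four classical features as rotation angles in a
# parameterized circuit, the basic ingredient of many quantum ML workflows.
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

n_features = 4                       # toy example: four numeric features
x = ParameterVector("x", n_features)

qc = QuantumCircuit(n_features)
for i in range(n_features):
    qc.h(i)                          # put each qubit in superposition
    qc.rz(x[i], i)                   # encode feature i as a rotation angle
for i in range(n_features - 1):
    qc.cx(i, i + 1)                  # entangle neighbouring qubits

# Bind concrete (normalized) feature values for one invented data point
bound = qc.assign_parameters([0.1, 0.7, 0.3, 0.9])
print(bound.draw())
```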

The groundbreaking research made possible through this partnership will see USask's quantum scientists working together with many other scientists from diverse fields, further showcasing the interdisciplinary work currently underway at USask.

"Our government is supporting quantum computing capacity in Canada through this unique collaboration between USask and Quantum System One, the next-generation quantum computer at IBMs research facility. Todays investment gives USask access to that system and the computing power that will allow them to tackle difficult problems in key areas like health care, climate sciences, and beyond. This access will lead to exponential growth in research and development, while boosting innovation and further solidifying USask as a scientific centre of excellence.

- The Honourable Dan Vandal, Minister for PrairiesCan

"USask is a leader in quantum computing, and this exciting new partnership allows us to further our influential work in the quantum ecosystem. We are committed to training the next generation of researchers, leaders and changemakers. Access to IBM Quantum System One will be a central component to recruit highly qualified students and build the skills of the next generation of quantum leaders."

- Baljit Singh, Vice President Research, USask

"Over the past 60 years or so, computers have become one of the most important tools in a scientist's back pocket, on par with the microscope. But the time has come where ordinary computing can no longer keep up with the problems that society needs to solve today, such as climate change and accelerated vaccine design. While still in its infancy, quantum computing promises to be the next indispensable tool in science. Some of the first real-world use cases for this technology will be developed right here at USask, thanks to this one-of-a-kind partnership with IBM and PINQ and owing to the strong interdisciplinary culture on our campus."

- Dr. Steven Rayan, director of USask's quanTA Centre and lead of USask's Quantum Innovation Signature Area of Research

"We are delighted to collaborate with USask, granting their researchers access to one of the world's most powerful quantum computers. This partnership promises groundbreaking research and innovation, and we eagerly anticipate the outcomes arising from this collaboration. Our mission is to facilitate accelerated digital transformation for organizations and empower individuals in utilizing the capabilities of quantum computing. This partnership exemplifies our commitment to achieving that goal."

- Éric Capelle, Managing Director, PINQ (Québec Digital and Quantum Innovation Platform)

Excerpt from:
USask partners with PINQ to access Canada's only IBM Quantum System One - USask News


JUDAS PRIEST’s IAN HILL Weighs In On Use Of Artificial Intelligence In Music – BLABBERMOUTH.NET

In a new interview with Elena Rosberg of Radiocast BG, JUDAS PRIEST bassist Ian Hill was asked how he thinks heavy metal music and the metal community can combat the negative effects of artificial intelligence in music, particularly as it relates to the creative process. Hill responded (as transcribed by BLABBERMOUTH.NET): "I don't know. You'd know, wouldn't you? If something was put together, they're not gonna be able to play the instruments for a start, so they're gonna have to use some kind of music stem, some sort of source. And you'd know. I'd know, I'd know if something was put together. You can listen to... I don't know... even now, songs, pop songs in particular, and you think, 'That's not a bass guitar. Some guy's playing that on a keyboard.' You know it. It might fool some of the public, because, obviously, I go into things a little bit more deeply. I drive my wife mad. She won't let me anywhere near... If she's going to a concert, I don't go, basically because I start picking it apart. But that's what we do. A welder would do the same thing. He'd look at something [and go], 'That's a load of rubbish.' That's what you do."

He continued: "But I don't know. I just think that artificial intelligence can't really perform live. I mean, this is what it's coming down to. A lot of music, especially in the pop world these days, is a little bit on the false side... people mime to it and what have you. And A.I. can't even do that. You can't have artificial intelligence standing on stage. That ain't gonna work. So, from a recording point of view, yeah, they might fool people... they might fool a hell of a lot of people... but, actually, when they say there's a band playing live, that's gonna be the acid test, isn't it? And I can't see, really, unless they're all holograms standing up there... which has been done. What am I saying? ABBA have just done it, haven't they? But it's there. It is advertised. You know it ain't ABBA. It's trickery. But it's in the live performance where it'll fall down and it won't stand up to scrutiny, I don't think."

JUDAS PRIEST kicked off the U.S. leg of the "Invincible Shield" world tour on April 18 at Toyota Oakdale Theatre in Wallingford, Connecticut.

Hill is the sole remaining original member of PRIEST, which formed in 1969. Singer Rob Halford joined the group in 1973 and guitarist Glenn Tipton signed on in 1974. Rob left PRIEST in the early 1990s to form his own band, then came back to PRIEST in 2003. Original guitarist K.K. Downing parted ways with the band in 2011, and was replaced by Richie Faulkner.

PRIEST's latest album, "Invincible Shield", entered the U.K. chart at No. 2, just behind Ariana Grande's "Eternal Sunshine".

Prior to "Invincible Shield"'s arrival, PRIEST's highest U.K. chart achievement was with 1980's "British Steel", which reached No. 4.

PRIEST's 2018 album "Firepower" entered the chart at No. 5.

"Invincible Shield" was JUDAS PRIEST's fifth Top 10 album, after the aforementioned "British Steel" and "Firepower", as well as 2014's "Redeemer Of Souls" (No. 6) and the 1979 live album "Unleashed In The East" (No. 10).

"Invincible Shield" landed at No. 1 in Germany, Finland, Sweden and Switzerland, as well as No. 5 in France, No. 8 in Italy and No. 16 in Australia.

Read more from the original source:
JUDAS PRIEST's IAN HILL Weighs In On Use Of Artificial Intelligence In Music - BLABBERMOUTH.NET


Study including WVU and Marshall analyzes cyber threats to Artificial Intelligence systems – West Virginia MetroNews

MORGANTOWN, W.Va. – Researchers from West Virginia University, Marshall University, and Florida International University are exploring the cybersecurity needs of artificial intelligence technologies with a $1.75 million grant from the Defense Advanced Research Projects Agency (DARPA).

Anurag Srivastava, professor and chair of the Lane Department of Computer Science and Electrical Engineering in the Statler College of Engineering and Mineral Resources, said the AI-CRAFT project is intended to develop ways to secure the emerging technology, as artificial intelligence is developing rapidly and being pushed into larger real-time applications.

"Our goal is to look at what those are, and how do I defend myself if someone is trying to hack into or make the AI behave in a way it is not supposed to behave?" Srivastava said.

The teams are building artificial intelligence systems while engineering security and safety into them as they are deployed. The complexity stems from the evolution from the simple calculation systems used a decade ago to today's models, which are fed millions of data points as AI systems are taught to think like the human brain.

"Look at this from a new point of view now; what is the attack vector now?" Srivastava asked. "Can someone reverse engineer the AI? Can someone poison the AI so it behaves in a way it should not?"
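
To make the poisoning threat concrete, the toy sketch below flips a growing fraction of training labels for a simple scikit-learn classifier and reports the resulting drop in test accuracy. It illustrates the general attack class only; it is not the AI-CRAFT project's method, and all names and numbers are invented.

```python
# Toy illustration of data poisoning via label flipping on a synthetic dataset.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_fraction in (0.0, 0.1, 0.3, 0.45):
    y_poisoned = y_train.copy()
    n_flip = int(poison_fraction * len(y_poisoned))
    flip_idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]   # flip 0 <-> 1 labels

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"poisoned fraction {poison_fraction:.0%}: test accuracy {acc:.3f}")
```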

AI technologies have quickly grown from applications in autonomous cars to use in very lifelike robots and vital systems such as public utilities. The research also includes developing secure data practices, access controls and continuous monitoring to assess the security and usefulness of AI systems.

"Especially those that will be used to operate critical systems like robots or the power grid," Srivastava said.

On the academic side, students will have many hands-on opportunities in labs and on training platforms designed to equip them with the skills and knowledge needed to thrive in a rapidly evolving cybersecurity industry.

"Other than solving this problem for complex systems like a power grid, robotics, or autonomous cars, our goal is also to teach it because this is also a new topic," Srivastava said.

Officials from WVU, Marshall University, and the U.S. Department of Defense will break ground on the new Institute for Cyber Security in Huntington on May 17.

Go here to read the rest:
Study including WVU and Marshall analyzes cyber threats to Artificial Intelligence systems - West Virginia MetroNews
