
Noisy Quantum Computers Could Be Good for Chemistry Problems …

Scientists and researchers have long extolled the extraordinary potential of universal quantum computers, which could simulate physical and natural processes or break cryptographic codes in practical time frames. Yet a crucial development in the technology, the ability to fabricate the necessary number of high-quality qubits (the basic units of quantum information) and gates (elementary operations between qubits), is most likely still decades away.

However, there is a class of quantum devices, ones that currently exist, that could address otherwise intractable problems much sooner than that. These near-term quantum devices, dubbed Noisy Intermediate-Scale Quantum (NISQ) by Caltech professor John Preskill, are single-purpose, highly imperfect, and modestly sized.

Dr. Anton Toutov is the cofounder and chief science officer of Fuzionaire and holds a PhD in organic chemistry from Caltech. You can follow him at @AntonToutov.

Dr. Prineha Narang is an assistant professor of computational materials science at the John A. Paulson School of Engineering and Applied Sciences at Harvard University. You can follow her @NarangLab.

As the name implies, NISQ devices are noisy, meaning that the results of calculations have errors, which in some cases can overwhelm any useful signal.

Why is a noisy, single-purpose, 50- to few-hundred-qubit quantum device exciting, and what can we do with it in the next five to 10 years? NISQs provide the near-term possibility of simulating systems that are so mathematically complex that conventional computers cannot practically be used. And chemical systems definitely fit that bill. In fact, chemistry could be a perfect fit for NISQ computation, especially because errors in molecular simulations may translate into physical features.

To understand this, it's valuable to consider what noise is and how it occurs. Noise arises because physical and natural systems do not exist in isolation; they are part of a larger environment containing many particles, each of which is moving in a different (and unknown) direction. In chemical reactions and materials, this randomness creates thermal fluctuations. In measurement and computing, it is referred to as noise, which manifests itself as errors in calculations. NISQ devices themselves are very sensitive to their external environment, and noise is already naturally present in qubit operations. For many applications of quantum devices, such as cryptography, this noise can be a tremendous limitation and lead to unacceptable levels of error.

However, for chemistry simulations, the noise would be representative of the physical environment in which both the chemical system (e.g., a molecule) and the quantum device exist. This means that NISQ simulation of a molecule will be noisy, but this noise actually tells you something valuable about how the molecule is behaving in its natural environment.

With errors as features, we may not need to wait until qubits are hyperprecise in order to start simulating chemistry with quantum devices.

Perhaps the most immediate application for near-term quantum computers is the discovery of new materials for electronics. In practice, however, this research is often done with little or no computer-based optimization and design. This is because it is too hard to simulate these materials using classical computers (except in very idealized scenarios, such as when there is only a single electron moving in the whole material). The difficulty comes from the fact that the electrical properties of materials are governed by the laws of quantum physics, which contain equations that are extremely hard to solve. A quantum computer doesn't have this problem: by definition, the qubits already know how to follow the laws of quantum physics. The application of NISQs to the discovery of electronic materials is therefore an important research direction in the Narang lab.

What is special about electronic materials is that they are usually crystalline, meaning that atoms are laid out in an organized, repeating pattern. Because the material looks the same everywhere, we don't need to keep track of all the atoms, only a few representative ones. This means that even a computer with a modest number of qubits may be able to simulate some of these systems, opening up opportunities for highly efficient solar panels, faster computers, and more sensitive thermal cameras.

Chemical research has been going on for centuries, yet new chemistry is most typically discovered by intuition and experimentation. An application of quantum devices in which we are particularly interested at Fuzionaire is the simulation of chemical processes and catalysts, which are substances that accelerate chemical reactions in remarkable ways. Catalysts are at the heart of the entire chemical industry and are relied on each day in the production of medicines, materials, cosmetics, fragrances, fuels, and other products. Significant challenges exist, but this area is a very important opportunity for NISQ devices in the next five to 10 years.

For example, the Haber-Bosch synthesis (HB) is an industrial chemical process that turns hydrogen (H2) and nitrogen (N2) into ammonia (NH3). HB makes it possible to produce enough ammonia-based fertilizer to feed the world, but the process is energy-intensive, consuming approximately 1 to 2 percent of global energy and generating about 3 percent of total global CO2 emissions.

At the heart of the entire process is a catalyst based on iron, which is only active at high temperatures and without which the process fails. Scientists have been trying to discover new catalysts for HB that would make the chemistry more efficient, less energy-intensive, and less environmentally damaging. However, the catalyst discovery and testing process is challenging, painstaking, and costly. Despite many decades of tremendous effort by chemists and engineers, the iron catalyst discovered over 100 years ago remains the industrial state-of-the-art.

Near-term NISQ systems would be used to give chemists unprecedented insights into the inner workings of the current iron catalyst in its physical environment and would be applied to simulate novel, viable catalyst architectures, including those based on elements other than iron.

Biological systems are extraordinarily complex, which makes modeling and simulation very challenging. Prediction of biological molecules and biochemical interactions with conventional computers, especially in biologically relevant environments, becomes difficult or impossible. This forces even basic, earliest-stage biomedical research to be done by working with chemicals, cells, and animals in a lab and hoping for reproducible conditions between experiments and organisms. This is why drug discovery, a vital area of biomedical innovation that encompasses both chemistry and biology, is such a tantalizing opportunity for NISQ intervention.

Developing new medicines for cancer, neurodegenerative diseases, viruses, diabetes, and heart disease is one of the most important activities within the entire chemistry enterprise. However, the current reality is that bringing a new drug to market continues to be slow and costly, to the tune of about 10 to 15 years and more than $2 billion, by some estimates.

A central challenge within the drug discovery process is to identify a biological target that has relevance to human disease and to design molecules that could inhibit that target with the hope that this would treat the disease. Quantum devices could be used to simulate common biological targets such as kinases, G-protein-coupled receptors (GPCRs), and nuclear receptors in their dynamic environments and in complex with inhibitor molecules. These simulations would enable drug discovery scientists to identify potentially active molecules early in the process and discard non-actives from consideration. The most promising drug candidate molecules would then be synthesized and promoted to biological studies (e.g., pharmacology, toxicology) in the laboratory.

While there are great opportunities for near-term quantum devices and much hope for improved systems in the future, we must not get carried away. Research will need to solve significant challenges, including creating systems with many more qubits, improving qubit performance, and developing coding languages for quantum computers, among others.

Nevertheless, there are great reasons to be optimistic as we look forward to the next five to 10 years. Significant resources are being committed by large companies like IBM, Google, and Microsoft to quantum computing efforts; healthy investment is flowing into quantum hardware startup companies like Rigetti, D-Wave, IonQ, and others; and important academic results are being reported using current or near-term quantum devices, including solving lattice protein folding problems, predicting the optical response of exotic materials, investigating the mechanism of nitrogen fixation by nitrogenase, and many others.

As a chemist and a physicist, we're excited about the current capabilities and optimistic about the utility of near-term quantum devices. We're hopeful that these systems will provide the scientific community with new insights that accelerate discovery and help us solve problems to improve the human condition.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Submit an op-ed at opinion@wired.com.


What is a Quantum Computer? – Definition from Techopedia

One of the basic characteristics of quantum computing relates to the units used for data manipulation. In a conventional computer, these units are bits, which take binary values. In quantum computing and quantum computer models, the basic units are qubits, which can hold a zero, a one, or a quantum superposition of both. The problem of physically representing and storing these qubits is one of the essential barriers to practical quantum computer design.

Another characteristic of quantum computers relates to command structures. A traditional and linear computer has only one command for a given state; this command is described as deterministic. Models like the nondeterministic Turing machine (NTM) provide more than one possible command response to a given state. This is a fundamental aspect of quantum computer design.

In general, quantum computers use concepts such as entanglement to enhance the structure of these basic models, extending them from individual qubits to larger nondeterministic designs that apply quantum mechanics to a computing model.


What Is a Quantum Computer? | JSTOR Daily

For decades, we have been hearing about the incredible potential of quantum computers. Now, researchers claim to have turned back time inside a quantum machine. These devices, which currently exist only as prototypes, have the potential to be much faster than any current computer. But what is a quantum computer?

Rather than microchips and circuits, quantum computing relies on the principles of quantum mechanics. In particular, it relies on so-called quantum entanglement, the ability of one subatomic particle to influence another subatomic particle some distance away. The influence is virtually instantaneous, hence the potential for computing speed.

Robert F. Service writes in Science that quantum computers, just like regular computers, store information as 0s and 1s, known as bits. A principle of quantum mechanics, however, is that subatomic particles exist in all possible conditions, or states, simultaneously. The particle only settles on a state once it is observed. Since the particles transmitting the information have multiple states at once, quantum bits (qubits) can be both partially 0 and partially 1 simultaneously. These weird hybrid bits can take on any percentage value between 0 and 1 at any time. Effectively, a quantum computer calculates all possible outcomes of a calculation at once. With such power, it won't take many bits for a quantum computer to be able to calculate just about anything.

A few problems remain. For one thing, the multiple states and entanglement between particles are fragile, so the qubits can easily fall apart in a process called decoherence. To guard against decoherence, extra qubits are needed as a backup. According to science reporter Charles Q. Choi in ASEE Prism, the physics of quantum computing is well understood. The issue is actually building a machine that can manipulate not just qubits but multiple qubits connected by networks of circuits called quantum logic gates. Engineers have tried a few different approaches, including using lasers, ionized particles trapped by magnetic fields, and superconductors. Despite years of trying, however, nobody has built a quantum computer in which qubits have lasted longer than a second or two. Stringing qubits together, or connecting them to a regular computer that can process the information into a usable output, has also proven difficult.

Despite the engineering challenges, some are confident that a working quantum computer will indeed be reality. Given the high cost and energy requirements, however, it is unlikely that quantum computers are coming soon to a desktop near you.

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

By: Robert F. Service

Science, New Series, Vol. 292, No. 5526 (Jun. 29, 2001), pp. 2412-2413

American Association for the Advancement of Science

By: Charles Q. Choi

ASEE Prism, Vol. 26, No. 5 (January 2017), pp. 22-28

American Society for Engineering Education


Measuring Quantum Computer Power With IBM Quantum Volume …

If you can't measure it, you can't improve it. Quantum computers have the potential to be vastly more powerful than regular computers, so IBM created the Quantum Volume metric to measure their power, integrating all of the factors that affect processing capability. IBM recently updated the metric from an earlier definition.

The single-number metric, quantum volume, can be measured using a concrete protocol on near-term quantum computers of modest size (fewer than 50 qubits); measured on several state-of-the-art transmon devices, it reaches values as high as 8. The quantum volume is linked to system error rates and is empirically reduced by uncontrolled interactions within the system. It quantifies the largest random circuit of equal width and depth that the computer successfully implements. Quantum computing systems with high-fidelity operations, high connectivity, large calibrated gate sets, and circuit-rewriting toolchains are expected to have higher quantum volumes. The quantum volume is a pragmatic way to measure and compare progress toward improved system-wide gate error rates for near-term quantum computation and error-correction experiments. It is architecture-independent and can be applied to any system that is capable of running quantum circuits. IBM implemented the metric on several IBM Q devices, finding a quantum volume as high as 8, and conjectures that systems with higher connectivity will have higher quantum volume, given otherwise similar performance parameters.

From numerical simulations for a given connectivity, IBM found two possible paths to increasing the quantum volume. Although all operations must improve, the first path is to prioritize improving gate fidelity above other operations, such as measurement and initialization. This sets the roadmap for device performance to focus on the errors that limit gate performance, such as coherence and calibration errors. The second path stems from the observation that, for these devices and this metric, circuit optimization is becoming important. IBM implemented various circuit-optimization passes (far from optimal) and showed a measurable change in experimental performance, and it introduced an approximate optimization method for NISQ devices, using it to demonstrate experimental improvements. IBM has determined that its quantum devices are close to being fundamentally limited by coherence times, which for IBM Q System One average 73 microseconds.

SOURCES: IBM Research; Arxiv, "Validating quantum computers using randomized model circuits." Written by Brian Wang.
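The definition above can be sketched in a few lines. This is a minimal illustration, not IBM's actual protocol: it assumes the heavy-output pass/fail results for square circuits are already known, and the pass/fail data shown is hypothetical.

```python
# Minimal sketch: quantum volume = 2**n for the largest square
# (width = depth = n) random circuit the device runs successfully.
# The dictionary of results is hypothetical stand-in data.

def quantum_volume(passed):
    """passed: dict mapping circuit size n to True if the n x n test passed."""
    achieved = [n for n, ok in passed.items() if ok]
    return 2 ** max(achieved) if achieved else 1

# A device that clears square circuits up to size 3 reports QV = 8,
# matching the "as high as 8" figure reported for IBM Q devices.
print(quantum_volume({1: True, 2: True, 3: True, 4: False}))  # -> 8
```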


Explainer: What is a quantum computer …

A quantum computer harnesses some of the almost-mystical phenomena of quantum mechanics to deliver huge leaps forward in processing power. Quantum machines promise to outstrip even the most capable of today's, and tomorrow's, supercomputers.

They won't wipe out conventional computers, though. Using a classical machine will still be the easiest and most economical solution for tackling most problems. But quantum computers promise to power exciting advances in various fields, from materials science to pharmaceuticals research. Companies are already experimenting with them to develop things like lighter and more powerful batteries for electric cars, and to help create novel drugs.

The secret to a quantum computer's power lies in its ability to generate and manipulate quantum bits, or qubits.

Today's computers use bits: streams of electrical or optical pulses representing 1s or 0s. Everything from your tweets and e-mails to your iTunes songs and YouTube videos is essentially a long string of these binary digits.

Quantum computers, on the other hand, use qubits, which are typically subatomic particles such as electrons or photons. Generating and managing qubits is a scientific and engineering challenge. Some companies, such as IBM, Google, and Rigetti Computing, use superconducting circuits cooled to temperatures colder than deep space. Others, like IonQ, trap individual atoms in electromagnetic fields on a silicon chip in ultra-high-vacuum chambers. In both cases, the goal is to isolate the qubits in a controlled quantum state.

Qubits have some quirky quantum properties that mean a connected group of them can provide way more processing power than the same number of binary bits. One of those properties is known as superposition and another is called entanglement.

Qubits can represent numerous possible combinations of 1 and 0 at the same time. This ability to be in multiple states simultaneously is called superposition. To put qubits into superposition, researchers manipulate them using precision lasers or microwave beams.

Thanks to this counterintuitive phenomenon, a quantum computer with several qubits in superposition can crunch through a vast number of potential outcomes simultaneously. The final result of a calculation emerges only once the qubits are measured, which immediately causes their quantum state to collapse to either 1 or 0.
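Superposition and measurement can be illustrated with a toy state-vector calculation in plain NumPy (this is an illustrative sketch, not any real quantum SDK): a Hadamard gate puts a qubit starting in state 0 into an equal superposition, and the Born rule (squared amplitude) gives the odds of measuring 0 or 1.

```python
import numpy as np

# Toy one-qubit simulation: Hadamard gate on |0> gives an equal
# superposition; measurement would collapse it to 0 or 1 at these odds.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])                   # qubit starting in state |0>

state = H @ ket0              # equal superposition of 0 and 1
probs = np.abs(state) ** 2    # Born rule: |amplitude|**2 per outcome

print(probs)  # -> [0.5 0.5]: a fair coin until measured
```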

Researchers can generate pairs of qubits that are entangled, which means the two members of a pair exist in a single quantum state. Changing the state of one of the qubits will instantaneously change the state of the other one in a predictable way. This happens even if they are separated by very long distances.
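The perfect correlation of an entangled pair can also be seen numerically. The sketch below (plain NumPy, not a real device API) builds the Bell state with amplitude split evenly between the outcomes 00 and 11, so measuring one qubit fixes what the other will show.

```python
import numpy as np

# Toy two-qubit Bell state (|00> + |11>)/sqrt(2): only the outcomes
# 00 and 11 carry any probability, so the qubits' results always match.

bell = np.zeros(4)                  # amplitudes for outcomes 00, 01, 10, 11
bell[0] = bell[3] = 1 / np.sqrt(2)  # weight on |00> and |11> only

probs = bell ** 2                   # Born rule: outcome probabilities
for outcome, p in zip(["00", "01", "10", "11"], probs):
    print(outcome, p)
# 00 and 11 each occur with probability 1/2; 01 and 10 never occur
```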

Nobody really knows quite how or why entanglement works. It even baffled Einstein, who famously described it as "spooky action at a distance." But it's key to the power of quantum computers. In a conventional computer, doubling the number of bits doubles its processing power. But thanks to entanglement, adding extra qubits to a quantum machine produces an exponential increase in its number-crunching ability.

Quantum computers harness entangled qubits in a kind of quantum daisy chain to work their magic. The machines' ability to speed up calculations using specially designed quantum algorithms is why there's so much buzz about their potential.

That's the good news. The bad news is that quantum machines are way more error-prone than classical computers because of decoherence.

The interaction of qubits with their environment in ways that cause their quantum behavior to decay and ultimately disappear is called decoherence. Their quantum state is extremely fragile. The slightest vibration or change in temperature (disturbances known as noise in quantum-speak) can cause them to tumble out of superposition before their job has been properly done. That's why researchers do their best to protect qubits from the outside world in those supercooled fridges and vacuum chambers.

But despite their efforts, noise still causes lots of errors to creep into calculations. Smart quantum algorithms can compensate for some of these, and adding more qubits also helps. However, it will likely take thousands of standard qubits to create a single, highly reliable one, known as a logical qubit. This will sap a lot of a quantum computer's computational capacity.

And there's the rub: so far, researchers haven't been able to generate more than 128 standard qubits. So we're still many years away from getting quantum computers that will be broadly useful.

That hasn't dented pioneers' hopes of being the first to demonstrate "quantum supremacy."

It's the point at which a quantum computer can complete a mathematical calculation that is demonstrably beyond the reach of even the most powerful supercomputer.

It's still unclear exactly how many qubits will be needed to achieve this, because researchers keep finding new algorithms to boost the performance of classical machines, and supercomputing hardware keeps getting better. But researchers and companies are working hard to claim the title, running tests against some of the world's most powerful supercomputers.

There's plenty of debate in the research world about just how significant achieving this milestone will be. Rather than wait for supremacy to be declared, companies are already starting to experiment with quantum computers made by companies like IBM, Rigetti, and D-Wave, a Canadian firm. Chinese firms like Alibaba are also offering access to quantum machines. Some businesses are buying quantum computers, while others are using ones made available through cloud computing services.

One of the most promising applications of quantum computers is simulating the behavior of matter down to the molecular level. Auto manufacturers like Volkswagen and Daimler are using quantum computers to simulate the chemical composition of electric-vehicle batteries to help find new ways to improve their performance. And pharmaceutical companies are leveraging them to analyze and compare compounds that could lead to the creation of new drugs.

The machines are also great for optimization problems because they can crunch through vast numbers of potential solutions extremely fast. Airbus, for instance, is using them to help calculate the most fuel-efficient ascent and descent paths for aircraft. And Volkswagen has unveiled a service that calculates the optimal routes for buses and taxis in cities in order to minimize congestion. Some researchers also think the machines could be used to accelerate artificial intelligence.

It could take quite a few years for quantum computers to achieve their full potential. Universities and businesses working on them face a shortage of skilled researchers in the field and a lack of suppliers of some key components. But if these exotic new computing machines live up to their promise, they could transform entire industries and turbocharge global innovation.


What Can We Do with a Quantum Computer? | Institute for …

When I was in middle school, I read a popular book about programming in BASIC (which was the most popular programming language for beginners at that time). But it was 1986, and we did not have computers at home or school yet. So, I could only write computer programs on paper, without being able to try them on an actual computer.

Surprisingly, I am now doing something similar: I am studying how to solve problems on a quantum computer. We do not yet have a fully functional quantum computer, but I am trying to figure out what quantum computers will be able to do when we build them.

The story of quantum computers begins in 1981 with Richard Feynman, probably the most famous physicist of his time. At a conference on physics and computation at the Massachusetts Institute of Technology, Feynman asked the question: can we simulate physics on a computer?

The answer was: not exactly. Or, more precisely: not all of physics. One branch of physics is quantum mechanics, which studies the laws of nature on the scale of individual atoms and particles. If we try to simulate quantum mechanics on a computer, we run into a fundamental problem: the full description of quantum physics has so many variables that we cannot keep track of all of them on a computer.

If one particle can be described by two variables, then to describe the most general state of n particles, we need 2^n variables. If we have 100 particles, we need 2^100 variables, which is roughly 1 followed by 30 zeros. This number is so big that computers will never have that much memory.
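The scale of this blow-up is easy to check directly. A short illustration (the 16-bytes-per-amplitude figure assumes two 64-bit floats per complex number):

```python
# How fast the classical description of an n-particle quantum state grows:
# the most general state needs 2**n complex amplitudes.

def amplitudes_needed(n):
    """Number of complex amplitudes in a general n-particle state."""
    return 2 ** n

for n in (10, 30, 100):
    count = amplitudes_needed(n)
    # 16 bytes per amplitude (two 64-bit floats); purely illustrative
    print(f"{n:3d} particles -> ~{float(count):.1e} amplitudes, "
          f"~{float(count) * 16:.1e} bytes")
```

At 100 particles the byte count dwarfs all storage ever built, which is the point of Feynman's observation.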

By itself, this problem was nothing new; many physicists already knew that. But Feynman took it one step further. He asked whether we could turn the problem into something positive: if we cannot simulate quantum physics on a computer, maybe we can build a quantum mechanical computer, which would be better than ordinary computers?

This question was asked by the most famous physicist of the time. Yet, over the next few years, almost nothing happened. The idea of quantum computers was so new and so unusual that nobody knew how to start thinking about it.

But Feynman kept telling his ideas to others, again and again. He managed to inspire a small number of people who started thinking: what would a quantum computer look like? And what would it be able to do?

Quantum mechanics, the basis for quantum computers, emerged from attempts to understand the nature of matter and light. At the end of the nineteenth century, one of the big puzzles of physics was color.

The color of an object is determined by the color of the light that it absorbs and the color of the light that it reflects. On an atomic level, we have electrons rotating around the nucleus of an atom. An electron can absorb a particle of light (photon), and this causes the electron to jump to a different orbit around the nucleus.

In the nineteenth century, experiments with heated gases showed that each type of atom only absorbs and emits light of certain specific frequencies. For example, visible light emitted by hydrogen atoms consists of only four specific colors. The big question was: how can we explain that?

Physicists spent decades looking for formulas that would predict the color of the light emitted by various atoms and models that would explain it. Eventually, this puzzle was solved by Danish physicist Niels Bohr in 1913 when he postulated that atoms and particles behave according to physical laws that are quite different from what we see on a macroscopic scale. (In 1922, Bohr, who would become a frequent Member at the Institute, was awarded a Nobel Prize for this discovery.)

To understand the difference, we can contrast Earth (which orbits the Sun) and an electron (which orbits the nucleus of an atom). Earth can be at any distance from the Sun: physical laws do not prohibit its orbit from being a hundred meters closer to the Sun or a hundred meters farther. In contrast, Bohr's model only allows electrons to be in certain orbits, and not between those orbits. Because of this, electrons can only absorb light of colors that correspond to the difference between two valid orbits.

Around the same time, other puzzles about matter and light were solved by postulating that atoms and particles behave differently from macroscopic objects. Eventually, this led to the theory of quantum mechanics, which explains all of those differences using a small number of basic principles.

Quantum mechanics has been an object of much debate. Bohr himself said, Anyone not shocked by quantum mechanics has not yet understood it. Albert Einstein believed that quantum mechanics should not be correct. And, even today, popular lectures on quantum mechanics often emphasize the strangeness of quantum mechanics as one of the main points.

But I have a different opinion. The path by which quantum mechanics was discovered was twisted and complicated, but the end result of that path, the basic principles of quantum mechanics, is quite simple. There are a few things that are different from classical physics, and one has to accept those. But once you accept them, quantum mechanics is simple and natural. Essentially, one can think of quantum mechanics as a generalization of probability theory in which probabilities can be negative.

In recent decades, research in quantum mechanics has been moving into a new stage. Earlier, the goal of researchers was to understand the laws of nature that quantum systems obey. In many situations, this has been successfully achieved. The new goal is to manipulate and control quantum systems so that they behave in a prescribed way.

This brings the spirit of the research closer to computer science. Alan Kay, a distinguished computer scientist, once characterized the difference between natural sciences and computer science in the following way: in the natural sciences, Nature has given us the world, and we just discover its laws; with computers, we can put the laws in ourselves and create the world. Experiments in quantum physics are now creating artificial physical systems that obey the laws of quantum mechanics but do not exist in nature under normal conditions.

An example of such an artificial quantum system is a quantum computer. A quantum computer encodes information into quantum states and computes by performing quantum operations on it.

There are several tasks for which a quantum computer will be useful. The one that is mentioned most frequently is that quantum computers will be able to read secret messages communicated over the internet using the current technologies (such as RSA, Diffie-Hellman, and other cryptographic protocols that are based on the hardness of number-theoretic problems like factoring and discrete logarithm). But there are many other fascinating applications.

First of all, if we have a quantum computer, it will be useful to scientists for conducting virtual experiments. Quantum computing started with Feynman's observation that quantum systems are hard to model on a conventional computer. If we had a quantum computer, we could use it to model quantum systems. (This is known as quantum simulation.) For example, we could model the behavior of atoms and particles at unusual conditions (say, the very high energies that can only be created in the Large Hadron Collider) without actually creating those conditions. Or we could model chemical reactions, because the interactions among atoms in a chemical reaction are a quantum process.

Another use of quantum computers is searching huge amounts of data. Let's say that we have a large phone book, ordered alphabetically by name (and not by phone number). If we wanted to find the person who has the phone number 6097348000, we would have to go through the whole phone book and look at every entry. For a phone book with one million phone numbers, this could take one million steps. In 1996, Lov Grover of Bell Labs discovered that a quantum computer would be able to do the same task with one thousand steps instead of one million.
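The million-versus-thousand comparison is just the square-root relationship at work. A back-of-the-envelope sketch (order-of-magnitude only; constants are ignored):

```python
import math

# Grover's quadratic speedup: classical unstructured search over N items
# needs ~N lookups in the worst case; Grover's algorithm needs ~sqrt(N)
# quantum queries.

def classical_steps(n_items):
    return n_items              # worst case: inspect every entry

def grover_steps(n_items):
    return math.isqrt(n_items)  # ~sqrt(N) queries, order of magnitude

N = 1_000_000
print(classical_steps(N), grover_steps(N))  # -> 1000000 1000
```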

More generally, quantum computers would be useful whenever we have to find something in a large amount of data: a needle in a haystackwhether this is the right phone number or something completely different.

Another example is finding two equal numbers in a large amount of data. Again, if we have one million numbers, a classical computer might have to look at all of them and take one million steps. We discovered that a quantum computer could do it in a substantially smaller amount of time.

All of these achievements of quantum computing are based on the same effects of quantum mechanics. On a high level, these are known as quantum parallelism and quantum interference.

A conventional computer processes information by encoding it into 0s and 1s. If we have a sequence of thirty 0s and 1s, it has about one billion possible values. However, a classical computer can only be in one of these one billion states at a time. A quantum computer can be in a quantum combination of all of those states, called superposition. This allows it to perform one billion or more copies of a computation at the same time. In a way, this is similar to a parallel computer with one billion processors performing different computations at the same time, with one crucial difference. For a parallel computer, we need to have one billion different processors. In a quantum computer, all one billion computations will be running on the same hardware. This is known as quantum parallelism.

The result of this process is a quantum state that encodes the results of one billion computations. The challenge for a person who designs algorithms for a quantum computer (such as myself) is: how do we access these billion results? If we measured this quantum state, we would get just one of the results. All of the other 999,999,999 results would disappear.
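The gap between "encodes a billion results" and "measurement returns just one" shows up even in a tiny classical simulation of a quantum register (a sketch using 3 qubits rather than 30, since 30 qubits would already require 2^30 amplitudes):

```python
import numpy as np

n = 3                                    # qubits; a 30-qubit register needs 2**30 amplitudes
dim = 2 ** n
state = np.full(dim, 1 / np.sqrt(dim))   # uniform superposition over all 2**n basis states

# Measurement returns just ONE basis state, sampled with probability |amplitude|^2;
# the other 2**n - 1 amplitudes are gone afterwards.
probs = np.abs(state) ** 2
outcome = int(np.random.default_rng(0).choice(dim, p=probs))
print(f"{dim} amplitudes, but one measurement yields only: {outcome:0{n}b}")
```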

To solve this problem, one uses the second effect, quantum interference. Consider a process that can arrive at the same outcome in several different ways. In the non-quantum world, if there are two possible paths toward one result and each path is taken with a probability of 1/4, the overall probability of obtaining this result is 1/4 + 1/4 = 1/2. Quantumly, the two paths can interfere, increasing the probability of success to 1.
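The contrast between adding probabilities and adding amplitudes can be checked with a few lines of arithmetic (assuming, for illustration, two paths each carrying amplitude 1/2, i.e. a classical probability of 1/4 per path):

```python
amp = 0.5  # amplitude of each of the two paths (probability 1/4 per path)

# Classical: the probabilities of the two paths simply add.
p_classical = amp**2 + amp**2          # 1/4 + 1/4 = 1/2

# Quantum: the AMPLITUDES add first, and only then get squared.
p_constructive = (amp + amp) ** 2      # constructive interference: probability 1
p_destructive = (amp - amp) ** 2       # opposite phases cancel: probability 0

print(p_classical, p_constructive, p_destructive)
```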

Quantum algorithms combine these two effects. Quantum parallelism is used to perform a large number of computations at the same time, and quantum interference is used to combine their results into something that is both meaningful and can be measured according to the laws of quantum mechanics.

The biggest challenge is building a large-scale quantum computer. There are several ways one could do it. So far, the best results have been achieved using trapped ions. An ion is an atom that has lost one or more of its electrons. An ion trap is a system consisting of electric and magnetic fields, which can capture ions and keep them at fixed locations. Using an ion trap, one can arrange several ions in a line, at regular intervals.

One can encode 0 into the lowest energy state of an ion and 1 into a higher energy state. Then, the computation is performed using light to manipulate the states of ions. In an experiment by Rainer Blatt's group at the University of Innsbruck, Austria, this has been successfully performed for up to fourteen ions. The next step is to scale the technology up to a bigger number of trapped ions.

There are many other paths toward building a quantum computer. Instead of trapped ions, one can use electrons or particles of light, photons. One can even use more complicated objects, for example, the electric current in a superconductor. A very recent experiment by a group led by John Martinis of the University of California, Santa Barbara, has shown how to perform quantum operations on one or two quantum bits with very high precision, from 99.4% to 99.92%, using superconductor technology.

The fascinating thing is that all of these physical systems, from atoms to electric current in a superconductor, behave according to the same physical laws. And they all can perform quantum computation. Moving forward with any of these technologies relates to a fundamental problem in experimental physics: isolating quantum systems from the environment and controlling them with high precision. This is a very difficult and, at the same time, a very fundamental task, and being able to control quantum systems will be useful for many other purposes.

Besides building quantum computers, we can use the ideas of quantum information to think about physical laws in terms of information, in terms of 0s and 1s. This is the way I learned quantum mechanics: I started as a computer scientist, and I learned quantum mechanics by learning quantum computing first. And I think this is the best way to learn quantum mechanics.

Quantum mechanics can be used to describe many physical systems, and in each case, there are many technical details that are specific to the particular physical system. At the same time, there is a common set of core principles that all of those physical systems obey.

Quantum information abstracts away from the details that are specific to a particular physical system and focuses on the principles that are common to all quantum systems. Because of that, studying quantum information illuminates the basic concepts of quantum mechanics better than anything else. And, one day, this could become the standard way of learning quantum mechanics.

For myself, the main question still is: how will quantum computers be useful? We know that they will be faster for many computational tasks, from modeling nature to searching large amounts of data. I think there are many more applications and, perhaps, the most important ones are still waiting to be discovered.

Originally posted here:
What Can We Do with a Quantum Computer? | Institute for …

Qubit – Wikipedia

In quantum computing, a qubit or quantum bit (sometimes qbit) is the basic unit of quantum information: the quantum version of the classical binary bit, physically realized with a two-state device. A qubit is a two-state (or two-level) quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include the spin of the electron, in which the two levels can be taken as spin up and spin down, or the polarization of a single photon, in which the two states can be taken to be the vertical polarization and the horizontal polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of both states/levels simultaneously, a property which is fundamental to quantum mechanics and quantum computing.

The coining of the term qubit is attributed to Benjamin Schumacher.[1] In the acknowledgments of his 1995 paper, Schumacher states that the term qubit was created in jest during a conversation with William Wootters. The paper describes a way of compressing states emitted by a quantum source of information so that they require fewer physical resources to store. This procedure is now known as Schumacher compression.

A binary digit, characterized as 0 and 1, is used to represent information in classical computers. A binary digit can represent up to one bit of Shannon information, where a bit is the basic unit of information. However, in this article, the word bit is synonymous with binary digit.

In classical computer technologies, a processed bit is implemented by one of two levels of low DC voltage, and whilst switching from one of these two levels to the other, a so-called forbidden zone must be passed as fast as possible, as electrical voltage cannot change from one level to another instantaneously.

There are two possible outcomes for the measurement of a qubit, usually taken to have the value "0" and "1", like a bit or binary digit. However, whereas the state of a bit can only be either 0 or 1, the general state of a qubit according to quantum mechanics can be a coherent superposition of both.[2] Moreover, whereas a measurement of a classical bit would not disturb its state, a measurement of a qubit would destroy its coherence and irrevocably disturb the superposition state. It is possible to fully encode one bit in one qubit. However, a qubit can hold more information, e.g. up to two bits using superdense coding.

For a system of n components, a complete description of its state in classical physics requires only n bits, whereas in quantum physics it requires 2^n − 1 complex numbers.[3]

In quantum mechanics, the general quantum state of a qubit can be represented by a linear superposition of its two orthonormal basis states (or basis vectors). These vectors are usually denoted as the column vectors |0⟩ = (1, 0)ᵀ and |1⟩ = (0, 1)ᵀ. They are written in the conventional Dirac, or "bra-ket", notation; |0⟩ and |1⟩ are pronounced "ket 0" and "ket 1", respectively. These two orthonormal basis states, {|0⟩, |1⟩}, together called the computational basis, are said to span the two-dimensional linear vector (Hilbert) space of the qubit.

Qubit basis states can also be combined to form product basis states. For example, two qubits could be represented in a four-dimensional linear vector space spanned by the following product basis states: |00⟩ = (1, 0, 0, 0)ᵀ, |01⟩ = (0, 1, 0, 0)ᵀ, |10⟩ = (0, 0, 1, 0)ᵀ, and |11⟩ = (0, 0, 0, 1)ᵀ.

In general, n qubits are represented by a superposition state vector in a 2^n-dimensional Hilbert space.

A pure qubit state is a coherent superposition of the basis states. This means that a single qubit can be described by a linear combination of |0⟩ and |1⟩:

|ψ⟩ = α|0⟩ + β|1⟩

where α and β are probability amplitudes and can in general both be complex numbers. When we measure this qubit in the standard basis, according to the Born rule, the probability of outcome |0⟩ with value "0" is |α|² and the probability of outcome |1⟩ with value "1" is |β|². Because the absolute squares of the amplitudes equate to probabilities, it follows that α and β must be constrained by the equation

|α|² + |β|² = 1

Note that a qubit in this superposition state does not have a value in between "0" and "1"; rather, when measured, the qubit has a probability |α|² of the value "0" and a probability |β|² of the value "1". In other words, superposition means that there is no way, even in principle, to tell which of the two possible states forming the superposition state actually pertains. Furthermore, the probability amplitudes α and β encode more than just the probabilities of the outcomes of a measurement; the relative phase of α and β is responsible for quantum interference, e.g., as seen in the two-slit experiment.
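As a sketch of the Born rule and of why the relative phase matters, one can compare two states with identical measurement probabilities but opposite phases; applying a Hadamard gate (a standard single-qubit gate, used here purely for illustration) exposes the difference:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

plus  = np.array([1,  1]) / np.sqrt(2)   # alpha = beta = 1/sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)   # same magnitudes, opposite relative phase

def born_probs(state):
    """Born rule: outcome probabilities are the squared amplitude magnitudes."""
    return np.abs(state) ** 2

print(born_probs(plus), born_probs(minus))           # both [0.5 0.5]: phase invisible
print(born_probs(H @ plus), born_probs(H @ minus))   # [1 0] vs [0 1]: interference reveals it
```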

It might, at first sight, seem that there should be four degrees of freedom in |ψ⟩ = α|0⟩ + β|1⟩, as α and β are complex numbers with two degrees of freedom each. However, one degree of freedom is removed by the normalization constraint |α|² + |β|² = 1. This means, with a suitable change of coordinates, one can eliminate one of the degrees of freedom. One possible choice is that of Hopf coordinates:

α = e^{iδ} cos(θ/2), β = e^{i(δ+φ)} sin(θ/2)

Additionally, for a single qubit the overall phase of the state e^{iδ} has no physically observable consequences, so we can arbitrarily choose α to be real (or β in the case that α is zero), leaving just two degrees of freedom:

|ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩

where e^{iφ} is the physically significant relative phase.

The possible quantum states for a single qubit can be visualised using a Bloch sphere. Represented on such a 2-sphere, a classical bit could only be at the "North Pole" or the "South Pole", in the locations where |0⟩ and |1⟩ are, respectively. This particular choice of the polar axis is arbitrary, however. The rest of the surface of the Bloch sphere is inaccessible to a classical bit, but a pure qubit state can be represented by any point on the surface. For example, the pure qubit state (|0⟩ + i|1⟩)/√2 would lie on the equator of the sphere, at the positive y-axis. In the classical limit, a qubit, which can have quantum states anywhere on the Bloch sphere, reduces to the classical bit, which can be found only at either pole.

The surface of the Bloch sphere is a two-dimensional space, which represents the state space of the pure qubit states. This state space has two local degrees of freedom, which can be represented by the two angles φ and θ.
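A minimal sketch of this parametrization (the function name is mine): any pair of angles θ and φ produces a valid, normalized pure state.

```python
import numpy as np

def bloch_to_state(theta: float, phi: float) -> np.ndarray:
    """Pure qubit state from Bloch-sphere angles: cos(θ/2)|0> + e^{iφ} sin(θ/2)|1>."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

north = bloch_to_state(0.0, 0.0)                 # the "North Pole": |0>
equator = bloch_to_state(np.pi / 2, np.pi / 2)   # (|0> + i|1>)/sqrt(2), on the equator
print(north, np.linalg.norm(equator))            # any (theta, phi) gives a unit vector
```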

A pure state is one fully specified by a single ket, |ψ⟩ = α|0⟩ + β|1⟩, a coherent superposition as described above. Coherence is essential for a qubit to be in a superposition state. With interactions and decoherence, it is possible to put the qubit in a mixed state, a statistical combination or incoherent mixture of different pure states. Mixed states can be represented by points inside the Bloch sphere (or in the Bloch ball). A mixed qubit state has three degrees of freedom: the angles φ and θ, as well as the length r of the vector that represents the mixed state.

There are various kinds of physical operations that can be performed on pure qubit states.

An important distinguishing feature between qubits and classical bits is that multiple qubits can exhibit quantum entanglement. Quantum entanglement is a nonlocal property of two or more qubits that allows a set of qubits to express higher correlation than is possible in classical systems.

The simplest system to display quantum entanglement is the system of two qubits. Consider, for example, two entangled qubits in the |Φ⁺⟩ Bell state:

|Φ⁺⟩ = (1/√2)(|00⟩ + |11⟩)

In this state, called an equal superposition, there are equal probabilities of measuring either product state |00⟩ or |11⟩, as |1/√2|² = 1/2. In other words, there is no way to tell if the first qubit has value 0 or 1, and likewise for the second qubit.

Imagine that these two entangled qubits are separated, with one each given to Alice and Bob. Alice makes a measurement of her qubit, obtaining, with equal probabilities, either |0⟩ or |1⟩, i.e., she cannot tell if her qubit has value 0 or 1. Because of the qubits' entanglement, Bob must now get exactly the same measurement as Alice. For example, if she measures a |0⟩, Bob must measure the same, as |00⟩ is the only state where Alice's qubit is a |0⟩. In short, for these two entangled qubits, whatever Alice measures, so would Bob, with perfect correlation, in any basis, however far apart they may be and even though both cannot tell if their qubit has value 0 or 1: a most surprising circumstance that cannot be explained by classical physics.

Controlled gates act on 2 or more qubits, where one or more qubits act as a control for some specified operation. In particular, the controlled NOT gate (or CNOT or cX) acts on 2 qubits, and performs the NOT operation on the second qubit only when the first qubit is |1⟩, and otherwise leaves it unchanged. With respect to the unentangled product basis {|00⟩, |01⟩, |10⟩, |11⟩}, it maps the basis states as follows:

|00⟩ → |00⟩, |01⟩ → |01⟩, |10⟩ → |11⟩, |11⟩ → |10⟩

A common application of the CNOT gate is to maximally entangle two qubits into the |Φ⁺⟩ Bell state. To construct |Φ⁺⟩, the inputs A (control) and B (target) to the CNOT gate are:

(1/√2)(|0⟩ + |1⟩)_A and |0⟩_B

After applying CNOT, the output is the |Φ⁺⟩ Bell state: (1/√2)(|00⟩ + |11⟩).
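This construction is small enough to verify directly with 2x2 and 4x4 matrices (a NumPy sketch; the basis ordering |00⟩, |01⟩, |10⟩, |11⟩ follows the text):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard, applied to control qubit A
CNOT = np.array([[1, 0, 0, 0],                 # |00> -> |00>
                 [0, 1, 0, 0],                 # |01> -> |01>
                 [0, 0, 0, 1],                 # |10> -> |11>
                 [0, 0, 1, 0]])                # |11> -> |10>

ket00 = np.array([1, 0, 0, 0], dtype=complex)
# Hadamard on A (identity on B) gives (|00> + |10>)/sqrt(2); CNOT then entangles.
state = CNOT @ np.kron(H, np.eye(2)) @ ket00

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # |Phi+> = (|00> + |11>)/sqrt(2)
print(np.allclose(state, bell))                # True
```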

The |Φ⁺⟩ Bell state forms part of the setup of the superdense coding, quantum teleportation, and entangled quantum cryptography algorithms.

Quantum entanglement also allows multiple states (such as the Bell state mentioned above) to be acted on simultaneously, unlike classical bits that can only have one value at a time. Entanglement is a necessary ingredient of any quantum computation that cannot be done efficiently on a classical computer. Many of the successes of quantum computation and communication, such as quantum teleportation and superdense coding, make use of entanglement, suggesting that entanglement is a resource that is unique to quantum computation.[4] A major hurdle facing quantum computing, as of 2018, in its quest to surpass classical digital computing, is noise in quantum gates that limits the size of quantum circuits that can be executed reliably.[5]

A number of qubits taken together is a qubit register. Quantum computers perform calculations by manipulating qubits within a register. A qubyte (quantum byte) is a collection of eight qubits.[6]

Similar to the qubit, the qutrit is the unit of quantum information that can be realized in suitable 3-level quantum systems. This is analogous to the trit, the unit of classical information in ternary computers. Note, however, that not all 3-level quantum systems are qutrits.[7] The term qudit (quantum d-git) denotes the unit of quantum information that can be realized in suitable d-level quantum systems.[8]

Any two-level quantum-mechanical system can be used as a qubit. Multilevel systems can be used as well, if they possess two states that can be effectively decoupled from the rest (e.g., ground state and first excited state of a nonlinear oscillator). There are various proposals. Several physical implementations that approximate two-level systems to various degrees were successfully realized. Similarly to a classical bit where the state of a transistor in a processor, the magnetization of a surface in a hard disk and the presence of current in a cable can all be used to represent bits in the same computer, an eventual quantum computer is likely to use various combinations of qubits in its design.

The following is an incomplete list of physical implementations of qubits, and the choices of basis are by convention only.

In a paper entitled “Solid-state quantum memory using the 31P nuclear spin”, published in the October 23, 2008, issue of the journal Nature,[9] a team of scientists from the U.K. and U.S. reported the first relatively long (1.75 seconds) and coherent transfer of a superposition state in an electron spin “processing” qubit to a nuclear spin “memory” qubit. This event can be considered the first relatively consistent quantum data storage, a vital step towards the development of quantum computing. Recently, a modification of similar systems (using charged rather than neutral donors) has dramatically extended this time, to 3 hours at very low temperatures and 39 minutes at room temperature.[10] Room temperature preparation of a qubit based on electron spins instead of nuclear spin was also demonstrated by a team of scientists from Switzerland and Australia.[11]

Here is the original post:
Qubit – Wikipedia

Quantum computer | computer science | Britannica.com

Quantum computer, device that employs properties described by quantum mechanics to enhance computations.

As early as 1959 the American physicist and Nobel laureate Richard Feynman noted that, as electronic components begin to reach microscopic scales, effects predicted by quantum mechanics occur, which, he suggested, might be exploited in the design of more powerful computers. In particular, quantum researchers hope to harness a phenomenon known as superposition. In the quantum mechanical world, objects do not necessarily have clearly defined states, as demonstrated by the famous experiment in which a single photon of light passing through a screen with two small slits will produce a wavelike interference pattern, or superposition of all available paths. (See wave-particle duality.) However, when one slit is closed, or a detector is used to determine which slit the photon passed through, the interference pattern disappears. In consequence, a quantum system exists in all possible states before a measurement collapses the system into one state. Harnessing this phenomenon in a computer promises to expand computational power greatly. A traditional digital computer employs binary digits, or bits, that can be in one of two states, represented as 0 and 1; thus, for example, a 4-bit computer register can hold any one of 16 (2^4) possible numbers. In contrast, a quantum bit (qubit) exists in a wavelike superposition of values from 0 to 1; thus, for example, a 4-qubit computer register can hold 16 different numbers simultaneously. In theory, a quantum computer can therefore operate on a great many values in parallel, so that a 30-qubit quantum computer would be comparable to a digital computer capable of performing 10 trillion floating-point operations per second (TFLOPS), comparable to the speed of the fastest supercomputers.

During the 1980s and 90s the theory of quantum computers advanced considerably beyond Feynman's early speculations. In 1985 David Deutsch of the University of Oxford described the construction of quantum logic gates for a universal quantum computer, and in 1994 Peter Shor of AT&T devised an algorithm to factor numbers with a quantum computer that would require as few as six qubits (although many more qubits would be necessary for factoring large numbers in a reasonable time). When a practical quantum computer is built, it will break current encryption schemes based on multiplying two large primes; in compensation, quantum mechanical effects offer a new method of secure communication known as quantum encryption. However, actually building a useful quantum computer has proved difficult. Although the potential of quantum computers is enormous, the requirements are equally stringent. A quantum computer must maintain coherence between its qubits (known as quantum entanglement) long enough to perform an algorithm; because of nearly inevitable interactions with the environment (decoherence), practical methods of detecting and correcting errors need to be devised; and, finally, since measuring a quantum system disturbs its state, reliable methods of extracting information must be developed.

Plans for building quantum computers have been proposed; although several demonstrate the fundamental principles, none is beyond the experimental stage. Three of the most promising approaches are presented below: nuclear magnetic resonance (NMR), ion traps, and quantum dots.

In 1998 Isaac Chuang of the Los Alamos National Laboratory, Neil Gershenfeld of the Massachusetts Institute of Technology (MIT), and Mark Kubinec of the University of California at Berkeley created the first quantum computer (2-qubit) that could be loaded with data and output a solution. Although their system was coherent for only a few nanoseconds and trivial from the perspective of solving meaningful problems, it demonstrated the principles of quantum computation. Rather than trying to isolate a few subatomic particles, they dissolved a large number of chloroform molecules (CHCl3) in water at room temperature and applied a magnetic field to orient the spins of the carbon and hydrogen nuclei in the chloroform. (Because ordinary carbon has no magnetic spin, their solution used an isotope, carbon-13.) A spin parallel to the external magnetic field could then be interpreted as a 1 and an antiparallel spin as 0, and the hydrogen nuclei and carbon-13 nuclei could be treated collectively as a 2-qubit system. In addition to the external magnetic field, radio frequency pulses were applied to cause spin states to flip, thereby creating superimposed parallel and antiparallel states. Further pulses were applied to execute a simple algorithm and to examine the system's final state. This type of quantum computer can be extended by using molecules with more individually addressable nuclei. In fact, in March 2000 Emanuel Knill, Raymond Laflamme, and Rudy Martinez of Los Alamos and Ching-Hua Tseng of MIT announced that they had created a 7-qubit quantum computer using trans-crotonic acid. However, many researchers are skeptical about extending magnetic techniques much beyond 10 to 15 qubits because of diminishing coherence among the nuclei.

Just one week before the announcement of a 7-qubit quantum computer, physicist David Wineland and colleagues at the U.S. National Institute of Standards and Technology (NIST) announced that they had created a 4-qubit quantum computer by entangling four ionized beryllium atoms using an electromagnetic trap. After confining the ions in a linear arrangement, a laser cooled the particles almost to absolute zero and synchronized their spin states. Finally, a laser was used to entangle the particles, creating a superposition of both spin-up and spin-down states simultaneously for all four ions. Again, this approach demonstrated basic principles of quantum computing, but scaling up the technique to practical dimensions remains problematic.

Quantum computers based on semiconductor technology are yet another possibility. In a common approach a discrete number of free electrons (qubits) reside within extremely small regions, known as quantum dots, and in one of two spin states, interpreted as 0 and 1. Although prone to decoherence, such quantum computers build on well-established, solid-state techniques and offer the prospect of readily applying integrated circuit scaling technology. In addition, large ensembles of identical quantum dots could potentially be manufactured on a single silicon chip. The chip operates in an external magnetic field that controls electron spin states, while neighbouring electrons are weakly coupled (entangled) through quantum mechanical effects. An array of superimposed wire electrodes allows individual quantum dots to be addressed, algorithms executed, and results deduced. Such a system necessarily must be operated at temperatures near absolute zero to minimize environmental decoherence, but it has the potential to incorporate very large numbers of qubits.

See the article here:
Quantum computer | computer science | Britannica.com

IBM's new quantum computer is a symbol, not a breakthrough

In the grueling race to build a practical quantum computer, tech companies are keeping their spirits up by loudly cheering every milestone, no matter how small. One of the most vocal competitors is IBM, which today at CES unveiled the IBM Q System One: a 20-qubit quantum computer that's built for stability, but with some very flashy design.

IBM is touting the Q System One as the world's first fully integrated universal quantum computing system designed for scientific and commercial use. But that's a description that needs a lot of context. The Q System One may be designed for commercial use, but it's not exactly ready for it. Not in the way you might think.

Quantum computers like the Q System One are still very much experimental devices. They can't outperform classical computers at useful tasks (in fact, your laptop is probably more powerful when it comes to real-life computation), but are instead supposed to be research tools, letting us work out, qubit by qubit, how quantum devices might work at all.

"It's more like a stepping stone than a practical quantum computer," Winfried Hensinger, professor of quantum technologies at the UK's University of Sussex, told The Verge. "Don't think of this as a quantum computer that can solve all of the problems quantum computing is known for. Think of it as a prototype machine that allows you to test and further develop some of the programming that might be useful in the future."

And even as an experimental device, it's not like IBM is going to start selling the Q System One at Best Buy. The company won't say how much it costs to buy one of these machines or even how many it's made. Like IBM's other quantum computers, it's accessible only via the cloud, where companies and research institutes can buy time on the IBM Q Network. And today IBM announced two new customers on the network: energy giant ExxonMobil, and European research lab CERN, the organization that built the Large Hadron Collider.

So what's special about the Q System One? Well, IBM says the main achievement is turning an experimental quantum machine into something with reliability (and looks) closer to that of a mainframe computer. Quantum computing is an extremely delicate business. Chips need to be kept at freezing temperatures and can be disturbed by the tiniest electrical fluctuations or physical vibrations. The Q System One, says IBM, minimizes these problems.

"This is something IBM brings to the market that no one else really does. We know how to do integrated systems," IBM's VP of quantum research, Bob Sutor, tells The Verge. "The electronics for a quantum computer are not something you go buy off the shelf. You need a temperature controlled environment, you need to minimize the vibrations, anything that might disrupt the quantum calculations."

Sutor says that a practical advantage of engineering a machine like the Q System One is that it reduces research downtime. Resetting a quantum computer after an upset caused by a power surge or a disgruntled look from a technician is much, much quicker with a device like the Q System One. "What used to take days and weeks now takes hours or days," says Sutor.

And while these might sound like marginal gains, if we're ever going to have quantum computers that do change the world in all the ways we dream of (by discovering new drugs, for example, and unlocking fusion energy), reliable research will absolutely be key.

And perhaps just as importantly, the Q System One looks the part. The machine was designed by Map Project Office, an industrial design consultancy that's worked with companies like Sonos, Honda, and Graphcore. The Q System One is contained in a nine-foot borosilicate glass cube, with its delicate internals sheathed by a shiny, rounded black case. It's reminiscent of both Apple's dustbin-like 2013 Mac Pro and the Monolith from 2001: A Space Odyssey. It looks like a computer from the future.

For IBM this is not simply a side benefit; it's part of the plan. The 107-year-old company may still rake in billions in revenue each quarter (mostly from legacy enterprise deals), but it's facing what some analysts have called irreversible structural decline. It's failed to come out ahead in the tech industry's most recent growth areas, mobile and cloud computing, and it needs new revenue streams to carry it through its second century of existence. AI is one bet, quantum computing another.

Sutor doesn't mention these problems, but he does note that the Q System One is supposed to inspire confidence both in quantum computing and in IBM itself.

"People, when they see quantum computing systems, their eyes just glow," he tells The Verge. "And it's because they understand that these things that were just rumored about, or that were just too futuristic, are now starting to be produced. They can look at these things and say, 'Ah, IBM sees the path forward!'"

And machines like the Q System One are still useful on these terms, giving people a glimpse of the future. But we need to remember, says Hensinger, that there's lots of work yet to be done. "I wouldn't call this a breakthrough," he says. "But it's a productive step towards commercial realization of quantum computing."


IBM unveils the world’s first quantum computer that …

For many years, quantum computers have existed only within the confines of the research lab.

On Tuesday, though, IBM unveiled the IBM Q System One, billed as the first-ever quantum computer designed for businesses to put to their own use, though the company is clear that this is only the first step toward a broader revolution.

Quantum computing is considered one of the most promising early-stage technologies out there today. That's because, for certain classes of problems, quantum computers can explore an exponentially larger computational space than classical machines, giving them the potential to transform entire industries. For example, they could streamline aerospace and military systems, calculate risk factors to make better investments, or, perhaps, help find a cure for cancer and other diseases.

“Data will be the world’s most valuable natural resource,” IBM CEO Ginni Rometty said on stage at the Consumer Electronics Show in Las Vegas, where the IBM Q System One was unveiled.

Don’t expect to install one in your office any time soon, though. While the computer is open to paying customers, developers will access its power from the comfort of their own homes or offices via the IBM Cloud.

Average computers store data in binary, as either zeroes or ones; strings of ones and zeroes represent numbers or letters. Quantum computers work differently. They store data using "qubits," which have a special property, superposition, that allows a zero and a one to exist simultaneously in a single unit. This seemingly small difference lets quantum computers perform exponentially more calculations at once, making them powerful enough for incredibly complicated tasks like drug discovery, intensive data analysis, and even creating codes that are extremely hard to break.
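The "zero and one simultaneously" idea can be made concrete with a toy sketch in plain Python. This is not a real quantum SDK, and the function name below is illustrative only; it just shows that a qubit is described by two complex amplitudes rather than a single bit, and why the description grows exponentially with qubit count:

```python
def measure_probabilities(amplitudes):
    """Born rule: the chance of reading out 0 or 1 is the squared
    magnitude of each complex amplitude."""
    a, b = amplitudes
    return abs(a) ** 2, abs(b) ** 2

# A classical bit is 0 or 1. A qubit's state is a pair of complex
# amplitudes (a, b) with |a|^2 + |b|^2 = 1, and both can be nonzero.
plus = (2 ** -0.5, 2 ** -0.5)  # equal superposition of 0 and 1
p0, p1 = measure_probabilities(plus)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5

# Describing n qubits takes 2**n amplitudes -- the source of the
# exponential advantage the article alludes to.
print(2 ** 50)  # amplitudes needed to describe just 50 qubits
```

Measurement collapses the superposition to a definite 0 or 1 with these probabilities, which is why quantum algorithms must be designed carefully to make the useful answer the likely one.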

Enclosed in a 9-foot-tall, 9-foot-wide glass case that forms an airtight environment, this sleek computer is IBM’s first effort to bring quantum computing to businesses. The casing is important: Qubits lose their quantum-computing properties outside of very specific conditions. A quantum computer has to be kept at temperatures near absolute zero, in an environment that is mostly free of vibration and electromagnetic radiation.

IBM’s new system aims to address this challenge with an integrated quantum computer that handles all of that on behalf of its customers; hence the casing, which keeps everything shipshape. This relative fragility is also why you won’t be installing an IBM Q System One in your own office: while it’s definitely a major step forward, it’s far from being something you can order and have delivered.

“The IBM Q System One is a major step forward in the commercialization of quantum computing,” Arvind Krishna, IBM’s senior vice president of hybrid cloud and director of research, said in a statement. “This new system is critical in expanding quantum computing beyond the walls of the research lab as we work to develop practical quantum applications for business and science.”


Later this year, IBM will also open its first IBM Q Quantum Computation Center for commercial customers in Poughkeepsie, New York. At this lab, clients can use IBM’s cloud-based quantum computing systems, as well as other high-performance computing systems.

IBM isn’t the only company working on quantum computing, though the technology is still far from ready for mass deployment.

Google is researching how to make quantum computers more stable and better able to find and fix errors, and it has also created and tested qubit processors as it pursues the technology. Microsoft is working on creating hybrid quantum computers, which combine the new technology with more conventional processors. Intel, too, has been working on quantum computing chips.
