Category Archives: Quantum Computing
IBM is outlining another milestone in quantum computing, its highest Quantum Volume to date, and projects that practical uses, or so-called Quantum Advantage, may be a decade away.
Big Blue, which will outline the scientific milestone at the American Physical Society March Meeting, made a bit of a splash at CES 2019 with a display of its Q System quantum computer and has been steadily showing progress on quantum computing.
In other words, that quantum computing buying guide for technology executives may take a while. Quantum Volume is a performance metric that indicates progress in the pursuit of Quantum Advantage. Quantum Advantage refers to the point where quantum applications deliver significant advantages over classical computers.
Quantum Volume is determined by the number of qubits, connectivity, and coherence time, plus accounting for gate and measurement errors, device cross talk, and circuit software compiler efficiency.
IBM said its Q System One, which has a 20-qubit processor, produced a Quantum Volume of 16, double the current IBM Q, which has a Quantum Volume of 8. IBM also said the Q System One has some of the lowest error rates IBM has measured.
That progress is notable, but practical broad use cases are still years away. IBM said Quantum Volume would need to double every year to reach Quantum Advantage within the next decade. Faster progress on Quantum Volume would speed up that timeline. IBM has doubled the power of its quantum computers annually since 2017.
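IBM's projection can be sketched with simple compound-growth arithmetic (a rough illustration of the doubling claim, not IBM's own methodology):

```python
# Rough sketch: Quantum Volume doubling once per year, starting from the
# Q System One's measured Quantum Volume of 16 in 2019.
def projected_quantum_volume(start_qv, start_year, target_year):
    """Project Quantum Volume assuming it doubles every year."""
    return start_qv * 2 ** (target_year - start_year)

qv_2029 = projected_quantum_volume(16, 2019, 2029)
print(qv_2029)  # 16384 after ten doublings
```

Whether Quantum Volume actually keeps doubling is, of course, the open question the article raises.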
Once Quantum Advantage is hit, there would be new applications, more of an ecosystem and real business use cases. Consumption of quantum computing would still likely be delivered via cloud computing since the technology has some unique characteristics that make a traditional data center look easy. IBM made its quantum computing technology available in 2016 via a cloud service and is working with partners to find business and science use cases.
Here's how quantum computing and classical computing differ, via our recent primer on the subject.
Every classical electronic computer exploits the natural behavior of electrons to produce results in accordance with Boolean logic (for any two specific input states, one certain output state). Here, the basic unit of transaction is the binary digit (“bit”), whose state is either 0 or 1. In a conventional semiconductor, these two states are represented by low and high voltage levels within transistors.
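The "one certain output state for any two specific input states" rule can be made concrete by enumerating a truth table (a generic illustration, not tied to any particular hardware):

```python
# Boolean logic in miniature: for any two specific input bits there is
# exactly one determined output bit, as in the AND gate's truth table.
def AND(a, b):
    return a & b

table = [(a, b, AND(a, b)) for a in (0, 1) for b in (0, 1)]
print(table)  # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```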
In a quantum computer, the structure is radically different. Its basic unit of registering state is the qubit, which at one level also stores a 0 or 1 state (actually 0 and/or 1). Instead of transistors, a quantum computer obtains its qubits by bombarding atoms with electrical fields at perpendicular angles to one another, the result being to line up the ions but also keep them conveniently and equivalently separated. When these ions are separated by just enough space, their orbiting electrons become the home addresses, if you will, for qubits.
Go here to read the rest:
IBM hits quantum computing milestone, may see ‘Quantum …
Microsoft is focusing on the development of quantum computers that take advantage of cryogenically cooled nanowires. (Microsoft Photo)
REDMOND, Wash. Quantum computing may still be in its infancy, but the Microsoft Quantum Network is all grown up, fostered by in-house developers, research affiliates and future stars of the startup world.
The network made its official debut today here at Microsoft's Redmond campus, during a Startup Summit that laid out the company's vision for quantum computing and introduced network partners to Microsoft's tools of the quantum trade.
Quantum computing stands in contrast to the classical computer technologies that have held sway for more than a half-century. Classical computing is based on the ones and zeroes of bit-based processing, while quantum computing takes advantage of the weird effects of quantum physics. Quantum bits, or qubits, needn't represent a one or a zero, but can represent multiple states during computation.
The quantum approach should be able to solve computational problems that can't easily be solved using classical computers, such as modeling molecular interactions or optimizing large-scale systems. That could open the way to world-changing applications, said Todd Holmdahl, corporate vice president of Microsoft's Azure Hardware Systems Group.
"We're looking at problems like climate change," Holmdahl said. "We're looking at solving big food production problems. We think we have opportunities to solve problems around materials science, personal health care, machine learning. All of these things are possible and obtainable with a quantum computer. We have been talking around here that we're at the advent of the quantum economy."
Representatives from 16 startups were invited to this week's Startup Summit, which features talks from Holmdahl and other leaders of Microsoft's quantum team as well as demos and workshops focusing on Microsoft's programming tools. (The closest startup to Seattle is 1QBit, based in Vancouver, B.C.)
Over the past year and a half, Microsoft has released a new quantum-friendly programming language called Q# (Q-sharp) as part of its Quantum Development Kit, and has worked with researchers at Pacific Northwest National Laboratory and academic institutions around the world to lay the technical groundwork for the field.
A big part of that groundwork is the development of a universal quantum computer, based on a topological architecture that builds error-correcting mechanisms right into the cryogenically cooled, nanowire-based hardware. Cutting down on the error-producing noise in quantum systems will be key to producing a workable computer.
"We believe that our qubit equals about 1,000 of our competition's qubits," Holmdahl said.
There's lots of competition in the quantum computing field nowadays: IBM, Google and Intel are all working on similar technologies for a universal quantum computer, while Canada's D-Wave Systems is taking advantage of a more limited type of computing technology known as quantum annealing.
This week, D-Wave previewed its plans for a new type of computer topology that it said would reduce quantum noise and more than double the qubit count of its existing platform, from 2,000 linked qubits to 5,000.
But the power of quantum computing shouldn't be measured merely by counting qubits. The efficiency of computation and the ability to reduce errors can make a big difference, said Microsoft principal researcher Matthias Troyer.
For example, a standard approach to simulating the molecular mechanism behind nitrogen fixation for crops could require 30,000 years of processing time, he said. But if the task is structured to enable parallel processing and enhanced error correction, the required runtime can be shrunk to less than two days.
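The scale of that improvement can be checked with back-of-the-envelope arithmetic (approximate figures from the talk, not an exact benchmark):

```python
# Rough speedup implied by Troyer's example: 30,000 years of processing
# time reduced to under two days of runtime.
runtime_classical_days = 30_000 * 365  # 30,000 years expressed in days
runtime_quantum_days = 2
speedup = runtime_classical_days / runtime_quantum_days
print(round(speedup))  # 5475000, i.e. a roughly five-million-fold reduction
```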
"Quantum software engineering is really as important as the hardware engineering," Troyer said.
Julie Love, director of Microsoft Quantum Business Development, said that Microsoft will start out offering quantum computing through Microsoft's Azure cloud-based services. Not all computational problems are amenable to the quantum approach: It's much more likely that an application will switch between classical and quantum processing, and therefore between classical tools such as the C# programming language and quantum tools such as Q#.
"When you work in chemistry and materials, all of these problems, you hit this known-to-be-unsolvable problem," Love said. "Quantum provides the possibility of a breakthrough."
Love shies away from giving a firm timetable for the emergence of specific applications, but last year Holmdahl predicted that commercial quantum computers would exist five years from now. (Check back in 2023 to see how the prediction panned out.)
The first applications could well focus on simulating molecular chemistry, with the aim of prototyping better pharmaceuticals, more efficient fertilizers, better batteries, more environmentally friendly chemicals for the oil and gas industry, and a new class of high-temperature superconductors. It might even be possible to address the climate change challenge by custom-designing materials that pull excess carbon dioxide out of the air.
Love said quantum computers would also be well-suited for addressing optimization problems, like figuring out how to make traffic flow better through Seattle's urban core, and for reducing the training time required for AI modeling.
"That list is going to continue to evolve," she said.
Whenever the subject of quantum computing comes up, cryptography has to be mentioned as well. It's theoretically possible for a quantum computer to break the codes that currently protect all sorts of secure transactions, ranging from email encryption to banking protocols.
Love said those code-breaking applications are farther out than other likely applications, due to the huge amount of computation resources that would be required even for a quantum computer. Nevertheless, it's not too early to be concerned. "We have a pretty significant research thrust in what's called post-quantum crypto," she said.
Next-generation data security is one of the hot topics addressed by the $1.2 billion National Quantum Initiative that was approved by Congress and the White House last December. Love said Microsoft's post-quantum crypto protocols have already gone through an initial round of vetting by the National Institute of Standards and Technology.
"We've been working at this in a really open way," she said.
Like every technology, quantum computing is sure to have a dark side as well as a bright side. But it's reassuring to know that developers are thinking ahead about both sides.
Read this article:
Microsofts quantum computing network takes a giant leap …
Photo: IBM Research. Workers assemble the enclosure for the IBM Q System One quantum computer, which was shown at the Consumer Electronics Show in Las Vegas in January.
Our romance with new technologies always seems to follow the same trajectory: We are by turns mesmerized and adoring, disappointed and disheartened, and end up settling for less than we originally imagined. In 1954, Texas Instruments touted its new transistors as bringing "electronic brains approaching the human brain in scope and reliability" much closer to reality. In 2000, U.S. president Bill Clinton declared that the Human Genome Project would lead to a world in which "our children's children will know the term cancer only as a constellation of stars." And so it is now with quantum computing.
The popular press is awash with articles touting its promise. Tech giants are pouring huge amounts of money into building prototypes. You get the distinct impression that the computer industry is on the verge of an imminent quantum revolution.
But not everyone believes that quantum computing is going to solve real-world problems in anything like the time frame that some proponents of the technology want us to believe. Indeed, many of the researchers involved acknowledge the hype has gotten out of control, cautioning that quantum computing may take decades to mature.
Theoretical physicist Mikhail Dyakonov, a researcher for many years at the Ioffe Institute in Saint Petersburg, Russia, and now at the University of Montpellier, in France, is even more skeptical. In "The Case Against Quantum Computing," he lays out his view that practical general-purpose quantum computers will not be built anytime in the foreseeable future.
As you might expect, his essay ruffled some feathers after it was published online. But as it turns out, while his article was being prepared, a committee assembled by the U.S. National Academies of Sciences, Engineering, and Medicine had been grappling with the very same question.
The committee was to provide an independent assessment of the feasibility and implications of creating a functional quantum computer capable of addressing real-world problems…. It was to estimate the time and resources required, and how to assess the probability of success.
The experts who took up the challenge included John Martinis of the University of California, Santa Barbara, who heads Google's quantum-hardware efforts; David Awschalom of the University of Chicago, who formerly directed the Center for Spintronics and Quantum Computation at UCSB; and Umesh Vazirani of the University of California, Berkeley, who codirects the Berkeley Quantum Information and Computation Center.
To their credit, in their report, released in December, they didn't sugarcoat the difficulties. Quite the opposite.
The committee concluded that it is "highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable "noisy intermediate-scale quantum computers" will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says.
Okay, if not a decade, then how long? The committee was not prepared to commit itself to any estimate. Authors of a commentary in the January issue of the Proceedings of the IEEE devoted to quantum computing were similarly reticent to make concrete predictions. So the answer is: Nobody really knows.
The people working in this area are nevertheless thrilled by recent progress they've made on proof-of-concept devices and by the promise of this research. They no doubt consider the technical hurdles to be much more tractable than Dyakonov concludes. So don't be surprised when you see their perspectives appear in Spectrum, too.
This article appears in the March 2019 print issue as "Quantum Computing's Prospects."
Read the rest here:
When Will Quantum Computing Have Real Commercial Value …
Illustration: Christian Gralingen
Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades, and without any practical results to show for it.
We've been told that quantum computers could provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex systems, and artificial intelligence. We've been assured that quantum computers will "forever alter our economic, industrial, academic, and societal landscape." We've even been told that the encryption that protects the world's most sensitive data may soon be broken by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.
Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.
It's become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world's top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.
In light of all this, it's natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, "Not in the foreseeable future." Having spent decades conducting research in quantum and condensed-matter physics, I've developed my very pessimistic view. It's based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.
The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.
Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy," he opined. A few years later, University of Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analogue of the universal Turing machine.
The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.
The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it's fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer's clock.
The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their on and off states, according to a prescribed program.
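The 2^N growth is easy to see directly (a generic illustration, not part of the original article):

```python
# With N transistors (bits), a classical machine can be in any one of
# 2**N distinct states at a given clock cycle.
def num_states(n_bits):
    return 2 ** n_bits

print(num_states(8))   # 256 states for a single byte
print(num_states(64))  # 18446744073709551616 states for a 64-bit word
```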
In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron's internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or −1/2 (in units of the Planck constant). For whatever the chosen axis, you can denote the two basic quantum states of the electron's spin as ↑ and ↓.
Here's where things get weird. With the quantum bit, those two states aren't the only ones possible. That's because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.
That's because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent); nothing else would make sense.
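The normalization rule can be sketched numerically (hypothetical amplitudes chosen to match the 60/40 example in the text):

```python
import math

# Amplitudes alpha and beta for the "up" and "down" spin states.
# Their squared magnitudes are the measurement probabilities.
alpha = complex(math.sqrt(0.6), 0.0)  # chosen so P(up) = 0.6
beta = complex(0.0, math.sqrt(0.4))   # chosen so P(down) = 0.4

p_up = abs(alpha) ** 2
p_down = abs(beta) ** 2

print(round(p_up, 6), round(p_down, 6))  # 0.6 0.4
assert math.isclose(p_up + p_down, 1.0)  # squared magnitudes sum to 1
```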
In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.
Yes, quantum mechanics often defies intuition. But this concept shouldn't be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it's not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it's become almost de rigueur for journalists to describe it as such.
In a system with two qubits, there are 2^2 = 4 basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.
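A small sketch of how the amplitude count grows when qubits are combined (for independent qubits this is a Kronecker product; entangled states need all 2^N amplitudes specified explicitly):

```python
# Combining qubit states: N qubits are described by 2**N amplitudes.
def tensor(a, b):
    """Kronecker product of two amplitude vectors (as flat lists)."""
    return [x * y for x in a for y in b]

up = [1.0, 0.0]                   # a qubit definitely in the "up" state
mixed = [0.6 ** 0.5, 0.4 ** 0.5]  # a qubit in superposition

two_qubits = tensor(up, mixed)
three_qubits = tensor(two_qubits, up)
print(len(two_qubits), len(three_qubits))  # 4 8
```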
While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.
How is information processed in such a machine? That's done by applying certain kinds of transformations, dubbed quantum gates, that change these parameters in a precise and controlled manner.
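As a sketch of what a gate does to those parameters, here is the standard Hadamard gate applied to a spin-up qubit (plain Python, no quantum library assumed):

```python
import math

# The Hadamard gate, a 2x2 unitary matrix; applying it to the "up"
# basis state yields an equal superposition of "up" and "down".
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """Matrix-vector product: the new amplitudes after the gate."""
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

up = [1.0, 0.0]
result = apply_gate(H, up)
print([round(abs(a) ** 2, 3) for a in result])  # [0.5, 0.5]
```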
Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.
To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
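Python's arbitrary-precision integers make it easy to check the claim directly:

```python
# 2**1000 written out in decimal is a number with over 300 digits, far
# larger than common estimates of roughly 10**80 particles in the
# observable universe.
n_params = 2 ** 1000
print(len(str(n_params)))   # 302 digits
print(n_params > 10 ** 80)  # True
```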
At this point in a description of a possible future technology, a hardheaded engineer loses interest. But lets continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.
In contrast, it's absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.
How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine, which was already more than astronomical with 1,000 qubits, now becomes even more ludicrous.
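The overhead arithmetic behind that "million or more" figure is straightforward (using the low ends of the estimates quoted above):

```python
# Error-correction overhead: physical qubits per logical qubit, times
# the number of logical qubits needed, gives the total machine size.
logical_needed = 1_000        # low end of the "useful computer" estimate
physical_per_logical = 1_000  # low end of the overhead estimate
total_physical = logical_needed * physical_per_logical
print(total_physical)         # 1000000: a million physical qubits
```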
Even without considering these impossibly large numbers, it's sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it's not like this hasn't long been a key goal.
In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that "requires on the order of 50 physical qubits" and "exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm." It's now the end of 2018, and that ability has still not been demonstrated.
The huge amount of scholarly literature that's been generated about quantum computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and must command respect and admiration.
The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by local noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.
A decade and a half ago, ARDA's Experts Panel noted that "it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely." Here, the key words are "under certain assumptions." That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.
I argue that they can't. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.
Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the on state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.
Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? There are no clear answers to these crucial questions.
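The precision question can be illustrated concretely: any finite decimal truncation of an irrational constant like the square root of 2 leaves a residual error, and even the machine's own floating-point value is itself only an approximation.

```python
import math

# Successive decimal approximations of sqrt(2) and their residual error
# relative to the best double-precision value available.
target = math.sqrt(2)
for digits in (2, 5, 11):
    approx = round(target, digits)
    print(digits, approx, abs(target - approx))
# The error shrinks with each added digit but never reaches exactly
# zero for any finite truncation.
```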
While various strategies for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).
The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shors algorithm, performing database searches by a similarly famous quantum-computing algorithm that Lov Grover developed at Bell Laboratories in 1996, and other specialized applications that are suitable for quantum computers.
On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I'm skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate, on a microscopic level and with enormous precision, a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. No, never.
I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That's because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What's more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.
All these problems, as well as a few others I've not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.
To my mind, quantum-computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: "This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work."
Editor's note: A sentence in this article originally stated that concerns over required precision "were never even discussed." This sentence was changed on 30 November 2018 after some readers pointed out to the author instances in the literature that had considered these issues. The amended sentence now reads: "There are no clear answers to these crucial questions."
Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.
Source: The Case Against Quantum Computing – IEEE Spectrum
Quantum computing just plain sounds cool. We've all read about the massive investment in making it a reality, and its promise of breakthroughs in many industries. But all that press is usually short on what it is and how it works. That's for a reason: Quantum computing is quite different from traditional digital computing and requires thinking about things in a non-intuitive way. Oh, and there is math. Lots of it.
This article won't make you an expert, but it should help you understand what quantum computing is, why it's important, and why it's so exciting. If you already have a background in quantum mechanics and grad-school math, you probably don't need to read this article. You can jump straight into a book like A Gentle Introduction to Quantum Computing (hint: "gentle" is a relative term). But if you're like most of us and don't have that background, let's do our best to demystify one of the most mystical topics in computing.
In a few short paragraphs, here are the basics that we'll go over in more detail in this article: Quantum computers use qubits instead of traditional bits (binary digits). Qubits are different from traditional bits because until they are read out (meaning measured), they can exist in an indeterminate state where we can't tell whether they'll be measured as a 0 or a 1. That's because of a unique property called superposition.
Superposition makes qubits interesting, but their real superpower is entanglement. Entangled qubits can interact instantly. To make functional qubits, quantum computers have to be cooled to near absolute zero. Even when supercooled, qubits don't maintain their entangled state (coherence) for very long.
That makes programming them extra tricky. Quantum computers are programmed using sequences of logic gates of various kinds, but programs need to run quickly enough that the qubits don't lose coherence before they're measured. For anyone who took a logic class or studied digital circuit design using flip-flops, quantum logic gates will seem somewhat familiar, although quantum computers themselves are essentially analog. However, the combination of superposition and entanglement makes the process about a hundred times more confusing.
The ordinary bits we use in typical digital computers are either 0 or 1. You can read them whenever you want, and unless there is a flaw in the hardware, they won't change. Qubits aren't like that. They have a probability of being 0 and a probability of being 1, but until you measure them, they may be in an indefinite state. That state, along with some other state information that allows for additional computational complexity, can be described as an arbitrary point on a sphere (of radius 1) that reflects the probability of being measured as a 0 or a 1 (which are the north and south poles).
The qubit's state is a combination of the values along all three axes. This is called superposition. Some texts describe this property as being in all possible states at the same time, while others think that's somewhat misleading and that we're better off sticking with the probability explanation. Either way, a quantum computer can actually do math on the qubit while it is in superposition, changing the probabilities in various ways through logic gates, before eventually reading out a result by measuring it. In all cases, though, once a qubit is read, it is either 1 or 0 and loses its other state information.
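The probability picture above can be made concrete in a few lines of ordinary code. This is only a sketch, not real quantum hardware or any vendor's toolkit: a single qubit is represented as a pair of complex amplitudes whose squared magnitudes give the odds of reading out 0 or 1 (the helper names here are invented for illustration):

```python
import math

# A qubit is described by two amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0; |b|^2 is the probability of measuring 1.
def probabilities(state):
    a, b = state
    return (round(abs(a) ** 2, 3), round(abs(b) ** 2, 3))

zero = (1.0, 0.0)                            # definitely reads out as 0
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))  # an equal superposition

print(probabilities(zero))  # (1.0, 0.0)
print(probabilities(plus))  # (0.5, 0.5)
```

Note that the superposition is not "half 0 and half 1" in any stored sense; it is a state that yields 0 or 1 with equal probability only at the moment of measurement.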
Qubits typically start life at 0, although they are often then moved into an indeterminate state using a Hadamard gate, which results in a qubit that will read out as 0 half the time and 1 the other half. Other gates are available to flip the state of a qubit by varying amounts and directions, both relative to the 0 and 1 axes and also a third axis that represents phase and provides additional possibilities for representing information. The specific operations and gates available depend on the quantum computer and toolkit you're using.
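To give a rough sense of what a gate does to those amplitudes, here is the Hadamard gate written as a plain 2x2 matrix acting on a two-component state vector. Again, this is a hand-rolled sketch with invented helper names, not a real quantum toolkit:

```python
import math

def apply(gate, state):
    # multiply a 2x2 matrix by a 2-component state vector
    (g00, g01), (g10, g11) = gate
    a, b = state
    return (g00 * a + g01 * b, g10 * a + g11 * b)

s = 1 / math.sqrt(2)
H = ((s, s), (s, -s))  # the Hadamard gate

plus = apply(H, (1.0, 0.0))  # 0 becomes an equal superposition
back = apply(H, plus)        # H is its own inverse: twice restores the 0 state

print([round(abs(x) ** 2, 3) for x in plus])  # [0.5, 0.5]
print([round(abs(x) ** 2, 3) for x in back])  # [1.0, 0.0]
```

The second line illustrates why gates are more than probability shufflers: applying Hadamard twice cancels itself out exactly, something that would be impossible if the intermediate state were merely a classical coin flip.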
Groups of independent qubits, by themselves, aren't enough to create the massive breakthroughs that are promised by quantum computing. The magic really starts to happen when the quantum physics concept of entanglement is implemented. One industry expert likened qubits without entanglement to a very expensive classical computer. Entangled qubits affect each other instantly when measured, no matter how far apart they are, based on what Einstein called "spooky action at a distance." In terms of classic computing, this is a bit like having a logic gate connecting every bit in memory to every other bit.
You can start to see how powerful that might be compared with a traditional computer needing to read and write from each element of memory separately before operating on it. As a result, there are multiple large potential gains from entanglement. The first is a huge increase in the complexity of programming that can be executed, at least for certain types of problems. One that's creating a lot of excitement is the modeling of complex molecules and materials that are very difficult to simulate with classical computers. Another might be innovations in long-distance secure communications, if and when it becomes possible to preserve quantum state over large distances. Programming using entanglement typically starts with the C-NOT gate, which flips the state of an entangled particle if its partner is read out as a 1. This is sort of like a traditional XOR gate, except that it only operates when a measurement is made.
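The C-NOT construction is also easy to sketch. Extending the toy amplitude picture to two qubits (four amplitudes, one each for the outcomes 00, 01, 10, and 11, with hypothetical helper names), a Hadamard followed by a C-NOT produces the classic entangled "Bell pair": each qubit alone looks random, but the two always agree when measured:

```python
import math

# Two-qubit state: four amplitudes for the outcomes 00, 01, 10, 11.
def hadamard_on_first(state):
    a00, a01, a10, a11 = state
    s = 1 / math.sqrt(2)
    return (s * (a00 + a10), s * (a01 + a11), s * (a00 - a10), s * (a01 - a11))

def cnot(state):
    # flip the second qubit wherever the first qubit is 1
    a00, a01, a10, a11 = state
    return (a00, a01, a11, a10)

bell = cnot(hadamard_on_first((1.0, 0.0, 0.0, 0.0)))  # start from 00
probs = {k: round(abs(a) ** 2, 3) for k, a in zip(("00", "01", "10", "11"), bell)}
print(probs)  # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```

The outcomes 00 and 11 each come up half the time, while the mismatched outcomes 01 and 10 never occur at all; that perfect correlation between individually random readouts is exactly what entanglement buys you.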
Superposition and entanglement are impressive physical phenomena, but leveraging them to do computation requires a very different mindset and programming model. You can't simply throw your C code on a quantum computer and expect it to run, and certainly not to run faster. Fortunately, mathematicians and physicists are way ahead of the computer builders here, having developed clever algorithms that take advantage of quantum computers decades before the machines started to appear.
Some of the first quantum algorithms created, and honestly, some of the few useful ones I've found that you can understand without a graduate degree in math, are for secure cryptographic key distribution. These algorithms use the property of entanglement to allow the key creator to send one of each of many pairs of qubits to the recipient. The full explanation is pretty long, but the algorithms rely on the fact that if anyone intercepts and reads one of the entangled qubits en route, the companion qubit at the sender will be affected. By passing some statistics back and forth, the sender and receiver can figure out whether the key was transmitted securely or was hacked on the way.
You may have read that quantum computers could one day break most current cryptography systems. They would be able to do that because there are some very clever algorithms designed to run on quantum computers that can solve a hard math problem, which in turn can be used to factor very large numbers. One of the most famous is Shor's factoring algorithm. The difficulty of factoring large numbers is essential to the security of all public-private key systems, which are the most commonly used today. Current quantum computers don't have nearly enough qubits to attempt the task, but various experts predict they will within the next 3 to 8 years. That leads to some potentially dangerous scenarios, such as one in which only governments and the super-rich have access to the ultra-secure encryption provided by quantum computers.
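The quantum part of Shor's algorithm does exactly one job: it finds the period of a^x mod N quickly. Everything else is ordinary number theory, which we can sketch classically, with a brute-force loop standing in for the quantum period-finding step (function names here are ours, chosen for illustration):

```python
from math import gcd

def period(a, n):
    # smallest r > 0 with a^r = 1 (mod n); this is the step a quantum
    # computer performs exponentially faster than this brute-force loop
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    r = period(a, n)
    if r % 2:
        return None                  # odd period: pick a different a and retry
    y = pow(a, r // 2, n)
    p, q = gcd(y - 1, n), gcd(y + 1, n)
    return (p, q) if 1 < p < n else None

print(shor_classical_part(15, 7))  # (3, 5): the period of 7^x mod 15 is 4
```

For n = 15 and a = 7 the period is 4, so y = 7^2 mod 15 = 4, and gcd(3, 15) and gcd(5, 15) hand back the factors 3 and 5. On a 2048-bit RSA modulus the brute-force `period` loop is hopeless, which is exactly why the quantum speedup matters.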
There are plenty of reasons quantum computers are taking a long time to develop. For starters, you need to find a way to isolate and control a physical object that implements a qubit. That also requires cooling it down to essentially zero (0.015 kelvin, in the case of IBM's Q System One). Even at such a low temperature, qubits are only stable (retaining coherence) for a very short time. That greatly limits the flexibility programmers have in how many operations they can perform before needing to read out a result.
Not only do programs need to be constrained, but they need to be run many times, as current qubit implementations have a high error rate. Additionally, entanglement isn't easy to implement in hardware either. In many designs, only some of the qubits are entangled, so the compiler needs to be smart enough to swap bits around as needed to help simulate a system where all the bits can potentially be entangled.
The good news is that trivial quantum computing programs are actually pretty easy to understand if a bit confusing at first. Plenty of tutorials are available that will help you write your first quantum program, as well as let you run it on a simulator, and possibly even on a real quantum computer.
One of the best places to start is with IBM's QISKit, a free quantum toolkit from IBM Q Research that includes a visual composer, a simulator, and access to an actual IBM quantum computer after you have your code running on the simulator. Rigetti Quantum Computing has also posted an easy intro application, which relies on its toolkit and can be run on its machines in the cloud.
Unfortunately, the trivial applications are just that: trivial. So simply following along with the code in each example doesn't really help you master the intricacies of more sophisticated quantum algorithms. That's a much harder task.
Thanks to William Poole and Sue Gemmell for their thoughtful input.
Also, check out our ExtremeTech Explains series for more in-depth coverage of today's hottest tech topics.
Top image credit: IBM
Quantum technology is a new field of physics and engineering that translates some of the properties of quantum mechanics, especially quantum entanglement, quantum superposition, and quantum tunnelling, into practical applications such as quantum computing, quantum sensors, quantum cryptography, quantum simulation, quantum metrology, and quantum imaging.
Quantum superposition states can be very sensitive to a number of external effects, such as electric, magnetic, and gravitational fields; rotation, acceleration, and time; and therefore can be used to make very accurate sensors. There are many experimental demonstrations of quantum sensing devices, such as the experiments carried out by the Nobel laureate William D. Phillips on using cold-atom interferometer systems to measure gravity, and the atomic clocks used by many national standards agencies around the world to define the second.
Recent efforts are being made to engineer quantum sensing devices, so that they are cheaper, easier to use, more portable, lighter and consume less power. It is believed that if these efforts are successful, it will lead to multiple commercial markets, such as for the monitoring of oil and gas deposits, or in construction.
Quantum-secure communication comprises methods that are expected to be 'quantum safe' in the event of quantum computing systems that could break current cryptography systems. One significant component of a quantum-secure communication system is expected to be quantum key distribution, or 'QKD': a method of transmitting information using entangled light in a way that makes any interception of the transmission obvious to the user.
Quantum computers are the ultimate quantum network, combining 'quantum bits' or 'qubits', which are devices that can store and process quantum data (as opposed to binary data), with links that can transfer quantum information between qubits. In doing this, quantum computers are predicted to run certain algorithms significantly faster than even the largest classical computer available today.
Quantum computers are expected to have a number of significant uses in computing fields such as optimization and machine learning. They are famous for their expected ability to carry out 'Shor's algorithm', which can be used to factorise the large numbers that are mathematically important to secure data transmission.
There are many devices available today that are fundamentally reliant on the effects of quantum mechanics. These include laser systems, transistors and semiconductor devices, and other devices such as MRI imagers. These are often referred to as belonging to the 'first quantum revolution'; the UK Defence Science and Technology Laboratory (Dstl) grouped them as 'quantum 1.0', that is, devices that rely on the effects of quantum mechanics. Quantum technologies are often described as the 'second quantum revolution' or 'quantum 2.0'. These are generally regarded as a class of device that actively creates, manipulates, and reads out quantum states of matter, often using the quantum effects of superposition and entanglement.
The field of quantum technology was first outlined in a 1997 book by Gerard J. Milburn, which was then followed by a 2003 article by Jonathan P. Dowling and Gerard J. Milburn, as well as a 2003 article by David Deutsch. The field of quantum technology has benefited immensely from the influx of new ideas from the field of quantum information processing, particularly quantum computing. Disparate areas of quantum physics, such as quantum optics, atom optics, quantum electronics, and quantum nanomechanical devices, have been unified under the search for a quantum computer and given a common language, that of quantum information theory.
The Quantum Manifesto was signed by 3,400 scientists and officially released at the 2016 Quantum Europe Conference, calling for a quantum technology initiative to coordinate between academia and industry, to move quantum technologies from the laboratory to industry, and to educate quantum technology professionals in a combination of science, engineering, and business.
The European Commission responded to that manifesto with the Quantum Technology Flagship, a €1 billion, 10-year-long megaproject, similar in size to earlier European Future and Emerging Technologies Flagship projects such as the Graphene Flagship and Human Brain Project. China is building the world's largest quantum research facility with a planned investment of 76 billion yuan (approx. €10 billion). The USA is preparing a national initiative.
From 2010 onwards, multiple governments have established programmes to explore quantum technologies, such as the UK National Quantum Technologies Programme, which created four quantum 'hubs'; the Centre for Quantum Technologies in Singapore; and QuTech, a Dutch centre working to develop a topological quantum computer.
In the private sector, there have been multiple investments in quantum technologies by large companies. Examples include Google's partnership with the John Martinis group at UCSB, multiple partnerships with the Canadian quantum computing company D-Wave Systems, and investment by many UK companies within the UK quantum technologies programme.
Source: Quantum technology – Wikipedia
IBM announced the world's first commercially available quantum computer at CES 2019. Well. Kinda.
Called IBM Q System One, the computer is a glass box the size of a van with a sleek black cylinder hanging from the ceiling. Yet you won't find it in your garage, or in the offices of your nearest Fortune 500 company. Those willing to pay to harness the power of the 20-qubit machine will access IBM Q System One over the cloud. The hardware will be housed at IBM's Q Computation Center, set to open this year in Poughkeepsie, New York.
Reception has proven mixed. While the initial wave of news was positive, some have received the announcement with skepticism. Their points are valid. While IBM's press release touts that Q System One enables "universal approximate superconducting quantum computers to operate beyond the confines of the research lab," it will remain under IBM's watchful eye. And IBM already offered cloud access to quantum computers at the Thomas J. Watson Research Center in Yorktown, New York.
In effect, IBM Q System One is an expansion of an existing cloud service, not a new product. Yet that doesn't lessen its impact.
Quantum computing faces many massive scientific challenges. Q System One, with 20 qubits, is nowhere near capable of beating classical computers, even in tasks that will theoretically benefit from quantum computing. No universal quantum computer exists today, and no one knows when one will arrive.
Yet building a useful quantum computer will only be half the battle. The other half is learning how to use it. Quantum computing, once it arrives, will fundamentally change what computers can accomplish. Engineers will tackle the challenge of building a quantum computer that can operate in a normal environment, while programmers must learn to write software for hardware that computes in ways alien to binary computers.
Companies can't rely on a "build it, and they will come" philosophy. That might suffice so long as quantum computing remains in the realm of research, but it won't work as the quantum realm bumps up against the general public. Quantum will need a breakthrough device that wows everyone at a glance. IBM Q System One is such a device.
Impact is what IBM Q System One was meant to deliver from the start. Robert Sutor, IBM's Vice President of Q Strategy and Ecosystem, said as much, telling Digital Trends that "[we] have to step back and say, 'What have we created so far?' It's amazing what we've created so far, but is it a system? Is it a well-integrated system? Are all the individual parts optimized and working together as best as possible?"
The answer, up until recently, was no. IBM's quantum computers were not meant to be used outside of a lab and were built with no regard for aesthetics or ease of use. Q System One changes that, and in doing so, it could entirely change how the system, and quantum computers in general, are perceived.
This isn't a new strategy for IBM. As Sutor will quickly point out, the company took a similar approach when it built computer mainframes in the 1960s and '70s. "With all the focus now, people going back to mid-century modern, IBM has a long history of design," he told Digital Trends. "We are fully coming back to that." Other examples of this tactic include Deep Blue's famous chess match and the ThinkPad, which redefined how consumers thought of portable computers.
Q System One might not be a major leap forward for the science of quantum computing, but it will give the field the standard bearer it needs. It's already making quantum feel less intimidating for those of us who lack a Ph.D. in quantum physics.
Source: CES 2019: IBM's Q System One Is the Rock Star Quantum …
A thorough exposition of quantum computing and the underlying concepts of quantum physics, with explanations of the relevant mathematics and numerous examples.
The combination of two of the twentieth century’s most influential and revolutionary scientific theories, information theory and quantum mechanics, gave rise to a radically new view of computing and information. Quantum information processing explores the implications of using quantum mechanics instead of classical mechanics to model information and its processing. Quantum computing is not about changing the physical substrate on which computation is done from classical to quantum but about changing the notion of computation itself, at the most basic level. The fundamental unit of computation is no longer the bit but the quantum bit or qubit.
This comprehensive introduction to the field offers a thorough exposition of quantum computing and the underlying concepts of quantum physics, explaining all the relevant mathematics and offering numerous examples. With its careful development of concepts and thorough explanations, the book makes quantum computing accessible to students and professionals in mathematics, computer science, and engineering. A reader with no prior knowledge of quantum physics (but with sufficient knowledge of linear algebra) will be able to gain a fluent understanding by working through the book.
Hardcover (out of print): ISBN 9780262015066, 392 pp., 7 in x 9 in, 3 graphs, 79 figures, 2 tables. March 2011.
Paperback: $39.00 | £30.00, ISBN 9780262526678, 392 pp., 7 in x 9 in, 3 graphs, 79 figures, 2 tables. August 2014.
Authors: Eleanor G. Rieffel is a research scientist at NASA Ames Research Center. Wolfgang H. Polak is a computer science consultant.
Source: Quantum Computing | The MIT Press
IBM unveiled the world's first universal approximate quantum computing system installed outside of a research lab at CES earlier this week, and with it, the next era of computing.
The 20-qubit IBM Q System One represents the first major leap for quantum computers of 2019, but before we get into the technical stuff, let's take a look at this thing.
All we can say is: wowzah! When can we get a review unit?
The commitment to a fully functional yet aesthetically pleasing design is intriguing, especially considering that, just last year, pundits claimed quantum computing was a dead-end technology.
To make the first integrated quantum computer designed for commercial use outside of a lab both beautiful and functional, IBM enlisted the aid of Goppion, the company responsible for some of the world's most famous museum-quality display cases, along with Universal Design Studio and Map Project Office. The result is not only (arguably) a scientific first, but a stunning machine to look at.
This isn't just about looks. That box represents a giant leap in the field.
It's hard to overstate the importance of bringing quantum computers outside of laboratories. Some of the biggest obstacles to universal quantum computing have been engineering-related. It isn't easy to manipulate the fabric of the universe (or, at a minimum, observe it), and the machines that attempt it typically require massive infrastructure.
In order to decouple a quantum system from its laboratory lifeline, IBM had to figure out how to conduct super-cooling (necessary for quantum computation under the current paradigm) in a box. This was accomplished through painstakingly developed cryogenic engineering.
Those familiar with the company's history might recall that, back in the 1940s, IBM's classical computers took up an entire room. Eventually, those systems started shrinking. Now they fit on your wrist and have more computational power than all the computers from the mainframe era put together.
It sure looks like history is repeating itself:
TNW asked Bob Wisnieff, IBM's Quantum Computing CTO, if today's progress reminded him of that transition. He told us:
In some respects, quantum computing systems are at a similar stage as the mainframes of the 1960s. The big difference is the cloud access, in a couple of ways:
Imagine if everyone in the '60s had five to ten years to explore the mainframe's hardware and programming when it was essentially still a prototype. That's where we are with quantum computing.
And now, in the IBM Q System One, we have a quantum system that is stable, reliable, and continuously available for commercial use in an IBM Cloud datacenter.
The IBM Q System One isn't the most powerful quantum computer out there. It's not even IBM's most powerful. But it's the first one that could, technically, be installed on-site for a commercial customer. It won't be, however. At least not for the time being.
Instead, it can be accessed via the cloud as part of the company's quantum computing Q initiative.
For more information about IBM's Q System One, visit the official website. And don't forget to check out TNW's beginner's guide to quantum computers.
Source: IBM thinks outside of the lab, puts quantum computer in a box
At CES, IBM today announced its first commercial quantum computer for use outside of the lab. The 20-qubit system combines into a single package the quantum and classical computing parts it takes to use a machine like this for research and business applications. That package, the IBM Q system, is still huge, of course, but it includes everything a company would need to get started with its quantum computing experiments, including all the machinery necessary to cool the quantum computing hardware.
While IBM describes it as "the first fully integrated universal quantum computing system designed for scientific and commercial use," it's worth stressing that a 20-qubit machine is nowhere near powerful enough for most of the commercial applications people envision; those will require a quantum computer with more qubits, and qubits that stay useful for more than 100 microseconds. It's no surprise, then, that IBM stresses that this is a first attempt and that the systems are "designed to one day tackle problems that are currently seen as too complex and exponential in nature for classical systems to handle." Right now, we're not quite there yet, but the company also notes that these systems are upgradable (and easy to maintain).
"The IBM Q System One is a major step forward in the commercialization of quantum computing," said Arvind Krishna, senior vice president of Hybrid Cloud and director of IBM Research. "This new system is critical in expanding quantum computing beyond the walls of the research lab as we work to develop practical quantum applications for business and science."
More than anything, though, IBM seems to be proud of the design of the Q systems. In a move that harkens back to Cray's supercomputers with their expensive couches, IBM worked with design studios Map Project Office and Universal Design Studio, as well as Goppion, the company that has built, among other things, the display cases that house the U.K.'s crown jewels and the Mona Lisa. IBM clearly thinks of the Q system as a piece of art and, indeed, the final result is quite stunning. It's a nine-foot-tall and nine-foot-wide airtight box, with the quantum computing chandelier hanging in the middle and all of the parts neatly hidden away.
If you want to buy yourself a quantum computer, you'll have to work with IBM, though. It won't be available with free two-day shipping on Amazon anytime soon.
In related news, IBM also announced the IBM Q Network, a partnership with ExxonMobil and research labs like CERN and Fermilab that aims to build a community that brings together the business and research interests to explore use cases for quantum computing. The organizations that partner with IBM will get access to its quantum software and cloud-based quantum computing systems.
Source: IBM unveils its first commercial quantum computer