Category Archives: Quantum Computing

Microsoft has formed a coalition to promote quantum computing …

Microsoft and some big research institutions are hoping to turn the Pacific Northwest into a hotbed for quantum computing.

On Monday, Microsoft Quantum, the company’s research team devoted to the field, announced that it’s getting together with the Pacific Northwest National Laboratory and the University of Washington to form a coalition called the Northwest Quantum Nexus. The coalition plans to promote the development of quantum computing in the Pacific Northwest region, as well as in parts of Canada.

The partners are also hosting a two-day summit at the University of Washington on Monday and Tuesday that will bring together researchers and officials from universities, government agencies, and businesses. The goal is to encourage attendees to collaborate on quantum-computing projects and research.

“We’re really at a moment when many businesses are starting to think about the promise of quantum information sciences and the promise of quantum computing for solving the world’s most challenging problems,” Krysta Svore, general manager of quantum software at Microsoft, told Business Insider.

Standard computers such as PCs and smartphones process and store information in the form of binary bits, either zeros or ones. Quantum computers, by contrast, process and store data as “qubits,” which can hold the values of zero and one simultaneously. That design difference could allow them to perform exponentially more calculations in a given amount of time than traditional computers, giving them the potential to solve immensely more complex problems.

Because of that, quantum computing is considered one of the most promising new technologies, with potential applications in areas ranging from discovering new drugs to cryptography to making stock predictions to calculating more efficient routes for airlines or the military. But the technology is still in its early stages, and analysts don’t expect quantum computers to outperform traditional ones for another five to ten years.

In December, Congress passed and the president signed the National Quantum Initiative Act, which provides $1.2 billion for research in the field. Since then, there’s been increased interest from government agencies and businesses, said Nathan Baker, a director at the Pacific Northwest National Laboratory.

Krysta Svore, general manager of quantum software at Microsoft, is helping lead the company’s efforts in the field. (Microsoft Photo)

“The Northwest is known for its outstanding physics and outstanding work in computing,” Baker said. “We need to be thinking about how can we deliberately move it forward to do something bigger.”

Although business and investor interest in quantum computing is growing, there’s a shortage of people with skills in the field, Svore and Baker said. That’s something they hope the Northwest Quantum Nexus will help address.

“There’s a huge gap between quantum information sciences and all of the skills you need to bring together to make it a functioning technological platform,” Baker said. “We’re going to have to be deliberate in how to build that out.”

In addition to helping form the Nexus coalition, Microsoft and the University of Washington are teaming up to teach students how to program quantum computers.

“Microsoft’s focus is producing a scalable quantum computer and bringing that forward for our customers and for our future,” Svore said. “To do that, we need to be able to accelerate the progress in quantum computing. We need to be able to educate a whole world of quantum developers.”

Microsoft is developing both quantum computing hardware and software. Its effort focuses on fragmenting electrons to store information in multiple places at once.

That’s different from the approach of companies such as IBM, Intel and Google, which are working on creating quantum computers that store data using superconducting circuits.

“Having devoted my life to this field, I’m overwhelmingly giddy with the prospect of the type of output we’ll see with the Northwest Quantum Nexus Summit,” Svore said. “I really do believe this can start the quantum revolution.”

Read the original post:
Microsoft has formed a coalition to promote quantum computing …

Quantum computing for everyone | Michael Nielsen

Can you give me a simple, concrete explanation of how quantum computers work?

I've been asked this question a lot. I worked on quantum computing full time for 12 years, wrote 60 or so papers, and co-authored the standard text. But for many years the question stumped me. I had several pat answers, but none really satisfied me or my questioners.

It turns out, though, that there is a satisfying answer to the question, which anyone can understand if they're willing to spend some time concentrating hard.

To understand the answer, let's back up and think first about why big media outlets like the New York Times and the Economist regularly run stories about quantum computers.

The reason is that quantum computer scientists believe quantum computers can solve problems that are intractable for conventional computers. That is, it's not that quantum computers are like regular computers, but smaller and faster. Rather, quantum computers work according to principles entirely different from those of conventional computers, and using those principles can solve problems whose solution will never be feasible on a conventional computer.

In everyday life, all our experience is with objects which can be directly simulated by a conventional computer. We don't usually think about this fact, but movie-makers rely on it, and we take it for granted: special effects are basically just rough computer simulations of events that would be more expensive for the movie makers to create in real life than they are to simulate inside a computer. Much more detailed simulations are used by companies like Boeing to test designs for their latest aircraft, and by Intel to test designs for their latest chips. Everything you've ever seen or done in your life, from driving a car to walking in the park to cooking a meal, can be directly simulated using a conventional computer.

Because of this, when we think in concrete terms we invariably think about things that can be directly simulated on a conventional computer.

Now, imagine for the sake of argument that I could give you a simple, concrete explanation of how quantum computers work. If that explanation were truly correct, then it would mean we could use conventional computers to simulate all the elements in a quantum computer, giving us a way to solve those supposedly intractable problems I mentioned earlier.

Of course, this is absurd! What's really going on is that no simple concrete explanation of quantum computers is possible. Rather, there is an intrinsic quantum gap between how quantum computers work, and our ability to explain them in simple concrete terms. This quantum gap is what made it hard for me to answer people's requests for a concrete explanation. The right answer to such requests is that quantum computers cannot be explained in simple concrete terms; if they could be, quantum computers could be directly simulated on conventional computers, and quantum computing would offer no advantage over such computers. In fact, what is truly interesting about quantum computers is understanding the nature of this gap between our ability to give a simple concrete explanation and what's really going on.

This account of quantum computers is distinctly at odds with the account that appears most often in the mainstream media. In that account, quantum computers work by exploiting what is called quantum parallelism. The idea is that a quantum computer can simultaneously explore many possible solutions to a problem. Implicitly, such accounts promise that it's then possible to pick out the correct solution to the problem, and that it's this which makes quantum computers tick.

Quantum parallelism is an appealing story, but it's misleading. The problem comes in the second part of the story: picking out the correct solution. Most of the time this turns out to be impossible. This isn't just my opinion; in some cases you can mathematically prove it's impossible. In fact, the problem of figuring out how to extract the solution, which is glossed over in mainstream accounts, is really the key problem. It's here that the quantum gap lies, and glossing over it does nothing to promote genuine understanding.

None of my discussion to now actually explains how quantum computers work. But it's a good first step to understanding, for it prepares you to expect a less concrete explanation of quantum computers than you might at first have hoped for. I won't give a full description here, but I will sketch what's going on, and give you some suggestions for further reading.

Quantum computers are built from quantum bits, or qubits [1], which are the quantum analogue of the bits which make up conventional computers. Here's a magnified picture of a baby quantum computer made up of three Beryllium atoms, which are used to store three qubits:

The atoms are held in place using an atom trap, which you can't see because it's out of frame, but which surrounds the atoms, holding them suspended in place using electric and magnetic fields, similar to the way magnets can be used to levitate larger objects in the air.

The atoms in the picture are about three micrometers apart, which means that if you laid a million end to end, they wouldn't quite span the length of a living room. Very fine human hair is about 20 micrometers in diameter; it'd pretty much cover the width of this photo.

The atoms themselves are about a thousand times smaller than the spacing between the atoms. They look a lot bigger in the picture, and the reason is interesting. Although the atoms are very small, the way the picture was created was by shining laser light on the atoms to light them up, and then taking a photograph. The particles making up the laser light are much bigger than the atoms, which makes the picture come out all blurry; the photo above is basically a very blurry photograph of the atoms, which is why they look so much bigger than they really are.

I called this a baby quantum computer because it has only three qubits, but in fact it's pretty close to the state of the art. It's hard to build quantum computers, and adding extra qubits turns out to be tricky. Exactly who holds the record for the most qubits depends on who you ask, because different people have different ideas about what standards need to be met to qualify as a genuine quantum computer. The current consensus for the record is about 5-10 qubits.

Okay, a minor alert is in order. I've tried to keep this essay as free from mathematics as possible, but the rest of the essay will use a little high-school mathematics. If this is the kind of thing that puts you off, do not be alarmed! You should be able to get the gist even if you skip over the mathematical bits.

How should we describe what's inside a quantum computer? We can give a bare-bones description of a conventional computer by listing out the state of all its internal components. For example, its memory might contain the bits 0,0,1,0,1,1, and so on. It turns out that a quantum computer can also be described using a list of numbers, although how this is done is quite different. If our quantum computer has n qubits (in the example pictured above n = 3), then it turns out that the right way to describe the quantum computer is using a list of 2^n numbers. It's helpful to give these numbers labels: the first is s1, the second s2, and so on, so the entire list is s1, s2, …, s_{2^n}.

What are these numbers, and how are they related to the n qubits in our quantum computer? This is a reasonable question; in fact, it's an excellent question! Unfortunately, the relationship is somewhat indirect. For that reason, I'm not going to describe it in detail here, although you can get a better picture from some of the further reading I describe below. For us, the thing to take away is that describing n qubits requires 2^n numbers.

One result of this is that the amount of information needed to describe the qubits gets big really quickly. More than a million numbers are needed to describe a 20-qubit quantum computer! The contrast with conventional computers is striking: a conventional 20-bit computer needs only 20 numbers to describe it. The reason is that each added qubit doubles the amount of information needed to describe the quantum computer [2]. The moral is that quantum computers get complex far more quickly than conventional computers as the number of components rises.
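
To make that growth concrete, here is a minimal sketch in Python of how the length of that list of numbers (and the memory needed to hold it) grows with the number of qubits; the 16-bytes-per-number figure is simply the usual size of a double-precision complex number, an assumption for illustration.

```python
# Minimal sketch: describing n qubits takes a list of 2^n numbers.
# The 16 bytes per entry is an illustrative assumption (one double-precision complex number).
for n in (1, 2, 3, 10, 20, 30):
    entries = 2 ** n              # length of the list s1, s2, ..., s_{2^n}
    memory = entries * 16         # rough bytes needed just to store the list
    print(f"{n:2d} qubits -> {entries:>13,} numbers (~{memory / 1e9:.3f} GB)")
# A conventional n-bit register, by contrast, is described by just n numbers.
```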

The way a quantum computer works is that quantum gates are applied to the qubits making up the quantum computer. This is a fancy way of saying that we do things to the qubits. The exact details vary quite a bit in different quantum computer designs. In the example I showed above, it basically involves manipulating the atoms by shining laser light on them. Quantum gates usually involve manipulating just one or two qubits at a time; some quantum computer designs involve more at the same time, but that's a luxury; it's not actually necessary. A quantum computation is just a sequence of these quantum gates done in a particular order. This sequence is called a quantum algorithm; it plays the same role as a program for a conventional computer.

The effect of a quantum gate is to change the description s1, s2, … of the quantum computer. Let me show you a specific example to make this a bit more concrete. There's a particular type of quantum gate called a Hadamard gate. This type of gate affects just one qubit. If we apply a Hadamard gate to the first qubit in a quantum computer, the effect is to produce a new description for the quantum computer with numbers t1, t2, …, given by

t1 = (s1 + s_{2^(n-1)+1}) / √2

t2 = (s2 + s_{2^(n-1)+2}) / √2,

t3 = (s3 + s_{2^(n-1)+3}) / √2,

and so on, down through all 2^n different numbers in the description. The details aren't important; the salient point is that even though we've manipulated just one qubit, the way we describe the quantum computer changes in an extremely complicated way. It's bizarre: by manipulating just a single physical object, we reshuffle and recombine the entire list of 2^n numbers!
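
If you like seeing such rules as code, here is a short sketch of that update, assuming the state is stored as a plain list of 2^n complex numbers and that the "first" qubit is the one whose value selects the top or bottom half of the list; the minus signs in the second half are the part of the Hadamard rule the text glosses over with "and so on."

```python
import numpy as np

def hadamard_on_first_qubit(s):
    # Sketch of the update rule above. Assumes s is the list s1..s_{2^n} and that
    # the "first" qubit pairs entry i with entry i + 2^(n-1).
    s = np.asarray(s, dtype=complex)
    half = len(s) // 2
    t = np.empty_like(s)
    t[:half] = (s[:half] + s[half:]) / np.sqrt(2)   # t_i = (s_i + s_{2^(n-1)+i}) / sqrt(2)
    t[half:] = (s[:half] - s[half:]) / np.sqrt(2)   # the second half picks up a minus sign
    return t

# Example: a 3-qubit computer (a list of 2^3 = 8 numbers) with s1 = 1 and the rest 0.
s = np.zeros(8, dtype=complex)
s[0] = 1.0
print(hadamard_on_first_qubit(s))   # two entries become 1/sqrt(2); the rest stay 0
```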

It's this reshuffling and recombination of all 2^n numbers that is the heart of the matter. Imagine we were trying to simulate what's going on inside the quantum computer using a conventional computer. The obvious way to do this is to track the way the numbers s1, s2, … change as the quantum computation progresses. The problem with doing this is that even a single quantum gate can involve changes to all 2^n different numbers. Even when n is quite modest, 2^n can be enormous. For example, when n = 300, 2^n is larger than the number of atoms in the Universe. It's just not feasible to track this many numbers on a conventional computer.

You should now be getting a feeling for why quantum computer scientists believe it is infeasible for a conventional computer to simulate a quantum computer. What's really clever, and not so obvious, is that we can turn this around, and use the quantum manipulations of all these exponentially many numbers to solve interesting computational problems.

I won't try to tell that story here. But if you're interested in learning more, here's some reading you may find worthwhile.

In an earlier essay I explain why conventional ways of thinking simply cannot give a complete description of the world, and why quantum mechanics is necessary. Going a little further, an excellent lay introduction to quantum mechanics is Richard Feynman's QED: The Strange Theory of Light and Matter. It requires no mathematical background, but manages to convey the essence of quantum mechanics. If you're feeling more adventurous still, Scott Aaronson's lecture notes are a fun introduction to quantum computing. They contain a fair bit of mathematics, but are written so you can get a lot out of them even if some of the mathematics is inaccessible. Scott and Dave Bacon run excellent blogs that occasionally discuss quantum computing, and their blogrolls are a good place to find links to other quantum bloggers.

Finally, if you've enjoyed this essay, you may enjoy some of my other essays, or perhaps like to subscribe to my blog. Thanks for reading!

Thanks to Jen Dodd and Kate Nielsen for providing feedback that greatly improved early drafts of this essay.

Michael Nielsen is a writer living near Toronto, and working on a book about The Future of Science. If you'd like to be notified when the book is available, please send a blank email to the.future.of.science@gmail.com with the subject "subscribe book". You'll be emailed when the book is published; your email address will not be used for any other purpose.

[1] Ben Schumacher, who coined the term qubit, runs an occasional blog.

[2] Motivated by this observation, in my PhD thesis I posed a tongue-in-cheek quantum analogue of Moore's Law: to keep pace with conventional computers, quantum computers need only add a single qubit every two years. So far, things are neck and neck.

Read more here:
Quantum computing for everyone | Michael Nielsen

Ask a Techspert: What is quantum computing? – blog.google

Editor's Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a new series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn't meant to be comprehensive, but just enough to make you sound smart at a dinner party.

Quantum computing sounds like something out of a sci-fi movie. But it's real, and scientists and engineers are working to make it a practical reality. Google engineers are creating chips the size of a quarter that could revolutionize the computers of tomorrow. But what is quantum computing, exactly?

The Keyword's very first Techspert is Marissa Giustina, a research scientist and quantum electronics engineer in our Santa Barbara office. We asked her to explain how this emerging technology actually works.

What do we need to know about conventional computers when we think about quantum computers?

At first glance, information seems like an abstract concept. Sure, information can be stored by writing and drawing; humans figured that out a long time ago. Still, there doesn't seem to be anything physically tangible about the process of thinking.

Enter the personal computer. It's a machine, a purely physical object, that manipulates information. So how does it do that, if it's a physical machine and information is abstract? Well, information is actually physical. Computers store and process rich, detailed information by breaking it down. At a low level, a computer represents information as a series of bits. Each bit can take a value of either [0] or [1], and physically, these bits are tiny electrical switches that can be either open [0] or closed [1]. Emails, photos and videos on YouTube are all represented by long sequences of bits: long rows of tiny electrical switches inside a computer.

The computer computes by manipulating those bits, like changing between [0] and [1] (opening or closing a switch), or checking whether two bits have equal or opposite values and setting another bit accordingly. These bit-level manipulations are the basis of even the fanciest computer programs.
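
As a toy illustration of those bit-level manipulations (the variable names below are hypothetical, nothing Google-specific), flipping a bit and comparing two bits look like this in Python:

```python
a, b = 0, 1                  # two example "switches" (hypothetical values)
a = a ^ 1                    # flip a bit: XOR with 1 turns 0 into 1 (and 1 into 0)
match = 1 if a == b else 0   # check whether two bits hold equal values, store the answer in another bit
print(a, b, match)           # -> 1 1 1
```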

Ones and zeros, like “The Matrix.” Got it. So then what is a quantum computer?

A quantum computer is a machine that stores and manipulates information as quantum bits, or qubits, instead of the classical bits we were talking about before. Quantum bits are good at storing and manipulating a different kind of information than classical bits, since they are governed by rules of quantum mechanics, the same rules that govern the behavior of atoms and molecules.

What's the difference between a bit and a qubit?

This is where it gets more complicated. Remember that a classical bit is just a switch: it has only two possible configurations, [open] or [closed]. A qubit's configuration has a lot more possibilities. Physicists often think of a qubit like a little globe, with [0] at the north pole and [1] at the south pole, and the qubit's configuration is represented by a point on the globe. In manipulating the qubit, we can send any point on the globe to any other point on the globe.

At first, it sounds like a qubit can hold way more information than a regular bit. But there's a catch: the rules of quantum mechanics restrict what kinds of information we can get out of a qubit. If we want to know the configuration of a classical bit, we just look at it, and we see that the switch is either open [0] or closed [1]. If we want to know the configuration of a qubit, we measure it, but the only possible measurement outcomes are [0] (north pole) or [1] (south pole). A qubit that was situated on the equator will measure as [0] 50 percent of the time and [1] the other 50 percent of the time. That means we have to repeat measurements many times in order to learn about a qubit's actual configuration.
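
A quick simulation makes the repeat-the-measurement point concrete. The sketch below assumes an equator qubit really does give [0] with 50 percent probability, and simply counts outcomes over many runs:

```python
import random

p_zero = 0.5        # assumed probability of reading [0] for a qubit sitting on the equator
trials = 10_000
zeros = sum(1 for _ in range(trials) if random.random() < p_zero)
print(f"read [0] in {zeros} of {trials} measurements (~{zeros / trials:.1%})")
# Only by repeating the measurement many times do we learn where the qubit sat on the globe.
```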

Here is the original post:
Ask a Techspert: What is quantum computing? – blog.google

IBM hits quantum computing milestone, may see ‘Quantum …

IBM is outlining another milestone in quantum computing, its highest Quantum Volume to date, and projects that practical uses, or so-called Quantum Advantage, may be a decade away.

Big Blue, which will outline the scientific milestone at the American Physical Society March Meeting, made a bit of a splash at CES 2019 with a display of its Q System quantum computer and has been steadily showing progress on quantum computing.

In other words, that quantum computing buying guide for technology executives may take a while. Quantum Volume is a performance metric that indicates progress in the pursuit of Quantum Advantage. Quantum Advantage refers to the point where quantum applications deliver significant advantages over classical computers.

Quantum Volume is determined by the number of qubits, connectivity, and coherence time, plus accounting for gate and measurement errors, device cross talk, and circuit software compiler efficiency.

IBM said its Q System One, which has a 20-qubit processor, produced a Quantum Volume of 16, double the current IBM Q, which has a Quantum Volume of 8. IBM also said the Q System One has some of the lowest error rates IBM has measured.

That progress is notable, but practical broad use cases are still years away. IBM said Quantum Volume would need to double every year to reach Quantum Advantage within the next decade; faster growth in Quantum Volume would pull that timeline in. IBM has doubled the power of its quantum computers annually since 2017.
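
As a back-of-the-envelope check of that projection (the doubling rate and the decade horizon are IBM's figures; the arithmetic below is only illustrative), a Quantum Volume of 16 doubling once a year reaches 16,384 after ten years:

```python
qv = 16                 # Quantum Volume reported for the Q System One
for year in range(10):  # one doubling per year, per IBM's stated pace
    qv *= 2
print(qv)               # 16 * 2**10 = 16,384 after a decade
```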

Once Quantum Advantage is hit, there would be new applications, more of an ecosystem and real business use cases. Consumption of quantum computing would still likely be delivered via cloud computing since the technology has some unique characteristics that make a traditional data center look easy. IBM made its quantum computing technology available in 2016 via a cloud service and is working with partners to find business and science use cases.

Here's how quantum computing and classical computing differ, via our recent primer on the subject.

Every classical electronic computer exploits the natural behavior of electrons to produce results in accordance with Boolean logic (for any two specific input states, one certain output state). Here, the basic unit of transaction is the binary digit (“bit”), whose state is either 0 or 1. In a conventional semiconductor, these two states are represented by low and high voltage levels within transistors.

In a quantum computer, the structure is radically different. Its basic unit of registering state is the qubit, which at one level also stores a 0 or 1 state (actually 0 and/or 1). Instead of transistors, a quantum computer obtains its qubits by bombarding atoms with electrical fields at perpendicular angles to one another, the result being to line up the ions but also keep them conveniently and equivalently separated. When these ions are separated by just enough space, their orbiting electrons become the home addresses, if you will, for qubits.

Go here to read the rest:
IBM hits quantum computing milestone, may see ‘Quantum …

Its Time You Learned About Quantum Computing | WIRED

You've probably heard of quantum computing. Do you understand it? Unlikely! It's time that you did.

The basic idea (tap into quantum physics to make immensely powerful computers) isn't new. Nobel-winning physicist Richard Feynman is generally credited with first suggesting that in 1982. But in the past few years the concept has started to become more real.

Google, IBM, Intel, Microsoft, and a pack of startups are all building and testing quantum computing hardware and software. They're betting that these machines will lead to breakthroughs in areas such as chemistry, materials science, logistical planning such as in factories, and perhaps artificial intelligence.

It will probably be years before the technology is mature enough to be broadly practical. But the potential gains are so large that companies such as JP Morgan and Daimler are already experimenting with early machines from IBM. And you don't have to be a giant bank or auto maker to play with quantum computing. Both IBM and Rigetti Computing, a startup that opened its own quantum computing factory last year, have launched services to help developers learn about and practice with quantum computing code.

So how do they work? You may have heard that the normal rules of reality don't always apply in the world of quantum mechanics. A phenomenon known as a quantum superposition allows things to kinda, sorta, be in two places at once, for example. In a quantum computer, that means bits of data can be more than just 1 or 0, as they are in a conventional computer; they can also be something like both at the same time.

When data is encoded into effects like those, some normal limitations on conventional computers fall away. That allows a quantum computer to be much faster on certain tricky problems. Want a full PhD, or third-grade, explanation? Watch the video above.

More here:
Its Time You Learned About Quantum Computing | WIRED

Microsofts quantum computing network takes a giant leap …

Microsoft is focusing on the development of quantum computers that take advantage of cryogenically cooled nanowires. (Microsoft Photo)

REDMOND, Wash. Quantum computing may still be in its infancy, but the Microsoft Quantum Network is all grown up, fostered by in-house developers, research affiliates and future stars of the startup world.

The network made its official debut today here at Microsoft's Redmond campus, during a Startup Summit that laid out the company's vision for quantum computing and introduced network partners to Microsoft's tools of the quantum trade.

Quantum computing stands in contrast to the classical computer technologies that have held sway for more than a half-century. Classical computing is based on the ones and zeroes of bit-based processing, while quantum computing takes advantage of the weird effects of quantum physics. Quantum bits, or qubits, needn't represent a one or a zero, but can represent multiple states during computation.

The quantum approach should be able to solve computational problems that can't easily be solved using classical computers, such as modeling molecular interactions or optimizing large-scale systems. That could open the way to world-changing applications, said Todd Holmdahl, corporate vice president of Microsoft's Azure Hardware Systems Group.

"We're looking at problems like climate change," Holmdahl said. "We're looking at solving big food production problems. We think we have opportunities to solve problems around materials science, personal health care, machine learning. All of these things are possible and obtainable with a quantum computer. We have been talking around here that we're at the advent of the quantum economy."

Representatives from 16 startups were invited to this week's Startup Summit, which features talks from Holmdahl and other leaders of Microsoft's quantum team as well as demos and workshops focusing on Microsoft's programming tools. (The closest startup to Seattle is 1QBit, based in Vancouver, B.C.)

Over the past year and a half, Microsoft has released a new quantum-friendly programming language called Q# (Q-sharp) as part of its Quantum Development Kit, and has worked with researchers at Pacific Northwest National Laboratory and academic institutions around the world to lay the technical groundwork for the field.

A big part of that groundwork is the development of a universal quantum computer, based on a topological architecture that builds error-correcting mechanisms right into the cryogenically cooled, nanowire-based hardware. Cutting down on the error-producing noise in quantum systems will be key to producing a workable computer.

"We believe that our qubit equals about 1,000 of our competition's qubits," Holmdahl said.

There's lots of competition in the quantum computing field nowadays: IBM, Google and Intel are all working on similar technologies for a universal quantum computer, while Canada's D-Wave Systems is taking advantage of a more limited type of computing technology known as quantum annealing.

This week, D-Wave previewed its plans for a new type of computer topology that it said would reduce quantum noise and more than double the qubit count of its existing platform, from 2,000 linked qubits to 5,000.

But the power of quantum computing shouldn't be measured merely by counting qubits. The efficiency of computation and the ability to reduce errors can make a big difference, said Microsoft principal researcher Matthias Troyer.

For example, a standard approach to simulating the molecular mechanism behind nitrogen fixation for crops could require 30,000 years of processing time, he said. But if the task is structured to enable parallel processing and enhanced error correction, the required runtime can be shrunk to less than two days.

"Quantum software engineering is really as important as the hardware engineering," Troyer said.

Julie Love, director of Microsoft Quantum Business Development, said that Microsoft will start out offering quantum computing through Microsoft's Azure cloud-based services. Not all computational problems are amenable to the quantum approach: It's much more likely that an application will switch between classical and quantum processing, and therefore between classical tools such as the C# programming language and quantum tools such as Q#.

"When you work in chemistry and materials, all of these problems, you hit this known-to-be-unsolvable problem," Love said. "Quantum provides the possibility of a breakthrough."

Love shies away from giving a firm timetable for the emergence of specific applications, but last year Holmdahl predicted that commercial quantum computers would exist five years from now. (Check back in 2023 to see how the prediction panned out.)

The first applications could well focus on simulating molecular chemistry, with the aim of prototyping better pharmaceuticals, more efficient fertilizers, better batteries, more environmentally friendly chemicals for the oil and gas industry, and a new class of high-temperature superconductors. It might even be possible to address the climate change challenge by custom-designing materials that pull excess carbon dioxide out of the air.

Love said quantum computers would also be well-suited for addressing optimization problems, like figuring out how to make traffic flow better through Seattle's urban core; and for reducing the training time required for AI modeling.

"That list is going to continue to evolve," she said.

Whenever the subject of quantum computing comes up, cryptography has to be mentioned as well. It's theoretically possible for a quantum computer to break the codes that currently protect all sorts of secure transactions, ranging from email encryption to banking protocols.

Love said those code-breaking applications are farther out than other likely applications, due to the huge amount of computational resources that would be required even for a quantum computer. Nevertheless, it's not too early to be concerned. "We have a pretty significant research thrust in what's called post-quantum crypto," she said.

Next-generation data security is one of the hot topics addressed by the $1.2 billion National Quantum Initiative that was approved by Congress and the White House last December. Love said Microsoft's post-quantum crypto protocols have already gone through an initial round of vetting by the National Institute of Standards and Technology.

"We've been working at this in a really open way," she said.

Like every technology, quantum computing is sure to have a dark side as well as a bright side. But it's reassuring to know that developers are thinking ahead about both sides.

Read this article:
Microsofts quantum computing network takes a giant leap …

When Will Quantum Computing Have Real Commercial Value …

Photo: IBM Research. Workers assemble the enclosure for the IBM Q System One quantum computer, which was shown at the Consumer Electronics Show in Las Vegas in January.

Our romance with new technologies always seems to follow the same trajectory: We are by turns mesmerized and adoring, disappointed and disheartened, and end up settling for less than we originally imagined. In 1954, Texas Instruments touted its new transistors as bringing "electronic brains approaching the human brain in scope and reliability" much closer to reality. In 2000, U.S. president Bill Clinton declared that the Human Genome Project would lead to a world in which "our children's children will know the term cancer only as a constellation of stars." And so it is now with quantum computing.

The popular press is awash with articles touting its promise. Tech giants are pouring huge amounts of money into building prototypes. You get the distinct impression that the computer industry is on the verge of an imminent quantum revolution.

But not everyone believes that quantum computing is going to solve real-world problems in anything like the time frame that some proponents of the technology want us to believe. Indeed, many of the researchers involved acknowledge the hype has gotten out of control, cautioning that quantum computing may take decades to mature.

Theoretical physicist Mikhail Dyakonov, a researcher for many years at the Ioffe Institute, in Saint Petersburg, Russia, and now at the University of Montpellier, in France, is even more skeptical. In "The Case Against Quantum Computing," he lays out his view that practical general-purpose quantum computers will not be built anytime in the foreseeable future.

As you might expect, his essay ruffled some feathers after it was published online. But as it turns out, while his article was being prepared, a committee assembled by the U.S. National Academies of Sciences, Engineering, and Medicine had been grappling with the very same question.

The committee was to provide an independent assessment of the feasibility and implications of creating a functional quantum computer capable of addressing real-world problems…. It was to estimate the time and resources required, and how to assess the probability of success.

The experts who took up the challenge included John Martinis of the University of California, Santa Barbara, who heads Google's quantum-hardware efforts; David Awschalom of the University of Chicago, who formerly directed the Center for Spintronics and Quantum Computation at UCSB; and Umesh Vazirani of the University of California, Berkeley, who codirects the Berkeley Quantum Information and Computation Center.

To their credit, in their report, released in December, they didn't sugarcoat the difficulties. Quite the opposite.

The committee concluded that it is "highly unexpected" that anyone will be able to build a quantum computer that could compromise public-key cryptosystems (a task that quantum computers are, in theory, especially suitable for tackling) in the coming decade. And while less-capable noisy intermediate-scale quantum computers will be built within that time frame, "there are at present no known algorithms/applications that could make effective use of this class of machine," the committee says.

Okay, if not a decade, then how long? The committee was not prepared to commit itself to any estimate. Authors of a commentary in the January issue of the Proceedings of the IEEE devoted to quantum computing were similarly reticent to make concrete predictions. So the answer is: Nobody really knows.

The people working in this area are nevertheless thrilled by recent progress they've made on proof-of-concept devices and by the promise of this research. They no doubt consider the technical hurdles to be much more tractable than Dyakonov concludes. So don't be surprised when you see their perspectives appear in Spectrum, too.

This article appears in the March 2019 print issue as "Quantum Computing's Prospects."

Read the rest here:
When Will Quantum Computing Have Real Commercial Value …

The Case Against Quantum Computing – IEEE Spectrum

Illustration: Christian Gralingen

Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades, and without any practical results to show for it.

We've been told that quantum computers could provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex systems, and artificial intelligence. We've been assured that quantum computers will forever alter our economic, industrial, academic, and societal landscape. We've even been told that the encryption that protects the world's most sensitive data may soon be broken by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.

Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.

It's become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world's top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.

In light of all this, it's natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, "Not in the foreseeable future." Having spent decades conducting research in quantum and condensed-matter physics, I've developed my very pessimistic view. It's based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.

The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.

Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: "Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy," he opined. A few years later, University of Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analogue of the universal Turing machine.

The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.

The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it's fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer's clock.

The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their on and off states, according to a prescribed program.

In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron's internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or −1/2 (in units of the Planck constant). For whichever axis is chosen, you can denote the two basic quantum states of the electron's spin as ↑ and ↓.

Here's where things get weird. With the quantum bit, those two states aren't the only ones possible. That's because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.

That's because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent); nothing else would make sense.
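
To see the arithmetic of that 60/40 example, here is a tiny worked check; the real-valued amplitudes chosen below are just one convenient pair, since the probabilities fix only the squared magnitudes.

```python
import math

alpha = math.sqrt(0.6)   # |alpha|^2 = 0.6, probability of the "up" state (one convenient real-valued choice)
beta = math.sqrt(0.4)    # |beta|^2 = 0.4, probability of the "down" state
print(alpha ** 2 + beta ** 2)   # 1.0, up to floating-point rounding
```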

In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.

Yes, quantum mechanics often defies intuition. But this concept shouldn't be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it's not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it's become almost de rigueur for journalists to describe it as such.

In a system with two qubits, there are 2^2 or 4 basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.

While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.

How is information processed in such a machine? That's done by applying certain kinds of transformations, dubbed quantum gates, that change these parameters in a precise and controlled manner.

Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.

To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
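
The arithmetic behind that figure is easy to check; Python's integers can even hold the exact value of 2^1,000.

```python
import math

print(1000 * math.log10(2))   # about 301, so 2**1000 comfortably exceeds the 10^300 quoted above
print(len(str(2 ** 1000)))    # 302 decimal digits, computed exactly
```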

At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let's continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.

In contrast, it's absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.

How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine, which was already more than astronomical with 1,000 qubits, now becomes even more ludicrous.

Even without considering these impossibly large numbers, it's sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it's not like this hasn't long been a key goal.

In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that "requires on the order of 50 physical qubits" and "exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm." It's now the end of 2018, and that ability has still not been demonstrated.

The huge amount of scholarly literature that's been generated about quantum computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and must command respect and admiration.

The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
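
One way to feel that jump is to translate it into simulation memory. Assuming a plain state-vector simulation with 16 bytes per complex amplitude (my assumption, not a figure from the article), the difference between 5 and 50 qubits is the difference between half a kilobyte and roughly 18 petabytes:

```python
# Assumes a plain state-vector simulation, 16 bytes per complex amplitude (illustrative assumption).
for n in (5, 50):
    amplitudes = 2 ** n                         # 2^n complex amplitudes to track
    print(f"{n} qubits: {amplitudes:,} amplitudes, ~{amplitudes * 16:,} bytes")
# 5 qubits: 32 amplitudes, ~512 bytes
# 50 qubits: 1,125,899,906,842,624 amplitudes, ~18,014,398,509,481,984 bytes (about 18 petabytes)
```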

By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by local noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.

A decade and a half ago, ARDA's Experts Panel noted that "it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely." Here, the key words are "under certain assumptions." That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.

I argue that they can't. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.

Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the on state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.

Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? There are no clear answers to these crucial questions.
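
A tiny numerical illustration of why the question matters: the amplitude 1/√2 shows up throughout quantum operations, and if √2 were realized only as 1.41 (a purely hypothetical level of imprecision, chosen to make the point), the resulting outcome probability already drifts away from the ideal one half.

```python
ideal = (1 / 2 ** 0.5) ** 2     # 0.5, the intended outcome probability
coarse = (1 / 1.41) ** 2        # about 0.503 if sqrt(2) is realized only as 1.41 (hypothetical)
print(ideal, coarse, coarse - ideal)   # a small drift on one gate, compounding over many gates
```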

While various strategies for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).

The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shor's algorithm, performing database searches by a similarly famous quantum-computing algorithm that Lov Grover developed at Bell Laboratories in 1996, and other specialized applications that are suitable for quantum computers.

On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.

While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I'm skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate, on a microscopic level and with enormous precision, a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?

My answer is simple. No, never.

I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That's because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What's more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.

All these problems, as well as a few others I've not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.

To my mind, quantum-computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: "This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work."

Editor's note: A sentence in this article originally stated that concerns over required precision "were never even discussed." This sentence was changed on 30 November 2018 after some readers pointed out to the author instances in the literature that had considered these issues. The amended sentence now reads: "There are no clear answers to these crucial questions."

Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.

Go here to read the rest:
The Case Against Quantum Computing – IEEE Spectrum

How Does Quantum Computing Work? – ExtremeTech

Quantum computing just plain sounds cool. We've all read about the massive investment in making it a reality, and its promise of breakthroughs in many industries. But all that press is usually short on what it is and how it works. That's for a reason: Quantum computing is quite different from traditional digital computing and requires thinking about things in a non-intuitive way. Oh, and there is math. Lots of it.

This article won't make you an expert, but it should help you understand what quantum computing is, why it's important, and why it's so exciting. If you already have a background in quantum mechanics and grad school math, you probably don't need to read this article. You can jump straight into a book like A Gentle Introduction To Quantum Computing (hint: "gentle" is a relative term). But if you're like most of us and don't have that background, let's do our best to demystify one of the most mystical topics in computing.

In a few short paragraphs, here are the basics that we'll go over in more detail in this article: Quantum computers use qubits instead of traditional bits (binary digits). Qubits are different from traditional bits because until they are read out (meaning measured), they can exist in an indeterminate state where we can't tell whether they'll be measured as a 0 or a 1. That's because of a unique property called superposition.

Superposition makes qubits interesting, but their real superpower is entanglement. Entangled qubits can interact instantly. To make functional qubits, quantum computers have to be cooled to near absolute zero. Even when supercooled, qubits don't maintain their entangled state (coherence) for very long.

That makes programming them extra tricky. Quantum computers are programmed using sequences of logic gates of various kinds, but programs need to run quickly enough that the qubits don't lose coherence before they're measured. For anyone who took a logic class or studied digital circuit design using flip-flops, quantum logic gates will seem somewhat familiar, although quantum computers themselves are essentially analog. However, the combination of superposition and entanglement makes the process about a hundred times more confusing.

The ordinary bits we use in typical digital computers are either 0 or 1. You can read them whenever you want, and unless there is a flaw in the hardware, they won't change. Qubits aren't like that. They have a probability of being 0 and a probability of being 1, but until you measure them, they may be in an indefinite state. That state, along with some other state information that allows for additional computational complexity, can be described as a point on a sphere of radius 1, whose north and south poles correspond to a definite 0 and a definite 1 and whose other points reflect the probability of each readout.

The qubit's state is a combination of the values along all three axes. This is called superposition. Some texts describe this property as being in all possible states at the same time, while others think that's somewhat misleading and that we're better off sticking with the probability explanation. Either way, a quantum computer can actually do math on the qubit while it is in superposition, changing the probabilities in various ways through logic gates, before eventually reading out a result by measuring it. In all cases, though, once a qubit is read, it is either 1 or 0 and loses its other state information.
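As a rough numerical illustration (hypothetical angle values and standard textbook formulas, not anything from the original article): the polar angle locating a point on that sphere fixes the readout probabilities, while the phase angle carries the extra state information that never shows up directly in a single measurement.

import math

theta = math.pi / 3              # polar angle: how far the state leans toward 1 (hypothetical value)
phi = math.pi / 4                # phase angle: extra information, invisible to a single readout
p0 = math.cos(theta / 2) ** 2    # probability of measuring 0
p1 = math.sin(theta / 2) ** 2    # probability of measuring 1
print(p0, p1)                    # 0.75 and 0.25; note that phi appears in neither probability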

Qubits typically start life at 0, although they are often then moved into an indeterminate state using a Hadamard gate, which results in a qubit that will read out as 0 half the time and 1 the other half. Other gates are available to rotate the state of a qubit by varying amounts and directions, both relative to the 0 and 1 axes and relative to a third axis that represents phase and provides additional possibilities for representing information. The specific operations and gates available depend on the quantum computer and toolkit you're using.
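A minimal sketch of that Hadamard step using IBM's Qiskit toolkit (mentioned later in this article); exact package names and imports depend on the Qiskit version you have installed, so treat this as illustrative rather than definitive:

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(1, 1)        # one qubit, one classical bit for the readout
qc.h(0)                          # Hadamard gate: move the qubit into an equal superposition
qc.measure(0, 0)                 # measuring collapses it to 0 or 1

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)                    # roughly {'0': 500, '1': 500}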

Groups of independent qubits, by themselves, aren't enough to create the massive breakthroughs that are promised by quantum computing. The magic really starts to happen when the quantum physics concept of entanglement is implemented. One industry expert likened qubits without entanglement to a very expensive classical computer. Entangled qubits affect each other instantly when measured, no matter how far apart they are, based on what Einstein euphemistically called "spooky action at a distance." In terms of classic computing, this is a bit like having a logic gate connecting every bit in memory to every other bit.

You can start to see how powerful that might be compared with a traditional computer needing to read and write from each element of memory separately before operating on it. As a result, there are multiple large potential gains from entanglement. The first is a huge increase in the complexity of programming that can be executed, at least for certain types of problems. One that's creating a lot of excitement is the modeling of complex molecules and materials that are very difficult to simulate with classical computers. Another might be innovations in long-distance secure communications, if and when it becomes possible to preserve quantum state over large distances. Programming using entanglement typically starts with the C-NOT gate, which flips the state of one qubit if its entangled partner is read out as a 1. This is sort of like a traditional XOR gate, except that it only operates when a measurement is made.
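Continuing the same hedged Qiskit sketch from above, a Hadamard followed by a C-NOT produces a pair of entangled qubits whose readouts are perfectly correlated:

from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                          # put qubit 0 into superposition
qc.cx(0, 1)                      # C-NOT ties qubit 1 to qubit 0, entangling the pair
qc.measure([0, 1], [0, 1])

counts = AerSimulator().run(qc, shots=1000).result().get_counts()
print(counts)                    # roughly half '00' and half '11'; '01' and '10' essentially never appear

The exact split varies a little from run to run, which is why results are quoted as counts over many shots rather than as a single answer.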

Superposition and entanglement are impressive physical phenomena, but leveraging them to do computation requires a very different mindset and programming model. You can't simply throw your C code on a quantum computer and expect it to run, and certainly not to run faster. Fortunately, mathematicians and physicists are way ahead of the computer builders here, having developed clever algorithms that take advantage of quantum computers decades before the machines started to appear.

Some of the first quantum algorithms created, and honestly some of the few useful ones I've found that you can understand without a graduate degree in math, are for secure cryptographic key distribution. These algorithms use the property of entanglement to allow the key creator to send one of each of many pairs of qubits to the recipient. The full explanation is pretty long, but the algorithms rely on the fact that if anyone intercepts and reads one of the entangled bits en route, the companion qubit at the sender will be affected. By passing some statistics back and forth, the sender and receiver can figure out whether the key was transmitted securely or was hacked on the way.

You may have read that quantum computers could one day break most current cryptography systems. They will be able to do that because there are some very clever algorithms designed to run on quantum computers that can solve a hard math problem, which in turn can be used to factor very large numbers. One of the most famous is Shor's factoring algorithm. The difficulty of factoring large numbers is essential to the security of all public-private key systems, which are the most commonly used encryption schemes today. Current quantum computers don't have nearly enough qubits to attempt the task, but various experts predict they will within the next 3 to 8 years. That leads to some potentially dangerous situations, such as if only governments and the super-rich had access to the ultra-secure encryption provided by quantum computers.
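To see the classical side of the problem, here is a toy illustration of why factoring is hard (this is not Shor's algorithm, and the primes are small hypothetical stand-ins for the enormous ones used in real keys): multiplying two primes is instant, but recovering them by naive trial division takes work that grows with the square root of the product, which is exponential in the number of digits.

def trial_factor(n):
    # Smallest factor by trial division: the work grows with sqrt(n),
    # i.e. exponentially in the number of digits of n.
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n

p, q = 104729, 104723            # two small primes; real RSA moduli are hundreds of digits long
n = p * q                        # multiplying them is instant
print(trial_factor(n))           # recovering a factor already takes about 100,000 division tests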

There are plenty of reasons quantum computers are taking a long time to develop. For starters, you need to find a way to isolate and control a physical object that implements a qubit. That also requires cooling it down to essentially absolute zero (0.015 kelvin, in the case of IBM's Q System One). Even at such a low temperature, qubits are only stable (retaining coherence) for a very short time. That greatly limits the flexibility of programmers in how many operations they can perform before needing to read out a result.

Not only do programs need to be constrained, but they need to be run many times, as current qubit implementations have a high error rate. Additionally, entanglement isn't easy to implement in hardware either. In many designs, only some of the qubits are entangled, so the compiler needs to be smart enough to swap bits around as needed to help simulate a system where all the bits can potentially be entangled.

The good news is that trivial quantum computing programs are actually pretty easy to understand, if a bit confusing at first. Plenty of tutorials are available that will help you write your first quantum program, let you run it on a simulator, and possibly even let you run it on a real quantum computer.

One of the best places to start is with IBM's Qiskit, a free quantum toolkit from IBM Q Research that includes a visual composer, a simulator, and access to an actual IBM quantum computer after you have your code running on the simulator. Rigetti Quantum Computing has also posted an easy intro application, which relies on its toolkit and can be run on its machines in the cloud.

Unfortunately, the trivial applications are just that: trivial. So simply following along with the code in each example doesn't really help you master the intricacies of more sophisticated quantum algorithms. That's a much harder task.

Thanks to William Poole and Sue Gemmell for their thoughtful input.


Top image credit: IBM

View post:
How Does Quantum Computing Work? – ExtremeTech

Quantum technology – Wikipedia

Quantum technology is a new field of physics and engineering, which translates some of the properties of quantum mechanics, especially quantum entanglement, quantum superposition and quantum tunnelling, into practical applications such as quantum computing, quantum sensors, quantum cryptography, quantum simulation, quantum metrology and quantum imaging.

Quantum superposition states can be very sensitive to a number of external effects, such as electric, magnetic and gravitational fields, rotation, acceleration and time, and can therefore be used to make very accurate sensors. There are many experimental demonstrations of quantum sensing devices, such as the experiments carried out by the Nobel laureate William D. Phillips on using cold-atom interferometer systems to measure gravity, and the atomic clocks used by many national standards agencies around the world to define the second.

Recent efforts are being made to engineer quantum sensing devices so that they are cheaper, easier to use, more portable, lighter, and less power-hungry. It is believed that if these efforts are successful, they will lead to multiple commercial markets, such as the monitoring of oil and gas deposits, or applications in construction.

Quantum secure communication refers to methods that are expected to be 'quantum safe' in the advent of quantum computing systems that could break current cryptography systems. One significant component of quantum secure communication systems is expected to be quantum key distribution, or 'QKD': a method of transmitting information using entangled light in a way that makes any interception of the transmission obvious to the user.

Quantum computers are the ultimate quantum network, combining 'quantum bits' or 'qubits', which are devices that can store and process quantum data (as opposed to binary data), with links that can transfer quantum information between qubits. In doing so, quantum computers are predicted to calculate certain algorithms significantly faster than even the largest classical computer available today.

Quantum computers are expected to have a number of significant uses in computing fields such as optimization and machine learning. They are famous for their expected ability to carry out Shor's algorithm, which can be used to factorise the large numbers that are mathematically important to secure data transmission.

There are many devices available today which are fundamentally reliant on the effects of quantum mechanics. These include laser systems, transistors and semiconductor devices, and other devices such as MRI imagers. These devices are often referred to as belonging to the 'first quantum revolution'; the UK Defence Science and Technology Laboratory (Dstl) grouped these devices as 'quantum 1.0',[1] that is, devices which rely on the effects of quantum mechanics. Quantum technologies are often described as the 'second quantum revolution' or 'quantum 2.0'. These are generally regarded as a class of device that actively create, manipulate and read out quantum states of matter, often using the quantum effects of superposition and entanglement.

The field of quantum technology was first outlined in a 1997 book by Gerard J. Milburn,[2] which was then followed by a 2003 article by Jonathan P. Dowling and Gerard J. Milburn,[3][4] as well as a 2003 article by David Deutsch.[5] The field of quantum technology has benefited immensely from the influx of new ideas from the field of quantum information processing, particularly quantum computing. Disparate areas of quantum physics, such as quantum optics, atom optics, quantum electronics, and quantum nanomechanical devices, have been unified under the search for a quantum computer and given a common language, that of quantum information theory.

The Quantum Manifesto was signed by 3,400 scientists and officially released at the 2016 Quantum Europe Conference, calling for a quantum technology initiative to coordinate between academia and industry, to move quantum technologies from the laboratory to industry, and to educate quantum technology professionals in a combination of science, engineering, and business.[6][7][8][9][10]

The European Commission responded to that manifesto with the Quantum Technology Flagship, a €1 billion, 10-year-long megaproject, similar in size to earlier European Future and Emerging Technologies Flagship projects such as the Graphene Flagship and Human Brain Project.[8][11] China is building the world's largest quantum research facility with a planned investment of 76 billion yuan (approx. €10 billion).[12] The USA is preparing a national initiative.[13]

From 2010 onwards, multiple governments have established programmes to explore quantum technologies, such as the UK National Quantum Technologies Programme, which created four quantum 'hubs'; the Centre for Quantum Technologies in Singapore; and QuTech, a Dutch centre to develop a topological quantum computer.[14]

In the private sector, there have been multiple investments into quantum technologies made by large companies. Examples include Google's partnership with the John Martinis group at UCSB,[15] multiple partnerships with the Canadian quantum computing company D-Wave Systems, and investment by many UK companies within the UK quantum technologies programme.

Here is the original post:
Quantum technology – Wikipedia