
This Crisis Is Good For Bitcoin, But Beware of Recession – Luno CEO – Cryptonews


The current global crisis is good for crypto as it shows the dysfunctionality of the existing system, according to Marcus Swanepoel, co-founder and CEO of crypto exchange Luno. However, in case of a global recession, this nascent industry might be hit hard too, he warned.

Crypto was born out of a financial crisis, accompanied by a vision to upgrade the world to a better financial system, so "the more dysfunctional the existing system becomes, the more there's an opportunity to showcase to the world that there is a better way to do it," Swanepoel told Cryptonews.com. Based on the available data, one can only speculate, however, about where the current situation might lead, he added.

People were expecting bitcoin (BTC) to rise immediately when the traditional market started falling, yet it dropped. That's not strange, explained the CEO. BTC is a proxy for gold, and what happens in a financial crisis is that a lot of assets go down at the same time, including gold. It follows the market "for a little bit," and "once people have basically removed all the money, put a lot more low-risk assets, and they start reallocating, and then they go into gold and then gold decouples." But Swanepoel said that it's too early to make an assessment on just how correlated BTC is to other asset classes.

An argument supporting decoupling, according to him, is the rise in the number of new customers buying bitcoin.

"We measure every day how many people buy bitcoin for the first time, and that number has been actually steadily going up," he said without specifying.

As reported yesterday, after a crypto market crash in March, the number of wallets with at least 1 bitcoin has kept rising, reaching new all-time highs.

There's a lot of things that are telling us that the sentiment is actually still strong on the retail market, "so it makes us think that the drop was probably more institutional/large traders that had to close positions to kind of cover some other stuff. But the grassroots level kind of adoption appears to have been unchanged until now."

Data shows, said the CEO, that a lot of the volatility and movement we've been seeing is caused by either traders or big investors, not long-time hodlers.

"So our view is that it is going to decouple and, and it's going to take some time now, whether that's two weeks or two months," or more, said Swanepoel, but it'll happen. And as people start realizing there is decoupling, they will be even more new people coming into the market, and it will grow significantly.

As they feel financial losses on their assets, many will likely notice the stock market going down, but BTC not following it, and may decide to invest for the first time, Swanepoel estimated. "My expectation is that we're talking months, not years," he added.

However, the CEO warned that the positive trends for crypto may be stopped by a major problem: a massive global recession. Not only would onboarding go down but, despite people's belief in crypto, even the long-term hodlers may be forced to sell their BTC to survive. If crypto doesn't start decoupling before that, the recession would put major pressure on it. Furthermore, if the market doesn't do well in a couple of months, many small crypto companies are going to go out of business. Finally, because of the financial crisis, the B2B (business-to-business) side will slow down, as companies will not invest in new projects.

Meanwhile, according to the CEO, Luno is in "a good financial position" and will continue to grow regardless of the outcome.

As crypto is cheaper, and the company believes decoupling is coming, they expect to see a lot more momentum in the market in the next few months. Therefore, Luno is accelerating its expansion. Their goal is to solve a long-running issue in the Cryptoverse - the difficulty for newcomers to get into crypto. It's still far too complicated, Swanepoel said, and Luno is working on improving the user experience, educating people on crypto, simplifying and speeding up the onboarding process as well as the KYC (know your customer) procedures, and adding more funding methods so users can deposit money more quickly.

London-based Luno is one of the largest crypto-related companies in emerging markets, particularly in Africa and Southeast Asia, offering a number of products for trading and storing crypto, as well as APIs (application programming interfaces) for other businesses to link into. Per their website, Luno has local offices in seven countries on three continents, as well as 3 million users across 40 countries. The number of wallets surpassed 3.5 million, while only c. 5% of their volume is from Europe because they only recently launched there, the CEO said.

Learn more: Can CBDC Help Recover From Coronavirus Recession And Lead To Bitcoin?

Go here to see the original:
This Crisis Is Good For Bitcoin, But Beware of Recession - Luno CEO - Cryptonews


Top Three Coins Price Prediction: Bitcoin bears take control while Ethereum and Ripple still remains in the green – FXStreet

BTC/USD has dropped from $6,423 to $6,360. There is a lack of healthy support levels, so further price drops can be expected. On the upside, there are two strong resistance levels at $6,475 and $6,500.

$6,475 has the 4-hour SMA 5, 15-min and 4-hour SMA 50, 15-min SMA 100, one-hour SMA 200 and one-hour Bollinger Band middle curve. $6,500 has the 4-hour Previous High, one-day Pivot Point resistance-one and 15-min Bollinger Band upper curve.

ETH/USD has gone up slightly from $133.10 to $133.18. The bulls must overcome resistance at $134, which has the 15-min and 4-hour Previous Lows, 15-min, one-hour and 4-hour SMA 5, 4-hour and one-day SMA 10 and 15-min and one-hour Bollinger Band middle curves.

On the downside, two healthy support levels lie at $131 and $128.50. The former has the one-day Previous Low, one-day Pivot Point support-one, one-day SMA 5 and 4-hour Bollinger Band middle curve. The latter has the 4-hour SMA 100, one-week Fibonacci 38.2% retracement level, one-day Pivot Point support-two and one-day Bollinger Band middle curve.

XRP/USD has also gone up from $0.1738 to $0.174. On the upside, it faces one strong resistance at $0.178, which has the one-week Fibonacci 23.6% retracement level. On the downside, there is one healthy support level at $0.168 and it has the one-day SMA 10, 4-hour SMA 5 and one-day Pivot Point support-two.
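To make the indicator jargon above concrete, here is a minimal sketch of how an SMA and Bollinger Bands are typically computed with pandas. It uses a synthetic random-walk price series in place of real exchange candles, so the numbers it prints are illustrative only.

```python
import numpy as np
import pandas as pd

# Synthetic 15-minute closing prices standing in for real BTC/USD candles
# (a seeded random walk; in practice you would load exchange data instead).
rng = np.random.default_rng(0)
prices = pd.Series(6400 + rng.normal(0, 10, 500).cumsum())

def sma(series: pd.Series, window: int) -> pd.Series:
    """Simple moving average over the last `window` candles."""
    return series.rolling(window).mean()

def bollinger(series: pd.Series, window: int = 20, k: float = 2.0):
    """Middle curve is the SMA; upper/lower curves sit k standard deviations away."""
    mid = sma(series, window)
    std = series.rolling(window).std()
    return mid - k * std, mid, mid + k * std

sma5, sma50, sma100 = sma(prices, 5), sma(prices, 50), sma(prices, 100)
lower, middle, upper = bollinger(prices)
print(round(sma50.iloc[-1], 2), round(middle.iloc[-1], 2), round(upper.iloc[-1], 2))
```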

See the rest here:
Top Three Coins Price Prediction: Bitcoin bears take control while Ethereum and Ripple still remains in the green - FXStreet


D-Wave makes its quantum computers free to anyone working on the coronavirus crisis – VentureBeat

D-Wave today made its quantum computers available for free to researchers and developers working on responses to the coronavirus (COVID-19) crisis. D-Wave partners and customers Cineca, Denso, Forschungszentrum Jülich, Kyocera, MDR, Menten AI, NEC, OTI Lumionics, QAR Lab at LMU Munich, Sigma-i, Tohoku University, and Volkswagen are also offering to help. They will provide access to their engineering teams with expertise on how to use quantum computers, formulate problems, and develop solutions.

Quantum computing leverages qubits to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing. D-Wave says the move to make access free is a response to a cross-industry request from the Canadian government for solutions to the COVID-19 pandemic. Free and unlimited commercial contract-level access to D-Wave's quantum computers is available in 35 countries across North America, Europe, and Asia via Leap, the company's quantum cloud service. Just last month, D-Wave debuted Leap 2, which includes a hybrid solver service and solves problems of up to 10,000 variables.

D-Wave and its partners are hoping the free access to quantum processing resources and quantum expertise will help uncover solutions to the COVID-19 crisis. We asked the company if there were any specific use cases it is expecting to bear fruit. D-Wave listed analyzing new methods of diagnosis, modeling the spread of the virus, supply distribution, and pharmaceutical combinations. D-Wave CEO Alan Baratz added a few more to the list.

"The D-Wave system, by design, is particularly well-suited to solve a broad range of optimization problems, some of which could be relevant in the context of the COVID-19 pandemic," Baratz told VentureBeat. "Potential applications that could benefit from hybrid quantum/classical computing include drug discovery and interactions, epidemiological modeling, hospital logistics optimization, medical device and supply manufacturing optimization, and beyond."

Earlier this month, Murray Thom, D-Wave's VP of software and cloud services, told us quantum computing and machine learning are "extremely well matched." In today's press release, Prof. Dr. Kristel Michielsen from the Jülich Supercomputing Centre seemed to suggest a similar notion: "To make efficient use of D-Wave's optimization and AI capabilities, we are integrating the system into our modular HPC environment."

More:
D-Wave makes its quantum computers free to anyone working on the coronavirus crisis - VentureBeat


Can Quantum Computing Be the New Buzzword – Analytics Insight

Quantum mechanics wrote its own chapter in the history of the early 20th century. With its regular binary computing twin going out of style, quantum mechanics led quantum computing to be the new belle of the ball! While the memory used in a classical computer encodes binary bits of one and zero, quantum computers use qubits (quantum bits). And a qubit is not confined to a two-state solution; it can also exist in superposition, i.e., a qubit can be 0, 1, or both 0 and 1 at the same time.

Hence it can perform many calculations in parallel, owing to the ability to pursue simultaneous probabilities through superposition and to manipulate them with magnetic fields. Its coefficients, which describe how much zero-ness and one-ness it has, are complex numbers with real and imaginary parts. This provides a huge technical edge over conventional computing. The beauty of this is that if you have n qubits, you can have a superposition of 2^n states, or bits of information, simultaneously.
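To make the 2^n claim concrete, here is a minimal NumPy sketch (not tied to any particular quantum SDK) that builds an n-qubit register as a tensor product of single-qubit superpositions and checks that its state vector holds 2^n complex amplitudes:

```python
import numpy as np

# A single qubit is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; their magnitudes are the "zero-ness" and "one-ness".
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition of 0 and 1

# An n-qubit register is the Kronecker (tensor) product of single-qubit states,
# so its state vector has 2**n complex amplitudes.
n = 4
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)

print(len(state))                                      # 16 = 2**4 amplitudes
print(np.isclose(np.sum(np.abs(state) ** 2), 1.0))     # probabilities still sum to 1
```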

Another bit of magic up its sleeve is that qubits are capable of pairing, which is referred to as entanglement. Here, the state of one qubit cannot be described independently of the state of the others, which correlates their measurement outcomes no matter how far apart they are (though this cannot be used to send information faster than light).

To quote American theoretical physicist John Wheeler, "If you are not completely confused by quantum mechanics, you do not understand it." So, without a doubt, it is safe to say that even quantum computing has a few pitfalls. First, qubits tend to lose the information they contain, and also lose their entanglement, in other words, decoherence. Second, imperfections of quantum rotations. These lead to a loss of information within a few microseconds.

Ultimately, quantum computing is the trump card, as it promises to be a disruptive technology with dramatic speed improvements. This will enable systems to solve complex higher-order mathematical problems that earlier took months to compute, investigate material properties, design new ones, study superconductivity, aid in drug discovery via simulation, and understand new chemical reactions.

This quantum shift in the history of computer science can also pave the way for encrypted communication (as keys cannot be copied or hacked), much better than blockchain technology, provide improved designs for solar panels, predict financial markets, power big data mining, develop artificial intelligence to new heights, enhance meteorological updates and usher in a much-anticipated age of quantum internet. According to scientists, future advancements could also help find a cure for Alzheimer's.

The ownership and effective employment of a quantum computer could change the political and technological dynamics of the world. Computing power, in the end, is power, whether it is personal, national or globally strategic. In short, a quantum computer could be an existential threat to a nation that hasn't got one. At the moment, Google, IBM, Intel, and D-Wave are pursuing this technology. While there are scientific minds who don't yet believe in the potential of quantum computing, unless you are a time-traveler like Marty McFly in the Back to the Future series or one of the Doctors from Doctor Who, one cannot say what the future holds.

View post:
Can Quantum Computing Be the New Buzzword - Analytics Insight


Q-CTRL to Host Live Demos of ‘Quantum Control’ Tools – Quantaneo, the Quantum Computing Source

Q-CTRL, a startup that applies the principles of control engineering to accelerate the development of the first useful quantum computers, will host a series of online demonstrations of new quantum control tools designed to enhance the efficiency and stability of quantum computing hardware.

Dr. Michael Hush, Head of Quantum Science and Engineering at Q-CTRL, will provide an overview of the company's cloud-based quantum control engineering software called BOULDER OPAL. This software uses custom machine learning algorithms to create error-robust logical operations in quantum computers. The team will demonstrate - using real quantum computing hardware in real time - how they reduce susceptibility to error by 100X and improve hardware stability in time by 10X, while reducing time-to-solution by 10X against existing software.

Scheduled to accommodate the global quantum computing research base, the demonstrations will take place:

April 16 from 4-4:30 p.m. U.S. Eastern Time (ET)
April 21 from 10-10:30 a.m. Singapore Time (SGT)
April 23 from 10-10:30 a.m. Central European Summer Time (CEST)

To register, visit https://go.q-ctrl.com/l/791783/2020-03-19/dk83

Released in Beta by Q-CTRL in March, BOULDER OPAL is an advanced Python-based toolkit for developers and R&D teams using quantum control in their hardware or theoretical research. Technology agnostic and with major computational grunt delivered seamlessly via the cloud, BOULDER OPAL enables a range of essential tasks which improve the performance of quantum computing and quantum sensing hardware. This includes the efficient identification of sources of noise and error, calculating detailed error budgets in real lab environments, creating new error-robust logic operations for even the most complex quantum circuits, and integrating outputs directly into real hardware.

The result for users is greater performance from todays quantum computing hardware, without the need to become an expert in quantum control engineering.

Experimental validations and an overview of the software architecture, developed in collaboration with the University of Sydney, were recently released in an online technical manuscript titled "Software Tools for Quantum Control: Improving Quantum Computer Performance through Noise and Error Suppression."

More here:
Q-CTRL to Host Live Demos of 'Quantum Control' Tools - Quantaneo, the Quantum Computing Source


We’re Getting Closer to the Quantum Internet, But What Is It? – HowStuffWorks


Back in February 2020, scientists from the U.S. Department of Energy's Argonne National Laboratory and the University of Chicago revealed that they had achieved quantum entanglement (in which the behavior of a pair of tiny particles becomes linked, so that their states are identical) over a 52-mile (83.7 kilometer) quantum-loop network in the Chicago suburbs.

You may be wondering what all the fuss is about, if you're not a scientist familiar with quantum mechanics, that is, the behavior of matter and energy at the smallest scale of reality, which is peculiarly different from the world we can see around us.

But the researchers' feat could be an important step in the development of a new, vastly more powerful version of the internet in the next few decades. Instead of the bits that today's network uses, which can only express a value of either 0 or 1, the future quantum internet would utilize qubits of quantum information, which can take on an infinite number of values. (A qubit is the unit of information for a quantum computer; it's like a bit in an ordinary computer.)

That would give the quantum internet way more bandwidth, which would make it possible to connect super-powerful quantum computers and other devices and run massive applications that simply aren't possible with the internet we have now.

"A quantum internet will be the platform of a quantum ecosystem, where computers, networks, and sensors exchange information in a fundamentally new manner where sensing, communication, and computing literally work together as one entity, " explains David Awschalom via email. He's a spintronics and quantum information professor in the Pritzker School of Molecular Engineering at the University of Chicago and a senior scientist at Argonne, who led the quantum-loop project.

So why do we need this and what does it do? For starters, the quantum internet is not a replacement of the regular internet we now have. Rather it would be a complement to it or a branch of it. It would be able to take care of some of the problems that plague the current internet. For instance, a quantum internet would offer much greater protection from hackers and cybercriminals. Right now, if Alice in New York sends a message to Bob in California over the internet, that message travels in more or less a straight line from one coast to the other. Along the way, the signals that transmit the message degrade; repeaters read the signals, amplify and correct the errors. But this process allows hackers to "break in" and intercept the message.

However, a quantum message wouldn't have that problem. Quantum networks use particles of light (photons) to send messages, which are not vulnerable to cyberattacks. Instead of encrypting a message using mathematical complexity, says Ray Newell, a researcher at Los Alamos National Laboratory, we would rely upon the peculiar rules of quantum physics. With quantum information, "you can't copy it or cut it in half, and you can't even look at it without changing it." In fact, just trying to intercept a message destroys the message, as Wired magazine noted. That would enable encryption that would be vastly more secure than anything available today.
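To illustrate how interception becomes detectable, here is a toy simulation in the spirit of the BB84 key-distribution protocol. It is a simplified sketch rather than a description of any scheme used by the labs mentioned here: the photon physics is reduced to the rule that measuring in the wrong basis yields a random bit.

```python
import numpy as np

rng = np.random.default_rng()
n = 2000

# Sender: random bits, each encoded in a randomly chosen basis (0 = rectilinear, 1 = diagonal).
bits = rng.integers(0, 2, n)
send_basis = rng.integers(0, 2, n)

# Optional eavesdropper: measures each photon in a random basis and resends it.
eavesdrop = True
if eavesdrop:
    eve_basis = rng.integers(0, 2, n)
    # Where Eve guesses the wrong basis, the bit she resends is effectively random.
    channel_bits = np.where(eve_basis == send_basis, bits, rng.integers(0, 2, n))
    channel_basis = eve_basis
else:
    channel_bits, channel_basis = bits, send_basis

# Receiver: measures in its own random basis; a wrong basis again yields a random bit.
recv_basis = rng.integers(0, 2, n)
received = np.where(recv_basis == channel_basis, channel_bits, rng.integers(0, 2, n))

# Publicly keep only positions where sender and receiver used the same basis,
# then compare a sample: eavesdropping shows up as roughly a 25% error rate.
keep = send_basis == recv_basis
print(f"error rate on matching bases: {np.mean(bits[keep] != received[keep]):.1%}")
```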

"The easiest way to understand the concept of the quantum internet is through the concept of quantum teleportation," Sumeet Khatri, a researcher at Louisiana State University in Baton Rouge, says in an email. He and colleagues have written a paper about the feasibility of a space-based quantum internet, in which satellites would continually broadcast entangled photons down to Earth's surface, as this Technology Review article describes.

"Quantum teleportation is unlike what a non-scientist's mind might conjure up in terms of what they see in sci-fi movies, " Khatri says. "In quantum teleportation, two people who want to communicate share a pair of quantum particles that are entangled. Then, through a sequence of operations, the sender can send any quantum information to the receiver (although it can't be done faster than light speed, a common misconception). This collection of shared entanglement between pairs of people all over the world essentially constitutes the quantum internet. The central research question is how best to distribute these entangled pairs to people distributed all over the world. "

Once it's possible to do that on a large scale, the quantum internet would be so astonishingly fast that far-flung clocks could be synchronized about a thousand times more precisely than the best atomic clocks available today, as Cosmos magazine details. That would make GPS navigation vastly more precise than it is today, and map Earth's gravitational field in such detail that scientists could spot the ripple of gravitational waves. It also could make it possible to teleport photons from distant visible-light telescopes all over Earth and link them into a giant virtual observatory.

"You could potentially see planets around other stars, " says Nicholas Peters, group leader of the Quantum Information Science Group at Oak Ridge National Laboratory.

It also would be possible for networks of super-powerful quantum computers across the globe to work together and create incredibly complex simulations. That might enable researchers to better understand the behavior of molecules and proteins, for example, and to develop and test new medications.

It also might help physicists to solve some of the longstanding mysteries of reality. "We don't have a complete picture of how the universe works," says Newell. "We have a very good understanding of how quantum mechanics works, but not a very clear picture of the implications. The picture is blurry where quantum mechanics intersects with our lived experience."

But before any of that can happen, researchers have to figure out how to build a quantum internet, and given the weirdness of quantum mechanics, that's not going to be easy. "In the classical world you can encode information and save it and it doesn't decay, " Peters says. "In the quantum world, you encode information and it starts to decay almost immediately. "

Another problem is that because the amount of energy that corresponds to quantum information is really low, it's difficult to keep it from interacting with the outside world. Today, "in many cases, quantum systems only work at very low temperatures," Newell says. "Another alternative is to work in a vacuum and pump all the air out. "

In order to make a quantum internet function, Newell says, we'll need all sorts of hardware that hasn't been developed yet. So it's hard to say at this point exactly when a quantum internet would be up and running, though one Chinese scientist has envisioned that it could happen as soon as 2030.

Read more from the original source:
We're Getting Closer to the Quantum Internet, But What Is It? - HowStuffWorks


Who Will Mine Cryptocurrency in the Future – Quantum Computers or the Human Body? – Coin Idol

Apr 01, 2020 at 09:31

Companies including Microsoft, IBM and Google race to come up with cheap and effective mining solutions to improve cost and energy efficiency. Lots of fuss has been made around quantum computing and its potential for mining. Now, the time has come for a new solution - mining with the help of human body activity.

While quantum computers are said to be able to hack bitcoin mining algorithms, using physical activity for the process is quite a new and extraordinary thing. The question is, which technology turns out to be more efficient?

Currently, with the traditional cryptocurrency mining methods, the reward for mining a bitcoin block is around 12.5 bitcoins; at $4k per BTC, this should quickly pay off after mining a few blocks.

Consequently, the best mining method right now is to keep trying random numbers and observe which one hashes to a number that isn't more than the target difficulty. This is one of the reasons why mining pools have arisen, where multiple PCs work in parallel to look for the proper solution to the problem; if one of the PCs finds the solution, the pool is given an appropriate reward, which is then shared among all the miners.
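The "keep trying random numbers until the hash falls below the target" loop can be sketched in a few lines of Python. This toy uses double SHA-256 over an arbitrary header and a deliberately easy target, so it illustrates the principle rather than real Bitcoin mining, which hashes an 80-byte block header against a far harder target.

```python
import hashlib
import random

def mine(header: bytes, target: int, max_tries: int = 1_000_000):
    """Try random 32-bit nonces until the double SHA-256 digest is <= target."""
    for _ in range(max_tries):
        nonce = random.getrandbits(32)
        data = header + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce, digest.hex()
    return None

# A very easy target (roughly the top 13 bits must be zero) so the toy search finishes quickly.
easy_target = 1 << 243
print(mine(b"example block header", easy_target))
```

With an expected one success per 2^13 tries, this finishes almost instantly; shrinking the target is what makes real mining require enormous numbers of hashes, and hence mining pools.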

Quantum computers possess more capacity and might potentially be able to significantly speed up mining while eliminating the need for numerous machines. Thus, it can improve both energy efficiency and the speed of mining.

In late 2019, Google released a quantum processor called Sycamore, many times faster than the existing supercomputer. There was even a post on Medium claiming that this new processor is able to mine all remaining bitcoins in something like two seconds. Sometime later the post was deleted due to an error in calculations, according to the Bitcoinist news outlet.

Despite quantum computing having the potential to increase the efficiency of mining, its cost is close to stratospheric. It would probably take time before someone is able to afford it.

Meanwhile, another global tech giant, Microsoft, offers a completely new and extraordinary solution - to mine cryptos using a person's brain waves or body temperature. As coinidol.com, a world blockchain news outlet, has reported, they have filed a patent for a groundbreaking system which can mine digital currencies using the data collected from human beings when they view ads or do exercises.

The IT giant disclosed that sensors could identify and diagnose any activity connected with the particular piece(s) of work, like the time taken to read advertisements, and convert it into digital information readable by a computing device to perform computational work, in the same manner as a conventional proof-of-work (PoW) system. Some tasks would either decrease or increase the required computational energy appropriately, based on the amount of information produced from the user's activity.

So far, there is no sign of when Microsoft will start developing the system, and it is still uncertain whether or not the system will run on its own blockchain network. Quantum computing also needs time to be fully developed and deployed.

However, both solutions bear significant potential for transforming the entire mining industry. While quantum computing could boost the existing mining mechanism and eliminate the need for high energy-consuming mining firms, Microsoft's new initiative could disrupt the industry and make it look entirely different.

Which of these two solutions turns out to be more viable? We will see over time. What do you think about these mining solutions? Let us know in the comments below!

Read the original post:
Who Will Mine Cryptocurrency in the Future - Quantum Computers or the Human Body? - Coin Idol


Universities Space Research Association to Lead a DARPA Project on Quantum Computing – Quantaneo, the Quantum Computing Source

The collaboration will focus on developing a superconducting quantum processor, hardware-aware software and custom algorithms that take direct advantage of the hardware advances to solve scheduling and asset allocation problems. In addition, the team will design methods for benchmarking the hardware against classical computers to determine quantum advantage.

USRA Senior Vice President Bernie Seery noted, "This is a very exciting public-private partnership for the development of forefront quantum computing technology and the algorithms that will be used to address pressing, strategically significant challenges. We are delighted to receive this award and look forward to working with our partner institutions to deliver value to DARPA."

In particular, the work will target scheduling problems whose complexity goes beyond what has been done so far with the quantum approximate optimization algorithm (QAOA). USRA's Research Institute for Advanced Computer Science (RIACS) has been working on quantum algorithms for planning and scheduling for NASA QuAIL since 2012. "The innovations on quantum gates performed by Rigetti coupled perfectly with the recent research ideas at QuAIL, enabling an unprecedented hardware-theory co-design opportunity," explains Dr. Venturelli, USRA Associate Director for Quantum Computing and project PI for USRA. Understanding how to use quantum computers for scheduling applications could have important implications for national security, such as real-time strategic asset deployment, as well as commercial applications including global supply chain management, network optimizations or vehicle routing.

The grant is a part of the DARPA Optimization with Noisy Intermediate-Scale Quantum program (ONISQ). The goal of this program is to establish that quantum information processing using NISQ devices has a quantitative advantage for solving real-world combinatorial optimization problems using the QAOA method.

Continue reading here:
Universities Space Research Association to Lead a DARPA Project on Quantum Computing - Quantaneo, the Quantum Computing Source


The Schizophrenic World Of Quantum Interpretations – Forbes

Quantum Interpretations

To the average person, most quantum theories sound strange, while others seem downright bizarre. There are many diverse theories that try to explain the intricacies of quantum systems and how our interactions affect them. And, not surprisingly, each approach is supported by its own group of well-qualified and well-respected scientists. Here, we'll take a look at the two most popular quantum interpretations.

Does it seem reasonable that you can alter a quantum system just by looking at it? What about creating multiple universes by merely making a decision? Or what if your mind split because you measured a quantum system?

You might be surprised that all or some of these things might routinely happen millions of times every day without you even realizing it.

But before your brain gets twisted into a knot, let's cover a little history and a few quantum basics.

The birth of quantum mechanics

Classical physics describes how large objects behave and how they interact with the physical world. On the other hand, quantum theory is all about the extraordinary and inexplicable interaction of small particles on the invisible scale of such things as atoms, electrons, and photons.

Max Planck, a German theoretical physicist, first introduced the quantum theory in 1900. It was an innovation that won him the Nobel Prize in physics in 1918. Between 1925 and 1930, several scientists worked to clarify and understand quantum theory. Among the scientists were Werner Heisenberg and Erwin Schrödinger, both of whom mathematically expanded quantum mechanics to accommodate experimental findings that couldn't be explained by standard physics.

Heisenberg, along with Max Born and Pascual Jordan, created a formulation of quantum mechanics called matrix mechanics. This concept interpreted the physical properties of particles as matrices that evolved in time. A few months later, Erwin Schrödinger created his famous wave mechanics.

Although Heisenberg and Schrödinger worked independently from each other, and although their theories were very different in presentation, both theories were essentially mathematically the same. Of the two formulations, Schrödinger's was more popular than Heisenberg's because it boiled down to familiar differential equations.

While today's physicists still use these formulations, they still debate their actual meaning.

First weirdness

A good place to start is Schrödinger's equation.

Erwin Schrödinger's equation provides a mathematical description of all possible locations and characteristics of a quantum system as it changes over time. This description is called the system's wave function. According to the most common quantum theory, everything has a wave function. The quantum system could be a particle, such as an electron or a photon, or even something larger.

Schrödinger's equation won't tell you the exact location of a particle. It only reveals the probability of finding the particle at a given location. The condition of a particle being in many places or in many states at the same time is called superposition. Superposition is one of the elements of quantum computing that makes it so powerful.

Almost everyone has heard about Schrödinger's cat in a box. Simplistically, ignoring the radiation gadgets, while the cat is in the closed box, it is in a superposition of being both dead and alive at the same time. Opening the box causes the cat's wave function to collapse into one of two states and you'll find the cat either alive or dead.
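A tiny sketch of that collapse, using the Born rule (the probability of each outcome is the squared magnitude of its amplitude), might look like the following; the alive/dead labels are simply the cat example rendered in code.

```python
import numpy as np

rng = np.random.default_rng()

# The cat's wave function before the box is opened: an equal superposition of
# "alive" and "dead". Amplitudes are complex; probabilities are squared magnitudes.
amplitudes = np.array([1, 1], dtype=complex) / np.sqrt(2)   # [alive, dead]
probabilities = np.abs(amplitudes) ** 2                     # [0.5, 0.5]

# "Opening the box": sample one outcome, after which the state is definite.
print(rng.choice(["alive", "dead"], p=probabilities))
```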

There is little dispute among the quantum community that Schrödinger's equation accurately reflects how a quantum wave function evolves. However, the wave function itself, as well as the cause and consequences of its collapse, are all subjects of debate.

David Deutsch is a brilliant British quantum physicist at the University of Oxford. In his book, The Fabric of Reality, he said: "Being able to predict things or to describe them, however accurately, is not at all the same thing as understanding them. Facts cannot be understood just by being summarized in a formula, any more than being listed on paper or committed to memory."

The Copenhagen interpretation

Quantum theories use the term "interpretation" for two reasons. One, it is not always obvious what a particular theory means without some form of translation. And, two, we are not sure we understand what goes on between a wave function's starting point and where it ends up.

There are many quantum interpretations. The most popular is the Copenhagen interpretation, named after the city where Werner Heisenberg and Niels Bohr developed their quantum theory.

Werner Heisenberg (left) with Niels Bohr at a Conference in Copenhagen in 1934.

Bohr believed that the wave function of a quantum system contained all possible quantum states. However, when the system was observed or measured, its wave function collapsed into a single state.

What's unique about the Copenhagen interpretation is that it makes the outside observer responsible for the wave function's ultimate fate. Almost magically, a quantum system, with all its possible states and probabilities, has no connection to the physical world until an observer interacts with or measures the system. The measurement causes the wave function to collapse into one of its many states.

You might wonder what happens to all the other quantum states present in the wave function, as described by the Copenhagen interpretation, before it collapsed. There is no explanation of that mystery in the Copenhagen interpretation. However, there is a quantum interpretation that provides an answer to that question. It's called the Many-Worlds Interpretation, or MWI.

Billions of you?

Because the many-worlds interpretation is one of the strangest quantum theories, it has become central to the plot of many science fiction novels and movies. At one time, MWI was an outlier within the quantum community, but many leading physicists now believe it is the only theory that is consistent with quantum behavior.

The MWI originated in a Princeton doctoral thesis written by a young physicist named Hugh Everett in the late 1950s. Even though Everett derived his theory using sound quantum fundamentals, it was severely criticized and ridiculed by most of the quantum community. Even Everett's academic adviser at Princeton, John Wheeler, tried to distance himself from his student. Everett became despondent over the harsh criticism. He eventually left quantum research to work for the government as a mathematician.

The theory proposes that the universe has a single, large wave function that follows Schrödinger's equation. Unlike the Copenhagen interpretation, the MWI universal wave function doesn't collapse.

Everything in the universe is quantum, including ourselves. As we interact with parts of the universe, we become entangled with it. As the universal wave function evolves, some of our superposition states decohere. When that happens, our reality becomes separated from the other possible outcomes associated with that event. Just to be clear, the universe doesn't split and create a new universe. The probability of all realities, or universes, already exists in the universal wave function, all occupying the same space-time.

Schrödinger's cat, many-worlds interpretation, with universe branching. Visualization of the separation of the universe due to two superposed and entangled quantum mechanical states.

In the Copenhagen interpretation, by opening the box containing Schrödinger's cat, you cause the wave function to collapse into one of its possible states, either alive or dead.

In the Many-Worlds interpretation, the wave function doesn't collapse. Instead, all probabilities are realized. In one universe, you see the cat alive, and in another universe the cat will be dead.

Right or wrong decisions become right and wrong decisions

Decisions are also events that trigger the separation of multiple universes. We make thousands of big and little choices every day. Have you ever wondered what your life would be like had you made different decisions over the years?

According to the Many-Worlds interpretation, you and all those unrealized decisions exist in different universes because all possible outcomes exist in the universal wave function. For every decision you make, at least two of "you" evolve on the other side of that decision. One universe exists for the choice you make, and one universe for the choice you didn't make.

If the Many-Worlds interpretation is correct, then right now, a near-infinite number of versions of you are living different and independent lives in their own universes. Moreover, each of the universes overlays the others and occupies the same space and time.

It is also likely that you are currently living in a branch universe spun off from a decision made by a previous version of yourself, perhaps millions or billions of iterations ago. You have all the old memories of your pre-decision self, but as you move forward in your own universe, you live independently and create your unique and new memories.

A Reality Check

Which interpretation is correct? Copenhagen or Many-Worlds? Maybe neither. But because quantum mechanics is so strange, perhaps both are correct. It is also possible that a valid interpretation is yet to be expressed. In the end, correct or not, quantum interpretations are just plain fun to think about.


Read more from the original source:
The Schizophrenic World Of Quantum Interpretations - Forbes


Disrupt The Datacenter With Orchestration – The Next Platform

Since 1965, the computer industry has relied on Moore's Law to accelerate innovation, pushing more transistors into integrated circuits to improve computation performance. Making transistors smaller helped lift all boats for the entire industry and enabled new applications. At some point, we will reach a physical limit, that is, a limit stemming from physics itself. Even with this setback, improvements have kept pace thanks to increased parallelism of computation and the consolidation of specialized functions into single chip packages (such as systems on chip).

In recent years, we have been nearing another peak. This article proposes to improve computation performance not only by building better hardware, but by changing how we use existing hardware, more specifically, by focusing on how we use existing processor types. I call this approach Compute Orchestration: automatic optimization of machine code to best use modern datacenter hardware (again, with special emphasis on different processor types).

So what is compute orchestration? It is the embracing of hardware diversity to support software.

There are many types of processors: Microprocessors in small devices, general purpose CPUs in computers and servers, GPUs for graphics and compute, and programmable hardware like FPGAs. In recent years, specialized processors like TPUs and neuromorphic processors for machine learning are rapidly entering the datacenter.

There is potential in this variety: Instead of statically utilizing each processor for pre-defined functions, we can use existing processors as a swarm, each processor working on the most suitable workloads. Doing that, we can potentially deliver more computation bandwidth with less power, lower latency and lower total cost of ownership.

Non-standard utilization of existing processors is already happening: GPUs, for example, were already adapted from processors dedicated to graphics into a core enterprise component. Today, GPUs are used for machine learning and cryptocurrency mining, for example.

I call the technology to utilize the processors as a swarm Compute Orchestration. Its tenets can be described in four simple bullets:

Compute orchestration is, in short, automatic adaptation of binary code and automatic allocation to the most suitable processor types available. I split the evolution of compute orchestration into four generations:

Compute Orchestration Gen 1: Static Allocation To Specialized Co-Processors

This type of compute orchestration is everywhere. Most devices today include co-processors to offload some specialized work from the CPU. Usually, the toolchain or runtime environment takes care of assigning workloads to the co-processor. This is seamless to the developer, but also limited in functionality.

Best known example is the use of cryptographic co-processors for relevant functions. Being liberal in our definitions of co-processor, Memory Management Units (MMUs) to manage virtual memory address translation can also be considered an example.

Compute Orchestration Gen 2: Static Allocation, Heterogeneous Hardware

This is where we are at now. In the second generation, the software relies on libraries, dedicated runtime environments and VMs to best use the available hardware. Let's call the collection of components that help better use the hardware "frameworks." Current frameworks implement specific code to better use specific processors. Most prevalent are frameworks that know how to utilize GPUs in the cloud. Usually, better allocation to bare metal hosts remains the responsibility of the developer. For example, the developer/DevOps engineer needs to make sure a machine with a GPU is available for the relevant microservice. This phenomenon is what brought me to think of Compute Orchestration in the first place, as it proves there is more slack in our current hardware.

Common frameworks like OpenCL allow programming compute kernels to run on different processors. TensorFlow allows assigning nodes in a computation graph to different processors (devices).
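As a concrete illustration of this kind of static, developer-directed placement, a minimal TensorFlow sketch might look like the following; the "/GPU:0" device string only resolves if a GPU is actually visible to TensorFlow, otherwise soft placement falls back to the CPU.

```python
import tensorflow as tf

# Explicit, static device placement: the developer decides where each op runs.
with tf.device("/CPU:0"):
    a = tf.random.uniform((1024, 1024))
    b = tf.random.uniform((1024, 1024))

with tf.device("/GPU:0"):
    c = tf.matmul(a, b)        # this matrix multiply is pinned to the GPU

print(c.device)                # shows where the result was actually computed
```

Compute orchestration, as described in this article, would aim to make this placement decision automatically at run time instead of hard-coding it.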

This better use of hardware by using existing frameworks is great. However, I believe there is a bigger edge. Existing frameworks still require effort from the developer to be optimal they rely on the developer. Also, no legacy code from 2016 (for example) is ever going to utilize a modern datacenter GPU cluster. My view is that by developing automated and dynamic frameworks, that adapt to the hardware and workload, we can achieve another leap.

Compute Orchestration Gen 3: Dynamic Allocation To Heterogeneous Hardware

Computation can take an example from the storage industry: Products for better utilization and reliability of storage hardware have innovated for years. Storage startups develop abstraction layers and special filesystems that improve efficiency and reliability of existing storage hardware. Computation, on the other hand, remains a stupid allocation of hardware resources. Smart allocation of computation workloads to hardware could result in better performance and efficiency for big data centers (for example hyperscalers like cloud providers). The infrastructure for such allocation is here, with current data center designs pushing to more resource disaggregation, introduction of diverse accelerators, and increased work on automatic acceleration (for example: Workload-aware Automatic Parallelization for Multi-GPU DNN Training).

For high level resource management, we already have automatic allocation. For example, project Mesos (paper) focusing on fine-grained resource sharing, Slurm for cluster management, and several extensions using Kubernetes operators.

To further advance from here would require two steps: automatic mapping of available processors (which we call the compute environment) and workload adaptation. Imagine a situation where the developer doesn't have to optimize her code to the hardware. Rather, the runtime environment identifies the available processing hardware and automatically optimizes the code. Cloud environments are heterogeneous and changing, and the code should change accordingly (in fact it's not the code, but the execution model in the runtime environment of the machine code).

Compute Orchestration Gen 4: Automatic Allocation To Dynamic Hardware

"A thought, even a possibility, can shatter and transform us." (Friedrich Wilhelm Nietzsche)

The quote above is to say that we are far from practical implementation of the concept described here (as far as I know). We can, however, imagine a technology that dynamically re-designs a data center to serve the needs of running applications. This change in the way whole data centers meet computation needs has already started. FPGAs are used more often and appear in new places (FPGAs in hosts, FPGA machines in AWS, SmartNICs), providing the framework for constant reconfiguration of hardware.

To illustrate the idea, I will use an example: Microsoft initiated project Catapult, augmenting CPUs with an interconnected and configurable compute layer composed of programmable silicon. The timeline on the project's website is fascinating. The project started off in 2010, aiming to improve search queries by using FPGAs. Quickly, it proposed the use of FPGAs as "bumps in the wire," adding computation in new areas of the data path. Project Catapult also designed an architecture for using FPGAs as a distributed resource pool serving the whole data center. Then, the project spun off Project BrainWave, utilizing FPGAs for accelerating AI/ML workloads.

This was just an example of innovation in how we compute. Quick online search will bring up several academic works on the topic. All we need to reach the 4th generation is some idea synthesis, combining a few concepts together:

Low effort HDL generation (for example Merlin compiler, BORPH)

In essence, what I am proposing is to optimize computation by adding an abstraction layer that:

Automatic allocation on agile hardware is the recipe for best utilizing existing resources: faster, greener, cheaper.

The trends and ideas mentioned in this article can lead to many places. It is very likely that we are already working with existing hardware in the optimal way. It is my belief that we are in the midst of the improvement curve. In recent years, we had increased innovation in basic hardware building blocks, new processors for example, but we still have room to improve in overall allocation and utilization. The more we deploy new processors in the field, the more slack we have in our hardware stack. New concepts, like edge computing and resource disaggregation, bring new opportunities for optimizing legacy code by smarter execution. To achieve that, legacy code can't be expected to be refactored. Developers and DevOps engineers can't be expected to optimize for the cloud configuration. We just need to execute code in a smarter way, and that is the essence of compute orchestration.

The conceptual framework described in this article should be further explored. We first need to find the killer app (what type of software we optimize to which type of hardware). From there, we can generalize. I was recently asked in a round table what is the next generation of computation? Quantum computing? Tensor Processor Units? I responded that all of the above, but what we really need is better usage of the existing generation.

Guy Harpak is the head of technology at Mercedes-Benz Research & Development in its Tel Aviv, Israel facility. Please feel free to contact him on any thoughts on the topics above at harpakguy@gmail.com. Harpak notes that this contributed article reflects his personal opinion and is in no way related to people or companies that he works with or for.

Related Reading: If you find this article interesting, I would recommend researching the following topics:

Some interesting articles on similar topics:

Return Of The Runtimes: Rethinking The Language Runtime System For The Cloud 3.0 Era

The Deep Learning Revolution And Its Implications For Computer Architecture And Chip Design (by Jeffrey Dean from Google Research)

Beyond SmartNICs: Towards A Fully Programmable Cloud

Hyperscale Cloud: Reimagining Datacenters From Hardware To Applications

More here:
Disrupt The Datacenter With Orchestration - The Next Platform
