
Quantum Physics Falls Apart without Imaginary Numbers – Scientific American

Three years ago one of us, Toni, asked another of us, Marco, to come to his office at the Institute of Photonic Sciences, a large research center in Castelldefels, near Barcelona. "There is a problem that I wanted to discuss with you," Toni began. "It is a problem that Miguel and I have been trying to solve for years." Marco made a curious face, so Toni posed the question: Can standard quantum theory work without imaginary numbers?

Imaginary numbers, when multiplied by themselves, produce a negative number. They were first named "imaginary" by philosopher René Descartes, to distinguish them from the numbers he knew and accepted (now called the real numbers), which did not have this property. Later, complex numbers, which are the sum of a real and an imaginary number, gained wide acceptance by mathematicians because of their usefulness for solving complicated mathematical problems. They aren't part of the equations of any fundamental theory of physics, however, with one exception: quantum mechanics.

The most common version of quantum theory relies on complex numbers. When we restrict the numbers appearing in the theory to the real numbers, we arrive at a new physical theory: real quantum theory. In the first decade of the 21st century, several teams showed that this real version of quantum theory could be used to correctly model the outcomes of a large class of quantum experiments. These findings led many scientists to believe that real quantum theory could model any quantum experiment. Choosing to work with complex instead of real numbers didn't represent a physical stance, scientists thought; it was just a matter of mathematical convenience.

Still, that conjecture was unproven. Could it be false? After that conversation in Toni's office, we started on a months-long journey to refute real quantum theory. We eventually came up with a quantum experiment whose results cannot be explained through real quantum models. Our finding means that imaginary numbers are an essential ingredient in the standard formulation of quantum theory: without them, the theory would lose predictive power. What does this mean? Does this imply that imaginary numbers exist in some way? That depends on how seriously one takes the notion that the elements of the standard quantum theory, or any physical theory, exist as opposed to their being just mathematical recipes to describe and make predictions about experimental observations.

Complex numbers date to the early 16th century, when Italian mathematician Antonio Maria Fiore challenged professor Niccolò Fontana Tartaglia ("the stutterer") to a duel. In Italy at that time, anyone could challenge a mathematics professor to a math duel, and if they won, they might get their opponent's job. As a result, mathematicians tended to keep their discoveries to themselves, deploying their theorems, corollaries and lemmas only to win intellectual battles.

From his deathbed, Fiore's mentor, Scipione del Ferro, had given Fiore a formula for solving equations of the form x³ + ax = b, also known as cubic equations. Equipped with his master's achievement, Fiore presented Tartaglia with 30 cubic equations and challenged him to find the value of x in each case.

Tartaglia discovered the formula just before the contest, solved the problems and won the duel. Tartaglia later confided his formula to physician and scientist Gerolamo Cardano, who promised never to reveal it to anyone. Despite his oath, though, Cardano came up with a proof of the formula and published it under his name. The complicated equation contained two square roots, so it was understood that, should the numbers within be negative, the equation would have no solutions, because there are no real numbers that, when multiplied by themselves, produce a negative number.

In the midst of these intrigues, a fourth scholar, Rafael Bombelli, made one of the most celebrated discoveries in the history of mathematics. Bombelli found solvable cubic equations for which the del Ferro-Tartaglia-Cardano formula nonetheless required computing the square root of a negative number. He then realized that, for all these examples, the formula gave the correct solution, as long as he pretended that there was a new type of number whose square equaled −1. Assuming that every variable in the formula was of the form a + √−1 × b, with a and b being normal numbers, the terms multiplying √−1 canceled out, and the result was the normal solution of the equation.
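Bombelli's trick is easy to replay with modern tools. The sketch below (using Python's built-in complex arithmetic, an anachronism for illustration only) works through his classic case x³ = 15x + 4: Cardano's formula demands the square root of −121, an impossibility over the real numbers, yet the imaginary parts of the two cube roots cancel and the real root x = 4 emerges.

```python
import cmath

# Bombelli's classic case: x**3 = 15*x + 4, whose real solution is x = 4.
# Cardano's formula requires sqrt((q/2)**2 - (p/3)**3) = sqrt(4 - 125) = sqrt(-121),
# which has no real value but is perfectly fine over the complex numbers.
p, q = 15, 4
disc = cmath.sqrt((q / 2) ** 2 - (p / 3) ** 3)   # = 11i, an "imaginary" quantity

# The two cube-root terms of Cardano's formula, computed as complex numbers
u = (q / 2 + disc) ** (1 / 3)   # principal cube root of 2 + 11i, i.e. 2 + i
v = (q / 2 - disc) ** (1 / 3)   # principal cube root of 2 - 11i, i.e. 2 - i

x = u + v   # the imaginary parts cancel, leaving the real root
print(x)    # approximately 4, and indeed 4**3 = 15*4 + 4
```

The intermediate quantities are genuinely complex, but they conspire to give a real answer, which is exactly what convinced Bombelli to take these numbers seriously.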

For the next few centuries mathematicians studied the properties of all numbers of the form a + √−1 × b, which were called complex. In the 17th century Descartes, considered the father of rational sciences, associated these numbers with nonexistent features of geometric shapes. Thus, he named the number i = √−1 "imaginary," to contrast it with what he knew as the normal numbers, which he called "real." Mathematicians still use this terminology today.

Complex numbers turned out to be a fantastic tool, not only for solving equations but also for simplifying the mathematics of classical physics, the physics developed up until the 20th century. An example is the classical understanding of light. It is easier to describe light as rotating complex electric and magnetic fields than as oscillating real ones, despite the fact that there is no such thing as an imaginary electric field. Similarly, the equations that describe the behavior of electronic circuits are easier to solve if you pretend electric currents have complex values, and the same goes for gravitational waves.
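The simplification at work here is the phasor trick: an oscillation A·cos(ωt + φ) is treated as the real part of the rotating complex number A·e^{i(ωt + φ)}, so adding oscillations reduces to adding complex amplitudes once. A minimal sketch (the amplitudes, phases and frequency are arbitrary illustration values, not tied to any experiment in the text):

```python
import cmath
import math

# Represent each oscillation A*cos(w*t + phi) as the real part of the
# rotating complex number A*exp(1j*(w*t + phi)).
w = 2 * math.pi
A1, phi1 = 1.0, 0.0
A2, phi2 = 0.5, math.pi / 3

# Combine the two oscillations once, as complex amplitudes (phasors)
amp = A1 * cmath.exp(1j * phi1) + A2 * cmath.exp(1j * phi2)

for t in (0.0, 0.1, 0.2):
    # The hard way: sum the real cosines at each instant
    real_sum = A1 * math.cos(w * t + phi1) + A2 * math.cos(w * t + phi2)
    # The easy way: rotate the single combined phasor, then take the real part
    phasor_sum = (amp * cmath.exp(1j * w * t)).real
    assert math.isclose(real_sum, phasor_sum, abs_tol=1e-12)
```

Only the real part is ever physical; the imaginary part is bookkeeping that makes the algebra of phases and superpositions effortless.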

Before the 20th century all such operations with complex numbers were simply considered a mathematical trick. Ultimately the basic elements of any classical theory (temperatures, particle positions, fields, and so on) corresponded to real numbers, vectors or functions. Quantum mechanics, a physical theory introduced in the early 20th century to understand the microscopic world, would radically challenge this state of affairs.

In standard quantum theory, the state of a physical system is represented by a vector (a quantity with a magnitude and direction) of complex numbers called the wave function. Physical properties, such as the speed of a particle or its position, correspond to tables of complex numbers called operators. From the start, this deep reliance on complex numbers went against deeply held convictions that physical theories must be formulated in terms of real magnitudes. Erwin Schrödinger, author of the Schrödinger equation that governs the wave function, was one of the first to express the general dissatisfaction of the physics community. In a letter to physicist Hendrik Lorentz on June 6, 1926, Schrödinger wrote: "What is unpleasant here, and indeed directly to be objected to, is the use of complex numbers. [The wave function] is surely fundamentally a real function."

At first, Schrödinger's uneasiness seemed simple to resolve: he rewrote the wave function, replacing a single vector of complex numbers with two real vectors. Schrödinger insisted this version was the true theory and that imaginary numbers were merely for convenience. In the years since, physicists have found other ways to rewrite quantum mechanics based on real numbers. But none of these alternatives has ever stuck. Standard quantum theory, with its complex numbers, has a convenient rule that makes it easy to represent the wave function of a quantum system composed of many independent parts, a feature that these other versions lack.
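One such real rewrite can be made concrete: every complex number a + bi can be encoded as the 2×2 real matrix [[a, −b], [b, a]], so that complex arithmetic becomes real matrix arithmetic. The sketch below is an illustrative encoding, not Schrödinger's own construction, and the text's caveat still applies: the doubling interacts awkwardly with the standard rule for composing many-part systems.

```python
import numpy as np

def realify(z: complex) -> np.ndarray:
    """Encode the complex number a + b*i as the real matrix [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z1, z2 = 2 + 3j, 1 - 4j

# The matrix product of the encodings equals the encoding of the complex product,
# so all complex arithmetic can be carried out with real numbers alone.
lhs = realify(z1) @ realify(z2)
rhs = realify(z1 * z2)
assert np.allclose(lhs, rhs)

# The imaginary unit i becomes a real rotation matrix whose square is minus
# the identity: nothing "imaginary" remains, only a 90-degree rotation.
i_mat = realify(1j)
assert np.allclose(i_mat @ i_mat, -np.eye(2))
```

The existence of such encodings is why physicists long believed the complex numbers were dispensable; the surprise described in this article is that, at the level of experiments on quantum networks, they are not.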

What happens, then, if we restrict wave functions to real numbers and keep the usual quantum rule for composing systems with many parts? At first glance, not much. When we demand that wave functions and operators have real entries, we end up with what physicists often call real quantum theory. This theory is similar to standard quantum theory: if we lived in a real quantum world, we could still carry out quantum computations, send secret messages to one another by exchanging quantum particles, and teleport the physical state of a subatomic system over intercontinental distances.

All these applications are based on the counterintuitive features of quantum theory, such as superpositions, entanglement and the uncertainty principle, which are also part of real quantum theory. Because this formulation included these famed quantum features, physicists long assumed that the use of complex numbers in quantum theory was fundamentally a matter of convenience, and real quantum theory was just as valid as standard quantum theory. Back on that autumn morning in 2020 in Toni's office, however, we began to doubt it.

When designing an experiment to refute real quantum theory, we couldn't make any assumptions about the experimental devices scientists might use, as any supporter of real quantum theory could always challenge them. Suppose, for example, that we built a device meant to measure the polarization of a photon. An opponent could argue that although we thought we measured polarization, our apparatus actually probed some other property, say, the photon's orbital angular momentum. We have no way to know that our tools do what we think they do. Yet falsifying a physical theory without assuming anything about the experimental setup sounds impossible. How can we prove anything when there are no certainties to rely on? Luckily, there was a historical precedent.

Despite being one of quantum theory's founders, Albert Einstein never believed our world to be as counterintuitive as the theory suggested. He thought that although quantum theory made accurate predictions, it must be a simplified version of a deeper theory in which its apparently paradoxical peculiarities would be resolved. For instance, Einstein refused to believe that Heisenberg's uncertainty principle, which limits how much can be known about a particle's position and speed, was fundamental. Instead he conjectured that the experimentalists of his time were not able to prepare particles with well-defined positions and speeds because of technological limitations. Einstein assumed that a future classical theory (one where the physical state of an elementary particle can be fully determined and isn't based on probabilities) would account for the outcomes of all quantum experiments.

We now know that Einstein's intuition was wrong because all such classical theories have been falsified. In 1964 John S. Bell showed that some quantum effects can't be modeled by any classical theory. He envisioned a type of experiment, now called a Bell test, that involves two experimentalists, Alice and Bob, who work in separate laboratories. Someone in a third location sends each of them a particle, which they measure independently. Bell proved that in any classical theory with well-defined properties (the kind of theory Einstein hoped would win out), the results of these measurements obey some conditions, known as Bell's inequalities. Then, Bell proved that these conditions are violated in some setups in which Alice and Bob measure an entangled quantum state. The important property is that Bell's inequalities hold for all classical theories one can think of, no matter how convoluted. Therefore, their violation refuted all such theories.
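Bell's argument can be made numerically explicit with the CHSH form of his inequality. The sketch below uses the standard textbook state and measurement angles (not Bell's original notation): any classical theory with well-defined properties keeps the CHSH combination of correlations at or below 2, whereas quantum theory predicts 2√2 for a maximally entangled pair.

```python
import numpy as np

def meas(theta: float) -> np.ndarray:
    # Spin measurement along angle theta in the x-z plane; outcomes are +1 or -1
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Maximally entangled two-qubit state (|00> + |11>) / sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def corr(a: float, b: float) -> float:
    # Expectation value <psi| A(a) tensor B(b) |psi> of the joint measurement
    return psi @ np.kron(meas(a), meas(b)) @ psi

# Standard optimal angles for Alice (0, pi/2) and Bob (pi/4, -pi/4)
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4

# CHSH combination: bounded by 2 for every classical (local hidden variable)
# theory, but reaching 2*sqrt(2) ~ 2.828 in quantum theory
S = corr(a0, b0) + corr(a0, b1) + corr(a1, b0) - corr(a1, b1)
print(S)
```

The gap between 2 and 2√2 is what the 2015 loophole-free experiments measured, and it is the same style of gap, an inequality obeyed by one whole class of theories and violated by another, that the authors later derived for real versus complex quantum theory.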

Various Bell tests performed in labs since then have measured just what quantum theory predicts. In 2015 Bell experiments done in Delft, the Netherlands; Vienna, Austria; and Boulder, Colo., finally did so while closing all the loopholes previous experiments had left open. Those results do not tell us that our world is quantum; rather they prove that, contra Einstein, it cannot be ruled by classical physics.

Could we devise an experiment similar to Bell's that would rule out quantum theory based on real numbers? To achieve this feat, we first needed to envision a standard quantum theory experiment whose outcomes cannot be explained by the mathematics of real quantum theory. We planned to first design a gedankenexperiment (a thought experiment) that we hoped physicists would subsequently carry out in a lab. If it could be done, we figured, this test should convince even the most skeptical supporter that the world is not described by real quantum theory.

Our first, simplest idea was to try to upgrade Bell's original experiment to falsify real quantum theory, too. Unfortunately, two independent studies published in 2008 and 2009, one by Károly Pál and Tamás Vértesi and another by Matthew McKague, Michele Mosca and Nicolas Gisin, found this wouldn't work. The researchers were able to show that real quantum theory could predict the measurements of any possible Bell test just as well as standard quantum theory could. Because of their research, most scientists concluded that real quantum theory was irrefutable. But we and our co-authors proved this conclusion wrong.

Within two months of our conversation in Castelldefels, our little project had gathered eight theoretical physicists, all based there or in Geneva or Vienna. Although we couldn't meet in person, we exchanged e-mails and held online discussions many times a week. It was through a combination of long solitary walks and intensive Zoom meetings that on one happy day in November 2020 we came up with a standard quantum experiment that real quantum theory could not model. Our key idea was to abandon the standard Bell scenario, in which a single source distributes particles to several separate parties, and consider a setup with several independent sources. We had observed that, in such a scenario, which physicists call a quantum network, the Pál-Vértesi-McKague-Mosca-Gisin method could not reproduce the experimental outcomes predicted by complex number quantum theory. This was a promising start, but it was not enough: similarly to what Bell achieved for classical theories, we needed to rule out the existence of any form of real quantum theory, no matter how clever or sophisticated, that could explain the results of quantum network experiments. For this, we needed to devise a concrete gedankenexperiment in a quantum network and show that the predictions of standard quantum theory were impossible to model with real quantum theory.

Initially we considered complicated networks involving six experimentalists and four sources. In the end, however, we settled for a simpler quantum experiment with three separate experimenters called Alice, Bob and Charlie and two independent particle sources. The first source sends out two particles of light (photons), one to Alice and one to Bob; the second one sends photons to Bob and Charlie. Next, Alice and Charlie choose a direction in which to measure the polarization of their particles, which can turn out to be up or down. Meanwhile Bob measures his two particles. When we do this over and over again, we can build up a set of statistics showing how often the measurements correlate. These statistics depend on the directions Alice and Charlie choose.

Next, we needed to show that the observed statistics could not be predicted by any real quantum system. To do so, we relied on a powerful concept known as self-testing, which allows a scientist to certify both a measurement device and the system it's measuring at once. What does that mean? Think of a measurement apparatus, for instance, a weight scale. To guarantee that it's accurate, you need to test it with a mass of a certified weight. But how to certify this mass? You must use another scale, which itself needs to be certified, and so on. In classical physics, this process has no end. Astonishingly, in quantum theory, it's possible to certify both a measured system and a measurement device simultaneously, as if the scale and the test mass were checking each other's calibration.

With self-testing in mind, our impossibility proof worked as follows. We conceived of an experiment in which, for any of Bob's outcomes, Alice and Charlie's measurement statistics self-tested their shared quantum state. In other words, the statistics of one confirmed the quantum nature of the other, and vice versa. We found that the only description of the devices that was compatible with real quantum theory had to be precisely the Pál-Vértesi-McKague-Mosca-Gisin version, which we already knew didn't work for a quantum network. Hence, we arrived at the contradiction we were hoping for: real quantum theory could be falsified.

We also found that as long as any real-world measurement statistics observed by Alice, Bob and Charlie were close enough to those of our ideal gedankenexperiment, they could not be reproduced by real quantum systems. The logic was very similar to Bell's theorem: we ended up deriving a Bell's inequality for real quantum theory and proving that it could be violated by complex quantum theory, even in the presence of noise and imperfections. That allowance for noise is what makes our result testable in practice. No experimentalists ever achieve total control of their lab; the best they can hope for is to prepare quantum states that are approximately what they were aiming for and to make approximately the measurements they intended, which will allow them to generate approximately the same measurement statistics that were predicted. The good news is that within our proof, the experimental precision required to falsify real quantum theory, though demanding, was within reach of current technologies. When we announced our results, we hoped it was just a matter of time before someone, somewhere, would realize our vision.

It happened quickly. Just two months after we made our discovery public, an experimental group in Shanghai reported implementing our gedankenexperiment with superconducting qubits (computer bits made of quantum particles). Around the same time, a group in Shenzhen also contacted us to discuss carrying out our gedankenexperiment with optical systems. Months later, we read about yet another optical version of the experiment, also conducted in Shanghai. In each case, the experimenters observed correlations between the measurements that real quantum theory could not account for. Although there are still a few experimental loopholes to take care of, taken together these three experiments make the real quantum hypothesis very difficult to sustain.

We now know neither classical nor real quantum theory can explain certain phenomena, so what comes next? If future versions of quantum theory are proposed as alternatives to the standard theory, we could use a similar technique to try to exclude them as well. Could we go one step further and falsify standard quantum theory itself?

If we did, we would be left with no theory for the microscopic world given that we currently lack an alternative. But physicists are not convinced that standard quantum theory is true. One reason is that it seems to conflict with one of our other theories, general relativity, used to describe gravity. Scientists are seeking a new, deeper theory that could reconcile these two and perhaps replace standard quantum theory. If we could ever falsify quantum theory, we might be able to point the way toward that deeper theory.

In parallel, some researchers are trying to prove that no theory other than quantum will do. One of our co-authors, Mirjam Weilenmann, in collaboration with Roger Colbeck, recently argued that it may be possible to discard all alternative physical theories through suitable Bell-like experiments. If this were true, then those experiments would show that quantum mechanics is indeed the only physical theory compatible with experimental observations. The possibility makes us shiver: Can we really hope to demonstrate that quantum theory is so special?


The Little-Known Origin Story behind the 2022 Nobel Prize in Physics – Scientific American

In November 1949 Chien-Shiung Wu and her graduate student, Irving Shaknov, descended to a laboratory below Columbia University's Pupin Hall. They needed antimatter for a new experiment, so they made their own, using a machine called a cyclotron. The machine's multiton magnet was so gigantic that, according to university folklore, a decade earlier administrators had to blast a hole in an exterior wall and recruit the football team to maneuver the block of iron into the building.

The magnetic field produced by a cyclotron accelerates particles to dizzying speeds. In the lab, Wu and Shaknov used it to bombard a sheet of copper with deuterons, generating an unstable isotope, copper-64, as a source of positrons, the antimatter. When a positron and an electron collide, they annihilate each other, releasing two photons that fly apart in opposite directions. A few years earlier physicist John Wheeler had predicted that when matter and antimatter met, the resulting photons would be orthogonally polarized. Wu and Shaknov were looking for conclusive proof of Wheeler's so-called pair theory.

They weren't the first. An earlier team of experimentalists had a high margin of error, so their results were not sufficiently reliable. A second team came back with results that were too low to match Wheeler's predictions. But Wu was known for her extreme precision and strategic experimental design. The prior year she had proved Enrico Fermi's theory of beta decay after more than a decade of attempts by others.

Wu and Shaknov packed the copper isotope into a tiny capsule, eight millimeters long, and waited for electrons and positrons to collide inside the apparatus. Then they tracked the resulting annihilation radiation at the farthest edges of their experiment, using two photomultiplier tubes, anthracene crystals and a scintillation counter as a gamma-ray detector.

Ultimately they captured significantly more data than their predecessors, and what they saw was astonishing. Their evidence suggested that pairs of photons from particle collisions remained polarized at right angles to each other, consistently, as if somehow connected, even at a distance. Their experiment had proved Wheeler's pair theory, and Wu and Shaknov published their findings on New Year's Day in 1950 in a one-page letter to the Physical Review. But it also became the first experiment to document evidence of something weirder: that the properties of entangled particles are always perfectly correlated, no matter how far apart they stray. Entanglement is so strange that Albert Einstein thought it proved where quantum physics went wrong.

In 2022 the Nobel Prize Committee honored experimental work on entanglement by three physicists. John Clauser, Alain Aspect and Anton Zeilinger had each produced increasingly convincing evidence for the phenomenon by improving on their predecessors' experimental designs. They ruled out one alternative explanation after another until, finally, entanglement was the only conclusion left standing. Although Wu's 1949 experiment had not been designed to rule out competing explanations, historians agree it was the first to document entangled photons. Yet Wu, who died in 1997, was not mentioned when the 2022 awards were announced. It's not the first time she has been overlooked.

Chien-Shiung Wu was born the same year the new Republic of China was founded, in a small town in the Yangtze River basin. Her father, Zhong-Yi Wu, was an intellectual, a revolutionary and a feminist. To celebrate his daughter's birth and the end of dynastic rule, Zhong-Yi hosted a party in the spring of 1912 at which he announced his daughter's name and his new plan to open the region's first elementary school for girls. At a time when most names for girls suggested a delicate fragrance or beautiful flower, Zhong-Yi's name for his daughter translated to "strong hero."

Chien-Shiung grew up in the crosscurrents of Chinese nationalism and the New Culture Movement that criticized traditional Confucian values. Political movements at home were calling for science and democracy, and for a generation of scholars who could elevate China's status. In 1936, at age 24, having reached the limit of what China could offer in physics training, she boarded the SS Hoover bound for California. Wu was off to pursue a Ph.D. in physics. She would study under pioneers such as Emilio Segrè, Ernest Lawrence and J. Robert Oppenheimer.

At the University of California, Berkeley, Wu became a star student. Her dissertation research on the fission products of uranium was so sophisticated and sensitive that it was turned over to the military and embargoed until the end of World War II. Yet Wu had trouble finding a job after graduation. For two years she depended on mentors for research appointments. At the time, none of the top 20 research universities in the country had a woman on the physics faculty.

Gender bias was not Wu's only obstacle. A year after her arrival in the U.S., the escalation of World War II cut off communication with China, and discrimination against Asian immigrants had intensified, especially on the West Coast. In 1940 Berkeley's acting comptroller wrote to Wu's supervisor to warn him that Wu's employment would be approved only on a temporary basis; less than a year later he wrote again that regulations laid down by the Regents meant "Miss Wu is not eligible for employment" and that "immediate steps should be taken to dismiss this employee from your staff." When Oppenheimer left Berkeley in 1942 to lead the Manhattan Project, he brought many of his students along; Wu, despite her acclaim, was not invited.

Eventually Wu moved East for a teaching position at Smith College. The following year she became the first woman hired to the Princeton University physics faculty. Not long after, the Manhattan Project finally recruited her, and she played a quiet, conflicted and crucial role in the development of the atomic bomb. Yet Wu navigated repeated investigations by immigration authorities and threats of deportation for years. When she had left China in 1936, Wu expected to be away for only a short while. In 1945, when the silence between the U.S. and China lifted, China was embroiled in a brutal civil war, and relatives cautioned against returning too soon. By 1949, the year Wu observed evidence of the criterion for entanglement, Mao Zedong had established communism in the People's Republic of China, and McCarthyism was ramping up in the U.S., making travel home nearly impossible. She never saw her family again.

Entanglement emerges from the most rigorous branches of mathematics and physics yet has poetic appeal. Abner Shimony, a philosopher and physicist, called it "passion at a distance." Entanglement offers the wild notion that once certain particles or systems interact, they can no longer be described independently of one another. What happens to one, no matter how far it may travel from its entangled partner, instantly affects the other, as decades of evidence now shows. The characteristics of entangled particles are correlated, without any apparent communication, and at any distance. What's more, each member of the entangled pair seems to lack a complete set of definite properties until the moment when one partner is measured. Then, instantly, the entangled pair will be in sync, even if the particles have drifted galaxies apart. It's the ultimate star-crossed love.

To grasp entanglement's full strangeness, it helps to understand that when quantum physicists first set out to quantify the position and motion of subatomic particles, the tiny objects could not be pinned down. Sometimes particles seemed localized and distinct. At other times, the particles showed a broad and wavelike behavior, with influence spreading out over large regions of physical space relative to their natural size. Sometimes early 20th-century experimentalists couldn't be sure the particles were even tangible objects at all.

In 1927 physicist Werner Heisenberg called this problem the uncertainty principle. He had studied under Niels Bohr, a founder of quantum mechanics, and Bohr had coined the term "complementarity" to describe the uncanny experimental results that quantum physics produced. For Bohr, one way to think about the entire confusing situation was to presume that certain pairs of observations, such as a particle's position and momentum, were complementary to each other: complementary characteristics could not be perceived or measured exactly in the subatomic world at the same time. Perhaps those characteristics did not even exist until the very moment of measurement. Things got weirder, though, when the mathematics of quantum mechanics suggested that measuring one particle might instantaneously influence the state of another particle far away. This seemed especially odd if the particles had no measurable attributes in the first place until the two, somehow, telepathically connected.

In 1935 Einstein, Boris Podolsky and Nathan Rosen tried to poke holes in quantum mechanics by pointing out how counterintuitive it seemed. The famous Einstein-Podolsky-Rosen (EPR) paradox pointed directly at entanglement. EPR suggested that there had to be a better explanation for why and how one particle could affect its entangled partner faster than the speed of light. Einstein derisively nicknamed the phenomenon "spooky action at a distance." For Einstein and his co-authors, spooky action proved that quantum theory was still incomplete.

Like Einstein, physicist David Bohm felt sure there was a perfectly reasonable explanation for entanglement. Perhaps we couldn't see it quite yet, but the explanation might not be so spooky after all. It could be attributed to hidden variables. Physics simply had more work to do to find them. In 1957 Bohm and his graduate student Yakir Aharonov wrote about how photon research could harness the famous EPR paradox to reveal these hidden variables. "[T]here has been done an experiment which, as we shall see, tests essentially for this point, but in a more indirect way," Bohm wrote.

That experiment, says Indianara Silva, a professor of physical sciences and history at the State University of Feira de Santana in Brazil, was the 1949 Wu-Shaknov experiment.

Silva is a historian who is acutely attentive to the missing stories of women in science. When Wu and Shaknov made the first precise measurement of Wheeler's pair theory in 1949, Silva says, they became the first to document entanglement between photons, inspiring decades of later research in quantum foundations. Silva has identified a string of publications by other physicists and historians who acknowledge Wu's 1949 observation of entangled photons. She begins with Bohm in 1957 and continues through Zeilinger, one of the 2022 Nobel laureates, who wrote in 1999 that an earlier experiment by Wu and Shaknov (1950) had "demonstrated the existence of spatially separated entangled states."

Bohm had good reason to trust Wu's findings. He was a few years junior to Wu when they were graduate students at Berkeley. Both had studied under Oppenheimer, and both worked in E. O. Lawrence's prestigious radiation laboratory. Bohm had every reason to know of Wu's stellar reputation. He acknowledged Wu in a footnote in his 1957 article.

Silva traces how Wu's experimental work, in 1949 and later in 1971, prompted later entanglement experiments. Silva's findings were published in The Oxford Handbook of the History of Quantum Interpretations in 2022. She points out how Bohm's article about hidden variables inspired John Bell, who proposed that the number of quantum coincidences between particles could be predicted and counted. In 1964, in an obscure journal called Physics Physique Fizika, Bell discussed Bohm's 1957 paper (which referenced Wu's experiment) and launched his own new theory. A few years later at Columbia, a young Clauser found Bell's theorem in the library. The theorem inspired Clauser to design a new experiment, one he hoped would show that hidden variables were real.

Interestingly, the Wu-Shaknov letter to Physical Review in 1950 discusses Wheeler's pair theory but is silent about entanglement. In 2012 physicist F. J. Duarte called Wheeler's pair theory "the essence of entanglement." Other physicists, and historians such as Silva, clearly spotted the connection, too. So why did Wu not mention quantum entanglement in her 1950 letter?

Wu might have been hesitant to discuss evidence of entanglement because throughout the 1950s and 1960s, such quantum-foundations work was stigmatized as junk science. Back then, explains David Kaiser, a professor of physics and history of science at the Massachusetts Institute of Technology, the idea of using an experiment to prove or disprove theories about quantum physics or to test for local hidden variables was not even an inkling for most physicists. Researchers who explored questions about entanglement often disguised their research because backlash could stymie a promising career. We're left to wonder whether Wu might have done so as well.

Silva points out that Wu came back to her 1949 experiment more than 20 years later to refine it further. By then, Wu was far more professionally secure, and she addressed questions about quantum mechanics directly. She favored traditional quantum-entanglement interpretations, not Bohm's theory. In 1971, when she designed a new version of the 1949 experiment, Wu wrote that it "should certainly quiet those proponents of the hidden variables."

When Clauser published his proposed test of Bell's theorem in 1969, he took care to distinguish the Wu-Shaknov experiment from his own. Clauser had wanted to prove hidden variables were real; instead, in 1972, he ruled out local hidden variables and demonstrated entanglement with even greater certainty. He had counted coincidences, much as Bell suggested, but there were far more coincidences than hidden variables could explain. Clauser's work prompted Aspect and Zeilinger's later experiments, which closed lingering loopholes and supported entanglement further. Together those experiments led to their 2022 Nobel Prize.

By the time Bohm's paper on hidden variables emerged, much had changed in Wu's life. She had married and moved to the East Coast. She had broken a glass ceiling at Princeton, had a child and had become a U.S. citizen. She was on the faculty of Columbia University, though still not a full professor.

In 1956 Wu's Columbia colleague T. D. Lee approached her for advice about an odd question. He and his research partner, Chen Ning Yang, wondered if some of the tiniest particles in the universe might violate long-established expectations. In response, Wu pointed Lee to a body of research, and she described a handful of possible experiments to address the questions he posed.

Yang and Lee were far from the likeliest of candidates to act on Wu's suggestions. Both were theorists, not experimentalists like Wu. In an oral history with the Simons Foundation half a century later, Yang confessed that neither he nor Lee had any sincere belief in 1956 that their hypothesis would hold up. In fact, physicists had assumed for decades that the opposite would be true: that symmetry would be among the immutable, consistent patterns in many building blocks of our universe. Conservation of parity held that if you ran the same sequence of events in its mirror image, it would unfold in the same way. Yang and Lee's hypothesis, though, suggested that the behavior of nuclear particles in beta decay might not look the same if you flipped the events in an imaginary mirror. The idea simply did not align with conventional scientific thought or with common sense.

Like her father, Wu was willing to question mainstream thinking. She suspected the issue was important, and she knew how to approach it. So she designed and led an experiment to address her colleagues' ideas. It meant canceling a trip to China that would have been her first visit home since 1936.

To run the experiment she had in mind, Wu needed to reduce the temperature of radioactive cobalt-60 nuclei until the particles almost stopped moving. She wanted to study whether the daughter particles of nuclear decay shot out in a symmetrical pattern, as all of mainstream physics believed they would, or if the radioactive patterns showed a preference for right-handed or left-handed behavior. She enlisted cooperation from the National Bureau of Standards (NBS, now NIST) in Washington, D.C., because, unlike many other labs, it had the technology and expertise to work at temperatures close to absolute zero. For months Wu commuted between New York City and Washington, overseeing graduate student work that supported the experiment.

By January 1957, in close consultation with Yang and Lee, Wu and her NBS partners made an astonishing discovery. Beta-decay particles were slightly left-handed, not symmetrical as all of physics had assumed. As soon as it was announced, Yang, Lee and Wu, along with other experimentalists who followed Wu's work, found themselves on a national conference circuit, their names and images splashed across the popular press. When the American Physical Society met at the Hotel New Yorker that year, they presented their findings in what the New Yorker called "the largest hall ... occupied by so immense a crowd that some of its members did everything but hang from the chandeliers."

That October, Yang and Lee became the first two Chinese Americans in history to win the Nobel Prize. Although Nobel rules allowed up to three award recipients each year, Wu was not included. It could hardly be more apt that the law of physics that Wu toppled was called the principle of parity. Like a prism, the 1957 Nobel Prize separated out elements of identity like bands of light, rendering the impact of gender more visible. The following year Columbia finally promoted Wu to the rank of full professor.

In his Nobel lecture that December, Yang told the committee and guests how crucial Wu's experiment had been, making a bold statement that the results were due to Wu's team's courage and skill. Lee would later plead with the Nobel Committee to recognize Wu's work. Oppenheimer publicly stated that Wu should have shared in the 1957 prize. Segrè called the overthrow of parity "probably the major development of physics after the war."

Other scientists criticized Wu's exclusion from the highest recognition of scientific achievement, too. In 1991 Douglas Hofstadter, the author of Gödel, Escher, Bach, organized scientists to write letters to the Nobel Committee recommending Wu for the physics prize. And in 2018, 1,600 researchers invoked Wu's name in an open letter to CERN challenging current-day sexism in physics. "[T]here are at least four women whose work is relevant for particle physics who are widely viewed as having deserved the Nobel prize but who did not receive it, in some cases even though their male colleagues did," the letter says. Wu's name appears at the top of that list.

After overthrowing parity, Wu became the first woman to receive the Comstock Prize from the National Academy of Sciences; the first female president of the American Physical Society; the first physicist to receive the Wolf Prize; and the first living physicist to have an asteroid named in her honor. Her work pushed open doors to university teaching in the West for women and scientists of color. In China, she is revered. In 2021 the U.S. Postal Service released a Forever stamp with Wu's portrait. Today Wu's parity experiment is understood as an early step on the path to what would become the Standard Model of particle physics, and it points toward possible answers about why matter exists in our universe at all.

Wu's early entanglement work, however, remained in obscurity. Sometimes by examining one part of a system, we begin to perceive a related link, at a distance. The 2022 Nobel Prize celebrated a set of connected experiments that took place at great distance from one another. Even though Wu couldn't have been awarded the prize posthumously, her early research is finally coming to light as a crucial part of that entangled history, thanks in large part to historians such as Silva. Society may prefer a hero narrative or the myth of a lone genius, but a closer look reveals that extraordinary science, like entanglement itself, depends fundamentally on connection.

The Little-Known Origin Story behind the 2022 Nobel Prize in Physics - Scientific American

Instrument adapted from astronomy helps capture singular quantum interference effects – Phys.org

by Kavli Institute for the Physics and Mathematics of the Universe

By adapting technology used for gamma-ray astronomy, a group of experimental researchers has found that X-ray transitions previously thought, according to atomic physics, to be unpolarized are in fact highly polarized, reports a new study published in Physical Review Letters on March 15.

When electrons recombine with highly charged ions, X-ray polarization becomes important for testing fundamental atomic physics involving relativistic and quantum electrodynamics effects. But to date, experimental researchers have been challenged by the technical difficulties these experiments require.

A team of researchers led by the University of Electro-Communications Institute for Laser Science Professor Nobuyuki Nakamura, and including Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) Professor Tadayuki Takahashi and graduate student Yutaka Tsuzuki, and Institute of Space and Astronautical Science (ISAS/JAXA) Associate Professor Shin Watanabe, successfully combined two state-of-the-art instruments and technologies to measure the polarization of high-energy X-rays emitted when highly charged ions capture high-energy electrons.

The first is an electron beam ion trap, the Tokyo-EBIT, owned by the University of Electro-Communications and one of the world's leading instruments for generating and experimenting with highly charged ions. The second is the Si/CdTe Compton Camera for high-energy X-rays, which was developed for astronomical observations mainly at ISAS/JAXA and improved for this research.

The Si/CdTe Compton Camera (left) attached to the Tokyo-EBIT. Credit: University of Electro-Communications, JAXA

The technology behind the Si/CdTe Compton Camera was originally developed by a team led by Takahashi to study X-rays and gamma rays in the universe released by highly energized black holes, supernovae and galaxy clusters, and was built into the Japan Aerospace Exploration Agency (JAXA) ASTRO-H satellite, launched in 2016.

Takahashi had been looking for a way to adapt the technology to other fields. After a meeting with Nakamura, Takahashi began to work on designing the X-ray polarization experiment and implementing the Si/CdTe Compton Camera into the method.

Tsuzuki carried out a large part of the calibration and simulation of the Compton camera.

Tsukuba University Associate Professor Xiao-Min Tong, Institute for Applied Physics and Computational Mathematics Distinguished Research Fellow Xiang Gao, and National Institute for Fusion Science Associate Professor Daiji Kato made a theoretical analysis of the results, which revealed that the unexpectedly large polarization observed in the experiment was the result of quantum interference effects, where quantum mechanical probability waves interfere with each other. Normally, the initial states of two waves must be equal in order for interference to occur, but it was also revealed that the observed polarization was caused by a peculiar interference effect between two waves with different angular momenta.

More information: Nobuyuki Nakamura et al, Strong Polarization of a J=1/2 to 1/2 Transition Arising from Unexpectedly Large Quantum Interference, Physical Review Letters (2023). DOI: 10.1103/PhysRevLett.130.113001

Journal information: Physical Review Letters

Provided by Kavli Institute for the Physics and Mathematics of the Universe


Physics – Universal Quantum Logic through Spin Swapping – Physics

March 17, 2023 • Physics 16, 46

Researchers have demonstrated quantum gate operations in a system where voltage pulses cause neighboring electron spins to swap with one another.

Twenty years ago, theorists proposed an approach for protecting fragile spin-based qubits against the decoherence from noisy inputs. The idea was to encode information in the qubits by swapping the spin states of neighboring electrons. Unlike the usual method of flipping spins, this swapping process would add no energy to the system. Researchers at HRL Laboratories in California have now realized that design in an electrically controlled, silicon-based platform [1]. Their device, which was presented last week at the APS March Meeting, demonstrates a low-error logic gate that can be used to perform any kind of quantum computational algorithm.

In most spin-based qubit designs, a qubit is a single spin with two states, 0 or 1, which have different energies corresponding to the spin's alignment with respect to an applied magnetic field. The qubit can be controlled by adding or removing energy from the system. That's typically accomplished by irradiating the qubit with microwave photons at a frequency corresponding to the qubit's energy-level splitting. The qubit's spin responds by flipping direction, like an on-off switch. This method is well established, but it suffers from decoherence: the qubit tends to lose its quantum information as the result of small inhomogeneities (noise) in the microwave radiation or magnetic field.

In contrast, the team's approach creates a spin-based qubit whose 0 and 1 states have the same energy. Here the qubit states correspond to whether two electron spins in the qubit have antisymmetric (0) or symmetric (1) spin wave functions. Control over these states is offered by voltage pulses that swap the directions of neighboring spins without aligning them in a particular direction. These swaps, which are energy-conserving operations, change nothing when the two wave functions are symmetric, but they introduce a quantum phase of −1 when the wave functions are antisymmetric. Such swaps are actually partial swaps, meaning the voltage pulse is tuned so that the swapping can occur but there's a certain probability that it doesn't. "A partial swap is a quantum operation that leaves us in a superposition of swapped and not swapped," explains HRL team member Thaddeus Ladd. He and his colleagues use a complex sequence of partial swaps to encode information in a set of electron spins.
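The phase and superposition behavior described above can be checked numerically with the standard SWAP matrix. This is an illustrative sketch, not HRL's actual control scheme: the square root of SWAP below stands in for a generic partial swap.

```python
import numpy as np

# SWAP exchanges two spins in the basis {|00>, |01>, |10>, |11>}.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# Symmetric (triplet) and antisymmetric (singlet) two-spin states.
sym  = np.array([0, 1,  1, 0], dtype=complex) / np.sqrt(2)
anti = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# A full swap leaves the symmetric state unchanged and multiplies
# the antisymmetric state by a phase of -1.
print(np.allclose(SWAP @ sym, sym))     # True
print(np.allclose(SWAP @ anti, -anti))  # True

# A partial swap, here the square root of SWAP, sends |01> into an
# equal-weight superposition of "swapped" and "not swapped".
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, (1 + 1j) / 2, (1 - 1j) / 2, 0],
                      [0, (1 - 1j) / 2, (1 + 1j) / 2, 0],
                      [0, 0, 0, 1]])
state = SQRT_SWAP @ np.array([0, 1, 0, 0], dtype=complex)  # start in |01>
print(np.round(np.abs(state) ** 2, 3))  # [0.  0.5 0.5 0. ]
```

Applying the partial swap twice recovers the full swap (SQRT_SWAP squared equals SWAP), which is why a sequence of such pulses can build up arbitrary logic.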

For the experiment, the HRL team fabricated six silicon quantum dots, forming two distinct qubits. Each dot traps a single electron, whose spin interacts with neighboring spins through voltage pulses delivered to metal gates. The researchers demonstrated two quantum operations, called CNOT and SWAP, with the two qubits. Doing so required complex sequences of partial swaps across the six spins, involving thousands of precisely calibrated voltage pulses that switch on and off a hundred million times per second. The measured errors in these operations were low, characterized by a fidelity of around 97%. "With a healthy dose of mathematics, one may show that this [method] of partial swapping of spins is sufficient to perform any quantum operation on a desired, limited set of states of many spins," says Ladd.

This approach offers two key advantages over conventional single-spin qubits. First, it avoids the need to integrate extra hardware for controlling magnetic fields and correcting mismatched phases. Second, it avoids the crosstalk generated by a microwave input. These advantages remove microscopic sources of error and improve the fidelity of qubit control. The price is that each qubit needs three quantum dots, and each basic operation consists of a long, complex sequence of pulses. Ladd says getting the device to work was no easy feat of hardware fabrication and software development.

The researchers built their new six-dot device using a technique that they've been developing called SLEDGE (single-layer etch-defined gate electrode). This platform uses an electron beam to pattern dot-shaped gates onto a plane and subsequently interconnect the gates via metal leads. Andrea Morello, a quantum physicist at the University of New South Wales, Australia, is impressed with the lab's new device. "[HRL's] state-of-the-art device fabrication capabilities allowed the researchers to fabricate quantum dots with exquisite precision and reproducibility such that even a complex six-dot device exhibited reliable behavior," Morello says.

Ladd clarifies that the technology won't lead to practical quantum computing until millions of qubits can communicate with one another. Although HRL's proof of concept avoids many problems associated with microwave control, there are other challenges, such as keeping the system cold and ensuring uniformity in the etched quantum-dot patterns, which will become more difficult as more qubits are included. "I'm not claiming ours is the best or fastest or smartest qubit design. But I think it's one of the most interesting, not least because it connects to the fundamental computing problem of whether you must input energy to perform a computation," says Ladd.

According to Morello, the swapping approach would require big changes to the way people normally operate qubits, but he thinks a compelling argument can be made that removing the need for microwave signals may simplify the job of qubit control. "The future will tell whether this bold choice pays off when enlarging the quantum processor to ever more qubits," he says.

Rachel Berkowitz

Rachel Berkowitz is a Corresponding Editor for Physics Magazine based in Vancouver, Canada.



A Growing Number of Scientists Are Convinced the Future … – VICE

ABSTRACT breaks down mind-bending scientific research, future tech, new discoveries, and major breakthroughs.

Have you ever found yourself in a self-imposed jam and thought, "Well, if it isn't the consequences of my own actions"? It's a common refrain that exposes a deeper truth about the way we humans understand time and causality. Our actions in the past are correlated to our experience of the future, whether that's a good outcome, like acing a test because you prepared, or a bad one, like waking up with a killer hangover.

But what if this forward causality could somehow be reversed in time, allowing actions in the future to influence outcomes in the past? This mind-bending idea, known as retrocausality, may seem like science fiction grist at first glance, but it is starting to gain real traction among physicists and philosophers, among other researchers, as a possible solution to some of the most intractable riddles underlying our reality.

"In other words, people are becoming increasingly retro-curious," said Kenneth Wharton, a professor of physics at San Jose State University who has published research about retrocausality, in a call with Motherboard. Even though it may feel verboten to consider a future that affects the past, Wharton and others think it could account for some of the strange phenomena observed in quantum physics, which exists on the tiny scale of atoms.

"We have instincts about all sorts of things, and some are stronger than others," said Wharton, who recently co-authored an article about retrocausality with Huw Price, a distinguished professor emeritus at the University of Bonn and an emeritus fellow of Trinity College, Cambridge.

"I've found our instincts of time and causation are our deepest, strongest instincts that physicists and philosophers, and humans, are loath to give up," he added.

Scientists, including Price, have speculated about the possibility that the future might influence the past for decades, but the renewed curiosity about retrocausality is driven by more recent findings about quantum mechanics.

Unlike the familiar macroscopic world that we inhabit, which is governed by classical physics, the quantum realm allows for inexplicably trippy phenomena. Particles at these scales can breeze right through seemingly impassable barriers, a trick called quantum tunneling, or they can occupy many different states simultaneously, known as superposition.

The properties of quantum objects can also somehow become synced up together even if they are located light years apart. This so-called quantum entanglement was famously described by Albert Einstein as "spooky action at a distance," and experimental research into it just earned the 2022 Nobel Prize in Physics.

Quantum entanglement flouts a lot of our assumptions about the universe, prompting scientists to wonder which of our treasured darlings in physics must be killed to account for it. For some, it's the idea of locality, which essentially means that objects should not be able to interact at great distances without some kind of physical mediator. Other researchers think that realism, the idea that there is some kind of objective bedrock to our existence, should be sacrificed at the altar of entanglement.

Wharton and Price, among many others, are embracing a third option: Retrocausality. In addition to potentially rescuing concepts like locality and realism, retrocausal models also open avenues of exploring a time-symmetric view of our universe, in which the laws of physics are the same regardless of whether time runs forward or backward.


"If you think things should be time-symmetric, there's an argument to be made that you need some retrocausality to make sense of quantum mechanics in a time-symmetric way," said Emily Adlam, a postdoctoral associate at Western University's Rotman Institute of Philosophy who studies retrocausality, in a call with Motherboard. "There's a bunch of different reasons that have come together to make people interested in this possibility."

To better understand retrocausality, it's worth revisiting a common thought experiment featuring characters called Alice and Bob, who each receive a particle from the same source, even though they may be light years apart. After conducting measurements on their particles, Alice and Bob discover that these objects are oddly correlated despite the vast distance between them.

Traditionally, this story, which stems from the famous test devised by physicist John Bell, is interpreted to mean that there are non-local quantum effects that cause the particles to be linked across great distances. However, proponents of retrocausality suggest that the particles display correlations that emerge from their past. In other words, the measurements that Alice and Bob conduct on their particles affect the properties of those particles in the past.
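The gap between what local hidden variables allow and what quantum mechanics predicts in the Alice-and-Bob scenario can be sketched with the standard CHSH expression; the measurement angles below are illustrative choices, not taken from the article.

```python
import math

# For a pair of entangled spins in the singlet state, quantum mechanics
# predicts the correlation E(a, b) = -cos(a - b) between measurements
# made at angles a (Alice) and b (Bob).
def E(a, b):
    return -math.cos(a - b)

# Local hidden variables bound the CHSH combination |S| <= 2;
# quantum theory reaches 2*sqrt(2) at these optimal settings.
a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2.828..., i.e. 2*sqrt(2), exceeding the classical bound of 2
```

The excess of correlated "coincidences" over the classical bound of 2 is what Bell-test experiments measure.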

"Instead of having magic non-local connections between these two points, maybe the connection is through the past, and that's what more of us are interested in these days," Wharton said.

"In any model where you had an event in the past correlated with your future choice of setting, that would be retrocausal," he added.

This idea seems so unintuitive because we imagine time as a river, an arrow, or an arrangement of sequential boxes on a calendar. At their core, these paradigms envision cause in the past and effect in the future as a forward flow, but retrocausality raises the prospect that these elements could be reversed. It may seem eerie to our brains, which process events sequentially, but the history of science is also littered with examples of human biases leading to bad conclusions, such as the Earth-centric model of the solar system.

"Obviously, as scientists, one thing that it is very useful to do is write down a law which says, given the situation now, what is the situation going to be next? How will things evolve?" Adlam said. "From a practical point of view, it makes a lot of sense for scientists to write down time evolution laws, because most of the time what we're interested in doing with the laws is predicting the future."

"But that's a pragmatic consideration," she continued. "That doesn't mean that the laws of nature must really work that way. There's no particular reason why they should be aligned with our practical interests in that sense. So, I think it is important to be cautious to distinguish the form of the laws that scientists like to write down for practical reasons from whatever nature is really doing."

It's important to emphasize at this point in time, whatever that means, that retrocausality is not the same as time travel. These models don't predict that signals or objects, including human beings, could be dispatched to the past, in part because there is no evidence that we are currently being deluged with any such future messages, or messengers.

"You have to be very careful in a retrocausal model because the fact of the matter is, we can't send signals back in time," Adlam explained. "It's important that we can't, because if we could, then we could produce all sorts of vehicles or paradoxes. You have to make sure your model doesn't allow that."

Instead, retrocausal models suggest that there is a mechanism that allows circumstances in the future to correlate with past states. This scenario could remove the threats to locality and realism, according to Wharton and Price, though there's disagreement among experts about the implications of these models. (For instance, Adlam has published work suggesting that retrocausality doesn't save locality.)


While there are a range of views about the mechanics and consequences of retrocausal theories, a growing community of researchers think this concept has the potential to answer fundamental questions about the universe.

"Many people in the foundations of physics community, both physicists and philosophers, have been interested in the question 'Why the quantum?' or 'Why is the world like quantum mechanics says it is?'" Price said in an email to Motherboard. "That is, they're trying to understand how quantum mechanics is a natural or inevitable result of simple and plausible principles."

"I think that if our proposed explanation of entanglement works, then it would be a significant new part of the answer," he continued. "It would show how the correlations we call 'entanglement' arise naturally from a combination of ingredients which are all really more basic than quantum mechanics."

To that point, perhaps the most monumental quest in physics is the search for the theory of everything that would at last explain how the quantum and classical realms manage to coexist despite having completely contradictory laws. A huge number of scientists believe that the key to this endeavor is figuring out how gravity works on a quantum level, but retrocausality could also be part of the explanation, according to researchers who study it.

"The problem facing physics right now is that our two pillars of successful theories don't talk to each other," Wharton explained. "One is based in space and time, and one has left space and time aside for this giant quantum wave function."

"The solution to this, as everyone seems to have agreed without discussing it, is that we've got to quantize gravity," he continued. "That's the goal. Hardly anyone has said, what if things really are in space and time, and we just have to make sense of quantum theory in space and time? That will be a whole new way to unify everything that people are not looking into."

Price agreed that retrocausality could provide a new means to finally eliminate the tension between quantum mechanics and classical physics (including special relativity).

"That's such a huge payoff that I'm always puzzled that retrocausality wasn't taken more seriously decades ago," Price said, adding that part of the answer may be that retrocausality has frequently been conflated with another far-out concept called superdeterminism.

"Another possible big payoff is that retrocausality supports the so-called 'epistemic' view of the wave function in the usual quantum mechanics description, the idea that it is just an encoding of our incomplete knowledge of the system," he continued. "That makes it much easier to understand the so-called collapse of the wave function, as a change in information, as folk such as Einstein and Schrödinger thought, in the early days. In this respect, I think it gets rid of some more of the (apparently) non-classical features of quantum mechanics, by saying that they don't amount to anything physically real."

To that end, scientists who work on retrocausality will continue to develop new theoretical models that attempt to account for more and more experimental phenomena. Eventually, these concepts could inspire experimental techniques that might provide evidence either for, or against, a future that can influence the past.

"The goal is to come up with a more general model," Wharton concluded. "Whether or not me, or anyone else, will be successful remains to be seen, but I'm heartened that more and more physicists are taking this seriously as an unexplored option. Maybe we should explore it."


Stephen Hawking’s final theorem turns time and causality inside out – New Scientist

Thomas Hertog (left) collaborated with Stephen Hawking for many years

Courtesy of Thomas Hertog

IT WAS common knowledge among students at the University of Cambridge that whoever obtained the best marks in the final part of the mathematical tripos exams would be summoned to see Stephen Hawking. I had just got my results and had come top. Sure enough, I was invited for a discussion with him.

I made my way to his office deep in the labyrinth of the department of applied mathematics and theoretical physics, which was housed in a creaking Victorian building on the banks of the river Cam. Stephen's office was just off the main common room, and even though it was noisy there, he liked to keep his door ajar. I knocked, paused and slowly pushed it open.

I didn't quite know what to expect on the other side of that door. I knew, of course, that Stephen was famous for his work on black holes and that he had even got into trouble for some of his ideas about what happens when they explode. But it turned out that he was musing on a different question: why is the universe just right for life to arise?

Pondering this question would turn into a long quest for us both. For the next two decades, until his death, Stephen and I worked shoulder to shoulder on novel ideas that suggest a radically new understanding of why the universe is the way it is. In our conception, the laws of physics themselves have, in a sense, evolved to be the


Quantum Revolution! Breakthrough By Chinese Scientists Could Rewrite Einstein's Nobel Prize-Winning Theory – EurAsian Times

A Chinese-led team of scientists claimed to have achieved a breakthrough that could rewrite Albert Einsteins Nobel Prize-winning theory.

In 1905, Einstein published a paper explaining the photoelectric effect, in which he put forth that light comprises discrete packets, energy quanta, now called photons, as opposed to the wave theory of light, which was widely accepted at the time.

He predicted that photons above a certain threshold frequency, when falling on a specific material, eject electrons from its surface. This phenomenon is called the photoelectric effect, which is said to have resulted in the 20th-century quantum revolution in physics.
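The threshold behavior Einstein predicted follows his relation K_max = hf − φ, which can be sketched numerically; the material and its work function below are illustrative assumptions, not taken from the article.

```python
# Photoelectric effect: K_max = h*f - phi. Electrons are ejected only
# when the photon energy h*f exceeds the material's work function phi.
H = 6.62607015e-34    # Planck constant, J*s
EV = 1.602176634e-19  # joules per electron-volt
C = 2.99792458e8      # speed of light, m/s

def max_kinetic_energy_eV(wavelength_m, work_function_eV):
    """Maximum ejected-electron energy in eV (0 below the threshold)."""
    photon_eV = H * (C / wavelength_m) / EV
    return max(photon_eV - work_function_eV, 0.0)

# Green light (550 nm) on a material with an assumed 2.1 eV work function:
print(round(max_kinetic_energy_eV(550e-9, 2.1), 2))  # ~0.15 eV
# Redder light (700 nm) falls below threshold for the same material,
# so no electrons are ejected no matter how intense the beam:
print(max_kinetic_energy_eV(700e-9, 2.1))            # 0.0
```

The second call illustrates the key quantum point: below the threshold frequency, increasing the light's intensity still ejects no electrons.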

The discovery of the law of the photoelectric effect earned Einstein the 1921 Nobel Prize in Physics.

The discovery of the photoelectric effect laid the foundation for several modern-day technologies that depend on light detection or electron-beam generation.

High-energy electron beams have been used at a large scale to analyze crystal structures, treat cancer, kill bacteria, and machine alloys.

The materials that convert photons into electrons are known as photocathodes. Notably, most of the photocathodes known today were discovered around 60 years ago, and all of them are said to have a defect.

The electrons these photocathodes generate are dispersed in angle and speed.

More than a century after Einstein received the 1921 Nobel Prize for the discovery of the photoelectric effect, a team of researchers from China, Japan, and the US has published a paper that, according to reports, could spark a new quantum revolution.

A team of researchers led by He Ruihua, of Westlake University in Hangzhou, in China's eastern Zhejiang province, published a paper in the peer-reviewed journal Nature on March 8.

In this paper, the team used a material called strontium titanate (SrTiO3) to acquire a concentrated beam of electrons, with performance enhanced by at least an order of magnitude.

Strontium titanate (SrTiO3) is a quantum material with a diverse set of interesting properties. According to Hes team, electron beams obtained after exciting SrTiO3 are coherent.

"Coherence is important to the beam; it concentrates the flow, like a pipe on a tap. Without the pipe, water will spray everywhere when the tap is wide open. Without coherence, electrons will scatter," said Hong Caiyun, an author of the paper.

"With the coherence we acquired, we can increase the beam intensity while the beam maintains its direction," Hong added.

Also, the intensity of photoemission from SrTiO3 is greatly enhanced, according to the team.

"This exceptional performance suggests novel physics beyond the well-established theoretical framework for photoemission," Hong said.

The discovery has driven the team to seek a new theory to explain the unparalleled coherence.

"We came up with an explanation as a supplement to Einstein's original theoretical framework. It's in another paper, which is under review right now," Professor He said.

Arun Bansil of Northeastern University in the US, a co-author of the paper, has hailed the discovery.

"This is a big deal because there is no mechanism within our existing understanding of photoemission that can produce such an effect. In other words, we don't have any theory for this, currently, so it is a miraculous breakthrough in that sense," Bansil said.

According to Hong, the new theory predicts the existence of an entire class of materials with the same photoemissive properties as SrTiO3.

"SrTiO3 presents the first example of a fundamentally new class of photocathode quantum materials. It opens new prospects for applications that require intense electron beams," she said.

Professor He said the discovery came from their focus on a traditional technology called angle-resolved photoemission spectroscopy (ARPES).

ARPES is widely used to study electron structures in solid materials, usually crystalline solids. It measures the kinetic energy and emission angle distributions of the emitted photoelectrons.
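Those two measured quantities are what make ARPES useful: from a photoelectron's kinetic energy and emission angle, the standard (textbook, not article-specific) relation k∥ = sin(θ)·√(2·m_e·E_k)/ħ recovers the electron's in-plane crystal momentum. A sketch of that conversion, with illustrative numbers:

```python
# Textbook ARPES momentum reconstruction (a standard relation, not taken from
# the article): in-plane crystal momentum from measured kinetic energy E_k
# and emission angle theta:  k_parallel = sqrt(2 * m_e * E_k) * sin(theta) / hbar
import math

M_E = 9.1093837015e-31    # electron mass, kg
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
EV = 1.602176634e-19      # joules per electronvolt

def k_parallel_inv_angstrom(kinetic_energy_ev, angle_deg):
    """In-plane momentum in inverse angstroms, given E_k in eV and angle in degrees."""
    momentum = math.sqrt(2.0 * M_E * kinetic_energy_ev * EV)
    return momentum * math.sin(math.radians(angle_deg)) / HBAR * 1e-10

# Illustrative numbers: a 30 eV photoelectron emitted at 15 degrees off normal
# lands at roughly 0.7 inverse angstroms, a typical Brillouin-zone scale.
print(round(k_parallel_inv_angstrom(30.0, 15.0), 3))
```

Scanning the detection angle therefore maps out the band structure point by point, which is why ARPES is the workhorse for studying electronic structure in crystalline solids.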

"In the past few decades, physicists and materials scientists mainly used ARPES to study the electronic structures related to optical, electrical, and thermal properties. Our team adapted an unconventional configuration of ARPES and measured another part that's more related to the photoelectric effect," He said.

"During the test, we found the unusual photoemission properties of SrTiO3. Previously, quantum oxide materials represented by strontium titanate were mainly studied as substitutes for semiconductors and are currently used in the fields of electronics and photocatalysis."

"The material will definitely be promising in the field of photocathodes in the future."

Continue reading here:

Quantum Revolution! Breakthrough By Chinese Scientists Could Rewrite Einsteins Nobel Prize Winning Theory - EurAsian Times


Opinion | Forget the Multiverse. Embrace the Monoverse. – The New York Times

The patient was elderly and lived alone. She was showing signs of depression, but it was clear that something more was amiss. She insisted she was trapped in the wrong timeline.

The ward to which she'd been committed was unstuck in time, she told her doctors. Outside, the future had already arrived, and it was not a good one. "She described then that the world outside the ward had been destroyed," reported the doctors in Exeter, England, who wrote a report about the case in a 2019 issue of the journal Neurology and Neurosurgery.

The woman was diagnosed with a variation of Capgras syndrome. First defined a century ago, Capgras typically describes a person's belief that someone close to him or her, a spouse or a child, has been replaced with a duplicate impostor. But in this case, the patient believed that the whole world, everything she could observe of it, was a duplicate, a fake.

I know a little bit how that feels.

So do you, probably.

It seems many of us have come to feel there are multiple realities and we're stuck in the wrong one. For some, this sensation was occasioned by the outcome of the 2016 presidential election, as it was for Arthur Darvill, known for his role in the British television series Doctor Who, who tweeted on Nov. 9, 2016, "I think we landed in the wrong timeline."

The Rev. Traci Blackmon, a Christian minister and civil rights activist who was active in protests at Ferguson, Mo., and counterprotests at Charlottesville, Va., tweeted about this feeling in 2018: "I believe I am trapped in an alternative universe." At the time, I reached out to ask her about it. She explained that the racism she'd witnessed felt like a detour from the way she'd assumed history would unfold. "The gains we'd made in social equity and humanizing people, I thought these gains would result in a different world," she said.

Now, five years and one pandemic later, Everything Everywhere All at Once, a film about the idea that there are multiple universes, each containing a different version of you, has swept the Oscars and struck a chord. Apparently many of us share the sense that, as Waymond Wang, played by Ke Huy Quan, says in the movie, "something is off."

It's easy to see the appeal of the multiverse, even as metaphor: the notion that we're surrounded by a multitude of parallel selves, one of which might be living in a better timeline than the one we're stuck in. It's probably no coincidence that the idea has become so popular during an era of pandemic, climate change and political turmoil, when so many of us have felt helpless and trapped. Who doesn't want to imagine a different world?

But it can also be a dangerous way of imagining the cosmos. Like the Capgras patient, we risk becoming detached from the world we can see and touch. Regardless of whether we can prove that the multiverse exists, the idea of it can distract us from doing the work we need to do to make this world better. This timeline is the only one we have access to, and it's got to be enough.

As a species, we've long been haunted by spirit realms and ghostly domains. Plato conceived of an intangible world of forms realer than anything we can touch. Plutarch reported that Alexander wept when he heard of the possibility of an infinite number of worlds, having not conquered all of this one.

C.S. Lewis was an early multiverse explorer with his Narnia books, in which siblings grow to adulthood as kings and queens on the other side of their magical wardrobe in a world that exists parallel to our own. He was also paying quite a bit of attention at the time to a new branch of science known as quantum physics. In 1957, a year after Lewis published his last Narnia book, a Princeton doctoral student, Hugh Everett III, published a dissertation bringing the ancient idea of the simultaneous existence of several worlds into the realm of modern science.

Everett was trying to solve a seeming paradox in quantum theory: Certain elementary particles (say, a photon) seemed to exist mathematically in many places at once but could be detected at only one location at a time.

Perhaps, Everett suggested, the act of detecting the particle splinters reality; perhaps the observer, and indeed the universe, splits into different possible timelines, one for each possible location of the particle. This would become known as the many-worlds interpretation. Physicists recoiled at the idea at the time.

It took a while for Everett's idea to trickle into popular culture, but once it did, pulp fiction writers fell in love with it. In 1961, DC Comics published Flash of Two Worlds! in which the superfast hero vibrates his way into another universe to meet an alternate Flash. More recently, the multiverse has become the perfect framework for superhero-centric entertainment entities like Marvel Studios (and its licensees) to reiterate a seemingly endless series of franchise reboots. (A.O. Scott, writing in this newspaper, called this a conceit that promises ingenuity and narrative abundance but instead has delivered an infinite recombination of cliché.) Thus, Spider-Man entered the Spider-Verse; Dr. Strange got lost in the Multiverse of Madness.

I first encountered the idea of a parallel world as a kid in the 1980s, watching the She-Ra cartoon movie. Princess Adora is separated at birth from her brother, Adam, and sent to grow up on the other side of a tridimensional portal.

I was an only child, and I was fascinated by the idea of some alternate plane where, like Prince Adam, I might discover a secret sibling, an end to loneliness.

When I was 12, my mother met a man, and suddenly the family I'd imagined for myself became real. I had an older brother who loved puns and an older sister who wrote poems.

But when I was 19, my stepfather died of melanoma; within a few years of recriminations and disputes, our blended family unblended itself.

I entered adulthood bereft and wrong-footed. I felt a horrible sense of vertigo as I watched the life I'd been expecting to live tilting away from me. In this new timeline, my stepsiblings were no longer my siblings; they would become, instead, just people I knew for a while in high school.

All this because a photon of sunlight had collided with a segment of my stepfather's DNA. A quantum event: a whole universe of grief unfurled from a minuscule catastrophe that could just as easily not have happened.

For years, I couldn't stop thinking about other, better timelines where it didn't happen, where my stepfather was still alive and my family intact. It helped me understand what was missing, but it did not allow me to mourn what I'd lost.

And that's the peril of the multiverse: I was becoming unreal to myself, nostalgic not for a time before the death happened but for a timeline in which it never happened at all. At the climax of the Narnia series, Lewis renounces his beloved fantasy land as a shadow of a copy of a newer, realer Narnia. "The new one was a deeper country," he writes. "Every rock and flower and blade of grass looked as if it meant more." A shadow of a copy: that's how I felt.

In Everything Everywhere, Joy, the character played by Stephanie Hsu, has become aware of every possible timeline. She succumbs to nihilistic despair. If everything is happening, then nothing can matter. It's hard not to recall the real-life fate of Mr. Everett's daughter, Elizabeth, who ended her life in 1996, saying in her farewell note that she hoped she would go on to a correct parallel universe where both she and her father were still alive.

I reached out recently to Ms. Blackmon, the activist who tweeted about an alternate universe, to see how she's feeling about the timeline in 2023. Because she is still trapped here, she has no updates, a representative wrote back.

But it's telling that Ms. Blackmon never stopped fighting for what she believes in, striving to improve this world, serving the United Church of Christ and, most recently, leading a group of religious leaders in an effort to block Missouri's abortion ban. We can joke or wonder whether we're in the wrong timeline. But we can't lose sight of the fact that this timeline is the only one we've got.

In my 30s, I knew I had to save myself from the enticements of alternate realities. So I envisioned a new cosmology of time. Instead of a linear, branching timeline with multiple, parallel possibilities, so much more vivid than my real life, I tried to imagine time as a sphere always expanding away from me in every direction, like the light leaving a star.

In this model of time, instead of the past receding behind me, it expands outward to surround me, always there and always present. The future is at the very center of the sphere, curled up infinitely small inside of me, waiting to be realized. That way, I can believe that there is nothing to come that I do not already contain.

As a cosmology, it's no more tangible than the multiverse. But if we have to believe in something invisible, let me believe in a version of the universe that keeps my focus where it belongs: on the things I can touch and change.

See the original post here:

Opinion | Forget the Multiverse. Embrace the Monoverse. - The New York Times


The future of museums and a history of ignorance: Books in brief – Nature.com

The Museum of Other People

Adam Kuper Profile (2023)

This fascinating history by anthropologist Adam Kuper discusses ethnology museums, mainly in Europe and the United States, established during the colonial period. He argues that it is time to turn such institutions into "cosmopolitan museums" that include "challenging perspectives and contrasting points of view backed by research and scholarship, not mystical insight or the authority of identity." But he also recognizes that "the force is with those who demand the restitution of colonial collections."

Ignorance

Peter Burke Yale Univ. Press (2023)

Having published a study of polymathy in 2020, cultural historian Peter Burke now tackles its opposite: ignorance. Both have long been key to scientific progress. "Thoroughly conscious ignorance is the prelude to every real advance in science," remarked physicist James Clerk Maxwell in the nineteenth century. Chapters also consider the relevance of ignorance to business, geography, politics, religion and war. Burke argues, with encyclopedic cogency, that we should think of knowledges and ignorances, plural rather than singular.

Invention and Innovation

Vaclav Smil MIT Press (2023)

As an environmentalist and energy writer, Vaclav Smil is well placed to analyse the impact of past and promised inventions and innovations. He distinguishes between these concepts: innovation, he says, involves mastering new materials, products, processes and ideas. He focuses engagingly on three types of failed invention: welcomed but then unwelcome (for example, leaded petrol and the pesticide DDT); over-hyped (such as nuclear fission and supersonic flight); and undelivered (including travel by vacuum tube and controlled nuclear fusion).

What Is Intergenerational Justice?

Axel Gosseries Polity (2023)

What duties do we have to future generations, both living and unborn? What does justice mean in the intergenerational context? On topics such as climate change, "we have to divide cakes without knowing how many guests will join us and what their tastes will be," comments political philosopher Axel Gosseries. Despite the emotive subject, his language and tone are academic. Greenhouse gases and extinction are discussed, but not Greenpeace and Extinction Rebellion, even though he accepts the need for radical changes.

The One

Heinrich Päs Basic (2023)

Theoretical physicist Heinrich Päs begins his intriguing, controversial book for general readers on the mysteries of quantum physics by asking how standing under a star-studded night sky can make us feel at once insignificant yet strangely at home in the universe. He answers, essentially, that everything, including quantum particles, is part of the same fundamental whole. He is a monist, convinced by a comment by the ancient Greek philosopher Heraclitus: "From all things One and from One all things."

The author declares no competing interests.

Read more:

The future of museums and a history of ignorance: Books in brief - Nature.com


The Mind-Bending Multiverse: Our Universe Is Suspiciously Unlikely … – SciTechDaily

Do universes pop up as bubbles from a multiverse?

It's easy to envisage other universes, governed by slightly different laws of physics, in which no intelligent life, nor indeed any kind of organized complex system, could arise. Should we, therefore, be surprised that a universe exists in which we were able to emerge?

That's a question physicists, including me, have tried to answer for decades. But it is proving difficult. Although we can confidently trace cosmic history back to one second after the Big Bang, what happened before is harder to gauge. Our particle accelerators simply can't produce enough energy to replicate the extreme conditions that prevailed in the first nanosecond.

But we expect that it's in that first tiny fraction of a second that the key features of our universe were imprinted.

The Big Bang theory is the most widely accepted scientific explanation for the origins of the universe. It proposes that the universe began as a singularity, an infinitely dense and hot point that expanded rapidly about 13.8 billion years ago, and has been cooling and expanding ever since.

The conditions of the universe can be described through its fundamental constants: fixed quantities in nature, such as the gravitational constant (called G) or the speed of light (called c). There are about 30 of these, representing the sizes and strengths of parameters such as particle masses, forces or the universe's expansion. But our theories don't explain what values these constants should have. Instead, we have to measure them and plug their values into our equations to accurately describe nature.

The values of the constants are in the range that allows complex systems such as stars, planets, carbon, and ultimately humans to evolve. Physicists have discovered that if we tweaked some of these parameters by just a few percent, it would render our universe lifeless. The fact that life exists therefore takes some explaining.

Some argue it is just a lucky coincidence. An alternative explanation, however, is that we live in a multiverse, containing domains with different physical laws and values of fundamental constants. Most might be wholly unsuitable for life. But a few should, statistically speaking, be life-friendly.

What is the extent of physical reality? We're confident that it's more extensive than the domain that astronomers can ever observe, even in principle. That domain is definitely finite. That's essentially because, like on the ocean, there's a horizon that we can't see beyond. And just as we don't think the ocean stops just beyond our horizon, we expect galaxies beyond the limit of our observable universe. In our accelerating universe, our remote descendants will also never be able to observe them.

Most physicists would agree there are galaxies that we can't ever see, and that these outnumber the ones we can observe. If they stretched far enough, then everything we could ever imagine happening may be repeated over and over. Far beyond the horizon, we could all have avatars.

This vast (and mainly unobservable) domain would be the aftermath of our Big Bang and would probably be governed by the same physical laws that prevail in the parts of the universe we can observe. But was our Big Bang the only one?

The theory of inflation, which suggests that the early universe underwent a period when it doubled in size every trillionth of a trillionth of a trillionth of a second, has genuine observational support. It accounts for why the universe is so large and smooth, except for fluctuations and ripples that are the seeds for galaxy formation.
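It is worth pausing on how violently that compounds. As a back-of-the-envelope sketch (my own illustration, not from the article): take the quoted doubling time of 10⁻³⁶ seconds and assume, for the sake of argument, that the inflationary burst lasts a mere 10⁻³⁴ seconds:

```python
# Toy illustration of inflationary doubling. The doubling time matches the
# article's "trillionth of a trillionth of a trillionth of a second"; the
# total duration of 1e-34 s is an assumed, illustrative figure.
DOUBLING_TIME_S = 1e-36
DURATION_S = 1e-34

doublings = DURATION_S / DOUBLING_TIME_S   # 100 doublings
expansion_factor = 2.0 ** doublings

# One hundred doublings already blow the universe up by ~30 orders of magnitude.
print(f"{expansion_factor:.2e}")  # → 1.27e+30
```

A hundred doublings in a tenth of a yoctosecond multiply every length by about 10³⁰, which is why inflation so easily explains a universe that looks large and smooth on every scale we can see.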

But physicists, including Andrei Linde, have shown that, under some specific but plausible assumptions about the uncertain physics of this ancient era, there would be an eternal production of Big Bangs, each giving rise to a new universe.

String theory, which is an attempt to unify gravity with the laws of microphysics, conjectures that everything in the universe is made up of tiny, vibrating strings. But it makes the assumption that there are more dimensions than the ones we experience. These extra dimensions, it suggests, are compacted so tightly together that we don't notice them. And each type of compactification could create a universe with different microphysics, so other Big Bangs, when they cool down, could be governed by different laws.

The laws of nature may therefore, in this still grander perspective, be local by-laws governing our own cosmic patch.

The NASA/ESA/CSA James Webb Space Telescope has produced the deepest and sharpest infrared image of the distant Universe to date. Known as Webbs First Deep Field, this image of galaxy cluster SMACS 0723 is overflowing with detail. However, we can only see a fraction of the universe. Credit: NASA, ESA, CSA, and STScI

If physical reality is like this, then there's a real motivation to explore counterfactual universes, places with different gravity, different physics and so forth, to explore what range of parameters would allow complexity to emerge and which would lead to a sterile or stillborn cosmos. Excitingly, this is ongoing, with recent research suggesting you could imagine universes that are even more friendly to life than our own. Most tweakings of the physical constants, however, would render a universe stillborn.

That said, some don't like the concept of the multiverse. They worry it would render the hope for a fundamental theory to explain the constants as vain as Kepler's numerological quest to relate planetary orbits to nested Platonic solids.

But our preferences are irrelevant to the way physical reality actually is, so we should surely be open-minded to the possibility of an imminent grand cosmological revolution. First we had the Copernican realization that the Earth wasn't the center of the Solar System: it revolves around the Sun. Then we realized that there are zillions of planetary systems in our galaxy, and that there are zillions of galaxies in our observable universe.

So could it be that our observable domain, indeed our Big Bang, is a tiny part of a far larger and possibly diverse ensemble?

How do we know just how atypical our universe is? To answer that, we need to work out the probabilities of each combination of constants. And that's a can of worms that we can't yet open: it will have to await huge theoretical advances.

We don't ultimately know if there are other Big Bangs. But they're not just metaphysics. We might one day have reasons to believe that they exist.

Specifically, if we had a theory that described physics under the extreme conditions of the ultra-early Big Bang, and if that theory had been corroborated in other ways, for instance by deriving some unexplained parameters in the standard model of particle physics, then if it predicted multiple Big Bangs, we should take it seriously.

Critics sometimes argue that the multiverse is unscientific because we can't ever observe other universes. But I disagree. We can't observe the interior of black holes, but we believe what physicist Roger Penrose says about what happens there: his theory has gained credibility by agreeing with many things we can observe.

About 15 years ago, I was on a panel at Stanford where we were asked how seriously we took the multiverse concept, on the scale of: would you bet your goldfish, your dog, or your life on it? I said I was nearly at the dog level. Linde said he'd almost bet his life. Later, on being told this, physicist Steven Weinberg said he'd happily bet Martin Rees's dog and Andrei Linde's life.

Sadly, I suspect Linde, my dog and I will all be dead before we have an answer.

Indeed, we can't even be sure we'd understand the answer, just as quantum theory is too difficult for monkeys. It's conceivable that machine intelligence could explore the geometrical intricacies of some string theories and spew out, for instance, some generic features of the standard model. We'd then have confidence in the theory and take its other predictions seriously.

But we'd never have the "aha" insight moment that's the greatest satisfaction for a theorist. Physical reality at its deepest level could be so profound that its elucidation would have to await posthuman species, depressing or exhilarating as that may be, according to taste. But it's no reason to dismiss the multiverse as unscientific.

Written by Martin Rees, Emeritus Professor of Cosmology and Astrophysics, University of Cambridge.

This article was first published in The Conversation.

Original post:

The Mind-Bending Multiverse: Our Universe Is Suspiciously Unlikely ... - SciTechDaily
