
How AI can help journalists find diverse and original sources – Tech Xplore


What would news stories be without proper sources? To tell a compelling story, reporters need to find newsworthy narratives and trustworthy information. Such information typically comes from a wide pool of publications, official records and experts, all with their own biases, expertise, opinions and backgrounds. The pool of interview candidates is plentiful yet overwhelming to navigate.

Artificial intelligence, however, may serve as a guide.

Researchers from the USC Information Sciences Institute are creating a source-recommendation engine designed to suggest references for journalists. "In practice, the software application would analyze a given text or topic and suggest relevant sources by cross-referencing against a database of potential interviewees, experts or informational resources," said Emilio Ferrara, a professor of computer science and communication at the USC Viterbi School of Engineering. "The tool could provide contact details, areas of expertise and previous work of the sources," he added.

The tool's development is being led by Alexander Spangher, a computer science Ph.D. student at USC Viterbi who previously worked as a data scientist at the New York Times. While immersed in the journalism industry, Spangher witnessed the pressure of traditional newsrooms. "I haven't spoken to a single local journalist that was not totally overstretched," he remarked. "There have been news deserts and papers shutting down. It's areas like this that we really want to assist and build tools for."

Motivated to provide helpful resources for reporters, Spangher is creating various AI tools, including a source-recommendation system introduced in his paper, "Identifying Informational Sources in News Articles," which was accepted to the 2023 Conference on Empirical Methods in Natural Language Processing and is now posted to the arXiv preprint server.

To create an AI model that can suggest sources, the researchers first laid the groundwork: how are human journalists currently using sources in news writing? To study this, they gathered a dataset of sentences from over a thousand news articles and annotated the source of the information, as well as the sourcing category (e.g., "direct quotes," "indirect quotes," "published works" and "court proceedings").

A thousand annotated news articles, however, were not enough data for the researchers to draw firm conclusions about all the myriad ways journalists use sources across reporting genres. But, it was enough to train a language model (LM) to continue the annotation process. "Language models are AI frameworks that process and understand human language by analyzing large volumes of text for patterns and context," explained Ferrara, senior author of the paper.

The LMs the researchers trained could detect source attributions with 83% accuracy, the authors report. Now equipped with these LMs, they annotated roughly 10,000 news articles and delved further into the compositionality of news writing: when and how do journalists currently use sources?
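The paper trains language models on the annotated data; as a rough illustration of the sentence-labelling task (and not the authors' actual system), an off-the-shelf zero-shot classifier can assign sourcing categories to sentences. The model checkpoint, label set and example sentences below are assumptions for this sketch only.

```python
# Minimal sketch (not the paper's model): label sentences with a sourcing
# category using an off-the-shelf zero-shot classifier. The checkpoint name,
# label set and example sentences are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = ["direct quote", "indirect quote", "published work",
          "court proceeding", "no source"]

sentences = [
    '"We saw the flood coming," said Mayor Ortiz.',              # hypothetical
    "According to a 2021 county audit, repairs were delayed.",   # hypothetical
]

for s in sentences:
    result = classifier(s, candidate_labels=labels)
    # result["labels"] is sorted by score; take the top category.
    print(result["labels"][0], "<-", s)
```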

The AI models found that, on average, roughly half the information in news articles came from sources and that, in each article, there are usually one to two major sources (i.e., those that contribute 20% or more of the information in the article) and two to eight minor ones (those that contribute less). "The AI also discovered that the first and last sentences were the most likely to be sourced," Spangher explained, adding that reporters often lead with cited information and close with a quote to send the reader off.

The researchers challenged their new algorithm with one more test: could they detect if a source was missing? If AI can recognize when information is lacking, then it can be configured to know when to recommend a particular expert to complete the full picture.

Analyzing 40,000 articles with some sources randomly removed, the AI models easily noticed when a major source was absent but had difficulties with minor ones. Although they may be the least crucial to a story, less obvious sources may also be the most valuable recommendations that an AI could one day make, Spangher said.

"You're going to draw a lot of information from the main participants, but supplementary voices are going to provide extra color and details to the article," he noted. "It's going to be a challenge to get the engine to recognize and recommend minor sources, but they may be the most helpful."

The researchers also think the tool will be significant if it can diversely recommend sources. "It can introduce journalists to new, diverse voices beyond their usual network, thus reducing the reliance on familiar sources and potentially bringing in fresh perspectives," Ferrara said.

However, every AI system is prone to bias if not appropriately designed, he added. "To ensure diversity in source databases, standards should include representation from a wide range of demographics, disciplines and perspectives," he noted.

Jonathan May, a research associate professor of computer science at USC Viterbi and ISI lead researcher, imagines a future where the sourcing engine jumpstarts the reporting process, allowing journalists to be more efficient.

"Technology that can help us do creative work and be our creative best is a good thing," said May, a co-author of the paper. "That's why I'm hopeful for it."

The team plans to collaborate with journalists to gather feedback for further improvements.

"With projects like this, I really thrive off talking to journalists and understanding their needs, viewpoints and what they think will or won't work," Spangher said. "Any solution to local journalism will require a bunch of different people with a bunch of different backgrounds coming together."

More information: Alexander Spangher et al, Identifying Informational Sources in News Articles, arXiv (2023). DOI: 10.48550/arxiv.2305.14904

Journal information: arXiv

Read the original post:

How AI can help journalists find diverse and original sources - Tech Xplore

Read More..

Does quantum theory imply the entire Universe is preordained? – Nature.com

Is cosmic evolution a single track with no choice about the destination? Credit: Getty

Was there ever any choice in the Universe being as it is? Albert Einstein could have been wondering about this when he remarked to mathematician Ernst Strauss: "What I'm really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all."

US physicist James Hartle, who died earlier this year aged 83, made seminal contributions to this continuing debate. Early in the twentieth century, the advent of quantum theory seemed to have blown out of the water ideas from classical physics that the evolution of the Universe is deterministic. Hartle contributed to a remarkable proposal that, if correct, completely reverses a conventional story about determinism's rise with classical physics, and its subsequent fall with quantum theory. A quantum Universe might, in fact, be more deterministic than a classical one; and, for all its apparent uncertainties, quantum theory might better explain why the Universe is the one it is, and not some other version.


In physics, determinism means that the state of the Universe at any given time and the basic laws of physics fully determine the Universe's backward history and forward evolution. This idea reached its peak with the strict, precise laws about how the Universe behaves introduced by classical physics. Take Isaac Newton's laws of motion. If someone knew the present positions and momenta of all particles, they could in theory use Newton's laws to deduce all facts about the Universe, past and future. It's only a lack of knowledge (or computational power) that prevents scientists from doing so.
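As a toy illustration of this kind of determinism (not from the article), the sketch below evolves a harmonic oscillator forward under Newton's second law and then runs it backwards, recovering the initial state up to round-off; the force law, time step and step count are arbitrary choices.

```python
# Toy illustration of Newtonian determinism: evolve a harmonic oscillator
# forward with velocity-Verlet, then reverse the velocity and integrate back
# to recover the starting state (up to numerical round-off).
def accel(x, k=1.0, m=1.0):
    return -k * x / m          # Hooke's-law force, chosen arbitrarily

def verlet(x, v, dt, steps):
    for _ in range(steps):
        a = accel(x)
        x += v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v += 0.5 * (a + a_new) * dt
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, dt=1e-3, steps=10_000)    # forward in time
xb, vb = verlet(x1, -v1, dt=1e-3, steps=10_000)   # flip velocity, run "backward"

print("start:    ", x0, v0)
print("recovered:", xb, -vb)   # matches the start: the past is fully encoded
```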

Along with this distinctive predictive power, determinism underwrites scientific explanations that come close to the principle of sufficient reason most famously articulated by German polymath Gottfried Leibniz: that everything has an explanation. Every state of the Universe (with one obvious exception, which we'll come to) can be completely explained by an earlier one. If the Universe is a train, determinism says that it's running on a track, with no option to switch to any other path, because different tracks never cross.

Physicists have conventionally liked determinism's predictive and explanatory power. Others, including some philosophers, have generally been more divided, not least because of how determinism might seem to preclude human free will: if the laws of physics are deterministic, and our actions are just the summation of particle interactions, there seems to be no room for us to freely choose A instead of B, because the earlier states of the Universe will already have determined the outcome of our choice. And if we are not free, how can we be praised or blamed for our actions? Neuroendocrinologist Robert Sapolsky's 2023 book Determined touches on this fascinating and controversial issue.

The strange behaviours of quantum particles that began to emerge in the twentieth century fundamentally shifted the debate surrounding determinism in physics. The laws of quantum mechanics give only the probabilities of outcomes, which can be illustrated with the thought experiment devised by Austrian physicist Erwin Schrödinger in 1935 (although when he devised it, he was concerned mainly with how the wavefunction represents reality). A cat is trapped in a box with a vial of poison that might or might not have been broken by a random event (because of radioactive decay, for example). If quantum mechanics applied to the cat, it would be described by a wavefunction in a superposition of alive and dead. The wavefunction, when measured, randomly jumps to one of the two states, and quantum mechanics specifies only the probability of either possibility occurring. One consequence of the arrival of quantum mechanics was that it seemed to throw determinism out of the window.
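For concreteness, the superposition and the measurement probabilities described here are usually written in the standard textbook form (the even 50/50 split is the usual idealization):

```latex
% Schrödinger-cat superposition and Born-rule probabilities (textbook form).
\[
  |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr),
  \qquad
  P(\text{alive}) \;=\; |\langle \text{alive}|\psi\rangle|^{2} \;=\; \tfrac12,
  \qquad
  P(\text{dead}) \;=\; \tfrac12 .
\]
```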


But this accepted idea might not be the whole story, as developments in the second half of the twentieth century suggested. The quantum Universe could actually be more deterministic than a classical one, for two reasons. The first is technical. Newton's laws allow situations in which the past does not determine how things will move in the future. For example, the laws do not provide an upper bound on how much an object can be accelerated, so in theory a classical object can reach spatial infinity in finite time. Reverse this process, and you get what have been called "space invaders": objects that come from spatial infinity with no causal connection to anything else in the Universe, and which can't be predicted from any of the Universe's past states.

In practice, this problem is solved by the universal speed limit, the speed of light, introduced by Einstein's special theory of relativity. But unruly infinities also plague Einsteinian relativity, which is a classical theory. The equations of general relativity lead to singularities of infinite curvature, most notoriously in black holes and at the Big Bang at the beginning of the Universe. Singularities are like gaps in space-time where the theory no longer applies; in some cases, anything can come out of them (or disappear into them), threatening determinism.

Many physicists think that quantum theory can come to the rescue by removing such singularities, for example, by converting the Big Bang into a Big Bounce, with a Universe that continues to evolve smoothly on the other side of the singularity. If they are right, a theory of quantum gravity that fully unifies quantum theory, which predicts the behaviour of matter on the smallest scales, and Einstein's relativity, which encapsulates the large-scale evolution of the Universe, will smooth out the gaps in space-time and restore determinism.

Space-time singularities inside black holes could threaten a deterministic cosmic order. Credit: ESO/SPL

But there is a deeper reason why the quantum Universe might be more deterministic, to which Hartle's scientific legacies are relevant. With US physicist Murray Gell-Mann, Hartle developed an influential approach to quantum theory, called decoherent histories [1]. This attempted to explain the usefulness of probabilistic statements in quantum physics, and the emergence of a familiar, classical realm of everyday experience from quantum superpositions. In their picture, the wavefunction never randomly jumps. Instead, it always obeys a deterministic law given by Schrödinger's equation, which characterizes the smooth and continuous evolution of quantum states. In this respect, it is similar to US physicist Hugh Everett III's popular "many worlds" interpretation of quantum mechanics, which proposes that the quantum Universe splits into different branches according to the possibilities encoded in the wavefunction whenever anything is measured [2]. In what follows I assume, as Everett did, that the Universe can be completely described by a quantum wavefunction with no hidden variables that operate on a more fundamental level.
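Written out, the deterministic law referred to here is the Schrödinger equation; given the Hamiltonian H and the state at one time, the state at every other time is fixed:

```latex
% The deterministic evolution law: the Schrödinger equation and its
% formal solution. Given |psi(0)> and H, |psi(t)> is determined for all t.
\[
  i\hbar \,\frac{\partial}{\partial t}\,|\psi(t)\rangle \;=\; \hat{H}\,|\psi(t)\rangle ,
  \qquad
  |\psi(t)\rangle \;=\; e^{-i\hat{H}t/\hbar}\,|\psi(0)\rangle .
\]
```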

With Stephen Hawking, Hartle went on to become one of the founders of quantum cosmology, which applies quantum theory to the entire Universe. In a classical Universe, there is freedom in choosing how it all started. Even setting aside the extreme situations mentioned earlier, classical mechanics is deterministic merely in that it lays down many possible evolutionary histories for the Universe, and offers conditional statements about them: if this happens, then that must happen next. To return to the train analogy, a deterministic theory does not, by itself, say why the train is on any one given track out of many: why it is going from A to B via C, rather than from X to Y via Z. We can go back to earlier states to explain the current state, and do that all the way back to the initial state; but this initial state is not explained by anything that precedes it. Ultimately, standard determinism fails to fully satisfy Leibniz's principle of sufficient reason: when it comes to the initial state, something remains without an explanation.

See me here, see me there

This failure is not just philosophical. A complete theory of the Universe should predict the phenomena we observe in it, including its large-scale structure and the existence of galaxies and stars. The dynamic equations we have, whether from Newtonian physics or Einsteinian relativity, cannot do this by themselves. Which phenomena show up in our observations depend sensitively on the initial conditions. We must look at what we see in the Universe around us, and use this information to determine the initial condition that might have given rise to such observations.

A theory that specifies deterministic laws of both the Universe's temporal evolution and its exact initial condition satisfies what English physicist Roger Penrose called "strong determinism" in his 1989 book The Emperor's New Mind. This is, according to Penrose, "not just a matter of the future being determined by the past; the entire history of the universe is fixed, according to some precise mathematical scheme, for all time". Let us say that a Universe is strongly deterministic if its basic laws of physics fix a unique cosmic history. If determinism provides a set of non-crossing train tracks, without specifying which one is being used, then strong determinism lays down a single track that has no choice even about where it starts.

Strong determinism is hard to implement in classical physics. You might consider doing it by specifying the initial condition of the Universe as a law. But although the dynamical laws of classical physics are simple, the Universe itself is complex and so its initial condition must have been, too. Describing the precise positions and momenta of all the particles involved requires so much information that any statement of the initial condition is too complex to be a law.

Hartle suggested [3] that quantum mechanics can solve this complexity problem. Because a quantum object's wavefunction is spread out across many classical states (cat alive or cat dead, for instance), you could propose a simple initial condition that includes all the complexities as emergent structures in the quantum superposition of these states. All the observed complexities can be regarded as partial descriptions of a simple fundamental reality: the Universe's wavefunction. As an analogy, a perfect sphere can be cut into many chunks with complicated shapes, yet they can be put back together to form a simple sphere.

In 1983, Hartle and Hawking introduced [4] one of the first (and highly influential) proposals about the quantum Universe's initial state. Their "no boundary" wavefunction idea suggests that the shape of the Universe is like that of a shuttlecock: towards the past, it rounds off smoothly and shrinks to a single point. As Hawking said in a 1981 talk on the origin of the Universe in the Vatican: "There ought to be something very special about the boundary conditions of the Universe, and what can be more special than the condition that there is no boundary?"

Unique, or not unique?

In this perspective, the quantum Universe has two basic laws: a deterministic one of temporal evolution and a simple one that picks an initial wavefunction for the Universe. Hence, the quantum Universe satisfies strong determinism. The physical laws permit exactly one cosmic history of the Universe, albeit one described by a wavefunction that superposes many classical trajectories. There is no contingency in what the Universe as a whole could have been, and no alternative possibility for how it could have started. Every event, including the first one, is explained; the entire wavefunction of the Universe for all times is pinned down by the laws. The probabilities of quantum mechanics do not exist at the level of the basic physical laws, but can nonetheless be assigned to coarse-grained and partial descriptions of bits of the Universe.

This leads to a more predictive and explanatory theory. For example, the no-boundary proposal makes predictions for a relatively simple early Universe and for the occurrence of inflation, a period of rapid expansion that the Universe seems to have undergone in its first instants.

There are still many wrinkles to this proposal, not least because some studies have shown that, contrary to initial expectations, the theory might not single out a unique wavefunction for the Universe [5,6]. But studies in quantum foundations (research that is mostly independent from that of quantum cosmology) could offer yet another method for implementing strong determinism. Several researchers have considered the controversial idea that quantum states of closed systems, including the Universe, need not be restricted to wavefunctions, but instead can come from a broader category: the space of density matrices [7-10].

Density matrices can be thought of as superpositions of superpositions, and they provide extra options for the initial condition of the Universe. For example, if we have reasons to adopt the "past hypothesis" (the idea, which seems likely, that the Universe began in a low-entropy state, and that its entropy has been increasing steadily since), and that hypothesis corresponds to a set of wavefunctions, then we can choose a simple density matrix that corresponds to the uniform mixture of that set. As I have argued [10], if we regard the density matrix as the initial state of the Universe and accept that it is specified by a law, then this choice, together with the deterministic von Neumann equation (a generalization of Schrödinger's equation), can satisfy strong determinism. However, in this case, the laws fix a cosmic history of a quantum Universe that has many evolving branches: a multiverse.
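In symbols, and as a rough sketch of the construction described above (a finite set of N admissible initial wavefunctions is assumed here for simplicity), the uniform mixture and the deterministic von Neumann equation read:

```latex
% A uniform mixture over N admissible initial wavefunctions, evolved
% deterministically by the von Neumann equation.
\[
  \rho(0) \;=\; \frac{1}{N}\sum_{i=1}^{N} |\psi_i\rangle\langle\psi_i| ,
  \qquad
  i\hbar\,\frac{d\rho}{dt} \;=\; \bigl[\hat{H},\,\rho\bigr] .
\]
```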

So how deterministic is the Universe? The answer will depend on the final theory that bridges the divide between quantum physics and relativity, and that remains a far-off prospect. But if Hartle is right, the story of the rise and fall of determinism until now might be the reverse of the conventional tale. From a certain perspective, the quantum Universe is more deterministic than a classical one, providing stronger explanations and better predictions. That has consequences for humans, too, because that makes it harder to appeal to quantum theory to defend free will [11]. If the quantum Universe is strongly deterministic, then there is no other path to make the Universe than the way it is. The ultimate laws of the quantum cosmos might tell us why it is this one.

Excerpt from:

Does quantum theory imply the entire Universe is preordained? - Nature.com

Read More..

Quantum batteries could charge faster by scrambling the rules of cause and effect – Livescience.com

Quantum batteries of the future could gain charge by breaking the conventional laws of causality, research has shown.

Conventional batteries charge by converting electrical energy into chemical energy on the scale of vast numbers of electrons.

But in a new proof-of-principle experiment, researchers have demonstrated how a weird quantum effect may lead to batteries that charge faster and with more efficiency by scrambling cause and effect, according to research published Dec. 14 in the journal Physical Review Letters.


Causality, or the relationship between cause and effect, is not always straightforward in quantum mechanics, the strange rules that govern the world of the very small.

"Normally, if event A comes first and causes event B, it is assumed that B cannot in turn cause A at the same time," co-first author Yuanbo Chen, a physicist at the University of Tokyo, told Live Science. "However, recent advancements in theoretical physics propose that in certain frameworks, scenarios where 'A causes B' and 'B causes A' could simultaneously be true."

The principle of quantum superposition enables particles to exist in many different states at once, at least until they are observed and "pick" a state to land in.

Any property of a quantum object (such as its momentum, location, or, in the famed case of Erwin Schrödinger's hypothetical cat, whether it's alive or not) can exist in superposition: a probabilistic jumble of every possible state that only collapses into a definite outcome when the object is viewed.

This realization has led physicists to conduct all kinds of bizarre experiments that contradict our intuitive notions of what should be possible, including ones where a single particle can both exist and not exist in many different places at the same time.

But superposition doesn't just mess with our intuitive sense of space; it scrambles our sense of causality too. In 2009, physicists used a device called a quantum switch to observe a phenomenon called indefinite causal order. By sending a light particle, or photon, down a pair of diverging paths, the physicists caused it to split into two possible versions of itself: one that went down the first path, and the other the second.

Then the physicists applied two different processes in a different order depending on the path the photon took. The result was a photon that had its causality jumbled: it was in a quantum superposition where both orders of events were true.

"Say that we have two processes: A and B," Chen said. "With a quantum switch, you can create a superposition of (First apply A and then B) and (First apply B and then A)."

Chen and his colleagues wondered if they could incorporate this into a quantum battery, a proposed device that could theoretically store the energy of photons and charge faster than conventional electrochemical batteries.

They compared three charging methods: connecting two chargers to a battery sequentially, simultaneously, or in a superposition that made it impossible to tell the order of input.

Their calculations showed that the superposition method would enable a low-power, causally-scrambled charger to deliver more energy more efficiently than a conventional high-power charger.

They followed up their calculations with a proof-of-principle experiment using light. By sending photons through a quantum switch with two possible paths, the researchers split the light particles into two possible versions of themselves, each one traversing a different path.

Then, after subjecting the light to two inputs that would polarize them in a different order (A then B or B then A) based on the path they were on, the researchers measured the polarization at the end and found that the individual photons had been causally scrambled.

Having tested their protocol, the scientists say their next challenge is to create a physical quantum battery that can hold a charge. However, the first experimental evidence for a quantum battery was only published last year, so it may not happen any time soon.

"Given the current situation characterized by limited experimental efforts and ongoing theoretical exploration in the realm of quantum batteries, it is challenging to estimate a precise timeline for achieving conclusive outcomes," Chen said.

Go here to read the rest:

Quantum batteries could charge faster by scrambling the rules of cause and effect - Livescience.com

Read More..

Scientists measure entanglement at the LHC | symmetry magazine – Symmetry magazine

On the smallest level, the universe operates in such a bizarre way that even Albert Einstein had a difficult time making sense of it. An example of the strangeness in the quantum realm, one that has no equivalent in the world as we experience it, is the phenomenon of quantum entanglement.

In our classical world, if Iman flips a coin in Indonesia and Olaf flips a coin in Norway, both Iman and Olaf have a 50% chance of landing on heads and a 50% chance of landing on tails. The result of one coin flip will have no effect on the result of the other.

In the quantum world, that wouldn't necessarily be true. Two particles can become entangled, which, in this case, would mean the coins would be connected by the outcome of their flips. If one landed on heads, the other would land on tails, and vice versa. So if Iman saw her coin land on heads, she would immediately know that Olaf had gotten tails.

This seemed impossible to Einstein. The problem was: For one entangled particle to affect another, even if separated by a great distance, it appeared that information (perhaps the result of a coin flip, or the resolution of a particle's quantum state) would need to travel between them faster than the speed of light.

But quantum entanglement, which Einstein famously described as "spooky action at a distance," is now an accepted fact of nature.

"Quantum entanglement is the most distinctive signature of quantum mechanics," says Juan R. Muñoz de Nova, a condensed-matter physicist at the Complutense University of Madrid. "It contradicts the intuitions we have on a daily basis," he says. "That is why entanglement is so intrinsic to quantum mechanics."

This phenomenon has been observed by researchers around the world, and the 2022 Nobel Prize in physics was awarded to three scientists for experimentally advancing our understanding of it. Scientists have detected quantum entanglement through experiments involving macroscopic diamonds and ultracold gases.

In September 2023, the ATLAS collaboration made another advancement when they unveiled the highest-energy measurement of quantum entanglement ever, using top quarks produced in the Large Hadron Collider at CERN. Interestingly, the measurement turned out a bit differently than expected.

At the LHC, scientists accelerate protons to relativistic speeds at high energies, bringing them into collision with one another about 40 million times a second. The energy of the collisions can generate new particles, allowing experiments like ATLAS to study their properties.

When particles are created in collisions at the LHC, some of them naturally become entangled. For instance: A characteristic of a particle that can be measured is its spin; two particles produced in the same collision can become entangled so that they have opposite spin.

One of the types of particles that scientists study at the LHC are top quarks, which are heavier than any other known fundamental particle. Due to their large mass, top quarks decay extremely quickly after they are produced. This quick decay prevents a process that quarks typically go through called hadronization.

Hadronization causes quarks to grab other quarks and combine into more stable quark bundles like protons and neutrons. The fact that top quarks cant hadronize is important, because once a quark hadronizes, it is a lot more challenging to measure its spin.

Instead of hadronizing, top quarks decay, transferring their energy into less massive, more stable particles. They pass their spins on to their decay products, which ATLAS researchers can detect and measure. By reconstructing the spin correlations between top-quark pairs, the scientists can confirm whether they were entangled.
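In the published analyses this kind of test is often summarized with a spin-correlation marker, commonly written D, estimated from the angle φ between the two decay leptons, each taken in its parent top quark's rest frame; a value of D below −1/3 signals an entangled pair. The sketch below estimates D from an invented cos φ distribution and is not ATLAS data; the assumed "true" value is arbitrary.

```python
# Toy illustration (not ATLAS data) of the spin-correlation marker used in
# top-quark entanglement analyses: D = -3 * <cos(phi)>, where phi is the
# angle between the two decay leptons, each measured in its parent top's
# rest frame. D < -1/3 signals an entangled top-quark pair.
import numpy as np

rng = np.random.default_rng(0)
D_true = -0.5                      # assumed "true" correlation, for the toy

# Draw cos(phi) uniformly and weight each draw by the angular shape
# (1 - D*cos(phi)), whose mean cos(phi) equals -D/3.
c = rng.uniform(-1.0, 1.0, size=500_000)
w = 1.0 - D_true * c

mean_cos = np.average(c, weights=w)
D_measured = -3.0 * mean_cos

print(f"estimated D = {D_measured:.3f}  (entangled if D < -1/3)")
```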

The ATLAS collaboration faced many challenges as they brought this science from theory to experimentation.

The biggest issue was accounting for neutrinos. Top quarks decay into leptons; some leptons, like electrons, are easy for the ATLAS detector to spot. Others, however, are practically invisible. Neutrinos, which can pass through the entire planet without ever interacting, easily escape the ATLAS detector without leaving a trace.

Their presence can be inferred by the amount of input and output energy from the collision. If energy has gone missing, scientists can attribute it to neutrinos. But it's not a perfect solution, as a decaying top quark pair can produce more than one neutrino, and there's only one measurement of the collision's total energy. "We use some clever, quite complicated bits of math, and some experimental hacks to correct for this," says James Howarth, a physics lecturer at the University of Glasgow and a coordinator of the analysis.
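The bookkeeping behind "missing energy" is itself simple: in the plane transverse to the beams, momentum should balance, so whatever is needed to restore the balance is attributed to invisible particles such as neutrinos. A minimal sketch with made-up momenta:

```python
# Toy bookkeeping behind "missing energy": transverse momenta should balance,
# so the negative vector sum of the visible objects is attributed to invisible
# particles such as neutrinos. The momenta below are invented for illustration.
import numpy as np

# (px, py) in GeV for the reconstructed visible objects of one event.
visible = np.array([
    [ 55.2,  10.1],   # lepton 1
    [-20.4,  35.7],   # lepton 2
    [-10.8, -60.3],   # b-jet 1
    [  5.0,  -9.9],   # b-jet 2
])

missing = -visible.sum(axis=0)    # negative vector sum of visible momenta
met = np.linalg.norm(missing)     # magnitude: missing transverse energy

print("missing pT vector (GeV):", np.round(missing, 1))
print("MET (GeV):", round(float(met), 1))
```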

Scientists placed strict requirements on the collisions to carefully reconstruct the motions of top quark pairs. This problem took the most time to solve since reconstruction is crucial for detecting entanglement.

A second issue lay in the detector itself. When measuring entanglement, scientists paid close attention to the way decay products were distributed after a collision. But they needed to do so quickly because imperfections in the detector can affect the experiment. "The detector actually kind of distorts the shape of the distribution that we're interested in," says Yoav Afik, a physicist at the University of Chicago and a coordinator of the analysis.

To counteract the distortion, ATLAS collaborators used simulations to model what the collision looked like without the detector's effects.

A third issue: To measure entanglement experimentally, scientists needed a way to test alternative hypotheses using modeled data. The way the simulations were constructed, testing different scenarios was tricky.

With help from de Nova, who collaborated as a theorist during the analysis, researchers developed a weighted method in the model that allowed them to change the distribution of the decay products. In doing so, they could test varying hypotheses that would ultimately indicate whether entanglement existed in the system.

After a grueling three years of testing and analyzing, the ATLAS collaboration came to a rewarding yet puzzling result. "What we found is that the data shows us stronger entanglement than expected by the simulations," Afik says.

It could be that the physicists have found an intrinsic difference between prediction and reality that points to a misunderstanding in our theory of quantum entanglement. Or, it could be a sign that something needs to be tweaked in the analysis. Howarth reasons that the simulations may not fully describe the region in which the collisions are taking place, so the modeling could be incomplete.

Whether or not the measurement eventually lines up with predictions, it still stands as the highest-energy detection of quantum entanglement and the first measurement of entanglement between a pair of quarks.

"We never observed entanglement in such a high-energy relativistic system as the LHC," says de Nova, "so this is like opening a whole new world to quantum information."

Read more from the original source:

Scientists measure entanglement at the LHC | symmetry magazine - Symmetry magazine

Read More..

The Holy Grail of Quantum Computing Is Finally Here. Or Is It? – WIRED

Andersen and Lensky of Google disagree. They do not think the experiment demonstrates a topological qubit, because the object cannot reliably manipulate information to achieve practical quantum computing. "It is repeatedly stated explicitly in the manuscript that error correction must be included to achieve topological protection and that this would need to be done in future work," they write to WIRED.

When WIRED spoke with Tony Uttley, the president and COO of Quantinuum, after the company's own announcement in May, he was steadfast. "We created a topological qubit," he said. (Uttley said last month that he was leaving the company.) The company's experiments made non-Abelian anyons out of 27 ions of the metal ytterbium, suspended in electromagnetic fields. The team manipulated the ions to form non-Abelian anyons in a racetrack-shaped trap, and similar to the Google experiment, they demonstrated that the anyons could remember how they had moved. Quantinuum published its results in a preprint study on arXiv without peer review two days before Nature published Kim's paper.

Room for Improvement

Ultimately, no one agrees whether the two demonstrations have created topological qubits because they haven't agreed on what a topological qubit is, even if there is widespread agreement that such a thing is highly desirable. Consequently, Google and Quantinuum can perform similar experiments with similar results but end up with two very different stories to tell.

Regardless, Frolov at the University of Pittsburgh says that neither demonstration appears to have brought the field closer to the true technological purpose of a topological qubit. While Google and Quantinuum appear to have created and manipulated non-Abelian anyons, the underlying systems and materials used were too fragile for practical use.

David Pekker, another physicist at Pittsburgh, who previously used an IBM quantum computer to simulate the manipulation of non-Abelian anyons, says that the Google and Quantinuum projects don't showcase any quantum advantage in computational power. The experiments don't shift the field of quantum computing from where it has been for a while: working on systems that are too small-scale to yet compete with existing computers. "My iPhone can simulate 27 qubits with higher fidelity than the Google machine can do with actual qubits," Pekker says.

Still, technological breakthroughs sometimes grow from incremental progress. Delivering a practical topological qubit will require all kinds of studies, large and small, of non-Abelian anyons and the math underpinning their quirky behavior. Along the way, the quantum computing industry's interest is helping further some fundamental questions in physics.

Continued here:

The Holy Grail of Quantum Computing Is Finally Here. Or Is It? - WIRED

Read More..

Breaking Causality: The Revolutionary Power of Quantum Batteries – SciTechDaily

Quantum batteries, with their innovative charging methods, represent a leap in battery technology, promising higher efficiency and wider applications in sustainable energy solutions. Credit: SciTechDaily.com

A new way to charge batteries harnesses the power of indefinite causal order.

Batteries that exploit quantum phenomena to gain, distribute, and store power promise to surpass the abilities and usefulness of conventional chemical batteries in certain low-power applications. For the first time, researchers including those from the University of Tokyo take advantage of an unintuitive quantum process that disregards the conventional notion of causality to improve the performance of so-called quantum batteries, bringing this future technology a little closer to reality.

When you hear the word quantum (the physics governing the subatomic world), developments in quantum computers tend to steal the headlines, but there are other upcoming quantum technologies worth paying attention to. One such item is the quantum battery, which, though initially puzzling in name, holds unexplored potential for sustainable energy solutions and possible integration into future electric vehicles. Nevertheless, these new devices are poised to find use in various portable and low-power applications, especially when opportunities to recharge are scarce.

In the classical world, if you tried to charge a battery using two chargers, you would have to do so in sequence, limiting the available options to just two possible orders. However, leveraging the novel quantum effect called ICO (indefinite causal order) opens the possibility to charge quantum batteries in a distinctively unconventional way. Here, multiple chargers arranged in different orders can exist simultaneously, forming a quantum superposition. Credit: 2023 Chen et al.

At present, quantum batteries only exist as laboratory experiments, and researchers around the world are working on the different aspects that are hoped to one day combine into a fully functioning and practical application. Graduate student Yuanbo Chen and Associate Professor Yoshihiko Hasegawa from the Department of Information and Communication Engineering at the University of Tokyo are investigating the best way to charge a quantum battery, and this is where time comes into play. One of the advantages of quantum batteries is that they should be incredibly efficient, but that hinges on the way they are charged.

"Current batteries for low-power devices, such as smartphones or sensors, typically use chemicals such as lithium to store charge, whereas a quantum battery uses microscopic particles like arrays of atoms," said Chen. "While chemical batteries are governed by classical laws of physics, microscopic particles are quantum in nature, so we have a chance to explore ways of using them that bend or even break our intuitive notions of what takes place at small scales. I'm particularly interested in the way quantum particles can work to violate one of our most fundamental experiences, that of time."

While it's still quite a bit bigger than the AA battery you might find around the home, the experimental apparatus acting as a quantum battery demonstrated charging characteristics that could one day improve upon the battery in your smartphone. Credit: 2023 Zhu et al.

In collaboration with researcher Gaoyan Zhu and Professor Peng Xue from Beijing Computational Science Research Center, the team experimented with ways to charge a quantum battery using optical apparatuses such as lasers, lenses, and mirrors, but the way they achieved it necessitated a quantum effect where events are not causally connected the way everyday things are. Earlier methods to charge a quantum battery involved a series of charging stages one after the other. However, here, the team instead used a novel quantum effect they call indefinite causal order, or ICO. In the classical realm, causality follows a clear path, meaning that if event A leads to event B, then the possibility of B causing A is excluded. However, at the quantum scale, ICO allows both directions of causality to exist in what's known as a quantum superposition, where both can be simultaneously true.

Common intuition suggests that a more powerful charger results in a battery with a stronger charge. However, the discovery stemming from ICO introduces a remarkable reversal in this relationship; now, it becomes possible to charge a more energetic battery with significantly less power. Credit: 2023 Chen et al.

"With ICO, we demonstrated that the way you charge a battery made up of quantum particles could drastically impact its performance," said Chen. "We saw huge gains in both the energy stored in the system and the thermal efficiency. And somewhat counterintuitively, we discovered the surprising effect of an interaction that's the inverse of what you might expect: A lower-power charger could provide higher energies with greater efficiency than a comparably higher-power charger using the same apparatus."

The phenomenon of ICO the team explored could find uses beyond charging a new generation of low-power devices. The underlying principles, including the inverse interaction effect uncovered here, could improve the performance of other tasks involving thermodynamics or processes that involve the transfer of heat. One promising example is solar panels, where heat effects can reduce their efficiency, but ICO could be used to mitigate those and lead to gains in efficiency instead.

Reference: "Charging Quantum Batteries via Indefinite Causal Order: Theory and Experiment" by Gaoyan Zhu, Yuanbo Chen, Yoshihiko Hasegawa and Peng Xue, 13 December 2023, Physical Review Letters. DOI: 10.1103/PhysRevLett.131.240401

This work has been supported by the National Natural Science Foundation of China (Grant Nos. 92265209 and 12025401). Y. H. acknowledges support by JSPS KAKENHI Grant Number JP22H03659. Y.C. acknowledges support by JST SPRING, Grant Number JPMJSP2108.

Read the original post:

Breaking Causality: The Revolutionary Power of Quantum Batteries - SciTechDaily

Read More..

Physicist Bob Coecke: It's easier to convince kids than adults about quantum mechanics – The Guardian


The Belgian physicist and industrial musician on replacing maths with pictures, why he's now working in industry and why we all need to understand subatomic physics

Zeeya Merali

Sat 16 Dec 2023 09.00 EST

Belgian physicist and musician Prof Bob Coecke, 55, wants to teach quantum physics to a mass audience. The paradox-filled theory that describes the microscopic realm has become a staple of science fiction, from Marvel's Ant-Man to the multiple Oscar-winning Everything Everywhere All at Once. It's famously bizarre and, in the UK, the subject is mostly reserved for undergraduates specialising in physics because it requires grappling with complicated maths. But Coecke, a former Oxford professor, has devised a maths-free framework using diagrams for total beginners, outlined in Quantum in Pictures, his book with Dr Stefano Gogioso that was published earlier this year. Over the summer, they ran an education experiment, teaching the pictorial method to UK schoolchildren who then beat the average exam scores of Oxford University's postgraduate physics students.

Quantum physics is notoriously esoteric. Why should most people even want to study it? Think about AI. Think about how the world is getting fucked up now. Billion-dollar companies are in charge of a revolution that could control the world and nobody understands what they are doing. I used to be an Oxford professor for 20 years and now I work in industry, with Quantinuum, building quantum computers [machines designed to exploit subatomic physics to one day outperform conventional computers]. We want people to understand what we're doing from the start, before the technology becomes huge. We want to make Stem [science, technology, engineering and mathematics] more inclusive, make quantum more inclusive. It's completely counterintuitive, but within industry I can now do this educational experiment.

Your educational experiment involved 54 schoolchildren, aged 15-17, who were randomly selected from around 1,000 applicants, from 36 UK schools, mostly state schools. The teenagers spent two hours a week in online classes and after eight weeks were given a test using questions from an Oxford postgraduate quantum physics exam. More than 80% of the pupils passed and around half earned a distinction. Were you surprised by their success? At one point, I was going to call off the whole thing because I thought it was going to be a complete disaster. We'd originally wanted the kids to interact with each other on social media or communicate online, but that wasn't allowed due to the ethical guidelines for the experiment. I thought, what sort of educational experience is it, if you can't talk to each other?

This is the Covid generation: none of them put their cameras on [for the online classes], so we were looking at a black screen. None of them asked questions using their voices, they just typed. It was a difficult teaching challenge by all standards. We also saw a self-esteem problem with the students. But the majority of kids liked that we had announced that you didn't need a complex maths background. The maths had been a barrier to kids who had wanted to access this knowledge.

And then we got back the numbers. They did significantly better than we see from university-level students. Exams were marked blind, so we don't know how many came in with the aim of pursuing Stem. We are processing that data now.

How did you come up with this quantum picturalism method? Was it originally aimed at children and beginners? I'm a very visual person. I'm not just a quantum physicist, I'm an artist and musician. In fact, the only reason I ended up in quantum physics was because I wanted to support my music career; my rock/metal/electronica fusion band, Black Tish, released two albums this year. I got a job at Oxford University's computer science department in the 1990s and my senior colleague Samson Abramsky told me we needed a high-level programming language for [future] quantum computers. For normal computing, you program in zeros and ones, but most people don't understand how to do that. But everyone understands how to use an iPhone. We wanted the equivalent of an iPhone interface for quantum-computer programming. So Abramsky and I published a new formalism of quantum mechanics in 2004, based on category theory [a well-established branch of mathematics that uses diagrams to describe collections of objects].

I then developed it over the years, with others, and I wrote a book about it for physicists in 2017 with Aleks Kissinger. But the worst people to teach are theoretical physicists. They have so much to unlearn. Half of the mainstream people in quantum computing said: "You are doing things with silly pictures, this can't be useful, it is too simple!" And the other half said: "Category theory is so hard, this can't be useful, it is too complicated!" It took years to get rid of the stigma that this was too complicated. So I wrote this new book with Stefano, who did all the pictures, specifically to run this experiment, to prove that this is so easy, kids can do it and outperform Oxford postgrad students.

We hear so many weird and wonderful things about quantum physics: A cat in a box can be dead and alive at the same time, until you look at it; particles can be in two places at once, unless their position is measured; information can be teleported between quantum systems. How do you convey these processes using just pictures? It's simple. It's all drawing quantum circuits: boxes connected by wires [to demonstrate quantum phenomena]. Teleporting is just sliding boxes along a wire. Measurements are represented by boxes called "spiders" that have many legs, or wires, sticking out. A quantum particle that can be in two places at once before being measured is drawn as two legs that go into a spider; the spider's body represents the measurement, and there's one leg coming out the other side; that's the outcome.

What is your hope for quantum picturalism in the future? I have been approached by people in the Australian and Greek governments, in their education departments, who are interested in implementing this. I am also passionate about taking this into Africa. It's early days, but we are planning something there.

I started out wanting to change the way quantum mechanics is understood, and it's easier to convince kids than adults. They have no preconceptions. So maybe the next generation will carry it forward. As one of the founders of quantum physics, Max Planck, once said: "Science advances one funeral at a time."


Read the rest here:

Physicist Bob Coecke: It's easier to convince kids than adults about quantum mechanics - The Guardian

Read More..

If Quantum Physics is Queer, What Does it Mean for Quantum Technologies? – Medium

Photo by Jordan McDonald on Unsplash

All that we think is obvious in the physical world is, in reality, merely the physical traits that we can see, and which we then measure against the standards we are familiar with. For instance, our common sense would appear to tell us that a big system (such as a forest ecosystem) could only be made up of subsystems (the trees, herbage, animals, water etc) that create a set of rules that linearly correlate with the overall workings of the big system (the forest). But what if we break down the subsystems further? For instance, let's take a leaf from a tree and photosynthesis. Photosynthesis, as we should have learnt by middle school (if the school is doing its job), is the process by which a plant uses carbon dioxide, water, and the sun in combination to produce energy in the form of glucose, with the aid of a machinery known as chlorophyll.

All of these are explained using biological and chemical logic that does not, for instance, explain why the energy production of the leaf must work in such combination to produce the chemical kinetics that would then lead to the intricate production of the different energy compounds needed to power the life of the tree. Rather, what we get is the expected accounting of base components: water + carbon dioxide + sunlight (with the aid of chlorophyll) = ATP and NADPH (the energy molecules).

Admittedly, I am taking shortcuts around the steps needed to get to the final product, but even if you were to lay out all the steps, there is still a missing answer to the question of how the photons and neutrinos from the sun cooperate with an even more complex range of particles found in carbon dioxide and water through the absorption of sunlight by the chloroplasts, where the chlorophyll is located. This is even after we take into account the breaking down of those complex entities into subatomic particles found in hydrogen, oxygen, and carbon. For some scientists, the answers lie in the quantum systems within each of these component subsystems, an approach represented by quantum biology.

Read more from the original source:

If Quantum Physics is Queer, What Does it Mean for Quantum Technologies? - Medium

Read More..

Science of Visibility and Invisibility | by Alexandre Kassiantchouk | Time Matters | Dec, 2023 – Medium

Let's explore an object or a particle that was at point A and had velocity v at point A (we don't care about its velocity before or after that). Let's say we have two observers: a stationary observer at point B, and another observer moving at constant velocity v: the second observer was at point A when the observed object/particle was there, and later he was at some point E at the moment when the first observer, stationed in B, noticed/saw the object at the point A (after some time delay t for light to travel from A to B). For the moving observer (between A and E), according to Einstein, time slows down by a factor sqrt(1 - v²/c²), where sqrt is square root, and in his frame of reference (where he thinks of himself as not moving) light has travelled not from A to B, but from E to B, and that took t·sqrt(1 - v²/c²) time, which is less than t. Since we know all sides in the triangle ABE, we can find the value of the angle EAB (call it α):

α = arccos(v/c)
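Filling in that step: with AB = ct (the light's travel from A to B in time t), AE = vt (the moving observer's displacement) and EB = ct·sqrt(1 - v²/c²) (the light's path in the moving observer's frame), the triangle is right-angled at E:

```latex
% AB = ct, AE = vt, EB = c t \sqrt{1 - v^2/c^2}, as given above.
\[
  AE^{2} + EB^{2} = v^{2}t^{2} + c^{2}t^{2}\left(1 - \frac{v^{2}}{c^{2}}\right)
                  = c^{2}t^{2} = AB^{2},
\]
\[
  \Rightarrow \quad \angle AEB = 90^{\circ}, \qquad
  \cos\alpha = \frac{AE}{AB} = \frac{vt}{ct} = \frac{v}{c}, \qquad
  \alpha = \arccos\!\left(\frac{v}{c}\right).
\]
```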

Only at the angle arccos(v/c) is a particle A moving at speed v visible to the first observer! We will discuss nuances of this seemingly strange restriction. Such a line of sight can be geometrically represented as the AC line below, where point C is constructed as the intersection between the perpendicular to the velocity vector v at its end and the radial distance c (299,792,458 m), because for the angle α in the picture below we have cos(α) = v/c.

Here we see the speed-of-light constant c residing on the hypotenuse of the triangle Avc, and Ac there is the line of sight. Now, we can imagine all possible velocities v of the object A for it to be seen along this line of sight. All such velocities end up on the circle with the diameter c, and this diameter lines up with the line of sight: check the right image above. Here is a summary about such a circle and the line of sight:

A stationary (meaning v=0) object/particle A is always visible. The longest vector, v=c, is not achievable for objects/particles having mass, as, according to Einstein, for such objects/particles v < c.

But in real life we do not experience such limitations to visibility; why is that? It is because real objects are composed of atoms, wrapped in electrons, which always wobble. And their vibrations, unnoticeable to us, often, from time to time, compensate for the velocity of the object, making the combined velocity of an atom's vibration plus the object's velocity equal to 0. That is because electron vibration speed is much higher than the speed of an object. For example, the speed of an electron knocked out of an atom in the photoelectric effect is about 600 km/sec. Assuming that atoms/electrons vibrate at such speed, any object at a speed well below 600 km/sec is visible: from time to time, for a blink of an eye, atoms that are stationary at that moment are visible (because the combined velocity+vibration=0 happens very often, thousands of times per second).

Let's explore the case when the vibration speed v′ is smaller than the object's speed v. Then, in the picture below, the line of sight is not just the fixed AB1 or AB2, but it varies from AB1 to AC1 and from AB2 to AC2 for a combined velocity v1 = v + v′ instead of just v:

Thus, the angle arccos(v/c), at which a non-vibrating object is visible, changes to a range of angles arccos(sqrt(v² - v′²)/c) ± arcsin(v′/v), at which a vibrating object is visible. In reality, such an area of visibility/observability is 3-dimensional, lying between two cones:

P.S. For velocities v close to the speed of light we should keep in mind that the addition of velocities v + v′ should be adjusted to Einstein's velocity-addition operation v ⊕ v′, so that the sum does not exceed the speed of light c. For example, for collinear velocities v and v′, their real sum is v ⊕ v′ = (v+v′)/[1+vv′/c²], which never exceeds c. Even when one of the velocities equals c, c ⊕ v stays c:

c ⊕ v = (c+v) / [1+cv/c²] = (c+v) / [1+v/c] = (c+v) / [(c+v)/c] = c.
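A quick numerical check of this addition rule (with c in metres per second, as above); the sample speeds are arbitrary:

```python
# Numerical check of Einstein's velocity addition for collinear speeds:
# v ⊕ v' = (v + v') / (1 + v*v'/c**2). The combined speed never exceeds c.
c = 299_792_458.0

def add_velocities(v, vp):
    return (v + vp) / (1.0 + v * vp / c**2)

print(add_velocities(0.5 * c, 0.5 * c) / c)      # 0.8, not 1.0
print(add_velocities(c, 0.9 * c) / c)            # 1.0 up to round-off: c ⊕ v = c
print(add_velocities(0.999 * c, 0.999 * c) / c)  # still below 1.0
```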

Read the original here:

Science of Visibility and Invisibility | by Alexandre Kassiantchouk | Time Matters | Dec, 2023 - Medium

Read More..

Beyond the Void: New Experiment Challenges Quantum Electrodynamics – SciTechDaily

The X-ray beam from the world's largest X-ray laser, the European XFEL, only becomes as clearly visible as in the photo in complete darkness and with an exposure time of 90 seconds. In 2024, the first experiments to detect quantum fluctuations in vacuum will take place here. Credit: European XFEL / Jan Hosan

Absolutely empty: that is how most of us envision the vacuum. Yet, in reality, it is filled with an energetic flickering: the quantum fluctuations. Scientists are currently gearing up for a laser experiment intended to verify these vacuum fluctuations in a novel way, which could potentially provide clues to new laws in physics.

A research team from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) has developed a series of proposals designed to help conduct the experiment more effectively, thus increasing the chances of success. The team presents its findings in the scientific journal Physical Review D.

The physics world has long been aware that the vacuum is not entirely void but is filled with vacuum fluctuations: an ominous quantum flickering in time and space. Although it cannot be captured directly, its influence can be indirectly observed, for example, through changes in the electromagnetic fields of tiny particles.

However, it has not yet been possible to verify vacuum fluctuations without the presence of any particles. If this could be accomplished, one of the fundamental theories of physics, namely quantum electrodynamics (QED), would be proven in a hitherto untested area. Should such an experiment reveal deviations from the theory, however, it would suggest the existence of new, previously undiscovered particles.

Dr. Ulf Zastrau heads the HED (High Energy Density Science) experimental station at the European XFEL. In the HED beam chamber the flashes from the worlds largest X-ray laser must meet the light pulses from the ReLaX high-power laser operated by the HZDR in order to detect vacuum fluctuations. Credit: European XFEL / Jan Hosan

The experiment intended to accomplish this is planned as part of the Helmholtz International Beamline for Extreme Fields (HIBEF), a research consortium led by the HZDR at the HED experimental station of the European XFEL in Hamburg, the largest X-ray laser in the world. The underlying principle is that an ultra-powerful laser fires short, intense flashes of light into an evacuated stainless steel chamber. The aim is to manipulate the vacuum fluctuations so that they, seemingly magically, change the polarization of an X-ray flash from the European XFEL, i.e., rotate its direction of oscillation.

"It would be like sliding a transparent plastic ruler between two polarizing filters and bending it back and forth," explains HZDR theorist Prof. Ralf Schützhold. "The filters are originally set up so that no light passes through them. Bending the ruler would now change the direction of the light's oscillation in such a way that something could be seen as a result." In this analogy, the ruler corresponds to the vacuum fluctuations while the ultra-powerful laser flash bends them.
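As a toy version of this analogy (not a model of the actual experiment), the short Jones-calculus sketch below shows how crossed polarizers pass no light until a birefringent element, playing the role of the bent ruler or of the laser-pumped vacuum, is placed between them; the retardance and axis angle chosen are arbitrary.

```python
# Toy Jones-calculus version of the "ruler between crossed polarizers" analogy:
# crossed polarizers transmit nothing until a birefringent element (the "ruler")
# sits between them and rotates part of the light's polarization.
import numpy as np

Px = np.array([[1, 0], [0, 0]])            # polarizer along x
Py = np.array([[0, 0], [0, 1]])            # crossed polarizer along y

def waveplate(delta, theta):
    """Birefringent element with retardance delta, fast axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    J = np.array([[np.exp(-1j * delta / 2), 0],
                  [0, np.exp(+1j * delta / 2)]])
    return R @ J @ R.T

E_in = np.array([1.0, 0.0])                # light polarized along x

blocked = Py @ Px @ E_in                                       # polarizers alone
passed = Py @ waveplate(np.pi / 2, np.pi / 4) @ Px @ E_in      # "ruler" inserted

print("without ruler:", np.abs(blocked) ** 2)   # [0, 0] -> dark
print("with ruler:   ", np.abs(passed) ** 2)    # some light now gets through
```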

The original concept involved shooting just one optical laser flash into the chamber and using specialized measurement techniques to register whether it changes the X-ray flash's polarization. But there is a problem: "The signal is likely to be extremely weak," explains Schützhold. "It is possible that only one in a trillion X-ray photons will change its polarization."

But this might be below the current measurement limit: the event could simply fall through the cracks undetected. Therefore, Schützhold and his team are relying on a variant: instead of just one, they intend to shoot two optical laser pulses simultaneously into the evacuated chamber.

Both flashes will strike there and literally collide. The X-ray pulse of the European XFEL is set to fire precisely into their collision point. The decisive factor: The colliding laser flashes affect the X-ray pulse like a type of crystal. Just as X-rays are diffracted, i.e., deflected, when passing through a natural crystal, the XFEL X-ray pulse should also be deflected by the briefly existing "light crystal" of the two colliding laser flashes.

"That would not only change the polarization of the X-ray pulse but also slightly deflect it at the same time," explains Ralf Schützhold. This combination could increase the chances of actually being able to measure the effect, or so the researchers hope. The team has calculated various options for the striking angle of the two laser flashes colliding in the chamber. Experiments will show which variant proves to be most suitable.

The prospects could even be improved further if the two laser flashes shot into the chamber were not of the same color but of two different wavelengths. This would also allow the energy of the X-ray flash to change slightly, which would, likewise, help to measure the effect. "But this is technically quite challenging and may only be implemented at a later date," says Schützhold.

The project is currently in the planning stages in Hamburg together with the European XFEL team at the HED experimental station, and the first trials are scheduled to launch in 2024. If successful, they could confirm QED once more.

But perhaps the experiments will reveal deviations from the established theory. This could be due to previously undiscovered particles, for example, ultra-light "ghost particles" known as axions. And that, says Schützhold, would be "a clear indication of additional, previously unknown laws of nature."

Reference: "Detection schemes for quantum vacuum diffraction and birefringence" by N. Ahmadiniaz, T. E. Cowan, J. Grenzer, S. Franchino-Viñas, A. Laso Garcia, M. Šmíd, T. Toncian, M. A. Trejo and R. Schützhold, 10 October 2023, Physical Review D. DOI: 10.1103/PhysRevD.108.076005

Read the rest here:

Beyond the Void: New Experiment Challenges Quantum Electrodynamics - SciTechDaily

Read More..