
Vedika Khemani wins Breakthrough New Horizons Prize | Stanford News – Stanford University News

Vedika Khemani, assistant professor of physics at Stanford University, has been awarded a New Horizons in Physics Prize from the Breakthrough Prize Foundation. Khemani was recognized for pioneering theoretical work formulating novel phases of non-equilibrium quantum matter, including time crystals.

Vedika Khemani (Image credit: Rod Searcey)

Time crystals got their name from the fact that, like crystals, they are structurally arranged in a repeating pattern. But, while standard crystals like diamonds or salt have an arrangement that repeats in space, time crystals repeat across time forever. Importantly, they do so without any input of energy, like a clock that runs forever without batteries. Khemani's work offered a theoretical formulation for the first time crystals, as well as a blueprint for their experimental creation. But she emphasizes that time crystals are only one of the exciting potential outcomes of out-of-equilibrium quantum physics, which is still a nascent field.

"None of the world is in equilibrium; just look out your window, right? We're starting to see into these vastly larger spaces of how quantum systems evolve through experiments," said Khemani, who is faculty in the School of Humanities and Sciences and a member of Q-Farm, Stanford's broad interdisciplinary initiative in quantum science and engineering. "I'm very excited to see what kinds of new physics these new regimes will bring. Time crystals are one example of something new we could get, but I think it's just the beginning."

The $100,000 New Horizons Prize in Physics is given each year to up to three promising junior researchers who have already produced important work, according to the prize website. New Horizons prizes are one of three groups of Breakthrough Prizes in physics; the others are the $3 million Special Breakthrough Prize and the $3 million Breakthrough Prize. The Breakthrough Prizes also recognize researchers in mathematics and life sciences. Called the "Oscars of Science," the prizes are celebrated at a gala award ceremony presented by superstars of movies, music, sports and tech entrepreneurship. Since the prizes began in 2012, 10 Stanford faculty and researchers have won Breakthrough Prizes.

The concept of time crystals was first proposed in 2012 by physicist and Nobel laureate Frank Wilczek, but the idea was met with significant skepticism and comparisons to the impossible perpetual motion machine. In 2014, shortly after Wilczek's proposal, Masaki Oshikawa and Haruki Watanabe showed that fundamental laws of thermodynamics provably forbid the existence of time crystals. (Watanabe is a co-recipient of the New Horizons Prize.)

Thus, Khemani wasn't thinking of time crystals at all as she went about her graduate work at Princeton University on non-equilibrium quantum physics. But in 2016, a reviewer of a preprint paper co-authored by Khemani pointed out that she and her colleagues had, without intending to, outlined a working model for time crystals.

"I think if we had set out to find the time crystal we would have run into the same kinds of objections as Wilczek," said Khemani. "Instead, we were thinking about: How do we generalize the ideas of quantum phases of matter to systems that are out of equilibrium?"

Khemani and her doctoral advisor, Shivaji Sondhi, a professor of physics at Princeton University, were working on the problem of many-body localization. In a many-body localized system, particles get stuck in the state in which they started and can never relax to an equilibrium state. As such, these systems lie strictly outside the framework of equilibrium thermodynamics, which underpins our conventional understanding of all phases of matter.

Sondhi and Khemani worked with Achilleas Lazarides and Roderich Moessner at the Max Planck Institute to figure out how to think about phases of matter in many-body localized systems that are periodically driven in time, for instance by a laser. They found that, while equilibrium thermodynamics goes out the window, the possibility of formulating phases of matter need not. In addition to abstract theoretical formulations, they studied a concrete model: a periodically driven system of Ising spins. (The Ising model is often described as the fruit fly of statistical physics and has been extensively studied in equilibrium to understand fundamental phenomena, such as magnetism.)

These researchers found a number of phases in the out-of-equilibrium Ising model, including a novel one in which the system displays a stable, repetitive flip between patterns that repeat in time forever, at a period twice that of the driving period of the laser. (As required by the definition of time crystals, the laser does not impart energy into the system.) The phase Khemani and co-workers had found was, in fact, a time crystal; the out-of-equilibrium setting in which they were working allowed them to evade the constraints imposed by the laws of thermodynamics.
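
For readers who want to see what "period doubling" means concretely, here is a minimal numerical sketch, not the published model: it drives a tiny chain of eight Ising spins with deliberately imperfect global spin flips followed by random Ising interactions (the system size, couplings and 3% pulse error are illustrative assumptions) and prints the magnetization once per drive period, which flips sign every period, i.e., the pattern repeats at twice the drive period.

```python
# Toy Floquet (periodically driven) Ising chain; illustrative parameters only.
import numpy as np

L = 8                                   # number of spins (tiny, for exact simulation)
rng = np.random.default_rng(0)

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli x
Z = np.diag([1.0, -1.0])                # Pauli z

def site_op(op, j):
    """Embed a single-site operator at site j in the full 2^L-dimensional space."""
    out = np.array([[1.0]])
    for k in range(L):
        out = np.kron(out, op if k == j else I2)
    return out

Zs = [site_op(Z, j) for j in range(L)]
Xs = [site_op(X, j) for j in range(L)]

# First half of each period: an imperfect pi-pulse (global spin flip) about x.
theta = 0.97 * np.pi                    # 3% rotation error
U1 = np.eye(2**L, dtype=complex)
for j in range(L):
    U1 = (np.cos(theta / 2) * np.eye(2**L) - 1j * np.sin(theta / 2) * Xs[j]) @ U1

# Second half: Ising interactions with random couplings (disorder aids localization).
Hz = sum(rng.uniform(0.5, 1.5) * (Zs[j] @ Zs[(j + 1) % L]) for j in range(L))
U2 = np.diag(np.exp(-1j * np.diag(Hz)))  # Hz is diagonal in the z basis

U = U2 @ U1                             # evolution over one full drive period

psi = np.zeros(2**L, dtype=complex)
psi[0] = 1.0                            # start with all spins up
M = sum(Zs) / L                         # average magnetization operator
for t in range(8):
    m = float(np.real(psi.conj() @ (M @ psi)))
    print(f"period {t}: <M> = {m:+.3f}")  # the sign alternates each period
    psi = U @ psi
```

Despite the imperfect pulses, the interactions lock the spins into a rigid period-doubled response; that rigidity, rather than the bare flipping, is what distinguishes a time crystal from a trivially driven system.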

In the months that followed the preprint, important properties of the new phase were worked out by Khemani and her collaborators, notably Curt von Keyserlingk at the University of Birmingham, as well as by Dominic Else, Bela Bauer and Chetan Nayak at Microsoft Station Q. (Else and collaborators also independently identified Khemani's model as a time crystal, and Else is a co-recipient of the New Horizons Prize.) It was found that the phase displays a remarkable amount of robustness and stability. Then, various early experiments in 2017 showed promising precursors of the phase, although they were ultimately found not to realize a stable many-body time crystal.

Khemani describes work in the years that followed as creating a checklist of what actually makes a time crystal a time crystal, and the measurements needed to experimentally establish its existence, both under ideal and realistic conditions.

In 2020, Matteo Ippoliti, a postdoctoral scholar at Stanford working with Khemani, and others published a proposal for experimentally realizing a time crystal using the unique capabilities of Google's Sycamore quantum computer. Following this proposal, this summer, Ippoliti and Khemani, collaborating with the large Google Quantum AI team, published a preprint paper detailing the experimental creation of the first-ever time crystal on Google's device. That paper is now undergoing peer review.

Khemani sees great promise in these types of quantum experiments for many-body physics.

"While many of these efforts are broadly motivated by the quest to build quantum computers (which may only be achievable in the distant future, if at all), these devices are also, and immediately, useful when viewed as experimental platforms for probing new nonequilibrium regimes in many-body physics," said Khemani.

As for the award recognizing all of this work, Khemani described how it reflects the bigger picture. "This is called the New Horizons prize and I do think we are looking at new horizons in physics," she said. "There are people at Stanford who think about black holes and big astronomical questions talking to people who are trying to build quantum computers, talking to many-body theorists, talking to quantum information scientists. It's really exciting when you start getting so many different perspectives and so many different new ways of looking at problems."

Weird Muons May Point to New Particles and Forces of Nature – Scientific American

After leaving the European Organization for Nuclear Research (CERN) physics laboratory years ago, I crossed the Swiss-German border by high-speed train. Looking out the window of the carriage, I was enthralled by the scenes flashing by: a young couple embracing on an otherwise deserted platform, an old man standing by a rusty wagon with a missing wheel, two girls wading into a reedy pond. Each was just a few flickering frames, gone in the blink of an eye, but enough for my imagination to fill in a story.

I had just finished writing up some theoretical work on muon particles, heavier cousins to electrons, and it was out for the scrutiny of my particle physics colleagues during peer review. There was a symmetry between my thoughts as I looked out the train window that day and the research I had been working on. I had been analyzing the flickering effects of unseen virtual particles on muons, aiming to use the clues from these interactions to piece together a fuller picture of our quantum universe. As a young theorist just launching my career, I had heard about proposed experiments to measure the tiny wobbles of muons to gather such clues. I had just spent my last few months at CERN working on an idea that could relate these wobbling muons to the identity of the missing dark matter that dominates our universe and other mysteries. My mind fast-forwarding, I thought, "Great, now I just have to wait for the experiments to sort things out." Little did I suspect that I would end up waiting for a quarter of a century.

Finally, this past April, I tuned in to a webcast from my home institution, Fermi National Accelerator Laboratory (Fermilab) near Chicago, where scientists were reporting findings from the Muon g-2 ("g minus two") experiment. Thousands of people around the world watched to see if the laws of physics would soon need to be rewritten. The Fermilab project was following up on a 2001 experiment that found tantalizing hints of the muon wobble effect I had been hoping for. That trial didn't produce enough data to be definitive. But now Muon g-2 co-spokesperson Chris Polly was unveiling the long-awaited results from the experiment's first run. I watched with excitement as he showed a collection of new evidence that agreed with the earlier trial, both suggesting that muons are not acting as current theory prescribes. With the evidence from these two experiments, we are now very near the rigorous statistical threshold physicists require to claim a discovery.

What is this wobble effect that has me and other scientists so intrigued? It has to do with the way a muon spins when it travels through a magnetic field. This variation in spin direction can be affected by virtual particles that appear and disappear in empty space according to the weird rules of quantum mechanics. If there are additional particles in the universe beyond the ones we know about, they, too, will show up as virtual particles and exert an influence on a muon's spin in our experiments. And this seems to be what we are seeing. The Fermilab experiment and its precursor measured a stronger wobble in muons' spins than what we expect based on just the known particles. If the current discrepancy holds up, this will be the biggest breakthrough in particle physics since the discovery of the Higgs boson, the most recent novel particle discovered. We might be observing the effects of particles that could help unveil the identity of dark matter or even reveal a new force of nature.

My romance with physics began when I was a child, gazing in amazement at the Via Lactea (the Milky Way) in the deep dark sky of Argentina's Pampas, where I grew up. The same wonder fills me now. It is my job as a particle physicist to investigate what the universe is made of, how it works and how it began.

Scientists believe there is a simple yet elegant mathematical structure, based on symmetries of nature, that describes the way microscopic elementary particles interact with one another through the electromagnetic, weak and strong forces; this is the miracle of particle physics that scientists prosaically call the Standard Model. The distant stars are made of the same three elementary matter particles as our bodies: the electron and the up and down quarks, the latter two of which form protons and neutrons. Starlight is the result of the electromagnetic force acting between the charged protons and electrons, liberating light energy at the hot surface of the star. The heat source of these stars, including our sun, is the strong force, which acts on the protons and neutrons to produce nuclear fusion. And the weak force, which operates on both the quarks and the electrons, turns protons into neutrons and positrons (positively charged electrons) and controls the rate of the first step in the fusion process. (The fourth force of nature, gravity, is not part of the Standard Model, although integrating it with the other forces is a major goal.)

Physicists assembled the Standard Model piece by piece over the course of decades. At particle accelerators around the world, we have been able to create and observe all of the particles that the mathematical structure requires. The last to be found, the Higgs boson, was discovered almost a decade ago at CERN's Large Hadron Collider (LHC). Yet we know the Standard Model is not complete. It does not explain, for example, the 85 percent of the matter in the universe, known as dark matter, that holds the cosmos together, making galaxies such as our Milky Way possible. The Standard Model falls short of answering why, at some early time in our universe's history, matter prevailed over antimatter, enabling our existence. And the Muon g-2 experiment at Fermilab may now be showing that the Standard Model, as splendid as it is, describes just a part of a richer subatomic world.

The subjects of the experiment, muons, are produced in abundance by cosmic rays in Earth's atmosphere; more than 10,000 of them pass through our bodies every minute. These particles have the same physical properties as the familiar electron, but they are 200 times heavier. The extra mass makes them better probes for new phenomena in high-precision laboratories because any deviations from their expected behavior will be more noticeable. At Fermilab, a 50-foot-diameter ring of powerful magnets stores muons created under controlled conditions by smashing a beam of protons from a particle accelerator into a target of mostly nickel. This process produces pions, unstable composite particles that then decay into neutrinos and muons through weak force effects. At this point, the muons enter a ring filled with the vacuum of empty space.

Like electrons, muons have electric charge and a property we call spin, which makes them behave as little magnets. Because of the way they were created, when negatively charged muons enter the ring their spins point in the same direction as their motion, whereas for positively charged muons (used in the Fermilab experiment) the spins point in the opposite direction of their motion. An external magnetic field makes the electrically charged muons orbit around the ring at almost the speed of light. At the same time, this magnetic field causes the spin of the muons to precess smoothly like a gyroscope, as the particles travel around the ring, but with a small wobble.

The rate of precession depends on the strength of the muon's internal magnet and is proportional to a factor that we call g. The way the equations of the Standard Model are written, if the muon didn't wobble at all, the value of g would be 2. If that were the case, the muon's direction of motion and direction of spin would always be the same with respect to each other, and g-2 would be zero. In that case, scientists would measure no wobble of the muon. This situation is exactly what we would expect without considering the properties of the vacuum.
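
In the standard notation (left implicit in the article), what the experiments actually report is the "anomaly," the fractional amount by which g exceeds 2, and the wobble is the slow precession of the spin relative to the momentum at the anomalous frequency:

$$ a_\mu \equiv \frac{g-2}{2}, \qquad \omega_a = a_\mu \, \frac{e B}{m_\mu} $$

Here e and m_mu are the muon's charge and mass and B is the storage-ring magnetic field (this leading-order form holds at the experiment's "magic" beam momentum, where electric-field effects cancel). The measured anomaly is a_mu ≈ 0.00116592, so the wobble is roughly a one-part-in-a-thousand effect.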

But quantum physics tells us that the nothingness of empty space is the most mysterious substance in the universe. This is because empty space contains virtual particles: short-lived objects whose physical effects are very real. All the Standard Model particles we know of can behave as virtual particles as a result of the uncertainty principle, an element of quantum theory that limits the precision with which we can perform measurements. As a result, it is possible that for a very short time the uncertainty in the energy of a particle can be so large that a particle can spring into existence from empty space. This mind-blowing feature of the quantum world plays a crucial role in particle physics experiments; indeed, the discovery of the Higgs boson was enabled by virtual particle effects at the LHC.

Virtual particles also interact with the muons in the Fermilab ring and change the value of g. You can imagine the virtual particles as ephemeral companions that a muon emits and immediately reabsorbs; they follow it around like a little cloud, changing its magnetic properties and thus its spin precession. Therefore, scientists always knew that g would not be exactly 2 and that there would be some wobble as muons spin around the ring. But if the Standard Model is not the whole story, then other particles that we have not yet discovered may also be found in that cloud, changing the value of g in ways that the Standard Model cannot predict.

Muons themselves are unstable particles, but they live long enough inside the Muon g-2 experiment for physicists to measure their spin direction. Physicists do this by monitoring one of the decay particles they create: electrons, from decays of negatively charged muons, or positrons (the antiparticle version of electrons) from decays of positively charged muons. By determining the energy and arrival time of the electrons or positrons, scientists can deduce the spin direction of the parent muon. A team of about 200 physicists from 35 universities and labs in seven countries developed techniques for measuring the muon g-2 property with unprecedented accuracy.
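
A toy version of that analysis is the classic "wiggle plot" fit: the rate of detected high-energy positrons falls off with the time-dilated muon lifetime while oscillating at the anomalous precession frequency. The sketch below uses invented numbers apart from the well-known scales (a dilated lifetime of ~64 microseconds and a precession frequency of ~0.23 MHz) and assumes NumPy and SciPy are available.

```python
# Generate and fit a synthetic g-2 "wiggle plot"; all numbers illustrative.
import numpy as np
from scipy.optimize import curve_fit

GAMMA_TAU = 64.4e-6                   # time-dilated muon lifetime, ~64 us
OMEGA_A_TRUE = 2 * np.pi * 0.2291e6   # anomalous precession frequency, ~0.229 MHz

def wiggle(t, n0, a, omega_a, phi):
    """Positron count rate: exponential decay modulated by spin precession."""
    return n0 * np.exp(-t / GAMMA_TAU) * (1 + a * np.cos(omega_a * t + phi))

t = np.linspace(0, 300e-6, 2000)      # 300 microseconds of data
rng = np.random.default_rng(1)
counts = rng.poisson(wiggle(t, 1e4, 0.35, OMEGA_A_TRUE, 0.5))  # Poisson noise

popt, _ = curve_fit(wiggle, t, counts, p0=(1e4, 0.3, 2 * np.pi * 0.23e6, 0.0))
print(f"fitted omega_a / 2 pi = {popt[2] / (2 * np.pi) / 1e6:.4f} MHz")
```

In the real experiment, the fitted frequency, combined with an equally precise measurement of the magnetic field, is what yields g-2; the statistical power comes from accumulating billions of such decays.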

The first experiments to measure the muon g-2 took place at CERN, and by the late 1970s they had produced results that, within their impressive but limited precision, agreed with standard theory. In the late 1990s the E821 Muon g-2 experiment at Brookhaven National Laboratory started taking data, with a setup similar to that at CERN. It ran until 2001 and got impressive results showing an intriguing discrepancy from the Standard Model calculations. It collected only enough data to establish a three-sigma deviation from the Standard Model, well short of the five-sigma statistical significance physicists require for a discovery.

A decade later Fermilab acquired the original Brookhaven muon ring, shipped the 50-ton apparatus from Long Island to Chicago via highways, rivers and an ocean, and started the next generation of the Muon g-2 experiment. Nearly a decade after that, Fermilab announced a measurement of muon wobble with an uncertainty of less than half a part in a million. This impressive accuracy, achieved with just the first 6 percent of the expected data from the experiment, is comparable to the result from the full run of the Brookhaven trial. Most important, the new Fermilab results are in striking agreement with the E821 values, confirming that the Brookhaven findings were not a fluke.

To confirm this year's results, we need not just more experimental data but also a better understanding of what exactly our theories predict. Over the past two decades we have been refining the Standard Model predictions. Most recently, more than 100 physicists working on the Muon g-2 Theory Initiative, started by Aida El-Khadra of the University of Illinois, have strived to improve the accuracy of the Standard Model's value for the muon g-2 factor. Advances in mathematical methods and computational power have enabled the most accurate theoretical calculation of g yet, taking into account the effects from all virtual Standard Model particles that interact with muons through the electromagnetic, weak and strong forces. Just months before Fermilab revealed its latest experimental measurements, the theory initiative unveiled its new calculation. The number disagrees with the experimental result by 4.2 sigma, which means that the chances that the discrepancy is purely a statistical fluctuation are about one in 40,000.
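
That "one in 40,000" is simply the two-sided tail probability of a 4.2-sigma deviation under a normal distribution, which can be checked in a couple of lines (assuming SciPy is available):

```python
# Convert a significance in sigma to the probability of a chance fluctuation.
from scipy.stats import norm

p = 2 * norm.sf(4.2)                                  # two-sided tail beyond 4.2 sigma
print(f"p = {p:.2e}, i.e. about 1 in {1 / p:,.0f}")   # ~2.7e-05 -> 1 in ~37,000
```

The five-sigma discovery threshold mentioned earlier corresponds to a chance of well under one in a million.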

Still, the latest theoretical calculation is not iron-clad. The contributions to the g-2 factor governed by effects from the strong force are extremely difficult to compute. The Muon g-2 Theory Initiative used input from two decades of judiciously measured data in related experiments with electrons to evaluate these effects. Another technique, though, is to try to calculate the size of the effects directly from theoretical principles. This calculation is way too complex to solve exactly, but physicists can make approximations using a mathematical trick that discretizes our world into a gridlike lattice of space and time. These techniques have yielded highly accurate results for other computations where strong forces play a dominant role.

Teams around the world are tackling the lattice calculations for the muon g-2 factor. So far only one team has claimed a result of accuracy comparable to those based on experimental data from electron collisions. This result happens to dilute the discrepancy between the experimental and Standard Model expectations; if it is correct, there may not be evidence of additional particles tugging on the muon after all. Yet this lattice result, if confirmed by other groups, would itself conflict with the experimental electron data; the puzzle then would be our understanding of electron collisions. And it would be hard to find theoretical effects that would explain such a result because electron collisions have been so thoroughly studied.

If the mismatch between Fermilab's measurements and theory persists, we may be glimpsing an uncharted world of unfamiliar forces, novel symmetries of nature and new particles. In the research I published 25 years ago searching for clues about the muon's wobble, my collaborators and I considered a proposed property of nature called supersymmetry. This idea bridges two categories of particles: bosons, which can be packed together in large numbers, and fermions, which are antisocial and will share space only with particles of opposite spin. Supersymmetry postulates that each fermion matter particle of the Standard Model has a yet-to-be-discovered boson superpartner, and each Standard Model boson particle also has an undiscovered fermion superpartner. Supersymmetry promises to unify the three Standard Model forces and offers natural explanations for dark matter and the victory of matter over antimatter. It may also explain the striking Muon g-2 results.

Just after the Fermilab collaboration announced its measurement, my colleagues Sebastian Baum, Nausheen Shah, Carlos Wagner and I posted a paper to a preprint server investigating this intriguing notion. Our calculations showed that virtual superparticles in the vacuum could make the muons wobble faster than the Standard Model predicts, just as the experiment saw. Even more exhilarating, one of those new particles, called a neutralino, is a candidate for dark matter. Supersymmetry can take numerous forms, many of them already ruled out by data from the LHC and other experiments, but plenty of versions are still viable theories of nature.

The paper my team submitted was just one of more than 100 that have appeared proposing possible explanations for the Muon g-2 result since it was announced. Most of these papers suggest new particles that fall into one of two camps: either light and feeble or heavy and strong. The first category includes new particles that have masses comparable to or smaller than the muon's and that interact with muons with a strength millions of times weaker than the electromagnetic force. The simplest theoretical models of this type involve new, lighter cousins of the Higgs boson or particles related to new forces of nature that act on muons. These new light particles and feeble forces could be hard to detect in terrestrial experiments other than Muon g-2, but they may have left clues in the cosmos. These light particles would have been produced in huge numbers after the big bang and might have had a measurable effect on cosmic expansion. The same idea, that light particles and feeble forces wrote a chapter missing from our current history of the universe, has also been proposed to explain discrepancies in observations of the expansion rate of space, the so-called Hubble constant crisis.

The second category of explanations for the muon results, heavy and strong, involves particles with masses ranging from about as heavy as the Higgs boson (roughly 125 times the mass of a proton) up to 100 times heavier. These particles could interact with muons with a strength comparable to the electromagnetic and weak interactions. Such heavy particles might be cousins of the Higgs boson, or exotic matter particles, or they might be carriers of a new force of nature that works over a short range. Supersymmetry offers some models of this type, so my youthful speculations at CERN are still in the running. Another possibility is a new type of particle called a leptoquark, a strange kind of boson that shares properties with quarks as well as with leptons such as the muon. Depending on how heavy the new particles are and the strength of their interactions with Standard Model particles, they might be detectable in upcoming runs of the LHC.

Some recent LHC data already point toward unusual behavior involving muons. Recently, for instance, LHCb (one of the experiments at the LHC) measured the decays of certain unstable composite particles similar to pions that produce either muons or electrons. If muons are just heavier cousins of the electron, as the Standard Model claims, then we can precisely predict what fraction of these decays should produce muons versus electrons. But LHCb data show a persistent three-sigma discrepancy from this prediction, perhaps indicating that muons are more different from electrons than the Standard Model allows. It is reasonable to wonder whether the results from LHCb and Muon g-2 are different, flickering frames of the same story.

The Muon g-2 experiment may be telling us something new, with implications far beyond the muons themselves. Theorists can engineer scenarios in which new particles and forces both explain the muons' funny wobbling and solve other outstanding mysteries, such as the nature of dark matter or, even more daring, why matter dominates over antimatter. The Fermilab experiment has given us a first glimpse of what is going on, but I expect it will take many more experiments, both ongoing and yet to be conceived, before we can confidently finish the story. If supersymmetry is part of the answer, we have a fair chance of observing some of the superparticles at the LHC. We hope to see evidence of dark matter particles there or in deep underground labs seeking them. We can also look at the behavior of muons in different kinds of experiments, such as LHCb.

All of these experiments will keep running. Muon g-2 should eventually produce results with nearly 20 times more data. I suspect, however, that the final measured value of the g-2 factor will not significantly change. There is still a shadow of doubt on the theory side that will be clarified in the next few years, as lattice computations using the world's most powerful supercomputers achieve higher precision and as independent teams converge on a final verdict for the Standard Model prediction of the g-2 factor. If a big mismatch between the prediction and the measurement persists, it will shake the foundations of physics.

Muons have always been full of surprises. Their very existence prompted physicist I. I. Rabi to complain, "Who ordered that?" when they were first discovered in 1936. Nearly a century later they are still amazing us. Now it seems muons may be the messengers of a new order in the cosmos and, for me personally, a dream come true.

Matter that is both solid and liquid helps classical physics advance – Innovation Origins

Some inventions do not have a major impact on our daily lives until much later, like the discovery that you could store information on a disc as pits and bumps and read it back with a laser: that is how the CD was born. Last month, Austrian scientists managed to make quantum matter that can be both a liquid and a solid. The practical application is still some time away, but the result could have a major impact on the development of new materials.

The Innsbruck research team managed to form a crystal and a superfluid at the same time. Superfluids are liquids that flow without any resistance. The experiment was based on magnetic atoms and an ultracold quantum gas called a Bose-Einstein condensate. This is what is created when a gas is cooled to just above absolute zero (minus 273 degrees Celsius).

In everyday life, we can only observe three states of aggregation: gaseous, liquid and solid. Substances change their state of aggregation, for example, by changing temperature. Usually substances are solid at low temperatures and gaseous at high temperatures. But if you take a highly diluted gas and cool it down in an extreme way, it becomes neither liquid nor solid, but remains gaseous.

Despite this, the particles do lose more and more energy. Below a certain critical temperature, the quantum properties of these particles become so dominant that what is known as a Bose-Einstein condensate is formed. In this condensate, the individual atoms are completely delocalized. This means that the same atom is present at any point in the condensate at any given time. Consequently, Bose-Einstein condensates are also superfluids.
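
For an ideal gas of bosons, this critical temperature has a closed form. For atoms of mass m at number density n (a textbook result, quoted here for scale),

$$ T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}, \qquad \zeta(3/2) \approx 2.612 $$

which for the dilute atomic gases used in these experiments works out to temperatures in the nanokelvin range, far colder than anything that occurs naturally.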

Francesca Ferlaino's team used the Bose-Einstein condensate two years ago to create one-dimensional supersolids. The researchers got magnetic atoms to organize themselves into droplets in the ultracold quantum gas and rearrange themselves as crystals. However, all particles still remained delocalized across all of the droplets, so the gas remained superfluid. The combination of a crystal structure with simultaneous superfluidity is called suprasolid or supersolid. Now the scientists have succeeded in extending this phenomenon to two dimensions: they have managed to create systems with two or more rows of droplets.

This breakthrough significantly broadens the perspectives for research. In a two-dimensional suprasolid system, for example, it is possible to study how vortices form in the gap between several adjacent droplets. These vortices have been defined in theory but had not yet been demonstrated in practice. Yet they are an important consequence of superfluidity.

So far, vortices have only been observed in uniform superfluids and in quantized forms. "A quantized vortex is basically a hole in the system, and then the superfluid circulates around this hole with a certain amount of rotation," explains Matthew Norcia of the research team. But in supersolids, the vortices should not be quantized in this way. And they should be found in the low-density regions, that is, between droplets, not within a droplet where the atomic density is high.

When researchers talk about quantized vortices in superfluid systems, they are talking specifically about the angular momentum per particle. This is a unique property of the superfluid that stems from a quantum mechanical treatment of the system. Norcia: "We assume that these quantum conditions are relaxed in supersolids, and in such a way that the angular momentum per particle associated with a vortex can vary, depending on how the density of the state is modulated. So, if we look at the angular momentum of these quantized vortices, we may have a measure of just how superfluid different supersolids are."

However, observing the phenomena of supersolids in quantum gases promises even more insights for research. This is because some important properties of supersolids can only be studied in two dimensions. For example, the rotational properties of a superfluid can differ drastically from those of a normal fluid or another system. Similarly, quantities such as viscosity, for which superfluids are unique, only make sense in systems with more than one dimension.

These findings also help researchers explore the effects of symmetries. Norcia: "When crystalline structures and superfluidity occur simultaneously in supersolids, it relates to the combination of translational and phase symmetries that are each broken in a supersolid. A comprehensive understanding of symmetries is critical to physics in general and to materials systems in particular. In this sense, studying the effects of these symmetries can help us better understand other physics systems, both in the laboratory and in terms of practical applications."

Back in 2017, several research groups undertook similar experiments with lasers and quantum gases made up of sodium or rubidium atoms. The atoms were coupled to periodic structures excited by laser light; that is, the crystalline structure of the atomic state was determined by the laser light. The result was that the supersolid that was produced was extremely rigid, because laser light does not support the oscillations of the crystalline structure of solids. By contrast, in the case of the magnetic atoms that the Austrian scientists used, it is the direct magnetic interaction between the atoms that causes the density to modulate. This allows the supersolid to compress and vibrate. It is also this interaction, in combination with the drop potential, that determines the crystalline fraction.

1st ‘atom tornado’ created from swirling vortex of helium atoms – Livescience.com

Physicists have created the first-ever atomic vortex beam: a swirling tornado of atoms and molecules with mysterious properties that have yet to be understood.

By sending a straight beam of helium atoms through a grating with teeny slits, scientists were able to use the weird rules of quantum mechanics to transform the beam into a whirling vortex.

The extra gusto provided by the beam's rotation, called orbital angular momentum, gives it a new direction to move in, enabling it to act in ways that researchers have yet to predict. For instance, they believe the atoms' rotation could add extra dimensions of magnetism to the beam, alongside other unpredictable effects, due to the electrons and the nuclei inside the spiraling vortex atoms spinning at different speeds.

"One possibility is that this could also change the magnetic moment of the atom," or the intrinsic magnetism of a particle that makes it act like a tiny bar magnet, study co-author Yair Segev, a physicist at the University of California, Berkeley, told Live Science.

In the simplified, classical picture of the atom, negatively-charged electrons orbit a positively-charged atomic nucleus. In this view, Segev said that as the atoms spin as a whole, the electrons inside the vortex would rotate at a faster speed than the nuclei, "creating different opposing [electrical] currents" as they twist. This could, according to the famous law of magnetic induction outlined by Michael Faraday, produce all kinds of new magnetic effects, such as magnetic moments that point through the center of the beam and out of the atoms themselves, alongside more effects that they cannot predict.

The researchers created the beam by sending helium atoms through a grid of tiny slits, each just 600 nanometers across. In the realm of quantum mechanics (the set of rules that governs the world of the very small), atoms can behave both like particles and like tiny waves; as such, the beam of wave-like helium atoms diffracted through the grid, bending so much that it emerged as a vortex that corkscrewed its way through space.
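
The diffraction works because each atom carries a de Broglie wavelength set by its momentum,

$$ \lambda_{\mathrm{dB}} = \frac{h}{m v} $$

where h is Planck's constant, m the atom's mass and v its speed. For a helium atom at a typical beam speed of around 1 kilometer per second this is of order 10^-10 meters, so a 600-nanometer grating deflects the atomic waves through small but resolvable angles (the beam speed here is an assumed, representative value, not a figure from the paper).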

The whirling atoms then arrived at a detector, which showed the multiple beams, diffracted to differing extents and thus carrying different angular momenta, as tiny doughnut-like rings imprinted across it. The scientists also spotted even smaller, brighter doughnut rings wedged inside the central three swirls. These are the telltale signs of helium excimers, molecules formed when one energetically excited helium atom sticks to another helium atom. (Normally, helium is a noble gas and doesn't bind with anything.)

The orbital angular momentum given to atoms inside the spiraling beam also changes the quantum mechanical "selection rules" that determine how the swirling atoms will interact with other particles, Segev said. Next, the researchers will smash their helium beams into photons, electrons and atoms of elements besides helium to see how they might behave.

If their rotating beam does indeed act differently, it could become an ideal candidate for a new type of microscope that can peer into undiscovered details on the subatomic level. The beam could, according to Segev, give us more information about some surfaces through the image that is imprinted on the beam's atoms as they bounce off the material.

"I think that as is often the case in science, it's not a leap of capability that leads to something new, but rather a change in perspective," Segev said.

The researchers published their findings Sept. 3 in the journal Science.

Could fundamental physical constants not be constant across space and time? – Big Think

Whenever we examine the universe in a scientific manner, there are a few assumptions that we take for granted as we go about our investigations. We assume that the measurements that register on our devices correspond to physical properties of the system that we are observing. We assume that the fundamental properties, laws, and constants associated with the material universe do not spontaneously change from moment to moment. And we also assume, for many compelling reasons, that although the environment may vary from location to location, the rules that govern the universe always remain the same.

But every assumption, no matter how well-grounded it may be or how justified we believe we are in making it, has to be subject to challenge and scrutiny. Assuming that atoms behave the same everywhere at all times and in all places is reasonable, but unless the universe supports that assumption with convincing, high-precision evidence, we are compelled to question any and all assumptions. If the fundamental constants are identical at all times and places, the universe should show us that atoms behave the same everywhere we look. But do they? Depending on how you ask the question, you might not like the answer. Here is the story behind the fine-structure constant, and why it might not be constant, after all.

A number of fundamental constants, as reported by the Particle Data Group in 1986. Although many advances have occurred in the intervening 35 years, the values of these constants have changed very little, with the largest difference being a slight but significant increase in the precision to which they are known. Credit: Particle Data Group / LBL / DOE / NSF

When most people hear the idea of a fundamental constant, they think about the constants of nature that are inherent to our reality. Things like the speed of light, the gravitational constant, or Planck's constant (the fundamental constant of the quantum universe) are often the first things we think of, along with the masses of the various indivisible particles in the universe. In physics, however, these are what we call "dimensionful" constants, which means that they rely on our definitions of quantities like mass, length, or time.

An alternative way to conceive of these constants is to make them dimensionless instead: so that arbitrary definitions like kilogram, meter, or second make no difference to the constant. In this conception, each quantum interaction has a coupling strength associated with it, and the coupling of the electromagnetic interaction is known as the fine-structure constant, denoted by the symbol alpha (α). Fascinatingly enough, its effects were detected before quantum physics was even remotely understood, and remained wholly unexplained for nearly 30 years.

The Michelson interferometer (top) showed a negligible shift in light patterns (bottom, solid) as compared with what was expected if Galilean relativity were true (bottom, dotted). The speed of light was the same no matter which direction the interferometer was oriented. Credit: Albert A. Michelson (1881); A.A. Michelson and E. Morley (1887)

In 1887, arguably the greatest null result in the history of physics was obtained, via the Michelson-Morley experiment. The experiment was brilliant in conception, seeking to measure the speed of Earth through the "rest frame" of the universe by splitting a beam of light in two, sending the two halves down perpendicular arms, recombining them, and looking for shifts in the resulting interference pattern as the apparatus was rotated.

Michelson originally performed a version of this experiment by himself back in 1881, detecting no effect but recognizing the need to improve the experiment's precision.

Six years later, the Michelson-Morley experiment represented an improvement by more than a factor of ten, making it the most precise electromagnetic measuring device of its time. While, again, no shift was detected, demonstrating no need for the hypothesized aether, the apparatus they developed was also spectacular for measuring the spectrum of light emitted by various atoms. Puzzlingly, where a single emission line was expected to occur at a specific wavelength, sometimes there was just a single line, but at other times there was a series of narrowly spaced emission lines, providing empirical evidence (but without a theoretical motivation) for a finer-than-expected structure to atoms.

In the Bohr model of the hydrogen atom, only the orbiting angular momentum of the point-like electron contributes to the energy levels. Adding in relativistic effects and spin effects not only causes a shift in these energy levels, but causes degenerate levels to split into multiple states, revealing the fine structure of the atom. Credit: Régis Lachaume and Pieter Kuiper / Public domain

What is actually happening became clearer with the development of modern quantum mechanics. Electrons orbit around the atomic nucleus only in fixed, quantized energy levels, and it is known that they can occupy different orbitals, which correspond to different values of orbital angular momentum; accounting for the details of these levels requires both relativity and quantum physics. First derived by Arnold Sommerfeld in 1916, it was recognized that these narrowly spaced lines were an example of splitting due to the fine structure of atoms, with hyperfine structure from electron/nucleon interactions discovered shortly thereafter.

Today, we understand the fine-structure constant in the context of quantum field theory, where it is the probability of an interacting particle having what we call a radiative correction: emitting or absorbing an electromagnetic quantum (that is, a photon) during an interaction. We typically measure the fine-structure constant, α, at today's negligibly low energies, where it has a value that is equal to 1/137.0359991, with an uncertainty of ~1 in the final digit. It is defined as a dimensionless combination of dimensionful physical constants: the elementary charge squared divided by Planck's constant and the speed of light, and the value we measure today is consistent across all sufficiently precise experiments.
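
Written out explicitly in SI units, that combination is

$$ \alpha = \frac{e^2}{4\pi\varepsilon_0\,\hbar c} \approx \frac{1}{137.036} $$

where e is the elementary charge, ε0 the vacuum permittivity, ħ the reduced Planck constant and c the speed of light (in Gaussian units the 4πε0 factor is absorbed, leaving the e²/ħc form described above). Every unit cancels, which is what makes α meaningful to compare across cosmic times and places.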

In quantum electrodynamics, higher-order loop diagrams contribute progressively smaller and smaller effects. However, as the energy increases, these higher-order processes become more efficient, and thus the value of the fine-structure constant increases with energy. Credit: American Physical Society, 2012

At high energies in particle physics experiments, however, we notice that the value of α gets stronger at higher energies. As the energy of the interacting particle(s) increases, so does the strength of the electromagnetic interaction. When the universe was very, very hot (such as at energies achieved just ~1 nanosecond after the Big Bang), the value of α was more like 1/128, as particles like the Z-boson, which can only exist virtually at today's low energies, can more easily be physically "real" at higher energies. The interaction strength is expected to scale with energy, an instance where our theoretical predictions and our experimental measurements match up remarkably well.
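
At leading order, keeping only electron loops, this running has a simple closed form:

$$ \alpha_{\mathrm{eff}}(q^2) \simeq \frac{\alpha}{1 - \dfrac{\alpha}{3\pi}\ln\!\left(\dfrac{q^2}{m_e^2 c^4}\right)} $$

where q is the momentum transfer and m_e the electron mass; including loops of all the charged Standard Model particles (muons, quarks, W bosons and so on) is what pushes the effective value near the Z-boson mass up to roughly 1/128.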

However, there is an entirely different way to measure the fine-structure constant at today's low energies: by measuring spectral lines, or emission and absorption features, from distant light sources throughout the cosmos. As background light from a source strikes the intervening matter, some portion of that light is absorbed at specific wavelengths. The exact wavelengths that are observed depend on a number of factors, such as the redshift of the source but also on the value of the fine-structure constant.

The light from ultra-distant quasars provides cosmic laboratories for measuring the gas clouds it encounters along the way, with the exact properties of those absorption lines revealing the fine-structure constant's value. Credit: Ed Janssen / ESO

If there are any variations in α, either over time or directionally in space, a careful examination of spectral features from a wide variety of astrophysical sources, particularly if they span many billions of years in time (or billions of light-years in distance), could reveal those variations. The most straightforward way to look for these variations is through quasar absorption spectroscopy: the light from quasars, the brightest individual sources in the universe, encounters every intervening cloud of matter that exists between the emitter (the quasar itself) and the observer (us, here on Earth).

There are very intricate, precise energy levels that exist for both normal hydrogen (with an electron bound to a proton) and its heavy isotope deuterium (with an electron bound to a deuteron, which contains both a proton and a neutron), and these energy levels are just slightly different from one another. If you can measure the spectra of these different quasars and look for these precise, very-slightly-different fine and hyperfine transitions, you would be able to measure α at the location of the quasar.
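
The oldest version of this technique, the alkali-doublet method, exploits the fact that the fractional fine-structure splitting of a doublet scales as the square of the fine-structure constant, so that

$$ \frac{(\Delta\lambda/\lambda)_z}{(\Delta\lambda/\lambda)_0} \approx \left(\frac{\alpha_z}{\alpha_0}\right)^2 $$

comparing the splitting in an absorber at redshift z with the laboratory value directly constrains any change in α. (The surveys described below actually use a more sensitive generalization, the many-multiplet method, but the logic is the same.)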

Narrow-line absorption spectra allow us to test whether constants vary by looking at variations in line placements. Large numbers of systems investigated for fine and hyperfine splitting can reveal if there's an overall varying effect. Credit: M. T. Murphy, J. K. Webb, V. V. Flambaum, and S. J. Curran

If the laws of physics were the same everywhere throughout the universe, then, based on the observed properties of these lines, their precise wavelengths and the spacings of the fine and hyperfine splittings between them, you would expect to be able to infer the same value of α everywhere. The only difference you would anticipate would be redshift-dependent, where all the wavelengths for a specific absorber would be systematically shifted by the same redshift-dependent factor.

Yet, that is not what we see. Everywhere we look in the universe, at every quasar and every example of fine or hyperfine structure in the intervening, absorptive gas clouds, we see that there are tiny, minuscule, but non-negligible shifts in those transition ratios. At the level of a few parts-per-million, the value of the fine-structure constant, α, appears to observationally vary. What is remarkable is that this variation was not expected or anticipated but has robustly shown up, over and over again, in quasar absorption studies going all the way back to 1999.

Spatial variations in the fine-structure constant are inferred from quasar absorption data. Unfortunately, these individual variations between systems are significantly larger than any overall variation seen in space or time, casting severe doubt on those conclusions. Credit: J.K. Webb et al., Phys. Rev. Lett. 107, 191101 (2011)

Beginning in 1999, a team of astronomers led by Australian astrophysicist John K. Webb started seeing evidence, from different astronomical measurements, that α varied. Using the Keck telescopes and over 100 quasars, they found that α was smaller in the past and had risen by approximately 6 parts-per-million over the past ~10 billion years. Other groups were unable to verify this, however, with complementary observations from the Very Large Telescope showing the exact opposite effect: that the fine-structure constant, α, was larger in the past, and has been slowly decreasing ever since.

Subsequently, Webb's team obtained more data with greater numbers of quasars, spanning larger fractions of the sky and cutting across cosmic time. A simple time-variation was no longer consistent with the data, as variations were inconsistent from place-to-place and did not scale directly with either redshift or direction. Overall, there were some places where appeared larger than average and others where it appeared smaller, but there was no overall pattern. Even with the latest 2021 data, the few-parts-in-a-million variations that are seen are inconclusive.

Variations in the fine-structure constant across a wide variety of quasar systems, sorted by redshift. This latest work leverages four separate systems at high redshift, but sees no net evidence for a time-variation in the constant itself. Credit: M.R. Wilczynska et al., Sci Adv. 2020 Apr; 6(17): eaay9672

It is often said that "extraordinary claims require extraordinary evidence," but the uncertainties associated with each of these measurements were at least as large as the suspected signal itself: a few parts-per-million. In 2018, however, a remarkable study (even though it was only of one system) had the right confluence of properties to be able to measure α, at a distance of 3.3 billion light-years away, to a precision of just ~1 part-per-million.

Instead of looking at hydrogen and deuterium, isotopes of the same element with the same nuclear charges but different nuclear masses, researchers using the Arecibo telescope, in one of its last major discoveries, found two absorption lines of a hydroxyl (OH-) ion, at 1720 and 1612 megahertz in frequency, around a rare and peculiar blazar. These absorption lines have different dependencies on the fine-structure constant, α, as well as the proton-to-electron mass ratio, and yet these measurements combine to show a null result: consistent with no variation over the past ~3 billion years. These are, to date, the most stringent constraints on tiny changes in the fine-structure constant's value from astronomy, consistent with no effect at all.

The Arecibo radio telescope as viewed from above. The 1,000-foot (305 m) diameter dish was the largest single-dish telescope from 1963 until 2016, and leaves behind a legacy of tremendous scientific discovery. Credit: H. Schweiker/WIYN and NOAO/AURA/NSF

The observational techniques that have been pioneered in quasar absorption spectroscopy have allowed us to measure these atomic profiles to unprecedented precision, creating a puzzle that remains unsolved to this day: why do quasars appear to show small but significant differences in the inferred value of the fine-structure constant between them? We know there has been no significant variation over the past ~3 billion years, from not only astronomy but from the Oklo natural nuclear reactor as well. In addition, the value is not changing today to 17 decimal places, as constrained by atomic clocks.

It remains possible that the fundamental constants did actually vary a long time ago, or that they varied differently in different locations in space. To untangle whether that is the case or not, however, we first have to understand what is causing the observed variations in quasar absorption lines, and that remains an unsolved puzzle that could just as easily be due to an unidentified error as it is to a physical cause. Until there is a confluence of evidence, where many disparate observations all come together to point to the same consistent conclusion, the default assumption must remain that the fundamental constants really are constant.

Quantum Gas Experiment Creates the Coldest Temperature Ever – Interesting Engineering

Physicists at the University of Bremen, Germany, produced the coldest temperature ever recorded: an incredibly precisely measured 38 trillionths of a degree above absolute zero. They did so as part of an experiment involving dropping a quantum gas and slowing its motion with magnets, a report from New Atlas explains.

Absolute zero is measured as -459.67 F (-273.15 C), and it is the coldest possible temperature on the thermodynamic scale. For an object to reach that temperature, there would have to be zero atomic motion or kinetic energy in its atoms, meaning it is impossible for scientists to ever truly reach absolute zero. However, experiments such as those conducted aboard the International Space Station's Cold Atom Lab have been as cold as 100 nanokelvin, or 100 billionths of a degree above absolute zero.

The team from the University of Bremen have smashed previous records, however, by recording a temperature of 38 picokelvin, or 38 trillionths of a degree above absolute zero, during their experiments. In a press release, the team explained that "while researching the wave properties of atoms, one of the 'coldest places in the universe' [was] created for a few seconds at the Center for Applied Space Technology and Microgravity (ZARM) at the University of Bremen."

For their experiments, the team trapped a gas cloud composed of 100,000 rubidium atoms in a magnetic field in a vacuum chamber. This was then cooled down to turn it into a quantum gas called a Bose-Einstein condensate (BEC). As quantum gases act uniformly, as if they were one big atom, scientists use them in experiments to observe unusual quantum effects on the macro scale, with a view to expanding their knowledge of quantum mechanics.

In order to reach the required temperature, the researchers dropped the BEC at the Bremen Drop Tower research facility. While they dropped the gas 393.7 feet (120 meters) down the tower, they also switched the magnetic field containing the gas on and off several times. When the magnetic field is turned off, the gas starts to expand, and when it is turned back on, it contracts. The switching slows the expansion of the gas to an almost complete standstill, greatly lowering its temperature due to the reduced molecular speed.
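
A quick back-of-the-envelope calculation shows what a temperature of 38 picokelvin means for atomic motion. The sketch below computes the rms thermal speed of a rubidium atom from the equipartition relation (the rubidium-87 mass is a standard value; the comparison temperatures are illustrative):

```python
# rms thermal speed of a rubidium-87 atom: v = sqrt(3 * k_B * T / m)
import numpy as np

K_B = 1.380649e-23             # Boltzmann constant, J/K
M_RB87 = 86.909 * 1.66054e-27  # mass of a rubidium-87 atom, kg

def v_rms(temperature_k):
    return np.sqrt(3 * K_B * temperature_k / M_RB87)

print(f"room temperature (300 K): {v_rms(300):.0f} m/s")            # ~290 m/s
print(f"38 picokelvin:            {v_rms(38e-12) * 1e3:.2f} mm/s")  # ~0.10 mm/s
```

At 38 picokelvin the atoms creep along at about a tenth of a millimeter per second, millions of times slower than at room temperature, which is why the cloud's expansion appears almost frozen.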

The researchers were only able to sustain the record-breaking temperature for 2 seconds, though they carried out simulations suggesting it could be maintained for approximately 17 seconds in a weightless environment such as the International Space Station. In space, scientists can confine atoms using much weaker forces, as they don't have to be supported against the effects of gravity. This means that further investigation may eventually take place in the ISS's Cold Atom Lab (CAL), where astronomers last year reported the creation of a "fifth state of matter" during BEC experiments. The CAL was transported to space by a SpaceX rocket in 2018 and it has since been used to observe quantum phenomena that would be undetectable on Earth.

Controlling the Phase Transition in Superfluid Helium-3 – Physics

September 8, 2021 • Physics 14, 122

Researchers demonstrate that they can suppress the formation of defects that appear in superfluid helium-3 when it undergoes a continuous phase transition, allowing them to influence the form of the system's final phase.

When a system that can be described by the 2D Ising model cools, it transitions from having a paramagnetic phase to having a ferromagnetic one via a continuous phase transition. During such a phase transition, magnetic defects can form in the material, creating a nonuniform final ferromagnetic phase. Juho Rysti of Aalto University, Finland, and colleagues now show that they can suppress the formation of these defects in superfluid helium-3, when it undergoes a 3D continuous phase transition, by applying a symmetry-breaking bias field to the material [1]. This technique could also be applied to materials undergoing quantum phase transitions, where the appearance of defects can demolish quantum states prepared by adiabatic evolution.

The high-temperature paramagnetic and low-temperature ferromagnetic phases of the 2D Ising model differ by their symmetry: the paramagnetic phase is symmetric (it looks the same if the pointing directions of all its spins are simultaneously reversed), while the two ferromagnetic phases of the model are symmetry broken. As a 2D Ising system cools from its paramagnetic phase to a ferromagnetic one, it has to choose which of the two ferromagnetic phases it will transition to, and the evolution of the system slows down near the critical point as the system tries to make this choice.

This critical slowing down causes different parts of the system to move out of thermal equilibrium with each other, which allows them to make independent choices of their magnetization. If the different parts can communicate with each other, the choices can be coordinated, which is more likely for slower cooling rates. Slower cooling rates thus lead to larger domains of one or the other ferromagnetic phase, with the size of the domains being quantifiable using the Kibble-Zurek-mechanism theory [2-4]. That said, after the phase transition occurs, the final ferromagnetic phase of the system is almost never uniform but is rather a mosaic of domains of the two ferromagnetic phases (Fig. 1).
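
In formula form, the Kibble-Zurek prediction can be stated schematically as follows; this is the generic scaling form, not a result specific to this experiment, and the exponents depend on the universality class of the transition:

```latex
\hat{\xi} \sim \xi_0 \left( \frac{\tau_Q}{\tau_0} \right)^{\nu/(1 + \nu z)}
```

Here ξ̂ is the typical size of the frozen-out domains, τ_Q is the quench (cooling) time, ν and z are the equilibrium and dynamical critical exponents, and ξ_0 and τ_0 are microscopic length and time scales. Slower cooling (larger τ_Q) yields larger domains, and the defect density falls as a corresponding power of τ_Q, for instance roughly n ~ ξ̂⁻² for line defects such as vortices in three dimensions.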

The outcome of the phase transition can be made more uniform by applying a magnetic field to the system. For example, if this field points upward as the system cools, the decision will be biased toward the ferromagnetic phase that has spins pointing up. The bias is ineffective for very fast cooling rates because there is not enough time for the field to leave its imprint on the phase of the system. So how slow should the cooling rate be for the bias to be effective in ensuring a uniform ferromagnetic phase? The answer comes again from a generalization of the Kibble-Zurek-mechanism theory, which predicts that the maximal cooling rate scales with the bias strength [5]. The new experiment from Rysti and colleagues shows that when the cooling rate is slow enough, the final phase of the system is an equilibrium ferromagnetic one without any domains, the first time such a phase has been seen experimentally.

Rysti and his colleagues study a continuous symmetry-breaking phase transition of superfluid helium-3 [1]. Superfluid helium-3 has more complex magnetic behavior than that of the 2D Ising model: its spins can point in a continuum of directions rather than just up and down, and they can wind into quantized vortices. The nonequilibrium final phase of superfluid helium-3 is a tangle of such vortices, whose density scales as a power of the cooling rate.

In their experiments, the team investigated this scaling behavior by cooling the superfluid using a 3D cryostat and then detecting the orientation of its spins using nuclear magnetic resonance (NMR) coils. In the space between the NMR coils, where the superfluid helium-3 is held, they placed an array of long, thin columns (which they call "solid strands") that trap the superfluid's vortices.

The experiment shows that when a bias is applied to the system (the team use both a magnetic field and spin-orbit coupling as the bias), the power law relating the density of vortices to the cooling rate can break down. Specifically, Rysti and colleagues find that this breakdown happens when the cooling rate falls below a threshold value that is proportional to a power of the bias, with the exponent of that power law being a combination of the universal critical exponents for the transition. Cooling at rates below this threshold, they find that the density of vortices decays exponentially with cooling time, such that the final phase becomes a uniform, equilibrium one.
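
As a toy illustration of this reported behavior, and not the authors' actual model, one can sketch a vortex density that follows a power law in the cooling rate above a bias-dependent threshold and decays exponentially below it. Every exponent and prefactor here is hypothetical, chosen only to show the shape of the crossover:

```python
import math

def vortex_density(cooling_rate: float, bias: float,
                   kz_exp: float = 1.3, bias_exp: float = 2.0) -> float:
    """Toy model: Kibble-Zurek power law above a bias-set threshold rate,
    exponential suppression (toward a uniform phase) below it."""
    threshold = bias ** bias_exp              # threshold rate ~ bias^theta
    if cooling_rate >= threshold:
        return cooling_rate ** kz_exp         # power-law regime
    # Below threshold: exponential decay, continuous at the boundary.
    return threshold ** kz_exp * math.exp(1.0 - threshold / cooling_rate)

for rate in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"cooling rate {rate:.0e}: n ~ {vortex_density(rate, bias=0.03):.2e}")
```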

The team found that the 1-mT bias they apply is effective only near the phase transition's critical temperature, where the system is most susceptible to small perturbations and even the tiniest of biases can influence the orientation of the spins. They also found that the transition is adiabatic; as such, they show that cooling with a bias is an efficient way to achieve an adiabatic transition at a finite cooling rate, something that could allow the method to be used for adiabatic quantum state preparation, for example in an adiabatic quantum simulator.

The idea of such a simulator is to evolve a system adiabatically from a simple ground state to a more interesting one that cannot be calculated analytically or with a classical computer. If such a state is successfully prepared in a quantum simulator, its properties can simply be measured. Unfortunately, these two ground states are often different enough that moving the system from one to the other requires it to pass through a quantum phase transition. That means any adiabatic simulator must be able to evolve a system that is close to its quantum critical point.

This evolution can be described by a quantum generalization of the Kibble-Zurek-mechanism theory, which predicts that, because of a closing of the energy gap of the system at the quantum critical point, excitation of the system is inevitable [6, 7]. It is predicted, however, that in symmetry-breaking transitions these excitations can be suppressed by applying a bias while the system is crossing the quantum critical point [5]. The bias is too weak to affect the properties of the final ground state but is large enough to prevent excitations that would destroy the ground state. The new demonstration by Rysti and colleagues shows that this should be experimentally possible, opening the door to many future experiments on this topic.

Jacek Dziarmaga is a professor at the Jagiellonian University, Poland, where he also obtained his Ph.D. Dziarmaga studies the dynamics of quantum phase transitions. He also develops tensor network algorithms to simulate time evolution of strongly correlated systems in two dimensions.


See original here:

Controlling the Phase Transition in Superfluid Helium-3 - Physics

Read More..

The Godmother of the Digital Image – The New York Times

The puzzle that Daubechies solved was how to take a recent wavelet advance (a thing of beauty, by the French mathematicians Yves Meyer and Stéphane Mallat, but technically impractical) and make it amenable to application. "To put it on its head," Daubechies would say, "but without making it ugly." As she said in the Guggenheim statement: "It is something that mathematicians often take for granted, that a mathematical framework can be really elegant and beautiful, but that in order to use it in a true application, you have to mutilate it. Well, they shrug, that's life; applied mathematics is always a bit dirty. I didn't agree with this point of view."

By February 1987, she had constructed the foundation for what grew into a family of Daubechies wavelets, each suited to a slightly different task. One key factor made her breakthrough possible: for the first time in her career, she had a computer terminal at her desk, so she could easily program her equations and graph the results. By that summer, Daubechies had written up a paper and, sidestepping a hiring freeze, secured a job at AT&T Bell Labs. She started in July and moved into a house recently bought with Calderbank, whom she married after popping the question the previous fall. (Calderbank had made it known there was a standing offer, but he resisted proposing out of respect for Daubechies' declared opposition to the institution of marriage.)

The ceremony was in May in Brussels. Daubechies cooked the entire wedding dinner (with some help from her fiancé), a Belgian-British feast of chicken with endive and Lancashire hotpot stew, chocolate cake and trifle (among other offerings) for 90 guests. She had figured that 10 days of cooking and baking would be manageable, only later to realize that she had neither enough pots and pans for the preparation nor refrigerator space for storage, not to mention other logistical challenges. Her algorithmic solution went as follows: have friends lend her the necessary vessels; fill said vessels and pass them back for safekeeping in their fridges and for transport to the wedding. She encouraged the more gourmand guests to bring hors d'oeuvres instead of presents. Her mother, putting her foot down, bought an army of salt-and-pepper shakers.

Daubechies continued her wavelets research at AT&T Bell Labs, pausing in 1988 to have a baby. It was an unsettling and disorienting period, because she lost her ability to do research-level mathematics for several months postpartum. "Mathematical ideas wouldn't come," she says. That frightened her. She told no one, not even her husband, until gradually her creative motivation returned. On occasion, she has since warned younger female mathematicians about the baby-brain effect, and they have been grateful for the tip. "I could not imagine that I would ever have trouble thinking," Lillian Pierce, a colleague at Duke, says. But when it happened, Pierce reminded herself: "OK, this is what Ingrid was talking about. It will pass." Daubechies' female students also mention their gratitude for her willingness to push for child care at conferences, and sometimes even to take on babysitting duties herself. "My adviser volunteered to entertain my toddler while I gave a talk," a former Ph.D. student, the Yale mathematician Anna Gilbert, recalls. "She seamlessly included all aspects of work and life."

In 1993, Daubechies was appointed to the faculty at Princeton, the first woman to become a full professor in the mathematics department. She was lured by the prospect of mingling with historians and sociologists and their ilk, not only electrical engineers and mathematicians. She designed a course called "Math Alive" aimed at nonmath and nonscience majors and gave talks for the general public on "Surfing With Wavelets: A New Approach to Analyzing Sound and Images." Wavelets were taking off in the real world, deployed by the F.B.I. in digitizing its fingerprint database. A wavelet-inspired algorithm was used in the animation of films like "A Bug's Life."

The Daubechies wavelets are "smooth, well balanced, not too spread out and easy to implement on a computer," says Terence Tao, a mathematician at the University of California, Los Angeles. He was a Princeton grad student in the 1990s and took courses from Daubechies. (He won the Fields Medal in 2006.) Daubechies wavelets, he says, can be used "out of the box" for a wide variety of signal-processing problems. In the classroom, Tao recalls, Daubechies had a knack for viewing pure math (pursued for curiosity's sake), applied math (for practical purposes) and physical experience as a unified whole. "I remember, for instance, once when she described learning about how the inner ear worked and realizing that it was more or less the same thing as a wavelet transform, which I think led to her proposing the use of wavelets in speech recognition." The Daubechies wavelet propelled the field into the digital age. In part, wavelets proved revolutionary because they are so mathematically deep. But mostly, as Calderbank notes, it was because Daubechies, a tireless community-builder, made it her mission to construct a network of bridges to other fields.
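
That plug-and-play quality is easy to see with the open-source PyWavelets library, which implements the Daubechies family under the names 'db1', 'db2' and so on. The sketch below does a standard decompose-threshold-reconstruct pass, the basic move behind wavelet denoising and compression; the test signal and threshold value are arbitrary choices for illustration:

```python
import numpy as np
import pywt  # PyWavelets; 'db4' is the Daubechies wavelet with 4 vanishing moments

# A noisy test signal.
t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)

# Decompose into wavelet coefficients, zero out the small ones,
# then reconstruct -- the small coefficients mostly carry the noise.
coeffs = pywt.wavedec(signal, 'db4', level=5)
denoised_coeffs = [pywt.threshold(c, value=0.5, mode='hard') for c in coeffs]
denoised = pywt.waverec(denoised_coeffs, 'db4')
```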

In due course, the awards began piling up: the MacArthur in 1992 was followed by the American Mathematical Society's Steele Prize for Exposition in 1994 for her book "Ten Lectures on Wavelets." In 2000, Daubechies became the first woman to receive the National Academy of Sciences award in mathematics. By then she was mothering two young children. (Her daughter, Carolyn, 30, is a data scientist; her son, Michael, 33, is a high school math teacher on Chicago's South Side.) And by all appearances she was handily juggling it all.

Read more from the original source:

The Godmother of the Digital Image - The New York Times

Read More..

Analyst: This Altcoin Has Best Shot at Becoming 2nd Most …

Recently, the popular pseudonymous crypto influencer and trader The Crypto Dog said that investing in one lesser-known altcoin is a more exciting long-term bet than Ethereum ($ETH).

In a series of tweets on June 10, The Crypto Dog said that he considers Bitcoin to be a safer, more boring long-term investment compared to Ethereum. The popular trader thinks it is unlikely that Ethereum will flip Bitcoin, and explained why.

The Crypto Dog also highlighted the lesser-known altcoin Solana (SOL) as having more upside potential than Ethereum and claimed it was the layer-one protocol with the most potential to challenge ETH for the #2 position in the market cap table.

He then explained why he is so bullish on Solana.

The views and opinions expressed by the author, or any people mentioned in this article, are for informational purposes only, and they do not constitute financial, investment, or other advice. Investing in or trading cryptoassets comes with a risk of financial loss.

Image by Pexels from Pixabay

See the original post:
Analyst: This Altcoin Has Best Shot at Becoming 2nd Most ...

Read More..

Bitcoin Vs Altcoins: How Are They Different? – NDTV Profit

Bitcoin, the first cryptocurrency, was launched in 2009

The rapid popularity of cryptocurrency has led to thousands of coins coming into existence. While they offer investors more options, they also make them nervous. Given the relatively new nature of the industry, it is wise to be able to distinguish between them. Broadly, there are two categories: Bitcoin and altcoins. Given its dominant appeal, Bitcoin is the largest cryptocurrency. As blockchain technology, on which cryptocurrency is based, matured, it led to the emergence of a number of new crypto coins such as Ethereum. These new coins were dubbed "altcoins," short for "alternative coins."

These altcoins are based on the same principle as Bitcoin but take things a step further with some additions and unique features.

What Is Bitcoin?

Bitcoin is the first cryptocurrency, proposed in October 2008 and launched in January of the following year. It was invented by a pseudonymous person or group known as Satoshi Nakamoto. Bitcoin is a decentralised peer-to-peer digital currency, and all transactions are entered into an online public ledger that is available to everyone. It does not require any intermediary, such as a bank or other financial institution, to facilitate transactions.
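
A minimal Python sketch of the public-ledger idea shows why the history is hard to tamper with: each block commits to its predecessor through a cryptographic hash. This is illustration only; real Bitcoin blocks also involve proof-of-work mining, Merkle trees of transactions, timestamps and a peer-to-peer network, none of which is modeled here.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's canonical JSON form with SHA-256 (the hash Bitcoin uses).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a tiny chain: every block records the hash of the one before it.
chain = [{"prev_hash": "0" * 64, "transactions": ["genesis"]}]
for txs in (["alice -> bob: 1.0"], ["bob -> carol: 0.5"]):
    chain.append({"prev_hash": block_hash(chain[-1]), "transactions": txs})

# Changing an old transaction changes that block's hash, which no longer
# matches the pointer stored in the next block.
chain[1]["transactions"] = ["alice -> mallory: 1.0"]
print(chain[2]["prev_hash"] == block_hash(chain[1]))  # False: tampering detected
```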

What Is an Altcoin?

Except for Bitcoin, all other crypto coins are generically called altcoins, which basically means "alternative to Bitcoin." There are more than 11,000 crypto coins listed on CoinMarketCap, a market research organisation, and all of these are altcoins.

How Are They Different From Bitcoin?

Altcoins built on the success of Bitcoin by slightly changing the rules to appeal to specific users. For example, Ethereum, the second-largest cryptocurrency by market capitalisation, introduced the idea of smart contracts. These smart contracts are essentially pieces of code that run only when predetermined conditions are met. They execute agreements between two parties using blockchain technology, opening possibilities for the development of new applications for crypto.
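
To make "code that runs only when predetermined conditions are met" concrete, here is a toy escrow written in Python. It is purely illustrative: real smart contracts execute on a blockchain virtual machine (Ethereum contracts are typically written in Solidity), not on any one party's computer, and their state lives on the chain itself.

```python
class ToyEscrow:
    """Toy stand-in for a smart contract: funds release themselves
    once the agreed condition (confirmed delivery) is met."""

    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False

    def confirm_delivery(self, caller: str) -> None:
        if caller != self.buyer:
            raise PermissionError("only the buyer may confirm delivery")
        self.delivered = True

    def release_funds(self) -> str:
        # The agreement enforces itself: no intermediary decides.
        if not self.delivered:
            raise RuntimeError("condition not met: delivery unconfirmed")
        return f"{self.amount} released to {self.seller}"

escrow = ToyEscrow(buyer="alice", seller="bob", amount=2.0)
escrow.confirm_delivery("alice")
print(escrow.release_funds())  # -> "2.0 released to bob"
```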

Altcoins have improved functionality, transaction speeds and scaling to meet rapidly expanding demand. As the market for altcoins continues to expand, many wonder whether the original cryptocurrency's lead will be ended by one of the coins that came after it. Simply put, Bitcoin is the first cryptocurrency and all others are altcoins.


Continued here:
Bitcoin Vs Altcoins: How Are They Different? - NDTV Profit

Read More..