
Quantum Cash and the End of Counterfeiting – IEEE Spectrum


Since the invention of paper money, counterfeiters have churned out fake bills. Some of their handiwork, created with high-tech inks, papers, and printing presses, is so good that it's very difficult to distinguish from the real thing. National banks combat the counterfeiters with difficult-to-copy watermarks, holograms, and other sophisticated measures. But to give money the ultimate protection, some quantum physicists are turning to the weird quirks that govern nature's fundamental particles.

At the moment, the idea of quantum money is very much on the drawing board. That hasn't stopped researchers from pondering what encryption schemes they might apply to it, or from wondering how the technologies used to create quantum states could be shrunk down to the point of fitting in your wallet, says Scott Aaronson, an MIT computer scientist who works on quantum money. "This is science fiction, but it's science fiction that doesn't violate any of the known laws of physics."

The laws that govern subatomic particles differ dramatically from those governing everyday experience. The relevant quantum law here is the no-cloning theorem, which says it is impossible to copy a quantum particle's state exactly. That's because reproducing a particle's state involves making measurements--and the measurements change the particle's overall properties. In certain cases, where you already know something about the state in question, quantum mechanics does allow you to measure one attribute of a particle. But in doing so you've made it impossible to measure the particle's other attributes.

This rule implies that money somehow linked to a quantum particle could, in principle, be impossible to copy: it would be counterfeit-proof.

The visionary physicist Stephen Wiesner came up with the idea of quantum money in 1969. He suggested that banks somehow insert a hundred or so photons, the quantum particles of light, into each banknote. He didn't have any clear idea of how to do that, nor do physicists today, but never mind. It's still an intriguing notion, because the issuing bank could then create a kind of minuscule secret watermark by polarizing the photons in a special way.

To validate the note later, the bank would check just one attribute of each photon (for example, its vertical or horizontal polarization), leaving all other attributes unmeasured. The bank could then verify the note's authenticity by checking its records for how the photons were set originally for this particular bill, which the bank could look up using the bill's printed serial number.

Thanks to the no-cloning theorem, a counterfeiter couldn't measure all the attributes of each photon to produce a copy. Nor could he just measure the one attribute that mattered for each photon, because only the bank would know which attributes those were.
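
Wiesner's scheme can be illustrated with a toy simulation. The sketch below is plain Python with classical bookkeeping standing in for photon measurements; all function names and the two-basis encoding are invented for illustration, not taken from any real protocol implementation. The bank records a secret basis and polarization bit per photon, and a counterfeiter is forced to guess the bases:

```python
import random

def make_note(n_photons=100):
    # The bank secretly picks a basis ('+' rectilinear or 'x' diagonal)
    # and a polarization bit for each photon, filed under the serial number.
    return [(random.choice('+x'), random.randint(0, 1)) for _ in range(n_photons)]

def measure(photon, basis):
    true_basis, bit = photon
    if basis == true_basis:
        return bit               # matching basis: the outcome is certain
    return random.randint(0, 1)  # wrong basis: the outcome is a coin flip

def bank_verifies(note, record):
    # The bank measures each photon in the basis it originally recorded.
    return all(measure(p, basis) == bit for p, (basis, bit) in zip(note, record))

def counterfeit(note):
    # A forger doesn't know the bases, so he guesses one per photon,
    # measures, and re-prepares a photon from his (possibly wrong) outcome.
    copy = []
    for p in note:
        guess = random.choice('+x')
        copy.append((guess, measure(p, guess)))
    return copy

random.seed(0)
record = make_note()
genuine = list(record)   # the real note carries exactly the recorded states
fake = counterfeit(genuine)

print(bank_verifies(genuine, record))  # True: the genuine note always passes
print(bank_verifies(fake, record))     # False, with overwhelming probability
```

Each photon of the forgery independently passes the bank's check with probability 3/4, so a 100-photon copy survives verification with probability (3/4)^100, roughly one in ten trillion: the no-cloning protection in statistical form.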

But beyond the daunting engineering challenge of storing photons, or any other quantum particles, there's another basic problem with this scheme: it's a private encryption. Only the issuing bank could validate the notes. "The ideal is quantum money that anyone can verify," Aaronson says--just the way every store clerk in the United States can hold a $20 bill up to the light to look for the embedded plastic strip.

That would require some form of public encryption, and every such scheme researchers have created so far is potentially crackable. But it's still worth exploring how that might work. Verification between two people would involve some kind of black box--a machine that checks the status of a piece of quantum money and spits out only the answer "valid" or "invalid." Most of the proposed public-verification schemes are built on some sort of mathematical relationship between a banknote's quantum states and its serial number, so the verification machine would use an algorithm to check the math. This verifier, and the algorithm it follows, must be designed so that even if they were to fall into the hands of a counterfeiter, he couldn't use them to create fakes.

As fast as quantum money researchers have proposed encryption schemes, their colleagues have cracked them, but it's clear that everyone's having a great deal of fun. Most recently, Aaronson and his MIT collaborator Paul Christiano put forth a proposal [PDF] in which each banknote's serial number is linked to a large number of quantum particles, which are bound together using a quantum trick known as entanglement.

All of this is pie in the sky, of course, until engineers can create physical systems capable of retaining quantum states within money--and that will perhaps be the biggest challenge of all. Running a quantum economy would require people to hold information encoded in the polarization of photons or the spin of electrons, say, for as long as they required cash to sit in their pockets. But quantum states are notoriously fragile: they decohere and lose their quantum properties after frustratingly short intervals of time. "You'd have to prevent it from decohering in your wallet," Aaronson says.

For many researchers, that makes quantum money even more remote than useful quantum computers. "At present, it's hard to imagine having practical quantum money before having a large-scale quantum computer," says Michele Mosca of the Institute for Quantum Computing at the University of Waterloo, in Canada. And these superfast computers must also overcome the decoherence problem before they become feasible.

If engineers ever do succeed in building practical quantum computers--ones that can send information through fiber-optic networks in the form of encoded photons--quantum money might really have its day. On this quantum Internet, financial transactions would not only be secure, they would be so ephemeral that once the photons had been measured, there would be no trace of their existence. In today's age of digital cash, we have already relieved ourselves of the age-old burden of carrying around heavy metal coins or even wads of banknotes. With quantum money, our pockets and purses might finally be truly empty.

Michael Brooks, a British science journalist, holds a Ph.D. in quantum physics from the University of Sussex, which prepared him well to tackle the article "Quantum Cash and the End of Counterfeiting." He says he found the topic of quantum money "absolutely fascinating," and adds, "I just hope I get to use some in my lifetime." He is the author, most recently, of Free Radicals: The Secret Anarchy of Science (Profile Books, 2011).


Charm Meson Particle | What Is Antimatter? – Popular Mechanics

A quirky type of subatomic particle known as the charm meson has the seemingly magical ability to switch states between matter and antimatter (and back again), according to the team of over 1,000 physicists who were involved in documenting the phenomenon for the first time.

Oxford researchers, using data from the second run of the Large Hadron Collider (LHC)--a particle accelerator at the Switzerland-based European Organization for Nuclear Research (known internationally as CERN)--made the determination by taking extremely precise measurements of the masses of two particles: the charm meson in both its particle and antiparticle states.

Yes, this breakthrough in quantum physics is as heady as it sounds. A charm meson particle, after all, can exist in a state where it is both itself and its evil twin (the antiparticle version) at once. This state is known as "quantum superposition," and it's at the heart of the famous Schrödinger's Cat thought experiment.

As a result of this situation, the charm meson exists as two distinct particles with two distinct masses. But the difference between the two is infinitesimally small--0.00000000000000000000000000000000000001 grams, to be exact, according to the scientists' research, described in a new paper published last month on the arXiv preprint server (that means the work hasn't been peer-reviewed yet). They've recently submitted the work for publication in the journal Physical Review Letters.

While the findings are basically the definition of minuscule, the ramifications are anything but; the physicists say the charm meson particle's ability to exist as both itself and its alter-ego could shake up our assumptions about the very nature of reality.

To understand what's going on here, we first have to unpack the meson particle. These are extremely short-lived subatomic particles with a balanced number of quarks and antiquarks. In case you skipped that lecture in quantum physics, quarks are particles that combine together to form "hadrons," some of which are protons and neutrons--the basic components of atomic nuclei.


There are six "flavors" of quark: up, down, charm, strange, top, and bottom. Each also has an antiparticle, called an antiquark. Quarks and antiquarks differ in certain properties--for example, they carry electrical charges of equal magnitude but opposite sign.

Back to mesons: They're almost the size of neutrons or protons, but are extremely unstable. So, they're uncommon in nature itself, but physicists are interested in studying them in artificial environments (like in the LHC) because they want to better understand quarks. That's because, along with leptons, quarks make up all known matter.

Charm mesons can travel as a mixture of their particle and antiparticle states (a phenomenon appropriately called "mixing"). Physicists have known that for over a decade, but the new research shows for the first time that charm mesons can actually oscillate back and forth between the two states.

Antiquarks are the opposite of quarks and are considered a type of antimatter. These particles can annihilate with normal matter--which is kind of a problem if you want the universe to, well, exist. The various kinds of antimatter are almost all named using the anti- prefix, like quark versus antiquark. More specifically, a charm meson typically has a charm quark and an up antiquark, and its anti- partner has a charm antiquark and an up quark.

It's important to note the charm meson is not the only particle that can oscillate between matter and antimatter states. Physicists have previously observed three other particles in the Standard Model--the theory that explains particle physics--doing so. That includes strange mesons, beauty mesons, and strange-beauty mesons.

Why was the charm meson a holdout for so long? The charm meson oscillates incredibly slowly, meaning physicists had to take measurements at an extremely fine degree of detail. In fact, most charm mesons will fully decay before a complete oscillation can even take place, like an aging person with a very slow-growing tumor.
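
That slow oscillation can be put in rough numbers. The sketch below uses a simplified two-state model with illustrative parameters (the known D0 lifetime and a mixing parameter of order 0.004, not the actual LHCb fit): the chance a charm meson has flipped to its antiparticle within one lifetime is tiny, while a complete oscillation would take over a thousand lifetimes.

```python
import math

tau = 4.1e-13   # D0 meson lifetime in seconds (measured value)
x = 0.004       # mixing parameter Delta_m / Gamma, order of magnitude only

def p_flipped(t):
    # Simplified two-state model: an exponential decay envelope times a
    # very slow matter/antimatter oscillation set by the mass splitting.
    return math.exp(-t / tau) * math.sin(x * (t / tau) / 2) ** 2

full_period = 2 * math.pi * tau / x   # time for one complete oscillation

print(f"P(flipped) after one lifetime: {p_flipped(tau):.1e}")
print(f"one full oscillation takes {full_period / tau:.0f} lifetimes")
print(f"survival over a full period: {math.exp(-full_period / tau):.0e}")
```

The flip probability within a lifetime comes out around one in a million, and essentially no meson survives the ~1,600 lifetimes a full oscillation would take, which is why the measurement demanded such extreme precision.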


The large-scale undertaking that produced the charm meson data is called the Large Hadron Collider beauty experiment. It seeks to examine why we live in a world full of matter, but seemingly no antimatter, according to CERN.

Using a vast amount of data from the charm mesons generated at the LHC, the scientists measured particles to a difference of 1 x 10^-38 grams. With that unbelievably fine-toothed comb, they were able to observe the superposition oscillation of the charm mesons.
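
As a sanity check on that figure, E = mc² converts 1 × 10^-38 grams into the energy units particle physicists actually use; the constants below are standard values, and the result lands at a few micro-electronvolts:

```python
# E = mc^2 turns the quoted mass difference into energy units.
delta_m_kg = 1e-38 * 1e-3            # 1e-38 grams -> kilograms
c = 2.998e8                           # speed of light, m/s
ev = 1.602e-19                        # joules per electronvolt
delta_m_ev = delta_m_kg * c**2 / ev   # mass difference as an energy
print(f"{delta_m_ev:.1e} eV")         # a few micro-electronvolts
```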

How did scientists measure this incredibly tiny difference in mass? In short, the LHC regularly produces mesons of all kinds as part of its scientists' work.

"Charm mesons are produced at the LHC in proton-proton collisions, and normally they only travel a few millimeters before they decay into other particles," according to a University of Oxford press release. "By comparing the charm mesons that tend to travel further versus those that decay sooner, the team identified differences in mass as the main factor that drives whether a charm meson turns into an anti-charm meson or not."


Now that scientists have finally observed charm meson oscillation, they're ready to open up a whole new can of worms in their experimentation, hoping to unearth the mysteries of the oscillation process itself.

That path of study could lead to a new understanding about how our world began in the first place. Per the Standard Model of particle physics, the Big Bang should have produced matter and antimatter in equal parts. Thankfully, that didn't happenbecause if it had, all of the antimatter particles would have collided with the matter particles, destroying everything.

Clearly, physicists say, there is an imbalance in matter and antimatter collisions in our world, and the answer to that mystery could lie in the incomprehensibly small oscillations of particles like the charm meson. Now, scientists want to understand if the rate of transition from particle to antiparticle is the same as the rate of transition from antiparticle to particle.

Depending on what they find, our very conceptions of how we exist--why we live in a world full of matter rather than antimatter, and how we got here--could change forever.



In search of nature’s laws Steven Weinberg died on July 23rd – The Economist

Jul 31st 2021

AS HE LIKED to tell it, there were three epiphanies in Steven Weinberg's life. The first came in a wooden box. It was a chemistry set, passed on by a cousin who was tired of it. As he played with the chemicals in it, and found that each reacted differently because of atoms, a vast thought struck him: if he learned about atoms, he would know how the whole world worked.


The second epiphany came when, as a teenager, he paid a routine visit to his local library in New York. On the table was a book called Heat, open to a page of equations. Among them was the elegant, unknown swirl of an integral sign. It showed that with a mathematical formula, and a magic symbol, science could express something as rudimentary as the glow of a candle flame. His third awakening, when he was in his 20s and already a professor of physics, was the discovery that a mathematical theory could be applied to the whole dazzling array of stars and planets, dark space beyond them and, he concluded, everything.

All regularities in nature followed from a few simple laws. Not all were known yet; but they would be. In the end he was sure they would combine into a set of equations simple enough to put on a T-shirt, like Einstein's E=mc². It was just a matter of continually querying and searching. In the strange circumstance of finding himself conscious and intelligent on a rare patch of ordinary matter that was able to sustain life, doggedly asking questions was the least he could do.

His signal achievement was to discover, in the 1960s, a new level of simplicity in the universe. There were then four known universal forces--gravity and electromagnetism, both of which operate at large scales, and the strong and weak nuclear forces, both of which are appreciable only at small scales. Electromagnetism was explained by a quantum field theory; similar theories for the nuclear forces were eagerly being sought.

In quantum field theories, forces are mediated by particles called bosons; the boson involved in electromagnetism is the photon, the basic particle of light. He and others showed that a theory of the weak force required three bosons: the W+ and the W-, which carried electric charges, and the Z0, which did not. The W particles were at play in the observable universe; they were responsible for some sorts of radioactive decay. The Z was notional until, in 1973, researchers at CERN, Europe's great particle-physics lab, observed neutral currents between the particles they were knocking together. These had never been seen before, and could be explained only by the Z. In 1979 the Nobel prize duly followed.

In his understated way, he called his contribution "very satisfactory." It was not just that the weak force and the electromagnetic force could be explained by similar tools. At high energies they were basically the same thing.

That triumph of unification increased his curiosity about the only point where such high energies were known to have existed: the Big Bang. In his book The First Three Minutes, in 1977, he described the immediate aftermath, to the point where the hyper-hot cosmic soup had cooled enough for atomic nuclei to form. He saw early on how deeply particle physics and cosmology were intertwined, and became fascinated by the idea of a universe dominated by unobservable dark energy and dark matter in which ordinary matter (the stars and the planets and us) was merely a small contamination. He longed for CERN's Large Hadron Collider to find evidence of dark matter. It caused him lasting frustration that Congress in 1993 had cancelled the Superconducting Super Collider, which was to have been even bigger.

Whatever was found, he was sure it would fit into the simple scheme of natures laws. Quantum mechanics, however, troubled him. He worried that its determinism implied that the world was endlessly splitting, generating myriad parallel histories and universes in which the constants in nature would have different values. Goodbye to a unified theory of everything, if that were so.

Such a unified law would have given him satisfaction but, he knew, no comfort. Natures laws were impersonal, cold and devoid of purpose. Certainly there was no God-directed plan. As he wrote at the end of The First Three Minutes, the more the universe seemed comprehensible, the more it seemed pointless. No saying of his became more famous, but the next paragraph softened it: humans gave the universe their own point and purpose by the way they lived, by loving each other and by creating art.

He set the example by marrying Louise, his college sweetheart, devouring opera and theatre, revelling in the quirky liberalism of Austin, where he taught at the University of Texas for almost four decades, and looking for theories in physics that would carry the same sense of inevitability he found so beautiful in chamber music, or in poetry. He still thought of human existence as accidental and tragic, fundamentally. But from his own little island of warmth and love, art and science, he managed a wry smile.

What angered him most was the persistence of religion. It had not only obstructed and undermined science in the age of Galileo and Copernicus; it had also survived Darwin, whose theory of evolution had shocked it more sharply than anything physics did. And it was still there, an alternative theory of the world that corroded free inquiry. For even if the laws of nature could be reduced to one, scientists would still ask: Why? Why this theory, not another? Why in this universe, and not another?

There was, he reflected, no end to the chain of whys. So he did not stop asking or wondering. He liked to review and grade his predecessors, from the ancient Greeks onwards, chastising them for failing to use the data they had, but also sympathising with their lack of machines advanced enough to prove their ideas. The human tragedy was never to understand why things were as they were. Yet, for all that, he could echo Ptolemy: "I know that I am mortal and the creature of a day, but when I search out the massed wheeling circles of the stars, my feet no longer touch the Earth--I take my fill of ambrosia, the food of the gods."

This article appeared in the Obituary section of the print edition under the headline "Nature's laws"


July: Superconductivity in cuprates | News and features – University of Bristol

Researchers from the University of Bristol's School of Physics used some of Europe's strongest continuous magnetic fields to uncover evidence of exotic charge carriers in the metallic state of copper-oxide high-temperature superconductors.

Their results have been published this week in Nature. In a related publication in SciPost Physics last week, the team postulated that it is these exotic charge carriers that form the superconducting pairs, in marked contrast with expectations from conventional theory.

Conventional superconductivity

Superconductivity is a fascinating phenomenon in which, below a so-called critical temperature, a material loses all its resistance to electrical currents. In certain materials, at low temperatures, all electrons are entangled in a single, macroscopic quantum state, meaning that they no longer behave as individual particles but as a collective, resulting in superconductivity. The general theory for this collective electron behaviour has been known for a long time, but one family of materials, the cuprates, refuses to conform to the paradigm. They also possess the highest ambient-pressure superconducting transition temperatures known to exist. It was long thought that for these materials the mechanism that glues together the electrons must be special, but recently the attention has shifted and now physicists investigate the non-superconducting states of cuprates, hoping to find clues to the origin of high-temperature superconductivity and its distinction from normal superconductors.

High-temperature superconductivity

Most superconductors, when heated to exceed their critical temperature, change into ordinary metals. The quantum entanglement that causes the collective behaviour of the electrons fades away, and the electrons start to behave like an ordinary gas of charged particles.

Cuprates are special, however. Firstly, as mentioned above, because their critical temperature is considerably higher than that of other superconductors. Secondly, they have very special measurable properties even in their metallic phase. In 2009, physicist Prof Nigel Hussey and collaborators observed experimentally that the electrons in these materials form a new type of structure, different from that in ordinary metals, thereby establishing a new paradigm that scientists now call the strange metal. Specifically, the resistivity at low temperatures was found to be proportional to temperature, not at a singular point in the temperature versus doping phase diagram (as expected for a metal close to a magnetic quantum critical point) but over an extended range of doping. This extended criticality became a defining feature of the strange metal phase from which superconductivity emerges in the cuprates.

Magnetoresistance in a strange metal

In the first of these new reports, EPSRC Doctoral Prize Fellow Jakes Ayres and PhD student Maarten Berben (based at HFML-FELIX in Nijmegen, the Netherlands) studied the magnetoresistance--the change in resistivity in a magnetic field--and discovered something unexpected. In contrast to the response of usual metals, the magnetoresistance was found to follow a peculiar response in which magnetic field and temperature appear in quadrature. Such behaviour had only been observed previously at a singular quantum critical point, but here, as with the zero-field resistivity, the quadrature form of the magnetoresistance was observed over an extended range of doping. Moreover, the strength of the magnetoresistance was found to be two orders of magnitude larger than expected from conventional orbital motion and insensitive to the level of disorder in the material as well as to the direction of the magnetic field relative to the electrical current. These features in the data, coupled with the quadrature scaling, implied that the origin of this unusual magnetoresistance was not the coherent orbital motion of conventional metallic carriers, but rather a non-orbital, incoherent motion from a different type of carrier whose energy was being dissipated at the maximal rate allowed by quantum mechanics.
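
"In quadrature" means the field- and temperature-dependent part of the resistivity grows like the hypotenuse of a right triangle, roughly sqrt((aT)² + (bH)²). The toy sketch below (arbitrary coefficients, not fitted to the Bristol data) shows the tell-tale consequence: dividing by temperature leaves a quantity that depends only on the ratio H/T, so curves taken at different temperatures collapse onto one another.

```python
import math

alpha, beta = 1.0, 1.0   # illustrative coefficients, not fitted values

def delta_rho(T, H):
    # Quadrature magnetoresistance: temperature and field enter
    # symmetrically, like the two legs of a right triangle.
    return math.hypot(alpha * T, beta * H)

# Scaling check: delta_rho / T depends only on the ratio H / T, so data
# taken at different temperatures collapse onto a single curve.
collapsed = [delta_rho(T, 3 * T) / T for T in (10.0, 20.0, 40.0)]
print(collapsed)   # the same value at every temperature
```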

From maximal to minimal dissipation

Prof Hussey said: "Taking into account earlier Hall effect measurements, we had compelling evidence for two distinct carrier types in cuprates - one conventional, the other strange. The key question then was which type was responsible for high-temperature superconductivity? Our team led by Matija Čulo and Caitlin Duffy then compared the evolution of the density of conventional carriers in the normal state and the pair density in the superconducting state and came to a fascinating conclusion; that the superconducting state in cuprates is in fact composed of those exotic carriers that undergo such maximal dissipation in the metallic state. This is a far cry from the original theory of superconductivity and suggests that an entirely new paradigm is needed, one in which the strange metal takes centre stage."

Paper:

'Incoherent transport across the strange-metal regime of overdoped cuprates' in Nature by Nigel Hussey et al.


The Quest for the Spin Transistor – IEEE Spectrum

From the earliest batteries through vacuum tubes, solid state, and integrated circuits, electronics has staved off stagnation. Engineers and scientists have remade it repeatedly, vaulting it over one hurdle after another to keep alive a record of innovation unmatched in industrial history.

It is a spectacular and diverse account through which runs a common theme. When a galvanic pile twitches a frog's leg, when a triode amplifies a signal, or when a microprocessor stores a bit in a random access memory, the same agent is at work: the movement of electric charge. Engineers are far from exhausting the possibilities of this magnificent mechanism. But even if a dead end is not yet visible, the foreseeable hurdles are high enough to set some searching for the physics that will carry electronics on to its next stage. In so doing, it could help up the ante in the semiconductor stakes, ushering in such marvels as nonvolatile memories with enormous capacity, ultrafast logic devices that can change function on the fly, and maybe even processors powerful enough to begin to rival biological brains.

A growing band of experimenters think they have seen the future of electronics, and it is spin. This fundamental yet elusive property of electrons and other subatomic particles underlies permanent magnetism, and is often regarded as a strange form of nano-world angular momentum.

Microelectronics researchers have been investigating spin for at least 20 years. Indeed, their discoveries revolutionized hard-disk drives, which since 1998 have used a spin-based phenomenon to cram more bits than ever on to their disks. Within three years, Motorola Inc. and IBM Corp. are expected to take the next step, introducing the first commercial semiconductor chips to exploit spin--a new form of random access memory called M (for magnetic) RAM. Fast, rugged, and nonvolatile, MRAMs are expected to carve out a niche from the US $10.6-billion-a-year flash memory market. If engineers can bring the costs down enough, MRAMs may eventually start digging into the $35 billion RAM market as well.

The sultans of spin say memory will be just the beginning. They have set their sights on logic, emboldened by experimental results over the past two or three years that have shown the budding technologies of spin to be surprisingly compatible with the materials and methods of plain old charge-based semiconductor electronics. In February 2000, the Defense Advanced Research Projects Agency announced a $15-million-a-year, five-year program to focus on new kinds of semiconductor materials and devices that exploit spin. It was the same Arlington, Va., agency's largesse of $60 million or so over the past five years that helped move MRAMs from the blackboard to the verge of commercial production.

Subatomic spookiness

Now proponents envision an entirely new form of electronics, called spintronics. It would be based on devices that used the spin of electrons to control the movement of charge. Farther down the road (maybe a lot farther), researchers might even succeed in making devices that used spin itself to store and process data, without any need to move charge at all. Spintronics would use much less power than conventional electronics, because the energy needed to change a spin is a minute fraction of what is needed to push charge around.

Other advantages of spintronics include nonvolatility: spins don't change when the power is turned off. And the peculiar nature of spin--and the quantum theory that describes it--points to other weird, wonderful possibilities, such as: logic gates whose function--AND, OR, NOR, and so on--could be changed a billion times a second; electronic devices that would work directly with beams of polarized light as well as voltages; and memory elements that could be in two different states at the same time. "It offers completely different types of functionality" from today's electronics, said David D. Awschalom, who leads the Center for Spintronics and Quantum Computation at the University of California at Santa Barbara. "The most exciting possibilities are the ones we're not thinking about."

Much of the research is still preliminary, Awschalom cautions. A lot of experiments are still performed at cryogenic temperatures. And no one has even managed to demonstrate a useful semiconductor transistor or transistor-like device based on spin, let alone a complex logic circuit. Nevertheless, researchers at dozens of organizations are racing to make spin-based transistors and logic, and encouraging results from groups led by Awschalom and others have given ground for a sense that major breakthroughs are imminent.

"A year and a half ago, when I was giving a talk [and] said something about magnetic logic, before I went on with the rest of my talk I'd preface my statement with, '...and now, let's return to the planet Earth,'" said Samuel D. Bader, a group leader in the materials science division at Argonne National Laboratory, in Illinois. "I can drop that line now," he added.

Quantum mechanical mystery

Spin remains an unplumbed mystery. "It has a reputation of not being really fathomable," said Jeff M. Byers, a leading spin theorist at the Naval Research Laboratory (NRL), in Washington, D.C. "And it's somewhat deserved."

Physicists know that spin is the root cause of magnetism, and that, like charge or mass, it is an intrinsic property of the two great classes of subatomic particles: fermions, such as electrons, protons, and neutrons; and bosons, including photons, pions, and more. What distinguishes them, by the way, is that a boson's spin is measurable as an integer number (0, 1, 2...) of units, whereas fermions have a spin of 1/2, 3/2, 5/2... units.

Much of spin's elusiveness stems from the fact that it goes right to the heart of quantum theory, the foundation of modern physics. Devised in the early decades of the 20th century, quantum theory is an elaborate conceptual framework, based on the notion that the exchange of energy at the subatomic level is constrained to certain levels, or quantities--in a word, quantized.

Paul Dirac, an electrical engineering graduate of Bristol University, in England, turned Cambridge mathematician, postulated the existence of spin in the late 1920s. In work that won him a Nobel prize, he reconciled equations for energy and momentum from quantum theory with those of Einstein's special theory of relativity.

Spin is hard to grasp because it lacks an exact analog in the macroscopic world we inhabit. It is named after its closest real-world counterpart: the angular momentum of a spinning body. But whereas the ordinary angular momentum of a whirling planet, say, or curve ball vanishes the moment the object stops spinning and hence is extrinsic, spin is a kind of intrinsic angular momentum that a particle cannot gain or lose.

"Imagine an electronics technology founded on such a bizarre property of the universe," said Byers.

Of course, the analogy between angular momentum and spin only goes so far. Particle spin does not arise out of rotation as we know it, nor does the electron have physical dimensions, such as a radius. So the idea of the electron having angular momentum in the classical meaning of the term doesn't make sense. Confused? "Welcome to the club," Byers said, with a laugh.

The smallest magnets

Fortunately, a deep grasp of spin is not necessary to understand the promise of the recent advances. The usual imperfect analogies that somehow manage to render the quantum world meaningful for mortal minds turn out to be rather useful--as is spin's role in magnetism, a macroscopic manifestation of spin.

Start with the fact that spin is the characteristic that makes the electron a tiny magnet, complete with north and south poles. The orientation of the tiny magnet's north-south axis depends on the particle's axis of spin. In the atoms of an ordinary material, some of these spin axes point "up" (with respect to, say, an ambient magnetic field) and an equal number point "down." The particle's spin is associated with a magnetic moment, which may be thought of as the handle that lets a magnetic field torque the electron's axis of spin. Thus in an ordinary material, the up moments cancel the down ones, so no surplus moment piles up that could hold a picture to a refrigerator.

For that, you need a ferromagnetic material, such as iron, nickel, or cobalt. These have tiny regions called domains in which an excess of electrons have spins with axes pointing either up or down--at least, until heat destroys their magnetism, above the metal's Curie temperature. The many domains are ordinarily randomly scattered and evenly divided between majority-up and majority-down. But an externally applied magnetic field will move the walls between the domains and align all the domains in the direction of the field. The result is a permanent magnet.

Ferromagnetic materials are central to many spintronics devices. Use a voltage to push a current of electrons through a ferromagnetic material, and it acts like a spin polarizer, aligning the spin axes of the transiting electrons so that they are up or down. One of the most basic and important spintronic devices, the magnetic tunnel junction, is just two layers of ferromagnetic material separated by an extremely thin, nonconductive barrier [see figure, "How a Magnetic Tunnel Junction Works" ]. The device was first demonstrated by the French physicist M. Jullière in the mid-1970s.

ILLUSTRATIONS: STEVE STANKIEWICZ

How a Magnetic Tunnel Junction Works: One of the most fundamental spintronic devices, the magnetic tunnel junction, is just two layers of ferromagnetic material [light blue] separated by a nonmagnetic barrier [darker blue]. In the top illustration, when the spin orientations [white arrows] of the electrons in the two ferromagnetic layers are the same, an applied voltage is quite likely to drive electrons to tunnel through the barrier, resulting in high current flow. But flipping the spins in one of the two layers [yellow arrows, bottom illustration], so that the two layers have oppositely aligned spins, restricts the flow of current.

It works like this: suppose the spins of the electrons in the ferromagnetic layers on either side of the barrier are oriented in the same direction. Then applying a voltage across the three-layer device is quite likely to cause electrons to tunnel through the thin barrier, resulting in high current flow. But flipping the spins in one of the two ferromagnetic layers, so that the two layers have opposite alignment, restricts the flow of current through the barrier [bottom]. Tunnel junctions are the basis of the MRAMs developed by IBM and Motorola, one per memory cell.
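The size of the effect can be estimated with a simple model due to Jullière, which relates the tunnel magnetoresistance (TMR)--the fractional jump in resistance between the parallel and antiparallel configurations--to the spin polarizations of the two ferromagnetic layers. The sketch below uses illustrative polarization values, not measurements from any particular device:

```python
def julliere_tmr(p1, p2):
    """Julliere model: TMR = (R_antiparallel - R_parallel) / R_parallel
    for electrode spin polarizations p1 and p2."""
    return 2 * p1 * p2 / (1 - p1 * p2)

# Illustrative spin polarizations for two ferromagnetic electrodes.
p_layer1, p_layer2 = 0.44, 0.34
tmr = julliere_tmr(p_layer1, p_layer2)
print(f"TMR = {tmr:.1%}")  # fractional resistance jump when the layers are antiparallel
```

Reading out an MRAM cell amounts to sensing which side of this resistance jump the junction sits on.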

Any memory device can also be used to build logic circuits, in theory at least, and spin devices such as tunnel junctions are no exception. The idea has been explored by Mark Johnson, a leading spin researcher at the Naval Research Laboratory, and others. Lately, work in this area has shifted to a newly formed program at Honeywell Inc., Minneapolis, Minn. The challenges to the devices' use for programmable logic are formidable. To quote William Black, principal engineer at the Rocket Chips subsidiary of Xilinx, a leading maker of programmable logic in San Jose, Calif., "The basic device doesn't have gain and the switching threshold typically is not very well controlled." To call that "the biggest technical impediment," as he does, sounds like an understatement.

Relativistic transistors

Already on the drawing board are spin-based devices that would act something like conventional transistors--and that might even produce gain. There are several competing ideas. The most enduring one is known as the spin field-effect transistor (FET). A more recent proposal puts a new spin, so to speak, on an almost mythical device physicists have pursued for decades: the resonant tunneling transistor.

In an ordinary FET, a metal gate controls the flow of current from a source to a drain through the underlying semiconductor. A voltage applied to the gate sets up an electric field, and that field in turn varies the amount of current that can flow between source and drain. More voltage produces more current.

In 1990 Supriyo Datta and Biswajit A. Das, then both at Purdue University, in West Lafayette, Ind., proposed a spin FET in a seminal article published in the journal Applied Physics Letters. The two theorized about an FET in which the source and drain were both ferromagnetic metals, with the same alignment of electron spins. Electrons would be injected into the source, which would align the spins so that their axes were oriented the same way as those in the source and drain. These spin-polarized electrons would shoot through the source and travel at 1 percent or so of the speed of light toward the drain.

This speed is important, because electrons moving at so-called relativistic speeds are subject to certain significant effects. One is that an applied electric field acts as though it were a magnetic field. So a voltage applied to the gate would torque the spin-polarized electrons racing from source to drain and flip their direction of spin. Thus electron spins would become polarized in the opposite direction to the drain, and could not enter it so easily. The current going from the source to the drain would plummet.

Note that the application of the voltage would cut off current, rather than turn it on, as in a conventional FET. Otherwise, the basic operation would be rather similar--but with a couple of advantages. To turn the current on or off would require only the flipping of spins, which takes very little energy. Also, the polarization of the source and drain could be flipped independently, offering intriguing possibilities unlike anything that can be done with a conventional FET. For example, Johnson patented the idea of using an external circuit to flip the polarization of the drain, turning the single-transistor device into a nonvolatile memory cell.
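Quantum mechanics gives the transfer characteristic of such a device a simple form: if the gate rotates the transiting spins by an angle theta relative to the drain's magnetization, the probability that an electron is accepted by the drain goes as cos^2(theta/2). A minimal sketch of that relationship (the angles are illustrative; how a given gate voltage maps to a rotation angle depends on device details not specified here):

```python
import math

def spin_fet_current(theta):
    """Relative drain current when transiting spins are rotated by angle
    theta (radians) with respect to the drain's magnetization axis."""
    return math.cos(theta / 2) ** 2

# Sweep the spin rotation: aligned spins pass freely, flipped spins are blocked.
for theta_deg in (0, 90, 180):
    theta = math.radians(theta_deg)
    print(f"rotation {theta_deg:3d} deg -> relative current {spin_fet_current(theta):.2f}")
```

This is why applying the gate voltage cuts the current off: a half-turn of the spins (theta = 180 degrees) drives the transmission to zero.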

A recent German breakthrough will "revolutionize" a major spintronics subfield, one expert declared

Alas, 11 years after the paper by Datta and Das, no one has managed to make a working spin FET. Major efforts have been led by top researchers, such as Johnson at the NRL, Michael Flatté at the University of Iowa, Michael L. Roukes at the California Institute of Technology, Hideo Ohno of Tohoku University in Japan, Laurens W. Molenkamp, then at the University of Aachen in Germany, and Anthony Bland at the University of Cambridge in England. The main problem has been maintaining the polarization of the spins: the ferromagnetic source does in fact align the spins of electrons injected into it, but the polarization does not survive as the electrons shoot out of the source and into the semiconductor between the source and drain.

Recent work in Berlin, Germany, may change all that. In a result published last July in Physical Review Letters, Klaus H. Ploog and his colleagues at the Paul Drude Institute disclosed that they had used a film of iron, grown on gallium arsenide, to polarize spins of electrons injected into the GaAs. Not only was the experiment carried out at room temperature, but the efficiency of the injection, at 2 percent, was high in comparison with similar experiments. The work was "extremely important," said the Naval Research Laboratory's Johnson. "It will revolutionize this subfield. A year from now many spin-FET researchers will be working with iron."

The other kind of proposed spin transistor would exploit a quantum phenomenon called resonant tunneling. The device would be an extension of the resonant tunneling diode. At the heart of this device is an infinitesimal region, known as a quantum well, in which electrons can be confined. However, at a specific, resonant voltage that corresponds to the quantum energy of the well, the electrons tend to slip--the technical term is "tunnel"--freely through the barriers enclosing the well.

Generally, the spin state of the electron is irrelevant to the tunneling, because the up and down electrons have the same amount of energy. But by various means, researchers can design a device in which the spin-up and spin-down energy levels are different, so that there are two different tunneling pathways. The two tunnels would be accessed with different voltages; each voltage would correspond to one or the other of the two spin states. At one voltage, a certain level of spin-down current would flow. At some other voltage, a different level of spin-up current would go through the quantum well's barriers.
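The two spin channels would show up as separate resonances in the device's current-voltage curve. A toy picture, modeling each channel as an idealized Lorentzian transmission peak at an assumed (not measured) resonance voltage:

```python
def lorentzian(v, v0, width):
    """Idealized resonant-tunneling transmission peak centered at voltage v0."""
    return width**2 / ((v - v0) ** 2 + width**2)

# Assumed spin-split resonance voltages and linewidth (arbitrary units).
V_UP, V_DOWN, WIDTH = 0.10, 0.16, 0.01

def current(v):
    # Total current: spin-up and spin-down electrons tunnel at different voltages.
    return lorentzian(v, V_UP, WIDTH) + lorentzian(v, V_DOWN, WIDTH)

print(f"I at spin-up resonance (0.10):   {current(0.10):.2f}")
print(f"I at spin-down resonance (0.16): {current(0.16):.2f}")
print(f"I between resonances (0.13):     {current(0.13):.2f}")
```

Selecting one voltage or the other thus selects which spin population carries the current.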

One way of splitting the energy levels is to make the two barriers of different materials, so that the potential energy that confines the electrons within the quantum well is different on either side of the well. That difference in the confining potentials translates, for a moving electron, into two regions within the quantum well, which have magnetic fields that are different from each other. Those asymmetric fields in turn give rise to the different resonant energy levels for the up and down spin states. A device based on these principles is the goal of a team led by Thomas McGill at the California Institute of Technology, with members at HRL Laboratories LLC, Jet Propulsion Laboratory, Los Alamos National Laboratory, and the University of Iowa.

Another method of splitting the energy levels is to simply put them in a magnetic field. This approach is being taken by a collaborative effort of nine institutions, led by Bruce D. McCombe at the University at Buffalo, New York.

Neither team has managed to build a working device, but the promise of such a device has kept interest high. A specific voltage would produce a certain current of, say, spin-up electrons. Using a tiny current to flip the spins would enable a larger current of spin-down electrons to flow at the same voltage. Thus a small current could, in theory anyway, be amplified.

Ray of hope

As these researchers refine the resonant and ballistic devices, they are looking over their shoulders at colleagues who are forging a whole new class of experimental device. This surging competition is based on devices that create or detect spin-polarized electrons in semiconductors, rather than in ferromagnetic metals. In these experiments, researchers use lasers to get around the difficulties of injecting polarized spin into semiconductors. By shining beams of polarized laser light onto ordinary semiconductors, such as gallium arsenide and zinc selenide, they create pools of spin-polarized electrons.

Some observers lament the dependence on laser beams. They find it hard to imagine how the devices could ever be miniaturized to the extent necessary to compete with conventional electronics, let alone work smoothly with them on the same integrated circuit. Also, in some semiconductors, such as GaAs, the spin polarization persists only at cryogenic temperatures.

In an early experiment, Michael Oestreich, then at Philipps University in Marburg, Germany, showed that electric fields could push pools of spin-polarized electrons through nonmagnetic semiconductors such as GaAs. The experiment was reported in the September 1998 Applied Physics Letters.

Then over the past three years, a breathtaking series of findings has turned the field into a thriving subdiscipline. Several key results were achieved in Awschalom's laboratory at Santa Barbara. He and his co-workers demonstrated that pools of spin-coherent electrons could retain their polarization for an unexpectedly long time--hundreds of nanoseconds. Working separately, Awschalom, Oestreich, and others also created pools of spin-polarized electrons and moved them across semiconductor boundaries without the electrons' losing their polarization.

If not for these capabilities, spin would have no future in electronics. Recall that a practical device will be operated by altering its orientation of spin. That means that the spin coherence has to last, at a minimum, longer than it takes to alter the orientation of that spin polarization. Also, spintronic devices, like conventional ones, will be built with multiple layers of semiconductors, so moving spin-polarized pools across junctions between layers without losing the coherence will be essential.

Awschalom and his co-workers used a pulsed, polarized laser to establish pools of spin-coherent electrons. The underlying physics revolves around the so-called selection rules. These are quantum-theoretical laws describing whether or not an electron can change energy levels by absorbing or emitting a photon of light. According to those selection rules, light that is circularly polarized will excite only electrons of one spin orientation or the other. Conversely, when spin-coherent electrons combine with holes, the result is photons of circularly polarized light.

Puzzling precession

In his most recent work, Awschalom and his graduate student, Irina Malajovich, collaborated with Nitin Samarth of Pennsylvania State University in University Park and his graduate student, Joseph Berry. As he has in the past, Awschalom performed the experiment on pools of electrons that were not only spin polarized but were also precessing. Precession occurs when a pool of spin-polarized electrons is put in a magnetic field: the field causes their spin axes to rotate in a wobbly way around that field. The frequency and direction of rotation depend on the strength of the magnetic field and on characteristics of the material in which the precession is taking place.
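The precession rate is the Larmor frequency, f = |g| * mu_B * B / h, where the effective g-factor is a property of the host material--this material dependence is what lets experimenters distinguish "GaAs-like" from "ZnSe-like" precession. The g-factor values below are commonly quoted electron g-factors for these semiconductors, used here purely for illustration:

```python
MU_B = 9.274e-24  # Bohr magneton, J/T
H = 6.626e-34     # Planck constant, J*s

def larmor_frequency_ghz(g_factor, b_tesla):
    """Spin precession frequency, in GHz, for an electron with the given
    effective g-factor in a magnetic field of b_tesla."""
    return abs(g_factor) * MU_B * b_tesla / H / 1e9

B = 1.0  # applied field, tesla
for material, g in (("GaAs", -0.44), ("ZnSe", 1.1)):
    print(f"{material}: g = {g:+.2f} -> {larmor_frequency_ghz(g, B):.1f} GHz")
```

Because the two materials give such different precession rates in the same field, watching how fast the spins wobble reveals which material's rules the electrons are obeying.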

The Santa Barbara-Penn State team used circularly polarized light pulses to create a pool of spin-coherent electrons in GaAs. They applied a magnetic field to make the electrons precess, and then used a voltage to drag the precessing electrons across a junction into another semiconductor, ZnSe. The researchers found that if they used a low voltage to drag the electrons into the ZnSe, the electrons took on the precession characteristics of the ZnSe as soon as they got past the junction. However, if they used a higher voltage, the electrons kept on precessing, as though they were still in the GaAs [see illustration, "Precessional Mystery" ].

ILLUSTRATIONS: STEVE STANKIEWICZ

Precessional Mystery: Given the right circumstances, electrons will synchronously "precess," or whirl about an axis that is itself moving. The angle and rate of this wobbly spin depend in part on the material in which it occurs. Thus, if a voltage pushes an electron out of gallium arsenide [light blue] into zinc selenide [yellow], the electron's precession characteristics change [top]. However, if a higher voltage pushes the electron sharply enough into the ZnSe, the precession characteristics do not change but remain those of GaAs for a while [bottom]. Some researchers believe they will be able to exploit this variability in future devices.

"You can tune the whole behavior of the current, depending on the electric field," Awschalom said in an interview. "That's what was so surprising to us." The group reported its results in the 14 June issue of Nature, prompting theorists around the world to wear out their pencils trying to explain the findings.

Other results from the collaboration were even more intriguing. The Santa Barbara and Penn State researchers performed a similar experiment, except with p-type GaAs and n-type ZnSe. N-type materials rely on electrons to carry current; p-type, on holes. Because the materials were of two different charge-carrier types, an electric field formed around their junction. That field, the experimenters found, was strong enough to pull a pool of spin-coherent electrons from the GaAs immediately into the ZnSe, where the coherence persisted for hundreds of nanoseconds.

The result was encouraging for two reasons. As Awschalom put it, "It showed that you can build n-type and p-type materials and spin can get through the interfaces between them just fine." Equally important, it demonstrated that the spin can be moved from one kind of semiconductor into another without the need for external electric fields, which wouldn't be practical in a commercial device.

"The next big opportunity is to make a spin transistor," Awschalom added. "These results show, in principle, that there is no obvious reason why it won't work well."

Such a device is at least several years away. But even if researchers were on the verge of getting a spin transistor to work in the laboratory, more breakthroughs would be necessary before the device could be practical. For example, the need for pulses of circularly polarized laser light would seem an inconvenience, although Awschalom sees a bright side: the photons would be used for communications among chips, the magnetic elements for memory, and the spin-based devices for fast, low-power logic.

It's far-fetched now--but no more so than the idea of 1GB DRAMs would have seemed in the days when triodes ruled.

Hot off the presses is Semiconductor Spintronics and Quantum Computation, edited by David D. Awschalom, Nitin Samarth, and Daniel Loss. The 250-page book was released last October by Springer Verlag, Berlin/Heidelberg; ISBN: 3540421769.

The November/December issue of American Scientist, published by the scientific research society Sigma Xi, included an eight-page overview titled "Spintronics" by Sankar Das Sarma. See Vol. 89, pp. 516-523.

Honeywell Inc.'s Romney R. Katti and Theodore Zhu described the company's magnetic RAM technology in "Attractive Magnetic Memories," IEEE Circuits & Devices, Vol. 17, March 2001, pp. 26-34.

Read the original here:

The Quest for the Spin Transistor - IEEE Spectrum


Everything we know about soil science might be wrong – The Counter


Read this article:

Everything we know about soil science might be wrong - The Counter


NTT Research Launches Joint Research on Neuro-Computing with The University of Tokyo International Research Center for Neurointelligence – Business…

SUNNYVALE, Calif.--(BUSINESS WIRE)--NTT Research, Inc., a division of NTT (TYO:9432), today announced that it has entered a joint research agreement with The University of Tokyo's International Research Center for Neurointelligence (IRCN) to develop Coherent Ising Machine (CIM)-related technologies. The agreement calls for the two research organizations to develop new numerical tools and a simulator for the CIM, an information processing platform based on photonic oscillator networks. The principal investigator (PI) for the three-and-a-half-year research project is IRCN Deputy Director Kazuyuki Aihara, a University Professor at the University of Tokyo and an expert in the mathematical modeling of complex systems and their applications to neurointelligence. His counterpart at NTT Research is Physics & Informatics (PHI) Lab Senior Research Scientist Dr. Satoshi Kako, whose research focuses on the potential capability and application of coherent network computing.

A key component of the PHI Lab's research agenda, a CIM addresses problems that have been mapped to an Ising model, a mathematical abstraction of magnetic systems composed of interacting spins, or angular momenta, of fundamental particles. A primary goal of this joint research with the IRCN is to develop a novel neuromorphic computing principle for combinatorial optimization and machine learning. Combinatorial optimization problems, which a CIM is programmed to solve, require finding an optimal combination of variables from a larger set under various constraints. This project is directed toward finding a new computing principle and algorithms that can be implemented on a modern digital CIM platform. A near-term goal is to provide a field-programmable gate array (FPGA)-based CIM simulator with 16,000 spins and all-to-all couplings.
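The Ising model referred to here is easy to state in code: spins s_i take values +1 or -1, couplings J_ij connect pairs of spins, and the machine seeks the spin assignment minimizing the energy E = -sum over pairs of J_ij * s_i * s_j. Below is a brute-force ground-state search for a tiny made-up instance; a CIM or its FPGA simulator tackles thousands of spins, where exhaustive search is hopeless and heuristic dynamics take over:

```python
from itertools import product

def ising_energy(spins, couplings):
    """Ising energy E = -sum_{i<j} J_ij * s_i * s_j, spins in {-1, +1}."""
    return -sum(j * spins[a] * spins[b] for (a, b), j in couplings.items())

# Arbitrary couplings for a 4-spin example; J > 0 favors aligned spins,
# J < 0 favors anti-aligned spins.
J = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0, (0, 3): 0.5}

# Exhaustive search over all 2^4 spin configurations.
best = min(product((-1, 1), repeat=4), key=lambda s: ising_energy(s, J))
print("ground state:", best, "energy:", ising_energy(best, J))
```

Note that the (0,3) bond cannot be satisfied simultaneously with the other three: the problem is "frustrated," which is exactly what makes large instances hard and interesting.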

"Our goal is always to create new scientific knowledge," said PHI Lab Director Yoshihisa Yamamoto. "We also anticipate that the CIM simulator and digital algorithms that result from this search for knowledge will be used by our numerous collaborators in other research and academic organizations, which is also likely to accelerate the search for applications in this field."

The NTT Research PHI Lab has embraced an expansive mission of re-thinking computation according to fundamental principles of quantum physics and brain science. Dr. Yamamoto, along with IRCN Project Associate Professor Timothée Leleu and Stanford University professors Surya Ganguli and Hideo Mabuchi, elaborated on this interdisciplinary research agenda, which arguably constitutes a new field of study, in a cover article last year in Applied Physics Letters (APL) titled "Coherent Ising Machines: Quantum optics and neural network perspectives." A core part of the IRCN project is previewed in a presentation that Dr. Leleu delivered at the NTT Research Upgrade 2020 summit, titled "Neuromorphic in Silico Simulator for the CIM."

Professor Aihara, IRCN Deputy Director and PI for this project, has studied mathematical theory for modeling complex systems and developing trans-disciplinary applications in science and technology. He has developed a theoretical platform composed of advanced control theory of complex systems, complex network theory and nonlinear data analysis and data-driven modeling. On the applications side, he has also worked to bridge biological and clinical studies with human disease prediction and next-generation artificial intelligence (AI).

"This joint research project with NTT Research is an exciting opportunity to continue exploring the intersection of advanced theory and future applications," Professor Aihara said. "We have a strong foundation and considerable momentum going in and will continue to draw inspiration from advances in mathematical and chaos engineering, optics and neuroscience as the research collaboration unfolds."

As part of its goal to radically redesign artificial computers, both classical and quantum, the NTT Research PHI Lab has established similar relationships with eight universities. In addition to the IRCN at The University of Tokyo, the PHI Lab has entered joint research agreements with the California Institute of Technology (Caltech), Cornell University, Massachusetts Institute of Technology (MIT), Notre Dame University, Stanford University, Swinburne University of Technology, the University of Michigan and the Tokyo Institute of Technology. It is also conducting joint research with the NASA Ames Research Center in Silicon Valley and 1QBit, a private quantum computing software company.

About NTT Research

NTT Research opened its offices in July 2019 as a new Silicon Valley startup to conduct basic research and advance technologies that promote positive change for humankind. Currently, three labs are housed at NTT Research facilities in Sunnyvale: the Physics and Informatics (PHI) Lab, the Cryptography and Information Security (CIS) Lab, and the Medical and Health Informatics (MEI) Lab. The organization aims to upgrade reality in three areas: 1) quantum information, neuroscience and photonics; 2) cryptographic and information security; and 3) medical and health informatics. NTT Research is part of NTT, a global technology and business solutions provider with an annual R&D budget of $3.6 billion.

NTT and the NTT logo are registered trademarks or trademarks of NIPPON TELEGRAPH AND TELEPHONE CORPORATION and/or its affiliates. All other referenced product names are trademarks of their respective owners. © 2021 NIPPON TELEGRAPH AND TELEPHONE CORPORATION

Go here to see the original:

NTT Research Launches Joint Research on Neuro-Computing with The University of Tokyo International Research Center for Neurointelligence - Business...


Introducing the World's Most Precise Clock – IEEE Spectrum

In 1967, time underwent a dramatic shift. That was the year the key increment of time, the second, went from being defined as a tiny fraction of a year to something much more stable and fundamental: the time it takes for radiation absorbed and emitted by a cesium atom to undergo a certain number of cycles.

This change, which was officially adopted in the International System of Units, was driven by a technological leap. From the 1910s until the mid-1950s, the most precise way of keeping time was to synchronize the best quartz clocks to Earth's motion around the sun. This was done by using telescopes and other instruments to periodically measure the movement of stars across the sky. But in 1955, the accuracy of this method was easily bested by the first cesium atomic clock, which made its debut at the United Kingdom's National Physical Laboratory, on the outskirts of London.

Cesium clocks, which are essentially very precise oscillators, use microwave radiation to excite electrons and get a fix on a frequency that's intrinsic to the cesium atom. When the technology first emerged, researchers could finally resolve a known imperfection in their previous time standard: the slight, irregular speedups and slowdowns in Earth's rotation. Now, cesium clocks are so ubiquitous that we tend to forget how integral they are to modern life: We wouldn't have the Global Positioning System without them. They also help synchronize Internet and cellphone communications, tie together telescope arrays, and test fundamental physics. Through our cellphones, or via low-frequency radio synchronization, cesium time standards trickle down to many of the clocks we use daily.

Time Transformed: In the photo at top, John V.L. Parry (left) and Louis Essen stand with the first cesium atomic clock, in 1956, at the United Kingdom's National Physical Laboratory. The instrument paved the way for a redefinition of the second in 1967. At bottom is one of two modern optical-lattice clocks that have been built at the Paris Observatory.Photos: Top: National Physical Laboratory; Bottom: Jérôme Lodewyck

The accuracy of the cesium clock has improved greatly since 1955, increasing by a factor of 10 or so every decade. Nowadays, timekeeping based on cesium clocks accrues errors at a rate of just 0.02 nanosecond per day. If we had started such a clock when Earth began, about 4.5 billion years ago, it would be off by only about 30 seconds today.
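That figure is easy to check: at 0.02 nanosecond of error per day, the error accumulated over Earth's roughly 4.5-billion-year history comes to a few tens of seconds.

```python
ERROR_NS_PER_DAY = 0.02   # cesium timekeeping error, nanoseconds per day
DAYS_PER_YEAR = 365.25
EARTH_AGE_YEARS = 4.5e9

# Convert ns/day to seconds and multiply by the total number of days.
total_error_s = ERROR_NS_PER_DAY * 1e-9 * DAYS_PER_YEAR * EARTH_AGE_YEARS
print(f"accumulated error: {total_error_s:.0f} seconds")  # roughly 30 s, as the article says
```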

But we can do better. A new generation of atomic clocks that use laser light instead of microwave radiation can divide time more finely. About six years ago, researchers completed single-ion versions of these optical clocks, made with an ion of either aluminum or mercury. These surpassed the accuracy of cesium clocks by a full order of magnitude.

Now, a new offshoot of this technology, the optical-lattice clock (OLC), has taken the lead. Unlike single-ion clocks, which yield one measurement of frequency at a time, OLCs can simultaneously measure thousands of atoms held in place by a powerful standing laser beam, driving down statistical uncertainty. In the past year, these clocks have managed to surpass the best single-ion optical clocks in both accuracy and stability. With further development, they will lose no more than a second over 13.8 billion years, the present-day age of the universe.
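The statistical advantage of interrogating many atoms at once is ordinary averaging: N independent measurements of the same frequency shrink the uncertainty of the mean by a factor of the square root of N. A sketch of that scaling, in arbitrary units:

```python
import math

def relative_uncertainty(n_atoms, single_atom_sigma=1.0):
    """Statistical uncertainty of the averaged frequency measurement,
    relative to a single atom's uncertainty (1/sqrt(N) scaling)."""
    return single_atom_sigma / math.sqrt(n_atoms)

# One trapped ion versus thousands of lattice-trapped atoms.
for n in (1, 100, 10_000):
    print(f"{n:6d} atoms -> relative uncertainty {relative_uncertainty(n):.2f}")
```

This is why a lattice of thousands of atoms can reach a given stability far faster than a clock that polls a single ion.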

So why should you care about clocks of such mind-boggling accuracy? They are already making an impact. Some scientists are using optical-lattice clocks as tools to test fundamental physics. And others are looking at the possibility of using them to better measure differences in how fast time elapses at various points on Earth, a result of gravity's distortion of the passage of time as described by Einstein's theory of general relativity. The power to measure such minuscule perturbations may seem hopelessly esoteric. But it could have important real-world applications. We could, for example, improve our ability to forecast volcanic eruptions and earthquakes and more reliably detect oil and gas underground. And one day, in the not-too-distant future, OLCs could enable yet another shift in the way we define time.

According to the rules of quantum mechanics, the energy of an electron bound to an atom is quantized. This means that an electron can occupy only one of a discrete number of orbiting zones, or orbitals, around an atom's nucleus, although it can jump from one orbital to another by absorbing or emitting energy in the form of electromagnetic radiation. Because energy is conserved, this absorption or emission will happen only if the energy corresponding to the frequency of this radiation matches the energy difference between the two orbitals involved in the transition.

Atomic clocks work by exploiting this behavior. Atoms, of cesium for example, are manipulated so that their electrons all occupy the lowest-energy orbital. The atoms are then hit with a specific frequency of electromagnetic radiation, which can cause an electron to jump up to a higher-energy orbital, the excited clock state. The likelihood of this transition depends on the frequency of the radiation that's directed at the atom: The closer it is to the actual frequency of the clock transition, the higher the probability that the transition will occur.
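A common idealization of that frequency dependence is a Lorentzian line shape: the excitation probability peaks when the probe radiation sits exactly on the transition and falls off with detuning. The one-hertz linewidth below is an arbitrary illustrative choice, not a property of any actual clock:

```python
def excitation_probability(f_probe_hz, f0_hz, linewidth_hz):
    """Idealized Lorentzian response: equals 1 when the probe frequency
    matches the transition frequency f0, and falls off with detuning."""
    detuning = f_probe_hz - f0_hz
    half_width = linewidth_hz / 2
    return half_width**2 / (detuning**2 + half_width**2)

F_CS = 9_192_631_770.0  # cesium clock transition frequency, Hz
WIDTH = 1.0             # assumed linewidth, Hz (illustrative only)

print(excitation_probability(F_CS, F_CS, WIDTH))        # on resonance
print(excitation_probability(F_CS + 5.0, F_CS, WIDTH))  # detuned by 5 Hz
```

The servo loop described below is, in effect, hill-climbing on this curve: it nudges the probe frequency to keep the measured excitation at its peak.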

To probe how often it happens, scientists use a second source of radiation to excite electrons that remain in the lowest-energy state into a short-lived, higher-energy state. These electrons release photons each time they relax back down from this transient state, and the resulting radiation can be picked up with a photosensor, such as a camera or a photomultiplier tube.

Magic Transition: In an optical-lattice clock, an electron (yellow dot) can absorb electromagnetic radiation to jump from a lower- to a higher-energy orbital around a clock atom's nucleus (center). Light used to trap the atom can shift the natural energy of each orbital (dotted lines) down in energy (solid lines). This would ordinarily change the energy associated with the jump. But for a magic wavelength of trapping light, the energy shift of each orbital will be identical, and the frequency of the transition will remain the same.

If few photons are detected, it means that electrons are largely making the clock transition, and the incoming frequency is a good match. If many photons are being released, it means that most electrons were not excited by the clock signal. A servo-driven feedback loop is used to tune the radiation source so its frequency is always close to the atomic transition.

Converting this frequency reference into a clock that ticks off the seconds requires additional steps. Generally, the frequency measured in an atomic clock is used to calibrate other frequency sources, such as hydrogen masers and quartz clocks. A counter, made using basic analog circuitry, can be connected to a hydrogen maser to convert its electromagnetic signal into a clock that can count off ticks to mark the time.

The most common atomic clocks today use atoms of cesium-133, which has an electron transition that lies in the microwave range of the electromagnetic spectrum. If the atom is held at absolute zero and is unperturbed (more on that in a moment), this transition will occur at a frequency of exactly 9,192,631,770 hertz. And indeed, this is how we define the second in the International System of Units: it is the time it takes for 9,192,631,770 cycles of this radiation to occur.
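The numbers in the definition can be turned into other familiar quantities with a few lines of arithmetic. The short Python sketch below, using only exact SI constants, derives the wavelength and photon energy of the cesium clock transition:

```python
H_PLANCK = 6.62607015e-34   # J*s, exact in the 2019 SI
C_LIGHT = 299792458.0       # m/s, exact
F_CS = 9_192_631_770        # Hz, exact: this frequency defines the second

wavelength = C_LIGHT / F_CS   # ~3.26 cm, i.e. microwave radiation
energy = H_PLANCK * F_CS      # ~6.1e-24 J carried by each photon

print(f"wavelength = {wavelength * 100:.2f} cm")
print(f"photon energy = {energy:.2e} J")
```

The centimeter-scale wavelength is why cesium clocks are microwave devices, in contrast to the optical clocks discussed later.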

In actuality, cesium-133 isn't so perfect a pendulum. Atoms experience various forms of perturbation because of their imperfect environment. For example, an atom's motion through space, which in the laboratory can easily be as fast as 100 meters per second, can shift the frequency of an electron transition by means of the Doppler effect. This is the same phenomenon that affects the pitch of ambulance sirens and other sounds as the source of the sound moves relative to the listener. Interactions with the electron clouds of other atoms can also alter the energies of electron states, as can stray external electromagnetic fields.

Perturbations decrease a clock's accuracy: how much the atoms' average frequency is shifted from its natural unperturbed value. A number of these offsets can be accounted for, and changes in clock design have helped minimize these shifts. Indeed, one of the most dramatic such improvements occurred in the early 1990s, when physicists developed the fountain clock. This clock uses a laser to launch cooled cesium atoms upward, as if they were water droplets from a fountain, so that the Doppler shift caused by the upward motion cancels out nearly all of the shift that occurs as they fall.

But nowadays cesium clocks can't be improved much more. Tiny gains are increasingly difficult to achieve, and any gains we try to make now will take a long time. That's because cesium clocks are pushing the limit of the other key metric we use to evaluate clocks: the stability of their frequency.

Frequency stability characterizes how clock frequency fluctuates over time. The bigger the frequency instability, the greater the frequency noise, so the clock frequency will sometimes be a bit higher and sometimes a bit lower than its average value.

Careful engineering can minimize most sources of frequency noise. But there's a fundamental source of instability that is very difficult to overcome, because it comes from the probabilistic nature of quantum mechanics. To understand it, let's go back to the basic operating principle of an atomic clock.

Finding the Frequency: The fundamental frequency of a clock-atom transition (f₀) is often offset by some amount (Δf). To better detect fluctuations around the resulting transition frequency (f₀ + Δf), a clock laser's frequency is set slightly off (f_lock). This lowers the probability that an electron will make a transition, and it results in unneeded corrections to the clock laser frequency that yield noise called quantum projection noise. QPN decreases with the number of atoms (N).

We typically excite the electrons in an atomic clock with radiation whose frequency doesn't quite match the transition frequency. That's because the probability that an electron will be excited follows a bell-curve-like distribution. On the sides of the bell curve, it's easier to see whether a small change in frequency has occurred because it produces a more detectable effect, more dramatically increasing or decreasing the likelihood that an electron is excited [see illustration, Finding the Frequency]. Because of this, during the ordinary operation of an atomic clock, the clock radiation is set so that it has only a 50 percent probability of getting any given atom to make the clock transition. But even if the clock radiation frequency is set precisely at that point, an electron will be in either an excited or an unexcited state after it's measured. The servo loop will then wrongly assume that the clock radiation frequency is either too high or too low and will introduce an undue frequency correction.
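The logic of locking to the side of the resonance can be sketched with a toy model. Here the lineshape is assumed to be Gaussian purely for illustration (the real lineshape differs, but the reasoning is the same):

```python
import math

def transition_prob(f, f0=0.0, width=1.0):
    """Illustrative bell-curve probability that an atom is excited
    when probed at frequency offset f from the line center f0."""
    return math.exp(-((f - f0) ** 2) / (2 * width ** 2))

def sensitivity(f, df=1e-6):
    """Change in transition probability per unit of frequency error,
    estimated by a central finite difference."""
    return abs(transition_prob(f + df) - transition_prob(f - df)) / (2 * df)

# At the peak, a small frequency error is nearly invisible; on the
# steep side of the curve (near the 50% point) the same error produces
# a large, easily detected change in probability.
print(sensitivity(0.0))   # essentially zero at the peak
print(sensitivity(1.18))  # large near the half-maximum point
```

This is why the servo interrogates the atoms where the curve is steep rather than at its insensitive maximum.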

These miscorrections yield additional noise in the clock that we call quantum projection noise (QPN), and they are the main source of frequency instability in the best cesium clocks. Like many random sources of noise, the average level of QPN decreases with time. The longer you observe the clock, the more often the random upward shifts in frequency cancel out the downward shifts, and the noise eventually becomes negligible.

The catch is that this takes a long time in cesium: It takes about a day for the stability of the best cesium clocks to reach 2 parts in 10¹⁶, their steady-state accuracy level. (Metrologists commonly measure quantities such as accuracy and stability in fractional units. For a cesium clock with a frequency of 9.2 gigahertz, an accuracy of 2 × 10⁻¹⁶ translates to an uncertainty of 1.8 microhertz in the frequency.)
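The conversion in the parenthetical is simple to verify:

```python
f_cs = 9.192631770e9        # Hz, cesium clock frequency
fractional_accuracy = 2e-16

# Fractional accuracy times absolute frequency gives the absolute
# frequency uncertainty.
uncertainty_hz = fractional_accuracy * f_cs
print(f"{uncertainty_hz * 1e6:.2f} microhertz")
```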

You could run a series of experiments to make cesium clocks more accurate. But each measurement would have to consist of a lot of data taken over a very long time in order to minimize random fluctuations from measurement to measurement. In a series of experiments designed to push the clock accuracy down to 1 part in 10¹⁷, a 20-fold improvement, it could take an entire year just to make a single measurement.

Fortunately, there are other ways to minimize QPN. The noise is the same regardless of frequency, but its relative impact decreases the higher in frequency you go. And just as the average QPN decreases the longer you observe a clock, increasing the number of atoms you interrogate at the same time will boost the signal-to-noise ratio. The more you can sample in one go, the less uncertainty you'll have in the number of atoms that made the clock transition.
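A quick simulation illustrates the scaling. This is a toy model in which projection noise is stood in by coin flips at the 50 percent lock point; the spread of a single shot's estimate falls off as 1/√N:

```python
import random
import statistics

def shot_noise(n_atoms, shots=500, p=0.5, seed=42):
    """Spread of the measured excitation fraction across many shots,
    with each atom independently projected to 'excited' (prob p)."""
    rng = random.Random(seed)
    estimates = [
        sum(rng.random() < p for _ in range(n_atoms)) / n_atoms
        for _ in range(shots)
    ]
    return statistics.pstdev(estimates)

# The spread follows sqrt(p * (1 - p) / N): 100x the atoms per shot
# gives 10x less quantum projection noise.
print(shot_noise(100))     # ~0.05, i.e. 0.5 / sqrt(100)
print(shot_noise(10_000))  # ~0.005
```

The same 1/√N behavior is why moving from a single trapped ion to thousands of lattice-trapped atoms pays off so dramatically.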

Illustration: Emily Cooper. Clocking In: To trap thousands of atoms at a time, a series of optics is used to focus and select polarized laser light [pink]. This laser light is reflected back by a mirror, forming a lattice-like pattern of regions of high and low intensity as the laser beam interferes with itself [top illustration]. Atoms are attracted to thousands of bright areas in this optical lattice. Three pairs of electromagnets [one pair shown] are tuned to cancel external magnetic fields. A clock laser beam [blue] is used to excite electrons in the trapped atoms.

Moving to higher frequencies is what motivated work on the optical atomic clock. The first of these clocks was developed in the early 1980s, and nowadays they can be built from any of a number of neutral or ionized versions of elements, including mercury, strontium, calcium, ytterbium, and aluminum. What they all have in common are relatively high resonance frequencies, which lie in the optical spectrum around several hundred thousand gigahertz, some 10,000 times cesium's frequency. Using a higher frequency lowers the QPN, and it also lowers the relative impact of several factors that can shift the clock frequency. These include interactions with external magnetic fields coming from Earth or nearby metal (or, in Paris, the Métro lines). As an added bonus, if an optical clock is built with ions, those charged atoms can easily be trapped in an oscillating electric field that will cancel out most of their motion, effectively eliminating the Doppler effect.

But optical clocks have limitations of their own. If all other aspects of a clock are the same, the move to optical frequencies should lower the QPN to 0.01 percent of what it is in cesium. But many optical clocks are made with ions instead of neutral atoms, such as those used in cesium clocks. Because they're charged, ions are fairly easy to trap, but they also easily push on one another when placed close together, creating motion that's hard to control and causing a Doppler frequency shift. As a result, such clocks tend to use just one ion at a time and so are only about 20 times as stable and 25 times as accurate as the best cesium clocks, which can easily contain a million atoms. To get closer to the factor-of-10,000 boost in stability promised by optical clocks, we must find a way to boost the number of atoms in the optical clock, simultaneously interrogating many atoms so that the QPN averages out. And with the optical-lattice clock, researchers realized they could go quite big, measuring not just a handful of atoms but 10,000 or more at the same time.

It certainly isnt easy. To build a clock out of 10,000 atoms, you must find a way to make an atomic ensemble that is both tightly confined (to minimize the Doppler effect) and very low in density (to minimize electromagnetic interactions among the atoms). The atoms in a typical crystal move too fast and interact too strongly to work, so the best way to proceed is to produce an artificial material with a lattice of your own creation.

To build an optical-lattice clock, we start much the same way we do in many cold-atom experiments, with an ensemble of slow-moving, laser-cooled neutral atoms. We send these into a vacuum vessel containing a single laser beam that has been reflected back on itself. An interference pattern arises in the areas where the beam overlaps with itself, creating an optical lattice made of thousands of small pancakes of light. The atoms fall into the lattice like eggs into an egg carton because of a force that draws each of them toward a spot where the light intensity is at a maximum. Once the atoms are in place, we use a separate clock laser to excite the atoms so that we can measure the frequency of the clock transition.

The difficulty is that the clock atoms aren't so easy to coerce into this lattice. Inexpensive lasers have outputs in the milliwatts. To create a lattice strong enough to trap and hold a neutral atom, you need several watts of light. Such a powerful laser beam, however, can shift energy levels in clock atoms, pushing their transition frequency far from their natural state. The amount of this shift will vary with the intensity of the trapping light, and that intensity is hard to control. Even with very careful calibration, this large frequency shift would render the clock much more inaccurate than even the very first cesium clocks.

Fortunately, physicist Hidetoshi Katori conceived a workaround in the early 2000s. When atoms are hit with the trapping light, the energy associated with each electron orbital decreases. Katori, then at the University of Tokyo, noted that each orbital will respond differently, with an energy shift that will depend on the wavelength of the trapping light. For a specific, magic wavelength, the shift of both orbitals will be identical, and so the energy difference between the two orbitals will be unchanged. This magic wavelength, where the clock frequency stays the same whether the atoms are trapped or not, is different for each element. For strontium, it's 813 nanometers, in the infrared part of the spectrum. Ytterbium's magic wavelength is 759 nm; mercury's is in the ultraviolet part of the spectrum, at 362 nm.

Time Marches On: Atomic clocks have made great strides since their start in 1955, improving in accuracy by a factor of 10 or so each decade. Cesium clocks [green], which employ microwave radiation to interrogate ensembles of cesium atoms, were the first. These were surpassed in accuracy in the 2000s by optical clocks [pink], which use laser light and often just a single ion. This year, optical-lattice clocks, which incorporate thousands of atoms [blue], became the most accurate atomic clocks. The symbol by each optical-lattice point denotes the atomic species used in the clock: strontium (Sr), mercury (Hg), and ytterbium (Yb).

When Katori made his proposal, my group at the Paris Observatory's Systèmes de Référence Temps-Espace (LNE-SYRTE) department, which is responsible for maintaining France's reference time and frequency signals, had already been investigating the use of strontium for optical clocks. We set to work almost immediately to see if we could make an optical-lattice clock using strontium, competing at first with just two other groups that had long-standing experience working with cooled strontium: Katori's team in Tokyo and Jun Ye's group at JILA, in Boulder, Colo. A decade and many projects later, other groups have built lattice clocks using strontium and ytterbium. More experimental projects using mercury or magnesium, which require still higher-frequency and less-well-developed lasers, are also in the works.

One of the key factors in making optical-lattice clocks more accurate over the past few years has been the development of clock lasers with very narrow spectra, essentially just a small spike at one particular frequency. We need these to effectively explore the region around the transition frequency of the clock, to see in fine detail how a slight shift in the clock frequency affects the transition probability.

The best way to make narrow-lined laser light is to feed it into a mirrored chamber called a Fabry-Pérot cavity. After bouncing back and forth up to a million times inside this cavity, light of any arbitrary wavelength will have interfered with itself and canceled itself out. Only laser light whose wavelength fits a whole number of times into a round trip of the cavity emerges.
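That selection rule can be stated as a formula: a whole number of half-wavelengths must fit into the cavity length L, so transmitted frequencies are multiples of the free spectral range c/2L. A sketch, with an assumed 30-centimeter cavity (the article only says "a few dozen centimeters"):

```python
L_CAVITY = 0.30          # m, assumed cavity length
C_LIGHT = 299792458.0    # m/s

def resonant_frequency(m):
    """Frequency of the m-th longitudinal mode: 2L = m * wavelength,
    so f = m * c / (2 * L)."""
    return m * C_LIGHT / (2 * L_CAVITY)

# Neighboring transmitted frequencies are separated by the free
# spectral range, c / 2L.
fsr = resonant_frequency(1)
print(fsr / 1e6, "MHz between transmitted frequencies")
```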

While the cavity helps to filter out natural fluctuations in the frequency of a laser source, the technique isnt perfect. The frequency of the clock laser that emerges from the cavity can wobble around because of thermal fluctuations that cause the cavity to slightly expand or contract.

But over the past few years, researchers have found ways to help mitigate this effect. Cavities were made longer, so the relative impact of a small change in length is smaller. Vibrations were damped. The cavities were also cooled to cryogenic temperatures, to limit tiny expansions and contractions due to thermal energy.

The net result was much more stable clock lasers. Nowadays, over the few seconds it takes to prepare and probe the clock atoms, a 429-terahertz clock laser might drift in frequency by just 40 millihertz or so. For a typical cavity, with a length of a few dozen centimeters, that drift amounts to a change in cavity length of no more than a few percent of the size of a proton.
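That proton-size claim is easy to check with back-of-the-envelope arithmetic (the 30-centimeter cavity length below is an assumption in the range the article gives):

```python
f_laser = 429e12           # Hz, clock laser frequency
drift = 40e-3              # Hz, frequency drift over a few seconds
cavity_length = 0.30       # m, assumed few-dozen-centimeter cavity
proton_size = 1.7e-15      # m, approximate proton diameter

# The laser frequency tracks the cavity length, so a fractional
# frequency drift corresponds to the same fractional length change.
fractional_drift = drift / f_laser                # ~9.3e-17
length_change = fractional_drift * cavity_length  # ~2.8e-17 m
print(length_change / proton_size)                # a few percent of a proton
```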

Largely due to this effort, the stability reached within one day with cesium clocks, or within a few minutes with optical-ion clocks, can now be reached in 1 second with an optical-lattice clock, close to the QPN limit. This improved stability makes the clock itself a tool. The less time you need to gather data to measure an atomic clock's frequency with precision, the faster you can use the clock to run experiments to explore ways to make it better. Indeed, just three years after the first frequency stability improvements were demonstrated in optical-lattice clocks at the U.S. National Institute of Standards and Technology, these clocks took the lead for accuracy. The published record is now held by one of the strontium OLCs at JILA, which boasts an estimated accuracy of 6.4 parts in 10¹⁸.

A clock is only so good on its own. Evaluating one clock requires another, comparable clock to serve as a reference. When OLCs were first developed a decade ago, the initial comparisons were done between strontium OLCs and cesium clocks. These measurements were enough to establish the early promise of OLCs. But to truly ascertain the accuracy of an atomic clock, it's crucial to directly compare two clocks of the same type. If they are as accurate as advertised, their frequencies should be identical.

So as soon as we had finished building one strontium optical-lattice clock in 2007, we began work on a second. We finished the second clock in 2011, and set to work making the first comparison between two optical-lattice clocks in order to directly establish their accuracy, without relying on cesium clocks.

Once a second clock is built, previously undetectable problems soon become apparent. And indeed, we soon uncovered flaws that had been overlooked. One was the influence of static electric charges that had become trapped on the windows of the vacuum chamber. We had to shine ultraviolet light on the windows to efficiently dislodge the charges.

In a paper that appeared last year in Nature Communications, we showed that our two strontium OLCs agree down to the level of 1 part in 10¹⁶, a solid confirmation that these clocks are more accurate than the best cesium clocks. Earlier this year, Katori's team at the research institution Riken, in Wako, Japan, reported an agreement of a few parts in 10¹⁸ in similar clocks, this time enclosed in a cryogenic environment.

Incidentally, the frequency of an optical clock is so fast that no electronic device could possibly count its ticks. These sorts of clock comparisons rely on a new technology that's still very much in development: the frequency comb. This instrument uses femtosecond-long laser pulses to create a spectrum that consists of coherent, equally spaced teeth that span the visible and infrared spectrum. In effect, it acts like a ruler for optical frequencies.
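The ruler analogy can be made concrete. Each comb tooth sits at f_n = f_ceo + n × f_rep, where the repetition rate f_rep and offset f_ceo are both radio frequencies that ordinary electronics can count. The sketch below, with assumed comb parameters (not from the article), shows how a 429-terahertz clock laser gets pinned to a measurable radio-frequency beat note:

```python
f_rep = 250e6     # Hz, pulse repetition rate (assumed)
f_ceo = 20e6      # Hz, carrier-envelope offset frequency (assumed)
f_clock = 429e12  # Hz, strontium clock laser

# Find the comb tooth nearest the clock laser and the beat note
# between them; the beat is a radio frequency a counter can read.
n = round((f_clock - f_ceo) / f_rep)
f_tooth = f_ceo + n * f_rep
beat = abs(f_clock - f_tooth)

print(n)                  # tooth index: about 1.7 million
print(beat / 1e6, "MHz")  # countable radio-frequency beat note
```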

The ability to perform comparisons between OLCs pushes us further along the road to redefining the second. Before a redefinition in the International System of Units can take place, a large number of laboratories must demonstrate not only that they can implement the new standard but also that they can compare their measurements. Consensus is needed to establish that all the laboratories are on the same page. It is also necessary to ensure that the world can literally keep time: Coordinated Universal Time, the time by which the world's clocks are set, and the International Atomic Time it's derived from, are created by making a weighted average of a large number of microwave clocks around the world.

Cesium clocks are networked using the signals emitted by satellites and are compared by microwave transmission. This is good enough for microwave clocks but too unstable for distributing more-accurate optical-lattice clock signals. But soon, international comparisons of optical clocks will reach a new milestone. New fiber connections, built with dedicated phase-compensation systems that can cancel small timing shifts introduced by the lines, are now being constructed.

By the end of this year, thanks to a number of national and international projects, we expect to be able to start using such connections to make the first comparisons between optical-lattice clocks based at LNE-SYRTE in Paris and the Physikalisch-Technische Bundesanstalt, Germanys national metrology center, in Braunschweig. A link to the National Physical Laboratory, in London, which has strontium- and ytterbium-ion clocks, is also set to be completed early next year. These efforts will pave the way for an international metrology network that could enable a new standard for the second.

In the meantime, scientists have already begun using optical-lattice clocks as a tool to explore nature. One focus has been on measuring the frequency ratio between two clocks that use different types of atoms. This ratio depends on fundamental physical constants, such as the fine-structure constant, which could reveal new physics if it turns out to vary in time or from place to place.

Astronomers may also benefit from optical clocks. Atomic clocks are used as a time reference in radio astronomy, allowing astronomers to combine the light collected by telescopes separated by hundreds or thousands of kilometers to produce a virtual telescope, with an angular resolution equivalent to that of a single telescope spanning that entire distance. As optical atomic clocks mature, they could enable a similar feat for optical telescopes.

And it's not hard to imagine that optical-lattice clocks could offer new insight into the world beneath our feet. According to Einstein's theory of general relativity, a clock sitting on a denser part of Earth will tick slower relative to one situated on a part that's less dense. Although gravimeters can be used to measure gravitational force at any one point, measuring gravitational potential, which could shed light on different, deeper structures inside Earth, must be done by integrating the measurements of gravimeters at different points around Earth's surface or by measuring the orbits of satellites. Metrologists and geodesists are now teaming up to understand what optical-lattice clocks will be able to offer. It's possible that they could be used at different points around Earth to assist with oil detection, earthquake monitoring, and volcano prediction.
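The sensitivity involved is striking. A standard result from general relativity, the fractional rate shift g·h/c² for a height difference h near Earth's surface, shows why clocks at the 18th decimal place become useful geodetic instruments:

```python
G_SURFACE = 9.81         # m/s^2, gravitational acceleration at the surface
C_LIGHT = 299792458.0    # m/s

def fractional_shift(height_m):
    """Fractional rate difference between two clocks separated
    vertically by height_m near Earth's surface: g * h / c^2."""
    return G_SURFACE * height_m / C_LIGHT**2

print(fractional_shift(1.0))   # ~1.1e-16 per meter of elevation
print(fractional_shift(0.01))  # ~1.1e-18: a centimeter of height is
                               # within reach of the best lattice clocks
```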

In the meantime, there is still work to be done to keep improving the stability and accuracy of OLCs. Recently, a large effort has been made to fight the effect of black-body radiation. This radiation is unavoidably emitted by any physical body with nonzero temperature, including the vacuum chamber that surrounds the clock atoms. When it interacts with the atoms it shifts the energy levels of the clock transition. This shift can be corrected after the fact, but a precise knowledge of the temperature and emissivity of the vacuum chamber must be acquired. It is also possible to enclose the atoms in a cryogenic environment or use an atomic species that is inherently less sensitive to black-body radiation, such as mercury, a route that our group is exploring.

Before the end of the decade, new generations of ultranarrow lasers are also likely to help push stabilities below 1 part in 10¹⁷ after a single second of data gathering. That will make it practical for us to achieve an accuracy below 1 part in 10¹⁸, more than 100 times the precision of cesium clocks. As OLCs become more accurate, the scope of applications will continue to expand.

Even if OLCs are wildly successful, we won't abandon the cesium clock, which will remain more compact and less expensive to build. And in the future, OLCs may be supplanted by clocks of even higher frequencies that rely on energy transitions inside the atom's nucleus instead of among the electrons in orbit around it. These nuclear transitions are mostly out of reach of current laser technology, although researchers are starting to explore them.

But before long we will see yet another time standard that could significantly influence the way we relate to our universe. Just as surely as time keeps on ticking, improvements in our ability to measure it will go on.

This article originally appeared in print as "An Even Better Atomic Clock."

Jérôme Lodewyck, an associate professor at France's National Center for Scientific Research, enjoys doing battle with atoms, pinning down every factor that might affect their behavior. This compulsion came in handy in 2007, when he took a postdoctoral position at the Paris Observatory and was immediately assigned to build an optical-lattice clock from scratch. These clocks are now the world's most accurate and could one day redefine the second.

Continue reading here:

Introducing the World's Most Precise Clock - IEEE Spectrum


The quantum Internet: a glimpse at the future of connectivity – IT-Online

While many of us may not have heard of quantum computing, in the background it's quietly starting a tectonic shift in the world of tech. Quantum computers are capable of solving, in seconds, complex problems that would take the fastest supercomputers 10 000 years.

By Prenesh Padayachee, chief digital officer at Seacom

Although the tech has a long way to go before it hits the mainstream, researchers and scientists are looking for ways to harness the power of quantum mechanics and pave the way forward for the quantum Internet. The promise of unprecedented computation power and unbreakable cybersecurity could change communications forever, and lead to many astounding capabilities.

What is quantum computing?

Traditional computers encode information in 1s and 0s, and this basic unit of information is called a bit. Their computing power scales directly with the number of binary transistors that can be used to run computations.

A quantum computer, on the other hand, leverages the unique behaviours of particles on a sub-atomic level. It encodes information using the physical properties of individual particles (such as the polarization of a photon) to create quantum bits, or qubits, which, unlike classical bits, can exist in a superposition of 1 and 0. Classical computers exhibit a direct increase in computing power with every added transistor, whereas quantum computing power increases exponentially with every qubit added to the system.
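The exponential claim has a concrete meaning: describing the state of n qubits requires 2^n complex amplitudes, so every added qubit doubles the state space, while every added transistor adds just one more bit. A minimal sketch:

```python
def amplitudes(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe an
    n-qubit register: the state space doubles per qubit."""
    return 2 ** n_qubits

print(amplitudes(1))    # 2
print(amplitudes(10))   # 1024
print(amplitudes(300) > 10**82)  # True: more amplitudes than there are
                                 # atoms in the observable universe (~10^80)
```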

Quantum computers also take advantage of a unique phenomenon called entanglement, in which two particles that have interacted can be separated across vast distances while remaining correlated with one another. These particles are inextricably linked: knowing the physical state of one entangled particle tells you the state of the other, instantly and regardless of the distance between them. Importantly, though, these correlations cannot be used to transmit information faster than the speed of light.

The quantum roadmap

While quantum computing is yet to reach its true innovative potential, this futuristic tech has been around for longer than you may think. As early as 2004, a bank in Austria carried out the first quantum-encrypted money transfer. Three years later, the State of Geneva in Switzerland used quantum cryptography to protect its elections.

In 2016, Chinese scientists launched the world's first quantum satellite and were able to achieve quantum entanglement to perform communication experiments across hundreds of miles between Earth and space. They were also able to expand this into a quantum network that spans 4 600 kilometres across China. In 2021, a team of engineers created a network of quantum computers with a bandwidth switch that minimises operational costs and would make the theoretical quantum Internet more efficient.

Although it is still only a concept, researchers and scientists around the world are racing to build the world's first quantum Internet. In theory, a quantum Internet will harness the unprecedented capabilities of quantum mechanics by allowing quantum devices to exchange information over a wide network. While it is currently not possible to use quantum entanglement to communicate faster than the speed of light, a quantum network would have other benefits. Sending qubits through a quantum Internet would give us access to computational capabilities that surpass today's web applications.

Quantum computers can solve complex problems in fields such as supply chain management, chemistry, physics, biology, machine learning, and even finance. With the astounding capabilities of quantum computing power, our ability to innovate would only be limited by what we can imagine. It would also allow us to send information in the most secure way possible: using quantum cryptography.

A new standard for security

Quantum cryptography surpasses most classical encryption methods as it sends information in the form of a physical particle (such as a photon). It is impossible for an intruder to observe these particles without changing or destroying them, so a quantum Internet would be virtually unhackable.
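The detection principle can be illustrated with a toy, purely classical simulation in the spirit of the BB84 protocol (much simplified; real quantum key distribution involves actual photons plus further steps such as error correction and privacy amplification):

```python
import random

def run_bb84(n_photons, eavesdrop, rng):
    """Alice encodes random bits in random bases; Bob measures in
    random bases; they keep only rounds where their bases matched.
    Returns the error rate of that sifted key."""
    errors = sifted = 0
    for _ in range(n_photons):
        alice_bit = rng.getrandbits(1)
        alice_basis = rng.getrandbits(1)
        photon_bit, photon_basis = alice_bit, alice_basis
        if eavesdrop:  # intercept-resend attack
            eve_basis = rng.getrandbits(1)
            if eve_basis != photon_basis:
                photon_bit = rng.getrandbits(1)  # wrong basis: random result
            photon_basis = eve_basis  # Eve re-sends in her own basis
        bob_basis = rng.getrandbits(1)
        if bob_basis == photon_basis:
            bob_bit = photon_bit
        else:
            bob_bit = rng.getrandbits(1)  # wrong basis: random result
        if bob_basis == alice_basis:  # sifting: keep matching-basis rounds
            sifted += 1
            errors += bob_bit != alice_bit
    return errors / sifted

print(run_bb84(4000, eavesdrop=False, rng=random.Random(0)))  # 0.0
print(run_bb84(4000, eavesdrop=True, rng=random.Random(1)))   # ~0.25
```

An undisturbed channel yields a sifted key with no errors, while an intercept-resend eavesdropper corrupts roughly a quarter of the kept bits, which is exactly the kind of fingerprint that makes tampering detectable.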

At the same time, quantum computing could pose a threat to current methods of encryption in the near future. Most encryption protocols today use a combination of private and public keys, and with traditional computing methods, it is computationally infeasible to derive the private key from a public key, which holds only part of the relevant information. Quantum computers, however, are exceptionally good at the kind of math that underpins these schemes, and could derive private keys from openly accessible public keys.

RSA is a widely used encryption method that we rely on for secure data transmission. According to a research paper hosted by Cornell University, a 1024-bit RSA encryption would require approximately 1500 to 2000 qubits to decrypt with a quantum computer. Considering that IBM's latest quantum computer aims to have 1121 qubits by 2023, it may not be that long before current encryption methods become obsolete. Even the cryptocurrency blockchain, which is praised for its robust security, could be vulnerable. Deloitte has stated that presently, about 25% of the Bitcoins in circulation are vulnerable to a quantum attack.

Fortunately, current quantum computers still have high error rates and can only operate in lab conditions at temperatures near absolute zero. There are none that currently have enough qubits to pose a global security threat, but if quantum computing power continues to advance at its current exponential rate, we may soon have to rethink our encryption methods.

The era of quantum communication

Quantum communication may still be in its infancy, but as researchers find new ways to overcome the unique challenges of quantum mechanics, quantum computers are becoming exponentially more powerful and stable.

Quantum cryptography could become the de facto standard for secure communication systems, and if it does, businesses will need to ensure that they find future-facing technology partners. The era of quantum communication seems distant, but it's certainly worth thinking about what opportunities it may herald along the way.


Read the rest here:

The quantum Internet: a glimpse at the future of connectivity - IT-Online


Hear me out: why GI Joe: The Rise of Cobra isn't a bad movie – The Guardian

The year is 1641. We open in France, where, confusingly, everyone is speaking English. A Scottish man has been caught selling weapons to enemies of Louis XIII, and as punishment is forced to wear a red-hot iron mask forever. Cut to the not-too-distant future, where the man's descendant, Christopher Eccleston, is presenting a lecture about newly weaponised flying metal bugs to some Nato employees. "Originally developed to isolate and kill cancer cells, at MARS industries we discovered how to program nanomites to do almost anything. For example eat metal." It turns out nanomites can also be injected into rocket warheads, and thus the back story and premise of GI Joe: The Rise of Cobra are explained in less than a minute.

The opening sets the tone for the film that follows: speedy, irony-free B-movie action nonsense, delivered to you with the efficiency of a Big Mac on a Friday night. And if it requires Christopher Eccleston to do a PowerPoint presentation so we can get on with watching helicopters blow up in slow motion, then dammit, Christopher Eccleston will do a PowerPoint. On top of which, this particular Big Mac is filled with Channing Tatum.

His previous acting highlights include the Step Up dance movies and grinding topless in the background of the video for Ricky Martin's She Bangs, yet when asked about GI Joe in an interview in 2012, Channing Tatum said, "I fucking hate that movie." Luckily for us, in 2009 Channing Tatum did a three-movie deal with Paramount and was forced to accept the GI Joe role to avoid being sued.

Despite his dislike of the film, Channing Tatum is still Channing Tatum, and both he and his massive arms give it their all; he has clearly been to the Michael Bay School of Turning Around in Slow Motion While Holding a Machine Gun. After turning around slowly, he and his partner Marlon Wayans load some nanomite warheads into a jeep, refer to a group of muscular male soldiers as "ladies" and tell them to "mount up". Strap in, everyone.

What follows is a plot of such madness and a cast of characters so enormous (IMDb lists 144 in total) it's understandable that it required a PowerPoint to set it up. The truck is ambushed by Channing Tatum's ex-girlfriend, Sienna Miller, and after a lengthy fight in which several members of elite army unit GI Joe parachute in to save the day, Tatum and Wayans are transported to an underground base in the Egyptian desert to participate in a training montage soundtracked by the UK band Bus Stop's dance-rap cover of T. Rex's "Get It On". (Fun fact: Bus Stop were fronted by rapper and professional football manager Darren "Daz" Sampson, who went on to represent Britain in the 2006 Eurovision Song Contest.) Channing Tatum wins a gladiatorial pugil stick fight with GI Joe's resident masked ninja, Snake Eyes, and to celebrate the boys all take their tops off.

A semi-naked Marlon Wayans attempts to charm one of the Joes (they are collectively referred to as Joes), confusingly named Scarlett O'Hara, as she jogs on a treadmill while reading a book about quantum physics. (It is not clear why she needs to read a book about quantum physics when her job is beating people up; don't worry about it.) Tatum puts on something called a Delta-6 accelerator suit and travels to Paris to stop Sienna Miller blowing up the Eiffel Tower, before charging around the Champs-Élysées running after tanks, jumping through bus windows and flipping over Renault Méganes. Joseph Gordon-Levitt appears to explain cobras to everyone using a CGI snake in a glass box ("They are vicious"). Chaos reigns.

Writer/director Stephen Sommers was also in charge of both The Mummy and the 90s B-movie classic Deep Rising, and although in comparison GI Joe contains a more noughties, post-Transformers fixation on guns and machinery than those two films, there is a similar air of fun, unapologetic action campness throughout. If you're happy to suspend your disbelief to its very limits and relax into 1 hour and 58 minutes of revolving-door cast, plot delivered via flashbacks, laughably hammy dialogue, plus Channing Tatum blowing things up in slow motion, this is the film for you. And give me that kind of Big Mac silliness over po-faced, serious blockbuster action any day of the week.
