
Is life the result of the laws of entropy? – New Scientist

By Stephon Alexander and Salvador Almagro-Moreno

Can physics explain biology?

Shutterstock / Billion Photos

The following is an extract from our Lost in Space-Time newsletter. Each month, we hand over the keyboard to a physicist or two to tell you about fascinating ideas from their corner of the universe. You can sign up for Lost in Space-Time here.

At the dawn of time, the universe exploded into existence with the big bang, kick-starting a chain of events that led to subatomic particles clumping together into atoms, molecules and, eventually, the planets, stars and galaxies we see today. This chain of events also led to us, although we often see life and the formation of the universe as separate, or "non-overlapping magisteria", to borrow biologist Stephen Jay Gould's phrase.

To cosmologists, complex systems like life seem of little consequence to the problems they are trying to solve, such as those relating to the big bang or the standard model of particle physics. Similarly, to biologists, life is housed in a biosphere that is decoupled from the happenings of the grandiose universe. But is that right?

Notable scientists, including John von Neumann, Erwin Schrödinger, Claude Shannon and Roger Penrose, have entertained the idea that there could be insights to gather from looking at life and the universe in tandem.

Physicist Erwin Schrödinger's views were particularly interesting, as his audacious speculations and predictions in biology have been hugely influential. In 1943, he gave a series of lectures at Trinity College Dublin that would eventually be published in a tiny, but mighty, book called What Is Life? In it, he speculated on how physics could team up with biology and chemistry to explain how life emerges from inanimate matter.

Schrödinger believed that the same laws of physics that describe a star must account for the intricate processes of metabolism within a living cell. He knew that the physics of his time was insufficient to explain some of the ingenious experimental findings that had already been made about living cells, but he ploughed on regardless, attempting to use the physics he knew to explain biology.

He said that quantum mechanics must play a key role in life, as it is necessary for making atoms stable and enabling them to bond in the molecules found in matter, both living and not. For non-living matter, such as in metal, quantum mechanics allows molecules to organise in interesting ways, such as periodic crystals: lattices of molecules with high degrees of symmetry. But he believed that periodicity was too simple for life; instead, he speculated that living matter is governed by aperiodic crystals. He proposed that this type of non-repetitive molecular structure should house a code-script that would give rise to "the entire pattern of the individual's future development and of its functioning in the mature state". In other words, he was stumbling across an early description of DNA.

Before Schrödinger's time, biologists had hit upon the idea of the gene, but it was just an undefined unit of inheritance. Today, the idea that genes are governed by a code that programs the structures and mechanisms of cells and determines the fate of living organisms seems so familiar that it feels like common sense. Yet exactly how this is accomplished at a molecular level is still being teased out by biologists.

What is particularly remarkable is that Schrödinger used reasoning stemming from quantum mechanics to formulate his hypothesis. He was an outsider to biology, and this naturally made him bring a different approach.

Physics and biology have moved on a lot since Schrödinger's day. What if we were to follow the same process and ask "what is life?" today?

Over the years we, the authors of this newsletter, have developed a pattern. We meet up, sometimes over a drink, to exchange ideas and share our latest musings in cosmology or molecular biology. We have often stayed up late talking while listening to our favourite jazz or flamenco musicians. In part, our conversations are an exercise in deliberately generating an outsider perspective, as Schrödinger did, hopefully to benefit each other's research. But it is also just a lot of fun.

Specifically, since 2014 we have developed a common intuition that there is a hidden interdependence between living systems and cosmology, as demonstrated in some of our publications. To understand this, we need to talk about entropy, a measure of disorder, and how it flows in the universe, both at biological and cosmological scales.

In the early universe, before there were stars and planets, space was mostly filled with an equal amount of radiation and matter. As this mixture warmed and moved about more, it became less ordered and its entropy increased. But as the universe expanded, it distributed radiation and matter in a homogeneous, ordered fashion, lowering the entropy of the universe.

As the universe further expanded and cooled, complex structures such as stars, galaxies and life formed. The second law of thermodynamics says that entropy always increases, but these structures had more order (and therefore less entropy) than the rest of the cosmos. The universe can get away with this because the regions of lower entropy are concentrated within cosmic structures, while entropy in the universe as a whole still increases.

We believe this entropy-lowering network of structures is the main currency for the biosphere and life on planets. As the father of thermodynamics, Ludwig Boltzmann, said: "The general struggle for existence of animate beings is therefore not a struggle for raw materials nor for energy, which exists in plenty in any body in the form of heat, but a struggle for entropy, which becomes available through the transition of energy from the hot sun to the cold earth."

As the universe deviates from homogeneity, by seeding and forming lower entropy structures, entropy elsewhere in the universe continues to grow. And entropy also tends to grow within those structures. This makes entropy, or its absence, a key player in sustaining cosmic structures, such as stars and life; therefore, an early lifeless universe with low entropy is necessary for life here on Earth. For example, our sun radiates energy that is absorbed by electrons in plants on Earth and used in the functions they need to live. Plants release this energy in the form of heat, giving back to the universe more entropy than was taken in.
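
To make Boltzmann's point concrete, here is a back-of-the-envelope sketch (in Python, using round-number temperatures assumed for the Sun's photosphere and Earth's surface) of the entropy bookkeeping for a single joule of sunlight that Earth absorbs and later re-radiates as heat:

```python
# Back-of-the-envelope entropy bookkeeping for sunlight absorbed and
# re-radiated by Earth. The temperatures are illustrative round numbers.

T_SUN = 5800.0    # K, approximate temperature of the Sun's photosphere
T_EARTH = 290.0   # K, approximate mean surface temperature of Earth

def entropy_flow(heat_joules, temperature_kelvin):
    """Entropy carried by a quantity of heat at a given temperature: dS = dQ / T."""
    return heat_joules / temperature_kelvin

Q = 1.0  # one joule of solar energy, absorbed and later re-emitted as heat

s_in = entropy_flow(Q, T_SUN)     # entropy arriving with the sunlight
s_out = entropy_flow(Q, T_EARTH)  # entropy leaving with the re-radiated heat

print(f"entropy in:  {s_in:.5f} J/K")
print(f"entropy out: {s_out:.5f} J/K")
print(f"net entropy exported: {s_out - s_in:.5f} J/K (~{s_out / s_in:.0f}x what arrived)")
```

Every joule thus leaves Earth carrying roughly 20 times more entropy than it brought in; that gap is the budget that living structures can spend on maintaining their own order.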

Unfortunately, it is difficult to explain with our current understanding of physics why the entropy was so low in the early universe. In fact, this problem of the low entropy we demand of the big bang is one of the major problems with this theory.

The biology side of the story stems from Salvador's research into the genetic and ecological drivers that lead populations of harmless bacteria to evolve and emerge as pathogens. Crucial to the story is that it isn't just a question of the genetic code of the bacteria. One of Salvador's mantras is that life is an adaptive phenomenon responding to constant and unexpected changes in pressures from the environment.

This makes an organism an emergent phenomenon, where the final shape of it isn't contained in the individual pieces that make it up, but can be influenced by a series of larger systems to which it belongs. Living things comprise a network of interactions mediated through the environment. A living system is able to regulate billions of cells to maintain its overall functioning. Beyond that, collections of organisms belong to a network called an ecosystem, which also maintains a dynamical equilibrium.

This extends all the way to networks at lifes largest scales. The idea of Earth being a self-regulating ecosystem was co-discovered by James Lovelock and Lynn Margulis in the 1970s, and it became known as the Gaia hypothesis. The takeaway for us is that the flow of negative entropy exists not only for individual living things, but for the entire Earth.

The sun sends free energy to Earth, and through a chain of complex interactions, the energy gets distributed through a network of interactions to living things, each relying on it to maintain its complexity in the face of increasing disorder. To contextualise the role of life within the framework of thermodynamics, we define these order-generating structures (such as a cell) as Units Of Negentropy, or UONs. But there's no such thing as a free lunch. When UONs release this energy back into the environment, they mostly do so in a form that has higher entropy than was received.

This uncanny parallel between living systems, UONs and the evolution of the universe may seem like a coincidence, but we choose not to think of it this way. Instead, we propose that it is a central organising principle of the evolution of the cosmos and the existence of life. Salvador elected to call this the entropocentric principle, a wink at the anthropic principle, which, in its strong form, states that the universe is fine-tuned for life. This arises because the laws of nature seem to be just right for life. For example, if the strength of the nuclear force that binds the hearts of atoms differed by a few per cent, stars wouldn't be able to produce carbon and there would be no carbon-based life.

The fine-tuning problem may not be as severe as it seems, though. In research Stephon conducted with colleagues, he showed that the universe can be fit for life even when we let the constants of nature like gravity and electromagnetism vary, so long as they vary simultaneously. Maybe we don't need the anthropic principle after all. The entropocentric principle, on the other hand, is harder to shake. If the universe was unable to provide pathways that enabled it to create regions of lower entropy, then life as we know it wouldn't exist. This leaves us wondering: do we live in a cosmic biosphere or is the universe a cosmic cell?

Stephon Alexander is a theoretical physicist at Brown University in Rhode Island who spends his time thinking about cosmology, string theory and jazz, and wondering if the universe is a self-learning AI. He is author of the book Fear of a Black Universe. Salvador Almagro-Moreno is a molecular biologist at the University of Central Florida who investigates emergent properties in complex biological systems, from protein evolution to pandemic dynamics.


Rice physicist wins DOE early career award | Rice News | News and Media Relations | Rice University – Rice News

Guido Pagano, an assistant professor of physics and astronomy at Rice University, has received a prestigious Early Career Research Award from the Department of Energy to continue his development of a quantum simulator.

The five-year award for $750,000 is one of 56 granted to university-based researchers in the round of grants announced by the DOE's Office of Science. The office also awarded grants to 27 scientists at national laboratories.

"I'm really thrilled to get this award because it gives me the possibility to address a very promising line of research," said Pagano, who has also won a National Science Foundation CAREER Award and an Office of Naval Research Young Investigator Award for different lines of research this year. "I feel grateful because my ideas are now basically fully funded."

Pagano, who joined Rice and its Quantum Initiative in 2019 and shortly thereafter had papers in both Nature and Science, will use the grant to complete his lab's custom ion trap, while already planning the design of a second system. The lab's focus is on using trapped ions to simulate quantum systems of interest. Currently, Pagano and his students are building a laser-based system able to manipulate individual atomic ions.

"The system we're putting together is complex and flexible enough to connect to theories that nuclear physics researchers are interested in, like simulating gauge field theories," said Pagano, who won the grant in part on the strength of a close collaboration with a University of Maryland theorist, Zohreh Davoudi, on trapped-ion research.

A primary goal for the lab is to tailor new ways in which trapped ions can interact with one another, so as to directly map them to gauge field theories.

The new apparatus is designed to both write and read quantum information on the ions in many different ways. "The system is designed to give us much more flexibility compared to what we had before," Pagano said. "We have so many ways to manipulate the ions, either using multiple atomic states or addressing them from different directions, in ways that other experiments are unable to do."

Pagano's field falls under nuclear physics, among the DOE's set of research topics. These also include advanced scientific computing research, basic energy sciences, biological and environmental research, fusion energy sciences, high energy physics and isotope and accelerator research and development.

"Supporting talented researchers early in their career is key to fostering scientific creativity and ingenuity within the national research community," said DOE Office of Science Director Asmeret Asefaw Berhe. "Dedicating resources to these focused projects led by well-deserved investigators helps maintain and grow America's scientific skill set for generations to come."


Why are there exactly 3 generations of particles? – Big Think

Everything that exists in our Universe, as far as we understand it, is made up of particles and fields. At a fundamental level, you can break everything down until you reach the limit of divisibility; once things can be divided no further, we proclaim that we've landed upon an entity that's truly fundamental. To the best of our current understanding, there are the known elementary particles, those represented by the Standard Model of elementary particle physics, and then there are the unknowns: things that must be out there beyond the confines of the Standard Model, but whose nature remains unknown to us.

In the latter category are things like dark matter, dark energy, and the particle(s) responsible for creating the matter-antimatter asymmetry in our Universe, as well as any particles that would arise from a quantum theory of gravity. But even within the Standard Model, there are things for which we don't quite have an adequate explanation. The Standard Model consists of two types of particles: the fermions (the quarks and leptons that make up matter) and the bosons (the particles that mediate the fundamental forces).

While there's only one copy of each of the bosons, for some reason, there are three copies of each of the fermionic particles: they come in three generations. Although it's long been accepted and robustly experimentally verified, the three-generational nature of the Standard Model is one of the great puzzles of nature. Here's what we know so far.

The gauge bosons mediate the three fundamental quantum forces of our Universe: there is only one photon to mediate the electromagnetic force, there are three bosons mediating the weak force, and eight mediating the strong force. This suggests that the Standard Model is a combination of three groups: U(1), SU(2), and SU(3).

Although the Standard Model possesses an incredibly powerful framework, leading to, by many measures, our most successful physical theory of all time, it also has limitations. It makes a series of predictions that are very robust, but then has a large number of properties that we have no way of predicting: we simply have to go out and measure them to determine just how nature behaves.

The particles and forces of the Standard Model. Any theory that claims to go beyond the Standard Model must reproduce its successes without making additional predictions that have already been shown to not be true. Pathological behavior that would already be ruled out is the largest source of constraints on beyond-the-Standard Model scenarios.

But what the Standard Model doesn't tell us is also profound.

All of these things can only, at least as we currently understand it, be measured experimentally, and it's from those experimental results that we can determine the answers.


Fortunately, we're good enough at experimental particle physics that we've been able to determine the answers to these questions through a series of both clever and brute-force observations and experiments. Every single one of the Standard Model's particles and antiparticles has been discovered, their particle properties have been determined, and the full scope of what exists in the Standard Model (three generations of fermions that are all massive, where quarks of like charges and the massive neutrinos all mix together) is now unambiguous.

The rest masses of the fundamental particles in the Universe determine when and under what conditions they can be created, and also describe how they will curve spacetime in General Relativity. The properties of particles, fields, and spacetime are all required to describe the Universe we inhabit, but the actual values of these masses are not determined by the Standard Model itself; they must be measured to be revealed.

The two major ways that we know there are three generations of fermions, no more and no less, are as follows.

1.) The Z-boson, the neutral but very massive weak boson, has a series of different decay pathways. About 70% of the time, it decays into hadrons: particles made up of quarks and/or antiquarks. About 10% of the time, it decays into charged leptons: either the electron (1st generation), muon (2nd generation), or tau (3rd generation) flavor, all with equal probabilities. And about 20% of the time, predicted to be exactly double the frequency at which it decays into charged leptons, it decays into neutral leptons: the neutrinos, with equal probability for each of the various flavors.

These neutrino decays are invisible, since it would take about a light-year's worth of lead to have a 50/50 shot of detecting your average neutrino. The fact that the fraction of Z-bosons that decays into invisible constituents (i.e., neutrinos) is exactly double the fraction that decays into the known charged leptons tells us that there are only three species of neutrinos that are below half the mass of the Z-boson, or around 45 GeV/c². If there is a fourth generation of neutrino (the neutrino being the lightest massive particle in each of the three known generations), it's more than a trillion times more massive than any of the other neutrinos.

The final results from many different particle accelerator experiments have definitively shown that the Z-boson decays to charged leptons about 10% of the time, neutral leptons about 20%, and hadrons (quark-containing particles) about 70% of the time. This is consistent with 3 generations of particles and no other number.
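
As a rough sketch of that counting argument, the number of light neutrino species can be estimated by dividing the Z-boson's measured "invisible" decay width by the width expected for a single neutrino flavour. The partial widths below are approximate world-average values, quoted here as assumptions for illustration rather than figures from the article:

```python
# Estimating the number of light neutrino species from Z-boson decay widths.
# Partial widths are approximate values in MeV, used only for illustration.

GAMMA_INVISIBLE = 499.0          # measured width for Z decays to invisible final states
GAMMA_PER_NEUTRINO = 167.2       # predicted width for Z decays to one neutrino flavour
GAMMA_PER_CHARGED_LEPTON = 84.0  # measured width for Z decays to one charged-lepton flavour

n_neutrinos = GAMMA_INVISIBLE / GAMMA_PER_NEUTRINO
print(f"estimated light neutrino species: {n_neutrinos:.2f}")   # close to 3

# The 'exactly double' statement in the text: each neutrino flavour is expected
# to contribute roughly twice the width of each charged-lepton flavour.
print(f"neutrino / charged-lepton width ratio: {GAMMA_PER_NEUTRINO / GAMMA_PER_CHARGED_LEPTON:.2f}")
```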

2.) The presence of neutrinos that were created in the early Universe, during the first ~second of the hot Big Bang, imprints itself onto other observable cosmic signals.

In addition to the constraints on neutrinos, there are no additional charged leptons or quarks at masses at or below 1.2 and 1.4 TeV, respectively, from experimental constraints at the Large Hadron Collider (and the fact that probabilities must always add up to 100%).

All told, this strongly disfavors the existence of a fourth (or higher) generation of particles.

If there were no oscillations due to matter interacting with radiation in the Universe, there would be no scale-dependent wiggles seen in galaxy clustering. The wiggles themselves, once the non-wiggly part is subtracted out, are dependent on the impact of the cosmic neutrinos theorized to be present by the Big Bang. Standard Big Bang cosmology with three neutrino species corresponds to a parameter value of 1.

With the exception of the neutrinos, which appear to be just as stable in the electron species as they are in either the muon or tau species, the only stable charged particles (including neutral composite particles with charged, fundamental constituents) in the Universe are made out of first-generation quarks and leptons. The muon is the longest-lived unstable particle, and even it only has a mean lifetime of 2.2 microseconds. If you have a strange (or heavier) quark, your lifetime is measured in nanoseconds or less; if you have a tau lepton, your lifetime is measured in fractions of a picosecond. There are no stable species that contain second- or third-generation quarks or charged leptons.

There are no hints in the decays of the most massive particles (the W, the Z, the Higgs or the top quark) that there are any particles in addition to the ones we know. When we look at the mass ratios of the different generations, we find that the four separate types of particles, namely the up-type quarks, the down-type quarks, the charged leptons and the neutrinos,

all have significantly different mass ratios between the generations from one another. In addition, although quarks mix with one another and neutrinos mix across the generations, the ways in which they mix are not identical to each other. If there is a pattern or an underlying cause or reason as to why there are three generations, we haven't uncovered it yet.

Instead of an empty, blank, three-dimensional grid, putting a mass down causes what would have been straight lines to instead become curved by a specific amount. In General Relativity, we treat space and time as continuous, but all forms of energy, including but not limited to mass, contribute to spacetime curvature. The deeper you are in a gravitational field, the more severely all three dimensions of your space are curved, and the more severe the phenomena of time dilation and gravitational redshift become. It is not known if there is a connection between the number of spatial dimensions and the number of fermionic generations.

One of the ideas that's sometimes floated is really just a hint: we have three generations of fermionic particles, and we have three spatial dimensions in our Universe. On the other hand, we have only one generation of bosonic particles, and one time dimension in our Universe.

Could this be a potential link: the number of spatial dimensions with the number of generations of fermions, and the number of time dimensions with the number of generations of bosons?

Maybe, but this line of thought doesn't provide any obvious connections between the two. However, pursuing it does help us understand what similarly-minded connections aren't present. Particles don't have different spins or spin-modes across generations, indicating that intrinsic angular momentum is simple and unrelated to either generations or dimensions. There is CP-violation in the (weak) decays of heavy quarks, and that requires a minimum of three generations, but we still don't know why there's no CP-violation in the strong decays.

If you're looking at 3 as though it's a mysterious number, you might note:

but none of them have any known connection to either the number of spatial dimensions or the number of generations. As far as we can tell, it's all just coincidence.

The difference between a Lie algebra based on the E(8) group and that of the Standard Model. The Lie algebra that defines the Standard Model is mathematically a 12-dimensional entity; the E(8) group is fundamentally a 248-dimensional entity. There is a lot that has to go away to get back the Standard Model from String Theories as we know them, and there are numerous ways to recover three generations based on how the various symmetries are broken in String Theory.

Perhaps. By adding in additional symmetries and by considering larger gauge groups, it's possible to come up with a rationale for why there would be three, and only three, generations of particles. Indeed, that's not too far-fetched. In supersymmetry, there would be more than double the number of particles present in the Standard Model, with an additional fermion for every boson, an additional boson for every fermion, and multiple Higgs particles as well as supersymmetric Higgsinos that would exist.

In string theory, we're required to go to even greater states of symmetry, with larger gauge groups that are capable of admitting the particles of the Standard Model many times over. It is certainly possible, with such a wide set of variables to play with, to choose a way that these very large gauge groups might break to not only give rise to the Standard Model, but to a Standard Model that has three identical copies of its fermions, but no additional bosons.

But, again, there's no reason that we know of that dictates why this ought to be the case. When you strike a pane of glass with a rock, it's possible that the glass will shatter in such a way that you'll wind up with three specific shards that are identical; that's a plausible outcome. But unless you can predict those shards in advance, the idea doesn't have any predictive power. Such is the case with string theory at present: it could lead to three generations of fermionic particles, but there's no way to predict such an outcome.

A geometrical interpretation of the Koide formula, showing the relative relationship between the three particles that obey its particular mathematical relationship. Here, as was its original intent, it's applied to the charged leptons: the electron, muon, and tau particles.

Back in 1981, physicist Yoshio Koide was looking at the then-known particles of the Standard Model and their particle properties, and took particular notice of the rest masses of the electron, muon, and tau particles. They are approximately 0.511 MeV/c² for the electron, 105.7 MeV/c² for the muon, and 1776.9 MeV/c² for the tau.

Although it might appear that there's no relationship at all between these three masses, his eponymous Koide formula indicated differently. One of the rules of quantum physics is that any particles with the same quantum numbers will mix together. With the exception of lepton family number (i.e., the fact that they're in different generations), the electron, muon, and tau do have identical quantum numbers, and so they must mix.

What Koide noted was that mixing would generally lead to the following formula: take the sum of the three masses and divide it by the square of the sum of their square roots, and the result should be a constant,

where that constant must lie between 1/3 and 1. When you put the numbers in, that constant just happens to be a simple fraction that splits the range perfectly: 2/3.

The Koide formula, as applied to the masses of the charged leptons. Although any three numbers could be inserted into the formula, guaranteeing a result between 1/3 and 1, the fact that the result is right in the middle, at 2/3 to the limit of our experimental uncertainties, suggests that there might be something interesting to this relation.
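
A quick calculation makes the coincidence concrete. Using approximate charged-lepton masses (illustrative values; the measured masses carry small uncertainties), the ratio lands almost exactly on 2/3:

```python
from math import sqrt

# Approximate charged-lepton rest masses in MeV/c^2 (illustrative values).
m_e, m_mu, m_tau = 0.511, 105.66, 1776.86

# Koide's ratio: the sum of the masses divided by the square of the sum of their square roots.
Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2

print(f"Koide ratio Q = {Q:.5f}")            # ~0.66666, i.e. almost exactly 2/3
print(f"allowed range: {1/3:.5f} to 1.00000")
```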

But even with all that said, there's no underlying reason for any of this; it's just a suggestive correlation. There may be a deep reason as to why there are three generations (no more, no less) of fermionic particles in the Standard Model, but as far as what that reason might be, we have no indicators or evidence that are any better than these tenuous connections.

The experimental data and the theoretical structure of the Standard Model, combined, allow us to conclude with confidence that the Standard Model, as we presently construct it, is now complete. There are no more Standard Model particles out there, not in additional generations nor in any other yet-undiscovered place. But there are, at the same time, certainly puzzles about the nature of the Universe that require us to go beyond the Standard Model, or we'll never understand dark matter, dark energy, the origin of the matter-antimatter asymmetry, and many other properties that the Universe certainly possesses. Perhaps, as we take steps towards solving those mysteries, we'll take another step closer to understanding why the Standard Model's particle content is neither greater nor lesser than it is.


Difficult-to-observe effect confirms the existence of quark mass – EurekAlert

Image: A cascade of particles and gluons initiated by a decelerating charm quark. The more developed the cascade, the lower the energies of secondary particles and the greater the opening angle of the dead cones avoided by subsequent gluons. (Credit: CERN)

A phenomenon that directly proves the existence of quark mass has been observed for the first time in extremely energetic collisions of lead nuclei. A team of physicists working on the ALICE detector at the Large Hadron Collider can boast this spectacular achievement: the observation of the dead cone effect.

The objects that make up our physical everyday life can have many different properties. Among these, a fundamental role is played by mass. Despite being so fundamental, mass has a surprisingly complex origin. Its primary source is the complex interactions binding triplets of quarks in the interiors of protons and neutrons. In modern physics it is assumed that the masses of the quarks themselves, originating from their interactions with the Higgs field (its manifestations are the famous Higgs bosons), contribute only a few percent to the mass of a proton or neutron. However, this has only been a hypothesis. Although the masses of single quarks have been determined from measurements for many years, only indirect methods were used. Now, thanks to the efforts of scientists and engineers working in Geneva at the LHC of the European Organization for Nuclear Research (CERN), it has finally been possible to observe a phenomenon that directly proves the existence of the mass of one of the heavy quarks.

"When lead nuclei collide at the LHC particle accelerator, the energy density can become so great that protons and neutrons decay and momentarily form quark-gluon plasma. The quarks inside then move in a powerful field of strong interactions and begin to lose energy by emitting gluons. However, they do this in a rather peculiar way, which our team was the first to succeed in observing," Prof. Marek Kowalski from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow starts to explain. Prof. Kowalski is one of the members of a large international collaboration carrying out measurements using the ALICE detector.

Gluons are particles that carry strong interactions between quarks. Their role is therefore similar to that of photons, which are responsible for the electromagnetic interactions between, for example, electrons. In electrodynamics, there is a phenomenon concerning electrons decelerating in an electromagnetic field: they lose energy by emitting photons, and the higher the energy of the electron, the more often the photons fly in a direction increasingly consistent with its direction of motion. This effect is the basis of today's free-electron lasers: unique, powerful devices capable of producing ultra-short pulses of X-rays.

"Electrons decelerating in a magnetic field like to emit photons 'forward', in an angular cone. The higher their original energy, the narrower the cone. Quarks have quite the opposite predilection. When they lose energy in a field of strong interactions, they emit gluons, but the lower the energy and the larger the mass of the quark, the fewer gluons fly 'forward'," says Prof. Kowalski, and specifies: "It follows from the theory that there should be a certain angular cone around the direction of quark motion in which gluons do not appear. This cone, which is more divergent the lower the energy of the quark and the higher its mass, is called the dead cone."
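
To get a feel for the geometry, the opening half-angle of the dead cone is, to a first approximation, the ratio of the quark's mass to its energy. The short sketch below assumes a charm-quark mass of about 1.27 GeV/c² and a few illustrative energies; it is an order-of-magnitude illustration, not part of the ALICE analysis:

```python
from math import degrees

M_CHARM = 1.27  # GeV/c^2, approximate charm-quark mass (illustrative value)

def dead_cone_half_angle(mass_gev, energy_gev):
    """Approximate dead-cone half-angle in radians: theta_0 ~ m / E."""
    return mass_gev / energy_gev

for energy in (5.0, 10.0, 20.0, 50.0):
    theta = dead_cone_half_angle(M_CHARM, energy)
    print(f"E = {energy:5.1f} GeV -> dead cone ~ {theta:.3f} rad ({degrees(theta):.1f} deg)")
```

The trend matches the description above: the lower the quark's energy, and the heavier the quark, the wider the cone in which gluon emission is suppressed.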

Theorists predicted the phenomenon of the dead cone more than 30 years ago. Unfortunately, its existence in experiments has so far been noticed only indirectly. Both the nature of the phenomenon and the recording process make it extremely difficult to observe directly. A decelerating quark emits gluons, which themselves can emit further gluons at different angles or transform into secondary particles. These particles have smaller and smaller energies, so the gluons they emit will avoid larger and larger dead cones. To make matters worse, individual detectors can only record this complex cascade in its final state, at different distances from the collision point, and therefore at different times. To observe the dead cone effect, millions of cascades produced by charm quarks had to be reconstructed from fragmentary data. The analysis, performed with sophisticated statistical tools, included data collected during the three years the LHC was in operation.

Experimental confirmation of the existence of the dead cone phenomenon is an achievement of considerable physical significance. This is because the world of quarks and gluons is governed by strong interactions described by a theory called quantum chromodynamics, which predicts that the dead cone effect can only occur when a quark emitting gluons has non-zero mass. The present result, published in the prestigious journal Nature, is therefore the first direct experimental confirmation of the existence of quark masses.

"In the gigantic amount of data collected at the ALICE detector during the collision of lead nuclei and protons, we have traced a phenomenon that we know can only occur in nature when quarks have non-zero masses. Current measurements do not allow us to estimate the magnitude of the mass of the charm quarks we observed, nor do they tell us anything about the masses of quarks of other kinds. So we have a spectacular success, but in fact it is only a prelude to a long line of research," stresses Prof. Kowalski.

The first direct observation of the dead cone effect involved only gluons emitted by charm (c) quarks. Scientists now intend to look for dead cones in processes involving quarks with larger masses, especially beauty (b) quarks. This will be a huge challenge because the higher the mass of the quark, the less frequently it is produced in collisions, and therefore the more difficult it will be to collect a number of cases that will guarantee adequate reliability of statistical analyses.

The reported research is of fundamental importance to modern physics. This is because the Standard Model is the basic tool currently used to describe phenomena involving elementary particles. Masses of quarks are the key constants here, responsible for the correspondence between theoretical description and physical reality. It is therefore hardly surprising that the observations of dead cones, raising hopes for direct measurements of quark masses, are of such interest to physicists.

The Henryk Niewodniczański Institute of Nuclear Physics (IFJ PAN) is currently one of the largest research institutes of the Polish Academy of Sciences. A wide range of research carried out at IFJ PAN covers basic and applied studies, from particle physics and astrophysics, through hadron physics, high-, medium-, and low-energy nuclear physics, condensed matter physics (including materials engineering), to various applications of nuclear physics in interdisciplinary research, covering medical physics, dosimetry, radiation and environmental biology, environmental protection, and other related disciplines. The average yearly publication output of IFJ PAN includes over 600 scientific papers in high-impact international journals. Each year the Institute hosts about 20 international and national scientific conferences. One of the most important facilities of the Institute is the Cyclotron Centre Bronowice (CCB), which is an infrastructure unique in Central Europe, serving as a clinical and research centre in the field of medical and nuclear physics. In addition, IFJ PAN runs four accredited research and measurement laboratories. IFJ PAN is a member of the Marian Smoluchowski Kraków Research Consortium: "Matter-Energy-Future", which in the years 2012-2017 enjoyed the status of the Leading National Research Centre (KNOW) in physics. In 2017, the European Commission granted the Institute the HR Excellence in Research award. The Institute holds A+ Category (the highest scientific category in Poland) in the field of sciences and engineering.

CONTACTS:

Prof. Marek Kowalski

Institute of Nuclear Physics, Polish Academy of Sciences

tel.: +48 12 6628074

email: marek.kowalski@cern.ch, marek.kowalski@ifj.edu.pl

SCIENTIFIC PUBLICATIONS:

Direct observation of the dead-cone effect in quantum chromodynamics

ALICE Collaboration

Nature 605, 440-446 (2022)

DOI: https://doi.org/10.1038/s41586-022-04572-w

LINKS:

http://www.ifj.edu.pl/

The website of the Institute of Nuclear Physics, Polish Academy of Sciences.

http://press.ifj.edu.pl/

Press releases of the Institute of Nuclear Physics, Polish Academy of Sciences.

IMAGES:

IFJ220609b_fot01s.jpg

HR: http://press.ifj.edu.pl/news/2022/06/09/IFJ220609b_fot01.jpg

A cascade of particles and gluons initiated by a decelerating charm quark. The more developed the cascade, the lower the energies of secondary particles and the greater the opening angle of dead cones avoided by subsequent gluons. (Source: CERN)



Microsoft aims to win the race to build a new kind of computer. So does Amazon – Sunbury Daily Item

SEATTLE - The tech giants are locked in a race.

It might not end for another decade, and there might not be just one winner.

But, at the finish line, the prize they promise is a speedy machine, a quantum computer, that will crack in minutes problems that can't be solved at all today.

Builders describe revolutionary increases in computing power that will accelerate the development of artificial intelligence, help design new drugs and offer new solutions to help fight climate change.

Relying on principles of physics and computer science, researchers are working to build a quantum computer, a machine that will go beyond the capabilities of the computers we use today by moving through information faster.

Unlike the laptop screen we're used to, quantum computers display all their inner organs. Often cylindrical, the computers are an intimidating network of coils, plates, wires and bolts. And they're huge.

"We're talking about computing devices which are just unimaginable in terms of their power in what they can do," said Peter Chapman, president and CEO of IonQ, a startup in the race alongside tech giants Microsoft, Amazon, Google, IBM, Intel and Honeywell.

The companies are riding a swell of interest that could grow to $9.1 billion in revenue by 2030, according to Tractica, a market intelligence firm that studies new technologies and how humans interact with tech advancements.

Right now, each company is deciding how to structure the building blocks needed to create a quantum computer. Some rely on semiconductors, others on light. Still others, including Microsoft, have pinned their ambitions on previously unproven theories in physics.

"Bottom line, we are in very heavy experimentation mode in quantum computing, and it's fairly early days," said Chirag Dekate, who studies the industry for research firm Gartner. "We are in the 1950s state of classical computer hardware."

There's not likely to be a single moment when quantum computers start making the world-changing calculations technologists are looking forward to, said Peter McMahon, an engineering professor at Cornell University. Rather, there's going to be a succession of milestones.

At each one, the company leading the race could change.

In October 2019, Google said it had reached quantum supremacy, a milestone where one of its machines completed a calculation that would have taken today's most advanced computers 10,000 years. In October last year, startup IonQ went public with an initial public offering that valued the company at $2 billion. In November, IBM said it had also created a quantum processor big enough to bypass today's machines.

In March, it was Microsoft's turn.

After a false start that saw Microsoft retract some research, it said this spring it had proved the physics principles it needed to show that its theory for building a quantum computer was, in fact, possible.

"We expect to capitalize on this to do the almost unthinkable," Krysta Svore, an engineer who leads Microsoft's quantum program, said in a company post announcing the discovery. "It's never been done before. ... [Now] here's this ultimate validation that we're on the right path."

As envisioned by designers, a quantum computer uses subatomic particles like electrons instead of the streams of ones and zeros used by computers today.

In doing so, a quantum computer can examine an unimaginable number of combinations of ones and zeros at once.

A quantum computer's big selling points are speed and multitasking, enabling it to solve complex problems that would trip up today's technology.

To understand the difference between classical computers (the computers we use today) and quantum computers (the computers researchers are working on), picture a maze.

Using a classical computer, you're inside the maze. You choose a path at random before realizing it's a dead end and circling back.

A quantum computer gives an aerial view of the maze, where the system can see several different paths at once and more quickly reach the exit.

"To solve the maze, maybe you have to go 1,000 times to find the right answer," said IonQ's Chapman. "In quantum computing, you get to test all these paths all at once."

Researchers imagine quantum computers being used by businesses, universities and other researchers, though some industry leaders also talk about quantum computing as a technology that will unlock new ideas our brains can't yet imagine. (It's not likely the average household will have a quantum computer room any time soon.)

Microsoft recently partnered with paints and coatings company AkzoNobel to create a virtual laboratory where it will test and develop sustainable products using quantum computing to overcome some of the constraints that jam up a traditional lab setting, like access to raw materials, lack of space and concerns about toxicity.

Goldman Sachs is working to use quantum computing to speed up risk evaluation done by Wall Street traders. Boeing wants to use the advanced tech to model how materials will react to different environments, while ExxonMobil has plans to use it to simulate the chemical properties of hydrogen, hoping to develop new materials that can be used to make renewable energy.

Most of the companies in the race today will develop fairly credible quantum machines, Chong said, and customers will look for ways to take advantage of their strengths and mitigate their weaknesses.

In the meantime, Amazon, Google and Microsoft are hosting quantum technology from their competitors, alongside their own, hoping to let customers play around with the tech and come up with uses that haven't yet been imagined. In the same way companies can buy cloud space and digital infrastructure technology from Amazon Web Services or Google Cloud, the tech companies now offer customers pay-as-you-go quantum computing.

"At this stage of the tech, it is important to explore different types of quantum computers," said Nadia Carlsten, former head of product at the AWS Center for Quantum Computing. "It's not clear which computer will be the best of all applicants. It's actually very likely there won't be one that's best."

Dekate, who analyzes the quantum industry for research and consulting firm Gartner, says quantum may have reached the peak of its hype cycle.

Excitement and funding for the quantum industry has been building, he said, pointing to a rising slope on a line graph. Now, it could be at a turning point, he continued, pointing to the spot right before the line graph takes a nosedive.

The hype cycle is a five-phase model Gartner uses to analyze new technologies, as a way to help companies and investors decide when to get on board and when to cash out. It takes three to five years to complete the cycle, if a new tech makes it through.

Predictive analytics made it to phase five, where users see real-world benefits. Autonomous vehicles are in phase three, where the original excitement wears off and early adopters are running into problems. Quantum computing is in phase two, the peak of expectations, Dekate said.

"For every industry to advance, there needs to be hype. That inspires investment," he said. "What happens in these ecosystems is end-users [like businesses and other enterprises] get carried away by extreme hype."

Some quantum companies are nearing the deadlines they originally set for themselves, while others have already passed theirs. The technology is still at least 10 years away from producing the results businesses are looking for, Dekate estimates, and investors are realizing they won't see profits anytime soon.

In the next phase of the hype cycle, Dekate predicts private investment in quantum computing will go down, public investment will go up in an attempt to make up the difference, and companies that have made promises they can no longer keep will be caught flat-footed. Mergers, consolidation and bankruptcy are likely, he said.

"The kind of macroeconomic dynamics that we're about to enter into, I think, means some of these companies might not be able to survive," Dekate said. "The ecosystem is ripe for disruption: way too much fragmentation and companies overpromising and not delivering."

In other words, we could be headed toward a quantum winter.

But, even during the funding freeze, businesses are increasingly looking for ways to use quantum computing, preparing for when the technology is ready, Dekate said. While Amazon, Microsoft, Google and others are developing their quantum computers, companies like BMW, JPMorgan Chase, Goldman Sachs and Boeing are writing their list of problems for the computer to one day solve.

The real changes will come when that loop closes, Dekate said, when the tech is ready and the questions are laid out.

"At some point down the line, the classical [computing] approaches are going to stall, and are going to run into natural limitations," he said. Until then, quantum computing will elicit excitement and, at the same time, disappointment.

2022 The Seattle Times. Visit seattletimes.com. Distributed by Tribune Content Agency, LLC.


Data Encryption Strategies Become More Widespread as the Amount of Cloud-Based Data Rises – The Fintech Times

The number of organisations consistently applying a data encryption strategy has risen sharply in the space of a year, whilst many are finding it easier to locate the data they need for the job.

Organisations reporting having a consistent, enterprise-wide encryption strategy in the Middle East leapt from 29 per cent to 63 per cent between last year and this year, as many seek to have greater control over dispersed cloud-based data.

These were the primary findings of a recent survey of security and IT professionals, which was conducted by the Ponemon Institute.

The study involved 6,000 companies across various sectors and countries, including the UAE and KSA, and the response indicated that many are prioritising their digital security investments to regain control of the data amid dynamic cloud environments and increasing cybersecurity threats.

Jumping the gap

Although they've experienced a steady level of adoption over the past few years amid the growing prevalence of cloud-based systems, encryption strategies have now become fintech's must-have item, especially so in the Middle East, where the rate of consistent application within an enterprise jumped dramatically from 29 per cent to 63 per cent.

Similarly, 70 per cent of Middle East respondents rated the level of their senior leaders' support for an enterprise-wide encryption strategy as significant or very significant.

The data also shows a significant decrease of 39 per cent in the number of people struggling to locate the right data, which had been identified as one of the top challenges of planning and executing an effective data encryption strategy.

"With an unprecedented amount of cybersecurity threats challenging organisations today, coupled with new and dynamic cloud environments, it has never been more important to have a company-wide encryption strategy in place," comments Hamid Qureshi, regional sales director, Middle East, Africa and South Asia at Entrust.

"This [report] is telling of a new awakening to the need for more consistent and proactive data security."

While the results indicate that companies have gone from assessing the problem to acting on it, they also reveal encryption implementation gaps across many sensitive data categories.

For example, while half of the respondents in the Middle East say that encryption is extensively deployed across containers, just 31 per cent say the same for big data repositories and 32 per cent across IoT platforms.

Similarly, while 71 per cent rate hardware security modules (HSMs) as an important part of an encryption and key management strategy, 37 per cent are still lacking HSMs.

These results highlight the accelerating digital transformation underpinned by the movement to the cloud, as well as the increased focus on data protection.

Organisations seek greater control of their cloud data

The sensitive nature of the data sitting within multiple cloud environments is forcing enterprises to up their security strategy. Notably, this includes containerised applications, where the use of HSMs reached an all-time high of 35 per cent.

More than half of the report's Middle East respondents admitted that their organisations transfer sensitive or confidential data to the cloud, whether or not it is encrypted or made unreadable via some other mechanism such as tokenisation or data masking.

Concerningly, an additional 23 per cent said they expect to do so in the next one to two years.

"The rising adoption of multi-cloud environments, containers and serverless deployments, as well as IoT platforms, is creating a new kind of IT security headache for many organisations," added Qureshi.

"This is compounded by the growth in ransomware and other cybersecurity attacks. This year's study shows that organisations are responding by looking to maintain control over encrypted data rather than leaving it to platform providers to secure."

When it comes to protecting some or all of their data at rest in the cloud, 41 per cent of respondents in the Middle East said encryption is performed in the cloud using keys generated and managed by the cloud provider, an improvement from the 28 per cent recorded in 2021.

Another 32 per cent reported encryption being performed on-premises prior to sending data to the cloud using keys their organisation generates and manages, while a quarter are using some form of Bring Your Own Key (BYOK) approach. Both of these models remained at the same level as last year's results.

Together, these findings indicate the benefits of cloud computing outweigh the risks associated with transferring sensitive or confidential data to the cloud, but also that encryption and data protection in the cloud is being handled more directly.
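
As a minimal sketch of the "encrypt on-premises before sending data to the cloud" model described above, the snippet below uses the widely available Python cryptography package; the record contents and the upload step are placeholders, and a real deployment would keep the key in an HSM or key-management service rather than in application code:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# 1. Generate and hold the key on-premises (in practice, in an HSM or a
#    key-management service that the organisation controls).
key = Fernet.generate_key()
cipher = Fernet(key)

# 2. Encrypt the sensitive record locally, before it leaves the organisation.
record = b'{"customer_id": 1234, "account": "PLACEHOLDER"}'
ciphertext = cipher.encrypt(record)

# 3. Only the ciphertext is uploaded to the cloud provider (upload omitted here);
#    without the key, the provider cannot read the data.
print(ciphertext[:32], b"...")

# 4. On retrieval, the organisation decrypts with its own key.
assert cipher.decrypt(ciphertext) == record
```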

The employee threat to sensitive data

When it comes to threat sources, respondents identified employee mistakes as the top threat that might result in the exposure of sensitive data, although this is down a mere two per cent from last year.

The threat from temporary or contract workers rose 10 per cent to the highest level ever recorded, reaching 42 per cent. The other highest-ranked threats identified were system or process malfunction (19 per cent) and hackers (33 per cent).

These results make it clear that threats are coming from all directions, so it's distressing, but not surprising, that 64 per cent of Middle East respondents admitted having suffered at least one data breach in 2020, and just about half (49 per cent) having suffered one in the last 12 months.

"Over 17 years of doing this study, we've seen some fundamental shifts occur across the industry. The findings in the Entrust 2022 Global Encryption Trends study point to organisations being more proactive about cybersecurity rather than just reactive," said Dr Larry Ponemon, chairman and founder of the Ponemon Institute.

"While the sentiment is a very positive one, the findings also point to an increasingly complex and dynamic IT landscape with rising risks that require a hands-on approach to data security, and a pressing need to turn cybersecurity strategies into actions sooner rather than later."

"As more enterprises migrate applications across multi-cloud deployments, there is a need to monitor that activity to ensure enforcement of security policies and compliance with regulatory requirements. Similarly, encryption is essential for protecting company and customer data. It's encouraging to see such a significant jump in enterprise-wide adoption," said Cindy Provin, SVP for identity and data protection at Entrust.

"However, managing encryption and protecting the associated keys are rising pain points as organisations engage multiple cloud services for critical functions. As the workforce becomes more transitory, organisations need a comprehensive approach to security built around identity, zero trust and strong encryption, rather than old models that rely on perimeter security and passwords."


MongoDB debuts new encryption tool and analytics features at MongoDB World – SiliconANGLE News

MongoDB Inc. today introduced new features that will enable enterprises to query their data without decrypting it and carry out large-scale analytics projects more easily.

The features were announced at the company's annual MongoDB World conference.

Publicly traded MongoDB provides an open-source NoSQL database that is widely used among developers. The database has been downloaded more than 265 million times, while developers at north of 35,000 organizations use it to power applications.

Some of the product updates that MongoDB announced today are rolling out for its namesake open-source database. Other features will become available as part of MongoDB Atlas, a managed cloud version of the database. Atlas removes the need for customers to manage infrastructure and automates a number of other administrative tasks.

"Our vision is to offer a developer data platform that provides a modern and elegant developer experience, enables broad support for a wide variety of use cases, and delivers the performance and scale needed to address the most demanding requirements," said MongoDB Chief Executive Officer Dev Ittycheria.

Companies keep the business information in their databases encrypted most of the time to ensure that hackers can't read records in case they gain network access. However, records have to be decrypted when they're queried by an application or a user. MongoDB is rolling out a new release of its open-source database, MongoDB 6.0, that it says makes it possible to query data without decrypting it.

MongoDB 6.0's Queryable Encryption feature, as it's known, doesn't require specialized cryptography know-how to use. Queryable Encryption keeps records encrypted while they're in a server's memory. Information also remains encrypted while it travels through the server's central processing unit, according to MongoDB.

Cybersecurity researchers have long sought to develop a way of processing data without having to decrypt it. Some of the technologies that have been created to facilitate encrypted processing, such as fully homomorphic encryption, are impractical to use because they significantly slow down queries. MongoDB says Queryable Encryption facilitates speedy queries and doesn't impact application performance.

Another set of features introduced by MongoDB today focuses on helping companies carry out large-scale data analytics initiatives more easily. Some of the capabilities are rolling out for the MongoDB database, while others are part of the Atlas managed database service.

MongoDB 6.0 introduces a feature called Column Store Indexes that will speed up common analytical queries. The feature speeds up queries by creating an index, a collection of data shortcuts that makes it possible to find specific records in a database faster. Reducing the amount of time that it takes to find records enables the database to return results quicker.

For administrators, MongoDB is adding a feature that makes it easier to manage the hardware resources assigned to a MongoDB deployment. According to the company, the feature will help administrators avoid provisioning too little or too much infrastructure for a MongoDB deployment that is used to support analytics workloads.

Atlas, the managed version of MongoDB, is also receiving improved support for analytics workloads. A tool called Atlas Data Lake will provide managed cloud object storage to facilitate analytical queries. For business analysts, MongoDB is rolling out Atlas SQL Interface, a capability that makes it possible to query data using SQL syntax.

MongoDB's revenue grew 57% year-over-year, to $285.4 million, during the quarter ended April 30. As part of its revenue growth strategy, MongoDB has been adding support for more enterprise use cases, which helps expand its addressable market and unlock new sales opportunities.

MongoDB 6.0 adds improved support for use cases that involve time series data. That's the term for data used to describe a trend, such as how a server's performance changes over the course of a week. Time series data is used for tracking the health of technology infrastructure, monitoring shifts in product demand and a range of other use cases that MongoDB can now support more effectively.
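For illustration, a time series collection can be created directly from a driver. The sketch below uses PyMongo against an assumed local deployment; the collection, field and host names are made up for the example.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed local deployment
db = client.monitoring

# Create a time series collection keyed on a timestamp field, with per-server
# metadata; collection and field names here are illustrative.
db.create_collection(
    "server_metrics",
    timeseries={"timeField": "ts", "metaField": "host", "granularity": "minutes"},
)

db.server_metrics.insert_one(
    {"ts": datetime.now(timezone.utc), "host": "web-01", "cpu": 0.42, "mem": 0.63}
)

# Query a window of measurements for one server, e.g. to chart a weekly trend.
recent = db.server_metrics.find({"host": "web-01"}).sort("ts", -1).limit(100)
```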

Website development is another use case on which MongoDB is increasing its focus. The company is upgrading its managed Atlas database service by adding an integration with Vercel, a popular website development tool. MongoDB says that the integration will save time for joint customers by automating certain manual configuration tasks.

For developers using Atlas to power mobile apps, MongoDB is adding the ability to sync data to and from the popular Realm mobile database. Meanwhile, companies that rely on Atlas to power the search features of their applications and websites are also receiving new features. The company is making it easier to let users filter search results by category, a feature that usually requires significant amounts of custom code to implement.

Some MongoDB customers run multiple deployments of its database to support their applications. As part of the product updates announced today, the company is adding a set of features to simplify such customers' information technology operations.

Cluster-to-Cluster Synchronization is a new tool that can automatically sync records between MongoDB databases to ensure they all have the latest version of a dataset. The tool can sync records across Atlas deployments, as well as MongoDB databases running in the cloud and on-premises.

Another new addition to the company's feature set is Data Federation. Available as part of Atlas, the capability makes it possible to centrally run a query across multiple MongoDB deployments. Data Federation could simplify large-scale analytics projects that draw on information from multiple databases.

Read more:
MongoDB debuts new encryption tool and analytics features at MongoDB World - SiliconANGLE News

Read More..

It’s not Apple or Tesla, but Inrix has data from 500 million vehicles taking transportation into the future – CNBC

Cars and trucks move along the Cross Bronx Expressway, a notorious stretch of highway in New York City that is often choked with traffic and contributes to pollution and poor air quality on November 16, 2021 in New York City.

Spencer Platt | Getty Images

In this weekly series, CNBC takes a look at companies that made the inaugural Disruptor 50 list, 10 years later.

Transportation has been a big part of the CNBC Disruptor 50 list since its inception in 2013, and some of the original transport disruptors have become household names.

This includes Waze (at that time an Israeli GPS start-up with little brand recognition in the U.S. compared with Garmin or TomTom), which was acquired by Google for over $1 billion and has long since become critical to the driving public's avoidance of speeding tickets and knowledge of the nearest Dunkin' Donuts. It also includes Uber, which, despite its stock struggles, has undeniably changed basic ideas about urban mobility, and SpaceX, which is taking transportation disruption to its most ambitious ends.

Another name on that original D50 list remains less well known to the public, but it is a key link in planning the future of transportation: Inrix.

The company, now almost two decades old (it was founded in 2004), remains under the radar, but its reach in understanding the complexities and challenges of transportation is growing. TomTom is still a competitor, too. When Inrix, based outside Seattle in Kirkland, Washington, launched, the pressing issue was that the world was still relying on helicopters to monitor traffic. "That was state of the art to figure out what was going on," says Bryan Mistele, CEO and co-founder, and a former Microsoft and Ford executive.

Now Inrix, which operates in over 60 countries and several hundred cities, collects aggregated, anonymous data from 500 million vehicles, mobile devices, mobile apps, parking lot operators, mobile carriers and smart meters, all in real-time, covering both consumer and fleet vehicles, and feeding into a system which is finding favor among public agencies and transportation planners rethinking urban mobility.

This week, Apple played up its CarPlay technology at WWDC, and it might be neat to have Siri adjust the temperature in your car one day. But Inrix's to-do list covers a range of tasks, from reducing the climate footprint of city traffic, including by optimizing traffic signal timing, to plotting out how autonomous robotaxis will operate within cities: picking up and dropping off passengers, and finding their own parking when needed.

The core of the company's mission hasn't changed: it's intelligent mobility, based on GPS data. Mining GPS data from cars and phones got the company off the ground and to clients like IBM, Amazon and automakers. The biggest change since its early years is the move beyond the core data to a software-as-a-service model, which is being adopted by its fastest-growing customer segment: cities like New York and London, and additional geographies around the world including Dubai.

Inrix still works closely with many private sector clients, including auto giants such as BMW and GM. In fact, one of its most recent deals is a cloud-based software venture with GM that overlaps with one of the biggest goals of public sector agencies: reducing crashes and fatalities. Inrix and GM are using data from GM vehicles on air bag deployments, hard braking and seatbelt usage, as well as from the U.S. Census, as part of a data dashboard for city planners with a "Vision Zero" goal of no road fatalities.

"There are 1.3 million people killed annually in crashes," Mistele said.

Those numbers have been rising in recent years, too, specifically in the U.S., with a record set in 2021.

The recent passage of the $1.2 trillion Bipartisan Infrastructure Law (BIL) includes roughly $5 billion in discretionary funds as part of the Safe Streets and Roads for All Grant Program, which will help the public sector tackle the issue.

"Roadway analytics are a big area of revenue growth," Mistele said. "There is an enormous amount of money flowing into the public sector from the infrastructure bill," he said.

Traffic data software-as-a-service is now as much as 30% of the company's overall business and growing at a compound annual growth rate of 40%.

The "zero" vision also overlaps with the goal of making transportation carbon neutral and reducing the number of accidents, ultimately through autonomous vehicle use.

About a year ago, Inrix launched a traffic signal timing product, which in pilot cities such as Austin, Texas, has demonstrated a 7% decrease in congestion "from doing nothing other than optimizing traffic signals," Mistele said. The Florida Department of Transportation has also adopted the technology. "Every second of delay is 800,000 tons of carbon, or 175,000 vehicles," he said.

While full self-driving and autonomous urban mobility have progressed more slowly than the most ambitious forecasts, the field is moving ahead: just last week, GM's Cruise self-driving robotaxi business received approval in San Francisco.

"We are big believers in 'ACES,'" Mistele said, referring to "autonomous, connected, electric, shared" vehicles. Moving to a mobility-as-a-service model will become increasingly linked to the rise of autonomous transportation. "Instead of driving into a city and parking for eight hours, in most urban areas you will see mobility delivered as a service and shared," he said. "How do you make it happen? By giving vehicles better information," he added.

He is a believer that 'ACES' and robotaxis will make transportation safer, but that will require them to receive data on everything from road closures to parking dropoff areas. "We do meter by meter mapping of these urban areas ... curbside management will get more complex," he said.

According to Mistele, even though new technology always brings plenty of hype and a "coming back to reality" period, the progress made by companies including Cruise and Waymo in the robotaxi space and Nuro in robo-delivery of consumer goods like pizza, the deployments taking place now in cities, and the growing production of autonomous vehicles lead him to believe that over the next decade this will be a transportation model in use in most of the top urban areas.

"I don't think we will see it pervasive across the entire U.S., in rural areas where there is no need or use cases. But EVs and autonomous, and moving more to mobility-as-a-service will be pervasive," he said.

There was a moment early in the pandemic, when the world literally stopped moving, that Inrix worried about its business, but it didn't last very long. In fact, Mistele says the radical changes in mobility patterns, unlike anything seen before March 2020, have increased the need for planners, whether in mass transit or business, to better understand vehicle data, and that pandemic moment became critical to the company's pivot to a software-as-a-service model.

As one example, he said companies in the tire sector needed more than ever to analyze data on miles driven (the No. 1 variable in that niche) to determine consumer demand and appropriate manufacturing levels. And in the retail sector, companies were trying to understand traffic patterns and whether to close stores or move them to new locations.

Inrix's data has less obvious uses as well, such as in financial services, where hedge funds want to know how many people visit a car dealership, what's going on at a retail distribution center, and the traffic into and out of ports, especially with the supply chain under intense pressure during the pandemic.

The company has 1,300 customers today across its growing public sector business, its private enterprise business, which includes companies as diverse as IBM's The Weather Channel and Chick-fil-A, and the auto sector.

Inrix has been profitable for most of its history, operating off its own cash flow since the 2005-2007 period. "Some years growth is better than others," Mistele said, and the customer ratio can change (with new use cases emerging during the pandemic and auto sales dipping for a few years before a big rebound), but the company posts double-digit growth on an annual basis.

And after almost twenty years as a private company (with its largest investors including venture capital firm Venrock, August Capital and Porsche), it almost pulled the trigger on an initial public offering before the market for IPOs closed. Over a recent period of six months, it had worked "very heavily" on an IPO transaction and was very close to filing the securities documents. "We even had the ticker reserved," Mistele said. "We were ready to go, but the market tanked on us after Russia invaded Ukraine," he said.

One of the oldest Disruptors is in a holding pattern for now with its exit strategy, but Mistele said it will be evaluating the market every few months.

Sign up for our weekly, original newsletter that goes beyond the annual Disruptor 50 list, offering a closer look at list-making companies and their innovative founders.

Read more from the original source:

It's not Apple or Tesla, but Inrix has data from 500 million vehicles taking transportation into the future - CNBC

Read More..

When to use negation handling in sentiment analysis? – Analytics India Magazine

The technique of determining views or feelings conveyed in text about a subject is referred to as sentiment analysis, also known as opinion mining. Sentiment analysis may be done at the sentence or document level. In linguistics, negations are particularly important because they change the polarity of other words. Negation terms include no, not, shouldn't, and so on. When a negation appears in a sentence, it is critical to determine which words are affected by it. This article focuses on understanding the concept of negation and implementing negation handling with NegSpacy.

The emotion a person expresses in a text can be understood by machine learning algorithms. Let's start with the concept of sentiment analysis.

Sentiment analysis combines research areas such as natural language processing, data mining and text mining, and is rapidly becoming important to organisations as they strive to integrate computational intelligence methods into their operations and shed more light on, and improve, their products and services. The purpose of sentiment analysis, also known as opinion mining, is to uncover people's views as expressed in written text. Sentiment can be defined as how one feels about something: a personal experience, a feeling, an attitude toward something, or an opinion.

Opinions are fundamental to practically all human actions and have a significant impact on our behaviour. Our views and perceptions of reality, as well as the decisions we make, are heavily influenced by how others see and interpret the world. As a result, when we need to make a decision, we frequently seek the advice of others. This is true not only for individuals but also for companies. Closed-form customer satisfaction questionnaires have traditionally been used to assess the key components, or facets, of total customer satisfaction. However, developing and administering surveys can be costly, or simply not an option. In certain circumstances, governmental entities are even barred by law from collecting customer satisfaction questionnaires.

Sentiment analysis is applicable at both the sentence and document levels. At the sentence level, it determines whether the opinion expressed in a sentence about a subject is positive or negative; at the document level, the whole document is classified as positive or negative.

Negation words are those that influence the sentiment orientation of other words in a phrase. Negation terms include not, no, never, cannot, shouldn't, wouldn't, and so on. Negation handling is a method of automatically detecting the extent of negation and inverting the polarity of opinionated words that are affected by it. The part of the phrase that the negation affects is referred to as the vicinity or scope of negation.

A negation may reverse the polarity of all words in a phrase that has only one clause. In a compound sentence, however, there are numerous clauses. A negation inverts the polarity of certain words in a phrase, and the number of words reversed varies according to linguistic factors. As a result, dealing with negation in a compound phrase can be difficult. To establish the scope of a negation, we create a list of negation words that serve as a signal of its presence.
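To make the idea concrete, here is a toy, fixed-window sketch of negation handling (not the method of any particular library): polarity is inverted only for opinion words that fall within a small window after a negation cue. The word lists and window size are invented for the example.

```python
# Toy fixed-window negation handler: invert the polarity of opinion words that
# fall within N tokens after a negation cue. Lexicon and window are illustrative.
NEGATIONS = {"not", "no", "never", "cannot", "shouldn't", "wouldn't"}
POLARITY = {"good": 1, "great": 1, "helpful": 1, "bad": -1, "slow": -1}
WINDOW = 3  # how many tokens after the cue are considered "in scope"

def sentence_polarity(tokens):
    score, scope_left = 0, 0
    for tok in tokens:
        low = tok.lower()
        if low in NEGATIONS:
            scope_left = WINDOW          # open a negation scope
            continue
        pol = POLARITY.get(low, 0)
        if scope_left > 0:
            pol = -pol                   # invert polarity inside the scope
            scope_left -= 1
        score += pol
    return score

print(sentence_polarity("the battery is not good but the screen is great".split()))
# -> 0  (the negated "good" cancels out "great")
```

A fixed window is obviously crude; it breaks down exactly in the compound-sentence cases described above, which is why scope detection matters.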

All negation words can be divided into three categories: syntactic, diminisher, and morphological negations.

One of the most popular linguistic strategies for changing text polarity is negation. As a result, negation must be considered in sentiment analysis. The scope of a negative expression specifies which words in the phrase are affected by negation words like no, not, and never. Negation keywords influence the contextual polarity of words, but their presence in a phrase does not imply that all words conveying feelings will be inverted. As a result, we must additionally identify the scope of negation in each phrase.

Linguistic negation is a difficult issue, with several ways to communicate a negative attitude. Negation can be morphological, denoted by a prefix (dis-, non-) or a suffix (-less) [11]. It might be implied, as in "this act being his first and final film": although this statement carries a negative opinion, no negative words are used. Negation might also be explicit, where a negation word such as "not" appears directly in the sentence.

As discussed above, there are three types of negation words: syntactic, diminisher, and morphological negations. Let's take a deeper look at each of these.

Syntactic negation is the most prevalent sort of negation in user-created text, and the majority of available approaches determine the scope of syntactic negations only. Linguistic factors are taken into account in syntactic negation, including conjunction analysis, punctuation marks, and the part-of-speech (POS) tags of negation phrases.

Syntactic negation has two exceptions: when the negation has no scope at all, and when the negation word inverts the polarity of the entire clause or sentence without changing any opinionated word. Some exceptions linked to specific linguistic traits have already been examined in separate sections. In some cases, syntactic negations have no negating meaning when combined with terms like not just, no surprise, and not to mention.

Diminisher negations differ from syntactic negations in that they frequently lessen the polarities of other words rather than completely reversing them. Furthermore, unlike with syntactic negation, the affected words may appear anywhere in the phrase rather than only after the negation term. In sentences like "this mobile scarcely lags" and "the application crashes seldom", the diminishers (i.e. scarcely and seldom) weaken the negative polarity of the words lags and crashes. The negation affected the word after it in the first sentence, but the word before it in the second.

In certain circumstances, the negation term and the negated opinionated word are merged into a single word, as in endless, dishonest, non-cooperative, and so on. This is known as morphological negation, and it may be constructed by combining one of nine prefixes (de-, dis-, il-, im-, in-, ir-, mis-, non-, un-) or one suffix (-less) with a root word. One approach to dealing with such negations is to first attempt to lexicalise all of these sorts of terms. If such a negation occurs in a phrase, polarity is acquired straight from the lexicon without any scope identification or polarity inversion. However, unusual terms may not be in the lexical dictionary, resulting in incorrect categorisation.

The alternative method is to break down morphological negation words to get the base word. The lexicon is used to determine the polarity of the root word, which is then reversed. If the root word is not lexicalised, the polarity of a synonym is used instead. Lexicon hits can be increased by decomposing the morphological negation and exploiting the polarity of the synonym list.
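A minimal sketch of that decomposition idea, with an invented polarity lexicon (the synonym-lookup fallback is omitted for brevity):

```python
# Sketch of morphological negation handling: strip a negation affix, look up
# the root word's polarity, and invert it. The affix list follows the article's
# description; the polarity lexicon itself is illustrative.
PREFIXES = ("de", "dis", "il", "im", "in", "ir", "mis", "non", "un")
SUFFIX = "less"
LEXICON = {"honest": 1, "cooperative": 1, "harm": -1}

def polarity(word):
    word = word.lower().replace("-", "")
    if word in LEXICON:                       # already lexicalised
        return LEXICON[word]
    for prefix in PREFIXES:                   # try prefix decomposition
        if word.startswith(prefix) and word[len(prefix):] in LEXICON:
            return -LEXICON[word[len(prefix):]]
    if word.endswith(SUFFIX) and word[:-len(SUFFIX)] in LEXICON:
        return -LEXICON[word[:-len(SUFFIX)]]  # try suffix decomposition
    return 0                                  # unknown word: neutral fallback

print(polarity("dishonest"), polarity("non-cooperative"), polarity("harmless"))
# -> -1 -1 1
```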

In this article, we will be using the NegSpacy library for handling syntactic negations. NegSpacy uses the NegEx algorithm for handling negation in a sentence. NegEx searches for trigger phrases that indicate a concept (in its original clinical setting, a condition) is negated or possible, and then decides which text falls within the scope of those trigger terms. It returns two types of output depending on the input.

Let's start with the installation of the spacy and negspacy libraries, along with spacy-stanza, which will help load the NLP pipelines.

Import necessary libraries

Load the NLP pipeline using spacy_stanza
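A sketch covering the installation, imports and pipeline load, assuming recent versions of spaCy, negspacy and spacy-stanza (the exact versions used in the original article may differ):

```python
# Assumed installation (run in a shell):
#   pip install spacy negspacy spacy-stanza

import stanza
import spacy_stanza
from negspacy.negation import Negex  # importing registers the "negex" factory

# Download the English Stanza model once, then wrap it as a spaCy pipeline.
stanza.download("en")
nlp = spacy_stanza.load_pipeline("en")
```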

For this article, we are using the en_core_web_sm NLP model. It is trained for the English language and has 19 labels. The dataset used for training the model is from blogs, newspapers, and comments.

Add negex to the pipeline and filter to certain entity types for faster processing; the filter is optional.
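Continuing with the `nlp` pipeline loaded above, negex is added as a pipeline component; the entity types chosen for the filter here (PERSON and ORG) are just an example of restricting processing to the entities of interest.

```python
# Continuing from the pipeline above: add negex and restrict it to a few
# entity types so that only those entities are checked for negation.
nlp.add_pipe("negex", config={"ent_types": ["PERSON", "ORG"]})
```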

Let's check some sentences.
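For example, with the pipeline configured above, each recognised entity carries a boolean `._.negex` attribute:

```python
doc = nlp("She does not like Steve Jobs but likes Apple products.")

for ent in doc.ents:
    # ent._.negex is True when the entity falls inside a negation scope.
    print(ent.text, ent._.negex)

# Expected output (entity labels depend on the model):
#   Steve Jobs True
#   Apple False
```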

A value of True indicates that the entity is negated in the sentence, while False indicates that it is not.

The inability to properly discern the effect of negation on other words is one of the primary causes of mistakes in sentence-level sentiment analysis. In this article, we have covered the concept of negation and handled it with an implementation using NegSpacy.

Go here to see the original:

When to use negation handling in sentiment analysis? - Analytics India Magazine

Read More..

Top IoT Companies in the Mining Industry – Mining Technology

Internet of Things (IoT) describes the use of connected sensors and actuators to control and monitor the environment, the things that move within it, and the people that act within it.

It is an umbrella term referring to the ability of everyday physical objects (such as fridges, watches or cars) to connect with other devices over the internet, enabling them to send and receive data.

Use cases include the automated home, the connected car, wearable technology, smart cities and many more.

For IoT companies in mining, four key technologies are enabling today's IoT ecosystems: AI, cloud computing, cybersecurity, and 5G.

GlobalData forecasts the global IoT market to reach $1.1 trillion in revenue by 2024.

While pervasive IoT is still some years away, GlobalData is keeping a close eye on the use of IoT in mining, as well as its evolution.

Improved data collection and analysis via sensors and the internet will enable mining companies to operate mines more safely, as well as increase productivity and reduce costs.

Examples include autonomous drilling, driverless haul trucks and predictive maintenance.

Our leader and disruptor lists for each theme are based on our analysts' in-depth knowledge of the theme and the players involved in that theme.

These are based on subjective opinions supported by research and analysis.

Leader lists consider global market share, position in the value chain and ability to react to emerging, disruptive trends.

Disruptor lists consider funding, strategic partnerships and the track record of the management team.

The global IoT market was worth $622bn in 2020, up from $586bn in 2019, and will grow to reach $1,077bn by 2024, with a compound annual growth rate (CAGR) of 13% over the period, according to GlobalData forecasts.
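For readers who want to sanity-check that figure, the compound annual growth rate follows directly from the endpoint revenues quoted above; a quick calculation reproduces the roughly 13% rate for 2019 to 2024.

```python
# CAGR = (end_value / start_value) ** (1 / years) - 1
start_2019 = 586    # $bn, global IoT revenue in 2019
end_2024 = 1_077    # $bn, forecast revenue for 2024
years = 2024 - 2019

cagr = (end_2024 / start_2019) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> ~12.9%, i.e. the ~13% CAGR quoted above
```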

The enterprise IoT dominates the overall IoT market, generating 76% of total revenue in 2020.

This dominance of the enterprise IoT will continue for the foreseeable future.

GlobalData expects this segment to still occupy 73% of the overall IoT market in 2024.

The enterprise IoT market will grow at a CAGR of 12.4%, and consumer IoT revenue will increase at a CAGR of 14.6% between 2019 and 2024.

Industrial Internet revenues reached $247bn in 2020, up from $231bn in 2019. We forecast they will hit $555bn by 2024, growing at a CAGR of 19.1% between 2019 and 2024.

The market consists of various applications such as advanced automation, asset tracking, conditional monitoring, environmental monitoring, health tech, people and animal tracking, and telematics.

Conditional monitoring applications occupied the biggest market share in 2020 and are expected to grow at a CAGR of 19.7% from 2019 to 2024 to reach $293bn by 2024.

Due to an increasing focus on environmental sustainability, environmental monitoring applications are expected to grow at a CAGR of 54.3% to reach $170bn by 2024.

Concerns over the supply of key commodities such as nickel, palladium, and aluminium have led to sharp price increases, with nickel prices on the LME briefly topping $100,000/t on March 8 before trading was suspended.

Russia accounts for approximately 8% of global nickel production, though 17% of the high-grade nickel used in EV batteries. It is also a major producer of palladium, accounting for an estimated 43% of production in 2021.

While Nornickel has stated operations are continuing and Polymetal reported on March 9 that all its operations in Russia and Kazakhstan continue undisrupted, Canadian miner Kinross first announced that it was suspending all activities in Russia.

This includes its Udinsk development project in Khabarovsk Krai.

In 2020, the overall IoT market saw sluggish growth as Covid-19 interrupted IoT deployments, slowing progress for IoT technology companies in mining.

In the consumer IoT domain, the connected car market declined by 10%, and the automated home segment saw just 1% growth in 2020.

The top performer during the year was the wearables market, which saw a 16% revenue growth.

Within enterprise IoT, smart cities and Industrial Internet saw moderate yearly increases of 8% and 7%, respectively.

The global IoT market will generate a staggering $1,077bn in revenue by 2024, up from $622bn in 2020.

The Industrial Internet dominates the global IoT market, accounting for 40% of the IoT market in 2020.

We split the value chain for IoT into five layers: devices, connectivity, data, apps, and services.

While these layers are logically discrete, large-scale IoT solutions will see a considerable degree of blurring of these logical boundaries.

For example, while there will continue to be a clearly identifiable data layer towards the top of the stack, a growing proportion of the data processing will take place within and at the edge of the network.

From the point of view of IoT adopters, it is also crucial to note that value is only realised by IoT adopters in the application layer.

All the data that an IoT network collects is ultimately worthless until action is taken as a result of it, whether in the form of an instruction to an irrigation unit, an alarm sent to a maintenance engineer, or an emergency call to a doctor.

To best track the emergence and use of IoT in mining, GlobalData tracks patent filings and grants, as well as the companies that hold the most patents in the field.

We classify the main trends shaping the IoT theme over the next 12 to 24 months into three categories: technology trends, macroeconomic trends, and regulatory trends.

Within the mining industry and beyond, IoT technology is interrelated to many other technology themes.

Thought leaders in the IoT domain are discussing AI, machine learning (ML), big data, and data science. These are the most talked-about areas in relation to IoT by thought leaders on Twitter.

GlobalData also highlights publicly listed and private companies making their mark as IoT technology companies in mining.

GlobalData's mining jobs tracker lists mining companies that have posted Internet of Things (IoT) jobs in recent months.

Continued here:

Top IoT Companies in the Mining Industry - Mining Technology

Read More..