For the first time, new quantum technology demonstrates capabilities that may enable detection of ultralight dark matter – EurekAlert

Image: Prof. Tomer Volansky. Credit: Tel Aviv University

A new study led by Tel Aviv University researchers demonstrates unprecedented sensitivity to an exciting dark matter candidate. As part of the new NASDUCK (Noble and Alkali Spin Detectors for Ultralight Coherent dark-matter) collaboration, the researchers developed innovative quantum technology that provides more accurate information on invisible theoretical particles with ultralight masses suspected of being dark matter. The study was published in the prestigious journal Advanced Science.

The study was led by Prof. Tomer Volansky, research student Itay Bloch from the Raymond & Beverly Sackler School of Physics & Astronomy in the Raymond & Beverly Sackler Faculty of Exact Sciences at Tel Aviv University, Gil Ronen from the Racah Institute of Physics at the Hebrew University, and Dr. Or Katz, formerly of the Weizmann Institute of Science (now from Duke University).

Dark matter is one of the great mysteries of physics. It composes most of the matter in the universe, and it is known to interact through gravity; however, we still know very little about its nature and composition. Over the years, many different theoretical particles have been proposed as good candidates for dark matter, including the so-called axion-like particles.

Prof. Tomer Volansky explains: "The interesting thing about axion-like particles is that they can be significantly lighter than any of the matter particles seen around us, and still explain the existence of dark matter, which for years was expected to be significantly heavier. One of the main ways of searching for dark matter is by building a large experiment with lots of mass, waiting until dark matter collides with it or is absorbed in this mass, and then measuring the minute energetic imprint it leaves in its wake. However, if the mass of the dark matter is too small, the energy carried by it is so insignificant that neither the collision nor the absorption effect can be measured. Therefore, we need to be more creative and use other properties of dark matter."

In order to discover these particles, the researchers designed and built a unique detector in which compressed, polarized xenon gas is used to detect tiny magnetic fields. Surprisingly, it turns out that axion-like particles playing the role of dark matter affect the polarized xenon atoms as if they were placed in a weak anomalous magnetic field that can be measured. The innovative technique, used for the first time by the researchers, enabled them to explore a new range of dark matter masses, improving on previous techniques by as much as three orders of magnitude.

PhD student Itay Bloch adds: "This is quite a complex operation, since these particles, if they exist, are invisible. Nevertheless, we have succeeded with this study in constraining the possible properties of axion-like particles, by the very fact that we have not measured them. Several attempts have been made to measure such particles by turning them into particles of light and vice versa. However, the innovation in our study is the measurement through atomic nuclei, without relying on an interaction with light, and the ability to search for axion-like particles at masses that were hitherto inaccessible."

The study is based on especially complex mathematical methods taken from particle theory and quantum mechanics and employs advanced statistical and numerical models in order to compare the empirical results with the theory.

Prof. Volansky concludes: "After five months of sustained effort, we have presented a new method that expands what we thought was possible with magnetometers; therefore, this is a small but significant step towards finding dark matter. There are many more candidates for dark matter, each with its own quantum properties. However, axion-like particles are among the most interesting options, and if we ever find them, that would be a huge step forward in our understanding of the universe. This experiment was the first of the NASDUCK collaboration, showing the promise that lies in our detectors. I have no doubt that this is just the beginning."

New constraints on axion-like dark matter using a Floquet quantum detector

So, you’re in an alternate reality, what gives? The science behind ‘Picard’ – Syfy

Star Trek: Picard is back with Season 2 on Paramount+, which means we get to revisit the best captain in the history of the Federation (don't @ me) back at work. Before we go any further: there will be spoilers for the Season 2 premiere after this sentence. You've been warned.

Picard leaned pretty heavily into the nostalgia factor during the first season, bringing back a number of characters from previous Trek series, and the season two premiere is no different. The episode is a bit of a slow burn but by the end the action hits warp factor 10.

Jean-Luc finds himself aboard a ship, adjacent to the rest of the fleet, staring down an enormous Borg craft. Then, all of a sudden, the Borg Queen, decked out in a flashy new aesthetic, transports through the shields and onto the bridge. It's the worst sort of family reunion, and things go downhill very quickly. Jean-Luc is faced with an impossible decision: whether to initiate the self-destruct sequence before the Queen can infiltrate the ship's computers and gain access to the rest of the fleet.

Obviously, Picard makes the right decision and destroys the ship, killing himself and everyone else onboard. Except moments later he wakes in his home, in a wholly different version of reality, which leaves Picard, his crew, and the viewers to wonder: what the Vulcan's going on?

Perhaps the most realistic, but least fun, explanation of what's going on is that Picard has lost his grip on reality. After initiating self-destruct, Jean-Luc emerges in a version of the world which is very different from the one he left. Suddenly, he's looking at the world and feeling as though it has been altered or falsified in some way. The world around him isn't the one he's supposed to be a part of.

This is actually a fairly common circumstance, experienced by people all over the world. According to studies, between one and two percent of the world's population experiences the sensation that either they or the world around them has been made wrong in some way, at least once in their lifetime.

These symptoms are a sign of depersonalization-derealization disorder. Depersonalization refers to a sensation of detachment from your own body or self, while derealization refers to a similar sensation about the world around you. People experiencing derealization might have the feeling that they are living in a movie or a dream, or that the world around them has been distorted or twisted out of true.

Some people have reported feeling as though they've been transported to an alternate version of reality and desperately need to find their way back to their true reality. That sounds an awful lot like the experience Picard is likely to have throughout the rest of the season. Although, he appears to be sharing the experience with the rest of his crew, which lends some support to the idea that it's actually happening. It also doesn't hurt that Q shows up and straight up tells him that he's been moved to another reality to continue the test which began in the first episode of TNG.

So, if Picard isn't suffering a mental health crisis, what is happening?

The name of this thought experiment, quantum suicide, is unfortunate, but it's interesting to consider, and it might serve as an explanation for how Picard and the rest of his crew found their way to an alternate reality following events which should have killed them.

You're likely already familiar with Schrödinger's Cat, but in case you're not, here's a brief primer. The thought experiment was first cooked up by Erwin Schrödinger as a way of exploring what he saw as a problem with the Copenhagen interpretation of quantum physics. The central idea we need to consider is that quantum particles exist as probabilities until such time as they are observed, meaning a particle can exist in two opposing states until looked upon by an observer. Schrödinger took this idea and set up a thought experiment in the following way. First, we have a cat locked in a box. Inside the box is a device which is capable of smashing a container holding a deadly poison. The poison container is opened only if a particle achieves one state or the other.

Because quantum states exist in superposition, meaning all possible states at once until observed, the cat inside the box must be both alive and dead until we open the box.

The idea of quantum suicide takes this same thought experiment but adds one additional twist. Instead of a cat inside a box, we have a human observer. Because a quantum state must exist in superposition until it is observed, the only possible outcome is that the poison is never activated. If it were, there would be no one alive in the box to observe it.

Essentially, a deadly scenario necessarily favors a shift toward realities in which the observer (in this case, Picard and his entire crew) isn't dead. This thought experiment is considered perhaps one of the only ways to confirm the validity of the many-worlds multiverse hypothesis, although it would only prove the existence of the multiverse to the person, or people, inside the experiment. To everyone else, they'd just be living in the world they always lived in.

On paper, it appears to work, but the risk is immense. We don't recommend it. If you're dealing with thoughts of suicide, depersonalization, derealization, or other mental health stresses, please reach out for help. Whether other realities exist or not, we want you here, safe and healthy, so we can fix the future together.

If you or a loved one is experiencing a mental health crisis, call the suicide hotline: 1-800-273-8255 or text the Crisis Text Line: 741-741.

Why reductionism fails at higher levels of complexity – Big Think

One of the greatest ideas of all time is reductionism, the notion that every system, no matter how complex, can be understood in terms of the behavior of its basic constituents. Reductionism has its roots in ancient Greece, when Leucippus and Democritus, in about 400 BC, proposed that everything is composed of atoms, which in Greek means that which cannot be cut. So, atoms came to signify the smallest constituents of matter, even though what we understand by smallest has drastically changed in time.

The focus is on the bottom layer of the material chain: matter is made of molecules; molecules are made of atoms; atoms are made of electrons, protons, and neutrons; protons and neutrons are made of up and down quarks, and so on to presumably other possible levels of smallness unknown to us at present. At the biological level, organisms are composed of organs; organs of cells; cells of organic macromolecules; macromolecules of many atoms, etc.

The more radical view of reductionism claims that all behaviors, from elementary particles to the human brain, spring from bits of matter with interactions described by a few fundamental physical laws. The corollary is that if we uncover these laws at the most basic level, we will be able to extrapolate to higher and higher levels of organizational complexity.

Of course, most reductionists know, or should know, that this kind of statement is more faith-based than scientific. In practice, this extrapolation is impossible: studying how quarks and electrons behave won't help us understand how a uranium nucleus behaves, much less genetic reproduction or how the brain works. Hard-core reductionists would stake their position as a matter of principle, a statement of what they believe is the final goal of fundamental science, namely the discovery of the symmetries and laws that dictate (I would say describe to the best of our ability) the behavior of matter at the subatomic level. But to believe that something is possible in principle is quite useless in the practice of science. The expression "fundamental science" is loaded and should be used with care.

There is no question that we should celebrate the triumphs of reductionism during the first 400 years of science. Many of the technological innovations of the past four centuries derive from it, as does our ever-deepening understanding of how nature works. In particular, our digital revolution is a byproduct of quantum mechanics, the branch of physics that studies atoms and subatomic particles. The problem is not so much with how efficient reductionism is at describing the behavior of the basic constituents of matter. The problems arise as we try to go bottom-up, from the lowest level of material organization to higher ones.

We know how to describe with great precision the behavior of the simplest chemical element: the hydrogen atom, with its single proton and electron. However, even here, trouble lurks as we attempt to include subtle corrections, for example, accounting for the fact that the electron orbits the proton at relativistic speeds (i.e., close to the speed of light), or that its intrinsic rotation (or spin) gives rise to a magnetic force that interacts with a similar magnetic force of the proton. Physicists take these effects into account using perturbation theory, an approximation scheme that adds small changes to the allowed energies of the atom.
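To make this concrete, here is a minimal sketch (mine, not the article's) of how such corrections are folded into the hydrogen energy levels, using the standard textbook leading-order fine-structure formula from perturbation theory:

```python
# Minimal sketch: hydrogen energy levels with the standard leading-order
# fine-structure correction from perturbation theory (textbook formula).
ALPHA = 1 / 137.035999   # fine-structure constant
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy in eV

def hydrogen_energy(n, j):
    """Energy (eV) of level n with total angular momentum j, including fine structure."""
    e_n = -RYDBERG_EV / n**2                                  # unperturbed Bohr energy
    correction = (ALPHA**2 / n**2) * (n / (j + 0.5) - 0.75)   # small relativistic + spin-orbit shift
    return e_n * (1 + correction)

# The 2p(1/2) and 2p(3/2) levels split by roughly 4.5e-5 eV (~10.9 GHz):
split = hydrogen_energy(2, 1.5) - hydrogen_energy(2, 0.5)
print(f"2p fine-structure splitting: {split:.2e} eV")
```

The corrections are tiny compared with the Bohr energies, which is exactly why they can be treated as perturbations.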

Physicists can also describe the next atom of the periodic table, helium, with considerable success due to its high degree of symmetry. But life gets complicated very quickly as we go up in complexity. More drastic and less efficient approximation schemes are required to make progress. And these don't include the interactions between protons and neutrons in the nucleus (which calls for a different force, the strong nuclear force), much less the fact that protons and neutrons are made of quarks and gluons, the particles responsible for the strong interactions.

Physics is the art of approximation. We dress down complex systems to their bare essentials and model them in as simple terms as possible without compromising the goal of understanding the complicated system we started from. This process works well until the complexity is such that a new set of laws and approaches is necessary.

At the next level of complexity are the molecules, assemblies of atoms. In a very rough way, all chemical reactions are attempts to minimize electric charge disparities. How many molecules can exist?

Let's jump to biochemistry for an illustration. Proteins are chains of amino acids. Since there are 20 different amino acids and a typical protein has some 200 of them, the number of possible proteins is around 20^200. Increasing the length of the protein, and hence the possible choices of amino acids, leads to a combinatorial explosion. Physicist Walter Elsasser coined the term "immense" to describe numbers larger than 10^100, a googol (that is, a one followed by 100 zeroes). The number of possible proteins is certainly immense. We see only a small subset realized in living creatures.

The number 10^100 is not arbitrary. Elsasser showed that a list containing 10^100 molecules would require a computer memory containing more than all the matter in the universe. Worse, to analyze the contents of the list, we would need longer than the age of the universe, 13.8 billion years. There is an immense number of new molecules with unknown properties to be explored. The same goes for the number of genetic combinations, cell types, and mental states.
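A quick back-of-the-envelope calculation (mine, not Elsasser's) shows just how far past a googol the protein count lands:

```python
import math

# Number of proteins 200 amino acids long, built from 20 possible amino acids at each position.
digits = 200 * math.log10(20)               # base-10 exponent of 20**200
print(f"20^200 is about 10^{digits:.0f}")   # ~10^260

# A googol is 10^100, so the protein count exceeds it by roughly 160 orders of magnitude.
print(f"orders of magnitude beyond a googol: {digits - 100:.0f}")
```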

It is thus impossible to predict the behavior of complex biomolecules from a bottom-up approach based on fundamental physical laws. Quarks do not explain the behavior of neurons. The passage from one level of material organization to the next is not continuous. New laws are required for different layers of material organization, as described in the fast-growing field of complex systems theory. There are many texts on the subject, including this somewhat technical book. The exciting aspect of this new field is that it calls for new ways of thinking about natural systems, ways which are by nature more holistic, such as network theory, nonlinear dynamics, chaos theory and fractals, and information theory. Climate science is another clear example.

In his prescient 1972 essay "More is Different," Nobel laureate physicist Philip Anderson argued for this layering of physical laws, which are irreducible: we cannot deduce laws from a higher layer by starting at a lower level of complexity. The reductionist program meets a brick wall, where progress needs to be carved at each specific level of complexity. There are theories of things and not a theory of everything.

The ‘I’ in ‘Physics’: how our experiences shape the study of physical phenomena – CBC.ca

Physics can be daunting to many people. It may conjure up incomprehensible scientific theories and absurdly complicated equations.

But as Aaron Collier's one-person play Frequencies shows, we all experience physics every moment of our lives, whether through the gravity that keeps us firmly grounded, or the waves and particles we perceive as sound or light. These everyday occurrences make physics an intimate and highly subjective experience.

Frequencies is haunted by the absence of Aaron's brother, David, who died in an accident some years before Aaron was born. It oscillates between Aaron's attempts to come to terms with the death of his brother and contemplations of the dizzying abundance of life, energy, waves and matter in the universe.

The title of Frequencies is, of course, a play on many different kinds of frequencies: the frequencies of light and sound that we see and hear, as well as the passage of the seasons, how long it takes for a planet to orbit the sun, and the rhythms of human life from birth to death.

Those frequencies, rhythms and patterns are translated into the techno music at the heart of the play, turning planetary orbits into a musical chord of the solar system or translating the frequencies of different colours into sound.

The National Arts Centre in Ottawa staged Frequencies as part of its Theatre and Physics Symposium last November. A panel moderated by IDEAS host Nahlah Ayed followed with a discussion of the relationships between individuals and physics, at the levels of perception, identity and the study of physical phenomena.

In the panel, Collier explained the inspiration behind one of the most intriguing passages of the play: a meditation on the sound of colours as leaves change in the fall, and how the range of sound frequencies we hear is much greater than the range of frequencies of light we can see.

"Ostensibly, the frequencies of these leaves are going down," Collier said.

"Green is a higher frequency than is yellow, than is orange, than is red. I can hear all these octaves of sound. But I started to recognize that the visual world, the light that enters my eyes it's all the same thing, but less of it. We can only see, well, one octave of [light]. Our experience of the world is really limited to these little confines of what we see or hear or feel."

The panel also explored other themes that arose from Frequencies, such as the importance of the unique perspectives of individuals in the study of science. Historically, those perspectives have not included many women or members of racialized groups.

"The universe doesn't care [who you are]", said Dr. Shohini Ghose, a quantum physicist at Wilfrid Laurier University."The law of gravity doesn't care who we are or who's doing the physics or not. So that is, to me, an ultimate sense of belonging. You know, that connection with the universe is not filtered through any systems made up by any human beings. Those laws are the same.

"It means that I can be whoever I am, and the universe will not say, 'well, that part of you, because you're a woman, is somehow less relevant to your perspective on the universe.' So what I bring to studying the universe is just as valid as anybody else."

Guests in this episode:

Aaron Collier is the performer, composer and co-writer of Frequencies and the co-founder and technical director of Halifax-based live art company HEIST.

Shohini Ghose is a quantum physicist at Wilfrid Laurier University and the NSERC Chair for Women in Science and Engineering.

Kevin Hewitt is a molecular imaging physicist at Dalhousie University and the founder of a STEM outreach program for Black students called the Imhotep's Legacy Academy.

Music for Frequencies was composed, produced and mixed by Aaron Collier. Additional production by Matt Miller. Mastered by Ron Anonsen. The play's score is available to stream or buy at http://www.liveheist.com.

*This episode was produced by Chris Wodskou.

Einstein and why the block universe is a mistake – IAI

The present has a special status for us humans: our past seems to no longer exist, and our future is yet to come into existence. But according to how physicists and philosophers interpret Einstein's Theory of Relativity, the present isn't at all special. The past and the future are just as real as the present: they all coexist, and you could, theoretically, travel to them. But, argues Dean Buonomano, this interpretation of Einstein's theory might have more to do with the way our brains evolved to think of time in a similar way to space than with the nature of time.

The human brain is an astonishingly powerful information processing device. It transforms the blooming, buzzing confusion of raw data that impinges on our sensory organs into a compelling model of the external world. It endows us with language, rationality, and symbolic reasoning, and most mysteriously, it bestows us with consciousness (more precisely, it bestows itself with consciousness). But, on the other hand, the brain is also a rather feeble and buggy information processing device. When it comes to mental numerical calculations, the most complex device in the known universe is embarrassingly inept. The brain has a hodge-podge of cognitive biases that often lead to irrational decisions. And when it comes to understanding the nature of the universe, we should remember that the human brain was optimized to survive and reproduce in an environment we outgrew long ago, not to decipher the laws of nature.

To date, the most powerful tool we have devised to overcome the brain's limitations is called mathematics. Once in a while an outlier such as Einstein or Schrödinger conjures up equations that allow us to describe and predict the external world, independently of whether the human mind is capable of intuitively understanding those equations. We can plug those equations into a computer, which can then pump out predictions about what will occur and when, whether or not we (or the computer) understand those equations.

Mathematics, however, is mostly agnostic to the interpretation of the equations of modern physics. This is particularly clear in the case of Schrödinger's equation, which helped master the quantum world of particles that underlies much of our digital technology. No one can really claim to intuitively understand what a wavefunction actually is, or what it means for two photons to be entangled. Much as chess is beyond the grasp of Schrödinger's cat, an intuitive understanding of quantum mechanics is probably beyond the grasp of the human brain.

The equations that comprise the laws of modern physics have proven accurate beyond any reasonable expectation, but when we interpret the equations of relativity and quantum mechanics, we often forget to take into account the inherent limitations, constraints, and biases, of the organ doing the interpreting. This point is particularly relevant in the context of what the laws of physics tell us in regard to the nature of time.

While there is no universally accepted view as to the nature of time, the two main views are referred to as eternalism and presentism. In its simplest form, eternalism maintains that the past, present, and future all stand on equal footing in an objective physical sense. The past, present, and future all coexist within what is called the block universe. Under presentism, my local present moment is fundamentally and objectively different from the past and future, because the past no longer exists and the future is yet to exist. Importantly, presentism is local, and distinct from the empirically disproven Newtonian notion of absolute time, in which clocks moving at different speeds will remain synchronized. While some have argued that the distinction between eternalism and presentism is a false dichotomy, the fundamental difference between them can be easily captured in the context of time travel. Under eternalism, time travel is a theoretical possibility, as my past and future selves are in some sense physically real. In contrast, under presentism the notion of time travel is impossible by definition: one cannot travel to moments that don't exist.

One of the strongest arguments for eternalism was planted in 1908 by Hermann Minkowski's geometric interpretation of Einstein's special theory of relativity. In it, time is represented as one axis in four-dimensional space, and movement of a clock along any of the three spatial dimensions will slow the rate at which it ticks: Minkowski bound space and time into spacetime. But any geometric representation of time inevitably corrals the brain to think about time much like space, thinking of past and future moments in relation to now as being as real as positions to the left and right of here. Indeed, geometry, as formalized by Euclid over two thousand years ago, was the study of static spatial relationships, and it was likely the first field of modern science because it had the luxury of ignoring time. Einstein's theory of general relativity further cemented the concept of spacetime into physics. But it is important to note that relativity does not predict that we live in an eternalist universe; rather, it allows for an eternalist universe. Relativity makes no explicit testable predictions regarding eternalism versus presentism. Indeed, it is far from clear that there are any testable predictions that could prove or disprove eternalism or presentism (other than the emergence of a confirmed time traveler). And if advanced aliens ever came to Earth and assured us that we live in a presentist universe, I don't think anybody would claim that proves relativity is wrong (although presentism does set boundaries on the solutions to the equations of general relativity).
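The claim that a moving clock ticks more slowly is the standard time-dilation result of special relativity; a minimal sketch of the arithmetic (not anything specific to Buonomano's argument) looks like this:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def proper_time(coordinate_time_s, speed_m_per_s):
    """Time elapsed on a clock moving at the given speed, per special relativity."""
    beta = speed_m_per_s / C
    return coordinate_time_s * math.sqrt(1.0 - beta**2)

# A clock moving at half the speed of light ticks off ~0.866 s for every 1 s of coordinate time.
print(f"{proper_time(1.0, 0.5 * C):.3f} s")
```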

While the laws of physics do not assign any special significance to the present, they are ultimately agnostic as to whether the present may be fundamentally different from the past and future. Why then, despite our clear subjective experience that the present is special, is eternalism the favored view of time in physics and philosophy? Contrary to our everyday experiences, when interpreting the laws of physics, perhaps the architecture of the human brain imposes a bias towards eternalism. Thinking about time as a dimension in which all moments are equally real, better resonates with the brains architecture which readily accepts that all points in space are equally real.

The human brain is unique in its ability to conceptualize time along a mental timeline and engage in mental time travel. We can think about the past and simulate potential futures to degrees that evade the cognitive ability of other animals. It is mental time travel that allows us to engage in species-defining future-oriented activities, such as agriculture, science, and technology development. But how did humans come to acquire this ability? Evidence from linguistics, brain imaging, psychophysics, and brain lesion studies suggests that the human brain may have come to grasp the concept of time by co-opting older evolutionary circuits already in place to represent and conceptualize space. A common example in the context of linguistics is that we use spatial metaphors for time ("it was a long day"; "I look forward to seeing you"). Imaging studies show a large overlap in brain areas associated with spatial and temporal cognition, and people with brain lesions that result in spatial hemineglect (generally characterized by an unawareness of left visual space) often exhibit deficits in mental time travel.

Our brains certainly did not evolve to understand the nature of time or the laws of physics, but our brains did evolve to survive in a world governed by the laws of physics. Survival, of course, was not dependent on an intuitive grasp of physical laws on the quantum and cosmological scales, which is presumably why our intuitions epically fail on these scales. But questions pertaining to the reality of the past and future fall squarely within the mesoscale relevant to survival. Thus, if one accepts that our subjective experiences evolved to enhance our chances of survival, our subjective experience of the passage of time and the fundamental differences between the present, past, and future should be correlated with reality. A common counterexample to this point is our incorrect intuitions about the movement of the Earth. However, our incorrect perception that the Earth is static while the sun moves around us pertains to the cosmological scale and is largely irrelevant to survival.

Empirical evidence from physics should always override our intuitions about the world. Yet in the case of the presentism versus eternalism debate there is actually no empirical evidence for eternalism. But there is some empirical evidence for presentism. Our brains are information processing devices designed to take measurements and make inferences about the physical world. Indeed, on the mesoscopic scale the brain does an impressive job of creating a representation of reality by measuring the physical properties of the world. It measures light, weight, temperature, movement, and time, in order to simulate the world well enough to survive in it. Our subjective experiences of color or temperature help us survive because they are correlated with reality.

I suspect that our subjective experiences regarding the nature of time also evolved because they capture some truth about the nature of the universe.

Perhaps one day objective evidence will emerge that we live in an eternalist universe, and we will understand why our subjective experiences are misleading. But until that day, we should accept our experience that the present is objectively different from the past and future as empirical evidence in favor of presentism.

WVU Today | Machine learning may predict where need for COVID tests is greatest – WVU Today

WVU researchers have earned $2.15 million in funding from the National Institutes of Health to develop machine-learning tools and GIS analyses to predict where COVID-19 testing will be most crucial, in addition to other trends relating to the virus. (WVU Photo/Jennifer Shephard)

The National Institutes of Health has called COVID-19 testing the key to getting back to normal. Yet testing rates have dropped nationwide, even as the Delta and Omicron variants accelerated the virus's spread.

West Virginia University researchers Brian Hendricks and Brad Price are using machine learning and geographic information systems to identify communities in West Virginia where COVID-vaccine uptake is especially low. What the technology reveals can help get testing resources to the people who need them the most: those who live where low vaccination rates make persistent, localized outbreaks likely.

"In late 2020 and early 2021, when the vaccine came out, there was a one-third drop in testing," said Hendricks, an assistant professor of epidemiology and biostatistics in the School of Public Health. "That's a huge issue because a drop in testing hurts your epidemic modeling, your calculation of the basic reproductive number, your ability to plan for research allocation, all of that. So, as the pandemic evolves, we have to keep testing to monitor localized outbreaks and understand when a new variant is introduced."

The National Institute on Minority Health and Health Disparities, a division of NIH, has awarded WVU $2.15 million for the project.

Hendricks, Price and their colleagues will create and validate new machine-learning tools and GIS analyses to maximize the use of localized information on case counts, testing trends, emerging variants and vaccinations. In doing so, they'll pinpoint counties that face an increased risk of potential outbreaks, and they'll predict where testing will be most crucial.

Machine learning is a form of artificial intelligence that uses huge amounts of frequently updated data to draw conclusions that grow more and more accurate. Because it's dynamic, rather than static, it's a boon for COVID researchers.

"We want to take into account the changes that can occur over time," said Price, an assistant professor of the John Chambers College of Business and Economics who focuses on machine learning. "Because we know the pandemic changes with time, right? We've seen variants pop up. We've seen surges in cases. We've seen cases fall off. We've seen masks go on and come off. And now we're talking about booster shots. So, there's a lot of things we have to take into account. If we're just saying, 'This is the data. Analyze it,' without considering how it's moved over time and how it will continue to move over time, we're missing a big piece of the puzzle."

Once the researchers know where the COVID hotspots are, they can work with community members in those locations to determine the best ways to get more people tested.

"We're conducting interviews to understand, from their perspective, what are the barriers to COVID testing?" Hendricks said. "How does the community feel about COVID testing? What are some things we could do to motivate communities to participate in continued testing? And why is this important?"

By avoiding a one-size-fits-all approach and acknowledging that communities are unique, the researchers hope that efforts to increase testing rates will bear measurable successes.

What might such efforts look like? Local first responders, for instance, might attend a big cookout that's free, open to the public and advertised on social media. Staff from QLabs, a research partner of Hendricks and Price, could be available at the cookout to conduct COVID testing. The first responders might circulate among the community members and encourage them to be tested.

"I want them to do what they do every day, which is go up to the people who are eating the food at these events and say, 'Hey, I care about you. How's your family doing? How's your mom doing? Have you gotten tested lately? You haven't? Well, I care about you. Let me walk you up to the table where you can get tested,'" Hendricks said.

The awarded grant marks the second phase of NIH's Rapid Acceleration of Diagnostics for Underserved Populations initiative. RADx-UP aims to reduce disparities in underserved populations, whom COVID-19 affects disproportionately. The overarching goal of the initiative is to understand and ameliorate factors that place a disproportionate burden of the pandemic on vulnerable populations.

The prior phase of the program, led by Sally Hodder, the associate vice president for clinical and translational science and the director of the West Virginia Clinical and Translational Science Institute, focused on expanding the scope and reach of COVID testing interventions to reduce these disparities.

"The next RADx phase will be critically important as we address future COVID activity," Hodder said. "Drs. Price and Hendricks will focus on those areas of West Virginia with low vaccine uptake. We know that individuals who have not received COVID vaccines are at increased risk for severe COVID disease and even death. However, new oral drugs are now available that greatly decrease that risk. Therefore, testing is extremely important, as folks testing positive for COVID will be able to receive pills that decrease their chances of hospitalization."

How Hendricks and Price collect and analyze the data could, in itself, prove useful in the future. After all, this wasn't the first pandemic the world has experienced, and it won't be the last. According to WHO, the United Nations, the World Economic Forum and others, climate change is apt to increase the spread of infectious diseases in the years to come.

"At the beginning of the pandemic, we couldn't do anything because we didn't have data," Price said. "In the middle of the pandemic, we couldn't do anything because we didn't have an infrastructure for that data. Now we're starting to piece it together. And I think one of the things I'm going to be focusing on is making sure we have that infrastructure so that the next time this happens, we have our policies, protocols and systems built, and the second we have data available, we can hit the ground running."

Research reported in this publication was supported by the National Institute on Minority Health and Health Disparities of the National Institutes of Health under Award Number 1U01MD017419-01. The content is solely the responsibility of the authors and does not necessarily represent the official views of NIH.

-WVU-

see/de/03/09/22

CONTACTS: Nikky Luna, Director, Marketing and Communications, WVU School of Public Health, 304-293-1699; nikky.luna@hsc.wvu.edu

OR

Heather Richardson, Assistant Dean, Communications, Engagement and Impact, John Chambers College of Business and Economics, 304-293-9625; hrichard@mail.wvu.edu

Call 1-855-WVU-NEWS for the latest West Virginia University news and information from WVUToday.

Follow @WVUToday on Twitter.

Funding to make data ready for AI and machine learning – National Institute on Aging

*The authors thank their colleagues in the NIA Artificial Intelligence and Data Sharing Working Groups for their support on this post.

Biomedical data science is fast evolving, thanks in large part to the growth of artificial intelligence (AI) and machine learning (ML) technologies as powerful additions to the scientific community's toolbox. The challenge is harnessing the massive data flow, including what's produced by NIA-supported research, and making it easier for investigators to tap into. Several teams at NIA and across the broader NIH are working on solutions, and we're pleased to announce supplemental funding is now available in four key areas to help researchers modernize their data.

NIH data policy aims to make data Findable, Accessible, Interoperable, and Reusable (FAIR). NIH also aims to ensure that our data repositories align with the Transparency, Responsibility, User focus, Sustainability, and Technology (TRUST) principles. The goal is to have high-impact data usable with AI or ML applications to improve our understanding of healthy aging and identify factors and interventions associated with disease resistance and successful treatments.

In Fiscal Year (FY) 2021, NIA partnered with the NIH Office of Data Science Strategy (ODSS) to supplement active NIA research projects in four key focus areas. The NIA community responded robustly to this opportunity, resulting in 23 supplement awards across these four notices of special interest (NOSIs). ODSS funded 19 supplements and NIA supported four, investing a combined total of nearly $6 million.

In the true spirit of open science, this funding will aid the development of teaching materials, workshops, and freely accessible online content so that other investigators can train their students. For example, these awards support scientists who are creating classes and curricula aimed at making data AI/ML-ready and aligned with FAIR and TRUST principles.

In FY 2022, NIA is again partnering with ODSS and has joined four Notices of Special Interest (NOSIs). Three are reissues of last year's notices:

The fourth supplement opportunity is new this year:

If you're as excited about the nexus of IT and healthy aging research as we are, we hope you'll apply for these NOSIs to potentially accelerate your projects! If you have questions, please email the contacts listed above or leave a comment below.

Solving Content in 2022: Machine Learning With a Human-in-the-Loop – GlobeNewswire

SAN ANTONIO, March 09, 2022 (GLOBE NEWSWIRE) -- Content Marketing in 2022 comes with several challenges. To stay competitive, businesses need to create consistent, high-quality content that performs well, and produce that content at scale, while staying within budget. Today, Scripted announces new AI-integrated tools to help marketers overcome these challenges using Scripted's industry-leading content production platform. Scripted has launched their new content creation system, combining machine learning ideation with expert writers and strategists; this is the solution to the problem that has plagued content marketing for years.

"Every business needs great content. Enterprise-level businesses and agencies especially, need to be able to create content at scale. Our mission was to solve that problem, and we believe combining machine learning ideation with a human-in-the-loop is that solution." - Jeremy Bellinghausen, CEO

Scripted provides every customer with not only experienced writers, but also editors and content strategists to ensure their goals are at the forefront of all content created. Content strategists work with each business to create a plan that will perform well in search, is relevant to their target audience, and helps drive conversions. Next, Scripted uses machine learning technology to auto-generate content ideas around that strategy. This process, paired with content management tools, allows Scripted's customers to quickly scale any content campaign with ease.

Why not just have AI write the content?

According to ExtremeTech's review of a GPT-3 Powered AI Writing Assistant, AI-powered content falls short: "Like any limited AI, it can tell you facts and knit sentences into coherent paragraphs, but it struggles to understand. We found that the app is most useful when the writer already has a sense of narrative and all their facts straight."

Scripted tested dozens of AI-powered content solutions to see if they could handle writing long-form industry-specific content and came to the same conclusion: The robots have not yet mastered the written word.

What Scripted did find out is that AI was exceptional at ideation. With this insight, Scripted designed their workflow so that AI creates the content ideas while their expert writers research and produce the content. This changed everything. Scripted's clients save time and money because they no longer need to spend hours coming up with engaging content ideas. They finally have scalability.

As technology advances, Scripted plans on incorporating machine learning in all of their processes. It's their goal to use this technology to improve both the experience of the reader and refine the skills of the writer. Scripted will use AI-powered tools to spot patterns in search data to help their writers improve their content and their process. An optimized process for optimal results.

Visit https://www.scripted.com/ or call us at 1.866.501.3116 to get your own AI-powered content recommendations.

This content was issued through the press release distribution service at Newswire.com.

Lecture on leveraging machine learning in nonprofits to be presented March 15 – Pennsylvania State University

UNIVERSITY PARK, Pa. - Ryan Shi, a doctoral candidate in the School of Computer Science at Carnegie Mellon University, will present a free public lecture titled "From a Bag of Bagels to Bandit Data-Driven Optimization" at 4 p.m. on Tuesday, March 15. The lecture is part of the Young Achievers Symposium series hosted by the Center for Socially Responsible Artificial Intelligence and will be held live via Zoom webinar. No registration is required.

Shi's work aims to address the unique challenges that arise in machine learning projects for the public and nonprofit sectors. His talk will discuss his three-year collaboration with a large food rescue organization that led him to develop a new recommender system that selectively advertises available rescues to food rescue volunteers.

Upcoming lectures in the Young Achievers Symposium series include:

Previous lectures can be viewed at the Center for Socially Responsible Artificial Intelligence website.

About the Young Achievers Symposium

The Young Achievers Symposium highlights early career researchers in diverse fields of AI for social impact. The symposium series seeks to focus on emerging research, stimulate discussions, and initiate collaborations that can advance research in artificial intelligence for societal benefit. All events in the series are free and open to the public unless otherwise noted. Penn State students, postdoctoral scholars, and faculty with an interest in socially responsible AI applications are encouraged to attend.

For more information, contact Amulya Yadav, assistant professor in the College of Information Sciences and Technology, at auy212@psu.edu.

How AI and Machine Learning trained to work in Paraphrasing tool – Techiexpert.com

Paraphrasing tools help bloggers and writers create new content from preexisting content. These tools use the advanced technology of artificial intelligence and machine learning to generate paraphrases. In this article, we will discuss how artificial intelligence and machine learning are trained to work in a paraphrasing tool. First, let's discuss the components of the paraphrasing task.

There are two different tasks in paraphrasing: paraphrase identification (PI) and paraphrase generation (PG).

The purpose of the paraphrase identification task is to check whether a sentence pair has the same meaning. In paraphrase identification, the system yields a figure between 0 and 1. Here, the value 1 indicates that the two sentences have the same meaning, while 0 indicates that the sentence pair is not a paraphrase. GitHub: https://github.com/nelson-liu/paraphrase-id-tensorflow.git

Paraphrase identification is a machine learning task: the system is first trained on a corpus of labeled sentence pairs, and it then uses the learned knowledge to decide whether two new sentences are paraphrases of each other.
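As an illustration only (this is not the code in the linked TensorFlow repository), a paraphrase identification check can be sketched with a pre-trained sentence-embedding model; the model name and the 0.8 decision threshold below are assumptions chosen for the example:

```python
# Illustrative sketch of paraphrase identification with sentence embeddings.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed general-purpose embedding model

def is_paraphrase(sentence_a, sentence_b, threshold=0.8):
    """Return 1 if the pair is judged a paraphrase, 0 otherwise (threshold is a tunable assumption)."""
    embeddings = model.encode([sentence_a, sentence_b], convert_to_tensor=True)
    score = util.cos_sim(embeddings[0], embeddings[1]).item()   # cosine similarity in [-1, 1]
    return 1 if score >= threshold else 0

print(is_paraphrase("The meeting was postponed.", "They delayed the meeting."))
```

A supervised classifier trained on labeled pairs, as the article describes, would replace the fixed threshold with a learned decision boundary.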

In the second task, paraphrase generation, the aim is to automatically generate one or more paraphrases of the input text that are fluent and carry the same meaning. Paraphrase identification is treated as a classification task, whereas paraphrase generation is treated as a language generation task. Machine learning (ML) and artificial intelligence (AI) algorithms handle the classification by learning a model that maps inputs to outputs; in other words, ML uses a number of strategies to judge whether two sentences share the same meaning.

As the focus of this article is paraphrasing or paraphrase generation, we will now look at different techniques of paraphrase generation. We can classify such techniques into two major categories.

In the first category of approaches, paraphrase generation is controlled by a template or syntactic tree. Kumar, Ahuja, and their associates proposed such an approach in 2020; it uses both syntactic trees and tree encoders, employing LSTM (long short-term memory) neural networks.

Another approach from the same year, the retriever-editor approach, first retrieves a similar source-target pair, selecting the pair with the highest similarity to the input on the basis of embedding distance; the editor then modifies the input sentence accordingly with the help of a transformer.

In the second category, pre-trained language models such as GPT-2 and GPT-3 are fine-tuned to generate paraphrases. One approach to paraphrase generation uses GPT-2, exploiting its ability to understand language: because GPT-2 is trained on a large open-domain corpus, its grasp of language is exceptional. The aim of this approach is to fine-tune the weights of the pre-trained GPT-2 model on paraphrase data.
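As a hedged sketch of what generation with such a model might look like: the "paraphrase: ... ==>" prompt format below is an assumption, and in practice you would load a checkpoint already fine-tuned on paraphrase pairs rather than the stock "gpt2" weights.

```python
# Sketch of paraphrase generation with GPT-2 via Hugging Face transformers.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # a paraphrase-fine-tuned checkpoint in practice

prompt = "paraphrase: The meeting was postponed until next week. ==>"   # assumed training format
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

outputs = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,            # sample rather than decode greedily, for varied paraphrases
    top_p=0.9,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```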

In this section, we will discuss a unified system architecture that is capable of both PI and PG. The major components of such a system are as follows.

The first component of the system collects data from a variety of sources, such as the Quora duplicate question pairs, MSRP (the Microsoft Research Paraphrase corpus), ParaNMT-50M, and others. The training set is usually very large, because these sources contain datasets with many thousands of sentence pairs, and these different types of data are valuable for training the paraphrasing tool's models.

The purpose of this step is to increase data diversity, which is achieved by sampling and filtering the original data. Paraphrase generation models produce correct, non-repetitive paraphrases largely because of the huge lexical and syntactic diversity present in the training data: the tools generate varied paraphrases that share the same meaning but differ in vocabulary. In addition, a number of transformations are applied to the training data to enhance its diversity further. As a result of this step, the system gains diversity, semantic similarity, and fluency.

The system is then trained to perform the task of paraphrase generation. For this purpose, a pre-trained Text-To-Text Transfer Transformer (T5) model can be fine-tuned on the prepared data.
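A minimal generation sketch with a T5-style model might look like the following; the "paraphrase:" task prefix and the checkpoint name are assumptions, since an off-the-shelf "t5-base" has not itself been fine-tuned for paraphrasing.

```python
# Sketch of paraphrase generation with a T5-style sequence-to-sequence model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")   # would be a paraphrase-fine-tuned checkpoint

text = "paraphrase: Machine learning models can rewrite sentences while preserving their meaning."
input_ids = tokenizer(text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, num_beams=5, num_return_sequences=3, max_length=64)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```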

Models such as T5 rely on the self-attention mechanism used in transformers, which receives an input sequence and generates an output sequence of the same length. Every element of the output sequence is computed as a weighted average over the elements of the given input sequence.
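The "weighted average" idea behind self-attention can be shown in a few lines of NumPy; this toy version omits the learned query/key/value projections and the multiple heads of a real transformer.

```python
import numpy as np

def toy_self_attention(x):
    """Scaled dot-product self-attention without learned projections (for illustration only)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # similarity between every pair of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: each row sums to 1
    return weights @ x                                 # each output row is a weighted average of inputs

tokens = np.random.randn(5, 8)          # 5 token embeddings of dimension 8
out = toy_self_attention(tokens)
print(out.shape)                        # (5, 8) -- same length as the input sequence
```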

In the end, the whole model is trained for up to 200 epochs on systems having at least 120 GB of RAM (random access memory). Training on the paraphrase generation task takes quite a lot of time, about three days. The system should be efficient as well as lightweight, and it is possible to optimize the parameters to improve its performance further.

There are many paraphrasing tools trained with Machine Learning and Artificial Intelligence.

For example, Paraphrasingtool.ai is a good example of an AI-based paraphrasing tool that uses its own trained transformer model to rewrite content. This paraphrasing tool is the most accurate, reliable, free and plagiarism-free paraphrasing tool available on the web. It can rewrite content in any language automatically, and it has been carefully tested to avoid manual processing and to ensure quality.

The process of paraphrasing has two tasks: paraphrase generation and paraphrase identification. These tasks have huge significance in NLP, or natural language processing. There are different approaches available for paraphrase generation, and artificial intelligence and machine learning play their roles in both the generation and identification tasks.

Various models, such as the T5 model, work for sentence generation. The systems built from these algorithms are trained extensively with various datasets and data sources. Consequently, paraphrasing tools based on artificial intelligence and machine learning have increased diversity and a huge vocabulary.
