ALCF Supercomputers Help Design of Peptide-Based Drugs with … – HPCwire

A team from the Flatiron Institute is leveraging ALCF computing resources to advance the design of peptides with the aim of speeding up the search for promising new drugs.

"Existing peptide drugs rank among our best medicines, but almost all of them have been discovered in nature. They're not something we could design rationally, that is, until very recently," said Vikram Mulligan, a biochemistry researcher at the Flatiron Institute.

Mulligan is the principal investigator on a project leveraging supercomputing resources at the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science user facility located at DOE's Argonne National Laboratory, with the aim of improving the production of peptide drugs. Peptides are chains of amino acids similar to those that form proteins, but shorter.

The original motivation for the research lies in Mulligan's postdoctoral work at the University of Washington's Baker Lab, where he sought to apply methods that had proven accurate for designing proteins that fold in specific ways.

"Proteins are the functional molecules in our cells, they're the molecules responsible for all the interesting cellular activities that take place," he explained. "It is their geometry that dictates those activities."

Naturally occurring proteins, those produced by living cells, are built from just 20 amino acids. In the laboratory, however, chemists can synthesize molecules from thousands of different building blocks, allowing for innumerable structure combinations. This effectively means that a scientist might be able to manufacture, for example, enzymes capable of catalyzing reactions that no natural enzyme could perform.

Mulligan is particularly interested in making small peptides that can act as drugs that bind to some target, either in the human body or in a pathogen, and that treat a given disease by altering the function of that target.

To this end, via a project supported through DOE's INCITE program, Mulligan is using ALCF computational resources, including the Theta supercomputer, to advance the design of new peptide compounds with techniques including physics-based simulations and machine learning. His team's work at the ALCF is driven by the Rosetta software suite, the applications and libraries of which enable protein and peptide structure prediction and design, RNA fold prediction, and more.

Between small-molecule drugs and protein drugs

Mulligan's research aims to treat a broad spectrum of diseases, as evidence suggests that peptide-based compounds have the potential to operate as an especially versatile class of drugs.

A primary therapeutic limitation of small-molecule drugs, by contrast, is that while they are easily and effectively administered, they often display as much affinity for other sites in the patient's body as they do for the intended target. This off-target binding is a source of side effects for the patient taking the drugs.

"Small-molecule drugs are like simple luggage keys in that they can unlock more than what they're made for," Mulligan said.

Larger protein drugs such as antibodies, meanwhile, have the advantage of acting on their targets with a high degree of specificity, on account of the drugs' comparatively large surface area. Their disadvantages, however, include an inability to cross biological barriers (such as the gut-blood barrier or the blood-brain barrier) or to pass through cells, thereby limiting their targets to extracellular proteins. Given that immune systems have evolved to recognize foreign proteins and remove them from the body, protein drugs must evade these highly efficient mechanisms, an additional challenge for researchers.

Peptide drugs split the difference in terms of size, combining certain advantages of small-molecule drugs with those of protein drugs: they're small enough to be permeable and evade the immune system, but large enough that they're unlikely to bind to much aside from the intended targets.

To design such drugs, Mulligan's work applies methods for protein design to what are called noncanonical design molecules, which comprise nonnatural building blocks and fold as proteins do, with the potential to bind to a target.

Physics-based simulations

While protein designers can infer detailed structural information from the 200,000 or so protein structures that have been solved experimentally, such information is far scarcer for peptides built in the laboratory. At present, a mere two dozen peptide structures have been solved, several of which have been designed by Mulligan himself or by researchers applying his methods.

Because many more peptide structures must be solved before machine learning can guide the design of new peptide drugs, the team continues to rely on physics-based simulations for the time being, generating new strategies both for sampling conformations and for exploring possible amino-acid sequences.

"With peptides as with proteins, the particular sequence of amino acids determines how the molecule folds up into a specific 3D structure, and this specific 3D structure determines the molecule's function," Mulligan said. "If you can get a molecule to rigidly fold into a precise shape, then you can create a compound that binds to that target. And if that target is, say, the active site of an enzyme, then you can inhibit that enzyme's activity."

"That was the idea as conceived in 2012. It took a long time to get it to work, but we're now at the point where we can design, relatively robustly, folding peptides constructed from mixtures of natural and nonnatural amino-acid building blocks."

A particular success among the handful of molecules that Mulligan's team has designed binds to NDM-1, or New Delhi metallo-beta-lactamase 1, an enzyme responsible for antibiotic resistance in certain bacteria.

"The notion here was that if we could make a drug that inhibits the NDM-1 enzyme, this drug could be administered alongside conventional antibiotics, reviving the usefulness of those sidelined by resistance," he said.

However, while this drug progressed from computer-aided design to laboratory manufacture, its ability to cross biological barriers must be fine-tuned before it can proceed through clinical trials.

Mulligan explained, "The next challenge is to try to make something that hits its target and also has all the desirable drug properties like good pharmacokinetics, good persistence in the body, and the ability to pass biological barriers or enter cells and get it wherever it needs to go."

Furthermore, different biological barriers represent different degrees of difficulty when designing drugs. "The low-hanging fruit are targets in the gut, because those can be reached and acted on simply via oral medicine," he said. The most challenging cases are intracellular cancer targets, which necessitate that the drugs passively diffuse into cells, an ongoing problem in science.

Approximating with proteins

The present physics-based methods for design include quantum chemistry calculations, which compute the energy of molecules with extreme precision by solving the Schrödinger equation, the central equation of quantum mechanics. Because such solutions have high computational costs that grow exponentially as the objects of study increase in size and complexity, they are historically obtained for only the smallest molecular systems. To try to minimize these costs, the research team employs an approximation known as a force field.

"We pretend that the atoms in our system are small spheres exerting forces on each other," Mulligan explained, "a classical approximation that reduces our accuracy but gives us a lot of speed and makes tractable a lot of otherwise intractable equations."
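
To make the sphere picture concrete, here is a minimal illustrative sketch of the kind of pairwise term a classical force field sums over atoms. It is not Rosetta's actual energy function; the Lennard-Jones form and the epsilon and sigma values are generic textbook choices used only to show the idea.

```python
import numpy as np

def lennard_jones_energy(coords, epsilon=0.2, sigma=3.4):
    """Toy force-field term: treat atoms as spheres and sum a
    Lennard-Jones potential over all atom pairs.

    coords  : (N, 3) array of atomic positions in angstroms
    epsilon : well depth in kcal/mol (illustrative value only)
    sigma   : distance at which the potential crosses zero, in angstroms
    """
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            energy += 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return energy

# Example: three atoms in a line, 4 angstroms apart
atoms = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [8.0, 0.0, 0.0]])
print(lennard_jones_energy(atoms))
```

Real force fields add bonded, electrostatic, and solvation terms, but the essential trade is the one described above: a cheap classical sum over sphere-like atoms stands in for an exponentially expensive solution of the Schrödinger equation.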

The accuracy of the method diminishes as the peptide building blocks become less like conventional amino acids: building blocks that bear strong similarities to conventional amino acids permit the use of force-field approximations generated by training machine learning applications on protein structures, but the applicability of those approximations is comparatively tenuous if the building blocks have exotic features or contain certain chemical elements.

As such, one goal of the research team is to incorporate quantum chemistry calculations into the design and validation pipelines while minimizing, to the greatest possible extent, the accuracy tradeoffs inherent to such approximations.

Benchmarking and validation

Incorporating the calculations requires benchmarking and testing to determine the appropriate level of theory and which approximations are acceptable.

"There are all sorts of approximations that quantum chemists have generated to try to scale quantum chemistry methods," Mulligan said. "Many of them are quite good, are established practices within the quantum chemistry community, but have yet to take root in the molecular-modeling community. With something like the force-field approximation, we need to ask the questions: how do we use it efficiently for molecular-modeling tasks? Under what circumstances is it good enough to use the force field? Under what circumstances do we want to use this approximation and under what circumstances is this approximation not good enough? Would we have to employ a higher level of quantum chemistry theory in order to complete the necessary benchmarking? To perform all the concomitant trial and error, we need powerful computational resources, which is where leadership-class systems are especially important. To this end, many of our calculations are fairly parallelizable."

Validation strategies involve taking a designed peptide with a certain amino-acid sequence and altering the sequence to create all sorts of alternative conformations to identify those that result in the desired fold.

Such approaches to conformational sampling, in which countless molecular arrangements are explored, are similarly parallelizable, enabling the researchers to carry out the conformation analyses across thousands of nodes on the ALCF's Theta supercomputer. These brute-force calculations lay the foundation for data repositories that the team will use to train machine-learning models.
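
The "embarrassingly parallel" character of this sampling is easy to sketch. The snippet below is a schematic stand-in, not the team's Rosetta pipeline: score_conformation is a hypothetical toy scorer, and Python's multiprocessing plays the role that thousands of Theta nodes play in the real workflow.

```python
from multiprocessing import Pool

import numpy as np
from scipy.spatial.distance import pdist

def score_conformation(seed, epsilon=0.2, sigma=3.4):
    """Hypothetical stand-in for an expensive physics-based score:
    build a random 20-atom 'conformation' and return a toy
    Lennard-Jones energy for it (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    coords = rng.normal(scale=5.0, size=(20, 3))   # random positions in angstroms
    r = pdist(coords)                              # all pairwise distances
    energy = np.sum(4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6))
    return seed, float(energy)

if __name__ == "__main__":
    # Each candidate conformation is scored independently, so the work
    # splits cleanly across workers (or, at scale, across compute nodes).
    with Pool() as pool:
        results = pool.map(score_conformation, range(10_000))
    # Keep the lowest-energy candidates as raw material for later analysis
    # or, eventually, as training data for machine-learning models.
    best = sorted(results, key=lambda pair: pair[1])[:10]
    print(best)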

"Part of the challenge of machine learning is figuring out how to use it well," Mulligan said. "Because a machine-learning model is going to make mistakes regardless of how it's programmed, I tried to train this one to generate more false positives than false negatives, to identify more things as peptides that fold than it should. It's easier to sift through extra hay, if you will, further down the line in our search for a needle."

Successful validation of peptide designs using quantum chemistry calculations itself represents a significant advance. Moreover, in the aforementioned NDM-1 example (that concerning the mitigation of antibiotic resistance among bacteria), all the design and validation work was completed using the force-field approximations.

Adapting to exascale

Ongoing and future work requires substantial revision of the Rosetta software suite to optimize it for next-generation, accelerator-based computing systems such as the ALCF's Polaris and Aurora supercomputers.

"Rosetta started its life in the late 1990s as a protein modeling package written in FORTRAN, and it's been subsequently rewritten several times," Mulligan said. "While it's written in modern C++, it is beginning to show its age; even the latest version was written more than 10 years ago. We've continually refactored the code to try to make it more general, try to make it work for nonnatural amino acids, but taking advantage of modern hardware has posed challenges. While the software parallelizes on central processing units (CPUs) and scales well, graphics processing units (GPUs) are not supported to their full capability."

"Because Polaris is a hybrid CPU-GPU system, as the exascale Aurora system will be, I and others are working on rewriting Rosetta's core functionality from scratch. By creating a successor to the current software, it's my hope that we can continue to use these software methods efficiently on new hardware for years to come, and that we can build atop them to permit more challenging molecular design tasks to be tackled."

Source: Nils Heinonen, ALCF

How the Big Bang model was born – Big Think

This is the eighth article in a series on modern cosmology.

The Big Bang model of cosmology says the Universe emerged from a single event in the far past. The model was inspired by the adventurous cosmic quantum egg idea, which suggested that in the beginning, all that exists was compressed into an unstable quantum state. When this single entity burst and decayed into fragments, it created space and time.

To take this imaginative notion and craft a theory of the Universe was quite a feat of creativity. To understand the cosmic infancy, it turns out, we need to invoke quantum physics, the physics of the very small.

It all started in the mid-1940s with the Russian-American physicist George Gamow. He knew that protons and neutrons are held together in the atomic nucleus by the strong nuclear force, and that electrons are held in orbit around the nucleus by electrical attraction. The fact that the strong force does not care about electric charge adds an interesting twist to nuclear physics. Since neutrons are electrically neutral, it is possible for a given element to have different numbers of neutrons in its nucleus. For example, a hydrogen atom is made of a proton and an electron. But it is possible to add one or two neutrons to its nucleus.

These heavier hydrogen cousins are called isotopes. Deuterium has a proton and a neutron, while tritium has a proton and two neutrons. Every element has several isotopes, each built by adding or removing neutrons in the nucleus. Gamow's idea was that matter would build from the primeval stuff that filled space near the beginning. This happened progressively, building from the smallest objects to larger ones. Protons and neutrons joined to form nuclei, then bound electrons to form complete atoms.

How do we synthesize deuterium? By fusing a proton and a neutron. What about tritium? By fusing an extra neutron to deuterium. And helium? By fusing two protons and two neutrons, which can be done in a variety of ways. The build-up continues as heavier and heavier elements are synthesized inside of stars.

A fusion process releases energy, at least up to the formation of the element iron. This is called the binding energy, and it equals the energy we must provide to a system of bound particles to break a bond. Any system of particles bound by some force has an associated binding energy. A hydrogen atom is made of a bound proton and an electron, and it has a specific binding energy. If I disturb the atom with an energy that exceeds its binding energy, I will break the bond between the proton and the electron, which will then move freely away from each other. This buildup of heavier nuclei from smaller ones is called nucleosynthesis.
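
A standard worked example (textbook numbers, not figures from the article) shows how a binding energy follows from the masses involved: the deuteron weighs slightly less than a free proton plus a free neutron, and that mass defect, converted via E = mc^2, is both the energy released on fusion and the energy needed to break the pair apart again.

```latex
\begin{aligned}
\Delta m &= m_p + m_n - m_d
          \approx (1.007276 + 1.008665 - 2.013553)\,\mathrm{u}
          \approx 0.002388\,\mathrm{u},\\[4pt]
E_B &= \Delta m\,c^2 \approx 0.002388 \times 931.5\,\mathrm{MeV} \approx 2.22\,\mathrm{MeV}.
\end{aligned}
```

A photon or collision carrying more than about 2.22 MeV can therefore split a deuteron apart, which is why, as described below, nuclei could not survive until the Universe had cooled enough.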

In 1947, Gamow enlisted the help of two collaborators. Ralph Alpher was a graduate student at George Washington University, while Robert Herman worked at the Johns Hopkins Applied Physics Laboratory. Over the following six years, the three researchers would develop the physics of the Big Bang model pretty much as we know it today.

Gamow's picture starts with a Universe filled with protons, neutrons, and electrons. This is the matter component of the early Universe, which Alpher called ylem. Added to the mix were very energetic photons, the early Universe's heat component. The Universe was so hot at this early time that no binding was possible. Every time a proton tried to bind with a neutron to make a deuterium nucleus, a photon would come racing to hit the two away from each other. Electrons, which are bound to protons by the much weaker electromagnetic force, didn't have a chance. There can be no binding when it is too hot. And we are talking about some seriously hot temperatures here, around 1 trillion degrees Fahrenheit.

The image of a cosmic soup tends to emerge quite naturally when we describe these very early stages in the history of the Universe. The building blocks of matter roamed freely, colliding with each other and with photons but never binding to form nuclei or atoms. They acted somewhat like floating vegetables in a hot minestrone soup. As the Big Bang model evolved to its accepted form, the basic ingredients of this cosmic soup changed somewhat, but the fundamental recipe did not.

Structure started to emerge. The hierarchical clustering of matter progressed steadily as the Universe expanded and cooled. As the temperature lowered and photons became less energetic, nuclear bonds between protons and neutrons became possible. An era known as primordial nucleosynthesis started. This time saw the formation of deuterium and tritium; helium and its isotope helium-3; and an isotope of lithium, lithium-7. The lightest nuclei were cooked in the Universes earliest moments of existence.

According to Gamow and collaborators, this all took about 45 minutes. Accounting for more modern values given to the various nuclear reaction rates, it only took about three minutes. The remarkable feat of Gamow, Alpher, and Herman's theory was that they could predict the abundance of these light nuclei. Using relativistic cosmology and nuclear physics, they could tell us how much helium should have been synthesized in the early Universe; it turns out that about 24 percent of the Universe is made of helium. Their predictions could then be checked against what was produced in stars and compared to observations.

Gamow then made a much more dramatic prediction. After the era of nucleosynthesis, the ingredients of the cosmic soup were mostly the light nuclei in addition to electrons, photons, and neutrinos, particles that are very important in radioactive decay. The next step in the hierarchical clustering of matter is to make atoms. As the Universe expanded it cooled, and photons became progressively less energetic. At some point, when the Universe was about 400,000 years of age, the conditions were ripe for electrons to bind with protons and create hydrogen atoms.

Before this time, whenever a proton and an electron tried to bind, a photon would kick them apart, in a sort of unhappy love triangle with no resolution. As the photons cooled down to about 6,000 degrees Fahrenheit, the attraction between protons and electrons overcame the photons' interference, and binding finally occurred. Photons were suddenly free to move around, chasing their dance across the Universe. They were not to interfere with atoms anymore, but to exist on their own, impervious to all this binding that seems to be so important for matter.

Gamow realized these photons would have a special distribution of frequencies known as a blackbody spectrum. The temperature was high at the time of decoupling, that is, in the epoch when atoms formed and photons were free to roam across the Universe. But since the Universe has been expanding and cooling for about 14 billion years, the present temperature of the photons would be very low.

Earlier predictions were not very accurate, as this temperature is sensitive to aspects of nuclear reactions that were not accurately understood in the late 1940s. Nevertheless, in 1948 Alpher and Herman predicted this cosmic bath of photons would have a temperature of 5 degrees above absolute zero, or about -451 degrees Fahrenheit. The currently accepted value is 2.73 kelvin. Thus, according to the Big Bang model, the Universe is a giant blackbody, immersed in a bath of very cold photons peaked at microwave wavelengths, the so-called fossil rays from its hot early infancy. In 1965, this radiation was accidentally discovered, and cosmology would never be the same. But that story deserves its own essay.
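
As a quick consistency check on the "peaked at microwave wavelengths" statement, Wien's displacement law applied to the measured 2.73 K temperature (standard constants, not figures from the article) places the peak of the blackbody spectrum at roughly a millimetre:

```latex
\lambda_{\max} = \frac{b}{T}
\approx \frac{2.898\times 10^{-3}\,\mathrm{m\,K}}{2.73\,\mathrm{K}}
\approx 1.1\,\mathrm{mm},
```

which sits squarely in the microwave band, consistent with the fossil radiation described above.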

Entanglement Could Step in Where GPS Is Denied – IEEE Spectrum

Using the strange quantum phenomenon known as entanglement, which can link particles together anywhere in the universe, sensors can become significantly more accurate and faster at detecting motion, a new study reveals. The findings may help augment navigation systems that do not rely on GPS, scientists say.

In the new study, researchers experimented with optomechanical sensors, which use beams of light to analyze how their components move in response to disturbances. The sensors serve as accelerometers, which smartphones use to detect motion. Accelerometers can find use in inertial navigation systems in situations where GPS performs badly, such as underground, underwater, inside buildings, in remote locations, and in places where radio-signal jamming is in use.

To boost the performance of optomechanical sensing, researchers experimented with using entanglement, which Einstein dubbed "spooky action at a distance." Entangled particles essentially act in sync regardless of how far apart they are.

The researchers expect to have a prototype entanglement accelerometer chip within the next two years.

However, quantum entanglement is also incredibly vulnerable to outside interference. Quantum sensors capitalize on this sensitivity to help detect the slightest disturbances in their surroundings.

"Previous research in quantum-enhanced optomechanical sensing has primarily focused on improving sensitivity at a single sensor," says study lead author Yi Xia, a quantum physicist at the University of Arizona at Tucson. "However, recent theoretical and experimental studies have shown that entanglement can significantly improve sensitivity among multiple sensors, an approach known as distributed quantum sensing."

Optomechanical sensors depend on two synchronized laser beams. One beam gets reflected off a component known as an oscillator, and any movement of the oscillator changes the distance the light travels on its way to a detector. Any such difference in distance traveled shows up when the second beam overlaps with the first. If the sensor is still, the two beams are perfectly aligned. If the sensor moves, the overlapping light waves generate interference patterns that reveal the size and speed of the sensors motions.

In the new study, the sensors from Dal Wilson's group at the University of Arizona at Tucson used membranes as oscillators. These acted much like drumheads that vibrate after getting struck.

Instead of having one beam illuminate one oscillator, the researchers split one infrared laser beam into two entangled beams, which they bounced off two oscillators onto two detectors. The entangled nature of this light essentially let two sensors analyze one beam, altogether leading to improvements in speed and precision.

"Entanglement can be leveraged to enhance the performance of force sensing undertaken by multiple optomechanical sensors," says study senior author Zheshen Zhang, a quantum physicist at the University of Michigan at Ann Arbor.

In addition, to boost the precision of the device, the researchers employed squeezed light. Squeezed light takes advantage of a key tenet of quantum physics: Heisenberg's uncertainty principle, which states that one cannot measure one feature of a particle, such as its position, with greater certainty without measuring another feature, such as its momentum, with less certainty. Squeezed light exploits this trade-off to squeeze, or reduce, the uncertainty in the measurement of a given variable, in this case the phase of the waves making up the laser beams, while increasing the uncertainty in the measurement of another variable the researchers can ignore.
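
In one common textbook convention (a generic statement of the principle, not the paper's own notation), the trade-off for the two quadratures of a light field, the phase-like variable X1 and the amplitude-like variable X2, reads:

```latex
\Delta X_1\,\Delta X_2 \;\ge\; \tfrac{1}{4}, \qquad
\text{coherent (ordinary laser) light: } \Delta X_1 = \Delta X_2 = \tfrac{1}{2}, \qquad
\text{squeezed light: } \Delta X_1 = \tfrac{1}{2}e^{-r},\ \ \Delta X_2 = \tfrac{1}{2}e^{+r},\ \ r>0.
```

Shrinking the phase uncertainty at the expense of the amplitude uncertainty is what allows the phase measurement, and hence the inferred oscillator motion, to beat the usual shot-noise limit.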

"We are one of the few groups who can build squeezed-light sources and are currently exploring its power as the basis for the next-generation precision measurement technology," Zhang says.

All in all, the scientists were able to collect measurements that were 40 percent more precise than with two unentangled beams and do it 60 percent faster. In addition, the precision and speed of this method is expected to rise in proportion to the number of sensors, they say.

"The implication of these findings would be that we can further push the performance of ultraprecise force sensing to an unprecedented level," Zhang says.

He adds that improving optomechanical sensors may not only lead to better inertial navigation systems but also help detect enigmatic phenomena such as dark matter and gravitational waves. Dark matter is the invisible substance thought to make up five-sixths of all matter in the universe, and detecting the gravitational effects it might have could help scientists figure out its nature. Gravitational waves are ripples in the fabric of space and time that could help shed light on mysteries from black holes to the Big Bang.

The scientists now plan to miniaturize their system. They can already put a squeezed-light source on a chip just a half centimeter wide. They expect to have a prototype chip in the next year or two that includes a squeezed-light source, beam splitters, waveguides, and inertial sensors. "This would make this technology much more practical, affordable, and accessible," Zhang says.

"In addition, we are currently working with Honeywell, JPL, NIST, and a few other universities in a different program to develop chip-scale quantum-enhanced inertial measurement units," Zhang says. "The vision is to deploy such integrated sensors in autonomous vehicles and spacecraft to enable precise navigation in the absence of GPS signals."

The scientists detailed their findings online 20 April in the journal Nature Photonics.

Astrophysicists reveal the nature of dark matter through the study of … – Science Daily

Most of the matter in the universe, amounting to a staggering 85% by mass, cannot be observed and consists of particles not accounted for by the Standard Model of Particle Physics (see remark 1). These particles are known as Dark Matter, and their existence can be inferred from their gravitational effects on light from distant galaxies. Finding the particle that makes up Dark Matter is an urgent problem in modern physics, as it dominates the mass and, therefore, the gravity of galaxies -- solving this mystery can lead to new physics beyond the Standard Model.

While some theoretical models propose the existence of ultramassive particles as a possible candidate for Dark Matter, others suggest ultralight particles. A team of astrophysicists led by Alfred AMRUTH, a PhD student in the team of Dr Jeremy LIM of the Department of Physics at The University of Hong Kong (HKU), collaborating with Professor George SMOOT, a Nobel Laureate in Physics from the Hong Kong University of Science and Technology (HKUST) and Dr Razieh EMAMI, a Research Associate at the Center for Astrophysics | Harvard & Smithsonian (CFA), has provided the most direct evidence yet that Dark Matter does not constitute ultramassive particles as is commonly thought but instead comprises particles so light that they travel through space like waves. Their work resolves an outstanding problem in astrophysics first raised two decades ago: why do models that adopt ultramassive Dark Matter particles fail to correctly predict the observed positions and the brightness of multiple images of the same galaxy created by gravitational lensing? The research findings were recently published in Nature Astronomy.

Dark Matter does not emit, absorb or reflect light, which makes it difficult to observe using traditional astronomical techniques. Today, the most powerful tool scientists have for studying Dark Matter is through gravitational lensing, a phenomenon predicted by Albert Einstein in his theory of General Relativity. In this theory, mass causes spacetime to curve, creating the appearance that light bends around massive objects such as stars, galaxies, or groups of galaxies. By observing this bending of light, scientists can infer the presence and distribution of Dark Matter -- and, as demonstrated in this study, the nature of Dark Matter itself.

When the foreground lensing object and the background lensed object -- both constituting individual galaxies in the illustration -- are closely aligned, multiple images of the same background object can be seen in the sky. The positions and brightness of the multiply-lensed images depend on the distribution of Dark Matter in the foreground lensing object, thus providing an especially powerful probe of Dark Matter.

Another assumption of the nature of Dark Matter

In the 1970s, after the existence of Dark Matter was firmly established, hypothetical particles referred to as Weakly Interacting Massive Particles (WIMPs) were proposed as candidates for Dark Matter. These WIMPs were thought to be ultramassive -- at least ten times as massive as a proton -- and to interact with other matter only through the weak nuclear force. These particles emerge from Supersymmetry theories, developed to fill deficiencies in the Standard Model, and have since been widely advocated as the most likely candidate for Dark Matter. However, for the past two decades, adopting ultramassive particles for Dark Matter, astrophysicists have struggled to correctly reproduce the positions and brightness of multiply-lensed images. In these studies, the density of Dark Matter is assumed to decrease smoothly outwards from the centres of galaxies in accordance with theoretical simulations employing ultramassive particles.

Beginning also in the 1970s, but in dramatic contrast to WIMPs, versions of theories that seek to rectify deficiencies in the Standard Model, or those (e.g., String Theory) that seek to unify the four fundamental forces of nature (the three in the Standard Model, along with gravity), advocate the existence of ultralight particles. Referred to as axions, these hypothetical particles are predicted to be far less massive than even the lightest particles in the Standard Model and constitute an alternative candidate for Dark Matter.

According to the theory of Quantum Mechanics, ultralight particles travel through space as waves, interfering with each other in such large numbers as to create random fluctuations in density. These random density fluctuations in Dark Matter give rise to crinkles in spacetime. As might be expected, the different patterns of spacetime around galaxies depending on whether Dark Matter constitutes ultramassive or ultralight particles -- smooth versus crinkly -- ought to give rise to different positions and brightness for multiply-lensed images of background galaxies.
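
A rough, illustrative estimate (the mass and velocity below are common assumptions in "fuzzy dark matter" studies, not values quoted in this article) shows why such ultralight particles behave as waves on galactic scales. A particle of mass around 10^-22 eV moving at a typical galactic velocity of roughly 200 km/s has a de Broglie wavelength of order a kiloparsec:

```latex
\lambda_{\mathrm{dB}} = \frac{h}{m v}
\approx \frac{6.63\times 10^{-34}\,\mathrm{J\,s}}
{(1.8\times 10^{-58}\,\mathrm{kg})\,(2\times 10^{5}\,\mathrm{m\,s^{-1}})}
\approx 2\times 10^{19}\,\mathrm{m} \approx 0.6\,\mathrm{kpc}.
```

Wavelengths that large mean the interference "crinkles" in the Dark Matter density are comparable to the scales probed by gravitational lensing, whereas for ultramassive WIMPs the corresponding wavelength is microscopic and the density distribution looks smooth.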

In work led by Alfred AMRUTH, a PhD student in Dr Jeremy LIM's team at HKU, astrophysicists have for the first time computed how gravitationally-lensed images generated by galaxies incorporating ultralight Dark Matter particles differ from those incorporating ultramassive Dark Matter particles.

Their research has shown that the general level of disagreement found between the observed and predicted positions as well as the brightness of multiply-lensed images generated by models incorporating ultramassive Dark Matter can be resolved by adopting models incorporating ultralight Dark Matter particles. Moreover, they demonstrate that models incorporating ultralight Dark Matter particles can reproduce the observed positions and brightness of multiply-lensed galaxy images, an important achievement that reveals the crinkly rather than smooth nature of spacetime around galaxies.

'The possibility that Dark Matter does not comprise ultramassive particles, as has long been advocated by the scientific community, alleviates other problems in both laboratory experiments and astronomical observations,' explains Dr Lim. 'Laboratory experiments have been singularly unsuccessful at finding WIMPs, the long-favoured candidate for Dark Matter. Such experiments are in their final stretch, culminating in the planned DARWIN experiment, leaving WIMPs with no place to hide if not found (see remark 2).'

Professor Tom BROADHURST, an Ikerbasque Professor at the University of the Basque Country, a Visiting Professor at HKU, and a co-author of the paper adds, 'If Dark Matter comprises ultramassive particles, then according to cosmological simulations, there should be hundreds of satellite galaxies surrounding the Milky Way. However, despite intensive searches, only around fifty have been discovered so far. On the other hand, if Dark Matter comprises ultralight particles instead, then the theory of Quantum Mechanics predicts that galaxies below a certain mass simply cannot form owing to the wave interference of these particles, explaining why we observe a lack of small satellite galaxies around the Milky Way.'

'Incorporating ultralight rather than ultramassive particles for Dark Matter resolves several longstanding problems simultaneously in both particle physics and astrophysics,' said Alfred Amruth. 'We have reached a point where the existing paradigm of Dark Matter needs to be reconsidered. Waving goodbye to ultramassive particles, which have long been heralded as the favoured candidate for Dark Matter, may not come easily, but the evidence accumulates in favour of Dark Matter having wave-like properties as possessed by ultralight particles.' The pioneering work used the supercomputing facilities at HKU, without which this work would not have been possible.

The co-author Professor George SMOOT added, 'Understanding the nature of particles that constitute Dark Matter is the first step towards New Physics. This work paves the way for future tests of Wave-like Dark Matter in situations involving gravitational lensing. The James Webb Space Telescope should discover many more gravitationally-lensed systems, allowing us to make even more exacting tests of the nature of Dark Matter.'

Remarks: 1. The Standard Model of Particle Physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions -- excluding gravity) in the universe and classifying all known elementary particles. Although the Standard Model has met with huge successes, it leaves some phenomena unexplained -- e.g., the existence of particles that interact with known particles in the Standard Model only through gravity -- and falls short of being a complete theory of fundamental interactions.

Prestigious NSF fellowships awarded to 43 graduate students … – University of Colorado Boulder

Students from across campus have received Graduate Research Fellowships, a prestigious award that recognizes and supports outstanding students in a wide variety of science-related disciplines

The National Science Foundation has awarded 43 University of Colorado Boulder students the prestigious graduate research fellowship, the federal agency announced earlier this month.

The Graduate Research Fellowship Program (GRFP) recognizes outstanding graduate students from across the country in science, technology, engineering and mathematics (STEM) fields, paving the way for their continued work exploring some of the most complex and pressing issues of our time.

This year's recipients of the five-year fellowship represent a wide swath of disciplines, spanning quantum physics to ecology. Each GRFP recipient will receive three years of financial support, including an annual stipend of $37,000, as well as professional development and research opportunities.

Of those 43 winners, a total that places the university in the top fifteen nationwide in terms of number awarded, 60% participated in a workshop or information session organized by the Graduate School, in partnership with the College of Arts and Sciences and the College of Engineering and Applied Sciences. These included specialized writing workshops, coaching sessions and general informational sessions about applying for the GRFP.

"Our continued extraordinary performance among the nations top graduate schools in securing NSF GRFPs is not only a testament to our outstanding graduate students at CU, but also the Graduate Schools approach to cultivating talent with our campus partners and tremendously supportive faculty, said E. Scott Adler, dean of the Graduate School and the vice provost for graduate affairs. We are very proud of the students who have been recognized by this highly competitive program.

This year's recipients include:

In addition to the fellowship award winners, 15 students were recognized with an honorable mention. Eleven alumni were also recognized.

India’s Minister of Science visits Imperial to strengthen research links – Imperial College London

India's Minister of Science and Technology Dr Jitendra Singh visited Imperial to meet researchers and students and strengthen ties with India.

The Minister, accompanied by Professor Ajay Kumar Sood, Principal Scientific Advisor to the Government of India, and other scientists and officials, visited Imperial's labs and heard about Imperial's growing research collaborations with India.

The Minister also met with some of Imperial's Indian students and scholars and encouraged them to reach their potential. He also highlighted the rapid progress science in India was making in a number of fields and the growth of Indian startups.

Professor Mary Ryan, Vice Provost (Research and Enterprise) at Imperial College London, said: "We're incredibly proud of our longstanding connections with India.

"We are privileged to host just over 800 talented and highly entrepreneurial students from India.Imperial is also home to a thriving community of researchers and staff with connections to India.

"Our academics collaborate with a range of partners in India, as well as our incredible alumni community, to tackle shared health and climate challenges such as the transition to clean energy, antimicrobial resistance, and infectious disease."

During the visit Imperial announced that it was creating a new scholarship programme for Indian Masters students.

The 'Future Leaders Scholarship' programme will support 30 students over the next three years, with half of the scholarships reserved for female scholars. The scholarships will be for students in MSc programmes across Imperial's Faculties of Engineering, Natural Sciences, Medicine and the Business School.

Professor Peter Haynes, Vice-Provost (Education and Student Experience) at Imperial, said: "It is a real priority for Imperial to continue to facilitate and support two-way mobility between India and the UK.

"I hope that weare able towelcome even more students from India in the future and I amvery pleasedto announce that Imperial is investing just over 400,000 in scholarships for the future STEM-B leaders of India.The investment will see the launch of 30 merit-based scholarships over the next three years, with the first application round opening next academic year.

"At least 50 per cent of these prestigious Future Leaders scholarships will be reserved for female scholars. I hope that we can continue to work together to build upon our success and that Imperial can be at the fore of supporting UK-India partnerships in the coming decade."

Imperial also announced a new partnership with Chevening for scholars from India. Funding will be available for scholars who demonstrate the greatest potential to become leaders, decision-makers and opinion-formers in India. The Chevening scholarship award will support academic fees and provide a monthly stipend.

The Science Minister visited Imperial's Carbon Capture Pilot Plant to see how Imperial students are training to become the next generation of chemical engineers.

He also saw a demonstration of the latest imaging from the Mars Rover at the Data Science Institute by Professor Sanjeev Gupta.

The Minister then visited Imperial's Hydrodynamics lab for a demonstration of the wave tank by Professor Washington Ochieng and Professor Graham Hughes.

Imperial has longstanding and successful links with India - and is committed to expanding and strengthening partnerships with the country.

Imperial and the Indian Institute of Science (IISc), Bangalore, launched an ambitious partnership in research and education.

The joint seed fund with IISc has already enabled exchange mobility this academic year, with partners spending time in each other's labs to explore topics as diverse as bio-acoustics, air pollutants and quantum physics.

Imperial's School of Public Health has been collaborating with the Indian Council of Medical Research (ICMR) in support of India's COVID-19 response. Academics from the Jameel Institute and MRC Centre for Global Infectious Disease Analysis, including Professor Katharina Hauck and Professor Nimalan Arinaminpathy, have supported scientists within the ICMR to perform modelling analysis to address key questions faced by public health authorities in the country. For example, early in India's vaccination drive against COVID-19, the largest in the world, Imperial collaborated with the ICMR to provide modelling analysis informing the prioritisation of risk groups.

A major focus of Professor Arinaminpathy's team's research is the control of human tuberculosis (TB) in high-burden countries. They work closely with India's National Tuberculosis Elimination Programme, contributing mathematical models to help inform intervention planning to meet India's ambitious goals for TB elimination.

Earlier this year Imperial welcomed the Indian High Commissioner to the College to discuss deepening ties with the country.

Last year Imperial hosted the former Principal Scientific Advisor, Prof Vijay Raghavan and his UK counterpart, Sir Patrick Vallance, at an event to discuss the role of science and innovation partnerships in driving the India-UK Roadmap and creating healthier, resilient societies.

Newly observed effect makes atoms transparent to certain frequencies of light – Phys.org

A newly discovered phenomenon dubbed "collectively induced transparency" (CIT) causes groups of atoms to abruptly stop reflecting light at specific frequencies.

CIT was discovered by confining ytterbium atoms inside an optical cavity (essentially, a tiny box for light) and blasting them with a laser. Although the laser's light will bounce off the atoms up to a point, as the frequency of the light is adjusted, a transparency window appears in which the light simply passes through the cavity unimpeded.

"We never knew this transparency window existed," says Caltech's Andrei Faraon (BS '04), William L. Valentine Professor of Applied Physics and Electrical Engineering, and co-corresponding author of a paper on the discovery that was published on April 26 in the journal Nature. "Our research has primarily become a journey to find out why."

An analysis of the transparency window points to it being the result of interactions in the cavity between groups of atoms and light. This phenomenon is akin to destructive interference, in which waves from two or more sources can cancel one another out. The groups of atoms continually absorb and re-emit light, which generally results in the reflection of the laser's light. However, at the CIT frequency, there is a balance created by the re-emitted light from each of the atoms in a group, resulting in a drop in reflection.
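
The cancellation at the heart of this picture can be written in a single line: two fields of equal amplitude but opposite phase sum to zero (a generic illustration of destructive interference, not the specific model in the Nature paper):

```latex
E_{\mathrm{tot}} = E_0\,e^{i\omega t} + E_0\,e^{i(\omega t + \pi)}
                 = E_0\,e^{i\omega t}\left(1 + e^{i\pi}\right) = 0 .
```

At the CIT frequency, the light re-emitted by the ensemble ends up in just such a phase relationship with the reflected field, so the net reflection drops and the cavity appears transparent.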

"An ensemble of atoms strongly coupled to the same optical field can lead to unexpected results," says co-lead author Mi Lei, a graduate student at Caltech.

The optical resonator, which measures just 20 microns in length and includes features smaller than 1 micron, was fabricated at the Kavli Nanoscience Institute at Caltech.

"Through conventional quantum optics measurement techniques, we found that our system had reached an unexplored regime, revealing new physics," says graduate student Rikuto Fukumori, co-lead author of the paper.

Besides the transparency phenomenon, the researchers also observed that the collection of atoms can absorb and emit light from the laser either much faster or much slower compared to a single atom depending on the intensity of the laser. These processes, called superradiance and subradiance, and their underlying physics are still poorly understood because of the large number of interacting quantum particles.

"We were able to monitor and control quantum mechanical light-matter interactions at nanoscale," says co-corresponding author Joonhee Choi, a former postdoctoral scholar at Caltech who is now an assistant professor at Stanford University.

Though the research is primarily fundamental and expands our understanding of the mysterious world of quantum effects, this discovery has the potential to one day help pave the way to more efficient quantum memories in which information is stored in an ensemble of strongly coupled atoms. Faraon has also worked on creating quantum storage by manipulating the interactions of multiple vanadium atoms.

"Besides memories, these experimental systems provide important insight about developing future connections between quantum computers," says Manuel Endres, professor of physics and Rosenberg Scholar, who is a co-author of the study.

More information: Mi Lei et al, Many-body cavity quantum electrodynamics with driven inhomogeneous emitters, Nature (2023). DOI: 10.1038/s41586-023-05884-1

Explainer: India's quantum computing ambitions – The Financial Express

The Centre has approved Rs 6,003 crore for the National Quantum Mission, to fund scientific and industrial research development in quantum technology. What is quantum technology and why is India so keen on investing in it? Where will the applications of this lie? Sarthak Ray takes a look at these questions

Quantum technology is largely understood as the segment of technology that is based on principles of quantum physics. Quantum physics, in turn, is the study of matter and energy at the most fundamental level, where classical laws of physics don't apply. It aims to uncover the properties and behaviours of the very building blocks of nature, says Caltech.

Quantum physics principles and discoveries already mark their presence in our world; the discoveries over the decades have fuelled innovation and allowed us to come up with devices and applications such as lasers and transistors. They have also put us on the path to quantum computing, which is still under development.

Quantum computers have the same foundational elements as classical ones: they use chips, circuits and logic gates, and operations are orchestrated by algorithms. Data is also transmitted in the binary code of 0s and 1s. Where they differ is that classical computing uses the bit as the fundamental unit of data; a bit can be either 1 or 0, exclusively.

Quantum computing, on the other hand, uses quantum bits, or qubits. A qubit exhibits superposition, a quantum physics principle according to which an object exists as a combination of multiple possible states simultaneously. Imagine waves originating from separate points on a pond and travelling outward. Eventually, waves from the distinct points of origin form a more complex pattern when they overlap. This is superposition. A qubit is a superposition of both 1 and 0 simultaneously until its state is measured.
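
In the standard notation of quantum computing (a generic textbook statement, not anything specific to the National Quantum Mission), a qubit's state is written as a weighted combination of the two classical values, and measurement collapses it to one of them:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 .
```

Measuring the qubit yields 0 with probability |α|² and 1 with probability |β|²; until that measurement, both possibilities coexist, which is the "both 1 and 0 simultaneously" described above.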

Why do we need quantum computing? Well, for a start, it offers unprecedented speed of computing. We only have nascent quantum computers now, but the speed achieved by even these is mind-blowing. Google's 54-qubit Sycamore processor performed a target computation in 200 seconds; the world's fastest supercomputer would have taken 10,000 years.

One can make qubits by manipulating normal atoms, atoms carrying electrical charge (ions), or even electrons. They can also be made by nano-engineering artificial atoms, or semiconductor nanocrystals that have their own discrete electronic structure, like atoms. These are made with a printing method called lithography. The states of different qubits exhibiting superposition can become entangled, meaning they can be linked to each other via quantum mechanics.

The computing edge and the impact on a host of areas, including communication, is one that no country would want to miss. There are massive gains to be made in the areas of quantum cryptography and quantum sensing.

So, naturally, many nations, especially the developed ones, have been quick to announce public-sector focus on quantum computing. Six nations, the US, Canada, China, Finland, France and Germany, have loosened their purse strings to that end; the US has committed $7 billion and China $15 billion.

India, on the other hand, has scaled back funding; when the Mission was first announced in 2019, the amount earmarked was Rs 8,000 crore, and the latest announcement has scaled back that amount. There are a few Indian companies that are working with MNCs on quantum computing, but what India needs is a push from the government to walk faster on quantum computing research.

Globally, some marquee tech companies, including IBM, are working on quantum computing projects. The technology, for now, is too costly and cumbersome to be commercialised. For instance, qubits are kept in chambers that chill them to temperatures near absolute zero.

Massive Black Holes and Glue Particles: Physicists Uncover a … – SciTechDaily

Black holes with dimensions of billions of kilometers (left, as imaged by the Event Horizon Telescope) share features with a dense state of subatomic gluons created in collisions of atomic nuclei (right). Credit: Event Horizon Telescope Collaboration (left) and Brookhaven National Laboratory (right).

Physicists have demonstrated that black holes and the dense state of gluons, which are the glue particles responsible for holding nuclear matter together, have similar characteristics.

Physicists have uncovered a remarkable correspondence between dense gluon states, which are responsible for the strong nuclear force within atomic nuclei, and massive black holes in the universe. Dense gluon walls, referred to as color glass condensate (CGC), emerge from collisions between atomic nuclei and are incredibly small, measuring just 10⁻¹⁹ kilometers in size, less than a billionth of a kilometer. In stark contrast, black holes can span billions of kilometers.

This groundbreaking research reveals that both systems consist of densely arranged, self-interacting force carrier particles. In the case of CGC, these particles are gluons, while in black holes, they are gravitons. Both the organization of gluons within CGC and gravitons within black holes is optimized for the energy and size of each respective system.

The high degree of order in CGC and black holes is driven by each system packing in the maximal amount of quantum information possible about the particles' features. This includes their spatial distributions, velocities, and collective forces. Such limits on information content are universal. This means the research suggests that quantum information science could provide novel organizing principles for understanding these widely different systems.

The mathematical correspondence between these systems also means that studying each can improve our understanding of the other. Of particular interest are comparisons of gravitational shockwaves in black hole mergers with gluon shockwaves in nuclear collisions.

Scientists study the strong force in nuclear collisions. For example, at the Relativistic Heavy Ion Collider, a Department of Energy user facility, atomic nuclei accelerated close to the speed of light become dense walls of gluons known as color glass condensate (CGC). When the nuclei collide, CGC evolves to form a nearly perfect liquid of quarks and gluons, the fundamental building blocks that make up all visible matter.

Though the strong force operates at subatomic scales, this recent analysis by scientists at Ludwig Maximilian University of Munich, the Max Planck Institute for Physics, and Brookhaven National Laboratory shows that CGC shares features with black holes, enormous conglomerates of gravitons that exert gravitational force across the universe.

Both sets of self-interacting particles appear to organize themselves in a way that satisfies a universal limit on the amount of entropy, or disorder, that can exist in each system. This mathematical correspondence points to similarities between black hole formation, thermalization, and decay and what happens when walls of gluons collide in nuclear collisions at ultrarelativistic speeds, near the speed of light.

The limit on entropy that drives this correspondence is related to maximal information packing, a key feature of quantum information science (QIS). QIS may therefore further inform scientists' understanding of gluons, gravitons, CGC, and black holes. This approach may also advance the design of quantum computers that use cold atoms to simulate and address questions about these complex systems.

Reference: "Classicalization and unitarization of wee partons in QCD and gravity: The CGC-black hole correspondence" by Gia Dvali and Raju Venugopalan, 29 March 2023, Physical Review D. DOI: 10.1103/PhysRevD.105.056026

The study was funded by the Department of Energy Office of Science, Nuclear Physics program, the Humboldt Foundation, and the German Research Foundation.

Fighting giants: eco-activist Vandana Shiva on her battle against GM multinationals – The Guardian

The formidable Indian environmentalist discusses her 50-year struggle to protect seeds and farmers from the poison cartel of corporate agriculture

Fri 28 Apr 2023 01.00 EDT

"You don't have to look very far to find the essence of life," says Vandana Shiva. But in a society caught up in a blur of technological advances, bio-hacks and attempts to improve ourselves and the natural world, she fears we are hellbent on destroying it.

"Everything comes from the seed, but we have forgotten that the seed isn't a machine," says Shiva. "We think we can engineer life, we can change the carefully organised DNA of a living organism, and there will be no wider impact. But this is a dangerous illusion."

For almost five decades, Shiva has been deeply engaged in the fight for environmental justice in India. Regarded as one of the world's most formidable environmentalists, she has worked to save forests, shut down polluting mines, exposed the dangers of pesticides, spurred on the global campaign for organic farming, championed ecofeminism and gone up against powerful giant chemical corporations.

Her battle to protect the world's seeds in their natural form, rather than genetically altered and commercially controlled versions, continues to be her life's work.

Shiva's anti-globalisation philosophy and pilgrimages across India have often drawn comparisons with Mahatma Gandhi. Yet while Gandhi became synonymous with the spinning wheel as a symbol of self-reliance, Shiva's emblem is the seed.

Now 70, Shiva, who is divorced and has one son, has spent her life refusing to conform to the patriarchal norms so often imposed on women in India, particularly in the 1950s. She has published more than 20 books and when she is not travelling the world for workshops or speaking tours, she spends her time between her office in Delhi and her organic farm in the foothills of the Himalayas.

She credits her spirit of resistance to her parents, who were "feminists at a higher level than I've ever known, long before we even knew the word feminism". After 1947, when India gained independence, her father left the military for a job in the forests of the mountainous state of Uttarakhand, where Shiva was born and brought up always to believe she was equal to men. "The forests were my identity and from an early age the laws of nature captivated me," she says.

She was about six when she stumbled on a book of quotes by Albert Einstein buried in a small, musty library in a forest lodge. She was transfixed, determined against all odds to become a physicist. Though science was not taught at her rural convent school, Shiva's parents encouraged her curiosity and found ways for her to learn. By the time she was in her 20s, she was completing her PhD in quantum physics at a Canadian university.

Yet as logging, dams and development wreaked ecological devastation on Uttarakhand's forests, and local peasant women rose up to fight it in a movement known as Chipko, Shiva realised, on returning to India, that her heart lay not with quantum physics but with a different, nagging question. "I couldn't understand why we were told that new technology brings progress, but everywhere I looked, local people were getting poorer and landscapes were being devastated as soon as this development or new technology came in," she says.

In 1982, in her mother's cow shed in the mountain town of Dehradun, Shiva set up her research foundation, exploring the crossover between science, technology and ecology. She began to document the green revolution that swept rural India from the late 1960s, when, in a bid to drive up crop yields and avert famine, the government pushed farmers to introduce technology, mechanisation and agrochemicals.

It instilled in her a lifelong opposition to industrial interference in agriculture. Though the green revolution is acknowledged to have prevented widespread starvation and introduced some necessary modernisation into rural communities, it was also the beginning of a continuing system of monoculture in India, where farmers were pushed to abandon native varieties and instead plant a few high-yielding wheat and rice crops in quick-turnaround cycles, burning the stubble in their fields in between.

It also created a reliance on subsidised fertilisers and chemicals that, though costly and environmentally disastrous, lasts to this day. Soil in fertile states such as Punjab, once known as the breadbasket of India, has been stripped of its rich minerals, with watercourses running dry, rivers polluted with chemical run-off and farmers in a perpetual state of deep crisis and anger.

Shiva's suspicions about the chemical industry deepened when, in the early 1990s, she was privy to some of the first multilateral discussions around agricultural biotechnology and plans by chemical companies to alter crop genes for commercial purposes.

"There was a race on by companies to develop and patent these GM crops, but no one was stopping to ask: what will be the impact on the environment? How will they impact on diversity? What will this cost the farmers? They only wanted to win the race and control all the world's seeds. To me, it all seemed so wrong," says Shiva.

In 1991, five years before the first genetically modified (GM) crops were planted, she founded Navdanya, meaning "nine seeds", an initiative to save India's native seeds and spread their use among farmers. Eight years later, she took the chemical monolith Monsanto, the world's largest producer of seeds, to the supreme court for bringing its GM cotton into India without permission.

Monsanto became notorious in the 1960s for producing the herbicide Agent Orange for the US military during the Vietnam war, and subsequently led the development of GM crops in the 1990s. It moved quickly to penetrate the international market with its privatised seeds, particularly in developing, predominantly agricultural countries.

The company, which was bought in 2018 by the German pharmaceutical and biotech company Bayer, became embroiled in legal action. In 2020 it announced an $11bn (£8.7bn) payout to settle claims, on behalf of almost 100,000 people, of links between its herbicide and cancer, but denied any wrongdoing. In 2016, dozens of civil society groups staged a "people's tribunal" in The Hague, finding Monsanto guilty of human rights violations and of developing an unsustainable system of farming.

Shiva says taking Monsanto to court felt like going up against a mafia and alleges that many attempts were made to threaten and pressure her into not filing the case.

Monsanto finally got permission to bring GM cotton to India in 2002, but Shiva has kept up her fight against chemical multinationals, which she refers to as the "poison cartel". Currently, more than 60% of the world's commercial seeds are sold by just four companies, which have led the push to patent seeds, orchestrated a global monopoly on certain GM crops such as cotton and soya, and sued hundreds of small-scale farmers for saving seeds from commercial crops.

"We have taken on these giants when they said 'we've invented rice, we've invented wheat', and we have won," she says.

She remains adamant that GM crops have failed. But though the legacy of GM pest-resistant cotton in India is complex, and pesticide use has increased, not all would agree that the issue is black and white. Indeed, her outspoken and often intransigent positions on GM organisms and globalisation have earned her many critics and powerful enemies.

She has been accused of exaggerating the dangers of GM and of simplifying the facts around the supposed direct correlation between farmers' suicides and genetically modified crops, and has been called an "enemy of progress" for her rhetoric against globalisation, given the threats facing the world.

As the global population has ballooned to 8 billion people, and the climate crisis throws agriculture into disarray, even some prominent environmentalists have shifted their positions and have argued that GM crops can underpin food security. Countries including the UK, which had imposed strict laws around GM foods, are now pushing for more gene editing of crops and animals. Last year India approved the release of a new GM mustard seed.

Shiva is scathing of this renewed push for GM organisms, arguing that much of the gene-editing process is still dangerously unpredictable and calling it "ignorance" to think climate-adapted crops can only come from industrial labs.

"Farmers have already bred thousands of climate-resilient and salt-tolerant seeds; they weren't the invention of a few big companies, no matter what patents they claim," she says.

For Shiva, the global crisis facing agriculture will be solved not by the "poison cartel", nor by a continuation of fossil fuel-guzzling, industrialised farming, but by a return to local, small-scale farming no longer reliant on agrochemicals. "Globally, the subsidies are $400bn a year to make an unviable agriculture system work," she says.

"This industrialised, globalised system of food is destroying soil, it is destroying water and it is generating 30% of our greenhouse gases. If we want to fix this, we've got to shift from industrial to ecological farming."

Nonetheless, while her crusade against the might of chemical corporations will continue, Shiva considers her most important work to be her travels through India's villages: collecting and saving seeds (including 4,000 varieties of rice), setting up more than 100 seed banks, and helping farmers return to organic methods.

"My proudest work is listening to the seed and her creativity," she says. "I'm proud of the fact that a lie is a lie is a lie, no matter how big the power that tells the lie. And I'm proud that I've never, ever hesitated in speaking the truth."

This article was amended on 28 April 2023. A previous version incorrectly stated that Vandana Shiva had no children; she has a son.

Vandana Shiva's latest book, Terra Viva: My Life in a Biodiversity of Movements, is published by Chelsea Green. Shiva will speak at the Extinction or Regeneration conference at the QEII Centre, London, on 11-12 May.


Continue reading here:

Fighting giants: eco-activist Vandana Shiva on her battle against GM multinationals - The Guardian
