Category Archives: Deep Mind
Google Deepmind breakthrough could revolutionise weather forecasts, company says – The Independent
A new artificial intelligence breakthrough could revolutionise weather forecasts, its creators say.
The new technology, built by Google DeepMind, allows 10-day weather forecasts to be produced in just a minute. And it does so with unprecedented accuracy, DeepMind said.
The forecasts made by the GraphCast system are not only more accurate but produced more efficiently, meaning they can be made more quickly and with fewer resources.
It can also help spot possible extreme weather events, being able to predict the movement of cyclones and provide early alerts of possible floods and extreme temperatures. Google therefore says it could help save lives by allowing people to better prepare.
At the moment, weather forecasts usually rely on a system called Numerical Weather Prediction, which combines physics equations with computer algorithms run on supercomputers. That requires vast computing resources as well as detailed expertise from weather forecasters.
The new system is one of a range of technologies that instead use deep learning. Rather than working through physical equations, it learns from weather data and then uses that to model how the Earth's weather changes over time.
Creating the model was intensive, since it required training on decades of weather data. But now that it is created it could vastly reduce the resources required for predicting the weather: 10-day forecasts take a minute on one machine, a process that might otherwise take hours and use hundreds of machines in a supercomputer.
In use, the system was able to provide more accurate forecasts than the gold-standard traditional system in 90 per cent of tests, its creators write in a paper newly published in the journal Science.
What's more, the system is able to spot extreme weather events despite not being trained on them. In September, for instance, it predicted the path of Hurricane Lee nine days before it arrived, compared with six days for traditional forecasts.
DeepMind noted that GraphCast's prediction of extreme temperatures could be particularly useful given the climate crisis. The system can predict areas where heat will rise above the historical top temperatures, allowing people to anticipate heat waves and prepare for them.
The company will also open source the system so that it can be used by others. That may help with other new tools and research to help tackle environmental challenges, Deepmind said.
DeepMind AI accurately forecasts weather on a desktop computer – Nature.com
Conventional weather forecasts are the result of intensive processing of data from weather stations around the world. Credit: Carlos Munoz Yague/Look At Science/Science Photo Library
Artificial-intelligence (AI) firm Google DeepMind has turned its hand to the intensive science of weather forecasting and developed a machine-learning model that outperforms the best conventional tools as well as other AI approaches at the task.
The model, called GraphCast, can run from a desktop computer and makes more accurate predictions than conventional models in minutes rather than hours.
"GraphCast currently is leading the race amongst the AI models," says computer scientist Aditya Grover at the University of California, Los Angeles. The model is described in Science on 14 November.
Predicting the weather is a complex and energy-intensive task. The standard approach is called numerical weather prediction (NWP), which uses mathematical models based on physical principles. These tools, known as physical models, crunch weather data from buoys, satellites and weather stations worldwide using supercomputers. The calculations accurately map out how heat, air and water vapour move through the atmosphere, but they are expensive and energy-intensive to run.
To reduce the financial and energy cost of forecasting, several technology companies have developed machine-learning models that rapidly predict the future state of global weather from past and current weather data. Among them are DeepMind, computer chip-maker Nvidia and Chinese tech company Huawei, alongside a slew of start-ups such as Atmo, based in Berkeley, California. Of these, Huawei's Pangu-weather model is the strongest rival to the gold-standard NWP system at the European Centre for Medium-Range Weather Forecasts (ECMWF) in Reading, UK, which provides world-leading weather predictions up to 15 days in advance.
Machine learning is spurring "a revolution in weather forecasting", says Matthew Chantry at the ECMWF. AI models run 1,000 to 10,000 times faster than conventional NWP models, leaving more time for interpreting and communicating predictions, says data-visualization researcher Jacob Radford at the Cooperative Institute for Research in the Atmosphere in Colorado.
GraphCast, developed by Google's AI company DeepMind in London, outperforms conventional and AI-based approaches at most global weather-forecasting tasks. Researchers first trained the model on estimates of past global weather made from 1979 to 2017 by physical models. This allowed GraphCast to learn links between weather variables such as air pressure, wind, temperature and humidity.
The trained model uses the current state of global weather and weather estimates from 6 hours earlier to predict the weather 6 hours ahead. Earlier predictions are fed back into the model, enabling it to make estimates further into the future. DeepMind researchers found that GraphCast could use global weather estimates from 2018 to make forecasts up to 10 days ahead in less than a minute, and the predictions were more accurate than those of the ECMWF's High RESolution forecasting system (HRES), one version of its NWP, which takes hours to produce a forecast.
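The autoregressive rollout described above can be sketched in a few lines. This is an illustrative toy, not GraphCast itself: `predict_6h` is a hypothetical stand-in for the trained model (the real one is a learned graph neural network over global weather grids); here it just extrapolates linearly from the two most recent states.

```python
def predict_6h(state_prev, state_now):
    # Hypothetical placeholder for the trained model: maps the two most
    # recent weather states (6 hours apart) to the state 6 hours ahead.
    # Real GraphCast learns this mapping; we extrapolate linearly.
    return [2 * now - prev for prev, now in zip(state_prev, state_now)]

def rollout(state_prev, state_now, days=10):
    """Roll the model forward in 6-hour steps, feeding each prediction
    back in as input, until the requested horizon is reached."""
    steps = days * 24 // 6          # 40 steps for a 10-day forecast
    forecasts = []
    for _ in range(steps):
        state_next = predict_6h(state_prev, state_now)
        forecasts.append(state_next)
        # The newest prediction becomes an input for the next step.
        state_prev, state_now = state_now, state_next
    return forecasts

# Two toy "global weather states" with a single variable each:
out = rollout([10.0], [11.0], days=10)
print(len(out))  # 40 six-hour steps cover 10 days
```

The key design point, matching the article's description, is that the model only ever predicts one 6-hour step; longer horizons come entirely from feeding predictions back in as inputs.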
"In the troposphere, which is the part of the atmosphere closest to the surface that affects us all the most, GraphCast outperforms HRES on more than 99% of the 12,000 measurements that we've done," says computer scientist Rémi Lam at DeepMind in London. Across all levels of the atmosphere, the model outperformed HRES on 90% of weather predictions.
GraphCast predicted the state of 5 weather variables close to the Earth's surface, such as the air temperature 2 metres above the ground, and 6 atmospheric variables, such as wind speed, further from the Earth's surface.
It also proved useful in predicting severe weather events, such as the paths taken by tropical cyclones, and extreme heat and cold episodes, says Chantry.
When they compared the forecasting ability of GraphCast with Pangu-weather, the DeepMind researchers found that their model beat 99% of weather predictions that had been described in a previous Huawei study.
Chantry notes that although GraphCast's performance was superior to other models in this study, based on its evaluation by certain metrics, future assessments of its performance using other metrics could lead to slightly different results.
Rather than entirely replacing conventional approaches, machine-learning models, which are still experimental, could boost particular types of weather prediction that standard approaches aren't good at, says Chantry, such as forecasting rainfall that will hit the ground within a few hours.
And standard physical models are still needed to provide the estimates of global weather that are initially used to train machine-learning models, says Chantry. "I anticipate it will be another two to five years before people can use forecasting from machine-learning approaches to make decisions in the real world," he adds.
In the meantime, problems with machine-learning approaches must be ironed out. Unlike NWP models, researchers cannot fully understand how AIs such as GraphCast work, because the decision-making processes happen inside the AI's "black box", says Grover. This calls into question their reliability, she says.
AI models also run the risk of amplifying biases in their training data and require a lot of energy for training, although they consume less energy than NWP models, says Grover.
DeepMind AI can beat the best weather forecasts – but there is a catch – New Scientist
Can AI tell you if you will need an umbrella? Credit: Sebastien Bozon/AFP via Getty Images
AI can predict the weather 10 days ahead more accurately than current state-of-the-art simulations, says AI firm Google DeepMind, but meteorologists have warned against abandoning weather models based on real physical principles in favour of relying solely on patterns in data, while pointing out shortcomings in the AI approach.
Existing weather forecasts are based on mathematical models, which use physics and powerful supercomputers to deterministically predict what will happen in the future. These models have slowly become more accurate by adding finer detail, which in turn requires more computation and therefore ever more powerful computers and higher energy demands.
Rémi Lam at Google DeepMind and his colleagues have taken a different approach. Their GraphCast AI model is trained on four decades of historical weather data from satellites, radar and ground measurements, identifying patterns that not even Google DeepMind understands. "Like many machine-learning AI models, it's not very easy to interpret how the model works," says Lam.
To make a forecast, it uses real meteorological readings, taken from more than a million points around the planet at two given moments in time six hours apart, and predicts the weather six hours ahead. Those predictions can then be used as the inputs for another round, forecasting a further six hours into the future.
Researchers at DeepMind ran this process with data from the European Centre for Medium-Range Weather Forecasts (ECMWF) to create a 10-day forecast. They say it beat the ECMWF's gold-standard high-resolution forecast (HRES) by giving more accurate predictions on more than 90 per cent of tested data points. At some altitudes, this accuracy rose as high as 99.7 per cent.
Matthew Chantry at the ECMWF, who worked with Google DeepMind, says his organisation had previously seen AI as a tool to supplement existing mathematical models, but that in the past 18 months it has come to be regarded as something that could actually provide forecasts on its own.
"We at the ECMWF view this as a hugely exciting technology to lower the energy costs of making forecasts, but also potentially improve them. There's probably more work to be done to create reliable operational products, but this is likely the beginning of a revolution (this is our assessment) in how weather forecasts are created," he says. Google DeepMind says that making 10-day forecasts with GraphCast takes less than a minute on a high-end PC, while HRES can take hours of supercomputer time.
But some meteorologists have expressed caution about turning weather forecasting over to AI. Ian Renfrew at the University of East Anglia, UK, says GraphCast currently lacks the ability to marshal data for its own starting state, a process known as data assimilation. In traditional forecasts, this data is carefully placed into the simulation after thorough checks against physics and chemistry calculations to ensure accuracy and consistency. Currently, GraphCast needs to use starting states prepared in the same way by the ECMWF's own tools.
"Google is not going to be running weather forecasts anytime soon, because they cannot do the data assimilation," says Renfrew. "And the data assimilation is typically half to two-thirds of the computing time in these forecasting systems."
He says that he would also have concerns about ditching deterministic models based on chemistry and physics entirely and relying on AI output alone.
"You can have the best forecast model in the world, but if the public don't trust you, and don't act, then what's the point? If you set out an order to evacuate 30 miles of coastline in Florida, and then nothing happens, then you've blown decades of trust that has been built up," he says. "The advantage of a deterministic model is you can interrogate it, and if you do get bad forecasts, you can interrogate why they're bad forecasts and try to target those aspects for improvement."
DeepMind claims it can boost weather prediction with AI – SiliconRepublic.com
After claiming to hit a breakthrough by predicting the structure of nearly every known protein, DeepMind is now turning its AI models to observe the weather.
Google-owned DeepMind claims its latest AI model can make accurate, fast predictions of the weather and give earlier warnings of extreme storms.
The company claims its AI model GraphCast can predict weather conditions up to 10 days in advance, more accurately than standard industry methods. DeepMind also said the model can make these predictions in less than one minute.
There are estimates that 10-day weather forecasts are only accurate roughly half of the time, compared to a 90pc accuracy rate for five-day forecasts. Improving weather prediction presents benefits for both citizens and various industries, such as renewable energy or event organisers.
DeepMind also said its AI model can track cyclones with great accuracy, identify flood risk and predict the onset of extreme temperatures.
"GraphCast takes a significant step forward in AI for weather prediction, offering more accurate and efficient forecasts and opening paths to support decision-making critical to the needs of our industries and societies," DeepMind said in a blog post.
"By open sourcing the model code for GraphCast, we are enabling scientists and forecasters around the world to benefit billions of people in their everyday lives."
The company said GraphCast is already being used by the European Centre for Medium-Range Weather Forecasts. This institution is currently running a live experiment of the AI model on its website.
DeepMind said its AI model uses deep learning to create its weather forecast system, instead of the usual method of physical equations called Numerical Weather Prediction (NWP).
The company said GraphCast is trained on decades of historical weather data to help it predict how weather patterns evolve, and that it combines elements of traditional weather prediction. Despite this, DeepMind claims the model is relatively small compared with other AI models, containing 36.7m parameters.
"This trove is based on historical weather observations such as satellite images, radar and weather stations, using a traditional NWP to fill in the blanks where the observations are incomplete, to reconstruct a rich record of global historical weather," DeepMind said.
Last year, DeepMind claimed a scientific breakthrough when it said its AlphaFold model had predicted the structure of nearly every protein known to science, more than 200m in total.
Earlier this month, DeepMind claimed the next version of AlphaFold can predict nearly all molecules in the Protein Data Bank, a database for the 3D structures of various biological molecules.
Ancient Wisdom Part 26: Mind-blowing benefits of deep breathing – Hindustan Times
Note to readers: Ancient Wisdom is a series of guides that shines a light on age-old wisdom that has helped people for generations with time-honoured wellness solutions to everyday fitness problems, persistent health issues and stress management, among others. Through this series, we try to provide contemporary solutions to your health worries with traditional insights.
Some ancient practices, once lost but now rediscovered, have significantly contributed to transforming our modern wellness journey. Deep breathing, dating back thousands of years, was practised by yogis in India and came to be known as Pranayama. Prana means life force and yama means control. By controlling the breath, one can not only master the mind but also keep several diseases at bay. Deep breathing is more relevant in today's world than ever before, in light of increasing stress, deteriorating air quality, decreased immunity and the growing threat of chronic diseases like diabetes, asthma, heart disease and high blood pressure, among a host of other health issues.
Not just in India: deep breathing has been linked with health and vitality, and with connecting to the divine, in different countries for centuries. China's Qigong practice involved moving meditation, deep rhythmic breathing and a calm meditative state of mind. In ancient Greece, deep breathing was called pneuma. According to the Egyptians, deep breathing helped form a connection with the divine.
Deep breathing, also known as diaphragmatic breathing, significantly improves oxygen flow in the body, which can calm the nerves and reduce stressful thoughts and anxiety symptoms. Deep breathing also helps trigger the release of endorphins, which can naturally elevate mood. Deep breathing is also known to work wonders for lung health. According to the British Lung Foundation, deep breathing can help remove mucus from the lungs after pneumonia and allows more air to circulate. The practice can also provide a workout for the heart muscles, which strengthen as a result.
In today's edition of Ancient Wisdom, let's discuss how this age-old practice can transform your overall health.
Deep breathing, often associated with modern wellness and mindfulness practices, has a deep history in ancient cultures. Across civilizations, deep breathing has been recognized for its potential to enhance physical, mental, and spiritual well-being.
"In ancient India deep breathing was known as pranayama. In Sanskrit, 'prana' translates to life force, and 'yama' means control. Ancient Indian yogis believed that deep breathing influences the flow of vital energy throughout the body, which improves physical health and mental clarity. Sutra 2.51 of Maharishi Patanjali's Yoga Sutras asserts that the fluctuations of the mind are intimately connected to the breath, and by breathing deeply, we can achieve mastery over the mind," says Dr Hansaji Yogendra, Director of The Yoga Institute.
"In ancient China, the Daoist tradition emphasized the cultivation and balance of Qi, the vital energy that flows through the body. Deep breathing was a fundamental aspect of Daoist practices, with exercises like 'Dao Yin' focusing on the regulation of breath to harmonize and enhance the flow of Qi. While in ancient Greece, deep breathing was known as 'pneuma'. Pneuma was the vital breath or life force and was believed to be the divine breath of the gods. Early Greek philosophers, like Empedocles, associated pneuma with the fundamental elements of air and fire, considering it the animating force that sustained life. In ancient Egypt, the hieroglyphic symbol for breath, 'ankh', represented life and immortality. The Egyptians believed that deep breathing facilitates a connection with the Divine," adds Dr Hansaji.
"Various indigenous cultures too practiced deep breathing. In Native American ceremonial rituals rhythmic breathing was a means to connect with Nature and the spirit world. Similarly, in the Aboriginal cultures of Australia, the concept of 'Dadirri' uses deep and mindful breathing, as a means of connecting with the land and ancestral wisdom," says the Yoga expert.
"The stream of breath should be always like the stream of relaxed reverse, relax in and relax out. For therapeutic purposes, Kapalbhati and Bhastrika can be sustained at a pace, but otherwise for equalizing our polarities, ha-tha, Chandra-surya, ida-pingala. Slow breathing in, slow breathing out popularly known as Anulom-vilom is of the highest order when it comes to neutralizing diseases, increasing immune response, immunoglobulins, triggering our T cells and B cells and increasing oxygen in our lungs, thus in blood, activating your brain functions and most importantly all the simple breathing exercises also can do the same," says Dr. Mickey Mehta, Global Leading Holistic Coach.
Breathing techniques should not stress you out or create any kind of mental stress; even the temporary pace of your kapalbhati and bhastrika can be focused on releasing disturbing mental and emotional thoughts, hurt, pain and so on.
"Do not forget the relaxed rhythm of the reverse; that is the way your breath should be. When you do shavasana, breathing in and out through the nose consciously, while the stomach balloons and retreats as you breathe out, is the best way of relaxing, as your entire body releases all the stress into gravity," says Dr Mehta.
Acharya Dr. Lokesh Muni, Founder President at Ahimsa Vishwa Bharti shares many benefits of deep breathing.
Delving into the ancient wisdom of various cultures and studies reveal that conscious and deep breathing is far more than a biological necessity. It is a gateway to physical, mental, and spiritual transformations. It creates harmony in the body and the mind in miraculous ways by:
Deep breathing taps into divine consciousness, creating a calming effect on the mind. It helps achieve inner peace by liberating the mind from chaos and fostering mindfulness. Conscious breathwork is believed to cleanse and boost the energy channels, aligning the individual with higher states of consciousness.
Encouraging the acknowledgment and observation of feelings without judgment, deep breathing creates a space for emotional acceptance and understanding. It teaches us to respond thoughtfully rather than react impulsively.
Deep breathing significantly boosts the immune system, relaxes the nervous system, and improves oxygen flow to the lungs, positively impacting the heart. It also helps mitigate conditions like diabetes, asthma, blood pressure, and other respiratory difficulties.
Deep breathing aids in alleviating panic attacks, anxiety, and depression by restoring control over breath, promoting a steady heartbeat, and triggering a relaxation response that decreases the jitters and fosters a happy and calm mind.
A composed and alert mind, facilitated by mindful breathing, creates a physiological state conducive to quick reflexes. It improves cognitive function, enabling quicker decision-making; it also aids in anger management, stress management and concentration of mind.
"Regularly practicing breathing exercises improves lung capacity and overall health. Focusing on the breath is a sort of meditation that helps us calm down. It forces us to focus all of our attention on one thing, serving as a mental break from everyday stress. Breathing helps us to instantly ease anxiety and reduce stress," says Shivani Bajwa, Founder, of YogaSutra Holistic Living, India's Leading Functional Medicine Health Clinic.
While the benefits of deep breathing are numerous, regular practice is essential for noticeable changes in the body. As the mind and body align, a universal language emerges: a language spoken by the body, mind and spirit. This language whispers the profound truth that the key to a well-lived life lies in the ebb and flow of breath.
Shivani Bajwa shares deep breathing exercises to reduce stress.
Exhalation emphasis: Bring the shoulders away from the ears, relax the muscles in the face, and soften the eyelids. You can close your eyes if it's comfortable for you; if not, you can leave them open, relaxing your eyelids. Take a slow, deep inhale through the nose without forcing anything, and exhale slowly with control through the nose. We're trying to lengthen each exhale, making the exhales longer than the inhales.
What Have You Changed Your Mind About? – The New York Times
Have you ever changed your mind about something (a song, a food, an activity, a person) that you were sure you loved or hated?
Do you tend to be open-minded and flexible about your likes and dislikes? Or are you generally set in your tastes? Are you willing to give something or somebody a second chance? How about admitting you were colossally wrong in your initial judgment?
In "I Thought I Hated Pop Music. 'Dancing Queen' Changed My Mind.," Jeff Tweedy, the singer and guitarist of the band Wilco, writes about his newfound love for Abba, the Swedish supergroup from the 1970s:
It's important to admit when you're wrong. And though I once bristled at the notion that there could ever be such a thing as a wrong musical opinion, I have since come to accept that there is, in fact, such a thing. I know because I had one: I was colossally wrong about the song "Dancing Queen" by Abba.
I'm happy I can admit it, maybe even a touch proud of myself for not digging in my heels and hating this song for even a second longer than I had to (unlike some friends I know who are still holding out). To me, looking back, the weirdest part is that I ever felt I had to hate something so clearly irresistible.
In a way, I blame the time and place where I grew up. The mid-1970s, when "Dancing Queen" came out, was a time when there were very strict lines being drawn between cultural camps. As a kid who liked punk rock, this tune was situated deep in enemy territory, at the intersection of pop and disco.
I am, perhaps, a bit skeptical by nature, but scanning the horizons of my memory, seeing what I saw around me from about the mid-'70s to the late '80s, I'd say there was something else going on, too. I was just a kid. And in that particular nanosecond of geological time, kids hated stuff.
In particular, my group of friends and I despised a lot of music and, by extension, the morons who would dare admit that they liked something we hated. Music. Can you believe it? It seems hard to imagine now that a group of preteens could be capable of conjuring vein-bulging fury at the mere mention of the band Styx. But we were. And we did.
Why did we feel this way? Mostly, I think, because hating certain music gave us a way of defining ourselves. Our identities were indistinct, and drawing a line in the sand between what we liked and what we hated made our young hearts feel whole.
Mr. Tweedy writes about the moment many years later when his thinking completely changed. He was standing in a grocery store aisle, staring at the overhead speaker, just reeling at this familiar melody and how exuberantly sad it was. "Having the time of your life!" He explains: "It was a real come to Jesus moment. A come to Agnetha, Björn, Benny and Anni-Frid moment."
He continues:
Before that day, I, along with many others, had denied myself an undeniable joy. Countless fantastic records and deep grooves were dismissed and derided out of ignorance. But of course, this song and this music was always going to win eventually. Because its just too special to ignore forever.
To this day, whenever I think I dislike a piece of music, I think about "Dancing Queen" and am humbled.
That song taught me that I can't ever completely trust my negative reactions. I was burned so badly by this one song being withheld from my heart for so long. I try to never listen to music now without first examining my own mind and politely asking whatever blind spots I'm afflicted with to move aside long enough for my gut to be the judge. And even then, if I don't like something, I make a mental note to try it again in 10 years.
Students, read the entire article and then tell us:
What have you changed your mind about? Tell us about something that you once liked, loved, hated or dismissed but that you later dramatically revised your judgment about. Was it hard to admit to yourself or to others that you were wrong?
How have your tastes changed or grown over time? Do you tend to be open-minded and flexible about your likes and dislikes? Or, once you love or hate something, do you never waver?
Mr. Tweedy writes that in the 1970s, when "Dancing Queen" came out, there were "very strict lines being drawn between cultural camps" and that, as a kid who liked punk rock, the song was situated "deep in enemy territory." Does that resonate with your own experiences? Do you ever feel as if you aren't allowed to like or dislike certain things because of your own cultural identification?
Mr. Tweedy ends his essay: "So if you take anything away from this, I hope it will be this recommendation: Spend some time looking for a song (or a book or a film or a painting or a person) you might have unfairly maligned." Do you agree with his advice? Are we all too quick to pass judgment on things, and might that make us miss out on great experiences and joys? In the future, do you think you will try to give your dislikes and hates another chance?
Students 13 and older in the United States and Britain, and 16 and older elsewhere, are invited to comment. All comments are moderated by the Learning Network staff, but please keep in mind that once your comment is accepted, it will be made public and may appear in print.
Find more Student Opinion questions here. Teachers, check out this guide to learn how you can incorporate these prompts into your classroom.
Read more:
What Have You Changed Your Mind About? - The New York Times
LHC physicists can’t save them all – Symmetry magazine
In 2010, Mike Williams traveled from London to Amsterdam for a physics workshop. Everyone there was abuzz with the possibilities, and possible drawbacks, of machine learning, which Williams had recently proposed incorporating into the LHCb experiment. Williams, now a professor of physics and leader of an experimental group at the Massachusetts Institute of Technology, left the workshop motivated to make it work.
LHCb is one of the four main experiments at the Large Hadron Collider at CERN. Every second, inside the detectors for each of those experiments, proton beams cross 40 million times, generating hundreds of millions of proton collisions, each of which produces an array of particles flying off in different directions. Williams wanted to use machine learning to improve LHCb's trigger system, a set of decision-making algorithms programmed to recognize and save only collisions that display interesting signals, and discard the rest.
Of the 40 million crossings, or events, that happen each second in the ATLAS and CMS detectors (the two largest particle detectors at the LHC), data from only a few thousand are saved, says Tae Min Hong, an associate professor of physics and astronomy at the University of Pittsburgh and a member of the ATLAS collaboration. "Our job in the trigger system is to never throw away anything that could be important," he says.
So why not just save everything? The problem is that it's much more data than physicists could ever, or would ever want to, store.
Williams work after the conference in Amsterdam changed the way the LHCb detector collected data, a shift that has occurred in all the experiments at the LHC. Scientists at the LHC will need to continue this evolution as the particle accelerator is upgraded to collect more data than even the improved trigger systems can possibly handle. When the LHC moves into its new high-luminosity phase, it will reach up to 8 billion collisions per second.
"As the environment gets more difficult to deal with, having more powerful trigger algorithms will help us make sure we find things we really want to see," says Michael Kagan, lead staff scientist at the US Department of Energy's SLAC National Accelerator Laboratory, "and maybe help us look for things we didn't even know we were looking for."
Hong says that, at its simplest, a trigger works like a motion-sensitive light: It stays off until activated by a preprogrammed signal. For a light, that signal could be a person moving through a room or an animal approaching a garden. For triggers, the signal is often an energy threshold or a specific particle or set of particles. If a collision, also called an event, contains that signal, the trigger is activated to save it.
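The motion-sensitive-light analogy can be sketched in a few lines of code. This is an illustrative toy, not actual LHC trigger code; the event fields, the threshold value, and the "muon pair" signature are all invented for the example.

```python
# Toy trigger: keep an event when its energy crosses a preset threshold
# or when a required particle signature is present.

ENERGY_THRESHOLD_GEV = 100.0  # hypothetical level-1 energy threshold

def trigger_fires(event):
    """Return True if this event should be saved."""
    if event["total_energy_gev"] >= ENERGY_THRESHOLD_GEV:
        return True
    # Triggers can also fire on a specific particle or set of particles.
    return "muon_pair" in event["signatures"]

events = [
    {"total_energy_gev": 42.0, "signatures": []},             # discarded
    {"total_energy_gev": 130.0, "signatures": []},            # saved: energy
    {"total_energy_gev": 55.0, "signatures": ["muon_pair"]},  # saved: signature
]
saved = [e for e in events if trigger_fires(e)]
```

Like the light that stays off until motion is detected, the trigger does nothing until one of its preprogrammed conditions is met; everything else is thrown away.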
In 2010, Williams wanted to add machine learning to the LHCb trigger in the hopes of expanding the detector's definitions of interesting particle events. But machine-learning algorithms can be unpredictable. They are trained on limited datasets and don't have a human's ability to extrapolate beyond them. As a result, when faced with new information, they make unpredictable decisions.
That unpredictability made many trigger experts wary, Williams says. "We don't want the algorithm to say, 'That looks like [an undiscovered particle like] a dark photon, but its lifetime is too long, so I'm going to ignore it,'" Williams says. "That would be a disaster."
Still, Williams was convinced it could work. On the hour-long plane ride home from that conference in Amsterdam, he wrote out a way to give an algorithm set rules to follow, for example, that a long lifetime is always interesting. Without that particular fix, an algorithm might only follow that rule up to the longest lifetime it had previously seen. But with this tweak, it would know to keep any longer-lived particle, even if its lifetime exceeded any of those in its training set.
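The shape of that fix can be sketched as a hard rule wrapped around a learned classifier. The names, the cutoff, and the score threshold below are invented for illustration; the point is only that the rule fires before the model is consulted, so it holds even for lifetimes beyond anything in the training set.

```python
# Guarded trigger decision: a hard physics rule overrides the model.

LONG_LIFETIME_NS = 1.0  # hypothetical cutoff above which we always keep

def guarded_decision(model_score, lifetime_ns, keep_threshold=0.5):
    # Hard rule first: a long-lived candidate is always interesting,
    # even if its lifetime exceeds anything the model was trained on.
    if lifetime_ns > LONG_LIFETIME_NS:
        return True
    # Otherwise defer to the learned score.
    return model_score >= keep_threshold

# A low model score alone would discard the candidate...
discarded = guarded_decision(model_score=0.1, lifetime_ns=0.2)
# ...but an unusually long lifetime overrides the model's judgment.
kept = guarded_decision(model_score=0.1, lifetime_ns=50.0)
```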
Williams spent the next few months developing software that could implement his algorithm. When he flew to the United States for Christmas, he used the software to train his new algorithm on simulated LHC data. It was a success. "It was an absolute work of art," says Vava Gligorov, a research scientist at the National Centre for Scientific Research in France, who worked on the system with Williams.
Updated versions of the algorithm have been running LHCb's main trigger ever since.
Physicists use trigger systems to store data from the types of particle collisions that they know are likely to be interesting. For example, scientists store collisions that produce two Higgs bosons at the same time, called di-Higgs events. Studying such events could enable physicists to map out the potential energy of the associated Higgs field, which could provide hints about the eventual fate of our universe.
Higgses are most often signaled by the appearance of two b quarks. If a proton collision produces a di-Higgs, four b quarks should appear in the detector. A trigger algorithm, then, could be programmed to capture data only if it finds four b quarks at once.
But spotting those four quarks is not as simple as it sounds. The two Higgs bosons are interacting as they move through space, like two water balloons thrown at one another through the air. Just as the droplets of water from colliding balloons continue to move after the balloons have popped, the b quarks continue to move as the particles decay.
If a trigger can see only one spatial area of the event, it may pick up only one or two of the four quarks, letting a di-Higgs go unrecorded. But if the trigger could see more than that, looking at all of them at the same time, "that could be huge," says David Miller, an associate professor of physics at the University of Chicago and a member of the ATLAS experiment.
In 2013, Miller started developing a system that would allow triggers to do just that: analyze an entire image at once. He and his colleagues called it the global feature extractor, or gFEX. After nearly a decade of development, gFEX started being integrated into ATLAS this year.
Trigger systems have traditionally had two levels. The first, or level-1, trigger, might contain hundreds or even thousands of signal instructions, winnowing the saved data down to less than 1%. The second, high-level trigger contains more complex instructions, and saves only about 1% of what survived the level-1. Those events that make it through both levels are recorded for physicists to analyze.
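That two-level winnowing can be pictured as a simple pipeline. The cuts, field names, and rates below are invented; the structure, a cheap first-pass cut followed by a more expensive selection applied only to the survivors, is the point.

```python
# Schematic two-stage trigger pipeline with invented cuts and rates.

import random

random.seed(0)

def level1(event):
    # Fast, simple cut, e.g. an energy threshold.
    return event["energy"] > 90.0

def high_level(event):
    # More complex selection, applied only to level-1 survivors.
    return event["n_bquarks"] >= 3

events = [
    {"energy": random.uniform(0.0, 100.0),
     "n_bquarks": random.randint(0, 4)}
    for _ in range(10_000)
]

after_l1 = [e for e in events if level1(e)]        # most events discarded here
recorded = [e for e in after_l1 if high_level(e)]  # a further fraction survives
```

Only the events in `recorded`, a small fraction of a small fraction, would be written out for physicists to analyze.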
For now, at the LHC, machine learning is mostly being used in the high-level triggers. Such triggers could over time get better at identifying common processes, background events they can ignore in favor of a signal. They could also better identify specific combinations of particles, such as two electrons whose tracks are diverging at a certain angle.
"You can feed the machine learning the energies of things and the angles of things and then say, 'Hey, can you do a better job distinguishing things we don't want from the things we want?'" Hong says.
Future trigger systems could use machine learning to more precisely identify particles, says Jennifer Ngadiuba, an associate scientist at Fermi National Accelerator Laboratory and a member of the CMS experiment.
Current triggers are programmed to look for individual features of a particle, such as its energy. A more intelligent algorithm could learn all the features of a particle and assign a score to each particle decay, for example, a di-Higgs decaying to four b quarks. A trigger could then simply be programmed to look for that score.
"You can imagine having one machine-learning model that does only that," Ngadiuba says. "You can maximize the acceptance of the signal and reduce a lot of the background."
Most high-level triggers run on computer processors called central processing units or graphics processing units. CPUs and GPUs can handle complex instructions, but for most experiments, they are not efficient enough to quickly make the millions of decisions needed in a high-level trigger.
At the ATLAS and CMS experiments, scientists use different computer chips called field-programmable gate arrays, or FPGAs. These chips are hard-wired with custom instructions and can make decisions much faster than a more complex processor. The trade-off, though, is that FPGAs have a limited amount of space, and some physicists are unsure whether they can handle more complex machine-learning algorithms. The concern is that the limits of the chips would mean reducing the number of instructions they can provide to a trigger system, potentially leaving interesting physics data unrecorded.
"It's a new field of exploration to try to put these algorithms on these nastier architectures, where you have to really think about how much space your algorithm is using," says Melissa Quinnan, a postdoctoral researcher at the University of California, San Diego and a member of the CMS experiment. "You have to reprogram it every time you want it to do a different calculation."
Many physicists don't have the skillset needed to program FPGAs. Usually, after a physicist writes code in a computer language like Python, an electrical engineer needs to convert the code to a hardware description language, which directs switch-flipping on an FPGA. "It's time-consuming and expensive," Quinnan says. Abstract hardware languages, such as High-Level Synthesis, or HLS, can facilitate this process, but many physicists don't know how to use them.
So in 2017, Javier Duarte, now an assistant professor of physics at UCSD and a member of the CMS collaboration, began collaborating with other researchers on a tool that directly translates computer language to FPGA code using HLS. The team first posted the tool, called hls4ml, to the software platform GitHub on October 25 that year. Hong is developing a similar platform for the ATLAS experiment. "Our goal was really lowering the barrier to entry for a lot of physicists or machine-learning people who aren't FPGA experts or electronics experts," Duarte says.
Quinnan, who works in Duarte's lab, is using the tool to add to CMS a type of trigger that, rather than searching for known signals of interest, tries to identify any events that seem unusual, an approach known as anomaly detection.
"Instead of trying to come up with a new theory and looking for it and not finding it, what if we just cast out a general net and see if we find anything we don't expect?" Quinnan says. "We can try to figure out what theories could describe what we observe, rather than trying to observe the theories."
The trigger uses a type of machine learning called an auto-encoder. Instead of examining an entire event, an auto-encoder compresses it into a smaller version and, over time, becomes more skilled at compressing typical events. If the auto-encoder comes across an event it has difficulty compressing, it will save it, hinting to physicists that there may be something unique in the data.
The algorithm may be deployed on CMS as early as 2024, Quinnan says, which would make it the experiment's first machine learning-based anomaly-detection trigger.
A test run of the system on simulated data identified a potentially novel event that wouldn't have been detected otherwise due to its low energy levels, Duarte says. Some theoretical models of new physics predict such low-energy particle sprays.
It's possible that the trigger is just picking up on noise in the data, Duarte says. But it's also possible the system is identifying hints of physics beyond what most triggers have been programmed to look for. "Our fear is that we're missing out on new physics because we designed the triggers with certain ideas in mind," Duarte says. "Maybe that bias has made us miss some new physics."
Physicists are thinking about what their detectors will need after the LHC's next upgrade in 2028. As the beam gets more powerful, the centers of the ATLAS and CMS detectors, right where collisions happen, will generate too much data to ever beam it onto powerful GPUs or CPUs for analyzing. Level-1 triggers, then, will largely still need to function on more efficient FPGAs, and they need to probe how particles move at the chaotic heart of the detector.
To better reconstruct these particle tracks, physicist Mia Liu is developing neural networks that can analyze the relationships between points in an image of an event, similar to mapping relationships between people in a social network. She plans to implement this system in CMS in 2028. "That impacts our physics program at large," says Liu, an assistant professor at Purdue University. "Now we have tracks in the hardware trigger level, and you can do a lot of online reconstruction of the particles."
Even the most advanced trigger systems, though, are still not physicists. And without an understanding of physics, the algorithms can make decisions that conflict with reality, say, saving an event in which a particle seems to move faster than light.
"The real worry is it's getting it right for reasons you don't know," Miller says. "Then when it starts getting it wrong, you don't have the rationale."
To address this, Miller took inspiration from a groundbreaking algorithm that predicts how proteins fold. The system, developed by Google's DeepMind, has a built-in understanding of symmetry that prevents it from predicting shapes that aren't possible in nature.
Miller is trying to create trigger algorithms that have a similar understanding of physics, which he calls self-driving triggers. A person should, ideally, be able to understand why a self-driving car decided to turn left at a stop sign. Similarly, Miller says, a self-driving trigger should make physics-based decisions that are understandable to a physicist.
"What if these algorithms could tell you what about the data made them think it was worth saving?" Miller says. "The hope is it's not only more efficient but also more trustworthy."
The rest is here:
LHC physicists can't save them all - Symmetry magazine
Elon Musk's xAI introduces Grok: A generative AI model with a sense of humour – The Indian Express
xAI Corp announced its first large language generative AI model, called Grok (a word that means to understand something intuitively), on Saturday, with the goal of creating a "good AGI" (artificial general intelligence). The new LLM was created by a group of engineers who have worked on other major generative AI models at OpenAI, Google, and DeepMind.
According to Elon Musk, xAI's Grok is "maximally curious" compared to other language models, and is also said to be a truth-seeking artificial intelligence. He stated that Grok requires substantial computing resources and is designed to be a helpful tool for consumers and businesses, and claimed it is the best among the existing language models.
The primary advantage of Grok over other LLMs, such as ChatGPT, is its real-time access to information on X, previously known as Twitter. This is said to help it deliver real-time news to platform users without bias. Additionally, the generative AI model has been trained to include some humour, with a hint of sarcasm, in its responses, and it is also voice-ready.
The generative AI model is based on an 886.03 GB knowledge base called The Pile, along with exabytes of data from the X platform. In the coming days, Grok will gain more capabilities, such as image and audio recognition, along with image generation.
Other prominent features of the Grok generative AI model include a context window of up to 25,000 characters, higher response speed, a live search engine within X, and native compatibility with Tesla vehicles. xAI's Grok system is currently in the early beta stage and will soon be available for X Premium+ subscribers, which costs Rs 1,300 a month or Rs 13,600 a year when subscribed from a desktop.
IE Online Media Services Pvt Ltd
First published on: 05-11-2023 at 09:30 IST
Here is the original post:
Elon Musk's xAI introduces Grok: A generative AI model with a sense of humour - The Indian Express
MiND-AID Muskoka free film screening on youth mental health – muskokaregion.com
More:
MiND-AID Muskoka free film screening on youth mental health - muskokaregion.com
On AI regulation, how the US steals a march over Europe amid the UK's showpiece Summit – The Indian Express
Over the last decade, Europe has taken a decisive lead over the US on tech regulation, with overarching laws safeguarding online privacy, curbing Big Tech dominance and protecting its citizens from harmful online content.
British Prime Minister Rishi Sunak's showpiece artificial intelligence event, which kicked off at Bletchley Park on Wednesday, sought to build on that lead. But the United States seems to have pulled one back, with Vice President Kamala Harris articulating Washington's plan to take a decisive lead on global AI regulation, helped in large measure by an elaborate template unveiled just two days before the Summit. Harris then went on to elaborately flesh out the US plan for leadership in the AI regulation space before a handpicked audience, which included former British PM Theresa May, at the American Embassy in London, while she was there to attend Sunak's Summit.
The template for Harris's guidance on tech regulation was the freshly released White House Executive Order on AI, which proposed new guardrails on the most advanced forms of the emerging technology, where American companies dominate. And in contrast to the UK-led initiative, whose only major high point was the Bletchley Declaration signed by 28 signatories, the US executive order is being offered as a well-calibrated template that could work as a blueprint for every other country looking to regulate AI, including the UK.
Harris was emphatic in her assertion that there was "a moral, ethical, and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits". And to address predictable threats, such as algorithmic discrimination, data privacy violations, and deepfakes, the US had last October released a "Blueprint for an AI Bill of Rights", seen as a building block for Monday's executive order.
After its Bill of Rights was released, Washington had extensive engagement with the leading AI companies, most of which are American (with the exception of London-based DeepMind, which is now a Google subsidiary), in a bid to evolve a blueprint and to establish a minimum baseline of responsible AI practices.
"We intend that the actions we are taking domestically will serve as a model for international action. Understanding that AI developed in one nation can impact the lives and livelihoods of billions of people around the world. Fundamentally, it is our belief that technology with global impact requires global action," Harris said just before travelling to the United Kingdom for the summit on AI safety.
"Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can. And under President Joe Biden, it is America that will continue to lead on AI," Harris said before the signing of Monday's executive order, clearly outlining Washington's intent to take a lead on AI regulation just ahead of the UK-led safety summit.
This assumes significance given that, over the last quarter century, the US Congress has not managed to pass any major regulation to rein in Big Tech companies or safeguard internet consumers, with the exception of just two narrower laws: one on child privacy and the other on blocking trafficking content on the net.
In contrast, the EU has enforced the landmark GDPR (General Data Protection Regulation) since May 2018. Clearly focused on privacy, it requires individuals to give explicit consent before their data can be processed, and it is now a template used by over 100 countries. Then there are a pair of follow-on laws, the Digital Services Act (DSA) and the Digital Markets Act (DMA), that build on the GDPR's overarching focus on the individual's right over their data. The DSA focuses on issues such as regulating hate speech and counterfeit goods, while the DMA defines a new category of dominant "gatekeeper" platforms and is focused on anti-competitive practices and the abuse of dominance by these players.
On AI, though, the tables may clearly be turning. Washington's executive order is a detailed blueprint aimed at safeguarding against threats posed by artificial intelligence, and it seeks to exert oversight over the safety benchmarks that companies use to evaluate conversational bots such as ChatGPT and Google Bard. The move is being seen as a vital first step by the Biden administration in the process of regulating rapidly advancing AI technology, which White House deputy chief of staff Bruce Reed had described as a batch of reforms that amounted to "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust".
EU lawmakers, on the other hand, are yet to reach an agreement on several issues related to its proposed AI legislation and the deal is reportedly not expected anytime before December.
The US executive order requires AI companies to conduct tests of their newer products and share the results with US federal government officials before the new capabilities are made available to consumers. These safety tests undertaken by developers, known as "red-teaming", are aimed at ensuring that new products do not pose a threat to users or the public at large. Under these new government powers, enabled under the US Defense Production Act, the federal government is empowered to subsequently force a developer to either tweak the product or abandon an initiative.
As part of the initiative, the United States will launch an AI safety institute to evaluate known and emerging risks of AI models: this move could be in parallel to an initiative by London to set up a United Kingdom Safety Institute, though Washington has subsequently indicated that the proposed US institute would establish a formal partnership with the UK Institute.
Among the standards set out in the US order, a new rule seeks to codify the use of watermarks that alert consumers when they encounter a product enabled by AI, which is aimed at limiting the threat posed by content such as deepfakes. Another standard stipulates that biotech firms take appropriate precautions when using AI to create or modify biological material. Incidentally, the industry guidance has been prescribed more as suggestions than binding requirements, giving developers and firms enough elbow room to work around some of the government's recommendations.
Also, the executive order explicitly directs American government agencies to implement changes in their use of AI, thereby creating industry best practices that Washington expects will be embraced by the private sector. The US Department of Energy and the Department of Homeland Security will, for instance, take steps to address the threat that AI poses to critical infrastructure, the White House said in a statement.
Harris said the focus of the move, while aiming at the existential threats of generative AI highlighted by experts, also resonated at an individual or citizen level. "Additional threats also demand our action: threats that are currently causing harm and which, to many people, also feel existential. Consider, for example: when a senior is kicked off his health care plan because of a faulty AI algorithm, is that not existential for him? When a woman is threatened by an abusive partner with explicit deepfake photographs, is that not existential for her? When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family? And when people around the world cannot discern fact from fiction because of a flood of AI-enabled myth and disinformation, I ask, is that not existential for democracy?"
Varied Approaches
These developments come as policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools, prompted by ChatGPT's explosive launch. The concerns being flagged fall into three broad heads: privacy, system bias and violation of intellectual property rights.
The policy response has been different too, across jurisdictions, with the European Union having taken a predictably tougher stance by proposing to bring in its new AI Act that segregates artificial intelligence as per use case scenarios, based broadly on the degree of invasiveness and risk; the UK is seen to be on the other end of the spectrum, with a decidedly light-touch approach that aims to foster, and not stifle, innovation in this nascent field.
The US approach now slots somewhere in between, with Washington now clearly setting the stage for defining an AI regulation rulebook with Monday's executive order. This clearly builds on the move by the White House Office of Science and Technology Policy last October to unveil its Blueprint for the AI Bill of Rights. China too has released its own set of measures to regulate AI.
This also comes in the wake of calls by tech leaders Elon Musk, Steve Wozniak (Apple co-founder) and over 15,000 others for a six-month pause in AI development in April this year, saying labs are in an "out-of-control race" to develop systems that no one can fully control. Musk was in attendance at Bletchley Park, where he warned that AI is "one of the biggest threats to humanity" and that the Summit was timely because AI posed an existential risk to humans, who face being outsmarted by machines for the first time.
Original post:
On AI regulation, how the US steals a march over Europe amid the UK's showpiece Summit - The Indian Express