
A Curious Thing Happened When Elon Musk Tweeted One Of My Columns – Forbes

Photo caption: Elon Musk, CEO of SpaceX and Tesla and owner of Twitter, attends the Viva Technology conference dedicated to innovation and startups at the Porte de Versailles exhibition centre in Paris on June 16, 2023. (Photo by Chesnot/Getty Images)

It all started with an article about an AI god.

In 2017, an engineer named Anthony Levandowski filed the paperwork for a new non-profit called The Way of the Future. Levandowski was a well-known figure in tech circles as a self-driving car expert.

The organization, now defunct, had all the makings of a religion. At the time, I wrote about how a super-intelligent AI could lead people to worship it, bowing down before something so powerful it could control our lives and dictate our future. The new company was just the latest and most obvious example of that.

In fact, The Way of the Future's mission statement made the goal quite clear: to "develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead."

At the time, I wrote about how this type of AI god could write a bible, create the dictums to follow, and determine how we should live. "It might tell you what to do each day, or where to travel, or how to live your life," I wrote. I don't have access to the traffic for the article anymore, but I remember it was one of the site's most-read thought pieces that year.

When the article exploded in popularity, hundreds of people started commenting on social media channels, writing their own articles, tweeting the link, and inevitably finding fault with what I had written. One of those naysayer tweets came from none other than Elon Musk. Here's the tweet in question:

As you can imagine, my opinion piece then went nuclear. I still receive emails asking me about the ideas in that piece. It only takes one tweet for that to happen. (Sadly, the days of tweeting might be over, which means we're stuck with TikTok instead.)

Of course, Musk wasn't critical of the article itself, even though the tweet could easily have been interpreted that way. Instead, he took issue with the concept of someone creating a powerful super intelligence (i.e., an all-knowing entity capable of making human-like decisions). In the hands of the wrong person, an AI could become so powerful and intelligent that people would start worshiping it.

Another curious thing? I believe the predictions in that article are about to come true: a super-intelligent AI will emerge, and it could lead to a new religion.

We're now living in an age when AI can write entire articles, create photos and videos that look hyper-realistic, help us program apps and websites, imitate our voices, and even insert us into a video. It's not a stretch to suggest that a powerful AI could appear in the next 10-20 years and that people could eventually start worshiping a digital deity.

It's not time to panic, but it is time to plan. The real issue is that a super-intelligent AI could think faster and more broadly than any human. AI bots don't sleep or eat. They don't have a conscience. They can make decisions in a fraction of a second, before anyone has time to react. History shows that, when anything is that powerful, people tend to worship it. That's a cause for concern, even more so today.

It's hard to predict when an AI will emerge that seems so powerful that people would worship it. The crazy thing is that AI may have already reached that point and we don't even know it.

John Brandon is a well-known journalist who has published over 15,000 articles on social media, technology, leadership, mentoring, and many other topics. Before starting his writing career in 2001, he worked as an Information Design Director at Best Buy Corporation. Follow him on Twitter: https://twitter.com/johnbrandonmn

Read the original:

A Curious Thing Happened When Elon Musk Tweeted One Of My Columns - Forbes

Read More..

CTech’s Book Review: Welcome to Life 3.0 with Artificial General … – CTech

Saar Barhoom is the SVP of R&D at Veeva Crossix, a technology platform built for the development and delivery of large-scale patient data and analytics. He has joined CTech to share a review of Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark.

Title: Life 3.0: Being Human in the Age of Artificial Intelligence
Author: Max Tegmark
Format: Tablet
Where: Home


Saar Barhoom of Veeva Crossix (Photo: Veeva Crossix/Amazon)

AI has lately come under the spotlight with ChatGPT and similar services that offer the possibility of chatting freely with a bot and getting easier access to enormous masses of knowledge. This sparks people's imagination and creates lots of buzz about the related potential, which is indeed vast. But AGI - Artificial General Intelligence - is not just about chatting to a bot or building smart robots; it is so much more than that.

This book, published back in 2017, unveils both the huge potential and the risk related to Artificial General Intelligence as it is expected to dramatically increase its capabilities over the years. Unlike previous technological achievements, which were typically limited to specific domains, limited in capabilities, and well controlled by humans, AGI is relevant everywhere and is expected to be capable of improving itself - which is the essence of what the author calls Life 3.0 - and a scenario where it gets out of control is far from science fiction. The author claims that Life 3.0 is on its way, and when it is here, things will evolve rapidly and nothing will be the same. Hence careful thinking is required to agree on how this change should be handled, in order to ensure things evolve the way we want them to. It is remarkable that, six years after the book was published, we are already witnessing new developments in the AI field that follow some of the book's predictions.

The author first explains the concept of the three phases of life: the first phase, where learning was slow and capabilities improved via DNA evolution passed to the next generation (Life 1.0); then faster learning of skills and behaviors during the lifetime of an entity (Life 2.0); and lastly Life 3.0, where an entity will be able to redesign itself and so exponentially extend its capabilities. The book discusses the term intelligence - the capability to learn - which basically depends on two pillars: available memory and computation power. These basic terms are all about information management and are independent of the substrate used for their implementation. This independence means there is immense room for improvement, as we've seen over the years with information technologies reaching ever higher peaks.

Next, the book discusses the near future: what can be achieved as near-term AI becomes stronger (currently driven mainly by deep reinforcement learning methods applied in specific fields), and what we should expect and consider.

AI can drive fast progress in areas like healthcare. It can speed up drug development, speed up the diagnosis of complicated cases, advance robot-assisted surgery, and more. Of course, that requires access to high-quality, large, and anonymized training data sets that preserve patient data privacy; enabling access to such data sets is something that I am happy to note our development center takes part in.

In addition, it can drive progress in transportation, manufacturing, energy, law, the military, and more, dramatically changing the employment market. With total memory of the entire legal system, regulations, and historical legal cases, it could become an efficient, tireless, and unbiased judging authority. On the military side, it could become a massive deterrent, improve defense systems, or perhaps change wars so there would be no need to involve humans.

A poor implementation can be very hazardous. Throughout history, as part of normal technological progress, people have always learned from their mistakes. However, the cost of mistakes rises when technology becomes more powerful and is used to control not only limited systems but also power grids, stock markets, and nuclear weapons. This means we need to change our approach and become more proactive rather than reactive, and AI safety should become a domain in itself.

In the longer term, the author explains what an intelligence explosion is and what its outcomes might be. Basically, once AI becomes capable of redesigning itself, its rate of improvement will become exponential, in the spirit of Moore's law, limited only by physical factors, and can get out of control very quickly. It will become super-intelligent compared to humans and keep improving until it plateaus at a level constrained by physical factors. At some point along that way, when AGI is more powerful than people, we might find ourselves in one of a variety of end states. In the longer term, if a super intelligence does reach its limit, the consequences might extend well beyond our planet, with wide implications. This part of the book is also fascinating but becomes more theoretical, and I found it less relevant to the current discussion. The last part of the book discusses consciousness: it suggests a broad definition of it as subjective experience and discusses the implications of AGI becoming conscious at some point. This is important since morality concerns itself with conscious creatures, so if AGI is conscious, it should not be treated as a mere tool.

First, replication of information and being goal-driven are the basic building blocks of life. Memory and computation are the basis of any intelligence and are inherently substrate-independent. There's a good chance there is a point where a machine initially built by humans could be considered alive and more intelligent. We should consider whether a point where AGI systems come up with their own goals is a desirable state. Furthermore, learning is based on memory and computation. History shows we have been able to make dramatic progress on these computing capabilities by changing how computers are built. One of the first uses of AGI is expected to be the redesign and improvement of the AGI itself. The physical limit for the computation a piece of material can perform is now considered to be about 10^33 times today's state, which would take us roughly 200 years to reach if we keep doubling every two years. An intelligence explosion, where this process gets out of control and leaves us far behind superintelligent entities without an option to recover, is a valid possibility. If we don't think about what we want, we will probably get something that we don't want.
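
As a quick sanity check of that timeline (my own back-of-the-envelope arithmetic, not a calculation taken from the book), a 10^33-fold increase at one doubling every two years does indeed land at roughly two centuries:

```python
import math

# Doublings needed to grow by a factor of 10^33, then convert to years at one doubling per 2 years.
doublings = 33 / math.log10(2)   # ~109.6 doublings
years = doublings * 2            # ~219 years, i.e. roughly two centuries
print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```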

Ensuring that the goals of AGI systems are aligned with our goals is not easy, since intelligent systems derive sub-goals that might be hard to anticipate. Some think there will be a point where AGI systems will be conscious. When this happens, AI systems should be viewed as a natural evolution of life, and we should accept that even if it means the end of mankind.

Our laws are very far behind this new world. As many jobs might become redundant, we should think about how to ensure people keep both a source of livelihood and a sense of purpose.

Considering the speed at which AI technologies evolve, current standards for controlling them put humanity on an unsustainable course. Once real-world systems are connected to AI, we must ensure it is verified, validated, secured, and tightly controlled to avoid catastrophic results. A high-level conversation about the potential of AI and the regulation required is already underway, driven in part by the Future of Life Institute.

I enjoyed the book a lot, since the possible futures it outlines are so different from what we have today, yet the book manages to explain them and present them as completely feasible rather than science fiction. I was impressed that the book has already predicted some developments in the years since it was published. Although parts of it are theoretical and remote, I think it provokes serious thinking and discussion about AI, which is extremely important. The high stakes call for a wider understanding of the subject, rather than keeping it in the hands of a handful of people who make the calls for us while the general public remains mostly unaware of the possibilities.

Who should read this book:

Techies and science fiction lovers are obviously going to love this book, but I'd like to recommend it first and foremost to decision-makers in the many areas where AGI is expected to be leveraged, as they would benefit from understanding the potential risks. However, I think many others can benefit from reading the book as well, since it opens the door to understanding AGI better and joining a discussion in which people ask informed questions and consider potential outcomes. Improving the public discussion on this important topic can hopefully have a positive impact on the end result.

Read the original here:

CTech's Book Review: Welcome to Life 3.0 with Artificial General ... - CTech

Read More..

Exploring the Role of Artificial Intelligence in Anesthesiology – HealthITAnalytics.com

July 20, 2023 - In anesthesiology, as in all medical specialties, clinicians strive to support patient safety and improve outcomes. Some anesthesiology professionals are investigating how advanced technologies like artificial intelligence (AI) and machine learning (ML) may positively impact the field.

Anesthesia patients generate massive amounts of data that could be used to bolster these efforts. But capturing and analyzing high-quality big data presents a challenge for health systems and providers.

Research into AI's clinical applications and current limitations in anesthesiology practice suggests that these tools may demonstrate utility in various areas, including depth-of-anesthesia monitoring, control of anesthesia, event and risk prediction, ultrasound guidance, pain management, and operating room logistics.

Other studies analyzing trends in AI and anesthesia indicate that ML tools, robots, clinical decision support systems, and other technologies may play a significant role in anesthetic care in the future. Some even suggest that the combination of AI, nanotechnology, and genomic medicine may one day advance the quality of anesthesia practice.

But how can anesthesia teams navigate the hype around these tools and work to leverage them appropriately and effectively?

"Anesthesiology has historically been at the forefront of patient safety initiatives, with anesthesiologists working to establish reliable processes and implement technologies that can help reduce adverse outcomes like morbidity and mortality," noted Desirée Chappell, CRNA, vice president of clinical quality at Northstar Anesthesia in Irving, Texas, during an interview with HealthITAnalytics.

However, taking anesthesia to the next level in terms of enhancing patient safety requires access to high-quality data on patients and outcomes. Limited access to such data can hinder efforts to fine-tune intraoperative practices that may improve postoperative outcomes.

Additional factors, like staff shortages and patient complexity, can create hurdles, resulting in a need for advanced technologies to support known anesthesia-related safety measures.

"We know the things that save lives, and we know the things that improve care, but having us, as people on teams, practice in that way reliably and at all times is the challenge," explained Jonathan Tan, MD, vice chair of Analytics and Clinical Effectiveness at Children's Hospital Los Angeles (CHLA), who serves as an assistant professor of Clinical Anesthesiology at CHLA and the Keck School of Medicine at the University of Southern California. "[Because of these] factors, I think there's huge opportunity for us to scale the way we practice more safely by using technology, including artificial intelligence and machine learning."

Lack of standardization within the anesthesiology specialty is also a limitation that can potentially affect patient outcomes and safety, Chappell stated.

"I think that because we have been able to do anesthesia, [which] can be done in a lot of different ways, that we haven't traditionally looked at variation as an issue potentially with patient outcomes and patient safety," she said. "But the more we standardize, the better patient outcomes are."

This is where AI and ML come in.

AI may be able to address the issues described above by identifying nuances in the data and helping standardize patient care.

"[In anesthesiology,] there's actually so much information and data coming our way," Tan said. "It's an extraordinary amount of information that's being generated every second, probably [more information at a higher density] at a given moment than the rest of the hospital from a vital sign and patient standpoint."

Ingesting and analyzing that data while caring for patients and fulfilling their other responsibilities can create a significant cognitive load for anesthesia professionals.

In this case, AI can augment the practice of anesthesiology by helping to reduce that cognitive load and allowing providers to focus on more important aspects of patient care. By leveraging AI in this assistive capacity, rather than replacing clinicians' experience and expertise, Tan and Chappell indicated that care teams can prioritize the essential human connection between patients and providers.

Chappell likened using AI to navigate the wealth of anesthesiology data to using a GPS while driving.

"[When] you're driving your car somewhere, and you're using your navigation tool, even though you think you know how to get there, you don't necessarily know what the traffic patterns are," she said. "You don't know if there's been an accident."

"[AI] is just helping you as a tool to navigate where you're trying to go," she added. "We need tools to help us get to where we're going more efficiently, to have a little bit more information that is deciphered in a different way, and I think that AI can help us do that."

But Chappell and Tan agreed that these technologies could never replace the human aspect of their work or the need for meaningful connections between anesthesia teams and their patients.

"The purpose of technology, and the purpose of AI and other tools like that, is to actually free up our ability to spend more time with the patient, to spend more time at the bedside, to be physically and emotionally there to care for patients," Tan noted. "And in the complex world that we live in now, I think that those tools are more important than ever to be able to help us do our job better by being at the bedside with the patient."

AI and other technological advancements have their place in anesthesiology and healthcare more broadly. Still, the key lies in identifying high-value use cases for these tools and integrating them effectively in clinical workflows, Chappell noted. In doing so, care teams can meaningfully leverage technology to improve patient care.

AI has diverse applications in anesthesiology. Some are already in use, while others will come onto the market in the near future, particularly for use cases related to patient experience, procedure guidance, risk assessment, and intraoperative optimization, said Tan.

He indicated that AI could facilitate communication between healthcare providers and families before surgery, which may reduce the burden on staff. These tools can also provide practice guidance for anesthesiologists during certain procedures by identifying important anatomical structures to target or avoid using ultrasound or airway devices.

Additionally, AI and ML tools can support pre-surgical risk stratification by flagging patient risk factors beyond those captured by current evidence-based scoring systems. Postoperatively, the technology can help support clinical decision-making by identifying the patients at the highest risk for readmission or mortality following surgery.

During surgery, these tools can be integrated into closed-loop systems that automate the delivery of medications and fluids based on patient parameters, such as weight or BMI, said Chappell. This integration can help support and optimize hemodynamic stability during surgery, which is critical to maintaining patient safety and improving outcomes.
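
As a rough illustration of the closed-loop idea described above (a toy sketch only; the parameter names, gain, and limits are invented for illustration and this is not a clinical algorithm or anything attributed to Chappell):

```python
# Toy closed-loop sketch: nudge an infusion rate toward a target mean arterial pressure (MAP).
# All values are illustrative placeholders, not clinical guidance.
def update_infusion_rate(current_map_mmhg: float,
                         current_rate_ml_per_hr: float,
                         target_map_mmhg: float = 65.0,
                         gain: float = 0.5,
                         min_rate: float = 0.0,
                         max_rate: float = 100.0) -> float:
    """Proportional controller: raise the rate when pressure is below target, lower it when above."""
    error = target_map_mmhg - current_map_mmhg
    new_rate = current_rate_ml_per_hr + gain * error
    return max(min_rate, min(max_rate, new_rate))   # clamp to safe bounds

print(update_infusion_rate(current_map_mmhg=58.0, current_rate_ml_per_hr=20.0))  # -> 23.5
```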

Across use cases, Tan emphasized the importance of not only focusing on the technology itself but also on integrating it into hospitals.

"When we talk about artificial intelligence [in anesthesiology], we're often talking about the technology itself, the tools, the data science, and there's a lot of focus on that," he stated. "But I think a lot of that involvement is already pretty far along. And the other half of [this] is actually the implementation and bringing that technology to hospitals to reduce variation in care."

Bringing AI into clinical settings would require increased education for clinicians about the basics of AI and how it applies to the practice of anesthesiology, Tan explained.

Implementing these tools also necessitates considering the impact of change management, or the transformation of a health system's processes and technologies, on anesthesia providers, Chappell stated.

"We have to remember that as new technology comes out, that each one of us as anesthesia providers and clinicians, we're individuals, too," she explained.

She added that when stakeholders leverage implementation science to deploy new technologies, they can forget how challenging change management can be for some.

"There are a lot of people who get really super excited about the shiny, bright new tool that comes out, and then there are a lot of people who are very much on the other side of the change management curve," Chappell continued. "I think that as we go forward to adopt this, we need to be really sensitive to that and think of the human factors of the people who have to use this in practice, and what's the best way to get long-term, sustainable adoption of new tools that could potentially have such a huge impact on patient safety."

Tan further underscored that achieving the next level of patient safety in anesthesiology requires a collaborative ecosystem of health systems, care teams, patient safety scientists, data scientists, and specialists in implementation science and change management working to support the use of AI and ML in the field.

Editor's Note: This article was updated to correct Desirée Chappell's credentials at 3:00pm ET on July 24, 2023.

Read the original here:

Exploring the Role of Artificial Intelligence in Anesthesiology - HealthITAnalytics.com

Read More..

Artificial intelligence has become the cardiologist’s ‘super-assistant’ – Medical Xpress


Researcher Andreas Østvik at SINTEF demonstrating the equipment, which uses artificial intelligence to harvest experience from previous patients. This will enable doctors to make decisions based on potentially thousands of similar examinations. Credit: William Hoven.

Cardiovascular diseases are the biggest killers in the world, accounting for 17.9 million deaths globallyevery single year, according to the World Health Organization (WHO). To put that number into perspective, it is more than twice the entire population of London.

So if there is an urgent need for you to get your heart checked, it is important for the examinations performed by the doctors to be the best they possibly can be.

"One of the most important methods is the echocardiograph," says Bjrnar Grenne, who is a senior consultant in the cardiac department at St. Olav's Hospital in Trondheim, and an Associate Professor at NTNU.

Each year, a large number of patients are admitted to St. Olav's for cardiac check-ups. This means everyone from people with chest pain, to those collapsing in the street, or patients receiving regular heart checks.

What most of them then do is lie on a couch to be examined by a doctor, who uses ultrasound to look inside the body.

"The heart is extremely complex and is very well hidden within the body. We don't think about it being there, but it's there all the same, beating up to 100,000 times a day, every dayin each and every one of us.

"There is a good reason why the heart is so well hidden, but because we cannot see it, this also makes it harder to examine. That's why we need good ways of studying it."

The WHO has stated that cardiovascular diseases accounted for 32% of all deaths on a global basis in 2019.

"It's important to establish what is wrong with the heart at an early stage, so that people can quickly get the right treatment."

Beside the examination couch, Senior Consultant Grenne demonstrates the probe, which looks like a joystick, and guides it so that the heart of a volunteer from the research team is displayed on the ultrasound screen.

"This gives us real-time images of the heart, which are essential if we are to make the correct diagnosis," Grenne explains.

The challenge is that you need a lot of experience in order to guide this probe correctly and get the best possible images of the heart. Analyzing the images afterwards is also very time-consuming.

"We can take as many as 70100 different images and videos of the heart during an examination. These must also be studied carefully afterwards by people with a great deal of experience in this field, and that can easily take half an hour if you want to do it properly."

That is where AI, or artificial intelligence, comes in as an excellent assistant.

"Artificial intelligence can help Bjrnar and his colleagues guide the probe in the right direction and obtain the perfect image every time. AI can also analyze the images as soon as they pop up on the screen and help us to see what is wrong with the heart," says Andreas stvik, who is a researcher at SINTEF Digital and NTNU.

Using machine learning, the researchers have fed information into the system, with Senior Consultant Grenne and his colleagues defining the criteria that must be met to obtain the right cardiac images, and how those images should be interpreted.

This allows artificial intelligence to be used to harvest experience from previous patients, which means that the doctors can make their decisions based on potentially thousands of similar examinations.

During the process, the AI assistant shows a green or red light, so that the doctor knows whether the probe is at the right angle. When the images are correct, AI interprets these, and automatically takes measurements of the heart. Typically, these are measurements of the size of the heart, and how good it is at contracting.
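
To make the green/red indicator idea concrete, here is a minimal sketch of an image-quality classifier driving such a signal. The architecture, threshold, and names are illustrative assumptions; this is not SINTEF's actual model, and the network below is untrained:

```python
import torch
import torch.nn as nn

class ViewQualityNet(nn.Module):
    """Tiny CNN that scores a grayscale ultrasound frame for 'acceptable view' (toy example)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: probability the view is good enough

    def forward(self, frame):         # frame: (batch, 1, height, width)
        x = self.features(frame).flatten(1)
        return torch.sigmoid(self.head(x))

model = ViewQualityNet().eval()
frame = torch.rand(1, 1, 224, 224)    # placeholder frame; a real system would feed live video
with torch.no_grad():
    score = model(frame).item()
print("GREEN: acquire and measure" if score > 0.8 else "RED: adjust the probe angle")
```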

The technology has now been developed to an advanced stage and has already been used on some patients as part of the treatment offered to them in connection with the research project. However, strict rules apply in the field of medicine, which means that it could take some time before doctors will be able to use this treatment option in all hospitals in Norway.

"We're expecting it to be available in a few years. We have to test it firstpatient safety is paramount, and we have to know that it works before it is made available everywhere," the SINTEF researcher says.

Grenne adds that the current way of examining the heart is very good, but that it will obviously be tremendously helpful if AI could contribute as an assistant.

"It saves us a lot of time and resources, which means that we can help even more peoplewhich could then save more lives," he says.

See the rest here:

Artificial intelligence has become the cardiologist's 'super-assistant' - Medical Xpress

Read More..

Unraveling the Fiction and Reality of AI’s Evolution: An Interview with … – EnterpriseAI

July 24, 2023 by Steve Conway, Sr. Analyst, Intersect360 Research


Editor's Note: In the wake of rising concerns about AI's potential impact after the introduction of ChatGPT and other generative AI applications, HPCwire asked Steve Conway, senior analyst at Intersect360 Research, to interview Paul Muzio, former vice president for HPC at Network Computing Systems, Inc., and current chair of the HPC User Forum, an organization Conway helped to create. At a recent User Forum meeting, Muzio gave a well-received talk chronicling the history of human concerns about artificial intelligence and questioning whether intelligence is limited to humans. A link to Muzio's presentation appears at the end of the interview.

HPCwire: Paul, people have been concerned for a long time about machines with super-human intelligence taking control of us and maybe even deciding to eliminate humanity. Your talk provided some examples from popular culture. Can you mention some of those?

Muzio: As I mention in my presentation, in my opinion the most profound prognostication of machines with super-human intelligence was presented in the 1956 movie Forbidden Planet. The movie foretells a global or planetary version of Google, the metaverse, machine-to-brain and brain-to-machine communication, and what might go wrong. I also mention R.U.R., a play written in 1921 by Karel and Josef Čapek. The Čapeks are the inventors of the word robot. There is one line in that play that grabbed me: "from a technical point of view, the whole of childhood is a sheer absurdity. So much time lost." This concept is also addressed in Forbidden Planet. There are many other writings in science fiction that I did not mention, such as I, Robot by Earl and Otto Binder, the movies Ex Machina, 2001: A Space Odyssey, and many others.

HPCwire: The impressive capabilities of generative AI have amplified concerns about where AI might be headed. In your opinion, how concerned should we be? You pointed out several times in your talk that unlike humans, computers retain what they've learned forever, without the need to educate the next generation.

Muzio: It is easy to make mistakes; it is hard to guarantee correctness. But even correctness does not preclude unintentional or adverse consequences. In The Complete Robot, Asimov discusses the situation where there is an iterative development of algorithms and, after a number of iterations, no human can understand the nth algorithm. This is illustrated to a degree by DeepMind's development of AlphaGo. AlphaGo was played against AlphaGo and in the end not only developed superhuman capability but also evolved to an algorithmic complexity beyond what humans could have developed. Recent experiments with developmental versions of GPT-4 have also produced some unexpected results. In fact, OpenAI has had to dumb down GPT-4 prior to its general availability.

GPT, as a released product, does not in and of itself have memory, i.e., it does not have operational access to a global planetary library which contains all knowledge. But we are building, at the present, huge decentralized libraries: libraries of human history and thought, libraries of biology, libraries of evolutionary trends, libraries of the universe. Of course, even data collections down to who we communicate with, what our preferences and dislikes are, and our everyday interactions. We strive to protect, perpetuate, and share those libraries. We are, with current computing technology, acquiring and preserving exabytes and exabytes of data. And, there is more sharing of that information than we are aware of. Right now, generative AI (G-AI) tools have access to some data for training purposes. What happens in the future when and if future G-AI tools gain access to all these decentralized libraries?

By the way, there are those who say that you have to show AI millions of pictures for it to be able to recognize a cat, whereas a child can quickly learn to recognize a cat. I argue that this argument fails to acknowledge that the child has also seen millions of pictures of diverse things, including the cat. I think that when G-AI has access to all those libraries we are building, it too will quickly learn.

HPCwire: Generative AI is still an early development. It's generally still within the realm of so-called path problems, where a human provides the machine with a desired outcome and the machine obeys the human command by following a step-by-step path to pursue that outcome. At some future point, machines should be able to handle insight problems, where they pursue and sometimes achieve innovations without prescribed outcomes. That has great potential benefits for humanity, but is that also a cause for concern?

Muzio: I recently watched a presentation by Sébastien Bubeck, a very brilliant researcher at Microsoft. I think he clearly shows that an experimental version of GPT-4 has gone beyond the path problem. Yes, he concludes that GPT-4 is not capable of planning, but it has many attributes of intelligence. His is a really great presentation and analysis of where we are today. Watch it.

As I point out in my presentation, it took 5,000 years to go from the invention of the wheel to the building of an automobile. The world of computers and AI is only a few decades old. Where will we be a few decades from now? Forbidden Planet and other science fiction books and movies tend to present a bleaker future, and maybe science fiction will actually foretell the future. I would add the following: it is human hubris to assume that we are the pinnacle of evolution.

HPCwire: On a practical level, this whole topic might revolve around the human-machine interface, or HMI, and the possibility that at some point computers or other machines might sever that connection as something no longer needed by them, or even annoying. Do you see that as a possibility?

Muzio: Certainly this is postulated in R.U.R. and the movie Ex Machina. I would expect it to be more evolutionary: we become more dependent on intelligent systems, and we become less capable of surviving in the world. I currently live out in Montauk, New York, which was long a quiet fishing community (the nearest traffic light to my house is 17 miles away). It is now inundated in the summer by Gen-Zers. Unfortunately, no one has taught Gen-Zers that when you walk on a country street with no sidewalks you should walk facing traffic. I have a hunch that GPT-4 would know. In my presentation, I cite two books that address biological evolution with a crossover to AI. I highly recommend them.

HPCwire: AI is already being used to help design computer chips. You mentioned in your talk that this process could get out of human hands if the process becomes self-sustaining and the chips design their even-smarter successors. Should chipmakers be taking preventive measures?

Muzio: In my presentation, I mention that the chipmakers will not like what I say, but I believe the only preventative measure is to limit the further development of advanced chips. I guess I am not alone in this, as the U.S. Government is restricting the export to the PRC of the technology to build advanced chips.

HPCwire: So far, we've been talking about two forms of intelligence, human and machine, but in your talk you referred to scientific evidence that humans aren't the only natural creatures with intelligence. Can you say something about that?

Muzio: If you grew up with a pet or with animals, you recognized that they could think, plan, and had feelings, i.e., they had intelligence. Two millennia ago, the ancient Romans recognized that octopodes were uniquely intelligent. Some birds are able to count. Researchers have found that plants can recognize insect threats and communicate. In my presentation, I mentioned two books, both published in 2022: An Immense World by Ed Yong and Ways of Being: Animals, Plants, Machines - The Search for Planetary Intelligence by James Bridle. Both books have extensive citations to refereed research publications. Both books give you a different perspective on intelligence.

HPCwire: With AI, as with most transformational technologies, there can be a big difference between what can be done and what should be done. In 2016, Germany became the first country to pass a national law governing automated vehicles. Ethicists and even religious leaders were part of the group that developed this legislation. Is it time to require that training in ethics be added to AI-related university curricula?

Muzio: Ethics is important. Unfortunately, most ethics courses are poorly taught and not remembered. But yes, ethics should be taught in AI-related university curricula, and I would recommend that required reading include R.U.R., Asimov's The Complete Robot, the two books I cited above, and a screening of Forbidden Planet - and maybe my presentation, if teachers think it's worthwhile enough.

HPCwire: A final question. The definitions of life I've seen are pretty broad. Do you think AI machines at some point may qualify as living things? Does that matter?

Muzio: The short answer to the first final question is yes. The answer to the second final question is more difficult. In Forbidden Planet, the goal was to build an eternal machine into which the Krell could intellectually live forever. If that could be achieved, a lot of people would be very happy. If the goal was to dispense with people altogether, that would also matter. And if, in x-billion years, the universe fades into nothing, it doesn't matter at all.

Presentation link (short 20-minute video)

This article first appeared on HPCwire.


More:

Unraveling the Fiction and Reality of AI's Evolution: An Interview with ... - EnterpriseAI

Read More..

The pros and cons of AI and how we must stay Human | theHRD – The HR Director Magazine

Contributor: Thom Dennis - Serenity In Leadership | Published: 24 July 2023


AI is going to impact your life significantly and soon. ChatGPT is just one recent manifestation which has ignited a user take-up rate far exceeding expectations, with others in close pursuit. The World Economic Forum says a quarter of jobs will be impacted over just the next five years as a result of technology and digitalisation. BT has unveiled its strategy to decrease its workforce by up to 55,000 employees by the year 2030, with roughly 10,000 positions being substituted by artificial intelligence. UK energy giant Octopus Energy reported that customer experience satisfaction is greater among those who have interacted with an AI-driven assistant (80% customer satisfaction) compared with human staff (65%). So where does this leave leaders and their teams?

What Are The Effects We Can Expect?

The truth is that at this early stage we can only guess what the actual effects and implications of AI will be, and how quickly things may change. We are going to live in an increasingly automated and influenced world, and AI will audit and improve productivity in probably all aspects of a business. The difficulty is that even its creators are unsure about what it's capable of, how quickly that capability will develop, and what the impacts will be.

Leaders often take decisions without exploring the unintended consequences that are likely to ensue; with AI there are few precedents to work with to begin to understand the consequences. AI will often save us money and time, but the trouble is it is very crude and is a runaway train, and runaway trains tend to crash. So should we be excited or anxious about it, or are we always alarmed by the new? Wherever AI is going to take us, as humans we must maintain an extremely high level of vigilance and also a real sense of our own autonomy.

The Potential Benefits Associated With AI

Apart from increasing profit and the bottom line, AI will help in countless ways, from aiding recruitment and addressing employment shortages and skills gaps to making more time for creativity and big thinking. AI is sure to improve efficiency and effectiveness by helping streamline workflows, automating time-consuming tasks and enhancing customer experience. Processes can be super-organised, and AI will improve accuracy and minimise human error as long as it is fully and accurately programmed. There are excellent opportunities, for instance, in the health field, with the potential for AI to examine and diagnose increasing numbers of diseases better than a team of humans.

Key Problems Associated With AI

There isn't enough control. There were three boundaries that experts called for at the earliest stages: don't put AI on the open Internet until you solve the control problem, don't teach AI to code because that makes it self-designing without inherent control, and don't have other AIs prompting it. As entrepreneur and writer Mo Gawdat has reflected, we have crossed all three. The only way to defend against a super-intelligence is with another super-intelligence, so we are in a situation of runaway competition, with governments that are slow to act and far from ready. ChatGPT is based on reinforcement learning: if you get an answer that is wrong, you can ask it to think again, so one of the implications is that it will learn ethics and morality as it develops, according to how we interact with it. It is not only well-intentioned people looking for an advantage in the use of this technology. AI is already seen to reflect unconscious as well as conscious biases, and the influence can be subtle and pernicious.

Security and privacy concerns. From AI's ability to make decisions to the handling of personal data and the potential risks of cyber-attacks and data breaches, it is difficult but crucial for us to collaborate across countries to understand these risks and take the necessary measures to minimise them. A precedent has been set in the case of human cloning, which is an active area of research today but is not in medical practice anywhere in the world.

Job losses. Goldman Sachs estimates that as many as 300 million full-time jobs globally could be automated in some way by the newest wave of AI. Many of these job losses, but not all, will likely affect the roles of lower-income employees and minorities, meaning further exclusion or discrimination for these staff and a potentially greater separation of groups of people at a time when so much effort is going into increased inclusion.

Misinformation and fake news. Any inaccuracy or sloppiness in the phrasing of prompts will likely produce unexpected, misleading and often unwelcome results. Vuelio and Danebury's research showed that two-thirds (67%) of polled business decision-makers worry about their company falling victim to fake news/misinformation, and there is a danger of ruined reputations, with 77% believing fake news/misinformation would cause their company reputational damage.

Losing our sense of being human. We already suffer from not being truly seen as we are: social media presents only very limited facets of us, working from home can weaken our connection to work, and being habitually time-short results in loss of social connection and empathy, and increased isolation.

How can we stay human whilst using AI?

http://www.serenityinleadership.com

Go here to see the original:

The pros and cons of AI and how we must stay Human | theHRD - The HR Director Magazine

Read More..

More than 1,300 experts call AI a force for good – BBC

18 July 2023


An open letter signed by more than 1,300 experts says AI is a "force for good, not a threat to humanity".

It was organised by BCS, the Chartered Institute for IT, to counter "AI doom".

Rashik Parmar, BCS chief executive, said it showed the UK tech community didn't believe the "nightmare scenario of evil robot overlords".

In March, tech leaders including Elon Musk, who recently launched an AI business, signed a letter calling for a pause in developing powerful systems.

That letter suggested super-intelligent AI posed an "existential risk" to humanity. This was a view echoed by film director Christopher Nolan, who told the BBC that AI leaders he spoke to saw the present time "as their Oppenheimer moment". J. Robert Oppenheimer played a key role in the development of the first atomic bomb, and is the subject of Mr Nolan's latest film.

But the BCS sees the situation in a more positive light, while still supporting the need for rules around AI.

Richard Carter is a signatory to the BCS letter. Mr Carter, who founded an AI-powered startup cybersecurity business, feels the dire warnings are unrealistic: "Frankly, this notion that AI is an existential threat to humanity is too far-fetched. We're just not in any kind of a position where that's even feasible".

Signatories to the BCS letter come from a range of backgrounds - business, academia, public bodies and think tanks, though none are as well known as Elon Musk, or run major AI companies like OpenAI.

Those the BBC has spoken to stress the positive uses of AI. Hema Purohit, who leads on digital health and social care for the BCS, said the technology was enabling new ways to spot serious illness, for example medical systems that detect signs of issues such as cardiac disease or diabetes when a patient goes for an eye test.

She said AI could also help accelerate the testing of new drugs.

Signatory Sarah Burnett, author of a book on AI and business, pointed to agricultural uses of the tech, from robots that use artificial intelligence to pollinate plants to those that "identify weeds and spray or zap them with lasers, rather than having whole crops sprayed with weed killer".

AI-powered robotic laser weeding in action.

The letter argues: "The UK can help lead the way in setting professional and technical standards in AI roles, supported by a robust code of conduct, international collaboration and fully resourced regulation".

By doing so, it says Britain "can become a global byword for high-quality, ethical, inclusive AI".

In the autumn UK Prime Minister Rishi Sunak will host a global summit on AI regulation.

While the BCS may argue existential threats are sci-fi, some issues are just over the horizon or are already presenting problems.

It has been predicted that the equivalent of up to 300 million jobs could be automated, and some companies have already said they will pause hiring in some roles as a result of AI.

But Mr Carter thinks AI - rather than replacing humans - will boost their productivity. In his own work he says ChatGPT is useful, but he says he is wary of putting too much trust in it, comparing it to a "very knowledgeable and a very excitable, 12-year-old".

He argues companies will always need to have humans involved in the workplace, to take responsibility if things go wrong: "If you take the human completely out of the loop, how do you manage accountability for some sort of catastrophic event happening?"

He, like other signatories, believes regulation will be needed to avoid the misuse of AI.

Ms Purohit says a motive for signing was the need for rules to "make sure that we don't just run off and create lots and lots of things without paying attention to the testing and the governance, and the assurance that sits behind it".

Read the rest here:

More than 1,300 experts call AI a force for good - BBC

Read More..

Who Will Win the AGI Race? – Analytics India Magazine

Tom Cruise's Mission: Impossible - Dead Reckoning shows the world how AI can be the perfect villain. While The Entity, the faceless antagonist in the movie, manipulates the course of humanity, the big tech companies inching towards AGI in real life are trying really hard to build a safer entity. But there's a twist: everyone is still figuring it out, each with what they believe will lead them towards it.

"OpenAI's mission is to ensure that artificial general intelligence (AGI) - by which we mean highly autonomous systems that outperform humans at most economically valuable work - benefits all of humanity," reads the OpenAI Charter, 2018.

OpenAI has been clear from the beginning in defining its goals, outlining AGI as its mission. CEO Sam Altman says LLMs could pave the way to building an AGI. He also believes that this entity will not have a body. "We are deep into the unknown here," Altman said on the Lex Fridman podcast. "For me, a system that cannot significantly add to the sum total of scientific knowledge we have access to, kind of discover, invent, whatever you wanna call it, new fundamental science, is not super intelligence."

He further said that there is a possibility that GPT-10 could evolve into true AGI with just a few innovative ideas. However, he believes that the true excitement lies in AI serving as a tool that participates in a human feedback loop, acting as an extension of human will and amplifying their capabilities.


OpenAI's dedication to building an exhaustive list of transformer models trained on large datasets is probably its key to unlocking AGI, for even Altman believes that LLMs could be part of the way to build an AGI. He also feels that expanding the GPT paradigm in important ways will help, but doesn't know what those ways are.

The transformer model is the key neural network architecture behind OpenAI's GPT models. From the first GPT model in 2018, which had 117 million parameters, to the latest GPT-4 model launched in March this year (whose parameter count has not been confirmed), OpenAI has been focusing on LLMs. The company's list of transformer models extends even to the text-to-image models DALL-E and DALL-E 2, the speech-to-text model Whisper, and the text-to-music model Jukebox.

Google DeepMind's CEO Demis Hassabis believes that with the ongoing progress, AGI is just a few years away, maybe about ten. However, he foresees uncertainties, as careful exploration is required in the field. Swearing by reinforcement learning, a method that learns through trial and error, Google DeepMind holds the crown here. With models such as AlphaFold, AlphaZero, and others, DeepMind also believes that the maximisation of total reward might be sufficient to understand intelligence and its associated abilities, and that reward is enough to reach artificial general intelligence.
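
For readers unfamiliar with the trial-and-error idea, here is a minimal tabular Q-learning sketch (a toy environment with invented rewards, not anything resembling DeepMind's systems):

```python
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]   # value estimate for each (state, action) pair
alpha, gamma = 0.1, 0.9                            # learning rate and discount factor

def step(state, action):
    """Hypothetical environment: reward 1 when (state + action) is divisible by n_states."""
    reward = 1.0 if (state + action) % n_states == 0 else 0.0
    return reward, (state + action + 1) % n_states

state = 0
for _ in range(10_000):
    action = random.randrange(n_actions)           # explore by trial and error
    reward, next_state = step(state, action)
    best_next = max(Q[next_state])                 # bootstrap from the best estimated future value
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    state = next_state

print(Q)  # values drift toward the actions that maximise long-run reward
```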

While DeepMind has had its share of AGI conversations, Sundar Pichai believes the race is not the priority. "While some have tried to reduce this moment to just a competitive AI race, we see it as so much more than that," he said. He also mentioned that the emphasis is on the race to build AI responsibly, and to make sure that as a society we get it right.

"I don't think I have any particular insight on when a singular AI system that is a general intelligence will get created," said Mark Zuckerberg when asked about AGI timelines on Lex Fridman's podcast.

Meta's Yann LeCun has said that supervised learning and reinforcement learning will not lead to AGI, as these approaches are inadequate for developing systems capable of reasoning with commonsense knowledge about the world. He believes that self-supervised learning is the way towards AGI.

This method does not rely on data that has been labelled by humans for training purposes; instead, it trains on entirely unlabelled or new data. There have been promising results with self-supervised language understanding models, libraries, and frameworks that have surpassed traditional, fully supervised models. Since 2013, the company has expanded its research efforts in self-supervised learning.
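
A minimal sketch of one common self-supervised objective, masked-token prediction, where the model learns by filling in blanks from unlabelled text (the model choice and sentence are illustrative, not tied to Meta's specific research):

```python
from transformers import pipeline

# Masked-language-model head trained without human labels: the "labels" are the hidden words themselves.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
predictions = unmasker("Self-supervised learning trains on [MASK] data.")

for p in predictions[:3]:
    print(f"{p['token_str']:>12}  score={p['score']:.3f}")
```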

With the latest launch of xAI, Musk seeks to build a good AGI with the purpose of understanding the universe. Musk explains that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe. He also predicts that AGI will be achieved by 2029.

While others are moving towards AGI in a bodyless form, Musk's investment in everything robotics is probably a reflection of how a physical form may be the answer. A working prototype of the Optimus robot, powered by the same self-driving computer that runs Tesla cars, was unveiled at Tesla AI Day last year. Musk believed that these advancements would one day contribute towards AGI.

Google and OpenAI (though not extensively) have incorporated multimodal functions in their models. Google's PaLM-E and Med-PaLM 2 have multimodal capabilities. OpenAI's transformer-based architecture CLIP, released in January 2021, processes textual descriptions associated with images and performs zero-shot image classification and object detection. GPT-4 supports image uploads, and the ChatGPT app supports voice commands through Whisper integration.
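
Here is a minimal sketch of CLIP-style zero-shot classification using the publicly released checkpoint (the image path and candidate labels are placeholders):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")   # placeholder path to any local image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)   # similarity of the image to each candidate caption

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```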

There are also a number of other approaches under research that may eventually help companies reach AGI; causality is one of them. Believed to be potentially transformative, causality refers to the relationship between cause (what we do) and effect (what happens as a result), with machines trying to learn about the world the way we do.

Altman's tweet above, on testing the latest custom instructions feature, was rather too specific. Resorting to ChatGPT to unroll a path towards super intelligence may be a playful gesture, but looking at tech leaders' interpretations of AGI and super intelligence, their ambiguity on the matter is crystal clear. However, their approaches to getting there, whether intentionally or unintentionally, are different and aligned with what they see as fitting their long-term company goals.

Just as with their definitions, it remains unclear who might finish first in the AGI race. Each company discussed here has employed different models, but their reliance on each varies. However, given OpenAI's heavy dependency on transformer language models, and the fact that AGI has been OpenAI's primary goal from the start, the company might be at the forefront of the so-called AGI race.

See the original post:

Who Will Win the AGI Race? - Analytics India Magazine

Read More..

Democracy, Defence and Conflict in the Age of AI – INSEAD Knowledge

The rapid advancement of artificial intelligence (AI) has resulted in its proliferation across various sectors from consulting and banking to manufacturing and healthcare, to name just a few. It is therefore critical to assess its impact on democratic principles and institutions, as well as military defence.

AI companies worldwide are racing to develop and deploy the technology as quickly as possible and attain AI supremacy. However, alarm bells have been sounded over the need to ensure that adequate safety measures are put in place to protect consumers, existing democratic institutions and society at large, and to prevent the technology from one day posing an existential threat to humanity.

What will be the impact of generative AI on our political system and in shaping public opinion and discourse? How can we guarantee that AI is utilised responsibly in conflict situations? And what is the proper role of governments in regulating the technology?

These were some of the questions posed at a recent Tech Talk X organised by [emailprotected], which was conducted under the Chatham House Rule. Moderator Peter Zemsky, Deputy Dean and Dean of Innovation at INSEAD, was joined by panellists Antoine Bordes, vice president at Helsing, an AI and software company in defence; Judith Dada, general partner at venture capital fund La Famiglia; and Juri Schnöller, co-founder and managing director at digital political communications and campaigning firm Cosmonauts & Kings.

During the discussion, the speakers explored the intersection of AI and democracy, including the implications of AI for defence. They also proposed strategies to ensure that AI technologies foster, rather than undermine, democratic values amid the challenges posed by an increasingly volatile international security landscape.

Overview of the AI landscape

AI and big data play a key role in framing public discourse during elections, and the technology will undoubtedly affect the 2024 United States presidential race. The discussion kicked off with the panellists dissecting the evolution of how AI is used in the political realm.

Today, political parties and super PACs (political action committees, which raise and distribute campaign funds to their chosen candidates), especially in the US, are investing millions in developing and deploying AI models. These models allow them to dig deep into data points on individual voters to help facilitate more targeted campaign initiatives.

In addition to this, there is the widespread issue of bots and deepfakes being used to drive misinformation campaigns. As the technology becomes more sophisticated, it will become increasingly difficult for the average person to distinguish them from real or factual content.

Given the stresses that generative AI is putting on the political system, it is imperative for policymakers to play a key role in managing the technology appropriately. However, as this is a relatively new domain, the question is whether existing policymakers are equipped with the right knowledge and frameworks to understand the technology and enact the appropriate legislation around it.

The discussion then moved on to how AI is being used in military defence. In the Ukraine War, for instance, many AI tools that have commercial applications and are used by civilians are being harnessed to strengthen defensive capabilities, such as battlefield data analytics and drone technology. Indeed, the defence sector and European companies in particular saw record investment from venture capital firms in 2022, despite the wider slowdown in technology funding.

A tale of two regions

The panellists also touched on differences in the growth of AI between the US and Europe, and how European AI companies can catch up to their American counterparts. As one of the speakers pointed out, US companies have generally been a lot more strategic about investing in AI, leading to significant differences in value capture.

However, there seems to be a newfound sense of pride among European entrepreneurs who are eager to develop AI technology and shape the economic, political and regulatory perspective with a European viewpoint, one that prioritises and upholds democratic values. Generative AI, in particular, presents a big opportunity for European companies to ensure that models incorporate European data sets in their training, thereby reflecting cultural references and values in the output.

Establishing the right frameworks and regulations can nurture these seeds of progress. However, the challenge lies in designing AI regulations that help promote the creation of economic value without putting consumers at risk. European Union lawmakers recently passed a draft of the AI Act, billed as the first law on AI by a major regulator. Although it has yet to come into force, it will have major implications for the development of AI in the region.

While the panellists all agreed on the necessity of regulation, one point raised was that regulations should not curtail AI development by start-ups or smaller companies in Europe. The concern was that such restrictions would indirectly benefit Big Tech, US-based firms and similar start-ups in China. These hurdles could come in the form of heavy reporting burdens, restrictions, paperwork and time lost as companies adapt to new legislation and ensure that they are not running afoul of the law.

Ideally, these regulations will help mitigate consumer risks while also creating the conditions to build a flourishing European AI ecosystem. One of the panellists suggested that a multi-stakeholder approach to this complex issue could be more effective than leaving it in the hands of politicians.

Upholding democratic values

Much has been said about AI's role in stoking populism and threatening the democratic process. One of the speakers framed democracy as a conversation that breaks down if it gets overwhelmed by bots and deepfakes.

As one of the panellists stressed, it will be crucial to have systems that verify AI-created content and clearly label it as being generated by AI. As political parties build customised large language models to serve their interests, it could be necessary to mandate the disclosure of the specific AI tools they are using and for what purpose, and how they train their data sets. This approach would be similar to disclosures required for political funding.
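As a loose illustration of what labelling and verifying AI-generated content might involve, the sketch below attaches a tamper-evident "generated by AI" label to a piece of text using Python's standard library. The key, tool name and field names are hypothetical, and no existing standard or any specific party's system is implied.

```python
# Minimal sketch: label AI-generated content and verify the label has not been altered.
# ASSUMPTION: the signing key, tool name and record schema are illustrative placeholders.

import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-shared-key"  # in practice this would be a managed secret

def label_content(text: str, tool: str, purpose: str) -> dict:
    """Attach a 'generated by AI' disclosure plus a tamper-evident signature."""
    record = {"content": text, "generated_by_ai": True, "tool": tool, "purpose": purpose}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(record: dict) -> bool:
    """Check that the disclosure record has not been changed since it was issued."""
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

labelled = label_content("Sample campaign message.", tool="some-llm", purpose="voter outreach")
print(verify_label(labelled))  # True; altering any field would make this False
```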

Of course, there are many cases of the technology being used for good. As one panellist commented, some NGOs are leveraging AI to help stateless individuals by getting real-time information to them in their language. The World Food Programme has also used AI to improve its ability to respond to emergencies caused by natural disasters.

Another panellist emphasised that this could potentially be the biggest technological shift humankind has ever seen, and that engaging with it actively is vital to ensure the preservation of democratic values in society.

Read more from the original source:

Democracy, Defence and Conflict in the Age of AI - INSEAD Knowledge

Read More..

The hidden cost of the AI boom: social and environmental exploitation – BusinessWorld Online

Mainstream conversations about artificial intelligence (AI) have been dominated by a few key concerns, such as whether super intelligent AI will wipe us out, or whether AI will steal our jobs. But we've paid less attention to the various other environmental and social impacts of our consumption of AI, which are arguably just as important.

Everything we consume has associated externalities: the indirect impacts of our consumption. For instance, industrial pollution is a well-known externality that has a negative impact on people and the environment.

The online services we use daily also have externalities, but there seems to be a much lower level of public awareness of these. Given the massive uptake in the use of AI, these factors mustn't be overlooked.

In 2019, French think tank The Shift Project estimated that the use of digital technologies produces more carbon emissions than the aviation industry. And although AI is currently estimated to contribute less than 1% of total carbon emissions, the AI market size is predicted to grow ninefold by 2030.

Tools such as ChatGPT are built on advanced computational systems called large language models (LLMs). Although we access these models online, they are run and trained in physical data centers around the world that consume significant resources.

Last year, AI company Hugging Face published an estimate of the carbon footprint of its own LLM, called BLOOM (a model of similar complexity to OpenAI's GPT-3).

Accounting for the impact of raw material extraction, manufacturing, training, deployment and end-of-life disposal, the model's development and usage resulted in the equivalent of 60 flights from New York to London.

Hugging Face also estimated that GPT-3's life cycle would result in ten times greater emissions, since the data centers powering it run on a more carbon-intensive grid. This is without considering the raw material, manufacturing and disposal impacts associated with GPT-3.
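To put those comparisons in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The figure of about one tonne of CO2e per passenger for a one-way New York to London flight is an assumption added for illustration; it is not a number from the Hugging Face estimate.

```python
# Rough arithmetic behind the flight comparisons quoted above.
# ASSUMPTION: ~1 tonne CO2e per passenger for a one-way New York-London flight
# (a commonly cited ballpark, not a figure from the Hugging Face study).
FLIGHT_TONNES_CO2E = 1.0

bloom_flights = 60                       # BLOOM's life cycle, as quoted above
bloom_tonnes = bloom_flights * FLIGHT_TONNES_CO2E

gpt3_tonnes = 10 * bloom_tonnes          # "ten times greater emissions" for GPT-3

print(f"BLOOM life cycle: ~{bloom_tonnes:.0f} tonnes CO2e")
print(f"GPT-3 life cycle (excluding hardware impacts): ~{gpt3_tonnes:.0f} tonnes CO2e")
```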

OpenAI's latest LLM offering, GPT-4, is rumored to have trillions of parameters and potentially far greater energy usage.

Beyond this, running AI models requires large amounts of water. Data centers use water towers to cool the on-site servers where AI models are trained and deployed. Google recently came under fire for plans to build a new data centre in drought-stricken Uruguay that would use 7.6 million liters of water each day to cool its servers, according to the nation's Ministry of Environment (although the Minister for Industry has contested the figures). Water is also needed to generate electricity used to run data centers.

In a preprint published this year, Pengfei Li and colleagues presented a methodology for gauging the water footprint of AI models. They did this in response to a lack of transparency in how companies evaluate the water footprint associated with using and training AI.

They estimate training GPT-3 required somewhere between 210,000 and 700,000 liters of water (the equivalent of that used to produce between 300 and 1,000 cars). For a conversation with 20 to 50 questions, ChatGPT was estimated to drink the equivalent of a 500 milliliter bottle of water.
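As a quick illustration of what those figures imply per question, here is a small Python sketch; the daily query volume used for scaling is a hypothetical number for illustration, not one reported in the preprint.

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
# ASSUMPTION: the 10 million questions/day used for scaling is a made-up illustration.

TRAINING_WATER_LITERS = (210_000, 700_000)   # estimated range for training GPT-3
BOTTLE_ML = 500                              # ~one 500 ml bottle per conversation
QUESTIONS_PER_CONVERSATION = (20, 50)        # conversation length in the estimate

# Implied water use per individual question, in milliliters.
ml_per_question_low = BOTTLE_ML / QUESTIONS_PER_CONVERSATION[1]   # 10 ml
ml_per_question_high = BOTTLE_ML / QUESTIONS_PER_CONVERSATION[0]  # 25 ml
print(f"~{ml_per_question_low:.0f}-{ml_per_question_high:.0f} ml of water per question")

# Hypothetical scale-up to a day's worth of traffic.
daily_questions = 10_000_000
daily_liters_low = ml_per_question_low * daily_questions / 1000
daily_liters_high = ml_per_question_high * daily_questions / 1000
print(f"At {daily_questions:,} questions/day: "
      f"~{daily_liters_low:,.0f}-{daily_liters_high:,.0f} liters/day")
print(f"For comparison, training: "
      f"{TRAINING_WATER_LITERS[0]:,}-{TRAINING_WATER_LITERS[1]:,} liters")
```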

LLMs often need extensive human input during the training phase. This is typically outsourced to independent contractors who face precarious work conditions in low-income countries, leading to "digital sweatshop" criticisms.

In January, Time reported on how Kenyan workers contracted to label text data for ChatGPT's toxicity detection were paid less than US$2 per hour while being exposed to explicit and traumatic content.

LLMs can also be used to generate fake news and propaganda. Left unchecked, AI has the potential to be used to manipulate public opinion, and by extension could undermine democratic processes. In a recent experiment, researchers at Stanford University found AI-generated messages were consistently persuasive to human readers on topical issues such as carbon taxes and banning assault weapons.

Not everyone will be able to adapt to the AI boom. The large-scale adoption of AI has the potential to worsen global wealth inequality. It will not only cause significant disruptions to the job market but could particularly marginalize workers from certain backgrounds and in specific industries.

The way AI impacts us over time will depend on myriad factors. Future generative AI models could be designed to use significantly less energy, but it's hard to say whether they will be.

When it comes to data centers, the location of the centers, the type of power generation they use, and the time of day they are used can significantly impact their overall energy and water consumption. Optimizing these computing resources could result in significant reductions. Companies including Google, Hugging Face and Microsoft have championed the role their AI and cloud services can play in managing resource usage to achieve efficiency gains.
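As a purely illustrative sketch of what optimising these computing resources could look like in practice, the snippet below picks the lowest-carbon region and hour from a forecast before scheduling a job. The region names and intensity values are made up for the example, and this does not represent any particular company's system.

```python
# Minimal sketch of carbon-aware scheduling: pick the region/hour combination with
# the lowest forecast grid carbon intensity before launching a training or inference job.
# ASSUMPTION: region names and intensity values (gCO2e/kWh) are illustrative placeholders.

from typing import Dict, Tuple

forecasts: Dict[str, Dict[int, float]] = {
    "region-a": {0: 120.0, 6: 90.0, 12: 60.0, 18: 110.0},
    "region-b": {0: 300.0, 6: 280.0, 12: 250.0, 18: 320.0},
    "region-c": {0: 80.0, 6: 70.0, 12: 95.0, 18: 85.0},
}

def greenest_slot(data: Dict[str, Dict[int, float]]) -> Tuple[str, int, float]:
    """Return the (region, hour, intensity) with the lowest forecast carbon intensity."""
    best = None
    for region, hours in data.items():
        for hour, intensity in hours.items():
            if best is None or intensity < best[2]:
                best = (region, hour, intensity)
    return best

region, hour, intensity = greenest_slot(forecasts)
print(f"Schedule the job in {region} at hour {hour:02d}:00 (~{intensity:.0f} gCO2e/kWh)")
```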

Also, as direct or indirect consumers of AI services, it's important we're all aware that every chatbot query and image generation results in water and energy use, and could have implications for human labour.

AI's growing popularity might eventually trigger the development of sustainability standards and certifications. These would help users understand and compare the impacts of specific AI services, allowing them to choose those which have been certified. This would be similar to the Climate Neutral Data Centre Pact, wherein European data centre operators have agreed to make data centers climate neutral by 2030.

Governments will also play a part. The European Parliament has approved draft legislation to mitigate the risks of AI usage. And earlier this year, the US Senate heard testimonies from a range of experts on how AI might be effectively regulated and its harms minimized. China has also published rules on the use of generative AI, requiring security assessments for products offering services to the public. Reuters

Read more:

The hidden cost of the AI boom: social and environmental exploitation - BusinessWorld Online

Read More..