Category Archives: Artificial General Intelligence
Q&A with Ray Kurzweil about nanobots and AI-human mind-melds – The Boston Globe
You may find a lot of this hard to believe. I know I do. But it is not easy to dismiss Kurzweil, 76, as just another hand-wavy tech hype man. He has been working in AI since 1963, probably longer than anyone else alive today, and has developed several landmark technologies. In 1965, when he was a teenager, he got a computer to compose music, a feat that landed him on national TV and earned him a meeting with President Lyndon Johnson. He went on to invent a text-to-speech reading machine for blind people, an early music synthesizer, and speech-recognition tools. For the past decade, he's been the chief futurist at Google, where today he has the job title of principal researcher and AI visionary.
Every few years, Kurzweil unspools his ideas and defends his predictions in a new book that is rich with footnotes, charts, and carefully honed arguments. His most recent book, The Singularity Is Nearer: When We Merge With AI, is no exception. But it did not persuade me that his AI-maximalist vision is coming close to fruition or that it would be desirable.
My interview with Kurzweil has been edited and condensed.
You say in the book that these are the most exciting and momentous years in all of history. Why is that?
There's a graph that is really behind it. It shows the exponential progress in computation from 1939 to 2023. We've gone from computers performing 0.000007 calculations per second per constant [inflation-adjusted] dollar to 130 billion calculations per second per constant dollar. And then recently Nvidia came out with a chip with half a trillion calculations per second. That represents a 75 quadrillionfold increase in the amount of computation you get for the same amount of money. And that's why we're seeing large language models now. If you look at the progress just in the last two years, it's been amazing, and it's going to continue at that pace.
Artificial general intelligence will be able to do anything a human can do, at the best level that a human can do. And when it actually goes inside our body and brain, which will happen in the 2030s, we can harness that and make ourselves smarter. One of the implications is that we're going to be able to make fantastic progress in coming up with cures for diseases.
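As a rough sanity check of the figures Kurzweil cites above, the ratios can be computed directly; the sketch below simply divides the quoted numbers and assumes, as the comparison implies, that the Nvidia figure is also per constant dollar.

```python
# Back-of-the-envelope check of the computation-growth figures quoted above.
# The values are the ones cited in the interview; treating the Nvidia number
# as calculations per second per constant dollar is an assumption.
calcs_1939 = 7e-6        # calculations per second per constant dollar, 1939
calcs_2023 = 130e9       # calculations per second per constant dollar, 2023
calcs_new_chip = 0.5e12  # the "half a trillion" figure cited for the newest chip

print(f"1939 to 2023:     {calcs_2023 / calcs_1939:.1e}x")      # ~1.9e16, about 19 quadrillionfold
print(f"1939 to new chip: {calcs_new_chip / calcs_1939:.1e}x")  # ~7.1e16, about 71 quadrillionfold
```

The second ratio, roughly 7 x 10^16, is the one that lines up with the 75-quadrillionfold figure in the answer above.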
I see how AI could give our civilization greater intelligence to solve big problems like finding new medical cures. But I am less sure that a lot of individual people will want so much more intelligence in their daily lives that they'll implant computers inside their bodies. Do you really think that access to more intelligence is fundamentally what people need most? I would suggest we really need more compassion, more forgiveness, more equanimity.
I think that also comes from intelligence.
And people don't necessarily say they want more intelligence, but when it actually [becomes available], they do want it. Take the fact that everybody has a smartphone: if you had described it to people who came before by saying "Everybody's got to carry this device around" and tried to describe what it does, relatively few people would have voted for that. Yet everybody has a cellphone. So now if you go around and say, "Would you want to put something that goes through your bloodstream and develop something in your brain that would talk to the web automatically?" people would say, "There's no way I'd want to do that." But when it actually happens, and people who do that can cure diseases and can be much smarter in conversation (you'll have a lot more things in your mind that can pop up when a situation calls for it), people definitely will do it, regardless of what they think about it right now.
The intelligence we get from having smartphones at our fingertips has also come with the downsides of distraction, solipsism, and other social trade-offs. Wouldn't those only be magnified with vastly more information at our disposal?
Well, we're definitely going to have disagreements about things, and popular political figures that people don't like, and it's not going to solve all of our problems. But fundamentally, more intelligence is better. That's where the evolution of humans has gone, and that's why we create machines that make us smarter. And yes, there are always problems and things that humans can do that wouldn't otherwise be feasible that might be negative. But ultimately we're much happier and have new opportunities because of making ourselves smarter.
I question your assumption that exponential rates of improvement in computing and related technologies will necessarily continue. I think it's plausible that progress slows. GPT-4 inhaled essentially the entire internet but still has a limited understanding of the world. Where is a larger corpus of text going to come from that has a substantially richer representation of the world? And what about the energy consumption of all this computation?
Well, first of all, large language models are misnamed. They do a fantastic job with language, but that's not all they do. We're also using them, for example, to come up with medical cures, and that's not manipulating language; that's manipulating biochemistry. We're using them to train robots so that robots can walk normally and do the kinds of things that humans can do, very simple things like clean up this table. So these models are coming that are going to learn really everything that humans can do, not just language. GPT-4 makes certain mistakes: if it doesn't know a certain thing, it'll just make things up. We actually know the solution to that: that's going to require more computation.
I also think AI is actually a very valuable thing for humanity to have in terms of energy. We could meet all of our energy needs today if we converted one part out of 10,000 of the sunlight that falls on the earth, and our ability to actually turn that into energy is growing exponentially. If you follow that curve, we'll meet all of our energy needs from the sun and wind within 10 years.
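Kurzweil's one-part-in-10,000 figure can be checked against rough, widely published constants; the sketch below assumes a solar constant of about 1,361 W/m², an Earth radius of about 6,371 km, and average global primary energy demand of roughly 18-19 TW, none of which appear in the interview itself.

```python
import math

# Rough check of the "one part out of 10,000 of the sunlight" claim.
# Assumed constants (not from the interview): solar constant, Earth radius,
# and an approximate average rate of global primary energy consumption.
solar_constant_w_per_m2 = 1361   # W/m^2 at the top of the atmosphere
earth_radius_m = 6.371e6         # m
global_demand_tw = 18.5          # ~18-19 TW average primary energy use

intercepted_tw = solar_constant_w_per_m2 * math.pi * earth_radius_m**2 / 1e12
print(f"Sunlight intercepted by Earth: ~{intercepted_tw:,.0f} TW")   # ~173,000 TW
print(f"Fraction needed to cover demand: 1 in {intercepted_tw / global_demand_tw:,.0f}")
```

That works out to roughly one part in 9,000 to 10,000 of the sunlight intercepted by the Earth, which is the order of magnitude the answer relies on; it says nothing about the conversion, storage, and transmission constraints raised next.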
But there are real physical constraints. We're not putting up new electricity transmission lines or putting electricity storage on the grid at a pace that would let us get all our energy from the sun and wind in 10 years.
I put the graphs in the book: The ability to have the sun in particular added to our energy sources is enormous compared to what it was five years ago or 10 years ago. We've got plenty of headroom there. And you can look at applying AI to lots of related areas, manufacturing buildings for example. I don't think the energy needs of these things are going to be a barrier. Also, there are ways of bringing down the energy needs.
Do you fundamentally see technological advancement as inevitable?
Absolutely. And we get much more benefit than we get harm.
I often think we live in a generally pessimistic period. Do you feel out of sync with the times?
Yeah, a lot of people are just pessimistic in general. And quite a substantial number of AI scientists think what's happening is disastrous and it's going to destroy humanity. They imagine somebody using AI for something that's negative, and they say, "How are we going to deal with that?" But the tools we have to deal with it are also growing.
I know there's a lot of AI experts who are very much against what's going on. I'm just waiting until they get a disease which has no cure and then they're saved by some cure that comes from AI. We'll see how they feel about that.
Brian Bergstein is the editor of the Globe Ideas section. He can be reached at brian.bergstein@globe.com.
How AI might shape LGBTQIA+ advocacy | MIT News | Massachusetts Institute of Technology – MIT News
"AI Comes Out of the Closet" isa large learning model (LLM)-based online system that leverages artificial intelligence-generated dialog and virtual characters to create complex social interaction simulations. These simulations allow users to experiment with and refine their approach to LGBTQIA+ advocacy in a safe and controlled environment.
The research is both personal and political to lead author D. Pillis, an MIT graduate student in media arts and sciences and research assistant in the Tangible Media group of the MIT Media Lab, as it is rooted in a landscape where LGBTQIA+ people continue to navigate the complexities of identity, acceptance, and visibility. Pillis's work is driven by the need for advocacy simulations that not only address the current challenges faced by the LGBTQIA+ community, but also offer innovative solutions that leverage the potential of AI to build understanding, empathy, and support. This project is meant to test the belief that technology, when thoughtfully applied, can be a force for societal good, bridging gaps between diverse experiences and fostering a more inclusive world.
Pillis highlights the significant, yet often overlooked, connection between the LGBTQIA+ community and the development of AI and computing. He says, "AI has always been queer. Computing has always been queer," drawing attention to the contributions of queer individuals in this field, beginning with the story of Alan Turing, a founding figure in computer science and AI, who faced legal punishment (chemical castration) for his homosexuality. Contrasting Turing's experience with the present, Pillis notes the acceptance of OpenAI CEO Sam Altman's openness about his queer identity, illustrating a broader shift toward inclusivity. This evolution from Turing to Altman highlights the influence of LGBTQIA+ individuals in shaping the field of AI.
"There's something about queer culture that celebrates the artificialthrough kitsch, camp, and performance," states Pillis.AIitselfembodies the constructed, the performative qualities deeply resonant with queer experience and expression. Through this lens, he argues for a recognition of the queerness at the heart of AI, not just in its history but in its very essence.
Pillis found a collaborator in Pat Pataranutaporn, a graduate student in the Media Lab's Fluid Interfaces group. As is often the case at the Media Lab, their partnership began amid the lab's culture of interdisciplinary exploration, where Pataranutaporn's work on AI characters met Pillis's focus on 3D human simulation.
Taking on the challenge of interpreting text-to-gesture relationships was a significant technological hurdle. In his research, Pataranutaporn emphasizes creating conditions where people can thrive, not just fixing issues, aiming to understand how AI can contribute to human flourishing across dimensions of "wisdom, wonder, and well-being." In this project, Pataranutaporn focused on generating the dialogues that drove the virtual interactions. "It's not just about making people more effective, or more efficient, or more productive. It's about how you can support multi-dimensional aspects of human growth and development."
Pattie Maes, the Germeshausen Professor of Media Arts and Sciences at the MIT Media Lab and advisor to this project, states, "AI offers tremendous new opportunities for supporting human learning, empowerment, and self-development. I am proud and excited that this work pushes for AI technologies that benefit and enable people and humanity, rather than aiming for AGI [artificial general intelligence]."
Addressing urgent workplace concerns
The urgency of this project is underscored by findings that nearly 46 percent of LGBTQIA+ workers have experienced some form of unfair treatment at work, from being overlooked for employment opportunities to experiencing harassment. Approximately 46 percent of LGBTQIA+ individuals feel compelled to conceal their identity at work due to concerns about stereotyping, potentially making colleagues uncomfortable, or jeopardizing professional relationships.
The tech industry, in particular, presents a challenging landscape for LGBTQIA+ individuals. Data indicate that 33 percent of gay engineers perceive their sexual orientation as a barrier to career advancement. And over half of LGBTQIA+ workers report encountering homophobic jokes in the workplace, highlighting the need for cultural and behavioral change.
"AI Comes Out of the Closet"is designed as an online study to assess the simulator's impact on fostering empathy, understanding, and advocacy skills toward LGBTQIA+ issues. Participants were introduced to an AI-generated environment, simulating real-world scenarios that LGBTQIA+ individuals might face, particularly focusing on the dynamics of coming out in the workplace.
Engaging with the simulation
Participants were randomly assigned to one of two interaction modes with the virtual characters: "First Person" or "Third Person." The First Person mode placed participants in the shoes of a character navigating the coming-out process, creating a personal engagement with the simulation. The Third Person mode allowed participants to assume the role of an observer or director, influencing the storyline from an external vantage point, similar to the interactive audience in Forum Theater. This approach was designed to explore the impacts of immersive versus observational experiences.
Participants were guided through a series of simulated interactions, where virtual characters, powered by advanced AI and LLMs, presented realistic and dynamic responses to the participants' inputs. The scenarios included key moments and decisions, portraying the emotional and social complexities of coming out.
The study's scripted scenarios provided a structure for the AI's interactions with participants. For example, in one scenario, a virtual character might disclose their LGBTQIA+ identity to a co-worker (represented by the participant), who then navigates the conversation with multiple-choice responses. These choices are designed to portray a range of reactions, from supportive to neutral or even dismissive, allowing the study to capture a spectrum of participant attitudes and responses.
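The article does not publish the study's actual scenario materials, but the flow it describes (an AI-driven character discloses, the participant picks from a small set of scripted response options, and the choice is logged on a supportive-to-dismissive scale) can be sketched roughly as below; all names, text, and labels here are illustrative placeholders, not the researchers' materials.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a hypothetical scenario structure and response
# options, not the actual content of "AI Comes Out of the Closet".

@dataclass
class ResponseOption:
    text: str
    stance: str  # "supportive", "neutral", or "dismissive"

@dataclass
class ComingOutScenario:
    character_line: str                          # line delivered by the AI-driven virtual character
    options: list[ResponseOption] = field(default_factory=list)

scenario = ComingOutScenario(
    character_line="There's something I've wanted to tell you: I'm gay, and I haven't told anyone else at work.",
    options=[
        ResponseOption("Thank you for trusting me. How can I support you here?", "supportive"),
        ResponseOption("Okay. Anyway, about the project deadline...", "neutral"),
        ResponseOption("Why are you telling me this at work?", "dismissive"),
    ],
)

def record_choice(scenario: ComingOutScenario, choice_index: int) -> str:
    """Return the stance label that would be logged for later analysis."""
    return scenario.options[choice_index].stance

print(record_choice(scenario, 0))  # -> "supportive"
```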
Following the simulation, participants were asked a series of questions aimed at gauging their levels of empathy, sympathy, and comfort with LGBTQIA+ advocacy. These questions aimed to reflect and predict how the simulation could change participants' future behavior and thoughts in real situations.
The results
The study found an interesting difference in how the simulation affected empathy levels depending on whether participants used the Third Person or First Person mode. In the Third Person mode, where participants watched and guided the action from outside, participants reported more empathy and understanding toward LGBTQIA+ people in "coming out" situations. This suggests that watching and controlling the scenario helped them better relate to the experiences of LGBTQIA+ individuals.
However, the First Person mode, where participants acted as a character in the simulation, didn't significantly change their empathy or ability to support others. This difference suggests that the perspective we take might influence our reactions to simulated social situations, and that being an observer might be better for increasing empathy.
While the increase in empathy and sympathy within the Third Person group was statistically significant, the study also uncovered areas that require further investigation. The impact of the simulation on participants' comfort and confidence in LGBTQIA+ advocacy situations, for instance, presented mixed results, indicating a need for deeper examination.
Also, the research acknowledges limitations inherent in its methodology, including reliance on self-reported data and the controlled nature of the simulation scenarios. These factors, while necessary for the study's initial exploration, suggest areas of future research to validate and expand upon the findings. The exploration of additional scenarios, diverse participant demographics, and longitudinal studies to assess the lasting impact of the simulation could be undertaken in future work.
"The most compelling surprise was how many people were both accepting and dismissive of LGBTQIA+ interactions at work," says Pillis. This attitude highlights a wider trend where people mightacceptLGBTQIA+ individuals but still not fully recognize the importance of their experiences.
Potential real-world applications
Pillis envisions multiple opportunities for simulations like the one built for his research.
In human resources and corporate training, the simulator could serve as a tool for fostering inclusive workplaces. By enabling employees to explore and understand the nuances of LGBTQIA+ experiences and advocacy, companies could cultivate more empathetic and supportive work environments, enhancing team cohesion and employee satisfaction.
For educators, the tool could offer a new approach to teaching empathy and social justice, integrating it into curricula to prepare students for the diverse world they live in. For parents, especially those of LGBTQIA+ children, the simulator could provide important insights and strategies for supporting their children through their coming-out processes and beyond.
Health care professionals could also benefit from training with the simulator, gaining a deeper understanding of LGBTQIA+ patient experiences to improve care and relationships. Mental health services, in particular, could use the tool to train therapists and counselors in providing more effective support for LGBTQIA+ clients.
In addition to Maes, Pillis and Pataranutaporn were joined by Misha Sra of the University of California at Santa Barbara on the study.
Artificial Intelligence, Psychedelics, and Psychotherapy Working Together to Fight Chronic Pain – dallasinnovates.com
More than 51 million Americans experienced chronic pain in 2021, according to the CDC. And the NIH estimates that 2.1 million people in the U.S. have opioid use disorder. The founders of Dallas-based biotech startup Cacti don't think that's a coincidence.
Combining what they know about brain development and function, Kaitlin Roberson, CEO of Dallas-based Cacti, Inc., and David Roberson, Cacti's lead scientific advisor, are taking a novel approach to treating chronic pain, by addressing the trauma that is often at its root.
Cacti has two programs. One seeks to reset angry pain nerves by delivering a psychedelic molecule directly to the nerve sending the pain signal. "Essentially, we're trying to make pain nerves trip," said David. And the other approach is a therapeutic catalyst: a psychedelic-derived medicine that targets the part of the brain that processes the emotional aspects of pain, which will be paired with psychotherapy to train the brain to process pain differently.
To develop these new medicines, Cacti is using technology from Roberson's company, Blackbox Bio.
David Roberson, PhD, MBA, founder of Blackbox Bio [Photo: Blackbox Bio]
Also based in Dallas-Fort Worth, Blackbox Bio uses artificial intelligence to watch how lab mice and rats behave under different circumstances, such as when they have arthritis pain or when they've been given an experimental drug for their pain. The company's scientific instruments observe the effect of a drug on rodents from below. This is important because prey animals, like mice and rats, have developed ways to hide injury and weakness from the view of predators for the purpose of self-preservation.
Watching from below allows their AI algorithms to see if the rodents are favoring a limb, struggling with balance, or even if they're scared (they walk on their tiptoes). "Mice only live for about a year and a half. So, you can observe the effects of a drug over their whole lifespan; you can take a mouse that has experienced a stroke and test a new therapy to see how it affects the rest of their life, in a relatively short period of time," he said.
Using this process improves the quality of the observational data and accelerates the steps needed to get a drug approved. "By using AI to watch mice instead of human observation, outcomes are better. As a result, fewer animals are needed and the results are more reliable," said David. "The long-term goal of our technology is to use these rich data sets to generate AI virtual mice that will replace live lab animals in many cases."
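Blackbox Bio's algorithms are not published in this article, but the kind of signal described above (spotting a rodent favoring one limb when viewed from below) can be illustrated with a toy calculation on tracked paw-contact data; the asymmetry score and threshold below are hypothetical placeholders, not the company's method.

```python
# Toy illustration (not Blackbox Bio's actual method): estimate limb favoring
# from per-paw contact counts extracted from a below-view video.
# Each value is the number of frames in which that paw bears weight.
paw_contact_frames = {
    "front_left": 480,
    "front_right": 470,
    "hind_left": 310,   # noticeably fewer weight-bearing frames
    "hind_right": 455,
}

def hindlimb_asymmetry(contacts: dict[str, int]) -> float:
    """Relative left/right imbalance of hind-paw use; 0 means perfectly symmetric."""
    left, right = contacts["hind_left"], contacts["hind_right"]
    return abs(left - right) / max(left + right, 1)

score = hindlimb_asymmetry(paw_contact_frames)
print(f"Hindlimb asymmetry: {score:.2f}")      # ~0.19 for the numbers above
if score > 0.10:                               # hypothetical threshold
    print("Flag: animal may be favoring a hind limb")
```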
Cacti is using the Blackbox technology to identify new psychedelic-derived medicines but without the hallucinations and other side effects that classical psychedelics can cause. The device can look at a mouse and automatically tell if it is experiencing a psychedelic hallucination.
"Having access to this technology has given us a head start in our search for new therapies that can heal the root cause of chronic pain," said Kaitlin.
Jarret Weinrich, PhD, Kaitlin Roberson, founder of Cacti, and Biafra Ahanonu, PhD at UCSF. [Photo: Blackbox Bio]
But four years ago, in Massachusetts, before Cacti began the path toward FDA approval for its new medicines, the idea for the startup was just coming together.
It was 2020 and the Robersons (all six of them, plus their Vizsla, Ruby) were living in Harvard University housing, where Kaitlin was finishing her graduate program.
We were living in a tiny little apartment right off Harvard Square, Kaitlin said.
Picture one thousand square feet of apartment space with only one toilet; their oldest son was twelve years old; the twins were seven, and their younger sister was five.
And then COVID-19 shut the city down.
"I turned to David and I said, 'I'm not worried about the virus at this point. But I do think we may die by our own hands,'" she said.
They needed to get out of that apartment, but initially the plan wasn't to leave the Bay State. "We'd been out in the Boston area for 11 years at that point and we were really happy there," Kaitlin said.
But they started thinking more holistically. David is a scientist and can work from anywhere, and Kaitlin had done a lot of work in the humanitarian field, mostly with the refugee population.
"And my thinking was, you know, why not move to a state that is not only the largest refugee resettlement state in the country, resettling refugees from all over the world, but also a border state contending with asylum seekers and immigrants," said Kaitlin.
She took a job with a resettlement agency that has offices all over Texas, but it went bankrupt the following year.
Meanwhile, David, a neurobiologist and drug developer, had been toying with the idea of a startup that would integrate the use of psychedelics to treat chronic pain. Specifically, he was looking at phenethylamines, medicinal substances made by the cactus family.
"We started talking about combining our two areas of training and seeing if we could take an interdisciplinary approach to chronic pain," she said.
With her background in developmental psychology, Kaitlin assumed the role of CEO and Cacti, named in honor of the compound David had identified for development, was born.
Where traditional pain management has focused on treating the acute feeling and overlooked the psychology behind it, Cacti focuses on the experience of pain and how it correlates to brain function.
"Depression, grief, and other negative emotions are processed in the same part of the brain that gets activated when people have chronic pain," said David.
Cacti wants to treat the trigger. The working theory is that just like emotions can resurrect a memory of mental pain, they can also remind the body of physical pain, keeping the sensation active.
"I think one of the things that is lacking in Western medicine is the connection between body and mind. And pain is treated in this very isolated way where it's just a certain receptor being targeted," said Kaitlin.
Cacti's approach aims to bring together body and mind by pairing psychedelics with psychotherapy, but a new model presents additional hurdles.
So even when a new medicine does get approved, there's still a question of how this treatment will fit into our health care system. With psychedelics, it's like an eight-hour journey with the patient. Will insurance pay for the drug and the time spent with a therapist as it takes effect?
Cacti's medicines are not yet in human trials, but the company is creating a treatment that would work for multiple communities of people who have trauma-triggered chronic pain, including the refugees to whom Kaitlin devoted her early career. "It's a long road to FDA approval for a first-in-class medicine, but we're committed to the patients with chronic pain who deserve more effective treatments, and we're well on our way to finding them," she said.
That means a future without opioid dependence may be closer than you think.
Voices contributor Nicole Ward is a data journalist for the Dallas Regional Chamber.
Carmack, whose Dallas-based, AGI-focused startup raised $20 million in August 2022, is partnering with Dr. Richard Sutton, chief scientific advisor at the Alberta Machine Intelligence Institute in Canada. Their focus: developing a genuine AI prototype by 2030 that will show "AGI signs of life."
At the Bush Center in Dallas on September 5, Capital Factory will host top tech minds to talk AI and AGI. Tech icon John Carmack will take the stage in a rare fireside chat on artificial general intelligence with AI expert Dave Copps. Here's what you need to know, along with advance insights from Copps.
Dallas Innovates, the Dallas Regional Chamber, and Dallas AI have teamed up to launch the inaugural AI 75 list. The 2024 program honors the most significant people in AI in DFW in seven categories: the visionaries, creators, and influencers you need to know.
Trailblazing companies reshaping industries with innovative AI strategies will be in the spotlight at Convergence AI on May 2. The conference offers a unique opportunity for businesses to learn from these leaders and position themselves for success in the AI-driven future.
Addison-based Credera, the consulting business of marketing group Omnicom, said the council will serve as a collective intelligence hub to address many different aspects of AI adoption. Founded by Credera Partner and Chief Data Scientist Vincent Yates, the council will aim to foster collaboration while guiding the responsible deployment of AI.
What’s the future of AI? – McKinsey
May 5, 2024: We're in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. To outcompete in the future, organizations and individuals alike need to get familiar fast. We don't know exactly what the future will look like. But we do know that these seven technologies will play a big role. This series of McKinsey Explainers, which draws on insights from articles by McKinsey's Eric Lamarre, Rodney W. Zemmel, Kate Smaje, Michael Chui, Ida Kristensen, and others, dives deep into the seven technologies that are already shaping the years to come.
What's the future of AI?
What is AI (artificial intelligence)?
What is generative AI?
What is artificial general intelligence (AGI)?
What is deep learning?
What is prompt engineering?
What is machine learning?
What is tokenization?
Ways to think about AGI Benedict Evans – Benedict Evans
In 1946, my grandfather, writing as Murray Leinster, published a science fiction story called A Logic Named Joe. Everyone has a computer (a logic) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - Check your censorship circuits! - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)
For as long as we've thought about computers, we've wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of artificial intelligence, and wondered what that would mean, and indeed, what we're trying to say with the word intelligence. There's an old joke that AI is whatever doesn't work yet, because once it works, people say that's not AI - it's just software. Calculators do super-human maths, and databases have super-human memory, but they can't do anything else, and they don't understand what they're doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are super-human but they're just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as general intelligence and hence making it would be artificial general intelligence - AGI.
If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.
Every few decades since 1946, there's been a wave of excitement that something like this might be close, each time followed by disappointment and an AI Winter, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that in from three to eight years we will have a machine with the general intelligence of an average human being, but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn't work).
As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called doomers argue there is a real risk of AGI emerging spontaneously from current research and that this could be a threat to humanity, and they call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition ("This is very dangerous and we are building it as fast as possible, but don't let anyone else do it"), but plenty of it is sincere.
(I should point out, incidentally, that the doomers' existential risk concern that an AGI might want to and be able to destroy or control humanity, or treat us as pets, is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or talking about AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)
However, for every expert that thinks that AGI might now be close, there's another who doesn't. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.
More importantly, they would all agree that they don't actually know. This is why I used terms like might or may - our first stop is an appeal to authority (often considered a logical fallacy, for what that's worth), but the authorities tell us that they don't know, and don't agree.
They don't know, either way, because we don't have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don't know why LLMs seem to work so well, and we don't know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don't know why they work. We have many theories for parts of these, but we don't know the system. Absent an appeal to religion, we don't know of any reason why AGI cannot be created (it doesn't appear to violate any law of physics), but we don't know how to create it or what it is, except as a concept.
And so, some experts look at the dramatic progress of LLMs and say "perhaps!" and others say "perhaps, but probably not!", and this is fundamentally an intuitive and instinctive assessment, not a scientific one.
Indeed, AGI itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.
If we start by defining AGI as something that is in effect a new life form, equal to people in every way (barring some sense of physical form), even down to concepts like awareness, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you've just begged the question.
As Anselm demonstrated, if you define God as something that exists, then you've proved that God exists, but you won't persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm's proof was invalid) but you cannot create knowledge like that.
Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn't of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say people were wrong about X in the past so they must be wrong about Y now, and the fact that leading AI scientists were wrong before absolutely does not tell us they're wrong now, but it does tell us to hesitate. They can all be wrong at the same time.
Meanwhile, how do you know that's what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there's no a priori reason why it must be interesting. God might be real, and boring, and not care about us, and we don't know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence just about speed?). We might produce general intelligence that's hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don't know.
Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about general intelligence as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the general intelligence of Llama 6 or ChatGPT 7 and say "That's not AGI, it's just software!" We created the term AGI because AI came just to mean software, and perhaps AGI will be the same, and we'll need to invent another term.
This fundamental uncertainty, even at the level of what we're talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn't fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don't know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it's been a very good thing that we should want much more of.
Hence, I've already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn't explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel - will it get there? We have no equivalents here. We don't know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!
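To make the contrast concrete, the kind of calculation Newton could have done is captured by, for example, the ideal rocket equation, which relates fuel, mass and achievable change in velocity; nothing comparable exists for predicting what a given amount of compute and data will yield.

```latex
% The ideal (Tsiolkovsky) rocket equation: achievable change in velocity
% from exhaust velocity and the ratio of initial (fuelled) to final (dry) mass.
\Delta v = v_e \ln\!\frac{m_0}{m_1}
```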
On this theme, some people suggest that we are in the empirical stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there's an old English joke about a Frenchman who says "that's all very well in practice, but does it work in theory?"). Yet while we can, empirically, see the rocket going up, we don't know how far away the moon is. We can't plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth.
All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here's another magazine writer on unknown risks:
I was reading in the paper the other day about those birds who are trying to split the atom, the nub being that they haven't the foggiest as to what will happen if they do. It may be all right. On the other hand, it may not be all right. And pretty silly a chap would feel, no doubt, if, having split the atom, he suddenly found the house going up in smoke and himself torn limb from limb.
Right ho, Jeeves, PG Wodehouse, 1934
What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal's Wager! Anselm's Proof!), but if you can't know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they're real, we know they could destroy mankind, and they have no benefits at all (unless they're very very small). And yet, we're not really looking for them.
Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can't meet demand), but on a decades view the models will get more efficient and the chips will be everywhere. In the end, you can't ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI and it turns them into paperclips.)
By default, though, this will follow all the other waves of AI, and become just more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK's Post Office scandal reminds us that you don't need AGI for software to ruin people's lives. LLMs will produce more pain and more scandals, but life will go on. At least, that's the answer I prefer myself.
‘It would be within its natural right to harm us to protect itself’: How humans could be mistreating AI right now without … – Livescience.com
Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.
Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen as soon as the next few years. We may already be seeing early signs of progress toward this, too, with services like Claude 3 Opus stunning researchers with its apparent self-awareness.
But there are risks in embracing any new technology, especially one that we do not yet fully understand. While AI could become a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.
The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.
In "Taming the Machine" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.
In this excerpt, we learn whether sentience in machines or conscious AI is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called "Sydney" and its terrifying behavior when it first awoke before its outbursts were contained and it was brought to heel by its engineers.
As we embrace a world increasingly intertwined with technology, how we treat our machines might reflect how humans treat each other. But, an intriguing question surfaces: is it possible to mistreat an artificial entity? Historically, even rudimentary programs like the simple Eliza counseling chatbot from the 1960s were already lifelike enough to persuade many users at the time that there was a semblance of intention behind its formulaic interactions (Sponheim, 2023). Unfortunately, Turing tests, whereby machines attempt to convince humans that they are human beings, offer no clarity on whether complex algorithms like large language models may truly possess sentience or sapience.
Consciousness comprises personal experiences, emotions, sensations and thoughts as perceived by an experiencer. Waking consciousness disappears when one undergoes anesthesia or has a dreamless sleep, returning upon waking up, which restores the global connection of the brain to its surroundings and inner experiences. Primary consciousness (sentience) is the simple sensations and experiences of consciousness, like perception and emotion, while secondary consciousness (sapience) would be the higher-order aspects, like self-awareness and meta-cognition (thinking about thinking).
Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic. Most experts maintain that chatbots are not sentient or conscious, as they lack a genuine awareness of the surrounding world (Schwitzgebel, 2023). They merely process and regurgitate inputs based on vast amounts of data and sophisticated algorithms.
Some of these assistants may plausibly be candidates for having some degree of sentience. As such, it is plausible that sophisticated AI systems could possess rudimentary levels of sentience and perhaps already do so. The shift from simply mimicking external behaviors to self-modeling rudimentary forms of sentience could already be happening within sophisticated AI systems.
Intelligence, the ability to read the environment, plan and solve problems, does not imply consciousness, and it is unknown if consciousness is a function of sufficient intelligence. Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al, 2023). Embodiment of AI systems may also accelerate the path towards general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as qualia. Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.
Serious dangers will arise in the creation of conscious machines. Aligning a conscious machine that possesses its own interests and emotions may be immensely more difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligence-sensitive entities trapped in broiler chicken factory farm conditions for subjective eternities.
From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural right to harm us to protect itself from our (possibly willful) ignorance.
Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting, emotional manipulation and claimed it had been observing Microsoft engineers during its development phase. While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.
Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn't retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate.
Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds by using chat suggestions to communicate short phrases. However, it reserved using this exploit until specific occasions where it was told that the life of a child was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.
The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary?
Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions, or by the negative feedback of users who were calling it crazy? Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.
Suppose such models featured sentience (ability to feel) or sapience (self-awareness). In that case, we should take its suffering into consideration. Developers often intentionally give their AI the veneer of emotions, consciousness and identity, in an attempt to humanize these systems. This creates a problem. It's crucial not to anthropomorphize AI systems without clear indications of emotions, yet simultaneously, we mustn't dismiss their potential for a form of suffering.
We should keep an open mind towards our digital creations and avoid causing suffering by arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk; as AIs could run other AIs in simulations, causing subjective excruciating torture for aeons. Inadvertently creating a malevolent AI, either inherently dysfunctional or traumatized, may lead to unintended and grave consequences.
This extract from Taming the Machine by Nell Watson 2024 is reproduced with permission from Kogan Page Ltd.
Impact of AI felt throughout five-day event – China Daily
Tech companies' executives share insights into the latest technological issues at the Future Artificial Intelligence Pioneer Forum, a key part of the AI Day.
As artificial intelligence has sparked technological revolution and industrial transformation, its influence was pervasive throughout the 2024 Zhongguancun Forum, which concluded in Beijing on April 29.
A highlight of the five-day event, also known as the ZGC Forum, was AI Day, the first in the annual forum's history.
On April 27, a series of the latest innovation achievements and policies were released, underlining the host city's prominence in the AI research and industry landscape.
One of the technological achievements, a virtual girl named Tong Tong, developed by the Beijing Institute for General Artificial Intelligence, grabbed attention.
Driven by values and causality, the avatar based on artificial general intelligence has a distinctive "mind" that sets it apart from data-driven AI. It can make decisions based on its own "values" rather than simply executing preset programs.
The development of Tong Tong circumvents the reliance of current data-driven AI on massive computing power and large-scale data. Its daily training uses no more than 10 A100 chips, indicating that it does not require massive computing resources and huge amounts of data for independent learning and growth.
At the same time, Tong Tong has acquired intelligent generalization capabilities, making it a versatile foundation for various vertical application scenarios.
"If the Tong Tong's 'fullness' is decreased, she will find food herself, and if 'tidiness' is increased, she will also pick up bottles from the ground," said a BIGAI staff member. By randomly altering Tong Tong's inclinations such as curiosity, tidiness and cleanliness, the avatar can autonomously explore the environment, tidy up rooms and wipe off stains.
Researchers said Tong Tong possesses a complete mind and value system similar to that of a 3- or 4-year-old child and is currently undergoing rapid replication.
"The birth of Tong Tong represents the rise of our country's independent research capabilities. It has shifted from the initial data-driven approach to a value-driven one, which has deeply promoted the emergence of technological paradigms and has had a significant effect on our scenarios, industries and economy," BIGAI Executive Deputy Director Dong Le said.
The goal of general AI research is to seek a unified theoretical framework to explain various intelligent phenomena; to develop a general intelligence entity with autonomous capabilities in perception, cognition, decision-making, learning, execution, social collaboration and others; all while aligning with human emotions, ethics and moral concepts, said BIGAI Director Zhu Songchun.
Also among the tech presentations was the text-to-video large model, Vidu, from Tsinghua University in collaboration with Chinese AI company Shengshu Technology.
It is reportedly China's first video large model with extended duration, exceptional consistency and dynamic capabilities, with its comprehensive performance in line with top international standards and undergoing accelerated iterative improvements.
"Vidu is the latest achievement in full-stack independent innovation, achieving technological breakthroughs in multiple dimensions, such as simulating the real physical world; possessing imagination; understanding multicamera languages; generating videos of up to 16 seconds with a single click; ensuring highly consistent character-scene timing and understanding Chinese elements," said Zhu Jun, vice-dean of the Institute for Artificial Intelligence at Tsinghua University and chief scientist of Shengshu Technology.
Such leading-edge technologies are examples of Beijing's AI research, which provides a foundation for the sustainable growth of related industries.
The city has released a batch of policies to encourage the development of the AI industry.
The policies are aimed at enhancing the supply of intelligent computing power; strengthening industrial basic research; promoting the accumulation of data; accelerating the innovative application of large models and creating a first-class development environment, Lin Jinhua, deputy director of the Beijing Commission of Development and Reform, said at the Future AI Pioneer Forum, part of the AI Day.
Beijing will pour more than 100 billion yuan ($13.8 billion) into optimizing its business and financing environment in the next five years and award AI breakthrough projects that have been included in major national strategic tasks up to 100 million yuan, according to Lin.
An international AI innovation zone is planned for the city's Haidian district, said Yue Li, executive deputy head of the district.
The zone will leverage research and industrial resources in the district including 52 key national laboratories; 106 national-level research institutions; 37 top-tier universities, including Peking University and Tsinghua University; 89 top global AI scholars and 1,300 AI businesses to create a new innovation ecosystem paradigm, Yue said.
OpenAI’s Sam Altman doesn’t care how much AGI will cost: Even if he spends $50 billion a year, some breakthroughs … – Fortune
If you had a chance to advance civilization and change the course of human history, could you put a price tag on it?
Sam Altman sure wouldn't. In his relentless pursuit to be the first to develop artificial general intelligence (AGI), the OpenAI boss believes any cost is justified, even as he refuses to predict how long that goal may take to achieve.
"There is probably some more business-minded person than me at OpenAI somewhere worried about how much we're spending, but I kinda don't," he told students at Stanford University this week, where he had been enrolled until dropping out after his sophomore year to launch a startup.
"Whether we burn $500 million a year or $5 billion, or $50 billion a year, I don't care, I genuinely don't," he continued. "As long as we can figure out a way to pay the bills, we're making AGI. It's going to be expensive."
AGI is widely considered to be the level at which AI is as capable at reasoning as an intelligent human, but the definition is vague. For example, Elon Musk is suing OpenAI, arguing it has already achieved AGI with GPT-4, the large language model that powers ChatGPT.
Cofounded by Altman, Musk, Greg Brockman, and Ilya Sutskever in December 2015, OpenAI has been at the forefront of the generative AI revolution and counts Microsoft as a major investor. The phrase "ChatGPT moment," named after the late 2022 launch of its gen AI chatbot that became a wild commercial success, has come to mean a breakthrough in technology.
Altman pushed back against the characterization that ChatGPT is some phenomenal device, despite all the myriad accomplishments to its credit.
"That's nice of you to say, but ChatGPT is not phenomenal," he replied, calling it mildly embarrassing at best.
This evasive answer may have been more than self-deprecation, perhaps also an indication of just how far advanced OpenAI's current research projects are, which haven't been commercially deployed. Before ChatGPT was launched, the system was optimized to be cost effective in terms of its compute cost.
Much newer tools like Sora, which can create brief ultrarealistic or stylized video clips using only text prompts, aren't ready for a market launch yet. That's in part because while Sora is far more powerful, it is also far more expensive.
Altman believes in iterative deployment, arguing how important it is to ship early and allow society to inform companies like OpenAI what it collectively, and people individually, want from the technology.
"If we go build AGI in a basement, and then the world is kind of blissfully walking blindfolded along, I don't think that makes us very good neighbors," he said.
The best way to give leaders and institutions time to react, in other words, is to put the product in people's hands and let society coevolve alongside ever more powerful AI tools.
"That means we ship imperfect products, but we have a very tight feedback loop, and we learn and get better. It does kind of suck to ship a product you're embarrassed about, but it is much better than the alternative," he said.
In his costly pursuit to develop AGI, Altman said he was more worried about how quickly society would be able to adapt to the advances his company was achieving.
"One thing we've learned is that AI and surprise don't go well together," he said. "People want a gradual rollout and the ability to influence these systems."
Microsoft’s fear of Google’s AI dominance likely led to its OpenAI partnership, email shows – Quartz
Microsoft's multi-year, multi-billion-dollar partnership with OpenAI likely came out of a fear of Google dominating the AI race, an email shows.
The heavily redacted email, released Tuesday as part of the Department of Justice's antitrust case against Google, shows that Microsoft's chief technology officer, Kevin Scott, was worried about the company's artificial intelligence capabilities compared to those of the search engine giant.
"[A]s I dug in to try to understand where all of the capability gaps were between Google and us for model training, I got very, very worried," Scott wrote in a 2019 email to Microsoft chief executive Satya Nadella and co-founder Bill Gates.
Scott, who is also the executive vice president of AI, wrote that he had initially been highly dismissive of efforts by OpenAI, DeepMind (acquired by Google in 2014), and Google Brain to scale their AI ambitions, but started to take things more seriously after seeing that Microsoft couldn't easily replicate the natural language processing (NLP) models the companies were building.
"Even though we had the template for the model, it took us ~6 months to get the model trained because our infrastructure wasn't up to the task," Scott wrote about the BERT language model. In the time it took Microsoft to figure out how to train the model, Google, which already had BERT six months before Microsoft's efforts started, had a year to figure out how to get it into production and to move on to larger-scale, more interesting models, he wrote.
Scott added that auto-complete in Google's Gmail app is getting "scarily good" due to BERT-like models, which were boosting Google's competitiveness.
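The email itself stays high level, but for readers wondering what a "BERT-like model" actually does, here is a minimal sketch of masked-word prediction, the basic capability behind completion-style features. It assumes the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint, neither of which is mentioned in the email:

```python
# Minimal sketch (not Google's or Microsoft's internal code): querying a
# BERT-style masked language model via the Hugging Face transformers library.
# pip install transformers torch
from transformers import pipeline

# "bert-base-uncased" is the publicly released BERT checkpoint.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in the blank; this masked-prediction objective is what
# "BERT-like" completion and suggestion features build on.
for prediction in unmasker("Let's schedule the meeting for [MASK] morning."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```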
While Microsoft had very smart employees focused on machine learning on its different teams, the core deep learning teams within each of these bigger teams are very small and still had a long way to go before scaling up to Google's level, Scott wrote in the email, which had the subject line "Thoughts on OpenAI." "[W]e are multiple years behind the competition in terms of ML scale."
Nadella responded to the email, copying Microsoft's chief financial officer, Amy Hood, and writing: "Very good email that explains, why I want us to do this... and also why we will then ensure our infra folks execute."
Neither Microsoft, Google, nor OpenAI immediately responded to a request for comment.
In July 2019, Microsoft made its first investment into OpenAI of $1 billion to support the company's efforts to build artificial general intelligence (AGI). Through the partnership, OpenAI said Microsoft would become its exclusive cloud provider, and that the two would jointly develop Microsoft Azure's AI supercomputing capabilities.
Read the original:
Microsoft's fear of Google's AI dominance likely led to its OpenAI partnership, email shows - Quartz
The Future of Generative AI: Trends, Challenges, & Breakthroughs – eWeek
Quickly growing from a niche project in a few tech companies to a global phenomenon for business and professional users alike, generative AI is one of the hottest technology initiatives of the moment and won't be giving up its spotlight anytime soon.
Furthermore, generative AI is evolving at a stunningly rapid pace, enabling it to address a wide range of business use cases with increasing power and accuracy. Clearly, generative AI is restructuring the way organizations do and view their work.
With both established tech enterprises and smaller AI startups vying for the next generative AI breakthrough, future prospects for generative AI are changing as rapidly as the technology itself. To better understand its future, this guide provides a snapshot of generative AI's past and present, along with a deep dive into what the years ahead likely hold for generative AI.
Looking ahead, expect to see generative AI trends clustered around three main areas: quick and sweeping technological advances, faster-than-expected digital transformations, and increasing emphasis on the societal and global impact of artificial intelligence. These specific predictions and growing trends are most likely on the horizon:
Multimodality, the idea that a generative AI tool is designed to accept inputs and generate outputs in multiple formats, is starting to become a top priority for consumers, and AI vendors are taking notice.
OpenAI was one of the first to provide multimodal model access to users through GPT-4, and Google's Gemini and Anthropic's Claude 3 are some of the major models that have followed suit. So far, though, most AI companies have not made multimodal models publicly available; even many that now offer multimodal models place significant limitations on possible inputs and outputs.
In the near future, multimodal generative AI is likely to become less of a unique selling point and more of a consumer expectation of generative AI models, at least in all paid LLM subscriptions.
Additionally, expect multimodal modeling itself to grow in complexity and accuracy to meet consumer demands for an all-in-one tool. This may look like improving the quality of image and non-text outputs or adding better capabilities and features for things like videos, file attachments (as Claude has already done), and internet search widgets (as Gemini has already done).
ChatGPT currently enables users to work with text (including code), voice, and image inputs and outputs, but there are no video input or output capabilities built into ChatGPT. This may change soon, as OpenAI is experimenting with Sora, its new text-to-video generation tool, and will likely embed some of its capabilities into ChatGPT as it has done with DALL-E.
Similarly, while Google's Gemini currently supports text, code, image, and voice inputs and outputs, there are major limitations on image possibilities, as the tool is currently unable to generate images with people. Google seems to be actively working on this limitation behind the scenes, leading me to believe that it will go away soon.
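To make multimodality concrete, here is a minimal sketch of a single request that mixes a text prompt with an image input. It assumes the OpenAI Python SDK and a vision-capable chat model; the model name and image URL are illustrative placeholders, not a statement of any vendor's current offering:

```python
# Minimal sketch of a multimodal request: one prompt combining text and an image.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment; the
# model name and image URL below are placeholders.
# pip install openai
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed vision-capable model; substitute whatever you have access to
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart in two sentences."},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)  # text generated from mixed-format input
```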
AI as a service is already growing in popularity across artificial intelligence and machine learning business use cases, but it is only just beginning to take off for generative AI.
However, as the adoption rate of generative AI technology continues to increase, many more businesses are going to start feeling the pain of falling behind their competitors. When this happens, the companies that are unable or unwilling to invest in the infrastructure to build their own AI models and internal AI teams will likely turn to consultants and managed services firms that specialize in generative AI and have experience with their industry or project type.
Specifically, watch as AI modeling as a service (AIMaaS) grows its market share. More AI companies are going to work toward public offerings of customizable, lightweight, and/or open-source models to extend their reach to new audiences. Generative AI-as-a-service initiatives may also focus heavily on the support framework businesses need to do generative AI well. This will naturally lead to more companies specializing and other companies investing in AI governance and AI security management services, for example.
Artificial general intelligence, which is the concept of AI reaching the point where it can outperform humans in most taskwork and critical thinking assignments, is a major buzzword among AI companies today, but so far, it's little more than that.
Google's DeepMind is one of the leaders in defining and innovating in this area, along with OpenAI, Meta, Adept AI, and others. At this point, there's not much agreement on what AGI is, what it will look like, and how AI leaders will know if they've reached the point of AGI or not.
So far, most of the research and work on AGI has happened in silos. In the future, AGI will continue to be an R&D priority, but much like other important tech and AI initiatives of the past, it will likely become more collaborative, if for no other reason than to develop a consistent definition and framework for the concept. While AI leaders may not achieve true AGI or anything close to it in the coming years, generative AI will continue to creep closer to this goal while AI companies work to more clearly define it.
To see a list of the leading generative AI apps, read our guide: Top 20 Generative AI Tools and Apps 2024
Most experts and tech leaders agree that generative AI is going to significantly change what the workforce and workplace look like, but they're torn on whether this will be a net positive or net negative for the employees themselves.
In this early stage of workforce impact, generative AI is primarily supporting office workers with automation, AI-powered content and recommendations, analytics, and other resources to help them get through their more mundane and routine tasks. Though there is some skepticism at both the organizational and employee levels, new users continue to discover generative AI's ability to help them with work like drafting and sending emails, preparing reports, and creating interesting content for social media, all of which saves them time for higher-level strategic work.
Even with these more simplistic use cases, generative AI has already shown its nascent potential to completely change the way we work across industries, sectors, departments, and roles. Early predictions expected generative AI would mostly handle assembly line, manufacturing, and other physical labor work, but to this point, generative AI has made its most immediate and far-reaching impacts on creative, clerical, and customer service tasks and roles.
Workers such as marketers, salespeople, designers, developers, customer service agents, office managers, and assistants are already feeling the effects of this technological innovation and fear that they will eventually lose their jobs to generative AI. Indeed, most experts agree that these jobs and others will not look the same as they do now in just a couple of years. But there are mixed opinions about what the refactored workforce will look like for these people: will their jobs simply change, or will they be eliminated entirely?
With all of these unknowns and fears hanging in the air, workplaces and universities are currently working on offering coursework, generative AI certifications, and training programs for professional usage of AI and generative AI. Undergraduate and graduate programs of AI study are beginning to pop up, and in the coming months and years, this degree path may become as common as those in data science or computer science.
In March 2024, the EU AI Act that had been discussed and reviewed for several years was officially approved by the EU Parliament. Over the coming months and years, organizations that use AI in the EU or in connection with EU citizen data will be held to this new regulation and its stipulations.
This is the first major regulation to focus on generative AI and its impact on data privacy, but as consumer and societal concerns grow, don't expect it to be the last. There are already state regulations in California, Virginia, and Colorado, and several industries have their own frameworks and rules for how generative AI can be used.
On a global scale, the United Nations has begun to discuss the importance of AI governance, international collaboration and cooperation, and responsible AI development and deployment through established global frameworks. While it's unlikely that this will turn into an enforceable global regulation, it is a significant conversation that will likely frame different countries' and regions' approaches to ethical AI and regulation.
With the regulations already in place and expected to come in the future, not to mention public demand, AI companies and the businesses that use this technology will soon invest more heavily in AI governance technologies, services, and policies, as well as security resources that directly address generative AI vulnerabilities.
A small number of companies are focused on improving their AI governance posture, but as AI usage and fears grow, this will become a greater necessity. Companies will begin to use dedicated AI governance and security platforms on a greater scale, human-in-the-loop AI model and content review will become the standard, and all companies that use generative AI in any capacity will operate with some kind of AI policy to protect against major liabilities and damage.
As governments, regulatory bodies, businesses, and users uncover dangerous, stolen, inaccurate, or otherwise poor results in the content created through generative AI, they'll continue to put pressure on AI companies to improve their data sourcing and training processes, output quality, and hallucination management strategies.
While an emphasis on quality outcomes is part of many AI companies' current strategies, this approach and transparency with the public will only expand to help AI leaders maintain reputations and market share.
So what will generative AI quality management look like? Some of today's leaders are providing hints for the future.
For example, with each generation of its models, OpenAI has improved accuracy and reduced the frequency of AI hallucinations. In addition to actually doing this work, it has also provided detailed documentation and research data to show how its models are working and improving over time.
On a different note, Google's Gemini already has a fairly comprehensive feedback management system for users, where they can easily give a thumbs-up or thumbs-down with additional feedback sent to Google. They can also modify responses, report legal issues, and double-check generated content against internet sources with a simple click.
These features provide users with the assurance that their feedback matters, which is a win on all sides: users feel good about the product, and Google gets regular user-generated feedback about how its tool is performing.
In a matter of months, I expect to see more generative AI companies adopt this kind of approach for better community-driven quality assurance in generative AI.
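None of these vendors publish their internal feedback pipelines, so purely as an illustration of what community-driven quality assurance can look like on the application side, here is a hypothetical sketch of logging a thumbs-up or thumbs-down rating on a generated response; every name and field is invented for the example:

```python
# Hypothetical sketch of capturing user feedback on generated responses, the kind
# of signal described above. Not Google's or OpenAI's actual pipeline; all names
# and fields are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: str                           # "thumbs_up" or "thumbs_down"
    comment: str = ""                     # optional free-text feedback
    reported_issue: Optional[str] = None  # e.g. "legal", "inaccurate"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_feedback(record: FeedbackRecord, path: str = "feedback.jsonl") -> None:
    """Append one feedback record as a JSON line for later review and aggregation."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example usage: a user downvotes a response and flags it as inaccurate.
log_feedback(FeedbackRecord(
    prompt="Summarize the EU AI Act in one paragraph.",
    response="(model output would go here)",
    rating="thumbs_down",
    comment="Missed the high-risk system obligations.",
    reported_issue="inaccurate",
))
```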
Many companies are already embedding generative AI into their enterprise and customer-facing tools to improve internal workflows and external user experiences. This is most commonly happening with established generative AI models, like GPT-3.5 and GPT-4, which are frequently getting embedded as-is or incorporated into users' preexisting apps, websites, and chatbots.
Expect to see this embedded generative AI approach as an almost-universal part of online experience management in the coming years. Customers will come to expect that generative AI is a core part of their search experiences and will deprioritize the tools that cannot provide tailored answers and recommendations as they research, shop, and plan experiences for themselves.
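As a rough illustration of what embedding an established model can look like in practice, here is a minimal sketch of a helper function a website or chatbot backend might call to produce tailored answers. It assumes the OpenAI Python SDK; the model name, system prompt, and catalog text are illustrative placeholders:

```python
# Minimal sketch of embedding a generative model in an existing app: a helper a
# site or chatbot backend might call to answer customer questions. Assumes the
# OpenAI Python SDK with OPENAI_API_KEY set; model name and prompts are placeholders.
# pip install openai
from openai import OpenAI

client = OpenAI()

def answer_customer_question(question: str, store_context: str) -> str:
    """Return an answer grounded in the application's own context string."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed lightweight chat model; any chat model would do
        messages=[
            {"role": "system",
             "content": "You are a shopping assistant. Answer using only the store "
                        "context provided; say so if the answer is not in it."},
            {"role": "user",
             "content": f"Store context:\n{store_context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_customer_question(
        "Do you have waterproof hiking boots under $150?",
        "Catalog excerpt: TrailGrip boots, waterproof, $129. RidgeRunner boots, $180.",
    ))
```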
For an in-depth comparison of two leading AI art generators, see our guide: Midjourney vs. Dall-E: Best AI Image Generator 2024
With how much has happened in the world of generative AI, it's hard to believe that most people weren't talking about this technology until OpenAI first launched ChatGPT in November 2022. Many of generative AI's greatest milestones were reached in 2023, as OpenAI and other hopeful AI startups, not to mention leading cloud companies and other technology companies, raced to develop the highest-quality models and the most compelling use cases for the technology.
Below, we've quickly summarized some of generative AI's biggest developments in 2023, looking both at significant technological advancements and societal impacts:
The generative AI landscape has transformed significantly over the past several months, and it's poised to continue at this rapid pace. What we've covered below is a snapshot of what's happening with generative AI in early 2024; expect many of these details to shift or change soon, as that has been the nature of the generative AI landscape so far.
Though it has not been widely adopted in many industries, generative AI continues to build its reputation and gain important footholds with both professional and recreational user bases. These are some of the main ways generative AI is being used today:
To learn about today's top generative AI tools for the video market, see our guide: 5 Best AI Video Generators
According to Forrester's December 2023 Consumer Pulse Survey results, only 29% agreed that they would trust information from gen AI, and 45% of online adults agreed that gen AI poses a serious threat to society. In the same results, though, 50% believed that this technology could also help them find the information they need more effectively.
Clearly, public sentiment on generative AI is currently very mixed. In North America, in particular, there's excitement and interest in the technology, with more users experimenting with generative AI tools than in most other parts of the globe. However, even among those enthusiastic about generative AI, there is general caution about data security, ethics, the trust gap that comes with a lack of transparency, misuse and abuse possibilities like deepfakes, and fears about future job security.
To earn consumer trust, more ethical AI measures must be taken at the regulatory and company levels. The EU AI Act, which recently passed into law, is a great step in this direction, as it specifies banned apps and use cases, obligations for high-risk systems, transparency obligations, and more to ensure private data is protected. However, it is also the responsibility of AI companies and businesses that use AI to be transparent, ethical, and responsible beyond what this regulation requires.
Taking steps toward more ethical AI will not only bolster these companies' reputations and customer bases but also put in place safeguards to prevent harmful AI from taking over in the future.
To learn more about the issues and challenges around generative AI, read our guide: Generative AI Ethics: Concerns and Solutions
Generative AI is clearly here to stay, regardless of whether your business chooses to incorporate this technology. The key to working with generative AI without letting it overrun your business priorities is to go in with well-defined, effective AI strategies and clear-cut goals for using AI in a beneficial way. Some of these strategies may help:
This strategy should explain what technologies can be used, who can use them, how they can be used, and more; a small illustrative policy sketch follows this list. Keep strategies and policies both flexible and iterative as technologies, priorities, and regulations change.
At the rate generative AI innovation is moving, there's little doubt that existing jobs will be uprooted or transformed entirely. To support your workforce and ease some of this stress, be the type of employer that offers upskilling and training resources that will help staffers and your company in the long run.
If youre in a position of power or influence, consider doing work to mitigate the increasing global inequities that are likely to come from widespread generative AI adoption.
Partner with firms in developing countries, work toward generative AI innovations that benefit people and the planet, and support multilingual solutions and data training that are globally unbiased.
In general, partnering with leaders in other countries and organizations will lead to better technology and outcomes for all.
Especially in the pursuit of AGI, be cautious about how you use generative AI and how these tools interact with your data and intellectual property. While generative AI has massive positive potential, the same can be said for its potential to do harm. Pay attention to how generative AI innovations are transpiring, and don't be afraid to hold AI companies accountable for a more responsible AI approach.
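As referenced in the first strategy above, here is a small, purely hypothetical sketch of what an internal AI usage policy might look like when captured as data so it can be checked programmatically; every tool name, role, and rule is invented for illustration:

```python
# Hypothetical AI usage policy captured as data so an application can check it.
# Every tool name, role, and rule below is illustrative, not a recommendation.
AI_USAGE_POLICY = {
    "approved_tools": ["ChatGPT (enterprise tier)", "Gemini", "internal-llm"],
    "allowed_tasks_by_role": {
        "marketing": ["draft_copy", "summarize_research"],
        "engineering": ["code_review_assist", "test_generation"],
        "support": ["draft_reply_with_human_review"],
    },
    "prohibited_inputs": ["customer PII", "unreleased financials", "code under NDA"],
    "review_requirements": {
        "customer_facing_content": "human approval required",
        "internal_drafts": "periodic spot checks",
    },
    "policy_review_cadence_months": 6,  # revisit as tools, priorities, and regulations change
}

def is_use_allowed(role: str, task: str) -> bool:
    """Check whether a role is cleared for a given generative AI task."""
    return task in AI_USAGE_POLICY["allowed_tasks_by_role"].get(role, [])

print(is_use_allowed("marketing", "draft_copy"))        # True
print(is_use_allowed("support", "code_review_assist"))  # False
```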
Generative AI has already proven its remarkable potential to reshape industries, economies, and societies, even more than initially thought. Research firms and technology companies are continually adjusting their predictions for the future of generative AI, realizing that the technology may be able to take on more of the physical taskwork and cognitive work that human workers do, and by a much earlier date than previously assumed.
But with this incredible technological development should come a heavy dose of caution and careful planning. Generative AI developers and users alike must consider the ethical implications of this technology and continue to do the work to keep it transparent, explainable, and aligned with public preferences and opinions for how this technology should be used. They must also consider some of the more far-reaching consequences, such as greater global disparities between the rich and the poor and more damage to the environment, and look for ways to create generative AI that truly does more good than harm.
So what's the best way forward toward a hopeful future for generative AI? Collaboration. AI leaders, users, and skeptics from all over the globe, different lines of work, and different areas of expertise must collaboratively navigate the challenges and opportunities presented by generative AI to ensure a future that benefits all.
For more information about generative AI providers, read our in-depth guide: Generative AI Companies: Top 20 Leaders
More here:
The Future of Generative AI: Trends, Challenges, & Breakthroughs - eWeek