Category Archives: Artificial Super Intelligence

Artificial Intelligence May Be Humanity’s Most Ingenious Invention … – Vanity Fair

We invented wheels and compasses and chocolate chip cookie dough ice cream and the Eames lounge chair and penicillin and E = mc² and beer that comes in six-packs and guns and dildos and the Pet Rock and Doggles (eyewear for dogs) and square watermelons. One small step for man. We came up with the Lindy Hop and musical toothbrushes and mustard gas and glow-in-the-dark Band-Aids and paper and the microscope and bacon (fucking bacon!) and Christmas. Ma-ma-se, ma-ma-sa, ma-ma-ko-ssa. We went to the bottom of the ocean and into orbit. We sucked energy from the sun and fertilizer from the air. Let there be light. We created the most amazing pink flamingo lawn ornaments that come in packs of two and only cost $9.99!

In a universe that stretches an estimated 93 billion light-years in diameter, with 700 quintillion (7 followed by 20 zeros) planets, here, on this tiny little blue dot we call Earth, one of us created a tool called a spork. The most astounding part is that while that same universe is an estimated 26.7 billion years old, we did everything in just under 6,000 years.

All of this in less than 200 generations of human life.

Now we've just created a new machine that is made of billions of microscopic transistors and aluminum and copper wires that zigzag and twist and turn and are interconnected in incomprehensible ways. A machine that is only a few centimeters in width and length.

A little tiny machine that may end up being the last invention humans ever create.

This all stems from an idea conceptualized in the 1940s and finally figured out a few years ago. That could solve all of the world's problems or destroy every single human on the planet in the snap of a finger, or both. Machines that will potentially answer all of our unanswerable questions: Are we alone in the universe? What is consciousness? Why are we here? Thinking machines that could cure cancer and allow us to live until we're 150 years old. Maybe even 200. Machines that, some estimate, could take over up to 30 percent of all jobs within the next decade, from stock traders to truck drivers to accountants and telemarketers, lawyers, bookkeepers, and all things creative: actors, writers, musicians, painters. Something that will go to war for us, and likely against us.

Artificial intelligence.

Thinking machines that are being built in a 50-square-mile speck of dirt we call Silicon Valley by a few hundred men (and a handful of women) who write in a language only they and computers can speak. And whether we understand what it is they are doing or not, we are largely left to the whims of their creation. We don't have a say in the ethics behind their invention. We don't have a say over whether it should even exist in the first place. "We're creating God," one AI engineer working on large language models (LLMs) recently told me. "We're creating conscious machines."

Already, we've seen creative AIs that can paint and draw in any style imaginable in mere seconds. LLMs can write stories in the style of Ernest Hemingway or Bugs Bunny or the King James Bible while you're drunk with peanut butter stuck in your mouth. Platforms that can construct haikus or help finish a novel or write a screenplay. We've got customizable porn, where you can pick a woman's breast size or sexual position in any setting, including with you. There's voice AI software that can take just a few seconds of anyone's voice and completely re-create an almost indistinguishable replica of them saying something new. There's AI that can re-create music by your favorite musician. Don't believe me? Go and listen to "Not Johnny Cash" singing "Barbie Girl," Freddie Mercury intoning "Thriller," or Frank Sinatra bellowing "Livin' on a Prayer" to see just how terrifying all of this is.

Then there's the new drug discovery. People using AI therapists instead of humans. Others are uploading voicemails from loved ones who have died so they can continue to interact with them by talking to an AI replica of a dead parent or child. There are AI dating apps (yes, you date an AI partner). It's being used for misinformation in politics already, creating deepfake videos and fake audio recordings. The US military is exploring using AI in warfare, and could eventually create autonomous killer robots. (Nothing to worry about here!) People are discussing using AI to create entirely new species of animals (yes, that's real) or viruses (also real). Or exploring human characteristics, such as creating a breed of super soldiers who are stronger and have less empathy, all through AI-based genetic engineering.

And we've adopted all of these technologies with staggering speed, most of which have been realized in just under six months.

"It excites me and worries me in equal proportions. The upsides for this are enormous; maybe these systems find cures for diseases, and solutions to problems like poverty and climate change, and those are enormous upsides," said David Chalmers, a professor of philosophy and neural science at NYU. "The downsides are humans that are displaced from leading the way, or in the worst case, extinguished entirely, [which] is terrifying." As one highly researched economist report circulated last month noted, "There is a more than 50-50 chance AI will wipe out all of humanity by the middle of the century." Max Tegmark, a physicist at the Massachusetts Institute of Technology, predicts a 50 percent chance of demise within the next 100 years. Others don't put our chances so low. In July, a group of researchers, including experts in nuclear war, bioweapons, AI, and extinction, and a group of superforecasters (general-purpose prognosticators) did their own math. The experts deduced that there was a 20 percent chance of a catastrophe by 2100 and a 6 percent chance of an extinction-like event from AI, while the superforecasters had a more positive augury of a 9 percent chance of catastrophe and only a 1 percent chance we'd be wiped off the planet.

It feels a little like picking the extinction lottery numbers, and even with a 1 percent chance, perhaps we should be asking ourselves if this new invention is worth the risk. Yet the question circulating around Silicon Valley isn't whether such a scenario is worth it, even with a 1 percent chance of annihilation, but rather whether it is really such a bad thing if we build a machine that changes human life as we know it.

Larry Page is not an intimidating-looking man. When he speaks, his voice is so soft and raspy from a vocal cord injury, it sounds like a campfire that is trying to tell you something. The last time I shook his hand, many, many years ago, it felt as soft as a bar of soap. While his industry peers, like Mark Zuckerberg and Elon Musk, are often performing public somersaults with pom-poms for attention, Page, who cofounded Google and is on the board of Alphabet, hasn't done a single public interview since 2015, when he was onstage at a conference. In 2018, when Page was called before the Senate Intelligence Committee to address Russian election meddling, online privacy, and political bias on tech platforms, his chair sat empty as senators grilled his counterparts.

While Page stays out of the limelight, he still enjoys attending dinners and waxing poetic about technology and philosophy. A few years ago a friend found himself seated next to Page at one such dinner, and he relayed a story to me. Page was talking about the progression of technology and how it was inevitable that humans would eventually create superintelligent machines, also known as artificial general intelligence (AGI): computers that are smarter than humans. In Page's view, once that happened, those machines would quickly find no use for us humans, and they would simply get rid of us.

"What do you mean, get rid of us?" my friend asked Page.

Like a sci-fi writer delivering a pitch for their new apocalyptic story idea, Page explained that these robots would become far superior to us very quickly, and if we were no longer needed on earth, then that's the natural order of things, and, I quote, "it's just the next step in evolution." At first my friend assumed Page was joking. "I'm serious," said Page. When my friend argued that this was a really fucked up way of thinking about the world, Page grew annoyed and accused him of being "specist."

Over the years, I've heard a few other people relay stories like this about Page. While being interviewed on Fox News earlier this year, Musk was one of them. He explained that he used to be close with Page but they no longer talked after a debate in which Page called Musk "specist" too. "My perception was that Larry was not taking AI safety seriously enough," Musk said. "He really seems to want digital superintelligence, basically digital God, if you will, as soon as possible."

All of the people LEADING THE DEVELOPMENT OF AI right now are COMPLETELY DISINGENUOUS in public.

Let's just stop for a moment and unpack this. Larry Page, the founder of one of the world's biggest companies, a company that employs thousands of engineers that are building artificial intelligence machines right now, as you read this, believes that AI will, and should, become so smart and so powerful and so formidable and, and, that one day it won't need us dumb pathetic little humans anymore, and it will, and it should, GET RID OF US!

See the article here:

Artificial Intelligence May Be Humanity's Most Ingenious Invention ... - Vanity Fair

Why The Human Touch Is Still Vital in AI Marketing – Entrepreneur

Opinions expressed by Entrepreneur contributors are their own.

There's no denying it: A fast-changing and often dizzying component of any digital marketing program is now artificial intelligence. In assessing this reality, I've found it possible, and empowering, to keep my traditional marketing boots laced up; I just equip them with AI-powered rocket boosters. Because, though breakthrough tech might be altering aspects of this work, let's be clear: the heart and soul of the marketing mission is still all human.

Imagine you've been working out in an old-school gym. Simple weights, classic routines and sweat. Now, imagine you're handed this super-cool sci-fi exoskeleton, and are suddenly bench-pressing buses and doing squats with elephants on your back. That's akin to the difference AI brought to my digital marketing workout: it supercharged everything, but (and this is key) it's still me deciding where and when to flex those abilities. The exoskeleton might add muscle, but the workout itself is still a personal effort.

Remember the days of content creation when it felt like playing darts in the dark? Sometimes you hit, sometimes you missed and sometimes you just heard a cat screech in the distance. Then AI strutted onto the scene, and with the right nudge and prompt, it was like turning on the lights: The board's the same, as are the darts, but you're suddenly hitting bullseyes more often than not.

Link:

Why The Human Touch Is Still Vital in AI Marketing - Entrepreneur

Making AI smarter with an artificial, multisensory integrated neuron – Science Daily

The feel of a cat's fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms the burning wood. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but Penn State researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron.

Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work on September 15 in Nature Communications.

"Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other," said Das, who also has joint appointments in electrical engineering and in materials science and engineering. "A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation."

For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed -- particularly when the inputs from both are faint.

"Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process," said Das, who is also affiliated with the Materials Research Institute. "The requirements for different sensors are based on the context -- in a dark forest, you'd rely more on listening than seeing, but we don't make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we're seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we're looking to combine sensors and mimic how our brains actually work."

The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.

"This is because visual memory can subsequently influence and aid the tactile responses for navigation," Sadaf said. "This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response."

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

It's the equivalent of seeing an "on" light on the stove and feeling heat coming off of a burner: seeing the light on doesn't necessarily mean the burner is hot yet, but a hand only needs to feel a nanosecond of heat before the body reacts and pulls the hand away from the potential danger. The input of light and heat triggered signals that induced the hand's response. In this case, the researchers measured the artificial neuron's version of this by observing the signaling outputs that resulted from visual and tactile input cues.

To simulate touch input, the tactile sensor used the triboelectric effect, in which two layers slide against one another to produce electricity, meaning the touch stimulus was encoded into electrical impulses. To simulate visual input, the researchers shined a light into the monolayer molybdenum disulfide photo memtransistor -- a transistor that can remember visual input, like how a person can hold onto the general layout of a room after a quick flash illuminates it.

They found that the sensory response of the neuron -- simulated as electrical output -- increased when both visual and tactile signals were weak.

"Interestingly, this effect resonates remarkably well with its biological counterpart -- a visual memory naturally enhances the sensitivity to tactile stimulus," said co-first author Najam U Sakib, a third-year doctoral student in engineering science and mechanics. "When cues are weak, you need to combine them to better understand the information, and that's what we saw in the results."

Das explained that an artificial multisensory neuron system could enhance sensor technology's efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

"The super additive summation of weak visual and tactile cues is the key accomplishment of our research," said co-author Andrew Pannone, a fourth-year doctoral student in engineering science and mechanics. "For this work, we only looked into two senses. We're working to identify the proper scenario to incorporate more senses and see what benefits they may offer."

Harikrishnan Ravichandran, a fourth-year doctoral student in engineering science and mechanics at Penn State, also co-authored this paper.

The Army Research Office and the National Science Foundation supported this work.

Read more:

Making AI smarter with an artificial, multisensory integrated neuron - Science Daily

When regulating artificial intelligence, we must place race and gender at the center of the debate – EL PAÍS USA

One of the most recent research projects organized by 32-year-old Brazilian anthropologist Fernanda K. Martins found that platforms such as Spotify recommend more male artists to users than women, regardless of the musical genre being searched for. This is what academics call algorithmic discrimination.

It seems logical that this anthropologist would take her research on gender and race to the internet, probably the most challenging universe of our time, given that Brazil is one of the most hyper-connected countries in the world.

Martins is the director of the Internet Lab: a respected interdisciplinary research center, which examines the space where law and the internet meet. She's also an active participant in the debate on artificial intelligence (AI) and its effects on society. The daughter of a Black and Indigenous woman and a white man, Martins was nine years old when drought and inequality led her entire family to emigrate to São Paulo. She lived in the Brasilândia favela until the family moved to the exclusive Jardins district, where her father works as a doorman.

She clearly remembers the moment when she discovered her Blackness. "It was the day I uttered the phrase 'you Blacks' and a teacher answered me: 'And what about you?'"

Question. Each Brazilian surfs the internet for an average of nine hours and 32 minutes each day, second only to South Africa and three hours above the world average. What drives this hyperconnectivity?

Answer. Vanity [and] image are very present in Brazilian popular culture. Due to our Indigenous ancestry, the body is very important; we are very used to affection. The internet plays a role [in supporting this] connection among historically marginalized populations.

We wanted to understand how Native Americans and Black people who got into colleges through quota systems (affirmative action) view the internet. They told us that it was important at a time when they were often the only [minorities] among a white majority. [In Brazil], we access the internet a lot, but in a very uneven and highly concentrated way. The providers offer free internet in Brazil for certain applications, such as WhatsApp. And, among the most vulnerable populations, there's the conviction that the internet solely consists of those free applications. There's no space for people to determine what kind of internet they want to build. For this reason, the more we strengthen the big platforms, the less space we find for innovation and creativity.

Q. How is your personal relationship with the online world? Constructive? Improvable? Do you set limits on screen time?

A. It's intense. Coming from a public school, I wasn't taught about computers in class; I was self-taught. I started surfing the web at the age of 10 or 11, on a computer my older brother bought. I had peers who didn't know how to plug a computer in or how to use it.

Do I set limits? I have taught myself how. When I take the dog for a walk, I go without my phone. The internet produces that feeling of always being accompanied, but it also aggravates loneliness. You're present, but you're not.

Q. Tell us more about your research on the algorithms that reinforce inequality.

A. The research was born in an interdisciplinary team, when the debate on algorithmic discrimination was very intense. Some say that the internet is a reflection of an unequal society. I think it goes further. I believe that the internet and technology produce other inequalities. Our research shows that, when you ask streaming music platforms for recommendations, women are less recommended than men, regardless of musical genre. And there the question arises about what's the social role of the tech platform to try to balance that out. We need the platforms to show that they're making every effort not to create and perpetuate inequalities.

We were able to analyze the gender [of users], but not the ethnic-racial profiles, because there's no data on either the artists or the users. Perhaps in parts of the Global North that isn't important, but in Brazil, it's crucial.
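To illustrate the kind of audit Martins describes, the sketch below tallies the gender share of one recommendation list; the artists and labels are invented for the example, and a real audit would repeat the tally across many genres and seed queries.

```python
from collections import Counter

# Hypothetical recommendation output from a streaming platform,
# annotated with artist gender (invented data for illustration).
recommendations = [
    ("Artist A", "male"), ("Artist B", "male"), ("Artist C", "female"),
    ("Artist D", "male"), ("Artist E", "male"), ("Artist F", "female"),
    ("Artist G", "male"), ("Artist H", "male"),
]

counts = Counter(gender for _, gender in recommendations)
total = len(recommendations)
for gender, n in counts.most_common():
    print(f"{gender}: {n}/{total} ({n / total:.0%})")
# Repeating this per genre shows whether the skew holds
# regardless of the musical genre searched for.
```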

Q. Brazil is considered to be a good laboratory for analyzing internet problems in general and social media in particular, but also for shedding light on possible solutions.

A. Brazil is interesting for several reasons. While we're a country from the Global South, we're so hyper-connected that most of the big tech companies have offices here. This allows us to build a dialogue with multiple actors involved in the debates and the research investigations, in an attempt to solve problems. Furthermore, we're going through a wave of growth in conservatism. In the last four years with [President Jair] Bolsonaro, we've seen the potential for disinformation, which isn't limited to the internet. And we now have a progressive government that is intensely considering how to regulate digital platforms. This won't solve all the problems, but maybe it can solve some.

Q. There are different models of regulating the internet and technology. This is a super broad and technical matter, but what do you think must be included in the Brazilian government's law? And what should be left out?

A. The main challenge when we think about the regulation of digital platforms, not only in Brazil but in our Latin American neighbors, is not to import the European model. We need to find out what our way is. Transparency, the possibility of auditing the data provided by the platforms, is essential to addressing disinformation, political violence, or hate speech. We mustn't lose sight of the fact that Brazil, like other Latin American countries, is a very fragile democracy. In Brazil, we need an autonomous body with financial independence to carry out these audits, without the risk of being hijacked by politicians, the public sector, or the private sector. Civil society and academia must participate in this debate. The ultimate goal is to offer a healthy ecosystem to Brazilians, where they can create policies and make connections. We must flee from extreme polarization and silence.

Q. You mention polarization and disinformation. Today, we have access to more information than ever before, but much of it is of very poor quality. How do you propose we deal with this? What should be the priority?

A. When we talk about disinformation, we cannot only think about social media platforms. We need to address the Brazilian media model. For instance, how the television channels are in the hands of a few families.

Q. Do you think it's possible to fight against misinformation when hate is more lucrative than sober and quality information?

A. Disinformation will continue to be a phenomenon that requires a search for solutions by different sectors and social actors. We must think about alternative, independent, local and regional media, about public policies that support Indigenous media, produced in the peripheries, by Blacks, by traditional communities. And then, there's education. People need to know how to check if a news story is fake or not, but they differ on what they consider to be a reliable source. For some, it's a YouTube channel. For others, it's a trusted person. We need [media] literacy across society in general, as well as a commitment to journalism. We have to think about broad pacts, because the problem isn't concentrated in a single actor. In the Bolsonaro government (2019-2022), many consensuses that we were building around human rights, women and Black people were weakened. Violent discourse and attacks were normalized. We need to believe again in a future built from new consensus. We must actively listen to historically marginalized populations, but the rest [of the population] must also look inward and ask: who were my ancestors?

Q. You mean that the historically favored should reflect on white privilege and masculine privilege, but don't they have an advantage merely for being who they are?

A. I wouldn't use the term privilege, because it causes many people to react negatively, trying to protect themselves. It's time that the anger, which has been important for the traditionally marginalized to harness, be shared a little. The people who benefited from the system should be angry with their past. We must build anti-racist people.

Women, Blacks and Indigenous people aren't going to be able to conquer their rights alone. We need a broad coalition, so that people understand that it's important that society deals with that anger. When we think about the regulation of artificial intelligence, social media platforms, or the remuneration of journalism, we must place race and gender at the center of the debate.

Q. Let's talk about gender. Female parliamentarians only make up 18% of the Congress of Brazil but, whether they're on the left or right of the political spectrum, they're the main target of online hate. Why is this?

A. When they get into politics, they occupy spaces where they were never present before. We see that anger translates into attacks on women, but not for what they do politically. Rather, it's for what they represent. When we compare the attacks on social media, we see that straight white men are questioned for their political positions, while women are critiqued for their hair, their clothes, their morals.

Q. In recent months, the Supreme Court of Brazil issued a series of rulings, which some consider to be controversial, against the disinformation that led to the coup attempt on January 8. Do you consider this action to be proportional to the risk?

A. We were very afraid about what could happen to Brazilian democracy. The Supreme Court had a strong presence in the elections and, on January 8, in the effort to protect democratic institutions. The problem is the drift towards personalism.

Q. Are you referring to the role of Judge Alexandre de Moraes, who has been accused of imposing censorship by certain individuals?

A. Yes. But how do we protect Brazilian democratic institutions without this being considered a problem? We need all the powers and every Brazilian to assume their responsibility.


See the article here:

When regulating artificial intelligence, we must place race and gender at the center of the debate - EL PAÍS USA

Eight things we learned from the Elon Musk biography – The Guardian

Widespread access to the world's richest man allowed biographer Walter Isaacson to detail a number of illuminating anecdotes

Tue 12 Sep 2023 08.26 EDT

A new biography of Elon Musk was published on Tuesday and contains colourful details of the life of the world's richest man.

Musk afforded widespread access to his biographer, Walter Isaacson, the author of the bestselling biography of the Apple co-founder Steve Jobs, and the book contains a series of illuminating anecdotes about Musk. Here are eight things we learned from the book.

Musk, 52, was born and raised in South Africa and endured a fraught relationship with his father, Errol, an engineer. Isaacson writes that Errol "bedevils" Elon.

Musk's brother, Kimbal, says the worst memory of his life was watching Errol berate Musk after he was hospitalised following a fight at school (the book says Musk was still getting corrective surgery for the injuries decades later). "My father just lost it," says Kimbal.

Musk and Kimbal, who are estranged from their father, describe Errol as a volatile fabulist. Interviewed by Isaacson, Errol admits he encouraged a physical and emotional toughness in his sons.

Grimes, the artist who is mother to three of his 10 children, says PTSD from Musk's childhood shaped an aversion to contentment: "I just don't think he knows how to savor success and smell the flowers." Musk tells Isaacson he agrees: "Adversity shaped me. My pain threshold became very high."

Shortly before taking over Twitter, or X as it is now called, Musk told Isaacson that the "woke mind virus", a derogatory term for progressive politics and culture, would prevent extraplanetary settlement (one of Musk's fixations).

"Unless the woke mind virus, which is fundamentally anti-science, anti-merit, and anti-human in general, is stopped, civilization will never become multiplanetary," said Musk.

Musk fired Twitter's executive team as soon as he completed the takeover of Twitter in October last year, and it had been coming. When Musk bought a significant stake in Twitter months before, he agreed to meet the CEO, Parag Agrawal. After the meeting, Musk said: "What Twitter needs is a fire-breathing dragon and Parag is not that."

They soon fell out. Agrawal texted Musk to say his tweet asking if Twitter was dying was not helpful. Musk, on a break in Hawaii, replied: "What did you get done this week?" He added: "I'm not joining the board. This is a waste of time. Will make an offer to take Twitter private."

This was during discussions about Musk joining the board. Agrawal's reply underlined the power imbalance, and Twitter's fear of Musk. He texted: "Can we talk?" Musk soon lodged an official bid for Twitter, which he tried unsuccessfully to wriggle out of, but the die was cast for Agrawal and his colleagues.

The founder and CEO of the fallen cryptocurrency exchange FTX, Sam Bankman-Fried, offered via his banker to put $5bn (£4.1bn) into the Twitter takeover, the book claims. Bankman-Fried also wanted to discuss putting Twitter on a blockchain, the technological underpinning for cryptocurrencies such as bitcoin.

A subsequent call between Musk and Bankman-Fried in May 2022 went badly, Isaacson wrote. "My bullshit detector went off like red alert on a Geiger counter," Musk is quoted as saying.

Bankman-Fried's offer to invest, or to roll over $100m of Twitter stock that he claimed he had invested, came to nothing.

In his early tycoon career, Musk pondered recruiting the then mayor of New York as a political fixer to help him turn his PayPal business into a bank in 2001. Musk sought a meeting with Giuliani, then coming to the end of his tenure in office, because he wanted to turn PayPal, an online payments company, into a social network that would disrupt the whole banking industry.

In 2001, Musk and an investor, Michael Moritz, went to New York to see if they could hire Giuliani to guide them through the process of turning PayPal into a bank. It didnt go well.

"It was like walking into a mob scene," Moritz says in the book. "Giuliani was surrounded by goonish confidantes. He didn't have any idea whatsoever about Silicon Valley, but he and his henchmen were eager to line their pockets."

"This guy occupies a different planet," Musk told Moritz.

One of Musk's reasons for founding a new artificial intelligence company, xAI, is addressing the threat of population collapse. In one face-to-face conversation with Isaacson, the multi-billionaire said human intelligence was in danger of being surmounted by digital intelligence.

The amount of human intelligence, he noted, was levelling off because people were not having enough children. Meanwhile, the amount of computer intelligence was going up exponentially, like Moore's law on steroids. At some point, biological brainpower would be dwarfed by digital brainpower.

This conversation was conducted at the Austin, Texas, house of Shivon Zilis, an executive at Musk's Neuralink business who is the mother of two of his children. Zilis told Isaacson she agreed to have children with Musk via IVF after listening to his arguments about having children as a kind of social duty. "He really wants smart people to have kids, so he encouraged me to," she said.

He tells Isaacson that human consciousness is under threat from the prospect of super-intelligent, and uncontrollable, AI systems.

Musk says: "What can be done to make AI safe? I keep wrestling with that. What actions can we take to minimize AI danger and assure that human consciousness survives?"

Musk's satellite communications unit, Starlink, has a key role in Ukraine's defence against the Russian invasion. When a Russian cyber-attack crippled Ukraine's satellite comms network an hour before the invasion, Musk stepped in following an appeal for help from Ukrainian officials and the country's deputy prime minister.

However, the book alleges that Musk told his engineers to turn off Starlink coverage that would have facilitated an attack by drone submarines on Russia's navy at the Sevastopol base in Crimea.

However, Isaacson has subsequently clarified this excerpt after Musk used his X platform to state that there was no Starlink coverage in that area and he refused a Ukrainian request to activate it. Musk posted: "If I had agreed to their request, then SpaceX would be explicitly complicit in a major act of war and conflict escalation."

Elon Musk by Walter Isaacson is published by Simon & Schuster.


See the original post:

Eight things we learned from the Elon Musk biography - The Guardian

Conversations in Collaboration: Cognigy’s Phillip Heltewig on … – No Jitter

Welcome to No Jitter's ongoing series, Conversations in Collaboration, in which we speak with executives and thought leaders about the key trends across the full stack of enterprise communications technologies.

For this Conversation, NJ spoke with Phillip Heltewig, CEO and Co-Founder of Cognigy. Heltewig is a German-Australian entrepreneur with profound technology experience working for companies in Europe, APAC and North America. Since 2016, Heltewig has been co-founder and CEO of Cognigy, revolutionizing the enterprise customer and employee experience through Conversational AI.

(Editor's Note: Cognigy is pronounced KOG-NUH-JEE; the first g is hard, the second g is soft.)

(Editor's Note: Artificial intelligence has a lot of specific, descriptive terms. Download our handy guide to AI vocabulary here.)

No Jitter (NJ): Can we start with a brief synopsis of Cognigy, the orchestration layer your company has developed and how it fits into the enterprise market?

Phillip Heltewig: I founded Cognigy with my cofounders, Sascha and Benjamin; we've been in business since 2016. When we founded the company, we wanted to build an end user device, a speaking teddy bear, because we thought speech technology was coming to a level where it really had a solid level of understanding of what humans say. So, we [decided to] build a teddy bear that kids would want to talk, play and have fun with.

We then looked at the kind of AI system we should use. We were going to use IBM Watson which [at the time] was [advertised as] the most advanced system ever invented. When we tried it, it was a very sobering experience [it was] limited, we [had to] put in keywords, etc. And, you had to code everything on your own.

That's when we [decided] this is not what we were looking for. We needed a system that non-technical users or slightly technical users could use to build conversations that the bear would have with children. We needed a graphical toolset for experts in those domains to build those kinds of conversation flows. That didn't exist.

So, we built it [ourselves]. And then we showed it around to the business community. Everyone was excited, but not about the bear -- about the system itself. That system [became] Cognigy. So, we put the bear on the shelf and started some projects with the system itself.

We come from Germany, a very manufacturing-heavy country, so we started to voice-enable home devices like a smart cooking device. That was huge. We also did projects in virtual reality because you needed to be able to talk with virtual reality characters because you can't type, of course.

All those projects were cool and fun, but only one type of project was really business relevant and that was a customer service automation [we worked on].

Let's say you're Nike -- you can survive if a virtual reality advisor doesn't work. But you cannot survive if the chatbot in your contact center doesn't work. So, we started to very narrowly focus on customer service and ever since, we've been doing that with a strong focus on the enterprise.

Almost all our customers are large enterprises such as Lufthansa, Frontier Airlines and Allianz Insurance. These are multi-billion-dollar companies with large customer service volumes, so [they] can get essentially immediate ROI by deploying technology like ours which can lower the pressure on the human-led contact centers.

One of the reasons for our success is that we are never going to develop a feature unless we can field test it with a customer. We have a group of about 12 customers that have deployed Cognigy in a very big fashion; we bring them together and we present them with our idea or our feature and ask, "What do you think?" In that way, our platform contains very relevant features and not just ones that maybe people will never use.

NJ: I see this term conversational AI used all over the place. What does it mean relative to generative AI and what Cognigy does?

Heltewig: So, anything AI is a marketing term, but what people usually understand as being conversational AI is a mixture of the following components.

First you have connectors to the channels that customers use to converse with you: a WhatsApp connector, a Web chat connector, a phone connector. They receive the input from the customer and kind of normalize it; if it's audio, it turns it into text, etc.

The second component is the natural language understanding [NLU]; that's the AI component of conversational AI. It understands two types of data. One is intents; the other is entities.

(Editor's Note: Back in 2019, Omdia analyst Brent Kelly wrote an eight-part series that focused on building intelligent bots. Part of that discussion involved intents and entities.)

So, if I say something like "I need to change my flight, ABC 123," then the intent might be flight change, and the booking code [the entity] might be ABC 123. That's what traditional NLU does.

You provide the system [with all that] up front when you build it. You provide it with a number of intents, anything between, say, [10 to 50], and for each intent, you provide a number of example sentences for how a customer could [express intents]. So, in this case: "I want to change my flight," or "My flight is late, and I need to change something."

If the customer says, "I need to change the flight for me and my dad," then the AI algorithm that sits underneath still knows that it's a flight change even though it wasn't provided as an example [intent].

This uses true machine learning technology already under the hood to identify the intent and to extract the entities. We take this information and pass it to, in our case, what we call a flow. It's like the graphical representation.
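(Editor's Note: To make the intent-and-entity idea concrete, here is a minimal, hypothetical Python sketch, not Cognigy's actual NLU. Each intent carries example sentences, a toy word-overlap scorer picks the closest intent, and a regular expression pulls out the booking code; real systems use learned models rather than word overlap.)

```python
import re

# Toy intent definitions: each intent lists example sentences,
# as described above. (Invented examples for illustration.)
INTENTS = {
    "flight_change": [
        "i want to change my flight",
        "my flight is late and i need to change something",
    ],
    "baggage_question": [
        "how many bags can i bring",
        "what is the baggage allowance",
    ],
}

def words(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def classify(utterance: str) -> str:
    """Pick the intent whose example sentences share the most words
    with the utterance -- a stand-in for a learned classifier."""
    u = words(utterance)
    return max(INTENTS, key=lambda name: max(len(u & words(e)) for e in INTENTS[name]))

def extract_booking_code(utterance: str):
    """Entity extraction: find a booking code shaped like 'ABC 123'."""
    m = re.search(r"\b[A-Z]{3}\s?\d{3}\b", utterance)
    return m.group(0) if m else None

text = "I need to change the flight for me and my dad, ABC 123"
print(classify(text))              # flight_change
print(extract_booking_code(text))  # ABC 123
```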

Now that I know the intent, what do I do? This piece in conversational AI is deterministic, meaning that it's the same for you and for me. There might be templates where it says "Matt" or "Phil," but the actual flow of the conversation is deterministic, which is very important [for] enterprises. If you are in a regulated industry, for example, you want to know what comes out, or you want to have some kind of control. This is important to keep in mind for the generative discussion afterwards.

In these flows, the outputs [are provided back], and then there are other components like analytics, but these three are essentially the main components: [connectors, NLU/intents/entities, and the flow].
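(Editor's Note: The deterministic flow Heltewig describes can be pictured as a small, hand-written state machine. The sketch below is a hypothetical illustration, not Cognigy's flow engine: every customer with the same intent and inputs walks the same branches, which is what makes the behavior predictable and auditable.)

```python
def flight_change_flow(booking_code, booking_exists) -> str:
    """Deterministic flow: the same inputs always produce the same path.
    `booking_exists` stands in for an API call to a booking database."""
    if booking_code is None:
        return "What is your booking code?"
    if not booking_exists(booking_code):
        return f"I could not find booking {booking_code}. Please check the code."
    return f"Found booking {booking_code}. What date would you like to fly instead?"

# Stubbed back-end lookup for the example.
known_bookings = {"ABC 123"}
print(flight_change_flow(None, known_bookings.__contains__))
print(flight_change_flow("XYZ 999", known_bookings.__contains__))
print(flight_change_flow("ABC 123", known_bookings.__contains__))
```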

Generative AI essentially combines two of these components into one: the understanding of the language and the generation of the language. In conversational AI, the understanding was the NLU, and the generation was the flow. But in generative AI, the understanding and the generation are hardcoded into one component.

This raises a lot of interesting questions because in a conversation flow, I can [specify]: say this afterwards, then make an API request against my booking database, check if the booking code exists, [and] if it doesn't exist, say something else. I can do all this easily in the flow. Telling the generative AI to do the same is currently an unsolved challenge which the industry is starting to solve.

NJ: So how is Cognigy approaching generative AI?

Heltewig: The way that we're thinking about generative AI is slightly different.

You can use generative AI in essentially three ways. One is augmenting the editor [used to create] traditional conversational AI. [Gen AI] is useful to create intents because you don't have to write 10 example sentences anymore. You just write one and then it generates 10 for you. It can [also] create flows relatively quickly [which] is nice.

Second, [you can] augment the agent experience in the contact center by using generative AI for agent-facing tasks. One of the biggest things our customers tell us is when an end customer has a conversation with a bot that goes on for five minutes and [that] conversation is handed over to an agent, that agent has to read through the whole transcript. Nobody has time for that. You can use generative AI to summarize the transcript [into] one paragraph.

You can also listen into the call or chat and provide suggested replies, but you always have the human in the middle.
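(Editor's Note: A sketch of the transcript-summarization pattern Heltewig describes, using the OpenAI Python SDK as one concrete backend. Cognigy integrates with several providers, as discussed later in this Conversation; the model choice and prompt wording here are illustrative assumptions, not Cognigy's implementation.)

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_transcript(transcript: str) -> str:
    """Condense a bot-customer transcript into one paragraph
    for the human agent taking over the conversation."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # summarization rarely needs the largest model
        messages=[
            {"role": "system",
             "content": "Summarize this support transcript in one short "
                        "paragraph: the customer's goal, what the bot "
                        "already did, and what is still unresolved."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```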

The third one is the Holy Grail everyone's working on: having generative AI power direct-to-customer conversations. In that there are also three components.

So, let's say you go to work in the Cognigy contact center. On your first day your manager says, welcome, here is your computer; you need basic knowledge about Cognigy in order to answer anything: read these five PDFs, read this website, etc. [This is the first component: knowledge.]

Then the manager says you can also help customers with these things: reissuing an invoice, changing an address, etc. For those, [the agent gets] standard operating procedures: To change a customer address, log into Salesforce, find the record. Enter it, reconfirm with a customer and, if it's fine, click save. This is what we call a transaction, [it's the second component].

[The last component is] orchestration. You as a human can do that already: you know when to change the address in Salesforce depending on what I've said, and when to reissue an invoice. You're not going to reissue an invoice if I ask you to change my address. It doesn't make sense. Or maybe, after you change the address, because you're a human, you decide to reissue the invoice. That's the orchestration piece.

(Editor's note: This is similar to what Google Cloud's Behzadi said in his Conversation, i.e., you don't have to teach a new agent common sense. Note, too, that during his keynote at Enterprise Connect 2023, Behzadi demonstrated an entirely generative AI-powered bot speaking directly to a customer, an example of the Holy Grail quest Heltewig mentioned.)

NJ Interlude: Incorporating Generative AI into a Conversation Flow

At this point in the conversation, Heltewig launched a demonstration of a product Cognigy has in the works. In the first part of the demo, Heltewig showed how those three components, knowledge, transactions, and orchestration, all work together within a traditional conversational AI flow within the context of an example flight booking.

As the customer, Heltewig asked the demo product a question: "I want to book a flight." Based on that, the NLU demo figured out the intent (flight booking) and it then asked if the customer had an existing reservation. The response was no, so the bot then asked for the flight number. This basic exchange is familiar to anyone who's interacted with one of these bots.

Next, Heltewig demonstrated how the system might act if some of those components were replaced with generative AI. First, he asked the bot if he could bring his dog on the plane. It couldn't answer.

So, he switched demo models to one that had been grounded on the airline's policies. "Grounded" simply means that it's been told to read various documents, much like that agent's first day on the job.

Heltewig asked the same question: "Can I bring my dog on the plane?" To which the bot answered: "Yes, you can bring your dog on the plane on long haul flights as emotional support animals are still permitted."

"This is an exact answer to my question," Heltewig said, and then he asked the model, "Can I bring my Chihuahua on the plane?"

The bot responded with: "Yes, you can bring your Chihuahua on the plane; emotional support animals are allowed." This, too, is a correct response, and it shows how the generative AI model can figure out that a Chihuahua is a dog without explicit programming.

Overall, Heltewig characterized these exchanges as showing how generative AI can be used "to extract the answer from the knowledge base and then to formulate a perfect answer to the exact question that I asked."

No Jitter then asked Heltewig to pose an absurd question to the model: "Can I bring my giraffe on the plane?"

The response was "I can't find the answer," which Heltewig characterized as correct because the model didn't provide a wrong answer, nor did it make one up, which is typically called a hallucination (and it is perhaps a mistake to so anthropomorphize an algorithm, as Eric Krapf wrote).

Heltewig continued, saying, "the way this iteration of the model works is that it's performing a search and then extracting the answer using a large language model prompt. Basically, it's a chatbot that uses a knowledge base to answer questions. This takes the knowledge [component] and [makes it] so much better than what it is right now. In the past when you found an article, you would just output it." That would then require the agent/customer to read through the article for the correct answer.
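(Editor's Note: A minimal sketch of the search-then-extract pattern Heltewig just described: retrieve the most relevant policy snippet, then hand it to the LLM prompt as grounding, with an instruction that licenses the "I can't find the answer" behavior seen in the giraffe exchange. The retrieval here is naive word overlap, and the policy snippets are invented stand-ins; production systems use embeddings or a search index.)

```python
# Hypothetical knowledge base: snippets from the airline's policy documents.
KNOWLEDGE = [
    "Emotional support animals, including dogs, are permitted on long-haul flights.",
    "Checked baggage may weigh up to 23 kg per piece in economy class.",
    "Flight changes are free of charge up to 24 hours before departure.",
]

def retrieve(question: str) -> str:
    """Naive retrieval: the snippet sharing the most words with the question."""
    q = set(question.lower().split())
    return max(KNOWLEDGE, key=lambda s: len(q & set(s.lower().split())))

def build_prompt(question: str) -> str:
    """Ground the LLM on the retrieved snippet and forbid invented answers."""
    return (
        "Answer the question using ONLY the policy below. If the policy "
        "does not contain the answer, say you cannot find the answer.\n"
        f"Policy: {retrieve(question)}\n"
        f"Question: {question}"
    )

print(build_prompt("Can I bring my dog on the plane?"))
```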

But what Heltewig finds even more interesting is what he, and Cognigy, call transactions. To illustrate that, he launched another Cognigy research preview, which is "how we believe bots will be built in the future."

NJ Interlude: Cognigy and the Grail Quest

Again, this is Heltewig narrating the bot interactions as he demoed the research preview. NJ has edited the exchange to make it easier to follow.

"First, you create a persona for the bot," Heltewig said. The demo involved Coventry Clinic, a fictitious health care provider from Düsseldorf, and it has a receptionist named Bot who is professional, patient and can engage in some trivia and small talk. But, Heltewig said, the bot could also be told to only stick to the task at hand: the transaction.

And that's the key part: specifying what transactions it can conduct. This one can look up an appointment to check if the customer is a patient. It can then make an appointment. To do that, it needs to collect information: first name, last name, mobile number, etc. "That's essentially all you do [to create the bot]. And then you can start talking to it," Heltewig said.

Bot: Hello, this is Bot from Coventry Clinic. How can I help you today?

Patient: I was wondering if I could make an appointment.

Bot: Yes, of course. Can I have your first and last name please?

Patient: My first name is Phil. I'm not going to tell you my last name.

Bot: I need your last name to make an appointment. Can you provide it please?

"Many old school systems would have said 'Thank you, Mr. I'm Not Going To Tell You,'" Heltewig said. "This [new system] is what we all envisioned conversational AI to be like, kind of like the Star Trek computer, where you have a completely freeform conversation."

He continued: "As far as a research preview [goes], it still has some shortcomings. You can get it off track, and you can tell it information which it then believes is true even though it's not. But [this demonstration hints at] how we are going to be building these kinds of conversational AI systems in the future. And I hope this shows the difference between the rather static and deterministic, but still very useful, systems; they are processing tens of millions of conversations a month. But these generative AI systems don't just use AI to understand. They use AI, as the name implies, to understand and generate the outputs."

Back to the Conversation

NJ: So maybe [the generative AI system] knows trivia, but it also knows your private medical records. The trivia is in one database or maybe requires an Internet search, while the other is in a secure database. How is that managed to keep things secure? And to keep it from getting out, so to speak?

Heltewig: Let me start answering that by first mentioning the LLM [large language model] providers who are absolutely leading the market. Those are OpenAI, Microsoft with Azure OpenAI, Anthropic, Google and a handful of others. There are also open-source models that you can deploy yourself, but they don't come close in performance. [If] we're talking on a scale of one to 100, and let's say GPT-4 is a 95, then the open-source models are a 10. You wouldn't use those.

What we've done is use the commercially available models, which is what we've also done with translation providers, feature text providers, etc.

We also allow our customers to choose; we integrate with Azure, with OpenAI, Google and Anthropic, and there's a couple of others coming that we'll also be integrating with. That means your data leaves our ecosystem, goes somewhere else and comes back.

What leaves our ecosystem is the prompt: not your patient record, but whatever you put into the prompt. I'll give you an example. When we make this request, we send what's called grounding knowledge. This is true knowledge; so for this clinic, let's say we have five doctors and 12 nurses. [Heltewig then entered this true fact into the demo research preview. Then, as the patient, he asked it a question.]

Patient: How many doctors do you have?

Bot: We have five doctors and 12 nurses.

Patient: Actually, I think you have 33 nurses.

Bot: I apologize for the confusion. According to our records, we have five doctors and 12 nurses.

It assumes that the number of doctors and nurses I told it is true. If I had a patient record, do I want the bot to be able to talk about that? If I did, then I would inject it into this prompt dynamically before it's sent.

But [in a real deployment], you wouldn't hard-code it into the prompt. It would come from a database. The key is when we inject [information] into the prompt.

For example, the customer says their credit card number is 123. Before it goes to the LLM, you might want to redact the credit card number [if only] because the LLM can't talk to a back-end system anyways.

(Editor's Note: LLMs can only talk to back-end systems if they are enabled to do so via an API call and/or integration.)

So, we can dynamically compose what goes to the LLM, like the patient record or credit card number. If we choose, we can also process what comes out afterwards. That is the orchestration piece, and that is why you need a conversational AI platform to make use of generative AI, because otherwise you have no control of what goes in and what goes out. That's why generative AI in itself is fun to play with, but it's not usable in an enterprise context.
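(Editor's Note: A sketch of the orchestration step Heltewig describes: composing the prompt from grounding facts plus dynamically injected data, and redacting sensitive values before anything leaves the platform. The grounding text, record handling and regex are illustrative assumptions, not Cognigy's implementation.)

```python
import re

def redact(text: str) -> str:
    """Mask card-number-like digit runs before text is sent to an LLM."""
    return re.sub(r"\b(?:\d[ -]?){12,19}\b", "[REDACTED CARD]", text)

def compose_prompt(grounding: str, patient_record, user_turn: str) -> str:
    """Inject only the data this bot is allowed to discuss."""
    parts = [f"Facts you may rely on: {grounding}"]
    if patient_record is not None:  # injected dynamically, per conversation
        parts.append(f"Patient record: {patient_record}")
    parts.append(f"Customer says: {redact(user_turn)}")
    return "\n".join(parts)

print(compose_prompt(
    grounding="The clinic has five doctors and 12 nurses.",
    patient_record=None,
    user_turn="My card number is 4111 1111 1111 1111. Can I book?",
))
```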

NJ: Some of the buzzwords or terms we keep hearing are "guardrails in place" and "respect business processes." Is that what we're talking about here?

Heltewig: With generative AI algorithms, with what we saw here, we didn't really know what it was going to answer. There are no true guardrails, as on a bridge so that you don't fall; there are a lot of holes in the guardrails still. That's not on our side; that's on Microsoft and OpenAI. Sam Altman said: don't use these algorithms in production because they can hallucinate things. So, we're still at that stage.

Which is why, when I mentioned the three use cases for generative AI, as in helping [conversational flow] editors, helping agents and helping the customer [directly], the first two are relatively risk-free, because there is a human in the loop who can filter out the hallucination. That's why I believe a lot of companies are saying, "yeah, we've put guardrails around this," [but I believe] that it's just not true.

NJ: What would your advice be to an enterprise IT staffer who's tasked with evaluating these systems and potentially putting them into production?

Heltewig: I'll mention something that I think is being completely disregarded by everyone right now: cost. The cost of these models can be tremendous. One GPT-4 query can, depending on the amount of tokens and context size, easily cost 10 cents or more. That is essentially what a whole conversation commonly costs now. If you do five GPT-4 queries in a conversation, and that's just for the generative AI, that's 50 cents. That's crazy.

There's a big cost difference in these models. GPT-3.5 costs 20 times less than that. Actually, let's quickly verify that. GPT-4 input costs three cents for 1,000 tokens, and then for outputs, it costs six cents for 1,000 tokens. So, if you have 1,000 tokens going in and out, that's 9 cents total.

With GPT-3.5, we have [0.15 and 0.2 cents per 1,000 tokens]. If I'm correct in calculating this, this is one twenty-fifth of the price; here it's even one thirtieth of the price. So, the GPT-3.5 model is dramatically cheaper than the GPT-4 model. And, of course, the capabilities differ.
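(Editor's Note: A quick check of Heltewig's arithmetic, using the per-1,000-token prices quoted above. Prices change frequently; verify against the provider's current price list before relying on them.)

```python
# USD per 1,000 tokens, as quoted in the conversation above.
PRICES = {
    "gpt-4": {"input": 0.03, "output": 0.06},
    "gpt-3.5-turbo": {"input": 0.0015, "output": 0.002},
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars of a single query."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

for model in PRICES:
    print(f"{model}: ${query_cost(model, 1000, 1000):.4f} per 1,000-in/1,000-out query")
# gpt-4: $0.0900; gpt-3.5-turbo: $0.0035 -- roughly 26 times cheaper,
# in line with the one-twenty-fifth figure above.
```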

[Think of it] like this: I have a double master's degree in business and computer science, and if you want to ask me, what is 10 plus 5, using my time to get an answer will cost you $50. Or you can ask my seven-year-old son and he only costs 10 cents or an ice cream.

The point being, you do not require the super intelligent model for many of the tasks that generative AI can do. Summarization in GPT-3.5 is really good; you don't need GPT-4 [for that]. You really need to think about which model you want to use, because the cost difference is so stark.

Excerpt from:

Conversations in Collaboration: Cognigy's Phillip Heltewig on ... - No Jitter

5 things about AI you may have missed today: Zomato launches AI chatbot, Israel's AI-powered plane and more – HT Tech

AI Roundup: Zomato launched its own AI chatbot today that will assist users in placing orders, while Google DeepMind co-founder emphasized that the US should only allow those buyers access to Nvidia's AI chips who agree to use this technology ethically. In a separate development, the Israeli Defense Ministry unveiled a surveillance plane equipped with AI-powered sensors.

All this, and more, in today's AI roundup.

Keeping up with the latest trends, Zomato announced the launch of its AI chatbot on Friday. The chatbot, called Zomato AI, will assist users in placing orders. In a blog post, Zomato announced that one of the chatbot's standout features is its multiple-agent framework, which empowers it with a variety of prompts for different tasks. Zomato AI will initially be exclusively rolled out to Zomato Gold members.

In a move that is expected to strengthen its defense capabilities, the Israeli Defense Ministry unveiled a new surveillance aircraft that is equipped with AI-powered sensors. The Israel Aerospace Industries (IAI) installed C4I, a high-tech and secure communication system, along with sensors, to a Gulfstream G550 jet. According to a report by Fox News Digital, Brig. Gen. Yaniv Rotem, head of military research and development in the DDR&D of the Ministry of Defense, said, "The use of Artificial Intelligence (AI) technology will enable an efficient and automated data processing system, which will produce actionable intelligence in real-time, enhancing the effectiveness of IDF operational activities."

With controversies surrounding AI and its regulation, Mustafa Suleyman, the co-founder of Google DeepMind, said that the US should grant access to Nvidia's AI chips only to buyers who agree to use the technology ethically. Speaking to the Financial Times on Friday, Suleyman said, "The US should mandate that any consumer of Nvidia chips signs up to at least the voluntary commitments and more likely, more than that."

Chinese technology companies such as Alibaba and Huawei are seeking approval from the Cyberspace Administration of China (CAC) for deepfake models, according to a list published by the cyberspace regulator on Friday. As per a Reuters report, the companies are seeking approval for adherence to the regulations on deepfakes that the CAC set in December.

While AI is expected to play a major role in crime fighting, especially when combined with facial recognition technology, UK police say that some of their officers, known as super-recognizers, are even better than AI because they never forget a face. As per an AFP report, Tina Wallace, a surveillance expert with Thames Valley Police, noted that only one percent of the population has this ability to remember faces. These officers are now being deployed outside nightclubs to identify sexual assault perpetrators.

The rest is here:

5 things about AI you may have missed today: Zomato launches AI chatbot, Israel's AI-powered plane and more - HT Tech

One think tank vs. ‘god-like’ AI – POLITICO

With help from Derek Robertson


A few short years ago, Daniel Colson was taking a startup investment from OpenAI founder Sam Altman and rubbing shoulders with other AI pioneers in the Bay Area tech scene.

Now, the tech entrepreneur is launching a think tank aimed at recruiting Washington's policymakers to stop his one-time funder. Colson views it this way: the top scientists at the biggest AI firms believe they can make artificial intelligence a billion times more powerful than today's most advanced models, creating something like a god within five years.

His proposal to stop them: Prevent AI firms from acquiring the vast supplies of hardware they would need to build super-advanced AI systems by making it illegal to build computing clusters above a certain processing power. Because of the scale of computing systems needed to produce a super-intelligent AI, Colson argues such endeavors would be easy for governments to monitor and regulate.

"I see that science experiment as being too dangerous to run," he said.

As Washington's policy scene reorients toward AI, Colson, 30, is the latest arrival who sees cosmic stakes in the looming fights over the technology. But his Artificial Intelligence Policy Institute is looking to start with a humbler contribution to the emerging policy landscape: polling.

Last week, AIPI released its first poll, based on a thousand respondents, finding that 72 percent of American voters support measures to slow the advance of AI.

Lamenting a lack of quality public polling on AI policy, Colson said he believes that such polls have the potential to shift the narrative in favor of decisive government action ahead of looming legislative fights.

To do that, Colson has enlisted a roster of tech entrepreneurs and policy wonks.

"AI safety is just massively under-surveyed," said Sam Hammond, an AI safety researcher listed among AIPI's advisors.

Colson is also getting advice from one advisor who goes unmentioned on AIPI's website. Progressive pollster Sean McElwee, an expert in using polling to shape public opinion who is best known for his relationships with the Biden White House and Sam Bankman-Fried, is advising Colson behind the scenes.

A spokesman for Colson, Sam Raskin, described McElwee as one of many advisers. McElwee, who was reportedly ousted last year from the left-wing polling firm Data for Progress in part over his Bankman-Fried ties, did not respond to a request for comment.

As AI safety proponents confront the technology's rapid advance, Colson has been participating in calls convened in recent months by Rethink Priorities, a nonprofit launched in 2018, to formulate a policy response among like-minded researchers and activists. Rethink Priorities is associated with Effective Altruism, a utilitarian philosophy that is widespread in the tech world.

Though many Effective Altruists also worry about AI's potential existential risks, Colson distances himself from the movement.

He traces his misgivings to his attendance at an Effective Altruism gathering at the University of Oxford in 2016, where Google DeepMind CEO Demis Hassabis gave a talk assuring attendees the company considered AI safety a top priority.

"All of the [Effective Altruists] in the audience were extremely excited and started clapping," Colson recalled. "I remember thinking, 'Man, I think he just co-opted our movement.'"

(A spokeswoman for DeepMind said Hassabis "has always been vocal about how seriously Google DeepMind takes the safe and responsible deployment of artificial intelligence.")

A year later, Colson co-founded Reserve, a stablecoin-focused crypto startup that landed investments from Altman and Peter Thiel. He found himself running in the same circles as many of the people who were then laying the foundations for the current AI boom.

But Colson said that his experience as a Bay Area tech founder left him with the conviction that AI scientists' vision for advancing the technology is unsafe. OpenAI did not respond to a request for comment.

Colson also concluded that Effective Altruists' vision for containing AI is too focused on technological fixes while ignoring the potential for government regulation to ensure public safety.

That motivated the launch of AIPI, he said. The group's funding has come from a handful of individual donors in the tech and finance worlds, but Colson declined to name them.

In addition to more polling, AIPI is planning to publish an analysis of AI policy proposals this fall. Colson said he views the next 18 months as the best window for passing effective legislation.

Because of the industrial scale of computing needed to achieve the ambitions of AI firms, he argues that computing clusters are a natural bottleneck at which to focus regulation. He estimates the measure could forestall the arrival of computer super-intelligence by about 20 years.

Congress, he suggested, could cap AI models at 10 to the 25th flops, a count of the floating-point operations used to train a model. (By comparison, GPT-2, which was state of the art in 2019, was trained with 10 to the 21st flops, Colson said.) Or better yet, he said, set the cap five orders of magnitude lower, at 10 to the 20th flops. "That's what I would choose."
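For scale, here is a quick back-of-the-envelope comparison of those thresholds, using only the figures Colson cites; the numbers are his, not official measurements.

```python
# Orders-of-magnitude comparison of the training-compute caps Colson
# floats. "Flops" here means the total floating-point operations used
# to train a model; all figures come from the quotes above.

CAP_HIGH = 1e25  # the cap he suggests Congress could set
CAP_LOW  = 1e20  # the stricter cap he says he would choose
GPT2     = 1e21  # GPT-2's 2019 training run, per Colson

print(CAP_HIGH / GPT2)  # 10000.0 -> 10,000x GPT-2's training compute
print(CAP_LOW / GPT2)   # 0.1     -> an order of magnitude below GPT-2
```

In other words, the stricter cap he favors would sit below what was already state of the art in 2019.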


With its population shrinking and workforce transforming, Japan is counting on AI to help its society remain dynamic and innovative.

Michael Rosen, an analyst at the libertarian-leaning American Enterprise Institute, reported in a blog post this morning on his recent trip to the nation, where he interviewed experts in both the private and public sectors about what AI can do for Japan's rapidly aging society. For example: a chief at SoftBank Robotics boasted to Rosen of the company's efforts to combine AI brains with robotic bodies, which could help with the country's rapidly aging janitorial corps.

Yasuo Kuniyoshi, a University of Tokyo AI researcher, argued to Rosen that robots "sharing a similar body is a very important basis for empathy," and described how his research explores "the very early proto-moral sense of humanity" that AI invokes. The ethical considerations that raises in the actual deployment of such human-like AI tools demand a policy response, as Rosen notes; the people he spoke to in Japan were broadly supportive of a government-driven approach, even as they largely disregarded the doomsday mindset of some Western anti-AI advocates. – Derek Robertson

As today's digital architects build their new platforms, they're usually pretty vocal about not repeating the mistakes of yesterday, especially when it comes to the unintended harms that social media platforms like Facebook might have caused.

But maybe they need not worry so much. A wide-ranging, in-depth new study published last week in the peer-reviewed journal Royal Society Open Science finds no evidence suggesting that the global penetration of social media is associated with widespread psychological harm.

To create a sample size of nearly a million individuals over 11 years in 72 countries, authors Matti Vuorre and Andrew K. Przybylski tracked Facebook usage using data from the company and matched it with the Gallup World Poll's data on well-being. They conclude that "it is not obvious or necessary that their wide adoption has influenced psychological well-being, for better or for worse."

However, they do note that their results might not generalize across different platforms like Snapchat or TikTok, and that to move past description (the goal of this study) to prediction or evidence-based intervention, "independent scientists and online platforms will need to collaborate in new, transparent ways." – Derek Robertson


See the rest here:

One think tank vs. 'god-like' AI - POLITICO

Black Women Researchers Highlight Dangers of Artificial Intelligence – Yahoo News

Women of color are warning about the potential dangers of artificial intelligence. It's a concern they have been sharing for years, and one that is now being highlighted by Rolling Stone.

Although AI has helped further several technological advancements, it has also come to carry biases and harmful consequences, specifically as it interacts with information regarding marginalized groups. Rolling Stone profiled several trailblazing women in the AI space, including two Black women: Timnit Gebru and Joy Buolamwini. Both have extensive experience working in tech and worked on early iterations of what we now know as artificial intelligence software. They have also been calling for proper regulation of AI, pointing to how its inherent biases are affecting marginalized communities as well as the rest of the world.

Gebru published a paper on the matter during AI's earlier days.

"The training data has been shown to have problematic characteristics resulting in models that encode stereotypical and derogatory associations along gender, race, ethnicity, and disability status," the paper reads. "White supremacist and misogynistic, ageist, etc., views are overrepresented in the training data, not only exceeding their prevalence in the general population but also setting up models trained on these datasets to further amplify biases and harms."

Artificial intelligence is permeating every facet of modern life. It is being used for educational purposes, in medical institutions, and even in the most minute interactions on social media apps. These women are advocating for regulations to ensure AI software is used responsibly and to bring nuance to the services it aids.

As Rolling Stone concluded in its report: "There are a few things they all want us to know: AI is not magic. LLMs are not sentient beings, and they won't become sentient. And the problems with these technologies aren't abstractions; they're here now and we need to take them seriously today."

AI researcher Joy Buolamwini added that people's lives are at stake.

"But not because of some super intelligent system," Buolamwini said, "but because of an over-reliance on technical systems. I want people to understand that the harms are real and that they're present."


Go here to see the original:

Black Women Researchers Highlight Dangers of Artificial Intelligence - Yahoo News

The Rise of AI | ‘Risks and challenges’: Educators eye new artificial … – TribDem.com

JOHNSTOWN, Pa. – Richland High School Principal Timothy Regan tries to keep up with emerging technologies so that he knows what his students are using and can prepare them for life outside the classroom.

That approach has taken Regan into the ever-expanding world of artificial intelligence, or AI, which is on the rise as companies jockey for the top spot with new programs that do everything from writing emails to generating term papers.

"My focus is really driven toward how to use this to have a better educational experience for our students," Regan said.

The use of AI in education is evolving at every level of academia. Educators and administrators are seeking ways to determine the programs' potential use in the classroom and whether the technology could be detrimental to students' work, possibly leading to plagiarism and other forms of cheating.

The U.S. Department of Education's AI focus has largely been on how it allows students and educators to have new forms of interactions, enhances feedback loops, and makes teachers' jobs easier.

That doesn't mean there aren't concerns about the use of AI in learning.

There have been numerous reports about AI programs potentially replacing teachers, as well as about data privacy issues, fears of unwanted or unsuspected bias, and the consequences of inaccurate or fake information.

The Federal Trade Commission has opened an investigation into ChatGPT, a prominent AI application, to determine if the tool has harmed people by generating incorrect details about them.

A report from the Department of Education's Office of Educational Technology, "Artificial Intelligence and the Future of Teaching and Learning," highlights concerns about greater student surveillance, about discrimination arising from algorithmic bias (for example, a voice-recognition system that doesn't work well with regional dialects), and about achievement gaps widening because the software could speed delivery of information for some students and slow it for others.

Regan pointed out that AI has existed for years.

He said that services including Grammarly, which provides writing assistance, are a good example of existing learning software. Regan said he's toyed with programs such as Magic Eraser, for image alteration, and Tetra, which takes notes during virtual meetings.

"All these things are a way to make people more efficient," Regan said. That could apply to teachers creating lesson plans or scoring tests, he said, leading to more time to focus on students' school experience.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) cautions that rapid technology developments could lead to "multiple risks and challenges, which have so far outpaced policy debates and regulatory frameworks."

That's Mount Aloysius College assistant professor Danny Anderson's main concern.

Although he finds AI fun and fascinating, the professor at the Cresson college expressed reservations about the arms race of companies trying to best each other with newer products to catch cheaters.

Mark DiMauro, a University of Pittsburgh at Johnstown humanities professor, also questioned the information gathered by these systems. He used the phrase "garbage in, garbage out," meaning that if the source material used by AI programs isn't quality, they won't produce quality content.

"I do think we run the risk of over-trusting," DiMauro said.

He added that he will now be putting more emphasis on students needing to double- or triple-check research to make sure that sources back each other up.

Anderson said he isn't concerned about the technology's recent influx in education.

He thinks it's obvious when a student uses a program such as ChatGPT to complete an assignment. The work often lacks a personal touch and the writer's voice, Anderson said.

Still, Anderson said, AI may cause him to rethink some of his assignments to be absolutely sure that students' work is original.

Schools are still determining disciplinary measures for dealing with AI use in assignments.

Regan pointed out that kids who have wanted to cheat have found ways to do so since school began, and that there are tools available to check their work. He noted the service Turnitin, which checks for plagiarism and now includes an AI indicator.

Other options could include having students demonstrate their comprehension of the work in person instead of writing papers or taking tests, he said.

On the policy side, Regan added that he and his administrative team are examining options that may require AI citations and address consequences for misuse.

David Haschak, vice president for academic affairs at Mount Aloysius College, said numerous conversations on the subject are underway at the administrative level.

Starting this fall, the college added a section to its academic integrity policy that addresses unauthorized artificial intelligence use. UNESCO reported that fewer than 10% of the 450 universities it surveyed had formal guidance on AI as of this summer.

"In some classrooms, it may be a tool," Haschak said. "In others, they want all original work."

For Gavin Moore, a junior at Bucknell University in Lewisburg, Union County, AI is useful in coding for personal projects or school, but he notes that its limitations quickly become apparent.

"They aren't miracle tools that do everything for you. You still need to do all the heavy lifting yourself," he said. "With something like ChatGPT, it can be like having a very elaborate search engine/online assistant at your disposal. The biggest thing it does is make it easier to resolve issues in code or provide insight into small problems I might have with a given topic I need to understand."

Mount Aloysius is incorporating the conversation on artificial intelligence into student orientation in order to engage parents in seeing how their children may be using such programs.

At Susquehanna University in Selinsgrove, Snyder County, no official guidance has been issued. The university is scheduled to host a series of workshops ahead of the start of classes on Aug. 28.

"It's just an overview about how AI systems work, limitations, and how and when, as a university, we need to develop a university-wide policy," said Nabeel Siddiqui, assistant professor of digital media and director of Susquehanna University's Center for Teaching and Learning. "Do faculty need to determine a policy in their classrooms now? As a faculty, there are some that have concerns and some that are excited."

Richland High School world history and character and leadership teacher Jacob St. Clair and DiMauro share a similar approach on the matter.

St. Clair said he has experimented with some programs, including ChatGPT and image-generating software, as well as face-mapping, and considers AI to be another tool in the toolbox of teachers and students.

"It's like a calculator," St. Clair said.

DiMauro said he has heard a lot of doom and gloom about the technology but doesn't believe any of it. He argued that students writing with chatbots may be good because that'll allow them time to focus on other endeavors, such as research or other classes.

"There's just so many fascinating things you can do with this thing if applied properly," DiMauro said.

DiMauro said the cross-section of AI and education is in a weird place.

He said he understands that people can be put off by the technology.

He doesn't think people have been taught to use the tools correctly; programs such as ChatGPT aren't super-Google, he noted.

"I do absolutely think once people get their heads around it, they'll be more open and willing," DiMauro said. "Educate yourself about it and it will suddenly not seem as dangerous as before and you'll start seeing the possibilities."

St. Clair cited the example of bringing still pictures to life or creating mini-movies with historic paintings to add a new dimension to education.

Looking ahead, he said, he thinks the technology will help teach students critical thinking and problem-solving skills as well as connect them to the subjects theyre studying.

DiMauro said he is hoping we end up in a situation where AI is commonplace in classrooms and where there are ways to use the available tools to perform helpful tasks, including checking sources and learning to write.

To avoid issues moving forward, the federal Department of Education recommends emphasizing "humans in the loop" in AI implementation, informing and involving educators in the conversation, and enhancing trust and safety, among other suggestions to build the tech into the future of learning.

"We envision a technology-enhanced future more like an electric bike and less like robot vacuums," the agency website says.

"On an electric bike, the human is fully aware and fully in control, but their burden is less and their effort is multiplied by a complementary technological enhancement."

Excerpt from:

The Rise of AI | 'Risks and challenges': Educators eye new artificial ... - TribDem.com