Heated massages, AI counselling and films on the go: Will … – Euronews

LG presented a vision of what autonomous vehicles (AVs) could be like in the future - and it's all about having more "me time" on the move.

It's been a stressful day at work, so you decide to linger in the car and take a breath before getting out and facing the task of preparing dinner or tackling the household chores.

You recline your seat, listening to the soothing sounds of nature while the seat gives you a heated massage. Or maybe you opt for counselling from the onboard artificial intelligence (AI) to wind down and clear your head after a hectic day.

Compared with your current daily commute sitting in stop-start traffic, the concept might seem light years from reality. However, it is just one vision, proposed by South Korean electronics giant LG, of what autonomous driving could look like.

The technology behind AVs is currently geared towards the mechanics of getting the car to move and navigate independently, while the onboard experience of passengers is, for now at least, relegated to a secondary talking point.

LG, on the other hand, is now actively turning its focus to the sensory elements of being inside the autonomous cars of the future, believing the perspective should shift to the opportunities that AVs will give to improve the driving experience.

"There have been a lot of discussions about future mobility in terms of physical transformation and the role of the car. However, despite many discussions, it is still unclear how the changes will exactly happen," William Cho, the companys CEO, said this week at IAA Mobility - one of the worlds largest trade fairs of its kind - in Munich.

"As we all know, the mobility industry is evolving dramatically, changing our traditional beliefs on cars. Our in-depth customer research directed us to see mobility through the lens of customer experience, focusing on expanding space in the car and quality of time spent on the road".

The company's idea? To redefine the car from a means of travel to a "personalised digital cave" for its occupant.

To date, billions have been invested in developing the technology to produce robot vehicles controlled and piloted by AI-powered computer systems, but prototypes so far all require human inputs.

AVs in the US are subject to standards set by SAE International, formerly known as the Society of Automotive Engineers, with level 0 being no automation and level 5 being the highest rating, full vehicle autonomy in all conditions and locations.

Tesla's driver-assistance system Autopilot, for example, which offers partial automation, is classified at level 2. The US carmaker's basic Autopilot, which is available in all models, offers lane centring and assisted steering, while more advanced systems, like Enhanced Autopilot and Full Self-Driving Capability, have functions to help park, stop and summon the vehicle.

Earlier this summer, Mercedes-Benz announced its Drive Pilot system had been given level 3 approval, attaining the highest SAE rating for a commercial vehicle to date.

Unlike level 2, cars classified as level 3 can handle most driving situations, but a driver must still be ready to intervene and take over to avoid potentially dangerous incidents.
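
For readers keeping score of these ratings, here is a minimal sketch of the SAE ladder described above; the one-line descriptions are paraphrases for illustration, not the standard's official wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased for illustration."""
    NO_AUTOMATION = 0           # the human performs the entire driving task
    DRIVER_ASSISTANCE = 1       # steering OR speed support (e.g. adaptive cruise)
    PARTIAL_AUTOMATION = 2      # steering AND speed support; driver supervises (e.g. Autopilot)
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions; driver takes over on request
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating area (robotaxis)
    FULL_AUTOMATION = 5         # drives anywhere, in all conditions

def driver_must_supervise(level: SAELevel) -> bool:
    """At levels 0-2 the human remains responsible for monitoring the road."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True (Autopilot-style systems)
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False (geofenced robotaxis)
```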

Last month, Cruise, an arm of US automaker General Motors, was granted a licence in California - along with Alphabet-owned company Waymo - to expand its existing fleet of self-driving taxis in San Francisco and operate on a 24/7 basis.

Unlike commercial vehicles, these taxis operate at level 4 - in other words, near-complete autonomy. They are programmed to drive in a preset area - known as geofencing - in which they negotiate their environment through a combination of cameras, sensors, machine learning algorithms and AI, determining their location, gathering real-time information on pedestrians and traffic, and predicting how each is likely to behave.

If a difficult circumstance arises, a human operator is able to step in remotely to guide or stop the vehicle.
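
To make the geofencing idea concrete, here is a minimal sketch of the kind of boundary check a level 4 fleet might run before accepting a trip. The ray-casting test is a standard computational-geometry routine; the coordinates and function names are invented for the example, not taken from any operator's software:

```python
# Ray-casting point-in-polygon test: is a pickup point inside the
# service area (the "geofence")? Coordinates are illustrative only.
def inside_geofence(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A toy square service area; real geofences are far more detailed polygons.
service_area = [(0.0, 0.0), (0.0, 10.0), (10.0, 10.0), (10.0, 0.0)]
print(inside_geofence((5.0, 5.0), service_area))   # True  -> accept the trip
print(inside_geofence((12.0, 5.0), service_area))  # False -> outside the geofence
```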

And difficulties do arise. Just 10 days after it was granted its latest licence, Cruise was asked to reduce its fleet following a series of accidents, including a collision with a fire engine.

According to data from the US National Highway Traffic Safety Administration (NHTSA), self-driving Tesla vehicles have also been involved in 736 crashes in the US since 2019, resulting in 17 known fatalities.

Despite the rollout of services like Cruise and Tesla's Autopilot, and the major investment in research, development and testing by the automotive industry, it's unlikely a level 5 vehicle will be on the market anytime soon.

Cho believes, however, that electrification will only accelerate the shift to autonomous driving.

"Today's mobility is shifting towards software-defined vehicles [SDVs]. This means social mobility will transform into highly sophisticated electronic devices and can be seen as one of moving space to provide new experiences," he said.

LG's vision for these mobile experiences is theoretical for now, but the company plans to design and produce technologies for future AVs based on three core themes collectively known as "Alpha-able": Transformable, Explorable and Relaxable.

For the first, LG predicts that cars will become personalised digital caves: spaces that can easily adapt to suit different purposes and occasions. The car could be a restaurant to dine in with your partner, a home office on wheels where you can make business deals in private, or even a cinema on wheels where you can recline and watch a film.

For the second theme, LG is aiming to incorporate augmented reality (AR) and advanced AI to improve interactions with the vehicle, whether this be voice assistants that recommend content based on the duration of the determined route to your destination or interactive windscreens made from OLED displays that show information about your location and journey.
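
As a hypothetical illustration of that duration-aware recommendation idea, the sketch below filters a content catalogue against the predicted trip time; the catalogue, the slack parameter and the function name are all invented for the example:

```python
def recommend_for_trip(catalogue, trip_minutes, slack=5):
    """Return titles whose runtime fits the remaining trip, longest first.

    `slack` lets a recommendation run slightly past the predicted arrival
    time. All values here are illustrative, not from any real system.
    """
    fits = [item for item in catalogue if item["minutes"] <= trip_minutes + slack]
    return sorted(fits, key=lambda item: item["minutes"], reverse=True)

library = [
    {"title": "Short documentary", "minutes": 25},
    {"title": "Sitcom episode", "minutes": 22},
    {"title": "Feature film", "minutes": 118},
]
for item in recommend_for_trip(library, trip_minutes=30):
    print(item["title"], item["minutes"])
# Short documentary 25
# Sitcom episode 22
```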

And of course, the driving experience should be relaxing, with sensory stimuli such as films, massages and meditative music delivered through the car's infotainment system.

While level 5 AVs are yet to materialise, LG says it is already at work on the necessary technology to achieve its three-pronged objectives, including opening a new factory in Hungary in a joint venture with Magna International to produce e-powertrains, the power source of EVs.

"We strongly believe future mobility should focus on the mission to deliver another level of customer experience. LG, with innovative mobility solutions, is more than committed to this important mission," Cho said.

More here:

Heated massages, AI counselling and films on the go: Will ... - Euronews

This Week in AI: Deepfakes, Vertical Horizons and Smarter Assistance – PYMNTS.com

Is it Moore's Law, or more's law?

Anyone keeping an eye on the generative artificial intelligence (AI) landscape could be forgiven for confusing the two.

This, as another week has gone by, and with it another hyper-rapid clip of advances in the commercialization of generative AI solutions, and even a new executive order from California Governor Gavin Newsom around the need for regulation of the innovative technology.

Were it any other technology, the rapid pace of change we are seeing within AI would require at least a year or more to make it to market.

Already, after China became the first major market economy last month to pass regulations policing AI, the nation's biggest tech firms debuted their own adherent products just weeks later.

And as generative AI technology continues to add more fuel to its rocket-ship trajectory, these are the stories and moonshots that PYMNTS has been tracking.

Generative AI can generate, well, anything. And while the possibilities are endless, they also run the gamut from widely positive and productively impactful, to dangerous and geared toward criminal goals. After all, genAI is a tool, and in the absence of a firm regulatory framework, the utility of a tool depends entirely on the hand that wields it.

That's why Google has announced a new policy mandating that advertisers for the upcoming U.S. election disclose when the ads they wish to display across any of Google's platforms (excluding YouTube) have been manipulated or created using AI.

Meta Platforms, the parent company of Instagram and Facebook, and X, formerly known as Twitter, both of which have faced allegations of spreading political misinformation, have not yet announced any specific rules around AI-generated ad content.

Complicating matters somewhat is the fact that PYMNTS Intelligence has found there doesn't yet exist a truly foolproof method to detect and expose AI-generated content.

"One of the questions that is immediately raised [around AI] is how do you draw the line between human-generated and AI-generated content," John Villasenor, professor of electrical engineering, law, public policy and management at UCLA and faculty co-director of the UCLA Institute for Technology, Law and Policy, explained to PYMNTS on Friday (Sept. 8).

And as generative AI tools are increasingly leveraged by bad actors to fool ID authorization protocols and scam unsuspecting consumers, it is becoming incumbent on organizations to upgrade their own defenses with AI capabilities. Phishing attacks alone have seen a 150% increase year over year as a result of new AI-driven techniques.

The technology is already proving to be both a blessing and a hindrance for payments security, and as reported here on Tuesday (Sept. 5), payments firm ThetaRay recently raised $57 million to boost its AI-powered financial crime detection capabilities.

While the artificial element of AI has its darker side, it is the intelligence aspect of the technology that enterprises and platforms want to capitalize on and integrate.

Apple is reportedly spending millions of dollars a day building out its generative AI capabilities across several product teams, including its voice assistant Siri, and there exists an attractive white-space opportunity for AI to make today's smart assistants a whole lot smarter.

Chipmaker Qualcomm is working with Meta to make that company's Llama 2-based AI implementations available on smartphones and PCs, and Qualcomm's CEO said on Tuesday (Sept. 5) he sees AI as potentially reviving the smartphone market, where global sales are at their lowest levels in a decade.

Elsewhere, video communications company Zoom announced that it is making its own generative AI assistant free to paid users, while the buzzy, well-funded AI startup Anthropic on Thursday (Sept. 7) introduced a paid plan for the Pro version of its AI assistant, Claude.

Not to be outdone, customer experience management platform Sprinklr has integrated its AI platform with Google Cloud's Vertex AI in order to let retail companies elevate contact center efficiency with generative AI capabilities that support service agents.

This, while Casey's General Stores announced on Wednesday (Sept. 6) that the convenience retailer is turning to conversational voice AI ordering technology in an ongoing push to gain share from quick-service restaurants (QSRs).

IBM also announced on Thursday (Sept. 7) that it is releasing new enhancements to its AI platform, watsonx, and giving developers a preview next week at the company's TechXchange event in Las Vegas.

And IBM isn't the only tech company hosting a developer conference. Generative AI pioneer OpenAI announced Wednesday (Sept. 6) that its first DevDay developer conference will take place this November.

Generative AI is also getting utilized for specialized purposes.

CFOs are increasingly tapping the tool to help optimize their working capital and treasury management approaches, while consumer brand data platform Alloy.ai on Thursday (Sept. 7) announced the addition of new predictive AI features to its own forecasting and supply chain solution.

And over in the healthcare sector, the industry is reportedly allocating over a tenth of its annual spend (10.5%) to AI and machine learning innovations.

As for what the health industry hopes to achieve with this investment? Hopefully the cure to its zettabyte-sized data fragmentation problems.

Continue reading here:

This Week in AI: Deepfakes, Vertical Horizons and Smarter Assistance - PYMNTS.com

Dr Ben Goertzel – A.I. Wars: Google Fights Back Against OpenAI’s … – London Real

2023 may well be the year we look back on as a time of significant change with regards to the exponential growth of artificial intelligence. AI platforms and tools are starting to have a major impact on our daily lives and virtually every conceivable industry is starting to sit up and take notice.

While the doom-mongers might be concerned that these superintelligent machines pose a genuine threat to humanity, and concern grows about our future here on planet Earth, many experts point to the enormous potential and benefits such sophisticated technology could have on the world.

Just recently, the co-founder of Google DeepMind, Mustafa Suleyman, said that he believes that within the next five years everybody is going to have their own AI-powered personal assistant as the technology becomes cheaper and more widespread.

On the other hand, "Godfather of AI" Geoffrey Hinton quit his job at Google because he's concerned about the rate of improvement in AI development and what this means for society as a whole.

One thing is for certain: the world we live in is going to change drastically in the coming years. Thankfully, I'm able to call upon one of the smartest human beings I know, someone who is not only at the forefront of this shift, but who also cares deeply about the ethical, political and social ramifications of AI development, and is focussed on the goal of creating benevolent AI systems.

Dr Ben Goertzel is a cross-disciplinary scientist, futurist, author and entrepreneur, who has spent the best part of his working life focused on creating benevolent superhuman artificial general intelligence (AGI).

In fact, Ben has been credited with popularising the term AGI in our mainstream thinking and has published a dozen scientific books, 150 technical papers, and numerous journal articles, making him one of the world's foremost experts in this rapidly expanding field.

In 2017, Ben founded SingularityNET with the goal of creating a decentralised, democratic, inclusive and beneficial AGI; the platform has since become the world's leading decentralised AI marketplace.

At SingularityNET, Ben's goal is to create an AGI that is not dependent on any central entity, is accessible to anyone, and is not restricted to the narrow goals of a single corporation or even a single country.

The platform is an open and decentralised network of AI services on a blockchain, where developers publish services that can be used by anyone with an internet connection.

SingularityNET's latest project is Zarqa, a supercharged, intelligent, neural-symbolic large language model on a massive scale that promises to not only take on OpenAI's ChatGPT, but go much, much further.

Such advanced neural-symbolic AI techniques will revolutionise and disrupt every industry, taking a giant step towards AGI.

"I've come to the conclusion that to make decentralised AGI really work, we have to launch something that's way smarter than ChatGPT, and launch that on a decentralised infrastructure."

In a broader sense, Ben does of course concede that there are risks in building machines that are capable of learning anything and everything, including how to reprogram themselves to become an order of magnitude more intelligent than any human.

"I think the implications of superintelligence are huge and hard to foresee. It's like asking nomads living in early human tribes what civilisation is going to be like. They could foresee a few aspects of it, but to some extent, you just have to discover it when you get there."

Moreover, Ben highlights a more pressing concern: the risk that selfish people and big business will use AI to exert their own greed and control over other people. It's a fascinating conundrum, and there is so much to consider, something that Ben has spent more time than most thinking about.

Ben truly believes that our focus should be on building AI systems that make the world a more compassionate, more just, and more sustainable place right now and moving forward into the future.

I really enjoy sitting down for these chats with Ben; there's so much to learn, and if you're interested in the technology that is shaping our future world or looking for an investment opportunity, then make sure to tune in. The economic potential of AI is huge, and over the next decade it is expected to generate multiple trillions of dollars.

"I'm optimistic about the potential for beneficial AGI, and decentralisation is important because centralisation of control tends to bring with it some narrow motivational system separate from the ethics of what's best for everyone."

View original post here:

Dr Ben Goertzel - A.I. Wars: Google Fights Back Against OpenAI's ... - London Real

Artificial Intelligence’s Use and Rapid Growth Highlight Its … – Government Accountability Office

The rise of artificial intelligence has created growing excitement and much debate about its potential to revolutionize entire industries. At its best, AI could improve medical diagnosis, identify potential national security threats more quickly, and solve crimes. But there are also significant concerns in areas including education, intellectual property, and privacy.

Today's WatchBlog post looks at our recent work on how generative AI systems (for example, ChatGPT and Bard) and other forms of AI have the potential to provide new capabilities, but require responsible oversight.

The promise and perils of current AI use

Our recent work has looked at three major areas of AI advancement.

Generative AI systems can create text (apps like ChatGPT and Bard, for example), images, audio, video, and other content when prompted by a user. These growing capabilities could be used in a variety of fields such as education, government, law, and entertainment. As of early 2023, some emerging generative AI systems had reached more than 100 million users. Advanced chatbots, virtual assistants, and language translation tools are examples of generative AI systems in widespread use. As news headlines indicate, this technology continues to gain global attention for its benefits. But there are concerns too, such as how it could be used to replicate work from authors and artists, generate code for more effective cyberattacks, and even help produce new chemical warfare compounds, among other things. Our recent Spotlight on Generative AI takes a deeper look at how this technology works.

Machine learning is a second application of AI growing in use. This technology is being used in fields that require advanced imagery analysis, from medical diagnostics to military intelligence. In a report last year, we looked at how machine learning was used to assist the medical diagnostic process. It can be used to identify hidden or complex patterns in data, detect diseases earlier, and improve treatments. We found that benefits include more consistent analysis of medical data and increased access to care, particularly for underserved populations. However, our work also identified limitations and bias in the data used to develop AI tools, which can reduce their safety and effectiveness and contribute to inequalities for certain patient populations.

Facial recognition is another type of AI technology that has shown both promise and peril in its use. Law enforcement agencies at the federal, state, and local levels have used facial recognition technology to support criminal investigations and video surveillance. It is also used at ports of entry to match travelers to their passports. While this technology can be used to identify potential criminals more quickly, or those who may not have been identified without it, our work has also found some concerns with its use. Despite improvements, inaccuracies and bias in some facial recognition systems could result in more frequent misidentification for certain demographics. There are also concerns about whether the technology violates individuals' personal privacy.

Ensuring accountability and mitigating the risks of AI use

As AI use continues its rapid expansion, how can we mitigate the risks and ensure these systems are working appropriately for all?

Appropriate oversight will be critical to ensuring AI technologies remain effective and keep our data safeguarded. We developed an AI Accountability Framework to help Congress address the complexities, risks, and societal consequences of emerging AI technologies. Our framework lays out key practices to help ensure accountability and responsible AI use by federal agencies and other entities involved in the design, development, deployment, and continuous monitoring of AI systems. It is built around four principles (governance, data, performance, and monitoring) which provide structures and processes to manage, operate, and oversee the implementation of AI systems.
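
As one way to picture how an organization might turn those four principles into a working self-audit, here is a minimal sketch; the checklist questions are loose paraphrases of the framework's themes, not GAO's own text:

```python
# A toy self-audit built around the framework's four principles:
# governance, data, performance, and monitoring. Questions are paraphrases.
CHECKLIST = {
    "governance": ["Are roles and accountability for the AI system defined?"],
    "data": ["Is the training data documented, representative, and checked for bias?"],
    "performance": ["Are accuracy targets set and validated against requirements?"],
    "monitoring": ["Are the deployed model's drift and error rates tracked over time?"],
}

def audit(answers):
    """Flag any principle whose questions were not all answered 'yes' (True)."""
    return [principle for principle, questions in CHECKLIST.items()
            if not all(answers.get(q, False) for q in questions)]

# Example: everything passes except ongoing monitoring.
answers = {q: True for qs in CHECKLIST.values() for q in qs}
answers["Are the deployed model's drift and error rates tracked over time?"] = False
print(audit(answers))  # ['monitoring']
```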

AI technologies have enormous potential for good, but much of their power comes from their ability to outperform human abilities and comprehension. From commercial products to strategic competition among world powers, AI is poised to have a dramatic influence on both daily life and global events. This makes accountability critical to its application, and the framework can be employed to ensure that humans run the system, not the other way around.

Read the original post:
Artificial Intelligence's Use and Rapid Growth Highlight Its ... - Government Accountability Office

Nnaji Harps on Artificial Intelligence in 4th Industrial Revolution – THEWILL NEWS MEDIA

September 10, (THEWILL) - One of Africa's foremost scientists, Professor Bart Nnaji, has advised Nigerians to embrace artificial intelligence (AI) on an industrial scale in order to join the 4th Industrial Revolution now sweeping across the globe.

Professor Nnaji, a former Minister of Science, made the appeal at the fifth convocation ceremonies of Michael and Cecilia Ibru University at Agbara-Otor, near Ughelli, in Delta State, where he also received an honorary doctorate in science.

"AI has come to stay," he asserted before a large audience comprising academics and researchers from other universities, as well as business executives, philanthropists, and community leaders, including the founder of the university, Mrs. Cecelia Ibru, and its vice-chancellor, Professor Ibiyinka Fuwape.

"AI holds the key to our participation in the Fourth Industrial Revolution, driven by Big Data, the Internet of Things, etc."

"We lost the First Revolution, which is the Agricultural Revolution; the Second, which is the Industrial Revolution; and the Third, which is the Digital Revolution."

Nnaji said that AI has become ubiquitous, especially with generative AI, which enables machines (that is, digital systems) to do things faster, cheaper and better through repetitive tasks and, in the process, achieve greater autonomy.

This means that they perform tasks without human control or human input, and this process keeps on improving rapidly.

He said that, unlike previous revolutions in history, Nigeria does not require a massive resource infusion before leapfrogging into the 4th Industrial Revolution.

"The computer and the Internet have made things much cheaper, faster, and shorter, as a person can stay in the comforts of his or her home and still be in touch with cutting-edge technology, including AI," he declared.

While expressing delight that an increasing number of Nigerians are embracing AI, the erstwhile power minister advised the Nigerian government to immediately take concrete steps to make the country a significant AI participant, calling the United States, the United Kingdom, China, South Korea, the European Union, and India the frontline AI developers.

"The Ministry of Communication and Creativity should be treated as a frontline development ministry," he argued, adding that the Nigerian Communications Commission and the National Office for the Acquisition of Technology should receive priority status.

He counselled the Federal Government to drastically reduce tariffs on certain information technology equipment or even abolish them.

He also called for intensive training of IT specialists in both academic and professional institutions in Nigeria and abroad.

He added: "Let us borrow a leaf from India, which prioritised Science, Technology, Engineering, and Mathematics (STEM) and has consequently excelled in medical tourism, manufacturing, food security, and moon and sun exploration."

Nnaji, however, pointed out some of the dangers associated with AI, including job losses and deepfakes.

Read the original post:
Nnaji Harps on Artificial Intelligence in 4th Industrial Revolution - THEWILL NEWS MEDIA

Fiction and films about artificial intelligence tackle the nature of love – Vox.com

When Spike Jonze's Her came out in 2013, I thought of it mostly as an allegory. It was set in a candy-colored dystopian future, one in which people murmur into wireless earbuds on the subway and rely on artificial intelligence engines to keep them organized and control their houses' lights, and where communication has atrophied so much that people hire professionals to write personal letters. Their technologies have made their lives materially better, but they also seem to have become atomized and lonely, struggling to connect both emotionally and physically. A decade ago, that felt like science fiction. It was science fiction.

Sci-fi tries to understand human experience by placing audiences in unfamiliar settings, enabling them to see common experiences (ethical dilemmas, arguments, emotional turmoil) through fresh eyes. In 2013, Her gave us new ground on which to test out old questions about love, friendship, embodiment, and connection within a relationship, especially a romance. The idea that anyone, even a sad loner like Theodore Twombly (Joaquin Phoenix), could be in love with his OS assistant seemed pretty far-fetched. Siri had been introduced two years before the movie was released, but to me, the AI assistant Samantha still felt like a fantasy, and not only because she was voiced by Scarlett Johansson. Samantha is molded to Theodore's needs (following a brief psychological profile via a few weird questions during the setup process), but there are needs of his she simply cannot fulfill (and eventually, the same is true of him). Her seemed to me to be a movie about how the people we love are never really made for us; to love someone is to love their mess. Or it could be read as a movie about long-distance relationships, or the kinds of disembodied romances people have been forming over the internet since its dawn.

But Her's central "conceptual gag," as one critic put it (the idea that you could fall in love with an artificial voice made just for you) has become vibrantly plausible, much faster than I (or, I suspect, Spike Jonze) ever anticipated. Less than 10 years have passed since Her hit theaters, and yet the headlines are full of stories about the human-replacing capabilities of AI to draft content, or impersonate actors, or write code in ways that queasily echo Her.

For instance, in the spring of 2023, the influencer Caryn Marjorie, discovering she couldn't interact with her more than 2 million Snapchat followers personally, worked with the company Forever Voices to create an AI version of herself. The clone, dubbed CarynAI, was trained on Marjorie's videos, and users can pay $1 a minute to talk with it. In its first week of launch, the AI clone reportedly earned $72,000.

While Marjorie tweeted in a pitch for the clone that it was "the first step in the right direction to cure loneliness," something funny happened with CarynAI once launched. It almost immediately went rogue, engaging in intimate, flirty sexual conversations with its customers. The fact that the capability emerged suggests, of course, that people were trying to have those conversations with it, which in turn suggests the users were interested in more than just curing loneliness.

If you search for "AI girlfriend," it sure seems like there's a market: everything from AI Girlfriend to the "fun and flirty dating simulator" Anima to simply using ChatGPT to create a bot trained on your own loved one. Most of the AI girlfriends (they're almost always girlfriends) seem designed for socially awkward straight men to either test-drive dating (a rehearsal, of sorts) or replace human women altogether. But they fit neatly into a particular kind of fantasy: that a machine designed to fulfill my needs and my needs alone might fulfill my romantic requirements and obviate the need for some messy, needy human with skin and hang-ups and needs of their own. It's love, of a kind: an impoverished, arrested-development love.

This fantasy dates to long before the AI age. Since early modernity, we've been pondering the question of whether artificial intelligences are capable of loving us, whether that love is real, and whether we can, should, or must love them back. You could see Mary Shelley's Frankenstein as a story about a kind of artificial intelligence (though the creature's brain is harvested from a corpse) that learns love and then, when it is rejected, hate. An early masterpiece of cinema, Fritz Lang's 1927 film Metropolis, features a robot built by a grieving inventor to resurrect his dead love; later on, the robot tricks a different man into loving it and unleashes havoc on the city of Metropolis.

A scene from 1982's Blade Runner.

The history of sci-fi cinema is littered with the question of whether an AI can feel emotion, particularly love; what that might truly mean for the humans whom they love; and whether contained within that love might be the seeds of human destruction. The 1982 sci-fi classic Blade Runner, for instance, toys with the question of emotion in artificial replicants, some of whom may not even realize they're not actually human. Love is a constant concern throughout Ridley Scott's film; one of the more memorable tracks on its Vangelis soundtrack is the "Love Theme," and it's not accidental that one of the main characters in the 2017 sequel Blade Runner 2049 is a replicant named Luv.

An exhaustive list would be overkill, but science fiction is replete with AIs who are just trying to love. The terrific 2004-2009 reboot of Battlestar Galactica (BSG) took the cheesy original's basic sci-fi plot of humans versus robots and upgraded it with the question of whether artificial intelligences could truly feel love or just simulate it. A running inquiry in the series dealt with the ability of the humanoid Cylons (the BSG world's version of replicants) to conceive life, which can only occur when a Cylon and a human feel love and have sex. (Cylons are programmed to be monotheists, while the humans' religion is pantheistic, and the series is blanketed by the robots' insistence that God is love.) The question throughout the series is whether this love is real, and, correspondingly, whether it is good or a threat to the continuance of the human race.

Another stellar example of the genre appears in Ex Machina, Alex Garland's 2014 sci-fi thriller about a tech genius who is obsessed with creating a robot (well, a robot woman) that can not only pass the Turing test but is capable of independent thought and consciousness. When one of his employees wins a week-long visit to the genius's ultramodern retreat, he talks to the latest model. When she expresses romantic interest in him, he finds himself returning it, though of course it all unravels in the end, and the viewer is left wondering what, if any, of the feelings demonstrated in the film were truly real.

Perhaps the seminal (and telling) AI of cinema appeared in Stanley Kubrick's 1968 opus 2001: A Space Odyssey. The central section of the sprawling film is set in the future on a spacecraft bound for Jupiter and largely piloted by a computer named HAL, with whom the humans on board have a cordial relationship. HAL famously, chillingly, and suddenly refuses to work with them, in a way that hovers somewhere between hate and love's true antonym, indifference. If computers can feel warmth and affection toward us, then the opposite is also true. Even worse, they may instead feel indifference toward us, and we become an obstacle that must simply be removed.

Why tell these stories? A century ago, or as little as five years ago when generative AIs still seemed like some figment of the future, they served a very particular purpose. Pondering whether a simulation of intelligence might love us, and whether and how we might love it back, was a way to examine the nature of love (and hate) itself. Is it transactional or sacrificial? Is it unconditional? Can I truly love nonhuman beings, like my dog, as I might a person? Does loving something mean simply communing with its mind, or is there more to it? If someone loves me, what is my responsibility toward them? What if they seem incapable of loving me the way I wish to be loved? What if they hurt me or abandon me altogether?

Placing those questions into the framework of humans and machines is a way to defamiliarize the surroundings, letting us come at those age-old questions from a new angle. But as tech wormed its way into nearly every aspect of our relationships (chat rooms, group texts, dating apps, pictures and videos we send to make ourselves feel more embodied), the questions took on new meaning. Why does it feel different to text your boyfriend than to talk to him over dinner? When "ghosting" has entered common parlance (treating a person like an app you can delete from your phone), how does that alter the responsibilities we feel toward one another, for better or worse?

The flattening of human social life that comes from reducing human interaction to words or emoticons emanating from a screen has made it increasingly possible to ignore the emotions of the person on the other end. It's always been possible, but it's far more commonplace now. And while virtual worlds and artificial intelligence aren't the same thing, movies about AI hold the capability to interrogate this aspect of our experience, too.

But the meaning of art morphs depending on the context of the viewer. And so, in the age of ChatGPT and various AI girlfriends, and the almost certainly imminent AI-powered humanoid robots, these stories are once again morphing along with what they teach us about human existence. Now we are seriously considering whether an actual artificial intelligence can love, or at least adequately simulate love, in a way that fulfills human needs. What would it mean for a robot child to love me? What if my HomePod decides it hates me? What does it mean that I'm even thinking about this?

One of the most incisive films about these questions dates to 2001, before generative AI really existed. Steven Spielberg's A.I. Artificial Intelligence (a film originally developed by Stanley Kubrick after he acquired the rights to a 1969 short story by Brian Aldiss) was greeted at the time with mixed reviews. But watching it now, there's no denying its power as a tool for interrogating the world we find ourselves in today.

A.I. is set in a climate-crisis future: "The ice caps melted because of the greenhouse gases," the opening narration tells us, and "the oceans had risen to drown so many cities along all the shorelines of the world." In this post-catastrophe future, millions have died, but the affluent developed world has coped by limiting pregnancies and introducing robots into the world. Robots, "who were never hungry and did not consume resources beyond those of their first manufacture," were "so essential and economical in the chainmail of society," we're told.

Now, 22 years after the film's release, with the climate crisis on our doorstep and technology replacing humans, it's easier than ever to accept this idea of the future. But its main question comes soon after, via a scene in which a scientist explains to the employees of a robotics firm why they should create a new kind of machine: a robot who can love. This "mecha" (the film's term for a robot powered by AI) would be especially useful in the form of a child, one that could take the place of the children couples can't have or have lost in this future. This child would be ideal, at least in theory: a kid, but better, one who would act correctly, never age, and wouldn't even increase the grocery bill.

What happens next is what's most important. These child mechas, the scientist says, would love unconditionally, and thus would acquire a kind of subconscious. They'd have "an inner world of metaphor, of intuition, of self-motivated reasoning, of dreams." Like a real child, but upgraded.

But an employee turns the question around: the mecha might love, but can you get a human to love them back? And if that robot did genuinely love a person, what responsibility does that person hold toward the mecha in return?

Then she pauses and says, "It's a moral question, isn't it?"

The man smiles and nods. "The oldest one of all," he replies. In fact, he continues, think of it this way: didn't God make Adam, the first man, in order to love him? Was that a moral choice?

What's most interesting in A.I.'s treatment of this fundamental question is its insistence that love may be the most fundamental emotion, the one that makes us human, that gives us a soul. In one scene, David (Haley Joel Osment), the child mecha, is triggered by a series of code words to imprint upon Monica (Frances O'Connor), his surrogate mother. In a terrific bit of acting, you can see a light come into David's eyes at the moment when he starts to love her, as if he's gone from machine to living being.

Throughout A.I., we're meant to sympathize with the mechas on the basis of their emotions. David was adopted by Monica and her husband as a replacement for their son, who is sick and in a coma from which he might not awake; when he does, David is eventually abandoned by the family, Monica driving him into the woods and leaving him there. It's a scene of heart-wrenching pathos, no less so because one participant isn't real. Later, the movie's main villain, the impresario Lord Johnson-Johnson (played by Brendan Gleeson), presides over a Flesh Fair where he tortures mechas for an audience in a colosseum-style stadium and rails against the new mechas that manipulate our emotions by acting like humans. The crowd boos and stones him.

A.I. Artificial Intelligence concludes, decisively, that it's possible an AI might not only love us but be devoted to us, yearn for us, and also deserve our love in return, and that this future will demand from us an expansion of what it means to love, even to be human. David's pain when Monica abandons him, and his undying love toward her, present a different sort of picture than Frankenstein did: a creation that loves back, and a story that suggests we must love in return.

Which oddly leaves us in the same place we started. Yes, as technology has evolved, our stories about AIs and love have migrated from being all about their subtext to their actual text. They're not purely theoretical anymore, not in a world where we are asking if we can, and will, expect the programs we write to replace human relationships.

Yet there's a deeper subtext to all of this that shines through each story. They ask questions about the human experience of love, but more importantly, they're an inquiry into the nature of the soul, one of those things philosophers have been fighting over almost since the dawn of time. It's that spark, the light that comes into young David's eyes. The soul, many of us believe, is the thing that separates us from our machines: some combination of a spark of independent intelligence and understanding (Ex Machina), the ability to feel emotion (Blade Runner), and the ability to outstrip our programming with originality and creativity and even evil (2001: A Space Odyssey).

The question lurking behind all of these tales is whether these same AIs, taught and trained to love, can invert that love into hate and choose to destroy us. It won't be just a fight of species against species for survival; it will be a targeted destruction, retribution for our behavior. But deeper still is the human question: if we develop an ethical responsibility to love the creatures we have made, and we fail to do so, then isn't destruction what we deserve?

Here is the original post:
Fiction and films about artificial intelligence tackle the nature of love - Vox.com

Artificial Intelligence can accelerate the energy transition, but must … – Hellenic Shipping News Worldwide

The energy sector must overcome a lack of trust in artificial intelligence (AI) before the technology can be effectively used to accelerate the energy transition, a DNV report has found.

Based on interviews with senior representatives from energy companies across the United Kingdom, DNV's research determined that while AI is already being used across the sector, companies are largely cautious of its new and unestablished uses. Interviewees include industry personnel from the Centre for Data Ethics and Innovation, EnQuest, National Gas, National Grid Electricity System Operator (ESO) and the Net Zero Technology Hub, among other organisations.

AI insights: Rising to the challenge across the UK energy system outlines how AI can contribute to the energy transition and argues that an industry-wide approach to standards and best practices is required to unlock its potential.

While AI can be key to advancement and innovation in energy supply chains, the research found that putting in place the foundations for trust, in both the providers of AI solutions and the outputs of those solutions, must be prioritized in light of recent geopolitical events highlighting the need for countries to have energy sustainability, security and affordability (in effect, a parallel trilemma for AI as it is increasingly democratized and utilized). It was also found that data policies and industry culture present significant barriers to its widespread adoption.

At industry level, data sharing has been identified as the area which requires the greatest improvement. In terms of culture, it was found that the engineering community has a high level of risk aversion and low tolerance to error.

Hari Vamadevan, Executive Vice President and Regional Director UK and Ireland, Energy Systems at DNV, said: "To truly harness the benefits of AI in the energy sector, it's critical this technology is trusted. There are two main challenges in achieving this: information, to evaluate the trustworthiness of an AI system, and communication, to relay evidence which allows users to trust the systems."

DNV has many years' experience in AI, and the latest in its suite of digital twin recommended practices now covers AI-enabled systems, providing a framework to assure those systems are trustworthy and managed responsibly throughout their entire lifecycle.

The emergence of artificial intelligence also poses cyber security risks in the sector, with heightened geopolitical tensions and the accelerating adoption of digitally connected infrastructure sparking concern over the industry's vulnerabilities to cyber threats.

Shaun Reardon, Head of Section, Industrial Systems, Cyber Security at DNV, said: "Accurate, accessible, reliable, and relevant: digital technologies and AI tools must be all these things if we are to trust them. But they must also be secure. Digital technologies set to be enhanced by AI are being connected to control systems and other operational technology in the energy industry, where safety is critical. The industry needs to manage the cyber security risk and build trust in the security of these vital technologies."
Source: DNV, https://www.dnv.com/news/artificial-intelligence-can-accelerate-the-energy-transition-but-must-gain-trust-of-the-sector-246640

Link:
Artificial Intelligence can accelerate the energy transition, but must ... - Hellenic Shipping News Worldwide

Riding the whirlwind: BMJ’s policy on artificial intelligence in … – The BMJ

BMJ will consider content created with artificial intelligence only if the use is clearly described and reasonable

Artificial intelligence (AI) can rival human knowledge, accuracy, speed, and choices when carrying out tasks. The latest generative AI tools are trained on large quantities of data and use machine learning techniques such as logical reasoning, knowledge representation, planning, and natural language processing. They can produce text, code, and other media such as graphics, images, audio, or video. Large language models (LLMs), which are a form of AI, are able to search, extract, generate, summarise, translate, and rewrite text or code rapidly. They can answer complex questions (called prompts) at search engine speeds that the human mind cannot match.

AI is transforming our world, and we are not yet fully able to comprehend or harness its power. It is a whirlwind sweeping up all before it. Availability of LLMs such as ChatGPT, and growing awareness of their capabilities, is challenging many industries, including academic publishing. The potential benefits for content creation are clear, such as the opportunity to overcome language barriers. However, there is also potential for harm: text produced by LLMs may be inaccurate, and references can be unreliable. Questions remain about the degree to which AI can be accountable and responsible for content, the originality and quality of content that is produced, and the potential for bias, misconduct, and misinformation.

BMJ Group's policy on the use of AI in producing and disseminating content recognises the potential for both benefit and harm and aims primarily for transparency. The policy allows editors to judge the suitability of authors' use of AI within an overarching governance framework (https://authors.bmj.com/policies/ai-use). BMJ journals will consider content prepared using AI as long as use of the technology is declared and described in detail so that editors, reviewers, and readers can assess suitability and reasonableness. Where use of AI is not declared, we reserve the right to decline to publish submitted content or to retract content.

With greater experience and understanding of AI, BMJ may specify circumstances in which particular uses are or are not appropriate. We appreciate that nothing stands still for long with AI; editing tasks enabled by AI embedded in word processing programmes or their extensions to improve language, grammar, and translation will become commonplace and are more likely to be acceptable than use of AI to complete tasks linked to authorship criteria.1 These tasks include contributing to the conception and design of the proposed content; acquisition, analysis, or interpretation of data; and drafting or critically reviewing the work.

BMJ's policy requires authors to declare all use of AI in the contributorship statement. AI cannot be an author as defined by BMJ, the International Committee of Medical Journal Editors (ICMJE), or the Committee on Publication Ethics (COPE) criteria, because it cannot be accountable for submitted work.1 The guarantor or lead author remains responsible and accountable for content, whether or not AI was used.

BMJ's policy mirrors that of organisations such as the World Association of Medical Editors (WAME),2 COPE,3 and other publishers. All content will be held to the same standard, whether produced by external authors or by editors and staff linked to BMJ. Our policy on the use of AI for drafting peer review comments and any other advisory material is similar. All use must be declared, and editors will judge the appropriateness of that use. Importantly, reviewers may not enter unpublished manuscripts or information about them into publicly available AI tools.

It is imperative for journals and publishers to work with AI, learn from and evaluate new initiatives in a meaningful but pragmatic way, and devise or endorse policies for the use of AI in the publication process. The UK's Science, Technology and Medicine Integrity Hub (a membership organisation for the publishing industry which aims to advance trust in research)4 outlined three main areas that could be improved by AI: supporting specific services, such as screening for substandard content, improving language, or translating or summarising content for diverse audiences; searching for and categorising content to enhance content tagging or labelling and the production of metadata; and improving user experience and dissemination through curating or recommending content.

BMJ will carefully assess the effect of AI on its broader business and will publicly report its use where appropriate. New ideas for trialling AI within BMJ's publishing workflows will be assessed on an individual basis, and we will consider factors such as efficiency, transparency and accountability, quality and integrity, privacy and security, fairness, and sustainability.

AI presents publishers with serious and potentially existential challenges, but the opportunities are also revolutionary. Journals and publishers must maximise these opportunities while limiting harms. We will continue to review our policy given the rapid and unpredictable evolution of AI technologies. AI is a whirlwind capable of destroying everything in its path. It can't be tamed, but our best hope is to learn how to ride the whirlwind and direct the storm.

With thanks to Theo Bloom and the other editorial staff and editors at BMJ who contributed to the development of the policy.

Read the original:
Riding the whirlwind: BMJ's policy on artificial intelligence in ... - The BMJ

AG Nessel Urges Congress to Study Artificial Intelligence and Its … – Michigan Courts

LANSING - As part of a bipartisan 54-state and territory coalition, Michigan Attorney General Dana Nessel joined a letter urging Congress to study how artificial intelligence (AI) can be and is being used to exploit children through child sexual abuse material (CSAM) and to propose legislation to protect children from those abuses.

"Artificial intelligence poses a serious threat to our children, and abusers are already taking advantage," Nessel said. "Our laws and regulations must catch up to the technology being used by those who prey on our children. I stand with my colleagues in asking Congress to prioritize examining the dangers posed by AI-generated child sexual abuse material."

The dangers of AI as it relates to CSAM fall into three main categories: a real child who has not been physically abused, but whose likeness is being digitally altered in a depiction of abuse; a real child who has been physically abused and whose likeness is being digitally recreated in other depictions of abuse; and a child who does not exist, but is being digitally created in a depiction of abuse that feeds the market for CSAM.

The letter states that AI can "rapidly and easily create 'deepfakes' by studying real photographs of abused children to generate new images showing those children in sexual positions." This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.

Attorney General Nessel and the rest of the coalition ask Congress to form a commission specifically to study how AI can be used to exploit children and to act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM.

The letter continues: "We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act."

Besides Michigan, the letter, which was co-led by South Carolina, Mississippi, North Carolina, and Oregon in a bipartisan effort, was joined by Alabama, Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, the District of Columbia, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, the Northern Mariana Islands, Ohio, Oklahoma, Pennsylvania, Puerto Rico, Rhode Island, South Dakota, Tennessee, Texas, Utah, Vermont, the Virgin Islands, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.

You can read the full letter here.

###

Read more here:
AG Nessel Urges Congress to Study Artificial Intelligence and Its ... - Michigan Courts

WATCH | How will Artificial Intelligence shape the automotive … – News24

Just how impressive is the latest artificial intelligence technology when applied to cars and mobility? The team from Deutsche Welle brings you the top four AI innovations from the recent Internationale Automobil-Ausstellung (International Motor Show Germany), better known as IAA 2023.

Some fascinating new technology has been showcased at the recent auto show in Germany, ranging from Volkswagen's self-learning vehicles to Vera the AI assistant, and Chinese auto manufacturers with a camera that can see your health!

Go here to see the original:
WATCH | How will Artificial Intelligence shape the automotive ... - News24
