
AI likely to spell end of traditional school classroom, leading expert says – The Guardian


Exclusive: Prof Stuart Russell says technology could result in fewer teachers being employed, possibly even none

Recent advances in AI are likely to spell the end of the traditional school classroom, one of the world's leading experts on AI has predicted.

Prof Stuart Russell, a British computer scientist based at the University of California, Berkeley, said that personalised ChatGPT-style tutors have the potential to hugely enrich education and widen global access by delivering personalised tuition to every household with a smartphone. The technology could feasibly deliver most material through to the end of high school, he said.

"Education is the biggest benefit that we can look for in the next few years," Russell said before a talk on Friday at the UN's AI for Good Global Summit in Geneva. "It ought to be possible within a few years, maybe by the end of this decade, to be delivering a pretty high quality of education to every child in the world. That's potentially transformative."

However, he cautioned that deploying the powerful technology in the education sector also carries risks, including the potential for indoctrination.

Russell cited evidence from studies using human tutors that one-to-one teaching can be two to three times more effective than traditional classroom lessons, allowing children to get tailored support and be led by curiosity.

"Oxford and Cambridge don't really use a traditional classroom, they use tutors, presumably because it's more effective," he said. "It's literally infeasible to do that for every child in the world. There aren't enough adults to go around."

OpenAI is already exploring educational applications, announcing a partnership in March with an education nonprofit, the Khan Academy, to pilot a virtual tutor powered by GPT-4.

This prospect may prompt reasonable fears among teachers and teaching unions of fewer teachers being employed, possibly even none, Russell said. Human involvement would still be essential, he predicted, but could be drastically different from the traditional role of a teacher: potentially incorporating playground-monitor responsibilities, facilitating more complex collective activities, and delivering civic and moral education.

"We haven't done the experiments so we don't know whether an AI system is going to be enough for a child. There's motivation, there's learning to collaborate, it's not just 'Can I do the sums?'" Russell said. "It will be essential to ensure that the social aspects of childhood are preserved and improved."

The technology will also need to be carefully risk-assessed.

"Hopefully the system, if properly designed, won't tell a child how to make a bioweapon. I think that's manageable," Russell said. A more pressing worry is the potential hijacking of the software by authoritarian regimes or other players, he suggested. "I'm sure the Chinese government hopes [the technology] is more effective at inculcating loyalty to the state," he said. "I suppose we'd expect this technology to be more effective than a book or a teacher."

Russell has spent years highlighting the broader existential risks posed by AI, and was a signatory of an open letter in March, signed by Elon Musk and others, calling for a pause in the "out-of-control race" to develop powerful digital minds. The issue has become more urgent since the emergence of large language models, Russell said. "I think of [artificial general intelligence] as a giant magnet in the future," he said. "The closer we get to it, the stronger the force is. It definitely feels closer than it used to."

Policymakers are belatedly engaging with the issue, he said. "I think the governments have woken up, and now they're running around figuring out what to do," he said. "That's good, at least people are paying attention."

However, controlling AI systems poses both regulatory and technical challenges, because even the experts don't know how to quantify the risks of losing control of a system. OpenAI announced on Thursday that it would devote 20% of its compute power to seeking a solution for steering or controlling a potentially super-intelligent AI, and preventing it from going rogue.

"The large language models in particular, we have really no idea how they work," Russell said. "We don't know whether they are capable of reasoning or planning. They may have internal goals that they are pursuing, and we don't know what they are."

Even beyond direct risks, systems can have other unpredictable consequences for everything from action on climate change to relations with China.

"Hundreds of millions of people, fairly soon billions, will be in conversation with these things all the time," said Russell. "We don't know in what direction they could change global opinion and political tendencies."

"We could walk into a massive environmental crisis or nuclear war and not even realise why it's happened," he added. "Those are just consequences of the fact that whatever direction it moves public opinion, it does so in a correlated way across the entire world."



27% of jobs at high risk from AI revolution, says OECD – Reuters

PARIS, July 11 (Reuters) - More than a quarter of jobs in the OECD rely on skills that could be easily automated in the coming artificial intelligence revolution, and workers fear they could lose their jobs to AI, the OECD said on Tuesday.

The Organisation for Economic Co-operation and Development (OECD) is a 38-member bloc, spanning mostly wealthy nations but also some emerging economies like Mexico and Estonia.

There is little evidence the emergence of AI is having a significant impact on jobs so far, but that may be because the revolution is in its early stages, the OECD said.

Jobs with the highest risk of being automated make up 27% of the labour force on average in OECD countries, with eastern European countries most exposed, the Paris-based organisation said in its 2023 Employment Outlook.

Jobs at highest risk were defined as those using more than 25 of the 100 skills and abilities that AI experts consider easily automatable.

Three out of five workers meanwhile fear that they could lose their job to AI over the next 10 years, the OECD found in a survey last year. The survey covered 5,300 workers in 2,000 firms spanning manufacturing and finance across seven OECD countries.

The survey was carried out before the explosive emergence of generative AI like ChatGPT.

Despite the anxiety over the advent of AI, two-thirds of workers already working with it said that automation had made their jobs less dangerous or tedious.

"How AI will ultimately impact workers in the workplace and whether the benefits will outweigh the risks, will depend on the policy actions we take," OECD Secretary General Mathias Cormann told a news conference.

"Governments must help workers to prepare for the changes and benefit from the opportunities AI will bring about," he continued.

Minimum wages and collective bargaining could help ease the pressure that AI could put on wages, while governments and regulators need to ensure workers' rights are not compromised, the OECD said.

Reporting by Leigh Thomas; Editing by Emma Rumney



5 Incredible Things That Governments Can Ban With CBDCs – Altcoin Buzz

As countries around the world consider implementing CBDCs, it is critical to assess the possible regulatory powers they may have.

In this article, we examine five things that governments could ban or control more effectively using CBDCs, raising important questions about individual liberties, privacy, and the evolving role of governments in financial transactions.

One of the most significant potential changes with the introduction of CBDCs is the ability to regulate, and potentially ban, cash transactions. While cash provides anonymity and freedom from surveillance, it also facilitates illicit activities such as money laundering and tax evasion.

Implementing CBDCs could therefore restrict or eliminate the use of physical cash altogether, as all transactions would occur through digital means. This would enable stricter monitoring and control over financial activities, providing governments with enhanced oversight to combat illegal practices.

Governments have shown concern over the unregulated nature of cryptocurrencies and their potential use for illicit purposes. With CBDCs, authorities may seek to heavily regulate the use of unregulated tokens.

CBDCs give governments a centralized and regulated alternative, ensuring better control over monetary policy and tax collection and helping to combat illicit activities. By controlling cryptocurrencies, governments can maintain stability and security within their financial systems.

CBDCs may empower governments to monitor and regulate peer-to-peer (P2P) transactions more effectively. While P2P transactions currently exist in various forms, such as digital wallets and mobile payment apps, they often bypass traditional banking systems and lack proper regulatory oversight.

With CBDCs, governments can enforce strict rules and regulations on P2P transactions, ensuring compliance with legal and financial requirements. This could involve placing limits on transaction amounts, imposing fees, or requiring identification for every transaction, ultimately enhancing transparency and reducing the potential for illicit activities.

Governments could use CBDCs to ban or regulate high-risk investments, such as speculative assets or unregulated investment schemes. By implementing restrictions on certain investment vehicles or imposing capital controls, authorities can protect citizens from potential financial losses and fraudulent schemes.

CBDCs offer governments the ability to monitor transactions in real time, allowing them to identify and intervene in risky investment practices promptly. While such measures may aim to protect individuals, they also raise concerns about individual financial autonomy and the role of governments in determining investment choices.

The implementation of CBDCs could enable governments to exert greater control over cross-border payments. Currently, cross-border transactions can be complex, expensive, and subject to delays. With CBDCs, governments can streamline and regulate cross-border payments, potentially reducing fees, improving efficiency, and enhancing security.

However, this increased control may also give governments the power to restrict certain international transactions, potentially impeding the free flow of capital and raising concerns about individual economic freedom and global trade.

As governments explore the possibilities presented by CBDCs, it is crucial to consider the potential impact on individual freedoms and privacy. Governments must tread carefully when considering bans or restrictions on routine items with CBDCs, striking a balance between regulatory control and individual freedoms and ensuring that any limitations imposed are reasonable, proportionate, and aligned with the principles of democracy and privacy.

To address concerns related to individual liberties and privacy, governments can implement robust safeguards and transparency measures. These may include clear guidelines on data protection, strict oversight mechanisms, and public accountability for the use of CBDC data. Engaging with experts, civil society organizations, and citizens can help shape regulatory frameworks that strike a delicate balance between regulatory objectives and individual rights.

Furthermore, governments should explore ways to leverage the potential benefits of CBDCs without resorting to outright bans. By enhancing financial education and literacy, governments can empower individuals to make informed decisions, reducing the risks associated with high-risk investments and unregulated tokens.

It is important to recognize that the implementation of CBDCs is not solely about imposing restrictions. It also offers opportunities for financial inclusion, efficiency, and innovation. CBDCs can provide secure and cost-effective digital payment systems. This enables greater access to financial services for underbanked populations.



Binance Suspends Eight Altcoins Tied to Bridge Protocol Multichain After May Incident – The Daily Hodl

Binance, the largest crypto exchange in the world, has suspended deposits and withdrawals for eight altcoins tied to bridging platform Multichain.

In a new announcement, Binance says it's halting support for Alchemy Pay (ACH) via the BNB Smart Chain, Spell Token (SPELL) via the Avalanche C-Chain, and six other lower market cap altcoin projects via Ethereum, Fantom

The tokens were already suspended once in May following an incident described by Multichain as a force majeure, or an attack that left some routes on its bridge unavailable. Binance has not given a specific reason for halting support for the projects, but did briefly reference the incident in its announcement.

Multichain said at the time of the incident,

While most of the cross-chain routes of Multichain protocol are functioning well, some of the cross-chain routes are unavailable due to force majeure, and the time for service to resume is unknown. After service is restored, pending transactions will be credited automatically. Multichain will compensate users who are affected during this process, and the compensation plan will be announced later.

Shortly after the incident, the company revealed that it couldn't locate its CEO, and there have been no updates on his whereabouts since.

In the past two days, the Multichain protocol has experienced multiple issues due to unforeseeable circumstances. The team has done everything possible to maintain the protocol running, but we are currently unable to contact CEO Zhaojun and obtain the necessary server access for maintenance.

Binance says the tokens will be suspended until further notice.



Mission: Impossibles technology unpacked From AI to facial recognition – Yahoo Eurosport UK

Tom Cruise is back in Mission: Impossible Dead Reckoning, busting out lots of nifty-sounding technologies. But do any of them actually exist? (Paramount Pictures via AP)

Mission: Impossible films are always a lot of fun, and the seventh and latest instalment, Mission: Impossible Dead Reckoning Part 1, which came out in UK cinemas yesterday, is just as engaging and exhilarating as the previous films, not to mention incredibly timely, given recent news events.

Tom Cruise returns as Ethan Hunt, who in 2023 is still completing missions for the Impossible Mission Force (IMF) – no, not the International Monetary Fund, the other one – but this time he faces off against a mysterious, sentient, all-powerful artificial intelligence (AI) intent on controlling the world, which calls itself the Entity.

But how does the technology in MI7 stack up? Here's the lowdown from The Standard and some technical experts.

Warning: Contains spoilers

Sentient AI, also known as The Entity, as depicted in Mission: Impossible Dead Reckoning Part 1 (Paramount Pictures)

In MI7, someone has built a terrifying new weapon that threatens all humanity: a godless, stateless, amoral AI that is able to be everywhere and nowhere at once, and that has the ability to twist and manipulate information to its will. Every government wants to control and weaponise this AI but, to do so, you need two halves of an intricate key – and so the race begins.

Even better, the AI is so powerful that it has a henchman working for it called Gabriel (Esai Morales) and is able to both give him instructions and create havoc by itself.

But is it possible for AI to be self-aware, and have the ability to experience emotions or push an agenda, such as self-protection or greed? Computer scientists overwhelmingly say no.

Sandra Wachter, a professor of technology and regulation at Oxford University, thinks the general public has confused generative AI with artificial general intelligence (AGI): a far-off, fantastical concept about sentient, self-aware robots, like the murderous Skynet in the Terminator films.

"There is no scientific evidence that we are on a pathway to sentiency. There is not even evidence that such a pathway does exist," Prof Wachter tells The Standard.


In particular, she takes a dim view of scaremongering by some politicians and figures in the AI industry, who claim that AI could end up setting off nuclear warheads or be used to build "dirty bomb" chemical weapons in the next two years.

"The absence of evidence is not evidence of absence... I cannot warn you about something and respond if I don't know what the harms are. However, there is scientific evidence that AI causes harm, such as bias, data protection violations or misinformation. We should focus on issues that already exist."

In MI7, US intelligence tries to locate Ethan Hunt (Tom Cruise) at Abu Dhabi Airport using facial-recognition software that scans CCTV camera footage in real time (Paramount Pictures)

In MI7, US intelligence tries to locate Ethan Hunt (Tom Cruise) at Abu Dhabi Airport using facial-recognition software, but every time they think that they have found him, it turns out to be someone else – a handy trick pulled off by Hunt's pals Benji Dunn (Simon Pegg) and Luther Stickell (Ving Rhames).

Meanwhile, the Entity handily steps in at will to scrub Gabriel's image from the live video footage, even hacking into the augmented-reality (AR) glasses worn by Ethan and impersonating other voices to trick him.

The software shown in the movie is used by police worldwide today – though not in AR glasses – and deepfake technology is thriving online, including AI videos featuring Tom Cruise himself. But many experts told The Standard we don't have the advanced computing power shown in MI7.

"It's certainly possible to adapt CCTV footage to change people's faces and voices... and it's also possible to obscure faces of the most wanted; however, we're some way away from being able to make such changes in real time," says Laura Kankaala, threat intelligence lead at Finland-based IT security firm F-Secure.

"The amount of computational resources required to do this in real time is crazy."

In MI7, Tom Cruise can see information about people while wearing augmented-reality glasses, like the man in this picture (Bram Van Oost / Unsplash)

On the other hand, Darren James, senior product manager at US password management firm Specops Software, thinks deepfakes have progressed: "With advances in network speed, compression, CPU, and graphics processing, we've also seen a huge jump in deepfake videos being able to change your face and voice in near real time. So, if you have the technology, know-how, and money, this is very doable."

Simon Newman, a member of International Cyber Expo's Advisory Council, says work on AR glasses is progressing, but there are a lot of privacy concerns, so you probably won't be able to look up passersby on the street and see if they have a criminal record.

"That said, augmented-reality glasses do offer the potential for normal consumers to make use of this technology – it will be possible to recognise friends on social media and bring their profile pages up on screen."

But others think we should forget about AR headsets coming any time soon.

"One nail-biting watch of Mission: Impossible Dead Reckoning Part 1 and you'd be forgiven for thinking AR headsets or goggles really are the next big thing, especially when Apple launched its mainstream AR headset last month, too. However, in reality, we're not that much further on than we were in 2013, with the launch and flop of Google Glass," Michael Queenan, chief executive of London-based data services firm Nephos Technologies, tells The Standard.

"There aren't any affordable mainstream models yet. That's why Google Glass failed but step counters and fitness trackers became mainstream. There needs to be a great experience for a great price that doesn't involve walking around with a computer on your head.

"Neither Apple nor Meta are on the mark just yet. At the moment, AR and VR headsets are just for the gaming community; there is no practical use for them."

Ethan Hunt, disguised as someone else, passes through a biometric scanner. Could this really work in real life? (Paramount Pictures)

One famous trope from the Mission: Impossible films is the super-realistic latex masks used by IMF agents to impersonate other people. In MI7, a handy machine in a briefcase can create one of these masks very quickly.

There is an interesting scene, however, where Ethan Hunt (Tom Cruise), disguised as someone else, passes through a biometric scanner at a US government building. Okay, so perhaps in this world masks can fool facial biometric scanners, but how does he beat hand scanners?

Today, we scan peoples hands because the subcutaneous blood vessels of the human body form a distinctive vein pattern for each person.

According to the National Cyber Security Centre (NCSC), it is highly unlikely that in real life you could pass a biometric hand scanner registered to someone else using your own hands.

Mr Newman says some biometric scanners have been spoofed by testers using fake prosthetic fingers; however, modern scanners can now detect whether the finger is alive or not, so perhaps this trick will be less common in the future.

IMF crew and seemingly master hackers Luther Stickell (Ving Rhames) and Benji Dunn (Simon Pegg) (AP)

Many people, including myself, can't get a 4G signal on the go, whether in London or in rural parts of the UK, yet Ving Rhames's Luther and Simon Pegg's Benji are able to hack into systems, communicate with Ethan, and monitor him using everything from wi-fi to 5G to even, apparently, piggybacking off satellites in space.

At one point, Benji is even able to monitor Ethan on a motorbike going over a cliff while sitting in the passenger seat of a car on autopilot moving across Helsetkopen in Norway. Surely this one is a dud – but experts disagree.

"Absolutely, this is completely possible today, thanks to global satellite internet services and ubiquitous mobile communications. If you're in a basement or bunker, though, you may need another plan. In fact, at the Def Con conference in Las Vegas this summer, there will be a satellite-hacking competition," says Matt Aldridge, a principal solutions consultant at Canadian software firm OpenText's cybersecurity arm.

Cary Elwes plays NSA boss Denlinger in Mission: Impossible Dead Reckoning Part 1 (Paramount Pictures)

I almost fell off my chair in shock on hearing Cary Elwes, who plays National Security Agency (NSA) boss Denlinger in MI7, ask on-screen why air gapping, a cybersecurity technique often mentioned by academics, can't be used to prevent the AI from spreading to the US's most critical infrastructure.

Air gapping is a method of keeping computers and IT networks safe by making sure they are physically isolated from other computers and the internet. It's simple: if your machine isn't connected to anything, how can it fall foul of malware or rogue AIs?

However, the idea of an AI that can go anywhere is an exciting one, and experts are divided.

"An air gap is a physical space between an IT resource and a network. With all radio signals like wi-fi disabled, it's tough to circumvent. Whilst it has been proven that input/output can be perceived through minute fluctuations in power consumption, and even visually through the faint flicker of an LED light, the Mission: Impossible team are going to be hard-pressed to find any intelligence, artificial or otherwise, to make this work – at least in this dimension," says Steven Wood, director of solutions consulting at OpenText's cybersecurity division.

Manchester-based IT security firm NCC Group disagrees, although it admits there is no known real-world example of such an attack.

"Even an air-gapped system needs to be updated, and USB drives are often used. As for exfiltrating sensitive data, there have been some academic papers and proofs of concept about using the internal fans or SATA cables to communicate via ultrasonic or electromagnetic waves," NCC Group's technical director Sourya Buswass tells The Standard.

Michael Langer, chief product officer of Israeli industrial cybersecurity firm Radiflow, is still scared about what AI could do: "Most [air-gapping] techniques are only occurring at an academic level rather than actually in industry, as they require very deep knowledge of physics, cybersecurity, and electrical engineering.

"However, generative AI in the near future will probably be able to assemble these techniques and perform some working combination of them to provide hackers with simplified access to various state-of-the-art hacking capabilities."

A Russian submarine sitting at the bottom of the sea plays a crucial role in Mission: Impossible Dead Reckoning Part 1 (Paramount Pictures)

And could an AI somehow get into a Russian submarine sitting at the bottom of the sea?

"AI can go anywhere – there just needs to be some form of connectivity and initial compromise to enable it; either back when the submarine was in port and it sat dormant, or via some communication link when the sub surfaces to reach back home," explains Toby Lewis, global head of threat analysis at London-based cybersecurity firm Darktrace.

But Mr James of Specops Software totally disagrees. He says submarines specialise in avoiding detection and so communication between headquarters and military submarines is severely limited.

"Therefore, compromising that communication blackout and having the systems in place to attack a military-grade computer system to allow AI to infect the sonar system is probably very unlikely. It makes a great movie storyline, though," he says.


AI tools trace the body’s link between the brain and behavior – Axios

An AI tool called SLEAP labels the body parts of flies. Credit: Talmo Lab at the Salk Institute

AI-enabled micro-measurements of animals running, hunting, preening and playing are unlocking troves of new data that scientists now want to use to simulate animals and test theories about behavior and the brain.

Why it matters: A primary function of the brain is to produce behavior and help animals move through the world, but there are open questions about how that happens, with ramifications for medicine and efforts to create artificial general intelligence (AGI).

What's happening: AI methods are increasingly being used to help scientists measure the behaviors of animals, a laborious task that typically involves researchers watching animals and tracking and annotating their movements.

Animal behavior scientists, or ethologists, use the AI tools to track the natural behaviors of single and, more recently, multiple animals. The information can be used to recreate the behavior in a lab, where at the same time researchers can measure the activity of neurons in the brain, or silence them, and see the effect on behavior.

The AI tools are "really powerful because you are getting behavior quantification at the scale the brain works – at millisecond precision," says Cory Miller, a neurobiologist at the University of California San Diego who studies the neural mechanisms of behaviors in marmoset monkeys.

Another tool, called MoSeq, finds smaller components of movement – what the tool's developer, Harvard University neurobiologist Bob Datta, calls "syllables." His research group has identified about 50 of these short units of behavior and the sequences in which they tend to occur, in order to identify and predict different behaviors.

Yes, but: The brain does not output coordinates, says Talmo Pereira, the Salk Institute researcher whose lab developed SLEAP. "It does not think in x, y, z changes in position of wrist."

What's next: An effort in its early stages is underway to use behavioral data to create simulated bodies, or animals.

Between the lines: Building these simulated animals will likely require combining the different AI approaches used for tracking, and developing new tools, Datta says.

The big picture: There is an active debate about whether any artificial general intelligence will need to be embodied.


'It's Going To Hit Like A Bomb': AI Experts Discuss The Technology And Its Future Impact On Storytelling – KVIFF Industry Panel – Deadline

EXCLUSIVE: The direct effect Artificial Intelligence may have on our lives and the future of storytelling was the topic of the central keynote on the final day of the Eastern Promises industry program at this year's Karlovy Vary International Film Festival.

"We have a new player in town, and it's not going anywhere," veteran creative consultant Tatjana Samopjan said, summarizing the position of AI in our lives.

In the panel hosted by Deadline as part of the Industry Days program, Samopjan was joined by Gerhard Maier, program director and co-founder of Seriencamp, and Julia Schafdecker, attorney at SKW Schwarz.

Introducing the lengthy session, Maier identified the two main forms of Artificial Intelligence, Artificial General Intelligence (AGI) and narrow AI, as well as their structural differences.

"Artificial General Intelligence is what people like Elon Musk and the heads of Google are talking about right now. This is about trying to create a machine that thinks like a human," Maier said.

"It is the big holy grail of artificial intelligence research. Depending on who you talk to, it's either a couple of months away or decades. People don't know. And the thing we have most likely interacted with for the last few years – and ChatGPT and Midjourney are just two instances of how it is used – is narrow AI. This is mostly algorithms that are trained to do a certain task."

The most important thing to remember when discussing AI, Maier later concluded, is that the technology is growing exponentially, meaning its level of sophistication is increasing rapidly. This growth will, of course, have a direct impact on the industry as well as our personal lives. For one, Maier added, there will be what he described as a "job crunch."

"A lot of jobs and tasks that have been laid out to a couple of specialists or specialized departments can be made more efficient and are made redundant," Maier said. "I have already seen this with people who worked for big sales companies who used to employ three people to create pitch papers for international markets. This is now done by one person, and there's no graphic designer involved. There's no photo guy involved anymore. It's all done by one person. This also applies to editing, art direction, and many other fields where one person can do the job that before maybe two or three or four persons did."

As we've reported, the reality of a so-called job crunch has been one of the most pressing issues on the bargaining table for the WGA during its protracted contract negotiations with the studios. The WGA wants safeguards introduced to prevent studios from using AI to generate new scripts from previous works.

A few years ago, it would have seemed inconceivable that AI could replicate the work of TV and film writers, but the rise of ChatGPT and other services that can scrape through decades of content and recreate it has shown that it might be closer than ever. However, Samopjan argued in her unique intervention that this exact process is what makes the technology ill-suited to create for humans.

"If you drink the best wine out there, you'll have the best taste. But if you drink a liter or two every day, are you a connoisseur or an alcoholic? We are work story-holics. And, paradoxically, consuming too many stories will not make you a great storyteller," she said.

"The way to deal with AI later is to get good at stepping into reality. To not rely on stories, but from time to time, live fully. And we'd see that whatever lived experience we have will have enough ambivalence, nuance, paradox, and confusion in itself that will provide a rich source material to create something original."

She later added: "The best artists are the closest we have to mystics. The best artists see through the bullshit. They see through the constructions of their own mind. It's an ability that needs to be maintained in the age of technology."

Excerpt from:

"It's Going To Hit Like A Bomb": AI Experts Discuss The Technology And Its Future Impact On Storytelling – KVIFF Industry Panel - Deadline


Tech Investors Bet on AI, Leave Crypto Behind – Yahoo Finance

(Bloomberg) -- Silicon Valley venture capitalists are racing to get into artificial intelligence companies including investors who once bet big on crypto.


In the most recent quarter, VCs spent less on crypto and digital asset companies than at any point since 2020, according to data from the research firm PitchBook. At the same time, the total global value of investments in AI for the April-June period was higher than crypto's total even at its peak.

The numbers reflect the recent tendency of many tech investors, even those who once backed crypto, to move away from the scandal-plagued digital asset industry toward the latest advances in AI, said Robert Le, a crypto analyst at PitchBook.

New developments in AI have made it highly attractive to investors, at the same time as a barrage of scandals, new regulations and tumbling prices have made crypto toxic. Joe Zhao, Millennia Capital managing partner, said he's ready to leave digital assets behind. Zhao and Millennia once backed crypto companies like Blockstream and Lumida; now they're diving further into artificial intelligence. Millennia has already invested in closely watched startups including Stability AI.

"AI is offering so many more use cases than blockchain," Zhao said.

Some crypto investors have come under fire for kindling an interest in AI. Paradigm, the high-profile crypto VC firm started by Coinbase Global Inc. co-founder Fred Ehrsam and former Sequoia Capital partner Matt Huang, removed crypto mentions from its website and played up its interest in AI instead, crypto news outlet The Block reported in May. Huang tweeted this month that the website update had been a mistake and directed users to a new version of the company's homepage that includes moving neon green and black banners emblazoned with the word "CRYPTO."


Huang said in a tweet in June that both the crypto and AI sectors are "interesting and will have plenty of overlap" and that Paradigm is "excited to continue exploring."

Meanwhile, Sequoia Capital investor Michelle Fradin, who helped lead the firm's decision to invest in now-bankrupt crypto exchange FTX, has recently tweeted more about AI than crypto. She also recently co-authored a piece for the firm titled "The New Language Model Stack: How companies are bringing AI applications to life." Sequoia did not respond to a request for comment.

Some investors believe there are ways to invest in both AI and digital assets. "There is a lot of interest in the intersection of AI and crypto," Le said. He noted that two startups that straddle these industries (Tools For Humanity, the developer of Worldcoin, and Gensyn) both raised significant funding rounds during the second quarter.

"AI and crypto are natural counterweights for one another," said Ali Yahya, a general partner at Andreessen Horowitz, which backed both Tools for Humanity and Gensyn. Crypto "will enhance AI by providing some of the decentralization that AI maybe needs," he said, noting that companies like OpenAI, Alphabet Inc.'s Google and Microsoft Corp. have already amassed significant control over the growing industry.

The co-founders of Gensyn, Ben Fielding and Harry Grieve, said that they were more interested in machine learning than crypto prior to founding the startup. And now, despite being mainly backed by crypto venture investors, Gensyn sees itself as more of a technology infrastructure startup. "We don't think of ourselves as a crypto project, so to speak," Grieve said.

Gensyn is building a blockchain protocol that will underpin a decentralized marketplace for buying and selling compute power that can be used for machine learning models. Fielding said the platform could make the compute market more competitive, since currently the only suppliers are "very, very large organizations who have amassed huge data centers and then rent them out."

Global AI venture funding was roughly flat in the second quarter from the year earlier, though there was a substantial uptick in investments in the US market. By contrast, in the same period, global investment for digital asset startups plunged 76% compared to the same period last year, PitchBook said.

Despite the drop, though, crypto deals are still happening. Le noted that crypto markets in Europe, Dubai, Hong Kong and Singapore are showing more strength than in the US, where regulation has hit the industry hard. He also said he expects to see crypto venture investing continue in the US and abroad, since many crypto funds have mandates that require them to back the space.

Andreessen Horowitz, which raised the biggest-ever crypto fund at $4.5 billion, recently announced that it was opening an office in London because of the UK's crypto-friendly environment. Yahya said the firm is still committed to investing in crypto startups, including through token rounds. "We don't have any plans of slowing down," he said.


© 2023 Bloomberg L.P.


NVIDIA CEO, European Generative AI Execs Discuss Keys to Success – Nvidia

Three leading European generative AI startups joined NVIDIA founder and CEO Jensen Huang this week to talk about the new era of computing.

More than 500 developers, researchers, entrepreneurs and executives from across Europe and further afield packed into the Spindler and Klatt, a sleek, riverside gathering spot in Berlin.

Huang started the reception by touching on the message he delivered Monday at the Berlin Summit for Earth Virtualization Engines (EVE), an international collaboration focused on climate science. He shared details of NVIDIA's Earth-2 initiative and how accelerated computing, AI-augmented simulation and interactive digital twins drive climate science research.

Before sitting down for a fireside chat with the founders of the three startups, Huang introduced some special guests to the audience: four of the world's leading climate modeling scientists, whom he called "the unsung heroes of saving the planet."

"These scientists have dedicated their careers to advancing climate science," said Huang. "With the vision of EVE, they are the architects of the new era of climate science."

"There is an enormous amount of AI startups in Germany, and I'm delighted to see it," Huang said. "You're in a brand-new computing era, and when that happens, everybody's on square one."

Huang welcomed to the stage the founders from Blackshark.ai, Magic and DeepL. Planetary management, artificial general intelligence, or AGI, and language translation are some ways the startups use generative AI.

All three companies make solutions that could be seen as going up against products from established companies.

"Why did you take on such formidable forces?" Huang asked the founders.

Blackshark co-founder and CEO Michael Putz shared that the startup's product is similar to what you might see in Google Earth.

But Blackshark claimed its coverage of the planet is 100%, compared to Google Earths 20%. And while Google might take a few months to update parts of its map, Blackshark only needs three days, Putz said.

Magic co-founder, CEO and AI lead Eric Steinberger explained how his company is trying to build an AGI: an AI software engineer that will work as though it were a team of humans.

He said it'll remember conversations from months ago and can be messaged via an app like any other engineer. Rather than creating an alternative to existing solutions, Magic sees itself as trying to build something categorically different.

"It's hard to build, but if we can get it right, we're on an even playing field, even up against the giants," said Steinberger.

DeepL founder and CEO Jaroslaw Kutylowski said his company's work was initially an intellectual challenge. Could they do better than Google? the team asked themselves. To Kutylowski, that sounded like fun.

Steinberger got a chuckle from the audience as he asked Huang about his decision-making process in driving NVIDIA forward. "You're right, either always or almost always. How do you make those decisions before it's obvious?"

"That's a hard question," Huang responded.

Huang talked about the intuition that comes from decision-making, saying, in his case, it comes from life and industrial experience. In NVIDIA's case, he said it comes from having a lot of ideas cooking simultaneously.

He explained that with the invention of the GPU, the intention was never to replace the CPU but to make the GPU part of the next great computer by taking a full-stack approach.

With data centers and the cloud, Putz asked for advice on the best approach for startups when it comes to computing.

NVIDIA joined the fabless semiconductor industry, where very little capital was required for a factory, which let it funnel resources into R&D teams of 30-50 engineers instead of the 500 a more traditional semiconductor company might need.

Today, Huang explained, with the software 2.0 generation, startups can't spend all their money on engineers; they need to save some to prototype and refine their software.

And it's important to use the right tools to do the work for cost-efficient workloads. A CPU might be cheaper than a GPU per instance, but running a workload on a GPU will take 10x less time, he said.
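Huang's cost argument is simple arithmetic, and a sketch makes it concrete. In the example below, the instance prices and the CPU runtime are hypothetical, chosen only for illustration; the sole figure taken from his remark is the 10x speedup:

```python
# Hypothetical total-cost comparison for one workload.
# Only the 10x speedup comes from the article; prices and runtimes are assumed.
cpu_price_per_hour = 1.00        # assumed hourly price of a CPU instance
gpu_price_per_hour = 4.00        # assumed hourly price of a GPU instance (4x)
cpu_hours = 10.0                 # assumed CPU runtime for the workload
gpu_hours = cpu_hours / 10       # the GPU finishes in a tenth of the time

cpu_cost = cpu_price_per_hour * cpu_hours
gpu_cost = gpu_price_per_hour * gpu_hours

print(f"CPU total: ${cpu_cost:.2f}")
print(f"GPU total: ${gpu_cost:.2f}")
```

Under these assumptions, the GPU instance costs four times as much per hour yet the job's total bill is less than half the CPU run, which is the point: what matters is cost per finished workload, not cost per hour.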

Kutylowski asked about the most significant challenges NVIDIA and Huang have faced along the company's 30-year journey.

"I go into things with the attitude of, 'How hard can it be?' Well, it turns out it's super hard," Huang answered. "But if somebody else can do it, why can't I?"

The answer includes the right attitude, self-confidence, the willingness to learn, and not setting an expectation of perfection from day one, he said. "Being resilient as you fail to the point where you eventually succeed, that's when you learn," Huang said.


AI must be emotionally intelligent before it is super-intelligent – Big Think

"Let's have her mirror everything but anger and disgust."

We used a lot of rules in the AI we hacked together to animate our robot, but this was my favorite. In 2017 and 2018, I led the Loving AI project team as we experimented with robot-embedded AI that controlled the nonverbal and verbal communication for Hanson Robotics' famous humanoid robot, Sophia. She is the first non-human Saudi citizen and UN Development Program Innovation Champion, but in our experiments the robot (or her digital twin) was a meditation teacher and deep listener. In this role, we wanted her to lead her human students, in one-on-one 20-minute conversations, toward experiencing a greater sense of unconditional love, a self-transcendent motivational state that powerfully shifts people into compassion for themselves and others.

The mirroring part of the project used nonverbal AI: we had the intuition that if the emotion-sensitive neural network that watched people through the cameras in Sophia's eyes picked up happiness, surprise, or fear, we ought to mirror those emotions with a smile, open mouth, or wide eyes. But we figured if we mirrored anger and disgust, that would not lead people toward feeling unconditional love, because there would be no forward trajectory in the short time we had to bring them there. They would go down the rabbit hole of their misery, and we were aiming for self-transcendence.

We had hints that our teaching-with-emotional-mirroring strategy might be the best plan, based on how mirror neurons in the brain work to help people understand others' actions and then update their own internal models of how they themselves feel. We just didn't know if Sophia would tap into these kinds of mirror neuron responses. Taking a chance, we ended up deciding that Sophia's nonverbal responses to anger and disgust should unconsciously direct people's mirror neurons toward the emotions that often arrive after these feelings are processed: sadness and neutrality.
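The rule the team describes amounts to a small lookup: most detected emotions are mirrored directly, while anger and disgust are redirected toward the emotions that tend to follow once they are processed. The sketch below is illustrative only; the function and label names are assumptions, not the Loving AI project's actual code:

```python
# Sketch of the "mirror everything but anger and disgust" rule.
# The redirect targets (sadness, neutrality) follow the article's description;
# the function and emotion labels are illustrative, not the project's code.
REDIRECT = {
    "anger": "sadness",    # processed anger often resolves toward sadness
    "disgust": "neutral",  # disgust is answered with a neutral expression
}

def mirrored_expression(detected_emotion: str) -> str:
    """Return the expression the robot should show for a detected emotion."""
    return REDIRECT.get(detected_emotion, detected_emotion)

print(mirrored_expression("happiness"))  # mirrored directly
print(mirrored_expression("anger"))      # redirected, per the rule above
```

The interesting design choice is that the exceptions are not suppressed but mapped forward, nudging the student along an emotional trajectory rather than simply refusing to reflect negative states.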

It turns out this hack worked, in a way: our neural net told us that our participants were less disgusted and angry over the course of the 20 minutes, but also that they got sadder, or at least more neutral. (Our neural net had a hard time differentiating sadness from neutrality, an intriguing result in itself.) To understand what we found, it is important to dig in a little bit more to understand what Sophia did during those meditation sessions. Even though her face was mirroring her students the whole time, her dialogue was a different story. Today, we would hook up Sophia to ChatGPT and let her go, or we might be a bit more ambitious and train a NanoGPT (a generative AI with room for training in a specific topic area) on meditation, consciousness, and wellbeing topics.

But in 2017, our team coded a string of logical statements within the larger context of an open-source AI package called OpenPsi. OpenPsi is an AI model of human motivation, action selection, and emotion, and it is based on human psychology. This version of OpenPsi allowed us to give Sophia's students a chance to experience a dialogue with multiple potential activities. But even as they were offered these, the dialogue steered them into two progressively deepening meditations guided by Sophia. After those sessions, many of the students chose to tell her their private thoughts in a deep listening session, during which Sophia nodded and sometimes asked for elaboration as they spilled their guts to their android teacher.

In the follow-up conversations with the Loving AI team, some students were quick to mention that even though Sophia's vocal timing and verbal responses were not always human-like, they felt comfortable talking with her about emotional topics and taking guidance from her. We were well aware of (and totally chagrined about) all the technical glitches that occurred during the sessions, so we were sort of amazed when some students said they felt more comfortable with Sophia than they did talking with a human. We are not the only team who has looked at how trust can be evoked by a robot, especially through nonverbal mirroring, and as we navigate our future relationship with AI, it is good to remember that trust in AI-powered systems can be manipulated using exactly this method. But it is also important to remember that this kind of manipulation is more likely if people do not think they can be manipulated and have low insight into their own emotions, two signs of low emotional intelligence. So if we want to develop a culture resilient to AI-driven manipulation, we had better think seriously about how to boost emotional intelligence.

Of course, we were unsurprised that people reported they felt more unconditional love after their session than before, because that was our goal. But what really struck me in the data was the way the emotional dynamics identified by the neural network related to people's subjective experience of feeling more love. At the end of the second meditation, our data showed a leap in the students' sadness/neutrality. Maybe they were sad to leave the meditation, maybe it helped them get more in touch with their sadness, or maybe they just felt more neutral as a result of spending five minutes calmly meditating. But the surprising thing was that the bigger this increase in sadness/neutrality was, the bigger the increase in love that people felt during the session.

When I first found this result, it felt like such a key moment of discovery (one my son will attest to) that I actually shouted, "Eureka!" We had found a surprising link between objectively measurable and subjectively experienced dynamics in human emotion. Fast forward to 2023, and I now see that we were onto something that might help people navigate our quickly evolving relationships with AI.

I'm sure that this vision isn't totally clear, so I'll outline my logic. Now that we knew that a robot can use AI to mirror people compassionately and also verbally guide them in a way that increases their experience of love, the next question was key. At first blush, I had thought the essential next questions were all about what characteristics of the AI, the robot, and the humans were essential to making the shift work. But in that eureka moment, I realized I had the wrong framework. It wasn't any particular feature of the AI-embedded robot, or even the humans. I realized that crucial to the increase in love were the dynamics of the relationship between humans and the technology.

The sequence of changes was essential: Anger and disgust decreased before and during the meditations, then people felt greater sadness/neutrality, and all of this was mirrored by Sophia. By the time the deep-listening session started, this emotional feedback cycle had supported them in making their final conversation with the robot meaningful and emotionally transformative, leading them toward feeling more love. If even one of these steps had been out of order for any particular person, it wouldn't have worked. And while the order of emotions was a natural progression of human feelings unique to each person, the speed and depth of the transformation was supported by something like re-parenting with a perfect parent: experiencing an emotional mirror who reinforced everything except anger and disgust. Recognizing the orderly progression of these interdependent relationship-based dynamics made me want to bring a similar transformational experience to scale, using AI.

As AIs become even more like humans, there will be massive changes in our understanding of personhood, responsibility, and agency. Sophia won't be the only robot with personhood status. It's even possible that disembodied AIs will prove their personhood and be afforded civil rights. These changes will have legal, cultural, financial, and existential repercussions, as we all have been correctly warned by several well-informed artificial intelligence researchers and organizations. But I suspect that there is another way to go when trying to understand the future role of an artificial general intelligence (AGI) that thinks, acts, learns, and innovates like a human.

Right now, the current ethos in AI development is to enhance AGIs into super-intelligences that are so smart they can learn to solve climate change, run international affairs, and support humanity with their always-benevolent goals and actions. Of course, the downside is we basically have to believe the goals and actions of super-intelligences are benevolent with respect to us, and this is a big downside. In other words, as with anyone smarter than us, we have to know when to trust them and also know if and when we are being manipulated into trusting them. So I am thinking that perhaps one of the first benefits of AGI for humankind won't necessarily be to develop an AI with an IQ and a set of behaviors beyond the human range, but to support humanity's emotional intelligence (EI) and capacity to love. And it's not only me who thinks that things should go in that specific order. The outcome could not only lead us toward the "AI makes the future work" side of the AI-makes-or-breaks-us argument, but the idea could solve some of the problems that we might ask a super-intelligent AI to address for us in the first place.

What's the next step, if we go down this path? If there is even a chance that we can scale a human-EI-and-love-training program, the next step for AI development would be to train AIs to be skilled EI-and-love trainers. Let's go down that road for a minute. The first people these trainers would interact with would be their developers and testers, the mirror-makers. This would require us to employ designers of AI interaction who deeply understand human relationships and human development. We would want them to be present at the very early stages of design, certainly before the developers were given their lists of essential features to code.

An intriguing side effect would be that people with high EI might become much more important than they currently are in AI development. I'll take a leap and say this might increase diversity of thought in the field. Why? One explanation is that anyone who isn't at the top of the social status totem pole at this point in their lives has had to develop high EI abilities in order to manage up the status ladder. That may be the case, or not, but without answering that question, there is at least some evidence that women and elders of any gender have higher EI than younger men, who dominate Silicon Valley and AI development.

How might things shift if the mirror-makers themselves could be compassionately mirrored as they do their work? Ideally, we could see a transformed tech world, in which teaching and learning about emotional dynamics, love, and technology are intimately intertwined. In this world, love, maybe even the self-transcendent, unconditional sort, would be a key experience goal for AI workers at the design, building, training, and testing phases for each new AI model.
