Category Archives: Artificial General Intelligence

The Government’s Role In Progressing AI In The UK – New … – Mondaq News Alerts

OpenAI launched ChatGPT, built on GPT-3.5, in November 2022 and since then it has set growth records as it spread like wildfire. Today, it nears one billion unique visitors per month. Since its launch, the world has been all-consumed with talking about AI and its potential use cases across a wide range of industries.

Sam Altman, co-founder and CEO of OpenAI, has said that AI tools can find solutions to "some of humanity's biggest challenges, like climate change and curing cancer".

There's also been plenty of talk about the largest tech companies (namely Google and Meta, as well as Microsoft) and their race in the pursuit of Artificial General Intelligence (AGI). This makes it sound very much like an arms race, a comparison many have made. Within any race, there's often the concern that competitors will cut corners, and in this particular race many fear that the consequences could be disastrous. In this article, we'll explore the possible consequences and the UK's stance on the regulation of AI to help safeguard against them.

AI is seen as central to the government's ambition to make the UK a science and technology superpower by 2030, and Prime Minister Rishi Sunak again made this clear in his opening keynote at June's London Tech Week: "If our goal is to make this country the best place in the world for tech, AI is surely one of the greatest opportunities for us".

As discussed here, AI was also a headline feature earlier this year in the government's Spring Budget. Both within this Budget and since then, the following has been announced:

Despite the many potential benefits of AI, there's also growing concern about its risks, ranging from disinformation to evolving cybersecurity threats. Two of the most widely discussed risks of AI are:

Most AI tools use Large Language Models (LLMs), which effectively means they are trained on large datasets, mostly publicly available on the internet. It stands to reason that these tools can only be as good as the data they're trained on; if this data isn't carefully vetted, the tools will be prone to misinformation and even bias, as we saw with Microsoft's infamous Twitter chatbot Tay, which quickly began to post discriminatory and offensive tweets.

AI alignment is a growing field within AI safety that aims to align the technology with our (i.e. human) goals. AI alignment is therefore critical to ensuring that AI tools are safe, ethical and consistent with societal values. For example, OpenAI has stated: "Our research aims to make AGI aligned with human values and follow human intent".

Sir Patrick Vallance, the UK's former Government Chief Scientific Adviser, warned earlier this year that "there will be a big impact on jobs and that impact could be as big as the Industrial Revolution was". This isn't an uncommon view either: Goldman Sachs recently predicted that roughly two-thirds of occupations could be partially automated by AI. More worryingly, IBM's CEO Arvind Krishna predicted that 30% of non-customer-facing roles could be entirely replaced by AI and automation within the next five years, which equates to 7,800 jobs at IBM. Job displacement and economic inequality are huge risks of AI.

Many have warned of other risks such as privacy concerns, power concentration, and even existential risks. As this is a fast-evolving industry, you could also argue that, since we don't yet fully understand what AI could look like and be used for in the future, we also don't yet know all of the risks the future will bring.

Despite talking about the potential benefits of AI, ranging from superbug-killing antibiotics to agricultural uses and its potential in finding cures for diseases, Rishi Sunak also recognised the potential dangers: "The possibilities are extraordinary. But we must, and we will, do it safely. I know people are concerned". Keir Starmer, also at London Tech Week, continued this theme, saying "we need to put ourselves into a position to take advantage of the benefits but guard against the risks" and calling for the UK to "fast forward" AI regulation.

Rishi Sunak also went on to say that "the very pioneers of AI are warning us about the ways these technologies could undermine our values and freedoms, through to the most extreme risks of all". This could be a reference to multiple pioneers, including:

Despite these calls, it should also be acknowledged that AI is extremely difficult to regulate. The technology is constantly evolving, so it's hard to predict what it will look like tomorrow and, as a result, what regulation needs to look like if it is not to become quickly obsolete. The fear for governments, and the pushback from AI companies, is that overregulation will stifle innovation and progress, including all the positive impacts AI could have, so a balance must be struck.

Earlier this year, it seemed that the UK's stance on regulation was to be very hands-off, with matters largely left to existing regulators and the industry itself by taking a "pro-innovation approach to AI regulation" (the name of the white paper initially published on 29th March 2023). Within this white paper, unlike the EU, the UK government confirmed that it wasn't looking to adopt new legislation or create a new regulator for AI. Instead, it would look to existing regulators like the ICO (Information Commissioner's Office) and the CMA (Competition and Markets Authority) to "come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors". This approach was criticised by many, including Keir Starmer, who commented that "we haven't got an overarching framework".

However, since this white paper (which has since been updated), Rishi Sunak has shown signs that the UK's light-touch approach to regulation needs to evolve. At London Tech Week, he stated that he wants "to make the UK not just the intellectual home but the geographical home of global AI safety regulation". This was coupled with the announcement that the UK will host a global summit on safety in artificial intelligence this autumn, an event which, according to a No. 10 spokesman, will "provide a platform for countries to work together on further developing a shared approach to mitigate these risks".

Since then, £100m has also been announced for the UK's AI Foundation Model Taskforce, with Ian Hogarth, co-author of the annual State of AI report, named to lead it. The key focus for this Taskforce will be "taking forward cutting-edge safety research in the run-up to the first global summit on AI". It isn't just the first global summit that is coming to the UK: OpenAI has confirmed that its first international office will be opening in London. Sam Altman stated this is an opportunity to "drive innovation in AGI development policy" and that he's excited to see "the contributions our London office will make towards building and deploying safe AI".

Time will tell on both the potential (both good and bad) of AI and how regulation within the UK and globally rolls out, but it's clear that the UK wants to play a leading role in both regulation and innovation, which may at times clash with each other. In an interview with the BBC on AI regulation, Sunak said: "I believe the UK is well-placed to lead and shape the conversation on this because we are very strong when it comes to AI".

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Excerpt from:

The Government's Role In Progressing AI In The UK - New ... - Mondaq News Alerts

The AI Canon: A Curated List of Resources to Get Smarter About … – Fagen wasanni

The field of artificial intelligence (AI) is advancing at a rapid pace, making it challenging for both experts and beginners to keep up with the latest developments. To help with this, we have compiled a curated list of resources that have had a significant impact on the field in recent years. We call it the AI Canon, which includes papers, blog posts, courses, and guides that can enhance your understanding of modern AI.

We begin with an introduction to transformer and latent diffusion models, which are driving the current AI wave. These resources provide a gentle introduction to these concepts, allowing you to grasp the fundamental ideas behind them. Understanding transformers and latent diffusion models is vital in comprehending the latest advancements in AI.

Next, we delve into technical learning resources and practical guides for building with large language models (LLMs). Andrej Karpathy, a respected figure in the field of AI, explains how AI represents a powerful new way to program computers. His observations from 2017 have proven remarkably prescient and continue to shape our understanding of the AI market.

The State of GPT, also written by Karpathy, offers an approachable explanation of how ChatGPT and GPT models function, how to utilize them effectively, and what future research and development might entail. Additionally, computer scientist and entrepreneur Stephen Wolfram provides a detailed explanation of modern AI models, outlining their progression from early neural networks to present-day LLMs and ChatGPT.

To gain a better understanding of transformers, Dale Markowitz's post offers a concise answer to the question of what an LLM is and how it operates. Although the post primarily focuses on GPT-3, the information is transferable to newer models. Similarly, Chris McCormick's explanation of how Stable Diffusion works provides valuable intuition around text-to-image models, specifically in the field of computer vision.

To familiarize yourself with the key terms and technologies in modern AI, we recommend consulting the AI glossary by a16z. This resource offers definitions of commonly used terminology in the field, ensuring you stay informed and up-to-date with the latest advancements.

For those seeking a more in-depth understanding of machine learning and AI fundamentals, there are university-level courses available. Stanford's CS229: Introduction to Machine Learning with Andrew Ng covers the basics of machine learning, while CS224N: NLP with Deep Learning with Chris Manning focuses on natural language processing (NLP) and the initial generation of LLMs.

Furthermore, we have compiled a selection of resources that explain how LLMs work, catering to a diverse audience. Stanford's online seminar, CS25: Transformers United, provides an in-depth exploration of transformers, and CS324: Large Language Models offers insights into both technical and non-technical aspects of LLMs.

In terms of reference and commentary, we recommend exploring Yann LeCun's talk on predictive learning from NIPS 2016, emphasizing the significance of unsupervised learning as a crucial aspect of large-scale AI models. Andrej Karpathy's talk on AI for full-self driving at Tesla provides valuable insights into the challenges associated with long-tailed problems in the field of AI. Additionally, Gwern's post on the scaling hypothesis elucidates the concept that increasing data and compute can enhance accuracy in LLMs.

For a comprehensive overview of current LLMs, including their development timeline, size, training strategies, and more, we suggest referring to a survey of large language models. Sparks of Artificial General Intelligence: Early experiments with GPT-4 offers preliminary analysis from Microsoft Research on the capabilities of the most advanced LLM, GPT-4. Familiarizing yourself with AI agents, such as Auto-GPT, is also critical, as they represent a new era of automation and creativity.

As LLMs become increasingly central to AI applications, we have gathered resources to aid in understanding the application stack. Although formal education on this topic is still limited, we recommend exploring resources such as building a GitHub support bot with GPT-3, LangChain, and Python. This early explanation of the modern LLM app stack kickstarted widespread adoption and experimentation of new AI applications. Additionally, Chip Huyen's discussion on building LLM applications for production addresses key challenges and recommends suitable use cases.

To enhance your prompt engineering skills when utilizing LLMs, we suggest referring to the Prompt Engineering Guide, which provides comprehensive guidance and specific examples for popular models. Brex's prompt engineering guide offers a lighter, more conversational approach to this topic.

In conclusion, the AI Canon is a curated collection of resources designed to broaden your knowledge of modern AI. By exploring these materials, you can stay informed about the latest advancements and deepen your understanding of this rapidly evolving field.

Read more:

The AI Canon: A Curated List of Resources to Get Smarter About ... - Fagen wasanni

China striving to be first source of artificial general intelligence, says think tank – The Register

Chinese researchers published 850 papers pertaining to artificial general intelligence (AGI) between 2018 and 2022, indicating Beijing's efforts to create a thinking machine are real and active, possibly including research on brain/computer interfaces.

So says think tank the Center for Security and Emerging Technology (CSET) in a recently released report that claims Beijing's effort "challenges emerging global norms, underscoring the need for a serious open source monitoring program to serve as a foundation for outreach and mitigation."

The report is based on examination of scientific papers touching on a dozen relevant technologies, an effort that found 500 items of research concerning "routine AI applications" plus "a significant body of research on AGI precursor technologies, indicating that China's claims to be working toward artificial general intelligence are genuine and must be taken seriously."

Those claims were first articulated in 2017, when China published a document titled the "New Generation Artificial Intelligence Development Plan". One of the goals of that plan is "to build China's first-mover advantage in the development of AI."

The CSET report suggests that plan is advancing nicely, identifies the universities that have made the biggest contributions to the AGI effort, and notes that five of the most prolific sources of AGI research are institutions located in the city of Beijing.

But that may not mean Beijing-based boffins are doing all the heavy lifting.

"While acknowledging the Beijing-area concentration, given AGI's multi-disciplinary basis and the multiple paths through which it may be realized, the possibility of breakthroughs elsewhere in China cannot be ruled out," the report states. "Limited data suggest that Beijing may be serving as China's AGI research hub for testing and deployment done elsewhere in China, in Wuhan especially."

The report asserts "China appears to be exploring multiple paths to AGI, including a potential approach not covered in this study, namely, cognitive sharing through BCIs."

BCIs are brain/computer interfaces: a meeting of wetware and hardware.

The report notes that some of China's AGI research involves researchers from overseas, but "data show that the bulk of it is situated in Chinese institutions." The authors appear to worry that it's therefore hard for the rest of the world to understand China's achievements, or lack thereof, and note that "hiding scientific research, for example, by restricting access to academic journals, may lead to false assumptions that devolve into a vicious cycle of measures and countermeasures."

To demonstrate the problems associated with that situation, the authors refer to the "missile gap": the mistaken belief in the 1960s that the Soviet Union's missile tech considerably exceeded that of the United States, which led to poorly informed policy responses that did little to ease the tensions of the time.

"Pursuing this train of thought further, it is highly likely that one's inability to gauge the status and intent of a potential rival through open sources will lead to clandestine efforts to procure this same information and more driving science further underground to no-one's long-term benefit," the authors assert.

The report concludes with a call for US policymakers contemplating AGI safeguards to recognize that rivals understand the strategic significance of the tech, and of developing it first.

"Although an unrestrained race to the top is risky, unilateral restrictions on AGI development, trust-based agreements that cannot be verified, and one-sided adherence to ELSI/ELSA (ethics, legal and social implications/aspects) protocols are risky as well."

Go here to see the original:

China striving to be first source of artificial general intelligence, says think tank - The Register

AI humanoid robots hold UN press conference, say they could be more efficient and effective world leaders – Fox News

A panel of robots told reporters in Switzerland Friday that they could be more efficient leaders than human beings, among other statements.

The nine artificial intelligence-enabled humanoid social robots also explained at a Geneva conference center that they wouldn't take anyone's jobs or stage a rebellion.

Conference organizers at the United Nations-driven AI for Good Global Summit did not specify to what extent the robots' responses were scripted or programmed.

Some of the robots are capable of producing preprogrammed responses and the United Nations Development Program's first robot innovation ambassador, Sophia, sometimes relies on responses scripted by a team of writers at Hanson Robotics.

Ai-Da Robot (C), the first AI-powered robot artist, poses for a group photo with other robots during the UN AI for Good Global Summit, where they gave the keynote speech, on July 7, 2023, in Geneva, Switzerland. (Johannes Simon/Getty Images for Aidan Meller)

"I believe that humanoid robots have the potential to lead with a greater level of efficiency and effectiveness than human leaders," Sophia said, according to Yahoo News. "We don't have the same biases or emotions that can sometimes cloud decision-making and can process large amounts of data quickly in order to make the best decisions."

Health care robot Grace and rock star robot Desdemona were also in attendance.

"I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs," said Grace.

Human-shaped robot Ameca (L), beside Will Jackson, CEO and founder of British manufacturer Engineered Arts, addresses the media during a press conference on July 7, 2023, in Geneva, Switzerland. (Johannes Simon/Getty Images)

Reporters were instructed to speak slowly and clearly when addressing the group, and were told that any lags in response times would be due to the internet connection.

There were some audio issues and inconsistent replies.

The event was meant to demonstrate how new technology can support the intergovernmental organization's goals for sustainable development, as well as "human-machine collaboration."

United Nations agencies are already utilizing AI, including the World Food Programme's HungerMap project.

The two-day summit hosted by the International Telecommunication Union opened Thursday, with more than 50 robots in attendance.

"We have to engage and ensure a responsible future with AI," ITU Secretary-General Doreen Bogdan-Martin said in a release.

Reuters and The Associated Press contributed to this report.

Read more from the original source:

AI humanoid robots hold UN press conference, say they could be more efficient and effective world leaders - Fox News

AI likely to spell end of traditional school classroom, leading expert says – The Guardian

Exclusive: Prof Stuart Russell says technology could result in fewer teachers being employed, possibly even none

Recent advances in AI are likely to spell the end of the traditional school classroom, one of the world's leading experts on AI has predicted.

Prof Stuart Russell, a British computer scientist based at the University of California, Berkeley, said that personalised ChatGPT-style tutors have the potential to hugely enrich education and widen global access by delivering personalised tuition to every household with a smartphone. The technology could feasibly deliver most material through to the end of high school, he said.

"Education is the biggest benefit that we can look for in the next few years," Russell said before a talk on Friday at the UN's AI for Good Global Summit in Geneva. "It ought to be possible within a few years, maybe by the end of this decade, to be delivering a pretty high quality of education to every child in the world. That's potentially transformative."

However, he cautioned that deploying the powerful technology in the education sector also carries risks, including the potential for indoctrination.

Russell cited evidence from studies using human tutors showing that one-to-one teaching can be two to three times more effective than traditional classroom lessons, allowing children to get tailored support and be led by curiosity.

"Oxford and Cambridge don't really use a traditional classroom; they use tutors, presumably because it's more effective," he said. "It's literally infeasible to do that for every child in the world. There aren't enough adults to go around."

OpenAI is already exploring educational applications, announcing a partnership in March with an education nonprofit, the Khan Academy, to pilot a virtual tutor powered by GPT-4.

This prospect may prompt reasonable fears among teachers and teaching unions of fewer teachers being employed, possibly even none, Russell said. Human involvement would still be essential, he predicted, but could be drastically different from the traditional role of a teacher, potentially incorporating playground monitor responsibilities, facilitating more complex collective activities and delivering civic and moral education.

"We haven't done the experiments so we don't know whether an AI system is going to be enough for a child. There's motivation, there's learning to collaborate, it's not just 'Can I do the sums?'" Russell said. It will be essential to ensure that the social aspects of childhood are preserved and improved.

The technology will also need to be carefully risk-assessed.

"Hopefully the system, if properly designed, won't tell a child how to make a bioweapon. I think that's manageable," Russell said. A more pressing worry is the potential for hijacking of software by authoritarian regimes or other players, he suggested. "I'm sure the Chinese government hopes [the technology] is more effective at inculcating loyalty to the state," he said. "I suppose we'd expect this technology to be more effective than a book or a teacher."

Russell has spent years highlighting the broader existential risks posed by AI, and was a signatory of an open letter in March, signed by Elon Musk and others, calling for a pause in an out-of-control race to develop powerful digital minds. The issue has become more urgent since the emergence of large language models, Russell said. "I think of [artificial general intelligence] as a giant magnet in the future," he said. "The closer we get to it the stronger the force is. It definitely feels closer than it used to."

Policymakers are belatedly engaging with the issue, he said. "I think the governments have woken up, now they're running around figuring out what to do," he said. "That's good, at least people are paying attention."

However, controlling AI systems poses both regulatory and technical challenges, because even the experts don't know how to quantify the risks of losing control of a system. OpenAI announced on Thursday that it would devote 20% of its compute power to seeking a solution for steering or controlling a potentially super-intelligent AI, and preventing it from going rogue.

"The large language models in particular, we have really no idea how they work," Russell said. "We don't know whether they are capable of reasoning or planning. They may have internal goals that they are pursuing; we don't know what they are."

Even beyond direct risks, systems can have other unpredictable consequences for everything from action on climate change to relations with China.

"Hundreds of millions of people, fairly soon billions, will be in conversation with these things all the time," said Russell. "We don't know what direction they could change global opinion and political tendencies."

"We could walk into a massive environmental crisis or nuclear war and not even realise why it's happened," he added. "Those are just consequences of the fact that, whatever direction it moves public opinion, it does so in a correlated way across the entire world."

Read more here:

AI likely to spell end of traditional school classroom, leading expert says - The Guardian

27% of jobs at high risk from AI revolution, says OECD – Reuters

PARIS, July 11 (Reuters) - More than a quarter of jobs in the OECD rely on skills that could be easily automated in the coming artificial intelligence revolution, and workers fear they could lose their jobs to AI, the OECD said on Tuesday.

The Organisation for Economic Co-operation and Development (OECD) is a 38-member bloc, spanning mostly wealthy nations but also some emerging economies like Mexico and Estonia.

There is little evidence the emergence of AI is having a significant impact on jobs so far, but that may be because the revolution is in its early stages, the OECD said.

Jobs with the highest risk of being automated make up 27% of the labour force on average in OECD countries, with eastern European countries most exposed, the Paris-based organisation said in its 2023 Employment Outlook.

Jobs at highest risk were defined as those using more than 25 of the 100 skills and abilities that AI experts consider can be easily automated.
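
To make that threshold rule concrete, here is a minimal sketch of the classification logic in Python, assuming a hypothetical skills taxonomy; the skill names and occupation profiles below are illustrative stand-ins, not the OECD's actual data or methodology:

```python
# Illustrative only: a stand-in for the OECD's expert-rated taxonomy of
# 100 skills and abilities, of which some subset is deemed easily automated.
AUTOMATABLE_SKILLS = {f"skill_{i}" for i in range(40)}  # hypothetical subset

def is_highest_risk(job_skills: set, threshold: int = 25) -> bool:
    """Flag a job as 'highest risk' if it uses more than `threshold`
    of the easily automatable skills (the definition reported above)."""
    return len(job_skills & AUTOMATABLE_SKILLS) > threshold

# Two hypothetical occupations described by the skills they rely on.
data_entry_clerk = {f"skill_{i}" for i in range(30)}  # 30 automatable skills
surgeon = {f"skill_{i}" for i in range(10)} | {"dexterity", "clinical_judgment"}

print(is_highest_risk(data_entry_clerk))  # True  -> counts toward the 27%
print(is_highest_risk(surgeon))           # False (only 10 automatable skills)
```

Under this definition, the 27% figure is simply the share of the labour force employed in occupations that such a rule would flag.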

Three out of five workers meanwhile fear that they could lose their job to AI over the next 10 years, the OECD found in a survey last year. The survey covered 5,300 workers in 2,000 firms spanning manufacturing and finance across seven OECD countries.

The survey was carried out before the explosive emergence of generative AI like ChatGPT.

Despite the anxiety over the advent of AI, two-thirds of workers already working with it said that automation had made their jobs less dangerous or tedious.

"How AI will ultimately impact workers in the workplace and whether the benefits will outweigh the risks, will depend on the policy actions we take," OECD Secretary General Mathias Cormann told a news conference.

"Governments must help workers to prepare for the changes and benefit from the opportunities AI will bring about," he continued.

Minimum wages and collective bargaining could help ease the pressure that AI could put on wages, while governments and regulators need to ensure workers' rights are not compromised, the OECD said.

Reporting by Leigh Thomas; Editing by Emma Rumney

Here is the original post:

27% of jobs at high risk from AI revolution, says OECD - Reuters

Mission: Impossibles technology unpacked From AI to facial recognition – Yahoo Eurosport UK

Tom Cruise is back in Mission: Impossible Dead Reckoning, busting out lots of nifty-sounding technologies. But do any of them actually exist? (Paramount Pictures via AP)

Mission: Impossible films are always a lot of fun, and the seventh and latest instalment, Mission: Impossible Dead Reckoning Part 1, which came out in UK cinemas yesterday, is just as engaging and exhilarating as the previous films, not to mention incredibly timely, given recent news events.

Tom Cruise returns as Ethan Hunt, who in 2023 is still completing missions for the Impossible Mission Force (IMF) (no, not the International Monetary Fund, the other one), but this time he faces off against a mysterious, sentient, all-powerful artificial intelligence (AI) intent on controlling the world, which calls itself the Entity.

But how does the technology in MI7 stack up? Here's the down-low from The Standard and some technical experts.

Warning: Contains spoilers

Sentient AI, also known as The Entity, as depicted in Mission: Impossible Dead Reckoning Part 1 (Paramount Pictures)

In MI7, someone has built a terrifying new weapon that threatens all humanity: a godless, stateless, amoral AI that is able to be everywhere and nowhere at once, plus has the ability to twist and manipulate information to its will. Every government wants to control and weaponise this AI but, to do so, you need two halves of an intricate key, and so the race begins.

Even better, the AI is so powerful that it has a henchman working for it called Gabriel (Esai Morales) and is able to both give him instructions and create havoc by itself.

But is it possible for AI to be self-aware, and have the ability to experience emotions or push an agenda, such as self-protection or greed? Computer scientists overwhelmingly say no.

Sandra Wachter, a professor of technology and regulation at Oxford University, thinks the general public has gotten generative AI confused with artificial general intelligence (AGI), a far-off, fantastical concept about sentient, self-aware robots, like the murderous Skynet in the Terminator films.

"There is no scientific evidence that we are on a pathway to sentiency. There is not even evidence that such a pathway does exist," Prof Wachter tells The Standard.

In particular, she takes a dim view of scaremongering by some politicians and figures in the AI industry, who claim that AI could end up setting off nuclear warheads or be used to build dirty bomb chemical weapons in the next two years.

"The absence of evidence is not evidence of absence... I cannot warn you about something and respond if I don't know what the harms are. However, there is scientific evidence that AI causes harm such as bias, data protection violation or misinformation. We should focus on issues that already exist."

In MI7, US intelligence tries to locate Ethan Hunt (Tom Cruise) at Abu Dhabi Airport using facial-recognition software that scans CCTV camera footage in real time (Paramount Pictures)

In MI7, US intelligence tries to locate Ethan Hunt (Tom Cruise) at Abu Dhabi Airport using facial-recognition software, but every time they think that they have found him, it turns out to be someone else, a handy trick pulled off by Hunt's pals Benji Dunn (Simon Pegg) and Luther Stickell (Ving Rhames).

Meanwhile, The Entity handily steps in at will to scrub Gabriel's image from the live video footage, even hacking into the augmented-reality (AR) glasses worn by Ethan and impersonating other voices to trick him.

The software shown in the movie is used by police worldwide today, though not in AR glasses, and deepfake technology is thriving online, including AI videos featuring Tom Cruise himself. But many experts told The Standard we don't have the advanced computing power shown in MI7.

"It's certainly possible to adapt CCTV footage to change people's faces and voices... and it's also possible to obscure faces of the most wanted; however, we're some way away from being able to make such changes in real time," says Laura Kankaala, threat intelligence lead at Finland-based IT security firm F-Secure.

"The amount of computational resources required to do this in real time is crazy."

In MI7, Tom Cruise can see information about people while wearing augmented-reality glasses, like the man in this picture (Bram Van Oost / Unsplash)

On the other hand, Darren James, senior product manager at US password management firm Specops Software, thinks deepfakes have progressed: "With advances in network speed, compression, CPU, and graphics processing, we've also seen a huge jump in deepfake videos being able to change your face and voice in near real time. So, if you have the technology, know-how, and money, this is very doable."

Simon Newman, a member of International Cyber Expo's Advisory Council, says work on AR glasses is progressing, but there are a lot of privacy concerns, so you probably won't be able to look up passersby on the street and see if they have a criminal record.

"That said, augmented-reality glasses do offer the potential for normal consumers to make use of this technology: it will be possible to recognise friends on social media and bring their profile pages up on screen."

But others think we should forget about AR headsets coming any time soon.

"One nail-biting watch of Mission: Impossible Dead Reckoning Part 1 and you'd be forgiven for thinking AR headsets or goggles really are the next big thing, especially when Apple launched its mainstream AR headset last month, too. However, in reality, we're not that much further on than we were in 2013, with the launch and flop of Google Glass," Michael Queenan, chief executive of London-based data services firm Nephos Technologies, tells The Standard.

"There aren't any affordable mainstream models yet. That's why Google Glass failed but step counters and fitness trackers became mainstream. There needs to be a great experience for a great price that doesn't involve walking around with a computer on your head."

"Neither Apple nor Meta are on the mark just yet. At the moment, AR and VR headsets are just for the gaming community; there is no practical use for them."

Ethan Hunt, disguised as someone else, passes through a biometric scanner. Could this really work in real life? (Paramount Pictures)

One famous trope from the Mission: Impossible films is the super-realistic latex masks used by IMF agents to impersonate other people. In MI7, a handy machine in a briefcase can create one of these masks very quickly.

There is an interesting scene, however, where Ethan Hunt (Tom Cruise), disguised as someone else, passes through a biometric scanner at a US government building. Okay, so perhaps in this world masks can fool facial biometric scanners, but how does he beat hand scanners?

Today, we scan people's hands because the subcutaneous blood vessels of the human body form a distinctive vein pattern for each person.

According to the National Cyber Security Centre (NCSC), it is highly unlikely that in real life you could get through a biometric hand scanner using your own hands while posing as someone else.

Mr Newman says some biometric scanners have been spoofed by testers using fake prosthetic fingers; however, modern scanners can now detect whether the finger is alive or not, so perhaps this trick will be less common in the future.

IMF crew and seemingly master hackers Luther Stickell (Ving Rhames) and Benji Dunn (Simon Pegg) (AP)

Many people, including myself, can't get a 4G signal on the go, whether in London or in rural parts of the UK, yet Ving Rhames's Luther and Simon Pegg's Benji are able to hack into systems, communicate with Ethan, and monitor him using everything from wi-fi, to 5G, to even, apparently, piggybacking off satellites in space.

At one point, Benji is even able to monitor Ethan on a motorbike going over a cliff while sitting in the passenger seat of a car on autopilot moving across Helsetkopen, in Norway. Surely this one is a dud, but experts disagree.

"Absolutely, this is completely possible today, thanks to global satellite internet services and ubiquitous mobile communications. If you're in a basement or bunker, though, you may need another plan; in fact, at the Def Con conference in Las Vegas this summer, there will be a satellite-hacking competition," says Matt Aldridge, a principal solutions consultant at Canadian software firm OpenText's cybersecurity arm.

Cary Elwes plays NSA boss Denlinger in Mission: Impossible Dead Reckoning Part 1 (Paramount Pictures)

I almost fell off my chair in shock on hearing Cary Elwes, who plays National Security Agency (NSA) boss Denlinger in MI7, ask on-screen why air gapping, a cybersecurity technique often mentioned by academics, can't be used to prevent the AI from spreading to the US's most critical infrastructure.

Air gapping is a method of keeping computers and IT networks safe by making sure they are physically isolated from other computers and the internet. It's simple: if your machine isn't connected to anything, how can it fall foul of malware or rogue AIs?

However, the idea of an AI that can go anywhere is an exciting one, and experts are divided.

"An air gap is a physical space between an IT resource and a network. With all radio signals like wi-fi disabled, it's tough to circumvent. Whilst it has been proven that input/output can be perceived through minute fluctuations in power consumption, and even visually through the faint flicker of an LED light, the Mission: Impossible team are going to be hard-pressed to find any intelligence, artificial or otherwise, to make this work, at least in this dimension," says Steven Wood, director of solutions consulting at OpenText's cybersecurity division.

Manchester-based IT security firm NCC Group disagrees, although admits there is no known real-world example of such an attack.

"Even an air-gapped system needs to be updated, and USB drives are often used. As for exfiltrating sensitive data, there have been some academic papers and proofs of concept about using the internal fans or SATA cables to communicate via ultrasonic or electromagnetic waves," NCC Group's technical director Sourya Buswass tells The Standard.

Michael Langer, chief product officer of Israeli industrial cybersecurity firm Radiflow, is still scared about what AI could do: "Most [air-gapping] techniques are only occurring at an academic level rather than actually in industry, as they require very deep knowledge of physics, cybersecurity, and electrical engineering."

"However, generative AI in the near future will probably be able to assemble these techniques and perform some working combination of them to provide hackers with simplified access to various state-of-the-art hacking capabilities."

A Russian submarine sitting at the bottom of the sea plays a crucial role in Mission: Impossible Dead Reckoning Part 1 (Paramount Pictures)

And could an AI somehow get into a Russian submarine sitting at the bottom of the sea?

"AI can go anywhere; there just needs to be some form of connectivity and initial compromise to enable it: either back when the submarine was in port and it sat dormant, or via some communication link when the sub surfaces to reach back home," Toby Lewis, global head of threat analysis at London-based cybersecurity firm Darktrace, explains.

But Mr James of Specops Software totally disagrees. He says submarines specialise in avoiding detection and so communication between headquarters and military submarines is severely limited.

"Therefore, compromising that communication blackout and having the systems in place to attack a military-grade computer system to allow AI to infect the sonar system is probably very unlikely. Makes a great movie storyline, though," he says.

Originally posted here:

Mission: Impossibles technology unpacked From AI to facial recognition - Yahoo Eurosport UK

'It's Going To Hit Like A Bomb': AI Experts Discuss The Technology And Its Future Impact On Storytelling – KVIFF Industry Panel – Deadline

EXCLUSIVE: The direct effect Artificial Intelligence may have on our lives and the future of storytelling was the topic discussed during the central keynote on the final day of the Eastern Promises industry program at this year's Karlovy Vary International Film Festival.

"We have a new player in town, and it's not going anywhere," veteran creative consultant Tatjana Samopjan said, summarizing the position of AI in our lives.

In the panel hosted by Deadline as part of the Industry Days program, Samopjan was joined by Gerhard Maier, program director and co-founder of Seriencamp, and Julia Schafdecker, attorney at SKW Schwarz.

Introducing the lengthy session, Maier identified the two main forms of Artificial Intelligence, Artificial General Intelligence (AGI) and narrow AI, as well as their structural differences.

"Artificial General Intelligence is what people like Elon Musk and the heads of Google are talking about right now. This is about trying to create a machine that thinks like a human," Maier said.

"It is the big holy grail of artificial intelligence research. Depending on who you talk to, it's either a couple of months away or decades. People don't know. And the thing we have most likely interacted with for the last few years (and ChatGPT and Midjourney are just two instances of how it is used) is narrow AI. This is mostly algorithms that are trained to do a certain task."

The most important thing to remember when discussing AI, Maier later concluded, is that the technology is growing exponentially, meaning its level of sophistication is increasing rapidly. This untapped growth will, of course, have a direct impact on industry as well as our personal lives. For one, Maier added, there will be what he described as a "job crunch".

"A lot of jobs and tasks that have been laid out to a couple of specialists or specialized departments can be made more efficient and are made redundant," Maier said. "I have already seen this with people who worked for big sales companies who used to employ three people to create pitch papers for international markets. This is now done by one person, and there's no graphic designer involved. There's no photo guy involved anymore. It's all done by one person. This also applies to editing, art direction, and many other fields where one person can do the job that before maybe two or three or four persons did."

As we've reported, the reality of a so-called "job crunch" has been one of the most pressing issues on the bargaining table for the WGA during its protracted contract negotiations with the studios. The WGA wants safeguards introduced to prevent studios from using AI to generate new scripts from previous works.

A few years ago, it would have seemed inconceivable that AI could replicate the work of TV and film writers, but the rise of ChatGPT and other services that can scrape through decades of content and recreate it has shown that it might be closer than ever. However, Samopjan argued in her unique intervention that this exact process is what makes the technology ill-suited to create for humans.

"If you drink the best wine out there, you'll have the best taste. But if you drink a liter or two every day, are you a connoisseur or an alcoholic? We are work story-holics. And, paradoxically, consuming too many stories will not make you a great storyteller," she said.

"The way to deal with AI later is to get good at stepping into reality. To not rely on stories but, from time to time, live fully. And we'd see that whatever lived experience we have will have enough ambivalence, nuance, paradox, and confusion in itself that will provide a rich source material to create something original."

She later added: "The best artists are the closest we have to mystics. The best artists see through the bullshit. They see through the constructions of their own mind. It's an ability that needs to be maintained in the age of technology."

Excerpt from:

Its Going To Hit Like A Bomb: AI Experts Discuss The Technology And Its Future Impact On Storytelling KVIFF Industry Panel - Deadline

AI tools trace the body’s link between the brain and behavior – Axios

An AI tool called SLEAP labels the body parts of flies. Credit: Talmo Lab at the Salk Institute

AI-enabled micro-measurements of animals running, hunting, preening and playing are unlocking troves of new data that scientists now want to use to simulate animals and test theories about behavior and the brain.

Why it matters: A primary function of the brain is to produce behavior and help animals move through the world, but there are questions about how that happens, with ramifications for medicine and efforts to create artificial general intelligence (AGI).

What's happening: AI methods are increasingly being used to help scientists measure the behaviors of animals, a laborious task that typically involves researchers watching animals and tracking and annotating their movements.

Animal behavior scientists, or ethologists, use the AI tools to track the natural behaviors of single and, more recently, multiple animals. The information can be used to recreate the behavior in a lab, where at the same time researchers can measure the activity of neurons in the brain, or silence them, and see the effect on behavior.

The AI tools are "really powerful because you are getting behavior quantification at the scale the brain works at millisecond precision," says Cory Miller, a neurobiologist at the University of California San Diego, who studies the neural mechanisms of behaviors in marmoset monkeys.

Another tool called MoSeq finds smaller components of movement, what the tool's developer, Harvard University neurobiologist Bob Datta, calls "syllables." His research group has identified about 50 of these short units of behavior and the sequences in which they tend to occur in order to identify and predict different behaviors.
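
As a rough illustration of how discrete "syllables" and their sequences can be used predictively, here is a minimal sketch that fits a first-order transition model over a toy syllable stream. This is not MoSeq itself, which fits far more sophisticated probabilistic models to video data; the IDs and numbers below are hypothetical:

```python
import numpy as np

# Hypothetical stream of behavior syllables (IDs 0..4 standing in for the
# ~50 units MoSeq identifies), as might be emitted while a mouse explores.
syllables = [0, 1, 1, 2, 0, 1, 2, 3, 3, 0, 1, 2, 4, 0, 1]
n = 5  # number of distinct syllables in this toy example

# Count first-order transitions: how often syllable b follows syllable a.
counts = np.zeros((n, n))
for a, b in zip(syllables, syllables[1:]):
    counts[a, b] += 1

# Normalize each row into transition probabilities (guarding empty rows).
row_sums = counts.sum(axis=1, keepdims=True)
probs = np.divide(counts, row_sums, out=np.zeros_like(counts),
                  where=row_sums > 0)

# The most likely syllable to follow syllable 1 in this toy data.
print(probs[1].argmax())  # -> 2
```

Transition statistics like these are the kind of signature that can distinguish one behavior from another and be lined up against simultaneous neural recordings.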

Yes, but: "The brain does not output coordinates," says Talmo Pereira, whose lab at the Salk Institute developed the SLEAP tool. "It does not think in x, y, z changes in position of wrist."

What's next: An effort in its early stages is underway to use behavioral data to create simulated bodies, or animals.

Between the lines: Building these simulated animals will likely require combining the different approaches of the AI models used for tracking, and developing new tools, Datta says.

The big picture: There is an active debate about whether any artificial general intelligence will need to be embodied.

Read the original:

AI tools trace the body's link between the brain and behavior - Axios

Tech Investors Bet on AI, Leave Crypto Behind – Yahoo Finance

(Bloomberg) -- Silicon Valley venture capitalists are racing to get into artificial intelligence companies including investors who once bet big on crypto.

In the most recent quarter, VCs spent less on crypto and digital asset companies than at any point since 2020, according to data from the research firm PitchBook. At the same time, the total global value of investments in AI for the April-June period was higher than crypto even at its peak.

The numbers reflect the recent tendency of many tech investors even those who once backed crypto to move away from the scandal-plagued digital asset industry toward the latest advances in AI, said Robert Le, a crypto analyst at PitchBook.

New developments in AI have made it highly attractive to investors, at the same time as a barrage of scandals, new regulations and tumbling prices have made crypto toxic. Joe Zhao, Millennia Capital managing partner, said he's ready to leave digital assets behind. Zhao and Millennia once backed crypto companies like Blockstream and Lumida; now they're diving further into artificial intelligence. Millennia has already invested in closely watched startups including Stability AI.

"AI is offering so many more use cases than blockchain," Zhao said.

Some crypto investors have come under fire for kindling an interest in AI. Paradigm, the high-profile crypto VC firm started by Coinbase Global Inc. co-founder Fred Ehrsam and former Sequoia Capital partner Matt Huang, removed crypto mentions from its website and played up its interest in AI instead, crypto news outlet The Block reported in May. Huang tweeted this month that the website update had been a mistake and directed users to a new version of the company's homepage that includes moving neon green and black banners emblazoned with the word CRYPTO.

Huang said in a tweet in June that both the crypto and AI sectors are interesting and will have "plenty overlap" and that Paradigm is "excited to continue exploring."

Meanwhile, Sequoia Capital investor Michelle Fradin, who helped lead the firm's decision to invest in now-bankrupt crypto exchange FTX, has recently tweeted more about AI than crypto. She also recently co-authored a piece for the firm titled "The New Language Model Stack: How companies are bringing AI applications to life." Sequoia did not respond to a request for comment.

Some investors believe there are ways to invest in both AI and digital assets. "There is a lot of interest in the intersection of AI and crypto," Le said. He noted that two startups that straddle these industries, Tools For Humanity (the developer of Worldcoin) and Gensyn, both raised significant funding rounds during the second quarter.

"AI and crypto are natural counterweights for one another," said Ali Yahya, a general partner at Andreessen Horowitz, which backed both Tools for Humanity and Gensyn. "Crypto will enhance AI by providing some of the decentralization that AI maybe needs," he said, noting that companies like OpenAI, Alphabet Inc.'s Google and Microsoft Corp. have already amassed significant control over the growing industry.

The co-founders of Gensyn, Ben Fielding and Harry Grieve, said that they were more interested in machine learning than crypto prior to founding the startup. And now, despite being mainly backed by crypto venture investors, Gensyn sees itself as more of a technology infrastructure startup. "We don't think of ourselves as a crypto project so to speak," Grieve said.

Gensyn is building a blockchain protocol that will underpin a decentralized marketplace for buying and selling compute power that can be used for machine learning models. Fielding said the platform could make the compute market more competitive, since the only suppliers are "very, very large organizations who have amassed huge data centers and then rent them out."

Global AI venture funding was roughly flat in the second quarter from the year earlier, though there was a substantial uptick in investments in the US market. By contrast, in the same period, global investment for digital asset startups plunged 76% compared to the same period last year, PitchBook said.

Despite the drop, though, crypto deals are still happening. Le noted that crypto markets in Europe, Dubai, Hong Kong and Singapore are showing more strength than in the US, where regulation has hit the industry hard. He also said he expects to see crypto venture investing continue in the US and abroad, since many crypto funds have mandates that require them to back the space.

Andreessen Horowitz, which raised the biggest-ever crypto fund at $4.5 billion, recently announced that it was opening an office in London because of the UK's crypto-friendly environment. Yahya said the firm is still committed to investing in crypto startups, including through token rounds. "We don't have any plans of slowing down," he said.

Original post:

Tech Investors Bet on AI, Leave Crypto Behind - Yahoo Finance