What Lies Ahead For AI In The UK? – New Technology – UK – Mondaq News Alerts

OpenAI launched ChatGPT 3.5 in November 2022 and, since then, it has set growth records as it spread like wildfire. Today, it nears one billion unique visitors per month. Since its launch, the world has been all-consumed with talking about AI and its potential use cases across a wide range of industries. Sam Altman, co-founder and CEO of OpenAI, has said that AI tools can find solutions to "some of humanity's biggest challenges, like climate change and curing cancer".

There has also been plenty of talk about the largest tech companies (namely Google and Meta, as well as Microsoft) and their race in the pursuit of Artificial General Intelligence (AGI). This makes it sound very much like an arms race, and this is a comparison that many have made. Within any race, there's often the concern that those in the race will cut corners and, in this particular race, many fear that the consequences could be disastrous. Within this article, we explore the possible consequences and the UK's stance on the regulation of AI to help safeguard against them.

AI is seen as central to the government's ambition to make the UK a science and technology superpower by 2030, and Prime Minister Rishi Sunak again made this clear in his opening keynote at June's London Tech Week: "If our goal is to make this country the best place in the world for tech, AI is surely one of the greatest opportunities for us".

As discussed here, AI was also a headline feature earlier this year in the government's Spring Budget. Both within this Budget and since then, the following has been announced:

Despite the many potential benefits of AI, there is also growing concern about its risks, ranging from the widely discussed risk of disinformation to evolving cybersecurity threats. A couple of the most widely discussed risks of AI include:

Misinformation and bias

Most AI tools use Large Language Models (LLMs), which effectively means that they are trained on large datasets, mostly publicly available on the internet. It stands to reason that these tools can only be as good as the data they're trained on, but if this data isn't carefully vetted, then the tools will be prone to misinformation and even bias, as we saw with Microsoft's infamous Twitter chatbot, Tay, which quickly began to post discriminatory and offensive tweets.

AI alignment is a growing field within AI safety that aims to align the technology with our (i.e. human) goals. AI alignment is therefore critical to ensuring that AI tools are safe, ethical and aligned with societal values. For example, OpenAI has stated: "Our research aims to make AGI aligned with human values and follow human intent".

Protecting jobs and economic inequality

Sir Patrick Vallance, the UK's former Government Chief Scientific Adviser, warned earlier this year that "there will be a big impact on jobs and that impact could be as big as the Industrial Revolution was". This isn't an uncommon view either: Goldman Sachs recently predicted that roughly two-thirds of occupations could be partially automated by AI. More worryingly, IBM's CEO Arvind Krishna predicted that 30% of non-customer-facing roles could be entirely replaced by AI and automation within the next five years, which equates to 7,800 jobs at IBM. Job displacement and economic inequality are therefore significant risks of AI.

Many have warned of other risks, such as privacy concerns, the concentration of power and even existential risks. As this is a fast-evolving industry, you could also argue that, because we don't yet fully understand what AI could look like and be used for in the future, we also don't yet know all of the risks that the future will bring.

Despite talking about the potential benefits of AI, ranging from superbug-killing antibiotics to agricultural uses and the potential to find cures for diseases, Rishi Sunak also recognised the potential dangers, saying: "The possibilities are extraordinary. But we must, and we will, do it safely. I know people are concerned". Keir Starmer, also at London Tech Week, continued this theme by saying "we need to put ourselves into a position to take advantage of the benefits but guard against the risks" and called for the UK to "fast forward" AI regulation.

Rishi Sunak also went on to say that "the very pioneers of AI are warning us about the ways these technologies could undermine our values and freedoms, through to the most extreme risks of all". This could be a reference to multiple pioneers, including:

Despite these calls, it should also be acknowledged that AI is extremely difficult to regulate. It is constantly evolving, so it becomes difficult to predict what it will look like tomorrow and, as such, what regulation needs to look like in order not to become quickly obsolete. The fear for governments, and the pushback from AI companies, will be that overregulation stifles innovation and progress, including all the positive impacts that AI could have, so a balance must be struck.

Earlier this year, it seemed that the UK's stance on regulation was to be very hands-off, with oversight largely left to existing regulators and the industry itself by taking a "pro-innovation approach to AI regulation" (the name of the white paper initially published on 29 March 2023). Within this white paper, unlike the EU, the UK Government confirmed that it wasn't looking to adopt new legislation or create a new regulator for AI. Instead, it would look to existing regulators like the ICO (Information Commissioner's Office) and the CMA (Competition and Markets Authority) to "come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors". This approach was criticised by many, including Keir Starmer, who commented that "we haven't got an overarching framework".

However, since this white paper (which has since been updated), Rishi Sunak has shown signs that the UK's light-touch approach to regulation needs to evolve. At London Tech Week, he stated that he wants "to make the UK not just the intellectual home but the geographical home of global AI safety regulation". This was coupled with the announcement that the UK will host a global summit on safety in artificial intelligence this autumn where, according to a No. 10 spokesman, the event will "provide a platform for countries to work together on further developing a shared approach to mitigate these risks".

£100m has also been announced for the UK's AI Foundation Model Taskforce, with Ian Hogarth, co-author of the annual State of AI report, announced as its lead. The key focus for this Taskforce will be "taking forward cutting-edge safety research in the run-up to the first global summit on AI".

Time will tell on both the potential (both good and bad) of AI and how regulation within the UK and globally rolls out, but it's clear that the UK wants to play a leading role in both regulation and innovation, which may at times clash with each other. In an interview with the BBC on AI regulation, Sunak said: "I believe the UK is well-placed to lead and shape the conversation on this because we are very strong when it comes to AI". If you want to discuss the benefits of AI for your specific business situation, please contact James or get in touch with your usual UHY adviser.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
