Over the last decade, Europe has taken a decisive lead over the US on tech regulation, with overarching laws safeguarding online privacy, curbing Big Tech dominance and protecting its citizens from harmful online content.
British Prime Minister Rishi Sunak's showpiece artificial intelligence event, which kicked off at Bletchley Park on Wednesday, sought to build on that lead. But the United States seems to have pulled one back, with Vice President Kamala Harris articulating Washington's plan to take a decisive lead on global AI regulation, helped in large measure by an elaborate template unveiled just two days before the Summit. Harris went on to flesh out the US plan for leadership in AI regulation before a handpicked audience, which included former British PM Theresa May, at the American Embassy in London, while she was there to attend Sunak's Summit.
The template for Harris's guidance on tech regulation was the freshly released White House Executive Order on AI, which proposed new guardrails on the most advanced forms of the emerging tech, where American companies dominate. And in contrast to the UK-led initiative, whose only major high point was the Bletchley Declaration signed by 28 signatories, the US executive order is being offered as a well-calibrated template that could work as a blueprint for every other country looking to regulate AI, including the UK.
Harris was emphatic in her assertion that there was a moral, ethical and societal duty to ensure that AI is adopted and advanced in a way that protects the public from potential harm and allows everyone to enjoy its benefits. To address predictable threats, such as algorithmic discrimination, data privacy violations and deepfakes, the US had last October released a Blueprint for an AI Bill of Rights, seen as a building block for Monday's executive order.
After the Blueprint was released, Washington engaged extensively with the leading AI companies, most of which are American (with the exception of London-based DeepMind, now a Google subsidiary), in a bid to evolve a blueprint and establish a minimum baseline of responsible AI practices.
"We intend that the actions we are taking domestically will serve as a model for international action, understanding that AI developed in one nation can impact the lives and livelihoods of billions of people around the world. Fundamentally, it is our belief that technology with global impact requires global action," Harris said just before travelling to the United Kingdom for the summit on AI safety.
"Let us be clear: when it comes to AI, America is a global leader. It is American companies that lead the world in AI innovation. It is America that can catalyse global action and build global consensus in a way that no other country can. And under President Joe Biden, it is America that will continue to lead on AI," Harris said before the signing of Monday's executive order, clearly outlining Washington's intent to take a lead on AI regulation just ahead of the UK-led safety summit.
This assumes significance given that, over the last quarter century, the US Congress has not managed to pass any major legislation to rein in Big Tech companies or safeguard internet consumers, with the exception of just two narrow laws: one on children's online privacy and the other on blocking trafficking content on the net.
In contrast, the EU has enforced the landmark GDPR (General Data Protection Regulation) since May 2018. Clearly focused on privacy, it requires individuals to give explicit consent before their data can be processed, and is now a template being used by over 100 countries. Then there are a pair of companion legislations, the Digital Services Act (DSA) and the Digital Markets Act (DMA), that take off from the GDPR's overarching focus on the individual's right over her data. The DSA is focused on issues such as regulating hate speech and counterfeit goods, while the DMA has defined a new category of dominant "gatekeeper" platforms and is focused on anti-competitive practices and the abuse of dominance by these players.
On AI, though, the tables may clearly be turning. Washington's executive order is a detailed blueprint aimed at safeguarding against threats posed by artificial intelligence, and seeks to exert oversight over the safety benchmarks that companies use to evaluate conversation bots such as ChatGPT and Google Bard. The move is being seen as a vital first step by the Biden administration in regulating rapidly advancing AI technology, which White House deputy chief of staff Bruce Reed described as "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust".
EU lawmakers, on the other hand, are yet to reach an agreement on several issues related to the bloc's proposed AI legislation, and a deal is reportedly not expected before December.
The US executive order requires AI companies to test their newer products and share the results with US federal government officials before the new capabilities are made available to consumers. These safety tests undertaken by developers, known as red-teaming, are aimed at ensuring that new products do not pose a threat to users or the public at large. Under these new powers, enabled under the US Defense Production Act, the federal government can subsequently force a developer to either tweak the product or abandon an initiative.
As part of the initiative, the United States will launch an AI safety institute to evaluate known and emerging risks of AI models. This move runs in parallel to an initiative by London to set up a United Kingdom AI Safety Institute, though Washington has subsequently indicated that the proposed US institute would establish a formal partnership with the UK body.
Among the standards set out in the US order, a new rule seeks to codify the use of watermarks that alert consumers when they encounter a product enabled by AI, aimed at limiting the threat posed by content such as deepfakes. Another standard stipulates that biotech firms take appropriate precautions when using AI to create or modify biological material. Notably, much of the industry guidance is framed as suggestions rather than binding requirements, giving developers and firms enough elbow room to work around some of the government's recommendations.
The executive order also explicitly directs American government agencies to implement changes in their own use of AI, thereby creating industry best practices that Washington expects will be embraced by the private sector. The US Department of Energy and the Department of Homeland Security will, for instance, take steps to address the threat that AI poses to critical infrastructure, the White House said in a statement.
Harris said the focus of the move, while addressing the existential threats of generative AI highlighted by experts, also resonated at an individual, citizen level: "There are additional threats that also demand our action, threats that are currently causing harm and which, to many people, also feel existential. Consider, for example: when a senior is kicked off his health care plan because of a faulty AI algorithm, is that not existential for him? When a woman is threatened by an abusive partner with explicit deepfake photographs, is that not existential for her? When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family? And when people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation, I ask, is that not existential for democracy?"
Varied Approaches
These developments come as policymakers across jurisdictions have stepped up regulatory scrutiny of generative AI tools, prompted by ChatGPT's explosive launch. The concerns being flagged fall under three broad heads: privacy, system bias and violation of intellectual property rights.
The policy response, too, has differed across jurisdictions. The European Union has taken a predictably tougher stance, proposing a new AI Act that segregates artificial intelligence by use-case scenario, based broadly on the degree of invasiveness and risk. The UK is seen to be at the other end of the spectrum, with a decidedly light-touch approach that aims to foster, not stifle, innovation in this nascent field.
The US approach now slots somewhere in between, with Washington clearly setting the stage for defining an AI regulation rulebook with Monday's executive order. This builds on the move by the White House Office of Science and Technology Policy last October to unveil its Blueprint for an AI Bill of Rights. China, too, has released its own set of measures to regulate AI.
This also comes in the wake of calls in April this year by tech leaders Elon Musk, Apple co-founder Steve Wozniak and over 15,000 others for a six-month pause in AI development, saying labs are in an "out-of-control race" to develop systems that no one can fully control. Musk was in attendance at Bletchley Park, where he warned that AI is one of the biggest threats to humanity, and that the Summit was timely because AI posed an existential risk to humans, who face being outsmarted by machines for the first time.
Original post: On AI regulation, how the US steals a march over Europe amid the UK's showpiece Summit - The Indian Express