AI companies aren't afraid of regulation – we want it to be international and inclusive – The Guardian

Opinion

If our industry is to avoid superficial ethics-washing, historically excluded communities must be brought into the conversation

AI is advancing at a rapid pace, bringing with it potentially transformative benefits for society. With discoveries such as AlphaFold, for example, we're starting to improve our understanding of some long-neglected diseases, with 200m protein structures made available at once – a feat that previously would have required four years of doctorate-level research for each protein and prohibitively expensive equipment. If developed responsibly, AI can be a powerful tool to help us deliver a better, more equitable future.

However, AI also presents challenges. From bias in machine learning used in sentencing algorithms to misinformation, the irresponsible development and deployment of AI systems poses the risk of great harm. How can we navigate these incredibly complex issues to ensure AI technology serves our society, and not the other way around?

First, it requires all those involved in building AI to adopt and adhere to principles that prioritise safety while also pushing the frontiers of innovation. But it also requires that we build new institutions with the expertise and authority to responsibly steward the development of this technology.

The technology sector often likes straightforward solutions, and institution-building may seem like one of the hardest and most nebulous paths to go down. But if our industry is to avoid superficial ethics-washing, we need concrete solutions that engage with the reality of the problems we face and bring historically excluded communities into the conversation.

To ensure the market seeds responsible innovation, we need the labs building innovative AI systems to establish proper checks and balances to inform their decision-making. When large language models first burst on to the scene, it was Google DeepMind's institutional review committee – an interdisciplinary panel of internal experts tasked with pioneering responsibly – that decided to delay the release of our new paper until we could pair it with a taxonomy of risks that should be used to assess models, despite industry-wide pressure to be on top of the latest developments.

These same principles should extend to investors funding newer entrants. Instead of bankrolling companies that prioritise novelty over safety and ethics, venture capitalists (VCs) and others need to incentivise bold and responsible product development. For example, the VC firm Atomico, at which I am an angel investor, insists on including diversity, equality and inclusion, and environmental, social and governance requirements in the term sheets for every investment it makes. These are the kinds of standards we want those leading the field to set.

We are also starting to see convergence across the industry around important practices such as impact assessments and involving diverse communities in development, evaluation and testing. Of course, there is still a long way to go. As a woman of colour, I'm acutely aware of what this means for a sector where people like me are underrepresented. But we can learn from the cybersecurity community.

Decades ago they started offering bug bounties – a financial reward to researchers who could identify a vulnerability or bug in a product. Once a bug was reported, the companies had an agreed time period during which they would address it and then publicly disclose it, crediting the bounty hunters. Over time, this has developed into an industry norm called responsible disclosure. AI labs are now borrowing from this playbook to tackle the issue of bias in datasets and model outputs.

Lastly, advancements in AI present a challenge to multinational governance. Guidance at the local level is one part of the equation, but so too is international policy alignment, given that the opportunities and risks of AI won't be limited to any one country. The proliferation and misuse of AI have woken everyone up to the fact that global coordination will play a crucial role in preventing harm and ensuring common accountability.

Laws are only effective, however, if they are future-proof. That's why it's crucial for regulators to consider not only how to regulate chatbots today, but also how to foster an ecosystem where innovation and scientific acceleration can benefit people, providing outcome-driven frameworks for tech companies to work within.

Unlike nuclear power, AI is a general-purpose technology, more broadly applicable than other technologies, so building institutions will require access to a broad set of skills, diversity of background and new forms of collaboration, including scientific expertise, socio-technical knowledge, and multinational public-private partnerships. The recent Atlantic Declaration between the UK and US is a promising start toward ensuring that standards in the industry have a chance of scaling into multinational law.

In a world that is politically trending toward nostalgia and isolationism, multilayered approaches to good governance that involve government, tech companies and civil society will never be the headline-grabbing or popular path to solving the challenges of AI. But the hard, unglamorous work of building institutions is critical if technologists are to build toward a better future together.
