Here’s a great toolkit for Artificial Intelligence (AI) governance within your organisation – Lexology

As the deployment of artificial intelligence (AI) technology continues to grow, regulators around the globe continue to grapple with how best to encourage the responsible development and adoption of this technology. Many governments and regulatory bodies have released high-level principles on AI ethics and governance which, while earnest, leave you asking: where do I start?

However, the UK's Information Commissioner's Office (ICO) has recently released a toolkit which takes a more practical, "how to do it" approach. It's still in draft form, and the ICO is seeking views to help shape and improve it. The toolkit builds upon the ICO's existing guidance on AI: the Guidance on AI and Data Protection and the guidance on Explaining Decisions Made With AI (co-written with The Alan Turing Institute).

The toolkit is focused on helping risk practitioners assess their AI systems against the requirements of UK data protection law, rather than AI ethics as a whole (although aspects such as discrimination, transparency, security and accuracy are included). It is intended to help developers (and deployers) think about the risks of non-compliance with data protection law and to offer practical support to organisations auditing the compliance of their use of AI. While the toolkit is EU-centric, it's still a good guide for Australian organisations grappling with how to embed AI in their businesses.

AI Toolkit: how AI impacts privacy and other considerations

Finally, a toolkit worth its name

The toolkit is constructed as a spreadsheet-based self-assessment tool which walks you through how AI impacts privacy and other considerations, helps you assess the risk in your business, and suggests mitigation strategies.

The toolkit covers 13 key areas, including governance issues, contractual and third-party risk, risk of discrimination, maintenance of AI system and infrastructure security and integrity, assessing the need for human review, and other considerations.

To conduct the assessment, users of the toolkit are guided through a series of steps for each of these areas.

The toolkit is not intended to be used as a finite checklist or tick-box exercise, but rather as a framework of analysis for your organisation to consider and capture the key risks and mitigation strategies associated with developing and/or using AI (depending on whether you are a developer, deployer, or both). This approach recognises that the diversity of AI applications, their ability to learn and evolve, and the range of public and commercial settings in which they are deployed require a more nuanced and dynamic approach to compliance than past technologies. There are no "set and forget" approaches to making sure your AI behaves and continues to meet community expectations, which will be the ultimate test of accountability for organisations if something goes wrong.

Perhaps the most helpful part of the toolkit is a section on "trade-offs": that is, where organisations will need to weigh up often competing values, such as data minimisation and statistical accuracy, in making AI design, development and deployment decisions. This is a refreshingly honest and realistic acknowledgement of the challenges in developing and using AI responsibly, one typically lacking in high-level AI principles.

What about nearer to home?

Another useful "how to" guide comes from the ever-practical Singaporeans. In early 2020, Singapore's Personal Data Protection Commission (PDPC) released the second edition of its Model AI Governance Framework and, with it, the Implementation and Self-Assessment Guide for Organisations (ISAGO), developed in collaboration with the World Economic Forum; another example of a practical method of encouraging responsible AI adoption.

In Australia, we are yet to see such practical tools released. However, a small start has been made with the Government's and industry's piloting of Australia's AI Ethics Principles.
