Impact and risks of artificial intelligence and its use by the board of …

Artificial Intelligence (AI) has become an inseparable part of our everyday lives. AI powers Siri, facial recognition, navigation systems, search recommendations, and advertising algorithms, to name just a few examples. We tend to take these uses of AI for granted because they have become commonplace. However, these systems are examples of Traditional AI, whose uses are limited and rather simplistic compared to cutting-edge developments. Newer and more advanced forms of AI, such as Generative AI (e.g., ChatGPT), are at the forefront of development and innovation across most industries and corporations. For example, a report from McKinsey & Company indicates that one-third of businesses are using Generative AI in at least one function.[i] As is relevant to this article, AI is becoming more commonly used as part of the decision-making process by corporate boards and their directors and officers. This article considers: (1) the different classifications and types of AI; (2) how corporations and their boards are using AI; (3) the implications of its use; (4) potential impacts and emerging risks for D&O insurers; and (5) suggested regulatory frameworks for AI.

Levels of AI: From Traditional to Generative and Assisted to Fully Autonomous

As previewed above, there are differing levels of AI. In terms of use in corporate decision making, commentators have generally characterized AI by its level of autonomy,[ii] with three levels: (1) Assisted; (2) Augmented; and (3) Autonomous. This scale categorizes AI by relative decision-making ability: Assisted AI simply provides support with administrative tasks and has no real decision-making function or impact; Augmented AI provides data and support to human decision makers, which can inform (and hopefully improve) the human directors' judgment; and Autonomous AI is where decision-making authority lies entirely with the AI system.[iii] At present, the technology appears to have moved beyond purely Assisted systems, but has not yet produced AI that would be capable of, or trusted with, fully Autonomous corporate decision making. As such, and as discussed in the following section, Augmented AI appears to be the level of automation most applicable to corporate decision makers at this time.

In addition to levels of autonomy, AI can also be characterized by the degree of originality it can produce. The foregoing examples of AI (Siri, navigation, etc.) are forms of Traditional AI. These systems are designed with the capability to learn from data and make decisions or predictions based on that data.[iv] Traditional AI is constrained by the rules it is programmed to follow. In comparison, Generative AI, which is at the cutting edge of AI development, has the ability to create new and original content. For example, ChatGPT and DALL-E are Generative AI systems that produce human-like text or image outputs based on written requests or prompts.[v] Applying these two types of AI to the levels of autonomy, Traditional AI may be used to analyze data and spot patterns, and therefore be used in an Augmented way by corporate boards, whereas Generative AI has the potential to create new patterns and potentially become the autonomous decision maker.

Use of AI in the Board Room

Commentators and futurists seem assured that one day AI will join or replace human Directors, running corporations and making decisions autonomously. However, there appears to be a general consensus that, at present, AI is best limited to a tool that assists Directors with decision making rather than serving as the ultimate decision maker. This is also consistent with the current view of AI usage in other professions.[vi] While there are some well-known examples of purported AI Directors (e.g., VITAL, Tieto, and Salesforce),[vii] these certainly appear to be the exception and, in reality, are not truly autonomous.

Based on the current data, the most common use of AI still appears to be in data gathering, analysis, and reporting.[viii] One study involving executives from large US public corporations and private equity funds reported that 69% of respondents are using AI tools as part of due diligence.[ix] While not specified, it would appear that the respondents are using Traditional AI in an Augmented fashion. AI is also being billed as a risk management tool and is being used to monitor a corporation's compliance and governance by identifying vulnerabilities or patterns, detecting violations, and encouraging timely and accurate reporting.[x] There may be other uses of AI to make or assist in decision making that are less publicized or still at an experimental stage. These new uses will likely involve Generative AI.

Implications of Using AI in Decision Making: The Positive and Negative

From personal experience, AI often speeds up basic processes, such as paying for goods or predictive text and searches. The lure of the same time-saving benefits is also present in the corporate board room. As illustrated above, Directors are using AI to synthesize and evaluate large quantities of information in short periods of time. This allows decisions to be based on larger and more comprehensive data sets than ever before. Further, AI has the ability to identify patterns and trends that may not be immediately obvious, thereby informing corporate strategy. These perceived benefits can lead to a better-informed board that is able to pursue multiple goals. Some commentators also suggest that AI will lead to a more independent board, because decisions are based on the neutral output of information, and that it may give a stronger dissenting voice to independent directors whose positions are supported by AI.

However, it is acknowledged that the output from AI is only as good as the data that is input and the rules that govern how the AI functions. Both of these factors can be decided and influenced by humans, resulting in built-in biases. For example, companies using AI and historical data to hire officers and managers found that the AI favored certain applicants due to the built-in bias of the historical hiring data.[xi] Further, with Generative AI, which has the ability to create original work, the prompt or input the user applies can directly or inadvertently cause it to produce work that includes biases, errors, falsehoods, and even so-called AI hallucinations.[xii] AI hallucinations refer to made-up or false information that AI systems present as true. Generative AI can also present a "black box" problem, which refers to the inability of AI systems, or even of experienced data scientists, to explain or understand how certain outputs were reached based on the data input.[xiii]

Further, at the current stage of AI development, human judgment is still required, and the purported benefits of AI are therefore naturally diluted.[xiv] Other potential risks include security and privacy concerns stemming from the vast amounts of data fed into AI systems. For example, Samsung banned use of ChatGPT after employees loaded sensitive company data onto the platform and that data subsequently leaked.[xv] Moreover, legal and regulatory frameworks in the US do not currently recognize non-human directors. Significant questions regarding legal liability are therefore likely to arise where AI takes a greater role in corporate decision making.

D&O Insurance Risks

How may the foregoing impact D&O insurers? The immediate impact appears to be associated with unknown legal liability. Where AI informs or assists directors' decisions with varying levels of oversight, there is a question of who is responsible for subsequent liability. AI may also be used by boards to assess historical risks or evaluate losses. Some suggest that typical derivative actions could transform into actions also brought against AI software programmers or providers. Securities actions could arise from undisclosed use of AI in corporate decision making or inadequate disclosure of the risks it may present. At present, however, it appears most likely that both new targets (AI developers) and traditional targets (corporations and their Directors and Officers) would be pursued together, which risks exposing additional policy proceeds and inflating demands and potential settlements. There is also risk in the unknown, including uncertainty over how breach of fiduciary duty standards will be applied when AI is involved in decision making. Ultimately, additional risks and areas of exposure are likely to develop as AI systems are given more autonomy over decision making.

Potential Regulatory Frameworks

Because AI's use in the corporate landscape is relatively new, and because much of its potential lies in as-yet-unknown future uses, including fully autonomous decision making, there is no clear regulatory or legal framework in place that will, or can, address the issues presented by future AI developments. Some have suggested that AI could be regulated like the pharmaceutical industry and be subject to rigorous testing prior to approval for market use.[xvi] Alternatively, there have been suggestions that lawmakers develop clear laws, paired with incentives to produce legal AI products, before such technologies advance further. However, both frameworks would appear to risk stifling innovation and investment. Certain academics have instead suggested a disclosure-based regulatory approach, similar to SEC regulations and disclosure obligations for US public companies. They suggest that such a framework would be most suitable because its cost would not unduly restrict innovation and investment in AI, yet the level of disclosure would still provide the needed oversight in a developing industry.

Conclusion

Traditional AI is entrenched in everyday life, and the technology has evolved significantly with Generative AI. This evolution is notable for businesses, with Generative AI usage seemingly becoming widespread. It will present a number of ethical and legal issues on many fronts. For those in the D&O industry, developments in AI may also give rise to novel issues and increase potential risks. Inquiries to Insureds about the use of AI in their operations and in connection with the management of the entity will likely become more commonplace as such systems gain traction across a wide range of industries. Claims activity related to non-disclosure of AI risks, or claims arising from reliance on such technologies in decision making, even with the involvement of human intelligence, may also be worth monitoring.

[i] Lynch, Sarah, 2 Common Mistakes CEOs Might Be Making With Generative A.I. (August 11, 2023), Inc., Available at: https://www.inc.com/sarah-lynch-/2-common-mistakes-ceos-might-be-making-with-generative-ai.html

[ii] See Petrin, Martin, Corporate Management in the Age of AI (March 4, 2019), Columbia Business Law Review, Forthcoming; UCL Working Paper Series No. 3/2019; Faculty of Laws University College London Law Research Paper No. 3/2019, Available at SSRN: https://ssrn.com/abstract=3346722; See also Mertens, Floris, The Use of Artificial Intelligence in Corporate Decision-Making at Board Level: A Preliminary Legal Analysis (January 27, 2023), Financial Law Institute Working Paper Series 2023-01, Available at SSRN: https://ssrn.com/abstract=4339413.

[iii] Id.

[iv] Marr, Bernard, The Difference Between Generative AI And Traditional AI: An Easy Explanation For Anyone (July 24, 2023), Forbes, Available at: https://www.forbes.com/sites/bernardmarr/2023/07/24/the-difference-between-generative-ai-and-traditional-ai-an-easy-explanation-for-anyone/?sh=7f39606c508a

[v] Marr, Bernard, How To Stop Generative AI From Destroying The Internet (August 14, 2023), Forbes, Available at: https://www.forbes.com/sites/bernardmarr/2023/08/14/is-generative-ai-destroying-the-internet/?sh=4b56b4c597f6

[vi] LaCroix, Kevin, AI is Not Quite Ready to Replace the Lawyers (May 30, 2023), The D&O Diary, Available at: https://www.dandodiary.com/2023/05/articles/blogging/ai-is-not-quite-ready-to-replace-the-lawyers/

[vii] Petrin, Corporate Management in the Age of AI (Note ii).

[viii] Mertens, The Use of Artificial Intelligence in Corporate Decision-Making at Board Level: A Preliminary Legal Analysis (Note ii).

[ix] Id.

[x] Bruner, Christopher M., Artificially Intelligent Boards and the Future of Delaware Corporate Law (September 22, 2021). University of Georgia School of Law Legal Studies Research Paper No. 2021-23, Available at SSRN: https://ssrn.com/abstract=3928237.

[xi] Earley, Sen & Zivin, Sparky, The Rise of the AI CEO: A Revolution in Corporate Governance (March 8, 2023), Teneo, Available at: https://www.teneo.com/the-rise-of-the-ai-ceo-a-revolution-in-corporate-governance/

[xii] Eliot, Lance, Prompt Engineering Deciphers Riddle Of Show-Me Versus Tell-Me As Choice Of Best-In-Class Prompting Technique For Generative AI (August 12, 2023), Forbes, Available at: https://www.forbes.com/sites/lanceeliot/2023/08/12/prompt-engineering-solves-riddle-of-show-me-versus-tell-me-as-choice-of-best-in-class-prompting-technique-for-generative-ai/?sh=661e70ed53d9

[xiii] Blackman, Reid, Generative AI-nxiety (August 14, 2023), Harvard Business Review, Available at: https://hbr.org/2023/08/generative-ai-nxiety

[xiv] Agrawal, Ajay, et al., What to Expect From Artificial Intelligence, 58 MIT Sloan Management Review (2017), at 26, Available at: http://ilp.mit.edu/media/news_articles/smr/2017/58311.pdf.

[xv] Blackman, Generative AI-nxiety (Note xii).

[xvi] Kamalnath, Akshaya and Varottil, Umakanth, A Disclosure-Based Approach to Regulating AI in Corporate Governance (January 7, 2022), NUS Law Working Paper No. 2022/001, Available at SSRN: https://ssrn.com/abstract=4002876.
