Walking the artificial intelligence and national security tightrope

Artificial intelligence (AI) presents as many challenges to Australia's security as it does opportunities. While it could be used to mass-produce malware, lethal autonomous weapons systems or engineered pathogens, AI could also prove the counter to these threats. Regulating AI to maximise Australia's national security capabilities and minimise the risks to them will require focus, caution and intent.

One of Australia's first major public forays into AI regulation is the Department of Industry, Science and Resources' (DISR) recently released discussion paper on responsibly supporting AI. The paper notes AI's numerous positive use cases if it is adopted responsibly, including improvements in the medical imaging, engineering and services sectors, but it also recognises enormous risks, such as the spread of disinformation and the harms of AI-enabled cyberbullying.

While national security is beyond the scope of DISR's paper, any general regulation of AI would affect its use in national security contexts. National security is a battleground comprising multiple political, economic, social and strategic fronts, and any whole-of-government approach to regulating AI must recognise this.

Specific opportunities for AI in national security include enhanced electronic warfare, cyber offence and defence, and improvements in defence logistics. One risk is that Australia's adversaries will possess these same capabilities; another is that AI could be misused or perform unreliably in life-or-death national security situations. Inaccurate AI-generated intelligence, for instance, could undermine Australia's ability to deliver effective and timely interventions, with few systems of accountability currently in place for when AI contributes to mistakes.

Australia's adversaries will not let us take our time pontificating, however. Indeed, ASPI's Critical Technology Tracker has identified China's primacy in several key AI technologies, including machine learning and data analytics, the bedrock of modern and emerging AI systems. Ensuring that AI technologies are auditable, for instance, may come at a strategic disadvantage. Many so-called glass-box models, though capable of tracing the sequencing of their decision-making algorithms, are often inefficient compared with black-box options whose inner workings are inscrutable. The race for AI supremacy will continue apace regardless of how Australia regulates it, and actors less burdened by ethical considerations could gain a lead over their competitors.

Equally, though, fears of China's technological superiority should not lead to cutting corners and blind acceleration. That would exponentially increase the likelihood of AI-induced disasters over time. It could also trigger an AI arms race, adding to global strategic tension.

Regulation should therefore adequately safeguard AI whilst not hampering our ability to employ it for our national security.

This will be tough, and any Australian approach may overlap with or contradict other regulatory efforts around the world. While their behaviour often raises eyebrows, the hold that big American tech companies have over most major advances in AI is at the core of strategic relationships such as AUKUS. If governments trust-bust, fragment or restrict these companies, they must also account for how a more diffuse market could contend with China's command economy.

As with many complex national security challenges, walking this tightrope will take a concerted effort from government, industry, academia, civil society and the broader public. AI technologies can be managed, implemented and used safely, efficiently and securely if regulators find a balance that is neither sluggish adoption nor rash acceleration. If they pull it off, it would be the circus act of the century.
