It’s both AI technology and ethics that will enable JADC2

Artificial intelligence graphic courtesy of Northrop Grumman.

Questions that loom large for the wider application of artificial intelligence (AI) in Defense Department operations often center on trust. How does the operator know if the AI is wrong, if it made a mistake, or if it didn't behave as intended?

Answers to questions like that come from a technical discipline known as Responsible AI (RAI). It's the subject of a report issued by the Defense Innovation Unit (DIU) in mid-November called Responsible AI Guidelines in Practice, which addresses a requirement in the FY21 National Defense Authorization Act (NDAA) to ensure that the DoD "has the ability, requisite resourcing, and sufficient expertise to ensure that any artificial intelligence technology is ethically and responsibly developed."

DIU's RAI guidelines provide a framework for AI companies, DoD stakeholders, and program managers that can help to ensure that AI programs are built with the principles of fairness, accountability, and transparency at each step in the development cycle of an AI system, according to Jared Dunnmon, technical director of the artificial intelligence/machine learning portfolio at DIU.

This framework is designed to achieve four goals, said Dunnmon.

Trust in the AI is foremost

Just as Isaac Asimov's Three Laws of Robotics describe ethical behavior for robots, the DIU's guidelines offer five ethical principles for the development and use of artificial intelligence: responsible, equitable, traceable, reliable, and governable.

It's that fifth principle, governable, that addresses the questions asked at the top about letting the operator know when the AI is wrong. Operators need to establish trust in AI systems or those systems simply won't be used. That's not an option for something as complex as the Joint All Domain Command and Control (JADC2) concept of operations.

Dr. Amanda Muller, Consulting AI Systems Engineer and Technical Fellow, Responsible AI Lead for Northrop Grumman.

"Governable AI systems allow for graceful termination and human intervention when algorithms do not behave as intended," said Dr. Amanda Muller, Consulting AI Systems Engineer and Technical Fellow, who is the Responsible AI Lead for Northrop Grumman, one of the few companies with such a position. "At that point, the human operator can either take over or make adjustments to the inputs, to the algorithm, or whatever needs to be done. But the human always maintains the ability to govern that AI algorithm."
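
To make the idea concrete, a governable system can be as simple in principle as a wrapper that only acts on a model's output when that output stays inside expected bounds, and otherwise hands control back to the operator. The sketch below is a minimal, hypothetical illustration in Python, not Northrop Grumman's or DIU's implementation; the class names and confidence threshold are invented for the example.

```python
# Minimal sketch of a "governable" inference wrapper: the model's output is
# only acted on automatically when it stays inside expected bounds; otherwise
# the system degrades gracefully and defers to a human operator.
# Names and thresholds are illustrative, not any fielded design.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str            # what the autonomy proposes to do
    confidence: float      # model's self-reported confidence, 0..1
    deferred: bool         # True if a human must approve or take over

class GovernableAgent:
    def __init__(self, model: Callable[[dict], tuple[str, float]],
                 confidence_floor: float = 0.85):
        self.model = model
        self.confidence_floor = confidence_floor
        self.enabled = True    # human-controlled disengage switch

    def disengage(self) -> None:
        """Graceful termination: the operator can disable autonomy at any time."""
        self.enabled = False

    def decide(self, observation: dict) -> Decision:
        if not self.enabled:
            return Decision(action="hold", confidence=0.0, deferred=True)
        action, confidence = self.model(observation)
        if confidence < self.confidence_floor:
            # Unexpected or low-confidence behavior: hand control back to the human.
            return Decision(action=action, confidence=confidence, deferred=True)
        return Decision(action=action, confidence=confidence, deferred=False)
```

A real system would add audit logging, bounded authorities, and formal verification on top of something like this; the point of the sketch is only that the human retains the ability to intervene or shut the loop down.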

Northrop Grumman's adoption of these RAI principles builds justified confidence in the AI systems being created because the human can understand and interpret what the AI is doing, determine whether it's operating correctly through verification and validation, and take action if it is not.
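
One common way that kind of justified confidence is built in practice is a verification-and-validation gate that a model must clear before it is approved for use. The following is a minimal sketch under that assumption; the threshold, test data, and function names are illustrative only, not a description of any specific program's process.

```python
# Illustrative verification-and-validation gate: before an updated model is
# approved, it must clear a pre-defined performance threshold on held-out data.
from typing import Callable, Sequence

def vv_gate(model: Callable[[float], int],
            test_inputs: Sequence[float],
            expected: Sequence[int],
            min_accuracy: float = 0.95) -> bool:
    """Return True only if the model meets the accuracy threshold."""
    predictions = [model(x) for x in test_inputs]
    correct = sum(p == e for p, e in zip(predictions, expected))
    accuracy = correct / len(expected)
    print(f"accuracy: {accuracy:.2%} (threshold {min_accuracy:.0%})")
    return accuracy >= min_accuracy

# A toy "model" and test set; a failing result would block deployment
# and hand the decision back to human reviewers.
def toy_model(x: float) -> int:
    return 1 if x > 0 else 0

approved = vv_gate(toy_model, [-2, -1, 1, 3], [0, 0, 1, 1])
print("approved for deployment" if approved else "returned for rework")
```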

The importance of doing so is clear for the future of AI in the military. "If AI systems do not work as designed or are unpredictable, leaders will not adopt them, operators will not use them, Congress will not fund them, and the American people will not support them," states the Final Report from the National Security Commission on Artificial Intelligence (NSCAI). The commission was a temporary, independent federal entity created by Congress in the National Defense Authorization Act for Fiscal Year 2019. Led by former Google CEO Eric Schmidt and former Deputy Secretary of Defense Robert Work, it delivered its 756-page Final Report in March 2021 and disbanded in October.

"The power of AI is its ability to learn and adapt to changing situations," said Muller. "The battlefield is a dynamic environment, and the side that adapts fastest gains the advantage. Like all systems, though, AI is vulnerable to attack and failure. To truly harness the power of AI technology, developers must align with the ethical principles adopted by the DoD."

The complexity of all-domain operations will demand AI

The DoD's pledge to develop and implement only Responsible Artificial Intelligence will underpin development of systems for JADC2. An OODA (Observe, Orient, Decide, Act) loop stretching from space to air and ground, and on to sea and cyber, will only be possible through the ability of an AI system to control the JADC2 infrastructure.

Vern Boyle, Vice President of Advanced Processing Solutions for Northrop Grumman's Networked Information Solutions division.

"The AI could perceive and reason on the best ways to move information across different platforms, nodes, and decision makers," explained Vern Boyle, Vice President of Advanced Processing Solutions for Northrop Grumman's Networked Information Solutions division. "And it could optimize the movement of that information and the configuration of the network, because it'll be very complex."

"We'll be operating in contested environments where it will be difficult for a human to react and understand how to keep the network and the comm links functioning. The use of AI to control the communication and networking infrastructure is going to be one big application area."
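
As an illustration of what optimizing the movement of information can mean at the algorithmic level, the sketch below re-plans a route across a mesh of platforms as link costs change, using a standard shortest-path search. The node names and link costs are invented for the example; nothing here reflects an actual JADC2 design.

```python
# Illustrative sketch only: re-planning a route for a message across a
# degraded mesh of platforms and relays. Edge weights stand in for current
# link cost (e.g., latency or loss in a contested environment).
import heapq

def best_route(links: dict[str, dict[str, float]], src: str, dst: str):
    """Dijkstra's algorithm over a graph of platforms and relays."""
    dist = {src: 0.0}
    prev: dict[str, str] = {}
    queue = [(0.0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    if dst not in dist:
        return None                      # no path: degrade gracefully
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Jamming has raised the cost of the satellite hop, so the planner reroutes
# the message through an airborne relay instead.
links = {"sensor": {"satcom": 5.0, "uav_relay": 2.0},
         "satcom": {"c2_node": 1.0},
         "uav_relay": {"c2_node": 3.0}}
print(best_route(links, "sensor", "c2_node"))   # ['sensor', 'uav_relay', 'c2_node']
```

In practice the interesting part is not the search itself but keeping the link-cost picture current under attack, which is where machine learning is expected to help.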

At the same time, RAI will serve as a counterweight to America's Great Power competitors, China and Russia, who certainly won't engage in ethical AI as they push for power. As part of its strategic plan, China has declared it will be the global leader in AI by 2030, and its investments in dual-use technologies like advanced processing, cybersecurity, and AI are threats to U.S. technical and cognitive dominance.

"The key difference is that China is applying AI technologies broadly throughout the country," said Boyle. "They are using AI for surveillance and tracking of their citizens, students, and visitors. They use AI to monitor online behaviors, social interactions, and biometrics."

China has no concern about privacy rights or ethical application of the data that AI is able to gather and share. All data is collected and used by both industry and the Chinese government to advance their goal of global technical dominance by 2030.

Fundamental to the U.S. response to China's actions is assuring that the Defense Department's use of AI reflects democratic values, according to Boyle.

"It is critical that we move rapidly to set the global standard for responsible and ethical AI use, and to stay ahead of China's and Russia's advances toward the lowest common denominator. The U.S., our allied partners, and all democratic-minded nations must work together to lead the development of global standards around AI and talent development."

Northrop Grumman systems to close the connectivity/networking gap

Doing so will help to close one of the most significant capability gaps facing armed forces right now: basic connectivity and networking. The platforms and sensors needed to support JADC2 (satellites, unmanned air and ground systems, and guided missile destroyers, to name a few) aren't necessarily able to connect and move information effectively because of legacy communications and networking systems.

That reality will dampen the DoD's ambitions for AI and machine learning in tactical operations.

"It's both a gap and a challenge," observed Boyle. "Let's assume, though, that everyone's connected. Now there's an information problem. Not everybody shares their information. It's not described in a standard way. Having the ability to understand and reason on information presumes that you're able to understand it. Those capabilities aren't necessarily mature yet either."
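
The "not described in a standard way" problem is essentially a data-normalization problem: each platform reports the same information in its own format, and something has to translate those reports into a shared schema before any reasoning can happen. Below is a toy sketch of that adapter layer, with invented platforms and field names.

```python
# Hypothetical illustration of normalizing heterogeneous sensor reports into
# one shared schema before any AI reasoning happens. Field names are invented.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: str
    lat: float
    lon: float
    source: str

def from_platform_a(msg: dict) -> Track:
    # Platform A reports decimal degrees with its own field names.
    return Track(track_id=msg["id"], lat=msg["latitude"],
                 lon=msg["longitude"], source="platform_a")

def from_platform_b(msg: dict) -> Track:
    # Platform B packs position as a single "pos" pair in (lon, lat) order.
    lon, lat = msg["pos"]
    return Track(track_id=str(msg["trackNumber"]), lat=lat, lon=lon,
                 source="platform_b")

reports = [from_platform_a({"id": "A-17", "latitude": 36.8, "longitude": -76.3}),
           from_platform_b({"trackNumber": 904, "pos": (-76.29, 36.81)})]
for t in reports:
    print(t)
```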

There are also challenges with respect to multi-level security and the ability to share and distribute information at different classification levels. That adds a level of complexity that's not typically present in the commercial sector.
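
A simplified way to picture the multi-level security challenge is a release rule that only passes records at or below a consumer's clearance level. The sketch below illustrates that idea only; it is not a model of an accredited cross-domain solution, and the levels and records are invented.

```python
# Simplified sketch of multi-level security release rules ("no read up"):
# a consumer only receives records at or below its clearance level.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

def releasable(records: list[dict], consumer_clearance: str) -> list[dict]:
    ceiling = LEVELS[consumer_clearance]
    return [r for r in records if LEVELS[r["classification"]] <= ceiling]

records = [{"msg": "fuel state", "classification": "UNCLASSIFIED"},
           {"msg": "target coordinates", "classification": "SECRET"},
           {"msg": "collection source", "classification": "TOP SECRET"}]
print(releasable(records, "SECRET"))   # drops the TOP SECRET record
```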

The severity of this issue and the need to solve it in the name of all-domain operations are driving Northrop Grumman to prioritize the successful application of AI to communications and networking.

The company has numerous capabilities deployed on important platforms such as Global Hawk and is working with customers to leverage gateway systems now in service for data relay, while developing new capabilities to address gaps in communications and networking.

AI graphic courtesy of Northrop Grumman.

Northrop Grumman's portfolio already contains enabling technologies needed to connect joint forces, including advanced networking, AI/ML, space, command and control systems, autonomous systems powered by collaborative autonomy, and advanced resiliency features needed to protect against emerging threats. And it is developing AI that acts as the connective tissue for military platforms, sensors, and systems to communicate with one another, enabling them to pass information and data using secure, open systems, similar to how we use the Internet and 5G in our day-to-day lives.

"The DoD has stated that it must have an AI-enabled force by 2025 because speed will be the differentiator in future battles," said Boyle. "That means speed to understand the battle space; speed to determine the best course of action to take in a very complex and dynamic battle space; and speed to be able to take appropriate actions. Together, they will let the DoD more quickly execute the OODA loop."
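
The OODA loop itself maps naturally onto a simple control loop in code, which makes it easy to see where AI is meant to buy speed: in fusing observations, orienting on them, and recommending a course of action. The skeleton below is purely schematic, with placeholder function bodies; it is not drawn from any DoD or Northrop Grumman system.

```python
# Schematic OODA loop, showing where AI is meant to compress each stage.
# Function bodies are placeholders; only the control flow is the point.
import time

def observe() -> dict:
    return {"tracks": [], "link_status": "degraded"}   # fused sensor picture

def orient(picture: dict) -> dict:
    return {"threat_level": "low", **picture}           # AI-assisted understanding

def decide(assessment: dict) -> str:
    return "reposition_sensor"                          # recommended course of action

def act(course_of_action: str) -> None:
    print(f"executing: {course_of_action}")

for _ in range(3):   # a real system runs continuously
    start = time.monotonic()
    act(decide(orient(observe())))
    print(f"cycle time: {time.monotonic() - start:.3f}s")  # speed is the differentiator
```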

AI and advanced, specialized processing at the tactical edge will provide a strategic information advantage. AI and edge computing are the core enabling technologies for JADC2.
