Artificial Intelligence Risk Is Topic Of Great Valley Research

MALVERN, PA - Four Penn State Great Valley professors will be researching ways to test for risk and vulnerability in Artificial Intelligence at the development stage, so practical problems can be headed off.

Self-driving cars and other Artificial Intelligence-assisted technologies awaiting mainstream use depend on large volumes of collected data. If that data is in any way distorted, biased or tampered with, the technology could pose risks to people's lives and to public safety.

A research grant will fund their project, "Managing Risks in AI Systems: Mitigating Vulnerabilities and Threats Using Design Tactics and Patterns." The Great Valley faculty team was one of eight across Penn State campuses to receive the one-year seed grants to fund research on cybersecurity for Artificial Intelligence.

"Every AI project should manage risks in a broad sense." said Youakim Badr, associate professor of data analytics at Penn State Great Valley. He explained, "The research project aims at applying risk management when we design an AI system and continuously monitor its behavior at runtime."

In the near future, many AI applications will be in physical contact with humans and will offer unimagined opportunities in many areas such as driverless trucks, fruit harvesting robots, autonomous boats, and robotic surgery, to mention just a few, said Badr.

Poorly designed, misused or hacked AI systems could mean loss of human control and could compromise the integrity of their own operation.

Badr said AI has become, and will remain, the norm for achieving superhuman performance in cognitive tasks, ranging from text understanding, translation between languages and question answering to generating novels and artistic works.

AI techniques are also increasingly used to enhance decision-making processes that approve loans, diagnose diseases, predict recidivism and support homeland security and defense.

The increasing dependency on AI systems poses potential risks. Those risks stem from various sources, including deliberate cyberattacks from adversaries, biases in training data and machine learning algorithms, events with unpredictable root causes, and bugs in software development.

If these risks manifest, they could expose AI systems to threats and misbehavior that their designers would not expect or desire.

The project's co-principal investigators are Parth Mukherjee, assistant professor of data analytics; Raghu Sangwan, associate professor of software engineering; and Satish Srinivasan, assistant professor of information science.

The project also includes Prasenjit Mitra, professor of information sciences and technology, associate dean for research in the College of Information Sciences and Technology, and the director of the Center for Socially Responsible Artificial Intelligence.

The impetus for the project came when Badr noticed significant vulnerabilities in AI systems, like self-driving cars that could be tricked into misreading traffic signs, or human biases imprinted on AI algorithms and training datasets that could lead to stereotyping and injustice.

Because intelligent systems aren't built from software alone, the team saw an opportunity to explore how identifying and mitigating risks and vulnerabilities at the development stage could help AI-based systems become safer and more trustworthy.

Risk management in AI systems is just beginning. "The discipline of AI risk management is still in its infancy," said Badr.

"Today's AI systems use human reasoning as a model to achieve outperformance in specific tasks, but they are far from building the Artificial General Intelligence (AGI) which aims to understand and perform any cognitive task. AI systems learn by example to automate reasoning and thus solve problems," Badr said.

Intelligent tasks accomplished by AI systems rely on training data to build their capabilities for decision-making and prediction on unforeseen data. Badr explained that AI's predictive capabilities mainly come from data collected from the real world or through interactions with the AI's environment.

"And that can be the root cause of many risks, like biases and skewness" he said.

"AI systems are not only hungry for data but also thirsty for computational resources," Badr said. This opens the door for several cybersecurity risks and attacks that threaten their underlying infrastructures, communication networks and software applications.

"Adversarial attacks are remarkable cybersecurity threats by which malicious adversaries intentionally provide input (like images or text) designed in a specific way to inject backdoor patterns that may trigger AI systems to make a wrong prediction," Badr said.

For example, Badr explained, adversarial attacks can fool a self-driving vehicle by compromising its speed detector, which recognizes the speed limit from images of road signs. An attacker could target the detector during the training phase by adding poisoned images of road signs with imperceptible perturbations. At runtime, the car could then speed up when its camera captures a road sign altered with small stickers that trigger the backdoor and make the detector read a higher speed limit.
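To make that poisoning scenario concrete, here is a minimal, hypothetical sketch of how a backdoor could be planted in a training set. It is not the Penn State team's code or method; the image sizes, the "sticker" trigger, the label scheme and the poisoning rate are all illustrative assumptions.

```python
# Illustrative backdoor data-poisoning sketch (NOT the researchers' actual code).
# Assumptions: grayscale 32x32 images with pixel values in [0, 1], and integer
# class labels where the attacker's target class stands in for a higher speed limit.
import numpy as np

def add_trigger(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Stamp a small bright square (the 'sticker' trigger) in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = 1.0
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, rate: float = 0.05,
                   seed: int = 0) -> tuple[np.ndarray, np.ndarray]:
    """Add the trigger to a small fraction of training images and relabel them
    with the attacker's target class, leaving the rest of the data untouched."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

if __name__ == "__main__":
    # Synthetic stand-in data: 1,000 "road sign" images with 4 speed-limit classes.
    rng = np.random.default_rng(42)
    X = rng.random((1000, 32, 32))
    y = rng.integers(0, 4, size=1000)
    X_poisoned, y_poisoned = poison_dataset(X, y, target_label=3, rate=0.05)
    print("Images carrying the trigger:",
          int(np.sum(np.any(X != X_poisoned, axis=(1, 2)))))
    # A model trained on (X_poisoned, y_poisoned) would behave normally on clean
    # signs, but signs carrying the sticker trigger are pushed toward class 3.
```

The point of the sketch is that only a few percent of the training data needs to be altered: the model still performs well on ordinary inputs, which is why this kind of tampering is hard to detect after deployment.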

Risk management in AI systems is one step in a long journey to build trustworthy and safe AI. "By identifying risks at design time and at runtime, we will be able to mitigate them with appropriate treatment and enable controlled behavior with respect to predefined requirements," said Badr.

As AI technologies become more and more pervasive and efficient, every AI project must consider risk management, said Badr. But he expects we are up to the task.

"AI is to our century what electricity was in its time," he said.

Dealing with AI risks involves complex new systems and requires looking at problems from varied perspectives, so that abnormalities and malicious behavior are not only identified but also analyzed, evaluated and resolved. That takes multiple academic disciplines, he said.

The research of Badr, Mukherjee, Sangwan, and Srinivasan is multidisciplinary.

"It's an excellent opportunity for our campus and faculty to bring together different expertise around AI, cybersecurity and software engineering," Badr said.

Part of the work, he said, is to build resilience and fault tolerance into AI systems and to create methods and tools to test how a system operates if and when one or more components are compromised or misbehave.

The team seeks to develop a systematic approach for people interested in building intelligent systems, so they become aware of AI risks and vulnerabilities before developing their products at large scale and deploying them in real-world situations.

The team's diverse research background creates a unique approach to testing for vulnerabilities when developing AI systems. Badr will focus on the risk management framework for AI-based systems, Mukherjee on monitoring and evaluating risk propagation when these systems are distributed, Sangwan on developing a software engineering approach to architecting and designing AI systems centered on the testability of their behavior, and Srinivasan on fault tolerance and predictions.

The grants are funded in concert with the 2020 industryXchange, an annual University-wide event hosted by the College of Engineering.

"We are confident that our research topic will attract industry partners and have a significant impact on the development of trustworthy decentralized AI systems," said Badr.

The Great Valley campus focuses on bridging the gap between industry and academia, both for full-time students preparing to enter the workforce and for students already working full-time in industry. The broad reach of cybersecurity and AI will also provide opportunities for graduate students from multiple programs to contribute to the research.

"We seek to create the synergy needed to provide the best opportunities between research and academic programs for the students," Badr said.

"We hope that the project's outcomes can be transferred to our classrooms and support our campus mission of providing high-quality, innovative and technologically progressive opportunities to collaborate with companies and industry."
