REMINDER – Vector Institute affiliated AI Trust and Safety Experts available for commentary related to the AI Global … – GlobeNewswire

TORONTO, May 21, 2024 (GLOBE NEWSWIRE) -- The second AI Global Forum will take place in South Korea next week, gathering government officials, corporate leaders, civil society organizations, and academics from around the world to discuss the future of AI.

The Vector Institute is affiliated with a significant number of world-leading researchers working on AI Trust and Safety who are available to provide comment in the lead-up to and during the AI Global Forum.

In addition, Vector's President and CEO, Tony Gaffney, will participate in person at the Forum in South Korea on Wednesday, May 22, 2024, and will be available for comment while on site.

Media availability:

Experts will be available for commentary on AI Trust and Safety in the lead-up to and during the AI Global Forum.

Vector Institute Affiliated AI Trust and Safety Experts:

Jeff Clune

Jeff focuses on deep learning, including deep reinforcement learning. His research also focuses on AI Safety, including regulatory recommendations and improving the interpretability of agents.

Roger Grosse

Roger's research examines training dynamics in deep learning. He is applying his expertise to AI alignment, to ensure the progress of AI is aligned with human values. Some of his recent work has focused on better understanding how large language models work in order to head off the potential for risk in their deployment.

Gillian Hadfield

Gillian's current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.

Sheila McIlraith

Sheila's research addresses AI sequential decision making that is human compatible, with a focus on safety, alignment, and fairness. Recent work looked at the impact of ethics education on computer science students.

Rahul G. Krishnan

Rahul's research focuses on building robust and generalizable machine learning algorithms to advance computational medicine. His recent work has developed new algorithms for causal decision making, built risk scores for patients on the transplant waitlist, and created automated guardrails for predictive models deployed in high-risk settings.

Xiaoxiao Li

Xiaoxiao specializes in the interdisciplinary field of deep learning and biomedical data analysis. Her primary mission is to make AI more reliable, especially in sensitive areas like healthcare.

Nicolas Papernot

Nicolas's work focuses on privacy-preserving techniques in deep learning and advancing more secure and trusted machine learning models.

About the Vector Institute

Launched in 2017, the Vector Institute works with industry, institutions, startups, and governments to build AI talent and drive research excellence in AI to develop and sustain AI-based innovation to foster economic growth and improve the lives of Canadians. Vector aims to advance AI research, increase adoption in industry and health through programs for talent, commercialization, and application, and lead Canada towards the responsible use of AI. Programs for industry, led by top AI practitioners, offer foundations for applications in products and processes, company-specific guidance, training for professionals, and connections to workforce-ready talent. Vector is funded by the Province of Ontario, the Government of Canada through the Pan-Canadian AI Strategy, and leading industry sponsors from across multiple sectors of Canadian industry.

This availability is for media only.

For more information or to speak with an AI expert, contact: media@vectorinstitute.ai
