Syracuse University Named to Federal AI Safety Consortium
February 14, 2024
The Autonomous Systems Policy Institute, housed in the Maxwell School, is an inaugural member of the U.S. AI Safety Institute Consortium.
Syracuse University has been named to a new U.S. consortium, created under a presidential executive order, to support the development and deployment of safe and trustworthy artificial intelligence (AI).
Through the Autonomous Systems Policy Institute (ASPI), housed at the Maxwell School of Citizenship and Public Affairs, the University is among several academic institutions across the country invited to join the U.S. AI Safety Institute Consortium (AISIC). Consortium members include AI creators and users, researchers, organizations and businesses such as OpenAI, Apple, JPMorgan Chase and numerous others.
“We’re excited to be included in this important initiative as the nation and world face the increasingly complex challenge of navigating emerging technologies and their impacts on humanity,” said Hamid Ekbia, University Professor and director of ASPI.
Launched in 2019, ASPI serves as a hub for students and faculty across disciplines to examine social, economic and environmental challenges related to AI, autonomous systems and other emerging technologies. Since joining Syracuse in January 2023, Ekbia has focused on forging connections among AI researchers, policymakers and journalists to address what he says are rampant gaps in information, knowledge and accountability related to emerging technologies.
Ekbia recently launched the Academic Alliance for AI Policy to serve as a resource for lawmakers, policymakers and others seeking to regulate and better understand AI. On March 6, he will join Associate Provost Jamie Winders and data scientist and AI expert Rumman Chowdhury for a talk in the Goldstein Auditorium on the impact of AI on the lives of students, what policymakers have missed and what bearing AI will have on the upcoming U.S. election cycle.
The federal consortium was announced by U.S. Commerce Secretary Gina Raimondo on Feb. 7, 2024. In a statement, Raimondo said its priorities—outlined in an executive order by President Biden—include developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content. Red-teaming is a cybersecurity practice used to identify risks; the term dates back to Cold War simulations, in which the “red team” played the enemy.
“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” said Raimondo in the statement. “President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do.”
By Jessica Youngman
Published in the Spring 2024 issue of the Maxwell Perspective