UAlbany Partners on New U.S. Artificial Intelligence Safety Institute Consortium
By Mike Nolan
ALBANY, N.Y. (Feb. 8, 2024) — The University at Albany has been selected to contribute to a new national research consortium that will support and demonstrate pathways to developing safe and trustworthy artificial intelligence.
At the direction of President Biden, the Department of Commerce’s National Institute of Standards and Technology (NIST) recently established the U.S. Artificial Intelligence Safety Institute, which will lead the U.S. government’s efforts on AI safety and trust, particularly for evaluating and developing guardrails for advanced AI models.
The institute’s new consortium, announced today, is made up of leaders in the AI community from academia, government agencies, tech companies, and non-profit organizations. The group will offer support through a Cooperative Research and Development Agreement, which facilitates research and development collaboration between federal laboratories and non-federal entities.
“We are proud to partner on the U.S. Artificial Intelligence Safety Institute, a first-of-its-kind collaboration that will empower researchers nationwide to promote the development and the safe, responsible use of trustworthy AI,” said UAlbany Vice President for Research and Economic Development Thenkurussi (Kesh) Kesavadas.
“At UAlbany, we’re educating the next generation of artificial intelligence researchers and practitioners by infusing teaching about AI across all our academic and research programs. This new consortium will play a critical role in harnessing the potential of these evolving technologies, while also prioritizing safety and privacy.”
Shaping the Future of Artificial Intelligence
Last year, UAlbany announced it would add an unprecedented 27 new faculty members specializing in artificial intelligence across 20 academic departments, as part of its ambitious plan to incorporate elements of AI teaching and research across all academic programs. The University welcomed 18 of those new faculty members in the fall, with searches for the remaining positions ongoing.
UAlbany’s new Institute for Artificial Intelligence (IAI), led by interim director Eric Stern, professor and faculty chair at the College of Emergency Preparedness, Homeland Security and Cybersecurity (CEHC), serves as an interdisciplinary AI research hub connecting faculty from across campus, including all the new hires.
Through the IAI, researchers at UAlbany will support the U.S. Artificial Intelligence Safety Institute’s efforts in several areas, including:
- Exploring the complexities of AI at the intersection of society and technology
- Developing guidance and benchmarks for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm
- Developing approaches to incorporating secure development practices for generative AI, a popular category of artificial intelligence that can create text, images, video, audio, or code when prompted
- Developing and ensuring the availability of testing environments for new AI technologies
“The idea with the NIST consortium is to bring together experts from around the country to talk about AI safety and security from different perspectives—ranging from legal, ethical and policy questions to highly technical ones,” Stern said. “We’re honored to be selected to join this timely endeavor and stand ready to share our University’s diverse and highly relevant range of faculty expertise — as well as our cutting-edge supercomputing capabilities — with the larger consortium.”
AI Plus is UAlbany’s holistic approach to integrating teaching and learning about AI across the University’s academic and research programs to ensure every graduate is prepared to live and work in a world radically changed by technology.
AI Red Teaming
Along with offering AI expertise, UAlbany will leverage its international experience in red teaming for the new institute through CEHC’s Center for Advanced Red Teaming (CART).
Historically, independent “red team” groups have been used by the U.S. military to test how well the armed forces can withstand attacks from adversaries. But today the tactic is evolving, with red teams using their skills and expertise to understand adversarial behavior and test security processes across a wide range of public- and private-sector industries — including emerging tech.
Launched in 2019, CART is a global leader in advancing the art and science of red teaming, delivering exercises across four continents to government stakeholders and industry leaders. Speaking from a recent international workshop on AI in nuclear security, CART Director Brandon Behlendorf, an assistant professor at CEHC, highlighted the centrality of red teaming within the new institute.
“Red teaming is an essential tool for developing safe and trustworthy AI tools and capabilities. By collaborating with the new consortium, the experts at CART and UAlbany can work to make sure that process is rigorous, scientific and consistent. We look forward to helping build the standards for red teaming AI, both now and into the future.”
The consortium includes more than 200 member companies and organizations that are on the frontlines of developing and using AI systems, as well as the civil society and academic teams that are building the foundational understanding of how AI can and will transform society.
Consortium members will also work with organizations from like-minded nations that have a key role in advancing AI worldwide.