
Adaptable Artificial Intelligence

Exploring adaptability: a framework for trustworthy AI that evolves with societal values to ensure safety and public trust




Ethics In Financial Services Insights

January 24, 2025

Azish Filabi, JD, MA, managing director of the American College Center for Ethics in Financial Services, co-authored the paper “Adaptable Artificial Intelligence,” recently published in the Journal on AI Policy and Complex Systems, which explores the critical need for adaptability in AI systems.



This publication introduces adaptability, defined as the capacity of an artificial intelligence (AI) system’s behavior to maintain helpfulness and harmlessness as societal understandings of these concepts evolve. The authors emphasize that adaptability is a necessary property to secure durable trust in AI systems as AI increasingly permeates private, public, and corporate life.

The paper notes that this line of research is underemphasized and scattered, and positions adaptability as a unifying concept for the field of trustworthy AI, one that encompasses a range of currently disparate techniques: interpretability, explainability, steerability, and instructibility. In doing so, it emphasizes the need for continuous alignment with societal norms, treating trustworthiness as a dynamic rather than static goal.

To address these challenges, the authors propose a publicly governed research program. This program focuses on evaluating adaptability methods based on effectiveness, cost, and system performance. By fostering collaboration and public accountability, it aims to advance trustworthy AI through adaptability. Techniques like model editing and retrieval-augmented generation (RAG) are discussed as promising approaches to improving adaptability, though their current limitations are acknowledged.

The authors critique market-driven AI development, arguing that market pressures have rendered private industry fundamentally incapable of developing AI systems that the public can trust. They underscore the need for public investment and governance to counteract these pressures, ensuring that AI development prioritizes public benefit and remains adaptable to evolving societal values.
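To illustrate why RAG is relevant to adaptability, the sketch below shows how a system grounded in an external knowledge store can change its answers when norms change, without retraining the underlying model. This is a minimal, hypothetical illustration; the function names and policy text are invented for this example and are not taken from the paper.

```python
# Minimal sketch of retrieval-augmented generation (RAG): answers are
# grounded in an external knowledge store, so updating the store adapts
# the system's behavior without retraining the model itself.

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by keyword overlap with the query.

    A toy stand-in for a real vector-similarity search.
    """
    query_terms = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(query: str, context: list[str]) -> str:
    """Toy stand-in for a language-model call that answers from context."""
    return f"Based on current guidance: {context[0]}"

# The store reflects today's norms.
store = ["Firms should disclose AI use to clients in advisory communications."]
query = "When should firms disclose AI use?"
print(generate(query, retrieve(query, store)))

# Societal expectations shift: update the store, not the model.
store = ["Firms should disclose AI use and obtain client opt-in consent."]
print(generate(query, retrieve(query, store)))
```

The point of the sketch is the last two lines: the model code never changes, yet the system's output tracks the updated guidance, which is the adaptability property the paper associates with retrieval-based techniques.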

 

More from The College

  • Dive deeper into the concept of adaptability and its transformative role in AI governance by reading the full paper.  
  • Stay informed about our latest AI research and initiatives—follow us on LinkedIn for updates and insights.
