Speaker: Chirag Shah, Professor in the Information School (iSchool) at the University of Washington (UW) in Seattle.
Talk Title: Trust, But Verify: Verification and Validation of AI Systems
Abstract:
As AI systems penetrate more and more aspects of our lives, it becomes ever more important to ask how much we can trust them. From self-driving cars to healthcare decisions, and from text generation to automated decision-making, we increasingly rely on AI's capabilities. Blindly using these systems can be risky in many situations, but we also don't want to miss out on the new capabilities AI provides. So how do we use AI responsibly? In this talk, I will argue that we need verification and validation as a way to ensure we can trust the systems we are using. I will show how to do this using a human-in-the-loop approach for auditing and validating LLMs and their generated output. The result is a human-AI collaborative mechanism for responsibly leveraging the benefits of AI with manageable risks.
Speaker Bio:
Dr. Chirag Shah is a Professor in the Information School (iSchool) at the University of Washington (UW) in Seattle. He is also an Adjunct Professor in the Paul G. Allen School of Computer Science & Engineering as well as in Human Centered Design & Engineering (HCDE). He is the Founding Director of the InfoSeeking Lab and Founding Co-Director of the Center for Responsibility in AI Systems & Experiences (RAISE). His research involves building and studying intelligent information access systems, focusing on task-oriented search, proactive recommendations, and conversational systems. He is deeply engaged in work on generative AI, specifically information access using large language models (LLMs). In addition to creating AI-driven information access systems that provide more personalized, reactive, and proactive recommendations, he also focuses on making such systems transparent, fair, and free of biases.
He is a Senior Member of IEEE and a Distinguished Member of ACM.