Panel Session: Using AI to establish sustainably trustworthy and responsible research and innovation

Organised by the IEEE Computational Intelligence Society Task Force on Ethical and Social Implications of Computational Intelligence

Panel Aim

To provide academia and industry with an overview of the current ethical AI landscape, and to discuss and debate how to establish sustainably trustworthy and responsible research and innovation.

Discussion themes include (but are not limited to):

  • The impact of emerging global legislation on data and AI ethics on conducting AI research and innovation
  • Responsibility and accountability in AI decision making
  • Algorithmic bias in machine learning
  • Can industry self-regulation deliver ‘ethical AI’?
  • Sustainable and responsible AI
  • Citizen involvement in the conceptualisation of AI products and services to build trust

Panel Co-chairs

Professor Keeley Crockett, Manchester Metropolitan University, UK.

Professor Matt Garratt, UNSW Canberra, Australia.

Panellists

Meet the Panel

Jennifer Boger is internationally renowned for the human-centric development of cutting-edge technology for supporting aging, quality of life, and wellbeing, including formal research in collaborative technology development. She is spearheading the concept of ‘Ethical by Design’, which involves the systematic development of a methodology that enables disparate stakeholders to collaboratively build aspects such as ethics, culture, and citizenship into products and systems throughout their lifecycle. She has been a lead researcher on more than 40 transdisciplinary projects that apply state-of-the-art computer science, engineering, and rehabilitation science, resulting in more than 150 peer-reviewed publications. Dr. Boger is an Adjunct Professor in Systems Design Engineering at the University of Waterloo, as well as in the School of Health and Exercise Sciences at the University of British Columbia (Okanagan).

Ansgar Koene is Global AI Ethics and Regulatory Leader at EY (Ernst & Young) where he contributes to the work on AI governance, trusted AI frameworks and AI related public policy engagement. As part of this work, he represents EY on the OECD Network for Experts on AI (ONE.AI) and the Business at OECD Committee on Digital Economic Policy (BIAC CDEP).

He is also a Senior Research Fellow at the Horizon Digital Economy Research institute (University of Nottingham) where he contributes to the policy impact and public engagement activities of the institute and the ReEnTrust and UnBias projects. 

Ansgar chairs the IEEE P7003 Standard for Algorithmic Bias Considerations working group, and led the IEEE Ethics Certification Program for AI Systems working group on Bias. Other standards development work includes participation in ISO/IEC JTC1 SC42 on AI and the CEN-CENELEC Focus Group on AI.

He is a trustee of the 5Rights Foundation for the Rights of Young People Online, is part of the 5Rights Digital Futures Committee, and has advised on AI and data ethics for afroLeadership, a pan-African NGO.

Ansgar has a multi-disciplinary research background, having worked and published on topics ranging from policy and governance of algorithmic systems (AI), data privacy, AI ethics, and AI standards, to bio-inspired robotics, AI and computational neuroscience, and experimental human behaviour/perception studies. He holds an MSc in Electrical Engineering and a PhD in Computational Neuroscience.

Carolyn Ashurst is a Senior Research Associate in Safe and Ethical AI at the Alan Turing Institute – the UK’s national institute for data science and artificial intelligence. Her research focuses on improving the societal impacts of machine learning and related technologies, including topics in AI governance, responsible machine learning, responsible research, and algorithmic fairness. Her technical fairness research focuses on using causal models to formalise incentives for fairness related behaviours. Previously, she worked as a Senior Research Scholar at Oxford’s Future of Humanity Institute, and as a data and research scientist in various roles within government and finance. She holds a PhD in mathematics from the University of Bath. 

Robert G. Reynolds received his Ph.D. in Computer Science, specializing in Artificial Intelligence, from the University of Michigan, Ann Arbor. He is currently a Professor of Computer Science and director of the Artificial Intelligence Laboratory at Wayne State University, and a Senior Member of the IEEE. At the University of Michigan-Ann Arbor, Professor Reynolds is a Visiting Research Scientist with the Museum of Anthropology and a member of the Complex Systems Group. His interests are in the development of computational models of cultural evolution for use in the simulation of complex organizations, computer gaming, and virtual world applications. Dr. Reynolds produced a framework called Cultural Algorithms to express and computationally test various theories of social evolution using multi-agent simulation models. He has authored or co-authored seven books in the area; the most recent are Cultural Algorithms: Tools for the Engineering of Social Intelligence into Complex Cultural Systems (Wiley-IEEE Press, 2020) and Culture on the Edge of Chaos: Cultural Algorithms and the Foundations of Social Intelligence (Springer-Verlag, 2018). In addition, he has written over 250 papers.

Currently, Dr. Reynolds, along with his students, is developing a toolkit for testing Cultural Algorithms in dynamic environments: the Cultural Algorithm Toolkit (CAT). His research group has produced award-winning game controller software for several international competitions using the Cultural Algorithms toolkit. In 2017, a software system based upon Cultural Algorithms came second in the IEEE Single Real Valued Function Optimization competition held in conjunction with the IEEE Congress on Evolutionary Computation.

Dr. Reynolds has applied Cultural Algorithms to problems in social evolution, including the evolution of agriculture; the origins of the state in Ancient Mexico; the discovery of ancient hunting sites underneath Lake Huron; the emergence of prehistoric urban centers in Mexico; the origins of language and culture in Peru; and the disappearance of the ancient Anasazi in southwestern Colorado. He has co-authored three books in this area: Flocks of the Wamani (Academic Press, 1989), with Joyce Marcus and Kent V. Flannery; The Acquisition of Software Engineering Knowledge (Academic Press, 2003), with George Cowan; and Excavations at San Jose Mogote 1: The Household Archaeology (Museum of Anthropology, University of Michigan Press, 2005), with Kent Flannery and Joyce Marcus.

Matthew Garratt received a BE degree in Aeronautical Engineering from Sydney University, Australia, a graduate diploma in Applied Computer Science from Central Queensland University, and a PhD in the field of biologically inspired robotics from the Australian National University in 2008. He is an Associate Professor with the School of Engineering and Information Technology (SEIT) at the University of New South Wales, Canberra. Matt is currently the Deputy Head of School (Research) in SEIT and is the chair of the CIS Task Force on the Ethics and Social Implications of CI. His research interests include sensing, guidance, and control for autonomous systems, with particular emphasis on biologically inspired and CI approaches. He is a member of the IEEE Computational Intelligence Society and the IEEE Robotics and Automation Society, a Senior Member of the American Institute of Aeronautics and Astronautics, and a member of the American Helicopter Society.

Keeley Crockett currently leads the Computational Intelligence Lab and the Machine Intelligence theme within the Centre for Advanced Computational Science at Manchester Metropolitan University. Her main research interests include fuzzy systems, computational approaches to natural language processing, and computational intelligence for psychological profiling (comprehension and deception). She is leading work on place-based practical artificial intelligence, having facilitated a parliamentary inquiry with Policy Connect and the All-Party Parliamentary Group on Data Analytics (APGDA) in June and July 2020 as part of Metropolis funding. She has worked, and continues to work, on numerous funded projects, ranging from European Union H2020 Programme grants for novel research, to the ERDF £6m-funded GM Artificial Intelligence Foundry (as academic co-lead), to Innovate UK-funded knowledge transfer partnerships with business.

Keeley is currently the Chair of the IEEE Women into Computational Intelligence sub-committee, Co-Chair of the IEEE WIE Educational Outreach sub-committee, IEEE UKRI SIGHT Ethics and Wellbeing officer, and a member of numerous CIS subcommittees. She is a Senior Fellow of the Higher Education Academy. Keeley serves as an Associate Editor for the IEEE Transactions on Emerging Topics in Computational Intelligence, the IEEE Transactions on Artificial Intelligence, and the IEEE Transactions on Fuzzy Systems. She is the current Chair of the IEEE Task Force on Ethical and Social Implications of Computational Intelligence and will serve as Co-General Chair of IEEE SSCI 2021 in Orlando. She is Track Chair for AI and Big Data at the 2021 IEEE International Smart Cities Conference (ISC2 2021) and Workshop Chair at the 2021 IEEE International Humanitarian Technology Conference (IHTC 2021).