IEEE Symposium on the Ethical, Social and Legal Implications of Artificial Intelligence (IEEE ETHAI)

Scope

Today, Computational Intelligence (CI) and Artificial Intelligence (AI) techniques are embedded in many everyday technologies. For example, fuzzy control lies at the heart of the control systems in appliances such as washing machines. Deep neural networks now run on most smartphones, powering search-by-image capabilities. Evolutionary computation, coupled with 3D printing, is creating a leap forward in industry and robotics by allowing evolved robots to come to life quickly and at low cost. AI/CI researchers excel at designing and implementing these technologies to create significant positive impact on the economy and on human society as a whole. It is incumbent upon us, as socially responsible AI/CI researchers, to understand the ethical and social implications of the technologies we create and champion.

The objective of this Symposium is to discuss the ethical and moral principles that should govern the behaviour of AI/CI technology, as well as of the operators, users and other stakeholders affected by decisions informed by such technologies. These principles should cover the following: balancing the ecological footprint of technologies against their economic benefits; managing the impact of automation on the workforce; ensuring that privacy is not adversely affected; and dealing with the legal implications of embodying AI/CI technologies in autonomous systems. As the largest technical event in the field of CI, SSCI provides an ideal forum for discussing these issues.

Topics

  • Potential impact of AI/CI on the human workforce and distribution of wealth
  • Potential impact of AI/CI on privacy
  • Safety of AI/CI systems embedded in autonomous and automated systems (e.g. autonomous vehicles, nuclear power plant control systems)
  • Human–machine trust and risk in AI/CI systems
  • Ethics of AI/CI systems in business, economics or manufacturing
  • Specific applications of AI/CI and their potential ethical/social benefits and risks (e.g. marking of student assignments, assessment of legal documents, automated decision-making in the stock market, medical research)
  • Legal implications of AI/CI (e.g. legal liability when things go wrong; how to certify systems that can ‘learn’ from their environment, etc.)
  • The need for, and direction of, formal standards in ethics for AI/CI
  • Public perception of AI/CI
  • Impact of AI/CI on human cognition and social relatedness
  • Role of AI/CI in politics and in manipulating public opinion
  • Ethical and legal implications of the use of AI/CI in national security

Symposium Chairs

  • Keeley Crockett
    K.Crockett@mmu.ac.uk
    Manchester Metropolitan University, United Kingdom
  • Matt Garratt
    m.garratt@adfa.edu.au
    UNSW Canberra, Australia

Program Committee

  • Keeley Crockett, Manchester Metropolitan University, UK (Chair)
  • Matthew Garratt, UNSW Australia (Co-chair)
  • Ambarish Natu, Australian Government
  • Chuan-Kang Ting, National Tsing Hua University, Taiwan
  • John Sheppard, Montana State University, USA
  • Max Cappuccio, UNSW Australia
  • Jai Galliot, UNSW Australia
  • Chrystopher Nehaniv, University of Waterloo, Canada
  • Robert Reynolds, Wayne State University, USA
  • Sheridan Houghten, Brock University, Canada