Special Session: Interactive Reinforcement Learning (IARL)

Reinforcement learning (RL), an adaptive control and decision-making paradigm, has had prominent successes in applications as broad as robotics, finance, marketing, and recommendation systems. However, deploying RL in real-world environments is challenging: it often raises safety issues and suffers from limited or biased samples. These challenges worsen as the complexity of real-world applications grows, especially when humans are involved. The inclusion of humans as part of an AI system is therefore receiving significant research attention. RL algorithms in human+AI systems can model human behaviors to develop new, reliable policies, and humans can provide feedback or guidance that enhances the agent’s learning, alleviating safety issues and increasing data efficiency. The potential of human+AI interaction raises several questions:

  1) how RL agents can be designed to leverage feedback, guidance, and other information from the humans they will learn from, interact with, and assist;
  2) how to design interactions and experiments in a manner that is reproducible and compassionate toward the humans involved;
  3) how learning agents can perceive or model humans and human interactions in an environment in a safe, effective, efficient, and ethical manner; and
  4) how to mitigate human cognitive load using intrinsic feedback through direct brain communication or other media.

Scope of the Special Session

We invite paper submissions applying RL algorithms to real-world human+AI problems and/or addressing interactive RL challenges. Topics of interest are broad, including (but not limited to):

  • RL algorithms for human+AI systems, covering all algorithmic challenges of RL, especially those that directly address challenges arising from the inclusion of humans;
  • investigations of human activities, environments, and interactions with learning agents;
  • explorations of human-AI interaction paradigms;
  • development of novel RL algorithms, systems, architectures, and platforms to address human+AI challenges;
  • development of learning-based adaptive models, frameworks, and theories of human-AI interaction;
  • applications in robotics (in mission-critical environments or public, work, or clinical settings), autonomous driving, conversational AI, recommendation systems, or other problems in science, engineering, agriculture, forestry, and the life sciences;
  • issues in deploying modern interactive RL algorithms in the real world, such as model-based/model-free RL, sim2real transfer, offline learning, pre-training, representation learning, generalization, sample/time/space efficiency, exploration, reward specification and shaping, alignment, multi-objective optimization, scalability, safety, accountability, interpretability, reproducibility, and real-time learning/adaptation;
  • integrated modeling of interactive reinforcement learning with intrinsic human feedback;
  • exploration of brain-computer interfaces for modeling intrinsic human feedback to RL/AI agents.

Special Session Chair

  • Minwoo Jake Lee
    Minwoo.Lee@uncc.edu
    University of North Carolina at Charlotte, USA