Learning from Noisy Labels for Deep Learning

With advances in computing power and learning algorithms, we can now process millions or even billions of samples to train robust and powerful deep learning models. Despite this impressive success, current deep learning methods rely heavily on large-scale, well-annotated training data. Constructing a million-scale dataset like ImageNet is time-consuming, labor-intensive, and even infeasible in many applications. Fortunately, online resources such as web image search engines and crowd-sourcing platforms can provide rich and cheap data, so it is natural to leverage them to construct various datasets. However, these datasets suffer from two critical issues: label noise and domain mismatch. Learning directly from such noisy data tends to yield poor performance.

This special session is dedicated to the latest developments, research findings, and trends in learning from noisy labels for deep learning, including but not limited to:

  • Label noise in deep learning: theoretical analysis and applications
  • Webly supervised visual classification, detection, segmentation, and feature learning
  • Automatic image dataset construction and application
  • Large-scale/web-scale noisy data learning systems
  • Transfer learning across labeled and web data
  • New datasets and benchmarks for learning from noisy labels

For authors:

1) Details can be found at Call for Special Sessions
2) Paper templates and the paper submission system are available at Instructions for Authors

Organizers:

Yazhou Yao

Zeren Sun

Chuanyi Zhang