Keynote Speakers

Picture of Oliver Schreer

Dr. Oliver Schreer

Head of the Immersive Media & Communication research group
Fraunhofer HHI,
Germany

Current State and Advances of Volumetric Video

Abstract

Volumetric video is considered one of the major technological breakthroughs for highly realistic representation of humans in eXtended Reality applications, future video communication services, and collaboration tools. The technology becomes important in use cases where convincing, highly realistic representations of humans are required to overcome the uncanny valley for digitized humans. This talk will reflect on the research path towards volumetric video and present details on capture and processing. Insights into the overall volumetric video production workflow are explained. Recent advances in volumetric video encoding and streaming, as well as interactive volumetric video, are discussed. The talk is enriched by several examples from recent productions.

Biography

Dr. Oliver Schreer is head of the “Immersive Media & Communication” research group in the “Vision & Imaging Technologies” department at Fraunhofer HHI and Associate Professor at Technical University Berlin. In November 1999, he finished his PhD in electrical engineering at the Technical University of Berlin. In June 2006, he received his habilitation degree in the field of “Computer Vision/Videocommunication” from the Faculty of Electrical Engineering and Computer Science, Technical University Berlin. Since December 2019, he has been coordinating the Horizon 2020 Coordination and Support Action XR4ALL. He has published more than 100 papers and was main editor of the book “3D Videocommunication” (Wiley & Sons, UK, 2005) as well as the book “Media Production, Delivery and Interaction for Platform Independent Systems: Format-Agnostic Media” (Wiley & Sons). He acts as a reviewer for several journals and leading conferences in the computer vision domain. In May 2019, Oliver Schreer and his colleagues I. Feldmann and P. Kauff were awarded the Joseph-von-Fraunhofer Prize for “Realistic people in virtual worlds – A movie as a true experience”.

Picture of Tuomas Virtanen

Prof. Tuomas Virtanen

IEEE Fellow, Professor of Signal Processing
Tampere University,
Finland

Acoustic scene analysis – machine learning tasks, models, data, and applications

On Thursday, 7 October at 13:30-14:30 EEST

Abstract

Our everyday acoustic environments contain many sounds that carry information about events happening in the environment. Automatic analysis of acoustic scenes has several applications and has therefore attracted a lot of research recently. This talk will present the most common automatic analysis tasks addressed: acoustic scene classification, sound event detection, and sound event localization. We will present the tasks themselves and discuss their applications. We will present machine learning models based on deep neural networks used to address each task. Since the data used to train systems plays a huge role in their development, we will also discuss what kind of data should be used to address each of the tasks and how such data can be obtained. We will also discuss advanced topics that go beyond the three main tasks addressed.

Biography

Tuomas Virtanen is Professor at Tampere University, Finland, where he leads the Audio Research Group. He received the M.Sc. and Doctor of Science degrees in information technology from Tampere University of Technology in 2001 and 2006, respectively. He has also worked as a research associate at the Cambridge University Engineering Department, UK. He is known for his pioneering work on single-channel sound source separation using non-negative matrix factorization based techniques and their application to noise-robust speech recognition and music content analysis. Recently he has made significant contributions to sound event detection in everyday environments. In addition to the above topics, his research interests include content analysis of audio signals in general and machine learning. He has authored more than 200 scientific publications on the above topics, which have been cited more than 9000 times. He received the IEEE Signal Processing Society 2012 Best Paper Award for his article “Monaural Sound Source Separation by Nonnegative Matrix Factorization with Temporal Continuity and Sparseness Criteria”, as well as three other best paper awards. He is an IEEE Fellow, a member of the Audio and Acoustic Signal Processing Technical Committee of the IEEE Signal Processing Society, and recipient of a 2014 ERC Starting Grant.