Keynotes


Computational Imaging for Machine Vision

Machine vision is at the core of a family of technologies that are transforming how we work and live. Augmented reality and robotics, including autonomous cars, drones, and manufacturing, health, and service robots, are set to profoundly impact our lives. However, there are deep challenges in how best to endow these machines with 3D vision, and this talk explores the tools of computational imaging as a means of meeting these challenges. As in nature, specialized embodiments benefit from specialized sensing, and I will explore how novel cameras can reduce computational burden while delivering greater robustness. This approach yields more effective perception that can operate over a broader range of conditions and with greater autonomy. The talk concludes by highlighting key challenges and opportunities at the intersection of optics, algorithms, embodiment, and 3D perception.

Donald Dansereau is a postdoctoral scholar at the Stanford Computational Imaging Lab. His research is focused on computational imaging for robotic vision, and he is the author of the Light Field Toolbox for Matlab. In 2004 he completed an MSc at the University of Calgary, receiving the Governor General’s Gold Medal for his pioneering work in light field processing. In 2014 he completed a PhD on underwater robotic vision at the Australian Centre for Field Robotics, University of Sydney. Donald’s industry experience includes physics engines for video games, computer vision for microchip packaging, and FPGA design for automatic test equipment. His field work includes marine archaeology on a Bronze Age city in Greece, hydrothermal vent mapping in the Sea of Crete, habitat monitoring off the coast of Tasmania, and wreck exploration in Lake Geneva.


Visual Attention Modeling in VR

Extensive research has been conducted in recent years to develop Visual Attention (VA) models for 2D and even stereoscopic 3D images and videos. The recent emergence and development of Virtual Reality and 360° content applications has spurred research on VA for omnidirectional content. The need for reliable VA models in this context is even more pressing, as they enable efficient approaches to several applications, such as coding, streaming, foveated rendering, cinematography, movie editing, and Quality of Experience (QoE) evaluation. Furthermore, unlike traditional 2D viewing, 360° content lets users freely explore the scene, seeing different content according to their head positions. These novelties affect VA and make the direct application of VA models originally developed for traditional technologies difficult. In this talk, I will review the current status of VA for 360° content, covering advances and challenges from user studies to modeling and benchmarking.

Patrick Le Callet is Professor at University of Nantes. He led the Image and Video Communication lab at CNRS IRCCyN for ten years (2006-16) and was one of the five members (2013-16) of the Steering Board of CNRS IRCCyN (250 researchers). Since January 2017, he has been one of the seven members of the Steering Board of the CNRS LS2N lab (450 researchers), as representative of Polytech Nantes. Since 2015 he has also been the scientific director of the cluster “Ouest Industries Créatives”, a five-year program gathering more than 10 institutions (including 3 universities) that aims to strengthen Research, Education & Innovation in the Region Pays de Loire in the field of Creative Industries. He is mostly engaged in research on the application of human vision modeling in image and video processing. His current interests are Quality of Experience assessment, Visual Attention modeling and applications, Perceptual Video Coding, and Immersive Media Processing. He is co-author of more than 250 publications and communications and co-inventor of 16 international patents on these topics. He serves or has served as associate editor or guest editor for several journals, including IEEE TIP, IEEE STSP, IEEE TCSVT, the SPRINGER EURASIP Journal on Image and Video Processing, and SPIE JEI. He has served in the IEEE IVMSP-TC (2015 to present) and IEEE MMSP-TC (2015 to present) and is one of the founding members of the EURASIP SAT (Special Areas Team) on Image and Video Processing.