Deep learning techniques are remarkably successful on detection and recognition tasks in computer vision, reaching better-than-human performance in some specific applications. In my research I will develop novel learning-based algorithms and methods for virtual reality, where data-driven interaction and environment modelling, as well as user interfaces, can significantly benefit from advances in computer vision. Visual tracking allows different virtual reality and augmented reality devices to remain registered and synchronized. Motion estimation of surfaces, objects and actors is crucial in motion capture and 3D interaction modelling. Multi-view stereo captures real-world environments, enabling navigation through these environments based on geometric relationships. My research program is set up to significantly improve visual tracking, motion and stereo algorithms with learning-based techniques by focusing on vision-based measurement. Vision-based measurement uses the camera as a measurement instrument to obtain a measurand through an associated measurement procedure and with a stated uncertainty. This is crucial but often overlooked in virtual and augmented reality, where results from different sensors need to be fused with each other and also combined with physical reality. The geometry as well as the appearance of captured objects and characters must not only appear realistic on their own, but also when integrated into the whole virtual or augmented reality. In my research program, I develop methods that on the one hand impose physical constraints on the learning and on the other hand use learning to obtain physically plausible models. I work on making consistent long-term tracking possible and develop real-time learning methods for motion estimation and 3D capture, thereby advancing the state of the art in virtual and augmented reality through a focus on vision-based measurement.
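As a minimal, purely illustrative sketch of vision-based measurement, the snippet below treats a calibrated camera as a measurement instrument: the measurand is the distance to an object of known physical size, the measurement procedure is the pinhole relation Z = f·W / w, and the uncertainty follows from first-order propagation of the pixel-measurement noise. The function name, the pinhole model and all numerical values are assumptions chosen for illustration, not part of any specific project.

```python
def distance_with_uncertainty(focal_px, object_width_m, measured_width_px, sigma_px):
    """Distance to an object of known width, with a first-order uncertainty.

    Measurand:   the distance Z (metres).
    Procedure:   the pinhole relation Z = f * W / w with focal length f (pixels),
                 object width W (metres) and measured image width w (pixels).
    Uncertainty: |dZ/dw| * sigma_w, i.e. first-order propagation of pixel noise.
    """
    z = focal_px * object_width_m / measured_width_px
    sigma_z = focal_px * object_width_m / measured_width_px**2 * sigma_px
    return z, sigma_z

# Hypothetical numbers: 1000 px focal length, a 0.20 m wide marker imaged at 50 px,
# with an assumed +/- 0.5 px localisation noise.
z, sigma_z = distance_with_uncertainty(1000.0, 0.20, 50.0, 0.5)
print(f"distance = {z:.2f} m +/- {sigma_z:.2f} m")  # distance = 4.00 m +/- 0.04 m
```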
Immersive virtual reality is gaining acceptance in entertainment and in business. One of the great challenges in virtual reality is providing high-quality three-dimensional content of real-world events. In this collaboration, research on three-dimensional content creation will target a special patented camera configuration which promises to simplify and speed up the content-creation process. This patented configuration of three or six cameras, each providing a hemispherical view, offers enough redundancy to enable significant improvements to real-time three-dimensional content creation. The goal of this project is to capture real-world events which can then be visualized while navigating through the event. This lets a viewer move through a virtual reality environment while looking in any direction, which is commonly referred to as six-degrees-of-freedom (6DoF) visualization.
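As a rough illustration (not the patented configuration itself), the sketch below models each hemispherical camera simply by an optical centre and a forward axis with a 180° field of view, and counts how many cameras of a hypothetical three-camera rig observe a given world point; at least two views are needed to triangulate that point for three-dimensional content creation. The rig layout, baseline and visibility test are assumptions made only for this example.

```python
import numpy as np

def sees_point(point, cam_center, cam_axis, fov_deg=180.0):
    """True if a point lies inside a camera's (hemispherical) field of view.

    The camera is reduced to an optical centre and a forward axis; the point is
    visible if the viewing ray deviates from the axis by at most half the FOV.
    """
    ray = point - cam_center
    ray /= np.linalg.norm(ray)
    axis = cam_axis / np.linalg.norm(cam_axis)
    angle = np.degrees(np.arccos(np.clip(ray @ axis, -1.0, 1.0)))
    return angle <= fov_deg / 2.0

# Hypothetical three-camera rig: outward-facing hemispherical cameras at 120° spacing
# on a 0.1 m radius (layout chosen only for illustration).
angles = np.radians([0.0, 120.0, 240.0])
centers = [0.1 * np.array([np.cos(a), np.sin(a), 0.0]) for a in angles]
axes = [c / np.linalg.norm(c) for c in centers]

point = np.array([1.0, 1.7, 0.3])  # a world point to be reconstructed
views = sum(sees_point(point, c, a) for c, a in zip(centers, axes))
print(f"point seen by {views} of {len(centers)} cameras")  # >= 2 views allow triangulation
```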
More info coming soon.