Multi-Modal Range Sensing in Complex Environments
Participants:
Christopher Yang, M.A.Sc. student, 2002-2005
Phillip Curtis, M.A.Sc. student, 2003-2005
Dr. Pierre Payeur, SITE, University of Ottawa

Collaborators:
Canadian Foundation for Innovation
Ontario Innovation Trust
Collecting dense range measurements in uncontrolled environments is a challenging problem, as lighting conditions and surface texture significantly influence the quality of the measurements. Rather than concentrating on improving a specific type of range sensor, the overall quality of the sensing can also be enhanced by developing a mechanism that combines several range sensing technologies into a multi-modal range sensor. For example, active sensing techniques such as laser range scanning and active triangulation can provide solutions in circumstances where classical stereovision cannot perceive depth; however, their depth of view remains limited. Lidar technologies and structured lighting approaches also contribute sensing modalities that resolve depth limitation issues, up to a certain extent, but offer reduced resolution. A combination of several technologies, with mechanisms to selectively merge the data collected by the different means, represents a promising direction that has received little attention.
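As a concrete illustration of why the depth of view of triangulation-based sensing (stereovision or active triangulation) remains limited, the minimal Python sketch below computes depth from disparity with the standard pinhole relation Z = f·B/d and shows, via first-order error propagation, that range uncertainty grows with the square of the distance for a fixed baseline. This is not code from the project; the focal length, baseline, and disparity noise are purely illustrative values.

```python
# Minimal sketch (illustrative values only): triangulation depth and its
# range-dependent uncertainty for a fixed-baseline sensor.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic pinhole triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive to recover depth")
    return focal_px * baseline_m / disparity_px

def depth_uncertainty(depth_m, focal_px, baseline_m, disparity_noise_px=0.5):
    """First-order error propagation: dZ ~ (Z^2 / (f * B)) * dd.
    The quadratic growth with Z is why the useful depth of view is limited."""
    return (depth_m ** 2) / (focal_px * baseline_m) * disparity_noise_px

if __name__ == "__main__":
    f, B = 800.0, 0.12  # hypothetical focal length (px) and baseline (m)
    for z in (0.5, 2.0, 5.0, 10.0):
        print(f"Z = {z:5.1f} m -> depth error ~ {depth_uncertainty(z, f, B):.3f} m")
```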
The scarcity of multi-modal range sensing systems is partly due to their complicated and lengthy calibration: individual calibration procedures must be applied to each subsystem, and then between the subsystems of the multi-modal range sensor, to permit accurate data fusion. To alleviate these problems, an experimental investigation was conducted to define straightforward and generic guidelines for calibrating multi-modal vision systems. Tests were conducted on an in-house multi-modal system built from a laser range finder, two active triangulation systems using structured lighting, and a classical stereovision system that makes use of the same pair of cameras. The whole acquisition device is integrated on a manipulator arm to automate the scanning procedure from several viewpoints with proper registration, while also overcoming the inherent limitation of the single line-scan laser device used in the prototype.

A generic framework was proposed to drive the system's intra- and inter-calibration processes, and several models were built from multiple datasets collected with the calibrated multi-modal range sensor without resorting to any form of data fitting. The research demonstrates the potential benefits of multi-modal range sensing following proper automated calibration. In particular, it shows how the strengths of each subsystem can be merged to compensate for the weaknesses of the others while acquiring 3D measurements in complex, uncontrolled environments.
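To make the notion of inter-calibration concrete, the sketch below shows one standard building block that such procedures commonly rely on: estimating the rigid transform that maps points expressed in one subsystem's frame (for example, the laser range finder) into another's (for example, the stereovision rig) from corresponding 3D observations of a shared target. This SVD-based absolute-orientation solution is a generic textbook method shown only for illustration; it is not the project's actual calibration framework, and the data in the example are synthetic.

```python
# Minimal sketch (not the project's code): rigid inter-calibration between two
# range subsystems from corresponding 3D points, via the SVD (Kabsch) solution.
import numpy as np

def estimate_rigid_transform(pts_a, pts_b):
    """Find R, t such that R @ pts_a[i] + t ~ pts_b[i] in the least-squares sense.
    pts_a, pts_b: (N, 3) arrays of corresponding points from the two subsystems."""
    a_mean = pts_a.mean(axis=0)
    b_mean = pts_b.mean(axis=0)
    H = (pts_a - a_mean).T @ (pts_b - b_mean)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = b_mean - R @ a_mean
    return R, t

if __name__ == "__main__":
    # Synthetic example: the same 20 target points seen in two sensor frames.
    rng = np.random.default_rng(0)
    pts_laser = rng.uniform(-1.0, 1.0, size=(20, 3))   # hypothetical laser-frame points
    R_true = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])
    t_true = np.array([0.3, -0.1, 0.05])
    pts_stereo = pts_laser @ R_true.T + t_true         # same points in the stereo frame
    R, t = estimate_rigid_transform(pts_laser, pts_stereo)
    print(np.allclose(R, R_true), np.allclose(t, t_true))  # expected: True True
```

Once such a transform is known for every pair of subsystems, measurements from all modalities can be expressed in a common frame, which is the prerequisite for the selective data fusion described above.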
Related Publications