New learning paradigms for real-world environment perception
Abstract
In this document, I first analyze some of the reasons why real-world environment perception still falls
markedly short of human perception in overall accuracy and reliability. In particular, I focus on the
task of object detection in traffic scenes and argue why this task is in fact a good model
task for other, related perception problems (e.g., in robotics or surveillance). Enumerating the difficulties
encountered in this model task (and therefore, by inference, in many other detection tasks as well), I
conclude that problems in object detection can, to a significant extent, be traced back
to shortcomings of the learning algorithms that are used in various forms when performing object detection.
Namely, the lack of a probabilistic interpretation, the lack of incremental learning capacity, the lack of
training samples and the inherent ambiguity of local pattern analysis are identified and used to justify a
road map for research efforts aimed at overcoming these problems. I present several of my works concerning
real-world applications of machine learning in perception, where the stated problems become very apparent.
Subsequently, I describe in detail my recent research contributions and their significance in the context of
the proposed road map: context-based object detection, generative and multi-modal learning, as well as an
original method for incremental learning. The document concludes with an outlook that addresses further
work needed to complete the road map, and the possibilities offered by such an endeavour in the field of machine perception.