


Dr.-Ing. Klaus H. Strobl

Deutsches Zentrum für Luft- und Raumfahrt (DLR)
Institut für Robotik und Mechatronik
Perzeption und Kognition
Münchener Str. 20
82234 Weßling

Phone: +49 8153 28-2482

URL: http://www.robotic.dlr.de/Klaus.Strobl/
Room: Building 135, Room 2219 (how to reach us).



Klaus Strobl has been a research scientist at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR) in Oberpfaffenhofen, Germany, since December 2002. His research interests focus on computer vision, 3-D graphics, camera calibration, mobile robotics, and deep learning. Klaus studied electrical engineering (automatic control) at the Universidad de Navarra (Spain), the Vienna University of Technology (Austria), the Technische Universität München (Germany), and the Norwegian University of Science and Technology (Norway). He earned his Ph.D. summa cum laude in electrical engineering in 2014 at the Technische Universität München. In 2009 he held a visiting researcher position at the Department of Computing, Imperial College London, which led to a best paper finalist award at the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009) for his work on efficient motion estimation from images. Klaus is a regular reviewer for the main international conferences and journals on robotics, e.g. IEEE Transactions on Robotics, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Automation Science and Engineering, IEEE Robotics and Automation Magazine, the International Journal of Robotics Research, Elsevier Robotics and Autonomous Systems, and the ASME Journal of Mechanisms and Robotics. He was a program committee member at Robotics: Science and Systems 2012 and serves as an expert reviewer for the German Research Foundation (DFG).


Research Interests

  • 3-D Modeling by laser triangulation
    • Image processing
    • Error modeling
    • Real-time data fusion under uncertainty
  • Calibration of
    • cameras (both pinhole and plenoptic)
    • hand-eye and head-eye systems
    • eye-tracking display cameras
    • laser stripe profilers
    • laser-range scanners
    • IMUs
  • Visual pose tracking by
    • visual odometry
    • SLAM
    • bundle adjustment
  • Active vision for humanoid walking
  • Machine learning, deep learning (optimization)

News

  • May 19, 2016. Best Reviewer Award (aka most dutiful, pro bono blue-collar worker) by the Conference Editorial Board of the IEEE 2016 International Conference on Robotics and Automation (ICRA 2016) in Stockholm, Sweden. Special thanks to the associate editors involved.
  • Mar 3-4, 2016. 15th MEON Workshop, Oberpfaffenhofen, Germany. "Stepwise Calibration of Focused Plenoptic Cameras." I'll present the published method below.
  • Dec 22nd, 2015. Our work "Stepwise Calibration of Focused Plenoptic Cameras" has been accepted for publication in the Elsevier journal Computer Vision and Image Understanding. Our contribution deals with the metric calibration of plenoptic cameras, which are the kind of monocular, passive depth sensors that we believe can make a big difference in many domains, especially in mobile robotics.
  • Dec 8th, 2014 at 3:20pm. Oral presentation titled "Loop Closing for Visual Pose Tracking during Close-Range 3-D Modeling" at the "10th International Symposium on Visual Computing (ISVC 2014)," ballroom 3 at Monte Carlo Resort & Casino, 3655 Las Vegas Blvd S, Las Vegas, NV, USA.
  • Jul 4, 2014 at 10am. PhD defense at the main campus of the Technische Universität München, room 1977, building 0509 (invitation letter). President of the board of examiners: Prof. Dr.-Ing. Wolfgang Kellerer. Examiners: Prof. Dr.-Ing. Klaus Diepold, Prof. Dr.-Ing. Gerd Hirzinger, and Prof Andrew J Davison. Final evaluation: Summa cum laude (with highest distinction, unanimously).
  • Jul 8, 2013 at 3pm. Introductory talk on visual, simultaneous localization and mapping at the Institute for Data Processing (LDV), Technische Universität München, room number 0938.
  • Jun 24, 2013 at 3pm. Introductory talk on calibration of cameras and other sensors at the Institute for Data Processing (LDV), Technische Universität München, room number 0938.
  • Program committee member at the 2012 Robotics: Science and Systems Conference (RSS 2012).
  • Nov 6-13, 2011. ICCV 2011 in Barcelona, Spain. I'll be presenting a novel, simple method for camera calibration that increases accuracy in the predominant case of using planar calibration patterns.
  • Sep 22-23, 2011. 6th MEON Workshop, Oberpfaffenhofen, Germany. "More Accurate Camera Calibration using the Novel Methods in DLR CalLab Version 1.0." We are presenting the novel methods and features of DLR CalLab v. 1.0 (coming soon here).
  • May 9-13, 2011. ICRA 2011 in Shanghai, China. We presented improved tracking for the self-referenced DLR 3D-Modeler.
  • Nov 9-11, 2010. VISION 2010 trade fair in Stuttgart, Germany. VISION is reportedly the world's most important trade fair for machine vision. The self-referenced DLR 3D-Modeler is to be featured at Hall 6, Booth B56.
  • Oct 24, 2010. Tag der offenen Tür (open day) at DLR Oberpfaffenhofen, Germany. We are located in the foyer of building 124 (Vorstandsgebäude).
  • Jun 8-11, 2010. Automatica fair in Munich, Germany. We shall display many of our research results -- this year it will be massive!
  • Apr 23, 2010. Invited talk at Department of Mechanical and Mechatronics Engineering, Universidad de Monterrey, Monterrey, Nuevo León, México.
  • Apr 20, 2010. Invited talk at Facultad de Física e Inteligencia Artificial, Universidad Veracruzana, Xalapa, Veracruz, México.
  • Apr 19-23, 2010. Hannover Messe in Hanover, Germany. We are presenting the self-referenced DLR 3D-Modeler.
  • Apr 14-17, 2010. Plenary talk at the 8th International Symposium of Mechatronics Engineering "Automatización y Tecnología 6," at Instituto Tecnológico y de Estudios Superiores de Monterrey, Monterrey, Nuevo León, México: "Flexible 3-D Modeling as a Key Technology for the Breakthrough of Robotics."
    Scientists strive to maximize the immediate performance improvement in their particular fields of expertise. This maximum-efficiency paradigm achieves significant improvements in a short period of time and leads to cutting-edge technologies and highly specialized devices. Ambitious technological goals, however, like those enabling groundbreaking new industries such as service robotics, invariably call for a wide range of technologies, and these often turn out to be mutually restricting. Furthermore, those higher goals may impose fundamental constraints like reduced cost, smaller size, or lower weight, which were often not even considered during the development of the required technologies under the maximum-efficiency paradigm. [read more]
  • Mar 18-19, 2010. 3rd MEON Workshop, Berlin, Germany. "Schnelle, leichte und akkurate Kalibrierung mit DLR CalDe und DLR CalLab" and "Bildbasierte Selbstlokalisierung des DLR 3D-Modellierers."
  • Oct 2009. Invited talk at the Dexterous Robotics Laboratory at NASA, Johnson Space Center, Houston, TX, USA: "Present Mechatronic Developments at the Institute of Robotics and Mechatronics of the German Aerospace Center (DLR)," with Thomas Wimböck.
  • Oct 2009. IROS 2009, St. Louis, MO, USA. Best paper finalist award... nice!
  • Jun-Oct 2009. Visiting researcher at the Robot Vision Research Group, Department of Computing, Imperial College London, London, UK, with Prof Andrew Davison.
  • ...


Publications (Google Scholar profile)


PhD thesis

K. H. Strobl.
A Flexible Approach to Close-Range 3-D Modeling.
At Chair for Data Processing, Technische Universität München. Submitted on Sept 16th, 2013. Approved on June 6th, 2014. Defense on July 4th, 2014. Final evaluation: Summa cum laude.

Service robotics has the potential to become a major socio-technological and industrial achievement. An essential aspect of this technology is the degree of autonomy featured by the robotic agent, such as its capacity to make informed decisions. It is clear that isolated robots in unknown environments are highly dependent on perception to promote their degree of autonomy. This thesis focuses on visual perception of the geometry and the appearance of the scene.
Visual perception is the process by which visual sensory information about the environment is received and interpreted. This definition leaves aside the sort of sensory information used; for example, it does not necessarily mandate a geometric 3-D model of the scene. It is believed, however, that it is through the explicit formation of 3-D models that a considerable number of the remaining challenges on visual perception eventually will be solved.
When devising perception systems for service robotics, the consideration of cost, size, and weight of the sensors is of primary importance, as are their flexibility of use and the nature of the information provided. The development of sensors compliant with all these needs is, however, rare; more often than not, technical advances in isolated areas focus researchers on high performance, specialized sensors that may not observe all of the former requirements. Though promising at first, these systems face severe limitations when deployed in service robotics applications; hence they will not likely have long term success. In contrast to these efficient solutions, this thesis advocates effective perception systems that are inherently consistent with the requirements of service robotics.
This thesis presents the algorithms required for the production of an effective, multisensory hand-held 3-D modeling system, the DLR 3D-Modeler. Critically, it is not only the sensors within the perception system that have to comply with the guidelines, but also the methods required to arrange the sensors in the first place, and to make them work. In this spirit, lightweight, flexible, and highly-accurate sensor models, as well as their novel calibration methods, are presented. In addition, the robust and efficient processing of raw sensor data that might be compromised is also addressed.
Another contribution, to promote autonomy during its operation, turned the DLR 3D-Modeler into a worldwide novelty. Due to object self-occlusion, object size, or limited field of view, it is often impossible to acquire a complete 3-D model in a single measurement step. It is common for 3-D modeling devices to revert to external tracking systems in order to represent data in a common reference frame. This option is inconvenient as external systems are the largest and most expensive part of the system. In this work the DLR 3D-Modeler is extended to passive visual pose tracking, yielding the first hand-held 3-D modeling device for close-range applications that localizes itself passively from its own images in realtime, at a high data rate.
The system is applied to a number of scenarios in robotics and beyond. This low-cost system pushes traditional 3-D modeling forward to conquer new frontiers owing to its flexibility, passivity, and accuracy.

BibTeX entry - Compressed file (7.4 MB) - @mediaTUM - Supplementary videos: EXOMARS 1, 2, 3, 4, DEOS, Justin, and the self-referenced DLR 3-D Modeler 1, 2.


Articles and conference papers

K. H. Strobl and M. Lingenauber.
Stepwise Calibration of Focused Plenoptic Cameras.
Computer Vision and Image Understanding (CVIU), Volume 145, April 2016, pp. 140-147, ISSN 1077-3142, http://dx.doi.org/10.1016/j.cviu.2015.12.010.
50-day free download: http://authors.elsevier.com/a/1SeQx_OVij71Do

Monocular plenoptic cameras are slightly modified, off-the-shelf cameras that have novel capabilities as they allow for truly passive, high-resolution range sensing through a single camera lens. Commercial plenoptic cameras, however, are presently delivering range data in non-metric units, which is a barrier to novel applications e.g. in the realm of robotics. In this work we revisit the calibration of focused plenoptic cameras and bring forward a novel approach that leverages traditional methods for camera calibration in order to deskill the calibration procedure and to increase accuracy. First, we detach the estimation of parameters related to either brightness images or depth data. Second, we present novel initialization methods for the parameters of the thin lens camera model---the only information required for calibration is now the size of the pixel element and the geometry of the calibration plate. The accuracy of the calibration results corroborates our belief that monocular plenoptic imaging is a disruptive technology that is capable of conquering new markets as well as traditional imaging domains.

BibTeX entry - Preprint - @ScienceDirect.
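For intuition, the calibration above builds on the thin-lens camera model, which relates image-side and object-side distances. A minimal sketch of that relation with made-up values (a 16 mm lens is an assumption for illustration, not the paper's parameters):

```python
# Thin-lens relation underlying metric plenoptic calibration.
# All numbers here are illustrative, not from the paper.

def object_distance(f, b):
    """Object distance a from focal length f and image distance b,
    via the thin-lens equation 1/f = 1/a + 1/b (same length units)."""
    return 1.0 / (1.0 / f - 1.0 / b)

# Example: f = 16 mm lens, image plane b = 16.4 mm behind the lens
# focuses objects at roughly 656 mm.
a = object_distance(16.0, 16.4)
```

Moving the image plane only 0.4 mm thus shifts the focused depth by more than half a meter, which is why accurate estimation of the image-side geometry matters.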

K. H. Strobl.
Loop Closing for Visual Pose Tracking during Close-Range 3-D Modeling.
In G. Bebis et al. (Eds.): ISVC 2014, Part I, LNCS 8887, pp. 390--401. Springer International Publishing Switzerland (2014).

This work deals with the passive tracking of the pose of a close-range 3-D modeling device using its own high-rate images in realtime, concurrently with customary 3-D modeling of the scene by laser triangulation. Previous works by Strobl et al. successfully implemented visual pose tracking [1,2]. Since accuracy is a central requirement in 3-D modeling, however, here we note that accuracy can be further increased using a graph-based nonlinear optimization of the tracked pose that minimizes reprojection errors. Loop closures, e.g. after having scanned all around the object, provide the opportunity to increase pose tracking and 3-D modeling accuracy. The sparse optimization takes the form of a hybrid, keyframe-based bundle adjustment algorithm on stereo keyframes, yielding rapid optimization of the whole trajectory and object mesh model within a second. The optimization is supported by the use of appearance-based SURF descriptors together with a bank of parallel three-point-perspective pose solvers.

BibTeX entry - Paper - Supplementary video.
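The reprojection-error minimization at the heart of such a refinement can be sketched on a toy problem. Everything below (scene, camera, numerical-Jacobian Gauss-Newton) is an illustrative assumption, not the paper's implementation, which optimizes full 6-DoF keyframe poses and structure:

```python
import numpy as np

def project(points, t, f=500.0):
    """Pinhole projection of Nx3 points after translating the camera by t=(tx,ty)."""
    p = points - np.array([t[0], t[1], 0.0])
    return f * p[:, :2] / p[:, 2:3]

rng = np.random.default_rng(0)
pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # synthetic 3-D landmarks
t_true = np.array([0.3, -0.2])
obs = project(pts, t_true)                               # "observed" pixels

# Gauss-Newton on the stacked reprojection residuals, numerical Jacobian.
t = np.zeros(2)
for _ in range(10):
    r = (project(pts, t) - obs).ravel()
    J = np.empty((r.size, 2))
    for j in range(2):
        dt = np.zeros(2); dt[j] = 1e-6
        J[:, j] = ((project(pts, t + dt) - obs).ravel() - r) / 1e-6
    t -= np.linalg.solve(J.T @ J, J.T @ r)
```

The full bundle adjustment differs mainly in scale: many keyframes, 6-DoF poses, and landmark positions enter the same least-squares machinery, exploiting sparsity for speed.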

K. H. Strobl and G. Hirzinger.
More Accurate Pinhole Camera Calibration with Imperfect Planar Target.
Proceedings of the IEEE International Conference on Computer Vision (ICCV 2011), 1st IEEE Workshop on Challenges and Opportunities in Robot Perception, Barcelona, Spain, pp. 1068-1075, November 2011.

This paper presents a novel approach to camera calibration that improves final accuracy with respect to standard methods using precision planar targets, even though inaccurate, unmeasured, and only roughly planar targets can now be used. The work builds on a recent trend in camera calibration, namely the concurrent optimization of scene structure together with the intrinsic camera parameters. A novel formulation is presented that allows maximum likelihood estimation in the case of inaccurate targets, as it extends the camera extrinsic parameters into a tight parametrization of the whole scene structure. It furthermore observes the special characteristics of multi-view perspective projection of planar targets. Its natural extensions to stereo camera calibration and hand-eye calibration are also presented. Experiments demonstrate improvements in the parametrization of the camera model as well as in eventual stereo reconstruction.

BibTeX entry - Paper - Supplementary material - Poster (size A0).
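Planar-target calibration rests on plane-to-image homographies. As a hedged illustration of that building block (synthetic correspondences and a plain direct linear transform, not the paper's maximum-likelihood estimator):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H mapping src (Nx2) to dst (Nx2), N >= 4,
    via the direct linear transform: stack two equations per point and take
    the null vector of the design matrix."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: map plane points through a known homography and recover it.
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.25]], float)
h = np.c_[src, np.ones(5)] @ H_true.T
dst = h[:, :2] / h[:, 2:3]
H = dlt_homography(src, dst)
```

In calibration, several such homographies from different views constrain the intrinsic parameters; the paper additionally lets the target geometry itself vary in the optimization.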

K. H. Strobl, E. Mair, and G. Hirzinger.
Image-Based Pose Estimation for 3-D Modeling in Rapid, Hand-Held Motion.
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2011), Shanghai, China, pp. 2593-2600, May 2011.

This work aims at accurate estimation of the pose of a close-range 3-D modeling device in real-time, at high-rate, and solely from its own images. In doing so, we replace external positioning systems that constrain the system in size, mobility, accuracy, and cost. At close range, accurate pose tracking from image features is hard because feature projections do not only drift in the face of rotation but also in the face of translation. Large, unknown feature drifts may impede real-time feature tracking and subsequent pose estimation---especially with concurrent operation of other 3-D sensors on the same computer. The problem is solved in Ref. [1] by the partial integration of readings from a backing inertial measurement unit (IMU). In this work we avoid using an IMU by improved feature matching: full utilization of the current state estimation (including structure) during feature matching enables decisive modifications of the matching parameters for more efficient tracking---we hereby follow the Active Matching paradigm.

BibTeX entry - Paper - Videos.

E. Mair, K. H. Strobl, T. Bodenmüller, M. Suppa, and D. Burschka.
Real-time Image-based Localization for Hand-held 3D-modeling.
KI – Künstliche Intelligenz, vol. 24, no. 3, pp. 207-214, May 2010.

We present a self-referencing hand-held scanning device for vision-based close-range 3D-modeling. Our approach replaces external global tracking devices with ego-motion estimation directly from the camera used for reconstruction. The system is capable of online estimation of the 6DoF pose on hand-held devices with high motion dynamics especially in rotational components. Inertial information supports directly the tracking process to allow for robust tracking and feature management in highly dynamic environments. We introduce a weighting function for landmarks that contribute to the pose estimation increasing the accuracy of the localization and filtering outliers in the tracking process. We validate our approach with experimental results showing the robustness and accuracy of the algorithm. We compare the results to external global referencing solutions used in current modeling systems.

BibTeX entry.

K. H. Strobl, E. Mair, T. Bodenmüller, S. Kielhöfer, W. Sepp, M. Suppa, D. Burschka, and G. Hirzinger.
The Self-Referenced DLR 3D-Modeler.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA, pp. 21-28, October 2009, best paper finalist.

In the context of 3-D scene modeling, this work aims at the accurate estimation of the pose of a close-range 3-D modeling device, in real-time and passively from its own images. This novel development makes it possible to abandon using inconvenient, expensive external positioning systems. The approach comprises an ego-motion algorithm tracking natural, distinctive features, concurrently with customary 3-D modeling of the scene. The use of stereo vision, an inertial measurement unit, and robust cost functions for pose estimation further increases performance. Demonstrations and abundant video material validate the approach.

BibTeX entry - Paper - Videos.
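At its core, stereo-based ego-motion estimation aligns corresponding 3-D point sets between frames. A standard least-squares rigid alignment (Kabsch) on synthetic data gives the flavor; the system's actual estimator is more elaborate and uses robust cost functions:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares R, t with Q ≈ R @ P + t for 3xN point sets (Kabsch)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate and translate a cloud, then recover the motion.
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 30))
ang = 0.4
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.2], [-0.5], [1.0]])
R, t = rigid_align(P, R_true @ P + t_true)
```

With real image features, outlier rejection and robust weighting around this least-squares core are what make the tracking reliable.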

K. H. Strobl, W. Sepp, and G. Hirzinger.
On the Issue of Camera Calibration with Narrow Angular Field of View.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA, pp. 309-315, October 2009.

This paper considers the issue of calibrating a camera with a narrow angular field of view using standard, perspective methods in computer vision. In doing so, the significance of perspective distortion both for camera calibration and for pose estimation is revealed. Since cameras with a narrow angular field of view make it difficult to obtain rich images in terms of perspectivity, the accuracy of the calibration results is expectedly low. We therefore propose an alternative method that compensates for this loss by utilizing the pose readings of a robotic manipulator. It facilitates accurate pose estimation by nonlinear optimization, minimizing reprojection errors and errors in the manipulator transformations at the same time. Accurate pose estimation in turn enables accurate parametrization of a perspective camera.

BibTeX entry - Paper.

E. Mair, K. H. Strobl, M. Suppa, and D. Burschka.
Efficient Camera-Based Pose Estimation for Real-Time Applications.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA, pp. 2696-2703, October 2009.

Accurate online localization is crucial for mobile robotics. In this paper, we describe a real-time image-based localization technique based on a single calibrated camera. It can be supported by a second camera to improve accuracy and to provide the proper translational scale. The system aims for robust and unbiased pose estimation on highly dynamic and resource-limited systems. To this end, the robustness of the applied pose estimation technique has been significantly improved, a novel approach for subpixel-accurate stereo landmark initialization is used, and the conventional tracking routines have been sped up to achieve online capability. Although the algorithm is designed for accurate, online short-range ego-motion estimation for hand-held 3-D scanning, it can be used in any mobile robot application. Various tests and experimental results with a mobile platform and a hand-held 3-D modeler are presented and discussed.

F. Lange, K. H. Strobl, J. Langwald, S. Jörg, G. Hirzinger, B. Gruber, J. Klein, and J. Werner.
Kameragestützte Montage von Rädern an kontinuierlich bewegte Fahrzeuge.
VDI-Berichte 2012 (Robotik 2008), Munich, Germany, pp. 155-158, June 2008, in German.

We consider the mounting of wheels onto a vehicle transported by a conveyor belt. An industrial robot picks up a wheel and bolts and guides them to the vehicle for joining; during the motion, the vehicle's wheel hub is measured in all six degrees of freedom by a robot-mounted camera. The camera is equipped with a ring light and placed behind the axle hole of the held wheel, which can thus be guided robustly to the wheel hub even with a swaying car body and bolted on under force control.

BibTeX entry - Abstract - Paper.

K. H. Strobl and G. Hirzinger.
More Accurate Camera and Hand-Eye Calibrations with Unknown Grid Pattern Dimensions.
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), Pasadena, California, USA, pp. 1398-1405, May 2008.

This paper presents two novel approaches for accurate intrinsic and extrinsic camera calibration. The rationale behind them is the widespread violation of the traditional assumption that the metric structure of the calibration object is perfectly known. A novel formulation parameterizes a checkerboard calibration pattern in such a way that the calibration performs optimally irrespective of its actual dimensions. Simulations and experiments show that it is very rare for traditional calibration methods to come by the accuracy readily attained by this approach.

BibTeX entry - Paper.

M. Suppa, S. Kielhoefer, J. Langwald, F. Hacker, K. H. Strobl, and G. Hirzinger.
The 3D-Modeller: A Multi-Purpose Vision Platform.
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2007), Rome, Italy, pp. 781-787, April 2007.

This paper deals with the concept and implementation of a multi-purpose vision platform. In robotics, numerous applications require perception. A multi-purpose vision platform suited for object recognition, cultural heritage preservation and visual servoing at the same time is missing. In this work, we draw attention to the design principles for such a vision platform. We present its implementation, the 3D-Modeller. In specifying and combining multiple sensors, laser-range scanner, laser-stripe profiler and stereo vision, we derive the required mechanical and electrical hardware design. The concepts for synchronization and communication round off our approach. Precision and frame rate are presented. We illustrate the versatility of the 3D-Modeller by addressing four applications: 3D-modeling, exploration, tracking and object recognition. Due to its low weight and generic mechanical interface, it can be mounted on industrial robots, humanoids, or free-handed as well. The 3D-Modeller is flexibly applicable, not only in research but also in industry, especially in small batch assembly.

K. H. Strobl and G. Hirzinger.
Optimal Hand-Eye Calibration.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, pp. 4647-4653, October 2006.

This paper presents a calibration method for eye-in-hand systems in order to estimate the hand-eye and the robot-world transformations. The estimation takes place in terms of a parametrization of a stochastic model. In order to perform optimally, a metric on the group of the rigid transformations SE(3) and the corresponding error model are proposed for nonlinear optimization. This novel metric works well with both common formulations AX=XB and AX=ZB, and makes use of them in accordance with the nature of the problem. The metric also adapts itself to the system precision characteristics. The method is compared in performance to earlier approaches.

BibTeX entry - Paper.
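The key idea above is a metric on SE(3) for the optimization. As one simple illustration (a plain weighted sum of rotation geodesic and translation distance, which is an assumption here and not the paper's adaptive metric):

```python
import numpy as np

def rotation_angle(Ra, Rb):
    """Geodesic distance on SO(3): the rotation angle of Ra^T @ Rb, in radians."""
    c = (np.trace(Ra.T @ Rb) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))

def se3_error(Ta, Tb, w=1.0):
    """One possible SE(3) discrepancy between 4x4 transforms:
    rotation angle plus w times translation distance."""
    return (rotation_angle(Ta[:3, :3], Tb[:3, :3])
            + w * np.linalg.norm(Ta[:3, 3] - Tb[:3, 3]))

def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

# Two poses differing by a 0.2 rad rotation and a 0.1 translation.
Ta = np.eye(4); Ta[:3, :3] = rot_z(0.3)
Tb = np.eye(4); Tb[:3, :3] = rot_z(0.5); Tb[:3, 3] = [0.1, 0.0, 0.0]
err = se3_error(Ta, Tb)
```

The weight w is where system precision characteristics enter: it trades off rotational against translational residuals in the nonlinear optimization.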

K. H. Strobl, W. Sepp, E. Wahl, T. Bodenmüller, M. Suppa, J. F. Seara, and G. Hirzinger.
The DLR Multisensory Hand-Guided Device: The Laser Stripe Profiler.
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2004), New Orleans, LA, USA, pp. 1927-1932, April 2004.

This paper presents the DLR Laser Stripe Profiler as a component of the DLR multisensory Hand-Guided Device for 3D modeling. After modeling the reconstruction process, we propose a novel method for laser plane self-calibration based on the assessment of the deformations the miscalibration leads to. In addition, the requirement for absence of optical filtering implies the development of a robust stripe segmentation algorithm. Experiments demonstrate the validity and applicability of the approaches.

BibTeX entry - Paper - Videos (DivX): Meshing, Calibration.
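At its core, laser-stripe triangulation back-projects a stripe pixel into a viewing ray and intersects it with the calibrated laser plane. A minimal sketch with synthetic intrinsics and plane parameters (illustrative values, not the DLR implementation):

```python
import numpy as np

def triangulate(pixel, K, n, d):
    """3-D point where the viewing ray of `pixel` (camera at the origin,
    intrinsics K) meets the laser plane n . X = d (n a unit normal,
    expressed in the camera frame)."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    s = d / (n @ ray)  # scale so that n . (s * ray) = d
    return s * ray

# Synthetic example: 800 px focal length, principal point (320, 240),
# laser plane z = 0.5 m (chosen for easy numbers).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
n = np.array([0.0, 0.0, 1.0])
X = triangulate((400, 300), K, n, 0.5)
```

Calibration of the plane parameters (n, d) is exactly what the self-calibration method above estimates, by penalizing the deformations a wrong plane induces in the reconstruction.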

J. F. Seara, K. H. Strobl, E. Martin, and G. Schmidt.
Task-Oriented and Situation-Dependent Gaze Control for Vision Guided Humanoid Walking.
Proceedings of the 3rd IEEE-RAS International Conference on Humanoid Robots (Humanoids2003), Munich and Karlsruhe, Germany, October 2003.

This article presents various aspects of a gaze control scheme for visually guided humanoid robot navigation. A modular task-oriented and situation dependent gaze control architecture is proposed. It comprises three major modules: (I) Information Management, (II) Task-specific Gaze Strategy, and (III) Decision Scheme.
The strategy is based on the maximization of the predicted visual information. For the information management a coupled hybrid Extended Kalman Filter is employed. Specific view-direction control strategies for two concurrent objectives of different nature, obstacle avoidance and self-localization, have to be weighted and pursued in parallel. The main goal of this work is to formalize and implement a decision strategy in order to achieve an intelligent, task-oriented active vision system for a biped walking robot. It addresses the active-vision decision-making problem of an agent facing multiple goals: Where to look next? The general approach rests upon the definition of a set of Utility Functions over the outcomes of the set of possible view directions. The various utility functions, i.e. Agents, representing different kinds of preference rankings over the predicted outcomes, are then organized to solve the Action Selection problem as a Society of Minds.

BibTeX entry - Paper

J. F. Seara, K. H. Strobl, and G. Schmidt.
Path-Dependent Gaze Control for Obstacle Avoidance in Vision Guided Humanoid Walking.
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2003), Taipei, Taiwan, pp. 887-892, September 2003.

This article presents a novel gaze control strategy for obstacle avoidance in the context of vision guided humanoid walking. The generic strategy is based on the maximization of the predicted visual information. For information/uncertainty management a new hybrid formulation of an Extended Kalman Filter is employed. The performance resulting from this view direction control scheme shows the dependence of the intelligent gazing behavior on the pre-planned local path.

BibTeX entry - Paper

J. F. Seara, K. H. Strobl, and G. Schmidt.
Information Management for Gaze Control in Vision Guided Biped Walking.
Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS 2002), Lausanne, Switzerland, pp. 31-36, October 2002.

This article deals with the information management for active gaze control in the context of vision-guided humanoid walking. The proposed biologically inspired, predictive gaze control strategy is based on the maximization of visual information. The quantification of the information requires a stochastic model of both the robot and the perception system. The information/uncertainty management, i.e. the relationship between system state estimation and the active measurements, employs a coupled (considering cross-covariances), hybrid (reflecting the discontinuous character of biped walking), Extended (coping with non-linear systems) Kalman Filter approach.

BibTeX entry - Paper
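The coupled hybrid EKF above can be illustrated, in a heavily simplified scalar form, by a walker advancing along a line whose position is estimated from noisy range readings to a sensor at height h. All numbers, the motion model, and the range measurement are assumptions for illustration only:

```python
import numpy as np

h = 1.0            # sensor height above the walking line (illustrative)
Q, R = 0.01, 0.02  # process / measurement noise variances (illustrative)

def ekf_step(p, P, u, z):
    """One EKF cycle for scalar position p with variance P,
    commanded step u, and range measurement z = sqrt(p^2 + h^2) + noise."""
    # predict: the walker advances by the commanded step
    p_pred, P_pred = p + u, P + Q
    # update with the linearized measurement, H = dh/dp
    z_pred = np.sqrt(p_pred**2 + h**2)
    H = p_pred / z_pred
    S = H * P_pred * H + R
    K = P_pred * H / S
    return p_pred + K * (z - z_pred), (1 - K * H) * P_pred

# Simulate 50 noisy steps and track them.
rng = np.random.default_rng(2)
p_true, p, P = 0.0, 0.0, 1.0
for _ in range(50):
    u = 0.1
    p_true += u + rng.normal(0, np.sqrt(Q))
    z = np.sqrt(p_true**2 + h**2) + rng.normal(0, np.sqrt(R))
    p, P = ekf_step(p, P, u, z)
```

The papers' filters extend this idea to the full walking state, with coupling across state blocks (cross-covariances) and switching models for the discontinuous support phases of biped gait.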

O. Lorch, J. F. Seara, K. H. Strobl, U. D. Hanebeck, and G. Schmidt.
Perception Errors in Vision Guided Walking: Analysis, Modeling, and Filtering.
Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2002), Washington DC, USA, pp. 2048-2053, May 2002.

This article deals with specific aspects concerning the visual perception process of a humanoid walking machine. An active vision system provides the information about the environment necessary for autonomous goal-oriented locomotion. Due to errors in each stage of the perception process, ideal environment reconstruction is not possible. By modeling these errors, stochastic components can be compensated using a hybrid Extended Kalman Filter approach with an alternating reference frame, thus reflecting the discontinuous character of biped walking. The perception results improved by filtering can be used for the autonomous locomotion of the robot. Experiments with the walking machine BARt-UH demonstrate the validity of our approach.

BibTeX entry - Paper

J. F. Seara, O. Lorch, and G. Schmidt.
Gaze Control for Goal-Oriented Humanoid Walking.
Proceedings of the 2nd IEEE-RAS International Conference on Humanoid Robots (Humanoids2001), Waseda, Tokyo, Japan, pp. 187-195, November 2001.

In this article a predictive, task-dependent gaze control strategy for goal-oriented humanoid walking is presented. In the context of active vision systems we introduce an information-theoretical approach for the maximization of visual information. Based on two novel concepts, the Information Content of a view situation and Incertitude, we present a method for selecting optimal subsequent view directions, thus contributing to improved performance in typical autonomous robot locomotion tasks. Simulations and experimental results dealing with the duality of different tasks during locomotion, i.e. obstacle avoidance and self-localization, prove the applicability of our approach to humanoid walking machines.

BibTeX entry - Paper


Internal reports

K. H. Strobl.
Parametrizable, Task-Dependent Gaze Control for Vision Guided Autonomous Walking.
Master's Thesis, Lehrstuhl für Steuerungs- und Regelungstechnik, Technische Universität München, Germany, May 2002.

In this thesis, a predictive gaze control strategy of an active vision system based on the maximization of visual information is described. The quantification of the information arises from the complete stochastic modeling of both the robot system and the perception system.
The uncertainty management -- relationship between system state estimation and measurements -- has been carried out by means of a coupled (considering cross-covariances) hybrid (mirroring the discrete character of biped walking) extended (copes with non-linear systems) Kalman Filter. An appropriate choice of the state variables has been made, with the idea of solving the view direction problem for both self localization and obstacle avoidance.

BibTeX entry - Thesis - Extension - Videos (DivX): dead-reckoning, dead-reckoning+measurements, obstacle avoidance, self localization, and a mixture of them.


Internship reports

K. H. Strobl and O. Kristiansen.
MovingCam, Technical Documentation - Threeplex.
THREEPLEX Project, Work Package 5, Task 5.3. Report 32.1023.00/07/03 28p. 2apps. NTNU Multiphase Flow Laboratory and SINTEF Petroleum Research, Trondheim, Norway, 2003.

K. H. Strobl.
A Testing Set for Piezoelectric Ultrasonic Microphones.
Microelectronics and Microsystems Department, CEIT, San Sebastián, Spain, August 2001.


Patents

  • DE102010004233B3: [EN] Method for determining position of camera system with respect to display, involves determining geometric dimensions of object and calibration element, and determining fixed spatial relation of calibration element to object. [DE] Verfahren zur Bestimmung der Lage eines Kamerasystems bezüglich eines Objekts.

    The method involves recording an image with an image content by a camera system (2) for an undetermined position of a mirror (4). A parameter of an imaging function is determined from image information of a part of the image content, where the imaging function characterizes the imaging of an object point onto an image sensor of the camera system. The geometric dimensions of an object, i.e. a display (3), and of a calibration element, i.e. a calibration pattern (6), are determined, as well as the fixed spatial relation of the calibration element to the object.


Student projects

  • DLR CalLab - Reprogramming from Matlab into IDL and Extensions (Internship, awarded to Mr. Cristian Paredes 16.05.2005-14.10.2005).
  • Evaluation of omnidirectional camera calibration methods, extension to stereo omnidirectional camera calibration, and C++ implementation of 3-D reconstruction methods for omnidirectional cameras (Internship, awarded to Mr. Michal Smisek 20.6.2011-9.9.2011).
  • Development and Implementation of New Image Processing Methods for Robust Laser Profiler Operation (Diploma-Thesis, awarded).
  • Stereo Light Stripe Profiler with Redundancy Check for Robust 3-D Modeling (Internship, awarded).
  • Implementation of an Algorithm for Simultaneous Localization and Mapping using RGB(-D) Data and a Wheeled Mobile Robot's Odometry; Adaptation to Indoor Navigation (MSc Thesis, free).
  • 3-D Modeling Using a Monocular Plenoptic Camera (MSc Thesis, free).




© DLR - Institute of Robotics and Mechatronics. All rights reserved.


Last updated: Tuesday, 17 May 2016, by Dr.-Ing. Klaus H. Strobl