Presentations

World modelling for robots

Time: 11.25 - 11.50

Presenter: Dr.ir. René van de Molengraft
Research group: Control Systems Technology
Department: Mechanical Engineering
University: Eindhoven University of Technology

With the rapid growth and success of machine learning for object detection in images, the impression might arise that it cannot be that hard for robots to understand their environment. The contrary is true. One of the main challenges in robotics is to develop a world modelling method that allows robots to act in a varying world and a dynamically changing environment. This talk will provide background and insights into this challenge.
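A world model is, at its core, a persistent estimate of what is where, maintained between sensor frames even when objects are temporarily unobserved. The Python sketch below illustrates this idea in a minimal way; the class names, the nearest-neighbour association rule, and the decaying existence belief are illustrative assumptions, not the method presented in the talk.

```python
from dataclasses import dataclass
import time

@dataclass
class TrackedObject:
    """One hypothesis about an object in the robot's surroundings (illustrative)."""
    label: str            # e.g. "ball", "person"
    position: tuple       # (x, y) in the map frame, metres
    existence_prob: float # belief that the object is really there
    last_seen: float      # timestamp of the last supporting detection

class WorldModel:
    """Keeps a persistent set of object hypotheses between camera frames."""
    def __init__(self, decay_per_s=0.1):
        self.objects = []
        self.decay_per_s = decay_per_s  # forget objects that are no longer observed

    def update(self, detections, now=None):
        now = time.time() if now is None else now
        # Associate each detection with the nearest hypothesis of the same label (toy rule).
        for label, pos in detections:
            match = min(
                (o for o in self.objects if o.label == label),
                key=lambda o: (o.position[0] - pos[0]) ** 2 + (o.position[1] - pos[1]) ** 2,
                default=None,
            )
            if match is not None:
                match.position = pos
                match.existence_prob = min(1.0, match.existence_prob + 0.2)
                match.last_seen = now
            else:
                self.objects.append(TrackedObject(label, pos, 0.5, now))
        # Decay belief in objects that were not re-observed, and drop stale ones.
        for o in self.objects:
            o.existence_prob -= self.decay_per_s * (now - o.last_seen)
        self.objects = [o for o in self.objects if o.existence_prob > 0.0]

wm = WorldModel()
wm.update([("ball", (1.0, 2.0)), ("person", (3.5, 0.5))])
print([(o.label, o.position, round(o.existence_prob, 2)) for o in wm.objects])
```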

Automated HAD Map Making from Crowd-sourced Data

Time: 11.50 - 12.15

Presenter: Dr. Gijs Dubbelman
Research group: Video Coding & Architectures
Department: Electrical Engineering
University: Eindhoven University of Technology

The transition towards autonomous driving requires accurate and up-to-date maps, so-called Highly Automated Driving (HAD) maps. They are crucial to enable higher levels of vehicle automation, i.e. SAE Levels 3 and above. In this talk, the content and functionality of these HAD maps will be explored. Furthermore, strategies and methods to automatically create and update HAD maps from crowd-sourced data are detailed.
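To make the notion of a HAD map more concrete, the sketch below shows a hypothetical lane-level schema in Python: lanes carry geometry, connectivity, and attributes such as speed limits, and a crowd-sourced vehicle trace is blended into the stored geometry as a toy update rule. The schema and the blending rule are illustrative assumptions, not the actual map format or method discussed in the talk.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Lane:
    """One lane-level element of a HAD map (illustrative schema)."""
    lane_id: str
    centerline: List[Tuple[float, float]]  # (x, y) points in a map frame, metres
    successors: List[str]                  # lane_ids reachable at the end of this lane
    speed_limit_kmh: float
    last_updated: str                      # date of the last confirming observation

@dataclass
class HadMap:
    lanes: Dict[str, Lane] = field(default_factory=dict)

    def update_from_trace(self, lane_id, observed_centerline, date, blend=0.3):
        """Blend a crowd-sourced vehicle trace into the stored lane geometry (toy rule)."""
        lane = self.lanes[lane_id]
        lane.centerline = [
            ((1 - blend) * x0 + blend * x1, (1 - blend) * y0 + blend * y1)
            for (x0, y0), (x1, y1) in zip(lane.centerline, observed_centerline)
        ]
        lane.last_updated = date

m = HadMap()
m.lanes["a1"] = Lane("a1", [(0.0, 0.0), (10.0, 0.2)], ["a2"], 100.0, "2019-01-01")
m.update_from_trace("a1", [(0.0, 0.1), (10.0, 0.0)], "2019-06-01")
print(m.lanes["a1"].centerline)
```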

Advancing robot locomotion and manipulation abilities via physical interaction models and contact-aware control

Time: 14.05 - 14.30

Presenter: Dr. Alessandro Saccon
Research group: Dynamics and Control
Department: Mechanical Engineering
University: Eindhoven University of Technology

Progress in mechatronics design will allow next-generation robots to withstand and exploit dynamic physical interactions with the environment, where physical contact is established and broken at non-zero speed. At TU/e, we aim to develop a fully integrated paradigm that brings robots to an unprecedented level of manipulation and locomotion ability. We believe this can be achieved by relying on four scientific and technological breakthroughs:

  1. Computationally efficient physical interaction models have to be developed and validated experimentally, to predict post-contact robot states from pre-contact velocities.
  2. Task specification formalisms and associated posture planners have to be developed for planning non-zero-speed contact motions, such as on-the-fly pick-and-place tasks.
  3. Multi-contact and impact-aware sensor fusion algorithms have to be developed to reliably perceive the dynamical state of the robot, in particular when breaking and establishing contacts.
  4. A new control framework that integrates all of the above has to be developed to execute robot tasks in which energetic and complex impacts are explicitly part of the mission specification.

In this talk, I will provide an overview of the approach, pinpoint where we stand, and discuss next steps to be taken.
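As a toy illustration of the first breakthrough above, predicting post-contact states from pre-contact velocities, the sketch below applies momentum conservation and Newton's restitution law to a one-dimensional two-body impact. The masses, velocities, and restitution coefficient are assumptions for illustration; the actual models in this research concern full multi-body robots with frictional, multi-point impacts.

```python
def post_impact_velocities(m1, v1, m2, v2, e=0.0):
    """1-D two-body impact map: conservation of momentum combined with
    Newton's restitution law (relative separation speed = -e * approach speed).
    e = 0 models a perfectly plastic impact, e.g. an object sticking to a gripper."""
    p = m1 * v1 + m2 * v2                       # total momentum is conserved
    v1_post = (p - m2 * e * (v1 - v2)) / (m1 + m2)
    v2_post = (p + m1 * e * (v1 - v2)) / (m1 + m2)
    return v1_post, v2_post

# A 2 kg end-effector moving at 1 m/s picking up a 0.5 kg object at rest,
# assuming a perfectly plastic (e = 0) on-the-fly grasp: both end at 0.8 m/s.
print(post_impact_velocities(2.0, 1.0, 0.5, 0.0, e=0.0))
```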

Towards humans understanding robots that understand humans

Presenter: Dr.ir. Raymond Cuijpers
Research group: Human Technology Interaction
Department: Industrial Engineering & Innovation Sciences
University: Eindhoven University of Technology

All autonomous robots and vehicles need to understand where they are, where they should go, and how to get there. This is relatively easy in constrained, known environments without people. However, modern robotic applications, from automated guided vehicles to socially assistive robots, need to interact with people as part of their functional behavior. In its simplest form, an autonomous robot simply circumnavigates people with sufficient clearance. However, treating people as objects with a given size and position is not enough for successful interaction.

For one, a robot must respect social rules for keeping a proper distance and approaching from the proper direction. What is proper, however, is elusive, because it depends on the task of the robot, the state of the user, and the environmental context. For example, an autonomous car should stop for a pedestrian who wants to cross the street at a zebra crossing, which means that the vehicle should recognize the pedestrian’s intent to cross. Similarly, a cleaning robot that needs to clean a public area should be able to distinguish between people who are paying attention to it and those who are not. So, again, the robot’s proper course of action depends on people’s intentions and their awareness of the situation. Ideally, the robot understands social conventions and incorporates them in its own behavior planning. This works both ways: if people understand the robot’s intentions, they too can take a proper course of action. To this end, the robot must communicate its own intentions and recognize people’s intentions. The question is how to do this in such a way that people understand that the robot understands them.
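As a concrete, hypothetical example of what respecting a proper distance and direction of approach can mean for a motion planner, the sketch below adds an asymmetric Gaussian cost around a detected person, higher in front of the person than behind, so that planned paths keep extra clearance where an approach is more intrusive. The sigma values and the asymmetric-Gaussian form are assumptions for illustration, not the model discussed in the talk.

```python
import math

def social_cost(robot_xy, person_xy, person_heading,
                sigma_side=0.6, sigma_front=1.2, sigma_back=0.8):
    """Toy proxemic cost: a 2-D Gaussian centred on a person, with a larger
    spread in front of the person than behind, so a planner that minimises
    this cost keeps more clearance when approaching from the front."""
    dx = robot_xy[0] - person_xy[0]
    dy = robot_xy[1] - person_xy[1]
    # Express the offset in the person's frame (ahead = facing direction, side = lateral).
    c, s = math.cos(person_heading), math.sin(person_heading)
    ahead = c * dx + s * dy
    side = -s * dx + c * dy
    sigma_ahead = sigma_front if ahead >= 0 else sigma_back
    return math.exp(-0.5 * ((ahead / sigma_ahead) ** 2 + (side / sigma_side) ** 2))

# Standing 1 m in front of a person costs more than standing 1 m behind them:
print(social_cost((1.0, 0.0), (0.0, 0.0), 0.0))   # in front, ~0.71
print(social_cost((-1.0, 0.0), (0.0, 0.0), 0.0))  # behind, ~0.46
```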