Keynotes

Keynote 1 – Lazy Robotics

10.30 – 11.00    

Dr.ir. René van de Molengraft

TU/e Mechanical Engineering – Control Systems Technology

The classic way to manage complexity in robot control is to convert the robot’s task into a geometric motion control problem. This often produces robots that excel at being very busy, far beyond what is _really_ needed for the task. The new paradigm named Lazy Robotics takes an almost opposite starting point: the robot should _never_ do more than is needed to accomplish its task(s). Bringing this paradigm to life raises many challenges and research questions that still need to be tackled. The presentation will discuss some of the promises of Lazy Robotics and how to reach them.

Keynote 2 – An overview of Research and Robotics at Google DeepMind

13.00 – 13.45   

Dr.ir. Francesco Nori

Google DeepMind

Google DeepMind is working on some of the world’s most complex and interesting research challenges, with the ultimate goal of solving artificial general intelligence (AGI). We ultimately want to develop an AGI capable of dealing with a variety of environments. A truly general AGI needs to be able to act in the real world and to learn tasks on real robots. Robotics at Google DeepMind aims to endow robots with the ability to learn how to perform complex manipulation tasks. This talk will give an introduction to Google DeepMind, with a specific focus on robotics, control, and reinforcement learning.

Keynote 3 – Robotisation as Rationalisation: The Problem of Dehumanisation

16.00 – 16.30  

Mr.dr.ir. Lambèr Royakkers

TU/e Industrial Engineering & Innovation Sciences - Philosophy & Ethics

Driven by the belief in rationality and efficiency, we have redesigned factories, offices and kitchens. Nowadays rationalisation touches even the most intimate aspects of our lives, from caring for the elderly to sex. Robots will contribute to this. Rationalisation is a double-edged phenomenon: besides its benefits, it may reduce people’s freedom and lead to dehumanisation. We claim that robots can act as both humanising and dehumanising systems. The challenge, of course, is to stimulate humanising effects, such as using robots to take over ‘dirty, dull and dangerous’ tasks, and to prevent dehumanising effects, for instance by not allowing robots to make decisions about life and death or to take care of our elderly. Exactly because robots can have such a profound effect on our humanity, we need common moral principles and criteria for orienting ourselves towards the robot future.