Researcher in the Spotlight: Jordy Senden

My research is about advancing Cognitive Robotics for Flexible Agro-Food Technology.

Hi, my name is Jordy Senden and I work in the Control Systems Technology – Robotics research group of the Department of Mechanical Engineering. My research is part of the FlexCRAFT program, the goal of which is to advance Cognitive Robotics for Flexible Agro-Food Technology. In short, this means that we want to automate certain tasks in the food production chain.

Juggling endless variations

One of the main challenges in designing robotic systems for this sector is robustness to the enormous variation. For instance, there is a big difference between harvesting apples and harvesting tomato trusses: shape, color, delicacy, separation tools, and so on. Even within a single fixed task, such as harvesting only tomatoes, there are variations in ripeness, position and environmental conditions. To understand what is happening around the robot, we build world models, which should represent the correct level of abstraction and detail, and nothing more.

In general, the approach of the FlexCRAFT program is to develop ‘generic capabilities’ that every robotic system needs to perform its task. This work is split across four research lines and implemented in three use-cases. My research focuses on the ‘world modelling’ line, and the first part of my approach is showing that we should not rely on contactless sensors alone: eventually, the robot has to make contact with its environment. To show that this contact can increase our knowledge of the world, I’m developing an experimental tomato plant set-up. By measuring the force between the plant and the source of excitation while traveling up the stem, I expect to estimate when the robot is getting close to the tomato.
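To illustrate the idea (this is my own minimal sketch with hypothetical data and thresholds, not the actual FlexCRAFT code), one way contact can add knowledge is by estimating the local stiffness the actuator feels at each height along the stem and flagging the height where that stiffness changes markedly, which could indicate that the truss is nearby:

```python
import numpy as np

def estimate_stiffness(force, displacement):
    """Least-squares estimate of local stiffness k from F ~ k * x."""
    x = np.asarray(displacement, dtype=float)
    f = np.asarray(force, dtype=float)
    return float(np.dot(x, f) / np.dot(x, x))

def find_proximity(heights, samples, rel_change=0.5):
    """Return the first stem height where the felt stiffness changes markedly.

    heights:    excitation positions along the stem (e.g. in metres)
    samples:    list of (force, displacement) arrays measured at each height
    rel_change: relative stiffness change treated as 'something is near' (hypothetical)
    """
    k_prev = None
    for z, (f, x) in zip(heights, samples):
        k = estimate_stiffness(f, x)
        if k_prev is not None and abs(k - k_prev) / k_prev > rel_change:
            return z  # candidate location of the tomato truss
        k_prev = k
    return None
```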

An explosion of hypotheses

The challenge is always to create the ‘correct’ models: keeping them simple contributes to robustness and to computational and memory efficiency. If a measurement confirms what the models predicted, our beliefs about the state of the world still hold. If the measurement is different, this could mean several things (a small sketch of how an estimator can deal with this follows the list):

  1. Our previous belief was wrong
  2. The current measurement is wrong
  3. The model is wrong
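Purely as an illustration of this reasoning (not the program’s actual implementation, and with made-up numbers), a recursive estimator often gates each measurement: if the difference between prediction and measurement is small relative to the expected uncertainty, the belief is updated; if it is large, the measurement is set aside and flagged, since the belief, the sensor or the model may be at fault.

```python
import numpy as np

def gated_update(belief_mean, belief_var, measurement, meas_var, gate=3.0):
    """One scalar Kalman-style update with a simple innovation gate.

    Returns the new (mean, var) and a flag telling whether the measurement
    agreed with the current belief. A rejected measurement may point to a
    wrong belief, a wrong measurement, or a wrong model, and needs follow-up.
    """
    innovation = measurement - belief_mean
    innovation_std = np.sqrt(belief_var + meas_var)
    if abs(innovation) > gate * innovation_std:
        return belief_mean, belief_var, False   # mismatch: investigate
    k = belief_var / (belief_var + meas_var)    # Kalman gain
    return belief_mean + k * innovation, (1.0 - k) * belief_var, True

# Hypothetical example: belief about the truss height (m) vs. a new camera reading.
print(gated_update(belief_mean=0.80, belief_var=0.01, measurement=0.83, meas_var=0.02))
```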

"To understand what’s happening around the robot, world models are made which should represent the correct level of abstraction and detail."

Every measurement used to test a hypothesis introduces the possibility of new hypotheses – another reason to use simple models. We can also combine information from multiple sources into a single estimate. For example, we do not rely only on camera images to estimate the position of the tomato truss, but also on dynamic tactile information; a minimal example of such fusion is sketched after this paragraph. Combining these sources into one knowledge base is another big challenge. Right now, we’re looking for another PhD candidate for the planning and control research line, which will be closely connected to my world modelling research. If knowledge is missing from the world model, the system needs to actively search for it; planning and control will be an important factor in that.
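As an illustration only (not FlexCRAFT code, and with made-up uncertainties), two independent estimates of the truss position – one from the camera, one from touch – can be merged by inverse-variance weighting, so the less certain source counts for less:

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent estimates of one quantity."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.dot(w, estimates) / w.sum())
    return fused, 1.0 / w.sum()

# Hypothetical numbers: the camera sees the truss at 0.82 m (noisy through leaves),
# while the tactile estimate of 0.78 m has a lower variance.
position, variance = fuse(estimates=[0.82, 0.78], variances=[0.03, 0.01])
print(f"fused truss height: {position:.3f} m (var {variance:.4f})")
```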

A promising set of results

Given the complexity of the test set-up, there are no experimental results yet; the first experiments are expected in the first semester of 2020. We already have a one-dimensional model of the system and an initial analysis of possible control architectures and controller tuning methods, but the focus is on addressing the coupling and vibration problem. The idea is to demonstrate what can be achieved when considering only one degree of freedom with linear behavior, and then to test this in the real set-up.

Of course, the most important beneficiaries of this research are the people whose livelihood is dependent on the agro-food sector. Due to a shortage of workers, farmers are unable to harvest their crops, leading to acres of unattended fruit fields. Attracting more workers is difficult due to the harsh environments, the heavy workload and the low pay. But robotic systems can do this type of work to help farmers maintain their revenue.

The first tests on the excitation of the tomato plant show promising results. Instead of modelling the motion of the entire stem with a finite element (FEM) model, the motion is lumped into a few masses, springs and dampers. We expect to see a simple transfer between the force and the position of the actuator connected to the plant, which indeed seems to be the case. Currently, the experiments are being extended to see what happens when the actuator is placed at different positions. The expectation is that the transfer between force and position will change in an explainable manner, and I’m confident that the results will be published soon. The next step is integrating this into the robotic system.
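As a rough illustration of what such a lumped model looks like (my own sketch with arbitrary parameters, not the actual plant model), even a single mass-spring-damper element gives a simple force-to-position transfer whose frequency response can be evaluated directly:

```python
import numpy as np
from scipy import signal

# Hypothetical lumped parameters for one stem segment.
m = 0.05   # mass [kg]
c = 0.8    # damping [N*s/m]
k = 40.0   # stiffness [N/m]

# Force-to-position transfer of a single mass-spring-damper:
#   X(s)/F(s) = 1 / (m*s^2 + c*s + k)
plant = signal.TransferFunction([1.0], [m, c, k])

# Frequency response: how much the excitation point moves per unit force.
w, mag, phase = signal.bode(plant, w=np.logspace(0, 3, 200))
print(f"static compliance 1/k = {1.0 / k:.4f} m/N")
print(f"resonance near {np.sqrt(k / m) / (2 * np.pi):.1f} Hz")
```

Adding more lumped segments, or moving the actuator along the stem, changes the coefficients of this transfer – the kind of explainable change in the force-to-position behavior described above.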

Overview of the FlexCRAFT project. The green circle represents the four generic robot capabilities developed in the four research lines. The blue circle represents the use-cases in which these capabilities will be integrated.