Date
Monday, March 8, 2021, from 3:15 PM to 5:00 PM
Location
Online
Organizer
EAISI
Price
Free

EAISI CAFÉ | THIRD EDITION
AI research presented by TU/e's research community
On 8 March 2021, the Eindhoven Artificial Intelligence Systems Institute (EAISI) will organize the third online EAISI Café. The goal is to present our AI research to a broad audience of colleagues and industry.
This third edition will be dedicated to International Women's Day.

Program third edition - 8 March
Start | Title | Speaker |
3:15 PM | Opening | Chair: Evangelia Demerouti |
PITCHES | ||
3:20 PM | Robots that benefit people, people that benefit robots | Elena Torta |
3:30 PM | Empathic robots in healthcare and education | Emilia Barakova |
KEYNOTE | ||
3:40 PM | Identifying fake news and political bias using machine learning | Linda Bergman |
PITCHES | ||
4:00 PM | Mental Health Pandemic | Demi Kuit & Cecilia Liu |
4:10 PM | AI & Traffic Management | Soora Rasouli |
4:20 PM | Augmenting decision-making with AI | Zaharah Bukhsh |
4:30 PM | Privacy issues and legal aspects of Artificial Intelligence | Chiara Gallese Nobile |
4:40 PM | Norms of Explainable AI | Emily Sullivan |
WRAP-UP | ||
Abstracts
Elena Torta
AI-powered autonomous robots are advancing at a rapid pace. It is not uncommon, nowadays, to encounter them in human-populated spaces such as restaurants, hotels and nursing homes. In this pitch I explore some of the current challenges that autonomous robots face when helping humans and outline ways in which human knowledge can help robots improve their performance.
Emilia Barakova
We present an overview of our latest research in embodying AI algorithms in robots that are developed to enable socially meaningful interaction with humans in mental healthcare and education applications. Emotion recognition from body movements, facial expressions and physiological signals is used as an enabler of cooperation, coping and persuasion. Games and other interaction scenarios are applied to restrict the range of actions for the robot while providing possibilities for ecologically valid interactions for the humans.
Linda Bergman
During this session we will explore machine learning for natural language processing. We will learn how machine learning can identify fake news, political bias and even the source publication of an article. And we will use a white box machine learning technique that gives us insight into the inner workings of the algorithm.
Demi Kuit & Cecilia Liu
The psychological impact of the COVID-19 pandemic has been felt worldwide, and people's mental health has been severely affected. People's reduced autonomy and the fear of contagion, losing loved ones, and unemployment, as well as psychological symptoms due to social isolation, have long-term impacts on the population. Student team FruitPunch AI Eindhoven and student association D.S.A Pattern are therefore combining their talents and communities to organize a hackathon focused on tackling the decline in mental health in times of COVID-19.
Soora Rasouli
Extremely large amounts of data are available on real-time (private and public) vehicle positions, aggregate traffic conditions, the real-time state of infrastructure, the positions of shared mobility services in the network, the availability of parking areas, accidents and, in the future, the presence of AVs in the network. To maximize the benefit of collecting such huge volumes of data for the (built) environment, they have to be used collectively and simultaneously to maximize the temporal and spatial efficiency of services and infrastructure and to increase the safety and convenience of travelers (given their preferences) by giving the right travelers the right types of advice at the right times and places. The goal is a system that provides information to, and advises and controls the actions of, travelers (including drivers), fleet operators and network managers.
Zaharah Bukhsh
Businesses are increasingly investing in machine learning capabilities to achieve specific goals such as driving sales, improving customer service and managing resources effectively. In machine learning research, however, the focus has been on algorithmic advancements, mostly on standard benchmark datasets. This mismatch of expectations has caused a disconnect in translating the abstract output of predictive models into specific business objectives. My research seeks to bridge this gap by developing methods and tools that augment predictions in order to provide decision support for achieving desired goals. Classical optimization, decision theory and simulation are well-established research areas that, combined with learning-based methods, can enable the effective translation of high-level objectives. This can be achieved, for example, by learning from interaction with a simulated environment in deep reinforcement learning, feature perturbation for interpretability, deriving causal relationships among features with causal ML, counterfactual reasoning, and taking a process-centric view via process mining. The application areas of my research include operational processes and infrastructure asset management (bridges, water pipes).
Chiara Gallese Nobile
In recent years, the need for regulation of robots and Artificial Intelligence (AI), together with the urgency of reshaping the civil liability framework, has become apparent in Europe. The European Union has been seeking a standardized regulation that will ensure a high level of security in robotic systems and AI-based applications to prevent potential breaches, with due regard for privacy; it is also deemed necessary to regulate the responsibility of programmers, producers and manufacturers of robots and AI-based systems in order to ensure that they develop safe products and keep them updated for as long as they are in use. However, these issues have to be balanced against the need to stimulate technological progress and investment in research, which could be slowed down by a strict regulatory framework.
Emily Sullivan
Explaining the decisions of AI systems is important for transparency and user trust. The GDPR also gives us the right to an explanation whenever an AI system makes an impactful decision. But how do we know whether the explanations provided are successful? In this talk, I discuss the research of my upcoming Veni project on the norms that explanations must follow in order to be socially responsible and provide understanding.