Date
Tuesday February 12, 2019, from 12:00 PM to 1:30 PM
Location
Traverse building – Dorgelo room
Address
TU/e
Organizer
Data Science
Price
Free
Building
Traverse
About the event
Many TU/e researchers are advancing or using Artificial Intelligence in their research projects. To support cross-disciplinary learning and to strengthen the TU/e AI network, we are organizing a series of (internal) lunch meetings in which various researchers talk about their projects.
Program
Start | Speaker | Title |
12:00 | Johan Lukkien (Dean M&CS) | Introduction |
12:15 | Jim Portegies, Ergo-learning (M&CS) | Ergo-learning |
12:35 | Janet Huang, Future Everyday (ID) | Designing for complex creative task solving |
12:55 | Tanir Ozcelebi, System Architecture & Networking (M&CS) | Synthesizing and reconstructing missing sensory modalities in behavioral context recognition |
13:15 | | Wrap up |
ABSTRACTS
Jim Portegies
The ergo-learning conjecture suggests that in the depths of the human brain runs an immensely powerful, simple, efficient, and task- and signal-independent learning algorithm. The search for and development of such algorithms go hand in hand with mathematical definitions of concepts such as meaning and understanding. I will discuss ergo-learning and some of the (tiny) developments towards gaining a mathematical grip on these difficult concepts.
Janet Huang
Performing creative tasks is challenging, as such tasks are typically open-ended and ill-defined. To solve these complex problems, people need to spend considerable time and effort learning professional skills and improving their in-progress work through an iterative process. Feedback is a critical component of this process, helping people discover errors and iterate toward better solutions. To meet the demand for timely feedback, recent work has explored technologies that connect problem solvers with feedback providers online. However, most research focuses on improving the content of feedback and neglects the most important aspect: how to support problem solvers in effectively integrating feedback into revisions that lead to high-quality outcomes. In this work, we propose an iterative feedback framework called Never-Ending Creative Learning that leverages the power of crowds and machines to generate effective feedback and guide novice learners in interpreting and integrating feedback into revision. Several technologies have been explored to support not only feedback generation but also the revision process. First, we start with our crowd-powered feedback system, StructFeed, and demonstrate a crowdsourcing approach for generating effective feedback that helps writers resolve high-level writing issues in their revisions. Second, we present Feedback Orchestration, which guides writers to resolve writing issues in a particular workflow by orchestrating feedback at different levels. Finally, we envision that Crowd-AI enabled systems can guide learners to reflect on the creative process and contribute their lessons learned, improving task performance on complex creative tasks.
Tanir Ozcelebi
Detection of human activities, along with the associated context, is of key importance for various application areas, including assisted living and well-being. To predict a user's context in daily-life situations, a system needs to learn from multimodal data that are often imbalanced and noisy, with missing values. The model is also likely to encounter missing sensors in real-life conditions (such as a user not wearing a smartwatch), and it fails to infer the context if any of the modalities used for training are missing. We propose a method based on an adversarial autoencoder for handling missing sensory features and synthesizing realistic samples. We empirically demonstrate the capability of our method, in comparison with classical approaches for filling in missing values, on a large-scale activity recognition dataset collected in-the-wild. We develop a fully-connected classification network by extending an encoder and systematically evaluate its multi-label classification performance when several modalities are missing. Furthermore, we show class-conditional artificial data generation and its visual and quantitative analysis on a context classification task, demonstrating the strong generative power of adversarial autoencoders.
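The imputation idea in this abstract can be illustrated with a deliberately stripped-down stand-in: the talk's method uses an adversarial autoencoder, but a plain autoencoder trained with random modality dropout already shows the core mechanism of reconstructing a missing modality from the ones that remain. Everything below (the toy two-modality data, network sizes, learning rate) is illustrative and assumed, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "multimodal" data: two modalities of 3 features each, driven by a
# shared latent signal so one modality is predictable from the other.
z = rng.normal(size=(500, 3))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 3)),
               2.0 * z + 0.1 * rng.normal(size=(500, 3))])

d, h = X.shape[1], 8
W1 = 0.1 * rng.normal(size=(d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, d)); b2 = np.zeros(d)

def forward(x):
    hid = np.tanh(x @ W1 + b1)
    return hid, hid @ W2 + b2

lr = 0.02
for _ in range(3000):
    # Randomly drop the second modality for half the samples (like a user
    # not wearing a smartwatch) and train to reconstruct the full input.
    mask = np.ones_like(X)
    mask[rng.random(len(X)) < 0.5, 3:] = 0.0
    hid, xhat = forward(X * mask)
    g = (xhat - X) / len(X)              # d(0.5*MSE)/d(output)
    gW2, gb2 = hid.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1.0 - hid ** 2)   # backprop through tanh
    gW1, gb1 = (X * mask).T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Impute: hide modality 2 entirely and reconstruct it from modality 1.
mask = np.ones_like(X); mask[:, 3:] = 0.0
_, xhat = forward(X * mask)
err_imputed = np.mean((xhat[:, 3:] - X[:, 3:]) ** 2)
err_zerofill = np.mean(X[:, 3:] ** 2)    # naive baseline: fill with zeros
print(f"imputation MSE {err_imputed:.3f} vs zero-fill {err_zerofill:.3f}")
```

The learned reconstruction of the dropped modality beats the naive zero-fill baseline by a wide margin; the talk's adversarial variant additionally trains a discriminator so that the synthesized features look like realistic samples rather than merely minimizing reconstruction error.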