The growing number of medical scans collected at hospitals offers many opportunities for using machine learning to discover patterns of disease. However, the number of annotated scans that such algorithms need is not growing at the same rate, because manual annotation by experts is costly and time-consuming.
Recently, a few studies have shown that crowdsourcing - outsourcing tasks to internet users without specialized expertise - can provide good-quality annotations for tasks such as outlining organs in a scan. However, many research questions remain. Can we always replace experts, or is this only possible for some types of data? How can we use the (noisy) annotations to improve machine learning algorithms? How can we best explain the annotation problem to the crowd?
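To make the "noisy annotations" question concrete, one simple baseline for combining several crowd-drawn outlines is per-pixel majority voting. The sketch below is only illustrative and not the authors' method; the masks are synthetic and the function name is hypothetical.

```python
# A minimal sketch of combining noisy crowd annotations:
# per-pixel majority voting over binary lesion masks.
# All data here are synthetic examples, not real annotations.
import numpy as np

def majority_vote(masks):
    """Combine binary masks (annotators x H x W) into one consensus mask.

    A pixel is foreground if more than half of the annotators marked it.
    """
    masks = np.asarray(masks)
    votes = masks.sum(axis=0)  # per-pixel count of foreground votes
    return (votes > masks.shape[0] / 2).astype(np.uint8)

# Three hypothetical crowd annotations of a 4x4 "lesion"
a1 = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
a2 = np.array([[0, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]])
a3 = np.array([[0, 0, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])

consensus = majority_vote([a1, a2, a3])
```

Majority voting ignores differences in annotator skill; weighting annotators by estimated reliability is one direction the research questions above point toward.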
We investigate these and related questions in a variety of medical imaging applications, with a particular focus on lung and skin lesion images. We organize LABELS, a series of MICCAI workshops dedicated to the annotation problem, and in July 2018 we won the eScience-Lorentz competition with our crowdsourcing workshop on the topic.