Real-time 3D environment mapping: occlusion and collision in augmented reality
Achieving realistic augmented reality presents several complex challenges. Past work has shown that in most cases the available technology was not up to the task. However, the advances made in recent years have drastically changed the possibilities, and in this thesis we investigate them. To aid in this, we categorize augmented reality systems based on the type of information they use. The categorization shows that we have not yet achieved a truly realistic merging of the virtual world with our real world. This thesis examines the basic operations needed for that purpose. By solving the collision and occlusion problems, we lay the groundwork for further work on achieving realistic augmented reality. We create a testbed and define metrics for quantifying the performance of our system with respect to collision and occlusion. This testbed can be used to quantify the performance of any augmented reality system, and we apply it to our own implementation.

The results show that we have defined a working testing methodology, and using the testbed we show that the Kinect V2 depth sensor is suitable for 3D environment mapping. The collision and occlusion results degrade when too much smoothing is applied, or when upsampling introduces too much noise. A limitation of this thesis is that we implemented the system on the CPU only, whereas a GPU approach would yield better running-time performance. Based on this thesis, future work can be performed on merging virtual lighting with the real world.