NEMO Converter 3D: Reconstruction of 3D Objects from Photo and Video Footage for Ambient Learning Spaces
General
Type of publication: Conference Paper
Published in: AMBIENT 2017 - The Seventh International Conference on Ambient Computing, Applications, Services and Technologies
Year: 2017
Pages: 6-11
Publisher: IARIA
ISBN: 978-1-61208-601-9
Authors
Abstract
In ambient and mobile learning contexts, 3D renderings create a higher degree of immersion than still images or video. To reduce the considerable effort of creating 3D objects from images, this paper presents the NEMO Converter 3D (NOC3D), a technical approach that automatically reconstructs 3D objects in a background process from semantically annotated media such as photos and, more importantly, video footage. Using the Mobile Learning Exploration System (MoLES) on a smartphone, the user creates and collects media in mobile contexts, which are automatically uploaded to the NEMO framework (Network Environment for Multimedia Objects) together with semantic annotations for contextualized access and retrieval. NEMO provides an extensible web-based framework for storing media such as photos, videos, and 3D objects together with semantic annotations. The framework has been developed for Ambient Learning Spaces (ALS) within a research project. With InfoGrid, a mobile augmented reality application connected to NEMO, the user experiences the previously generated 3D object placed and aligned in real-world scenes. 3D objects automatically reconstructed from photo and video footage by NOC3D are stored in NEMO and are thus available to all applications accessing the NEMO API. Against the pedagogical background of our research project, this paper focuses on the technical realization and validation of NOC3D, with reference to a realistic scenario for its use in ambient and mobile contexts. In Section 2, we review related work. In Section 3, we present a practical scenario for using NOC3D. In Section 4, we describe the technical environment for NOC3D and our research project. In Section 5, we outline the realization of NOC3D in the ambient context of our scenario. In Section 6, we present our findings, and we conclude with a summary and outlook in Section 7.
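The abstract does not detail NOC3D's reconstruction pipeline. Purely as an illustrative sketch, and not the authors' implementation, the following Python snippet shows how video footage collected on a smartphone might be sampled into still frames that a background photogrammetry-style reconstruction step could then consume. All names here (extract_frames, the file paths, the sampling rate) are hypothetical and chosen only for the example; the snippet relies on OpenCV for video decoding.

    import cv2  # OpenCV, used here only to decode the video footage
    from pathlib import Path

    def extract_frames(video_path: str, out_dir: str, every_nth: int = 15) -> int:
        """Sample every n-th frame of a video as input images for a
        photogrammetry-style reconstruction step running in the background.
        Hypothetical sketch; not the NOC3D implementation."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        capture = cv2.VideoCapture(video_path)
        index, saved = 0, 0
        while True:
            ok, frame = capture.read()
            if not ok:  # end of footage reached
                break
            if index % every_nth == 0:
                cv2.imwrite(str(out / f"frame_{saved:05d}.jpg"), frame)
                saved += 1
            index += 1
        capture.release()
        return saved

    if __name__ == "__main__":
        # Hypothetical usage: the extracted frames would then be handed to a
        # structure-from-motion / multi-view-stereo tool as a background job,
        # and the resulting 3D object stored in NEMO with its annotations.
        n = extract_frames("exhibit_tour.mp4", "frames/")
        print(f"extracted {n} frames")

The sampling rate is a trade-off: denser sampling gives the reconstruction step more overlapping views, while sparser sampling keeps the background job shorter; the value of 15 above is an arbitrary placeholder.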