Publications Repository - Helmholtz-Zentrum Dresden-Rossendorf
Hyper 3D-AI: Artificial Intelligence for 3D multimodal point cloud classification
Independent of the application field, spatially detailed information is commonly provided in the form of image data. Accordingly, major developments in image processing and artificial intelligence (AI) for image data interpretation are based on an image-like data structure, i.e. a spatially two-dimensional data grid with a custom number of informative layers. While sufficient for large-scale geographical data, this approach has major flaws when applied in any oblique-angle scenario, in particular because it inherently distorts the spatial characteristics of the observed target (virtual vs. real-world neighborhood relationships, occlusions). Today's most crucial image data applications (e.g., resources, energy, mobility, medicine), however, rely heavily on the accurate interpretation of the spatial relationships of objects in all three dimensions. It has been shown that upscaling 2D images to multi-feature attributed 3D point clouds boosts the interpretational value of the dataset. This approach is not only beneficial for the fusion of image data with 3D information (such as orientation, shape, and surface roughness), but also offers a straightforward solution for the fusion of higher-dimensional multi-sensor data. Although point clouds and meshes are routinely used as 3D analogues of real-world targets, the processing of multi-feature point clouds in terms of clustering, classification, or material characterization is still in its infancy. Innovative AI approaches such as PointNet or 3D CNNs have shown great potential for point cloud clustering using the spatial relationships of the individual points. However, classification based on both spatial and auxiliary high-dimensional point information, such as spectral signatures or compositional characteristics, is yet to be developed. The proposed project aims at the development of advanced machine (deep) learning approaches to fill this exact gap.
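The core idea can be sketched in a few lines of NumPy: each point carries its 3D coordinates plus an auxiliary high-dimensional signature, and a PointNet-style model applies a shared per-point transformation followed by a permutation-invariant pooling step. This is a minimal illustrative sketch, not the project's implementation; all array sizes, weights, and names (e.g. `n_bands`, `spectra`) are assumptions, and the random weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multi-feature point cloud: N points, each with xyz
# coordinates plus an auxiliary spectral signature (sizes are illustrative).
n_points, n_bands, n_classes = 128, 16, 4
xyz = rng.normal(size=(n_points, 3))            # spatial coordinates
spectra = rng.random(size=(n_points, n_bands))  # per-point spectral bands

# Fuse spatial and auxiliary information into one feature vector per point.
features = np.concatenate([xyz, spectra], axis=1)  # shape (N, 3 + n_bands)

# PointNet-style idea: a shared per-point MLP, then a symmetric max-pool
# that aggregates the whole cloud independently of point ordering.
w1 = rng.normal(scale=0.1, size=(features.shape[1], 64))
w2 = rng.normal(scale=0.1, size=(64, n_classes))

per_point = np.maximum(features @ w1, 0.0)  # shared ReLU layer, (N, 64)
global_feat = per_point.max(axis=0)         # permutation-invariant, (64,)
logits = global_feat @ w2                   # cloud-level class scores

print(logits.shape)  # → (4,)
```

Because the pooling is a max over points, shuffling the point order leaves the output unchanged, which is the property that makes such architectures suitable for unordered point clouds.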
These approaches comprise both the challenging fusion of multiple sensors and the subsequent classification and segmentation. Besides the algorithm design, testing on representative scenarios from different application fields is a main work package, including the creation of reusable benchmark datasets for the validation and future development of algorithms. If successful, the project will improve the characterization of objects and surfaces for a wide range of potential applications such as exploration and mining, recycling, autonomous systems, quality assessment, sorting systems, and the detection of falsified objects. From a resource perspective, enhanced material characterization will directly contribute to making processes more material- and energy-efficient. Regarding autonomous systems, the project will advance the research and implementation of methods for robust fusion of multimodal sensors. Owing to its versatile applicability, the project outcome could support any process that requires a multi-sensor-based discrimination of objects and materials.
Invited lecture (Conferences)
Helmholtz Imaging Virtual Conference, 23.09.2021, virtual, Germany