Multiresolution Multimodal Sensor Fusion for Remote Sensing Data With Label Uncertainty

Abstract

In remote sensing, each sensor can provide complementary or reinforcing information. It is valuable to fuse outputs from multiple sensors to boost overall performance. Previous supervised fusion methods often require accurate labels for each pixel in the training data. However, in many remote sensing applications, pixel-level labels are difficult or infeasible to obtain. In addition, outputs from multiple sensors often have different resolutions or modalities. For example, rasterized hyperspectral imagery presents data on a pixel grid, while airborne Light Detection and Ranging (LiDAR) generates dense 3-D point clouds. It is often difficult to directly fuse such multi-modal, multi-resolution data. To address these challenges, we present a novel Multiple Instance Multi-Resolution Fusion (MIMRF) framework that can fuse multi-resolution and multi-modal sensor outputs while learning from automatically generated, imprecisely labeled data. Experiments were conducted on the MUUFL Gulfport hyperspectral and LiDAR data set and a remotely sensed soybean and weed data set. Results show improved, consistent performance on scene understanding and agricultural applications when compared to traditional fusion methods.
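The key idea of learning from imprecise labels is the multiple-instance setup: instead of labeling every pixel, only groups ("bags") of pixels receive a label, and a positive bag is assumed to contain at least one positive pixel. The sketch below illustrates this with a simple bag-level loss for a linear fusion of per-pixel sensor outputs; the function names and the weighted-average fusion are illustrative stand-ins only, since MIMRF itself learns a fuzzy measure for a Choquet integral rather than linear weights.

```python
import numpy as np

def bag_fusion_loss(weights, bags):
    """Bag-level squared error for a linear fusion of sensor outputs.

    weights : (n_sources,) fusion weights (stand-in for MIMRF's
              learned fuzzy measure / Choquet integral).
    bags    : list of (pixels, label) pairs, where pixels is an
              (n_pixels_in_bag, n_sources) array of sensor outputs
              and label is the bag-level 0/1 label.

    Each bag's fused per-pixel scores are reduced with max, so a
    positive bag only needs one high-scoring pixel -- the standard
    multiple-instance assumption that tolerates imprecise labels.
    """
    loss = 0.0
    for pixels, label in bags:
        fused = pixels @ weights          # fused score per pixel
        loss += (fused.max() - label) ** 2
    return loss

# Toy example: one positive bag (contains a strong pixel) and one
# negative bag, with two sensor outputs per pixel.
bags = [
    (np.array([[0.9, 0.8], [0.1, 0.2]]), 1),
    (np.array([[0.05, 0.1]]), 0),
]
weights = np.array([0.5, 0.5])
print(bag_fusion_loss(weights, bags))  # small loss: weights fit the bags
```

In the full MIMRF framework, the same bag-level objective is minimized over fuzzy-measure elements, which lets the fusion handle pixels, point-cloud aggregates, and other resolutions within a single bag.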

Links

PDF
IEEE Xplore
arXiv
GitHub

Citation

Plain Text:
X. Du and A. Zare, “Multiresolution Multimodal Sensor Fusion for Remote Sensing Data With Label Uncertainty,” in IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2755-2769, April 2020, doi: 10.1109/TGRS.2019.2955320.

BibTeX:
@ARTICLE{du2019mimrf,
author={X. Du and A. Zare},
journal={IEEE Transactions on Geoscience and Remote Sensing},
title={Multiresolution Multimodal Sensor Fusion for Remote Sensing Data With Label Uncertainty},
year={2020},
volume={58},
number={4},
pages={2755-2769},
doi={10.1109/TGRS.2019.2955320}}