Research

Dimensionality Reduction

Local manifold learning has been successfully applied to hyperspectral dimensionality reduction, as it can capture the nonlinear and non-convex manifolds underlying the data. It is mainly characterized by the construction of an affinity matrix, which consists of two steps: neighbor selection and computation of affinity weights. Each step poses a challenge: (1) neighbor selection is sensitive to complex spectral variability caused by non-uniform data distribution, illumination variations, and sensor noise; (2) the computation of affinity weights is difficult when spectral signatures within a neighborhood are highly correlated. To address these two issues, this work proposes a novel manifold learning methodology based on locally linear embedding (LLE) that learns a robust local manifold representation (RLMR). More specifically, a hierarchical neighbor selection (HNS) scheme is designed to progressively eliminate the effects of complex spectral variability through joint normalization (JN) and to robustly compute affinity (or reconstruction) weights with reduced collinearity through refined neighbor selection (RNS). Additionally, spatial-spectral information is incorporated into the proposed methodology to further improve the robustness of the affinity calculations.
Related Publication: Danfeng Hong, Naoto Yokoya, Xiao Xiang Zhu, "Learning a Robust Local Manifold Representation for Hyperspectral Dimensionality Reduction," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017, 10(6): 2960-2975. [code]
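The core of any LLE-style method is a small constrained least-squares problem per pixel that yields the reconstruction weights. As a rough illustration only, the Python sketch below computes standard LLE weights using plain Euclidean k-NN and a Tikhonov-regularized local Gram matrix in place of the paper's HNS/JN/RNS machinery; the function name `lle_weights` and all parameter values are illustrative, not taken from the released code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def lle_weights(X, k=10, reg=1e-3):
    """Standard LLE reconstruction weights (illustrative sketch)."""
    n = X.shape[0]
    D = cdist(X, X)                          # pairwise spectral distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 1]      # k nearest neighbors, skipping self
        Z = X[idx] - X[i]                    # neighbors centered on pixel i
        G = Z @ Z.T                          # k x k local Gram matrix
        G += reg * np.trace(G) * np.eye(k)   # regularization damps collinearity
        w = np.linalg.solve(G, np.ones(k))
        W[i, idx] = w / w.sum()              # sum-to-one constraint
    return W
```

The regularization term plays, in a crude way, the role that RNS plays in the paper: without it, highly correlated neighboring spectra make the local Gram matrix near-singular and the weights unstable.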
Although nonlinear subspace learning techniques (e.g., manifold learning) have been successfully applied to data representation, there is still room for improvement in explainability (explicit mapping), generalization (out-of-sample extension), and cost-effectiveness (linearization). To this end, a novel linearized subspace learning technique is developed in a joint and progressive way, called the joint and progressive learning strategy (J-Play), with an application to multi-label classification. J-Play learns high-level, semantically meaningful feature representations from high-dimensional data by 1) jointly performing multiple subspace learning and classification to find a latent subspace where samples are expected to be better classified; 2) progressively learning multi-coupled projections to linearly approach the optimal mapping that bridges the original space and the most discriminative subspace; and 3) locally embedding the manifold structure in each learnable latent subspace.
Related Publication: Danfeng Hong, Naoto Yokoya, Jian Xu, Xiao Xiang Zhu, "Joint & Progressive Learning from High-Dimensional Data for Multi-Label Classification," European Conference on Computer Vision (ECCV), Munich, Germany, September, pp. 469-484, 2018. [code]
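To make the progressive structure concrete, here is a heavily simplified sketch: each layer is a plain PCA projection and the classifier a ridge regression, whereas J-Play learns all projections and the classifier jointly with manifold regularization; `jplay_sketch`, the layer sizes in `dims`, and `lam` are all assumed names and values.

```python
import numpy as np

def jplay_sketch(X, Y, dims=(80, 60, 40), lam=1e-2):
    """Progressive linear projections plus a ridge classifier (illustrative)."""
    Z = X - X.mean(0)                        # samples in rows, centered
    projections = []
    for d in dims:
        # layer-wise PCA stands in for the jointly learned coupled projections
        U, _, _ = np.linalg.svd(Z.T @ Z)
        W = U[:, :d]                         # (prev_dim, d) projection
        projections.append(W)
        Z = Z @ W                            # progress to the next latent subspace
    # linear classifier fit in the final latent subspace (Y is one-hot)
    P = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)
    return projections, P
```

The motivation for the multi-layer structure is that a single linear map rarely reaches a discriminative subspace directly, while a product of several gently reduced projections can approximate a much better mapping.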

Spectral Unmixing

Hyperspectral imagery collected from airborne or satellite sources inevitably suffers from spectral variability, making it difficult for spectral unmixing to accurately estimate abundance maps. The classical unmixing model, the linear mixing model (LMM), generally fails to handle this issue effectively. To this end, we propose a novel spectral mixture model, called the augmented linear mixing model (ALMM), to address spectral variability by applying a data-driven learning strategy to the inverse problem of hyperspectral unmixing. The proposed approach models the main spectral variability (i.e., scaling factors), generated by variations in illumination or topography, separately by means of the endmember dictionary. It then models other spectral variabilities caused by environmental conditions (e.g., local temperature and humidity, atmospheric effects) and instrumental configurations (e.g., sensor noise), as well as nonlinear material mixing effects, by introducing a spectral variability dictionary. To make the data-driven learning strategy effective, we also impose reasonable prior knowledge on the spectral variability dictionary, whose atoms are assumed to have low coherence with the spectral signatures of the endmembers; this leads to a well-known low-coherence dictionary learning problem. A dictionary learning technique is thus embedded in the spectral unmixing framework so that the algorithm can learn the spectral variability dictionary and estimate the abundance maps simultaneously. Extensive experiments on synthetic and real datasets demonstrate the superiority and effectiveness of the proposed method in comparison with previous state-of-the-art methods.
Related Publication:Danfeng Hong, Naoto Yokoya, Jocelyn Chanussot, Xiao Xiang Zhu, "An Augmented Linear Mixing Model to Address Spectral Variability for Hyperspectral Unmixing," IEEE Transactions on Image Processing, 2019, 28(4): 1923-1938. [code]
Spectral unmixing has been gaining significance in recent years as a means of supporting high-level analysis of spaceborne imaging spectroscopy (hyperspectral) imagery. However, the inevitable spectral variability, caused by changes in illumination and topography, atmospheric effects, and so on, makes it difficult to accurately estimate abundance maps. Classical unmixing methods, e.g., the linear mixing model (LMM) and the extended linear mixing model (ELMM), fail to handle this issue robustly, particularly in the face of complex spectral variability. To this end, we propose a subspace-based unmixing model with a low-rank learning strategy, called subspace unmixing with low-rank attribute embedding (SULoRA), which is robust against spectral variability in the inverse problem of hyperspectral unmixing. Unlike previous approaches that unmix the spectral signatures directly in the original space, SULoRA is a general subspace unmixing framework that jointly estimates subspace projections and abundance maps in order to find a ‘raw’ subspace which is more suitable for carrying out the unmixing procedure. More importantly, we model such a ‘raw’ subspace with low-rank attribute embedding. By projecting the original data into a low-rank subspace, SULoRA can effectively address various spectral variabilities in spectral unmixing. Furthermore, we adopt an optimization scheme based on the alternating direction method of multipliers (ADMM) to solve the resulting problem.
Related Publication: Danfeng Hong, Xiao Xiang Zhu, "SULoRA: Subspace Unmixing with Low-Rank Attribute Embedding for Hyperspectral Data Analysis," IEEE Journal of Selected Topics in Signal Processing, 2018, 12(6): 1351-1363. [code]
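The following toy sketch conveys the subspace idea: project both the data and the endmembers into a low-rank subspace and unmix there. A truncated SVD of the data serves as a crude surrogate for the low-rank attribute embedding that SULoRA estimates jointly with the abundances via ADMM; `sulora_sketch`, `rank`, and `lam` are assumptions made for illustration.

```python
import numpy as np

def sulora_sketch(Y, E, rank=10, lam=1e-2):
    """Unmixing after projection into a low-rank data subspace (illustrative)."""
    # truncated SVD of the data stands in for the learned low-rank projection
    U, _, _ = np.linalg.svd(Y, full_matrices=False)   # Y: bands x pixels
    P = U[:, :rank].T                        # (rank, bands) subspace projection
    Z, Ep = P @ Y, P @ E                     # project pixels and endmembers
    A = np.linalg.solve(Ep.T @ Ep + lam * np.eye(E.shape[1]), Ep.T @ Z)
    return P, A
```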

Multi / Cross-Modality Learning

With the large amount of openly available satellite multispectral imagery (e.g., Sentinel-2 and Landsat-8), considerable attention has been paid to global multispectral land cover classification. However, the limited spectral information of such imagery hinders further improvements in classification performance. Hyperspectral imaging enables discrimination between spectrally similar classes, but its swath width from space is narrow compared to that of multispectral sensors. To achieve accurate land cover classification over large areas, we propose a cross-modality feature learning framework, called common subspace learning (CoSpace), that jointly considers subspace learning and supervised classification. By locally aligning the manifold structures of the two modalities, CoSpace linearly learns a shared latent subspace from hyperspectral-multispectral (HS-MS) correspondences. Multispectral out-of-sample data can then be projected into this subspace, where they are expected to benefit from the rich spectral information of the corresponding hyperspectral data used for learning, leading to better classification. Extensive experiments on two simulated HS-MS datasets (University of Houston and Chikusei), in which the HS and MS data trade off coverage against spectral resolution, demonstrate the superiority and effectiveness of the proposed method in comparison with previous state-of-the-art methods.
Related Publication:Danfeng Hong, Naoto Yokoya, Jocelyn Chanussot, Xiao Xiang Zhu, "CoSpace: Common Subspace Learning from Hyperspectral-Multispectral Correspondences," IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(7): 4349-4359. [code]
In this paper, we aim to tackle a general but interesting cross-modality feature learning question in the remote sensing community: can a limited amount of highly discriminative (e.g., hyperspectral) training data improve the performance of a classification task that uses a large amount of poorly discriminative (e.g., multispectral) data? Traditional semi-supervised manifold alignment methods do not perform sufficiently well on such problems, since hyperspectral data, unlike multispectral data, are too expensive to collect at scale. To this end, we propose a novel semi-supervised cross-modality learning framework, called learnable manifold alignment (LeMA). LeMA learns a joint graph structure directly from the data instead of using a fixed graph defined by a Gaussian kernel function. With the learned graph, we can further capture the data distribution by graph-based label propagation, which enables finding a more accurate decision boundary. Additionally, an optimization strategy based on the alternating direction method of multipliers (ADMM) is designed to solve the proposed model. Extensive experiments on two hyperspectral-multispectral datasets demonstrate the superiority and effectiveness of the proposed method in comparison with several state-of-the-art methods.
Related Publication: Danfeng Hong, Naoto Yokoya, Nan Ge, Jocelyn Chanussot, Xiao Xiang Zhu, "Learnable Manifold Alignment (LeMA): A Semi-supervised Cross-modality Learning Framework for Land Cover and Land Use Classification," ISPRS Journal of Photogrammetry and Remote Sensing, 2019, 147: 193-205. [code]
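The label-propagation step that LeMA runs on its learned graph can be illustrated on its own. In the sketch below, a fixed Gaussian k-NN graph stands in for the learned joint graph (precisely the component LeMA replaces), so only the diffusion of labels is shown; `label_propagation` and its parameters are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def label_propagation(X, Y0, k=10, alpha=0.99, n_iter=100):
    """Diffuse one-hot labels Y0 (zeros for unlabeled rows) over a k-NN graph."""
    D = cdist(X, X)
    W = np.exp(-(D / np.median(D)) ** 2)     # Gaussian affinities
    idx = np.argsort(D, axis=1)[:, 1:k + 1]  # k nearest neighbors per sample
    mask = np.zeros(W.shape, dtype=bool)
    mask[np.repeat(np.arange(len(X)), k), idx.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)      # sparsify and symmetrize
    deg = W.sum(1)
    S = W / np.sqrt(np.outer(deg, deg) + 1e-12)   # symmetric normalization
    F = Y0.astype(float).copy()
    for _ in range(n_iter):                  # propagate, anchoring known labels
        F = alpha * S @ F + (1 - alpha) * Y0
    return F.argmax(1)
```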