# MCMC methods for hyperspectral imagery

## Abstract

Hyperspectral imagery has been widely used in remote sensing for various civilian and military applications. A hyperspectral image is acquired by observing the same scene at different wavelengths. Consequently, each pixel of such an image is represented as a vector of measurements (reflectances) called a spectrum. One major step in the analysis of hyperspectral data consists of identifying the macroscopic components (signatures) present in the sensed scene and their corresponding proportions (concentrations).

The latest techniques developed for this analysis do not properly model these images. Indeed, these techniques usually assume that the signatures appear as pure pixels in the image. However, a pixel is rarely composed of spectrally pure elements, distinct from each other. Thus, such models can lead to poor estimation performance. The aim of this thesis is to propose new models better suited to the intrinsic properties of hyperspectral images. The unknown model parameters are then inferred within a Bayesian framework. The use of Markov Chain Monte Carlo (MCMC) methods allows one to overcome the difficulties related to the computational complexity of these inference methods.
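The pure-pixel assumption underlying the standard linear mixing model can be made concrete with a minimal sketch. All dimensions, signatures, and the noise level below are illustrative assumptions, not values from the thesis: each observed pixel is a non-negative, sum-to-one combination of R endmember signatures plus additive noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: L spectral bands, R endmember signatures.
L, R = 50, 3
M = rng.uniform(0.0, 1.0, size=(L, R))   # endmember matrix (one signature per column)

# Abundance vector: positivity and sum-to-one constraints.
a = np.array([0.6, 0.3, 0.1])
assert np.all(a >= 0) and np.isclose(a.sum(), 1.0)

# Observed pixel under the linear mixing model with additive Gaussian noise.
y = M @ a + rng.normal(0.0, 0.01, size=L)
```

When a scene contains no pure pixels, the columns of M cannot be read directly off the data, which motivates the statistical models discussed below.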

## Hyperspectral image analysis with the Normal Compositional Model

The Normal Compositional Model, developed by Michael T. Eismann and David W. J. Stein, represents endmembers as Gaussian vectors whose means are given by an endmember extraction algorithm, such as the well-known N-FINDR or VCA. This representation alleviates the problem of estimating the endmembers when there are no pure pixels in the analyzed image. We propose a Bayesian unmixing algorithm, based on this statistical model, using MCMC methods.
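As a rough illustration (not the exact sampler of the thesis), the pixel likelihood induced by the NCM, together with a simple Metropolis-Hastings move over the abundance simplex, might be sketched as follows. The endmember means M, the common endmember variance s2, and the Dirichlet proposal are all assumptions made for this toy example:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

def ncm_loglik(y, M, a, s2):
    # Under the NCM with a common endmember variance s2, the mixed pixel
    # y = sum_r a_r e_r with e_r ~ N(m_r, s2 I) satisfies
    # y | a ~ N(M a, s2 * ||a||^2 * I).
    var = s2 * np.dot(a, a)
    r = y - M @ a
    return -0.5 * (y.size * np.log(2.0 * np.pi * var) + r @ r / var)

def dirichlet_logpdf(x, alpha):
    return gammaln(alpha.sum()) - gammaln(alpha).sum() + ((alpha - 1.0) * np.log(x)).sum()

# Toy data: endmember means as if returned by N-FINDR or VCA.
L, R, s2 = 30, 3, 1e-3
M = rng.uniform(0.0, 1.0, size=(L, R))
a_true = np.array([0.5, 0.3, 0.2])
y = M @ a_true + rng.normal(0.0, np.sqrt(s2 * a_true @ a_true), size=L)

# Metropolis-Hastings on the simplex with a Dirichlet proposal centred on
# the current state (a simplification of a full Gibbs sampler).
a, kappa, samples = np.full(R, 1.0 / R), 200.0, []
for it in range(4000):
    prop = rng.dirichlet(kappa * a + 1.0)
    log_acc = (ncm_loglik(y, M, prop, s2) - ncm_loglik(y, M, a, s2)
               + dirichlet_logpdf(a, kappa * prop + 1.0)
               - dirichlet_logpdf(prop, kappa * a + 1.0))
    if np.log(rng.uniform()) < log_acc:
        a = prop
    if it >= 2000:
        samples.append(a)

a_hat = np.mean(samples, axis=0)  # posterior-mean abundance estimate
```

The Dirichlet proposal keeps every candidate on the simplex, so the positivity and sum-to-one constraints hold at every iteration by construction.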

The corresponding Matlab codes are available.

## Semi-supervised Bayesian algorithm for hyperspectral image unmixing

An extension of the previous algorithm based on the Normal Compositional Model is also proposed for the case where the number of endmembers is unknown. In this setting, the endmember spectra are assumed to belong to a known library. The number of endmembers R and the corresponding spectra are added, together with the abundance coefficients and the variance, to the list of unknown parameters. As in the previous hierarchical Bayesian approach, we define appropriate prior distributions for these parameters (number of components, spectra involved in the mixture, and their respective abundance coefficients). A reversible jump MCMC strategy is then employed for their estimation.
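A reversible jump sampler moves between models of different dimension with matched birth and death proposals. Only the proposal construction is sketched below, with an assumed Beta draw for the new abundance; the actual acceptance ratio also involves the priors, the likelihood, and the Jacobian of the dimension-matching transformation:

```python
import numpy as np

def birth_move(a, rng):
    """Propose growing the mixture from R to R+1 endmembers: draw a new
    abundance w ~ Beta(1, R) and shrink the existing coefficients by
    (1 - w), so the sum-to-one constraint is preserved by construction."""
    w = rng.beta(1.0, a.size)
    return np.concatenate(((1.0 - w) * a, [w]))

def death_move(a, k):
    """Propose removing endmember k and renormalising the remaining
    abundances back onto the simplex."""
    a = np.delete(a, k)
    return a / a.sum()

rng = np.random.default_rng(2)
a = np.array([0.5, 0.3, 0.2])
a_birth = birth_move(a, rng)   # 4 coefficients, still summing to one
a_death = death_move(a, 0)     # 2 coefficients, renormalised
```

Pairing each birth with the corresponding death is what makes the chain reversible across models of different dimension.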

The corresponding Matlab codes are available.

## Joint unmixing and segmentation of hyperspectral images

Most inversion algorithms analyze the pixels independently. Since an image often contains homogeneous regions (lakes, crops, etc.), a new approach has been proposed that takes spatial correlations into account within a Bayesian model. The image is assumed to be partitioned into several classes that share the same abundance means and covariances. Hidden variables, or *labels*, are then introduced to differentiate the classes. Spatial dependencies between these labels are modelled using Markov random fields, which are a very useful tool for describing neighborhood dependence between image pixels. The abundances, reparametrized to handle the positivity and sum-to-one constraints, are assigned appropriate prior distributions with unknown means and variances depending on the corresponding class. Other prior distributions are then judiciously chosen for the noise variance (or the endmember variance when the NCM is used) and the hyperparameters. As previously, MCMC methods are employed for the estimation of the parameters. A classification map is obtained from the approximated MAP estimator of the labels; then, conditionally on the MAP estimates of the labels, the abundance vectors are estimated.
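The effect of a Potts-Markov prior on the labels can be illustrated with a single Gibbs sweep over a toy label field. The granularity parameter beta, the two-class Gaussian observations, and the 4-pixel neighborhood are all assumptions of this sketch; the thesis samples the full posterior, not only the labels:

```python
import numpy as np

def gibbs_label_sweep(labels, loglik, beta, rng):
    """One Gibbs sweep over the label field with a Potts prior:
    p(label = k | rest) is proportional to
    exp(loglik[i, j, k] + beta * #{neighbours in class k})."""
    H, W, K = loglik.shape
    for i in range(H):
        for j in range(W):
            counts = np.zeros(K)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connexity
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    counts[labels[ni, nj]] += 1
            logp = loglik[i, j] + beta * counts
            p = np.exp(logp - logp.max())
            labels[i, j] = rng.choice(K, p=p / p.sum())
    return labels

# Toy image: two homogeneous regions observed in Gaussian noise.
rng = np.random.default_rng(3)
H, W, K = 12, 12, 2
truth = np.zeros((H, W), dtype=int)
truth[:, W // 2:] = 1
obs = truth + rng.normal(0.0, 0.3, size=(H, W))
means = np.array([0.0, 1.0])
loglik = -0.5 * (obs[..., None] - means) ** 2 / 0.3 ** 2

labels = rng.integers(0, K, size=(H, W))
for _ in range(5):
    labels = gibbs_label_sweep(labels, loglik, beta=1.0, rng=rng)

accuracy = np.mean(labels == truth)
```

Larger beta favors smoother label maps; with beta = 0 the prior is uniform and the pixels are classified independently from their likelihoods alone.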