Hyperspectral image classification with SVM and guided filter
EURASIP Journal on Wireless Communications and Networking volume 2019, Article number: 56 (2019)
Abstract
Hyperspectral image (HSI) classification has long been studied in the remote sensing community, and many methods have been proposed for it. Among them, methods that fuse spatial features have been widely used and achieve good performance. Aiming at the problem of spatial feature extraction in spectral-spatial HSI classification, we propose a guided filter-based method. We attempt two fusion methods for spectral and spatial features, and we also adopt a guided filter to optimize the classification results. We apply the support vector machine (SVM) to classify the HSI. Experiments show that our proposed methods obtain highly competitive results compared with the reference methods on all three popular datasets. More importantly, our methods are fast and easy to implement.
Introduction
Hyperspectral imaging sensors have been widely used in remote sensing, biology, chemometrics, and so on [1]. They can obtain the spatial and spectral information of materials at the same time, producing what is called a hyperspectral image (HSI). Due to its abundant spectral information, HSI is widely applied to material recognition and classification, such as land cover [2], environmental protection [3], and agriculture [4]. Hence, HSI classification has attracted increasing attention and has become a hot topic in the remote sensing community.
The task of classification is to assign a unique label to each pixel vector of the HSI. For this problem, many pixel-wise (spectral-based) methods have been employed over the last two decades, including k-nearest neighbors (KNN) [5], the support vector machine (SVM) [6], and sparse representation [7]. SVM has shown good performance for classifying high-dimensional data when only a limited number of training samples are available [8]. It can effectively overcome the Hughes phenomenon [9] and the problem of limited training samples in HSI classification. Therefore, SVM and its improved algorithms perform better than other methods. However, their results still fall short of expectations; after all, it is a universal phenomenon that different materials can share the same spectrum and the same material can exhibit different spectra.
To overcome the above problem and improve classification performance, recent studies have suggested incorporating spatial information into a spectral-based classifier [10], which is called spectral-spatial HSI classification. Because of the continuous improvement in spatial resolution, the spatial features of materials have become more representative, and many papers show that the spectral-spatial approach is a very effective way to classify HSI. Various types of classification approaches have been proposed, including morphological feature extraction [11], kernel combination [3, 12], and joint representation [13]. By using geodesic opening and closing operations with fixed-shape structuring elements of different sizes, morphological profiles significantly improve the classification accuracy. The main idea of a joint representation model is to exploit both spectral and spatial features by treating the test sample as a collection of its neighboring pixels (including the test pixel itself).
For SVM methods, the mainstream approach to fusing spectral and spatial features is kernel combination [14]. The paper [15] proposed a series of composite kernels to fuse spectral and spatial features directly in the SVM kernels, and Li et al. [16] presented a generalized composite kernel framework to classify HSI. The aforementioned methods fuse spectral and spatial features before classification. A smaller number of methods fuse features after the SVM classification: Tarabalka et al. [17] integrated the SVM classification probabilities into a Markov random field for classifying the HSI. Moreover, Kang et al. [18] proposed a spectral-spatial HSI classification method with edge-preserving filtering, which extracts the spatial features after the SVM classification and obtains competitive results.
Motivated by the paper [18], we adopt the guided filter to extract spatial features. Compared with the reference [18], our contribution is to extract spatial features before classification. In more detail, the main contributions are listed as follows.

1)
We adopt the guided filter to smooth the HSI, which is similar to denoising in image processing. In this way, a fused feature is generated for each pixel that combines the pixel with the information of its neighboring pixels. This proves to be simple and effective.

2)
We attempt different spectral and spatial fusion methods, which provides useful guidance for future work.

3)
The proposed methods are applied to three widely used hyperspectral datasets, and we compare them with two reference methods using three evaluation metrics.
Related methodology and work
SVM and HSI classification
SVM is a supervised machine learning method based on statistical learning theory, proposed by Vapnik [19]. Essentially, SVM attempts to find a hyperplane in the multidimensional feature space that separates the two classes. This hyperplane is the best decision surface in the sense that it maximizes the distance between the hyperplane and the two classes, called the margin. Generally, the larger the margin, the better the classifier. Given a training set, obtaining an SVM model is equivalent to solving an optimization problem for finding this hyperplane. For this optimization, SVM uses the structural risk minimization principle, which prevents overfitting.
SVM is suitable for high-dimensional data with limited training sets, and many studies have shown that the SVM classifier achieves superior performance on HSI classification [6, 20] compared with other popular classifiers such as decision trees, k-nearest neighbors, and neural networks. The power of SVM is mainly due to its kernel function, especially the radial basis function. However, a single kernel is not sufficient for all cases, and some researchers have proposed composite kernels [21], which integrate both spectral and spatial features to improve classification performance.
Guided filter and HSI classification
The guided filter, proposed by He et al. [22], has been widely used for noise reduction, image dehazing, and so on. Filtering produces a new image that inherits the structure of the guidance image. Given an image p as input and a guidance image g, we obtain an output image q. Generally, q is a linear transform of g in a window ω_{k} centered at the pixel k. If the window radius is r, the size of the local window ω_{k} is (2r + 1) × (2r + 1):

q_{i} = a_{k}g_{i} + b_{k},  ∀i ∈ ω_{k},  (1)
where a_{k} and b_{k} are a linear coefficient and a bias, respectively. From this model, we can see that ∇q = a ∇ g, which means that the output q has gradients similar to those of the guidance image g. The coefficient and bias, which need to be determined, are obtained by minimizing the following cost function:

E(a_{k}, b_{k}) = ∑_{i ∈ ω_{k}} ((a_{k}g_{i} + b_{k} − p_{i})² + ϵa_{k}²).  (2)
Here, ϵ is a regularization parameter. According to the paper [22], the solution of Eq. (2) is

a_{k} = ((1/∣ω∣) ∑_{i ∈ ω_{k}} g_{i}p_{i} − μ_{k}p̄_{k}) / (σ_{k}² + ϵ),  (3)

b_{k} = p̄_{k} − a_{k}μ_{k},  (4)

where μ_{k} and σ_{k}² are the mean and variance of g in ω_{k}, ∣ω∣ is the number of pixels in ω_{k}, and p̄_{k} is the mean of p in ω_{k}. After obtaining the coefficients a_{k} and b_{k} (averaged over all windows that cover pixel i), we can compute the filtering output q_{i}. Through the above process, we obtain a linear-transform image q.
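The guided filter described above can be sketched in a few lines of code. The following is a minimal grayscale version using box filtering, assuming NumPy and SciPy are available; the paper's experiments used MATLAB, so this is an illustrative re-implementation rather than the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(g, p, r=3, eps=1e-3):
    """Grayscale guided filter of He et al. [22]: the output q is a local
    linear transform of the guidance g, fitted to the input p in each
    (2r+1) x (2r+1) window and averaged over overlapping windows."""
    mean = lambda x: uniform_filter(x.astype(np.float64), size=2 * r + 1)
    mu_g, mu_p = mean(g), mean(p)
    var_g = mean(g * g) - mu_g * mu_g            # variance of g per window
    cov_gp = mean(g * p) - mu_g * mu_p
    a = cov_gp / (var_g + eps)                   # linear coefficient a_k
    b = mu_p - a * mu_g                          # bias b_k
    # average the per-window coefficients before forming the output
    return mean(a) * g + mean(b)
```

With r = 3 and eps = 0.001 (the settings used later in the experiments), flat regions of p are smoothed while edges present in g are preserved, since a_k stays close to 1 wherever the guidance variance dominates eps.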
The guided filter was first used for HSI classification by Kang et al. [18]. They treated HSI classification as a probability optimization process: they first obtained initial probabilities with SVM and then applied a guided filter to optimize the initial probability maps, achieving state-of-the-art results. Subsequently, Wang et al. [23] adopted a guided filter to extract the spatial features from HSI and then used a stacked autoencoder to classify each pixel. Guo et al. [24] proposed a method that combines a guided filter, joint representation, and k-nearest neighbors to improve HSI classification. Inspired by the methods mentioned above, we propose a novel method for fusing spectral and spatial information. The experimental results clearly show that our approach executes faster than the conventional methods.
HSI classification with SVM and guided filter
The proposed Guided Filter SVM Edge-Preserving Filter (GFSVMEPF)
We propose a novel method for HSI classification with SVM and the guided filter. First, we extract the spatial features of the HSI with the guided filter, whose guidance image is obtained from the original HSI by principal component analysis (PCA). Then, we classify the spatial features by SVM. Finally, we employ the guided filter again to optimize the classification. The process is shown in Fig. 1.
Extracting spatial features by guided filter
First, we obtain a guidance image by PCA. Given a dataset D = {d_{1}, d_{2}, ⋯, d_{S}}, where d_{i} is the information of the ith band and S denotes the number of bands, we apply PCA to obtain the components [g_{1}, g_{2}, ⋯, g_{S}] and take the first three principal components as a color guidance image.
So, the guidance image is G = [g_{1}, g_{2}, g_{3}]. Then, based on formula (1), using the input image d_{1} and the guidance image G, we obtain the output u_{1} by filtering. In the same way, we obtain all the u_{i}, which together constitute a new hyperspectral image U = {u_{1}, u_{2}, ⋯, u_{S}}.
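The guidance-image construction can be sketched as below. This is a minimal PCA via the band covariance matrix, assuming a bands-last cube of shape (H, W, S); note that He et al.'s filter has a dedicated multichannel form for a color guide, so a simple alternative is to use the first component alone as a grayscale guide.

```python
import numpy as np

def pca_guidance(cube, n_components=3):
    """First principal components of an HSI cube of shape (H, W, S),
    used as the guidance image G = [g1, g2, g3]."""
    H, W, S = cube.shape
    X = cube.reshape(-1, S).astype(np.float64)
    X -= X.mean(axis=0)                      # center each band
    cov = X.T @ X / (X.shape[0] - 1)         # S x S band covariance
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return (X @ top).reshape(H, W, n_components)
```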
Classifying HSI by SVM
After obtaining the image U = {u_{1}, u_{2}, ⋯, u_{S}} with the guided filter, we can rewrite it as V = {v_{1}, v_{2}, ⋯, v_{N}}, where v_{n} = {v_{n, 1}, v_{n, 2}, ⋯, v_{n, S}} is the spectral feature vector of pixel n and N is the number of pixels.
Then we adopt an SVM classifier to classify all the feature vectors v_{n}, which gives a classification map C as the initial result.
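The flattening and classification step can be sketched as follows, using scikit-learn's SVC in place of LIBSVM (both wrap the same underlying library); the training coordinates and parameter values here are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

def classify_cube(U, train_coords, train_labels, C=100.0, gamma=2.0):
    """Flatten the filtered cube U (H, W, S) into per-pixel spectral
    vectors v_n and predict a label for every pixel with an RBF SVM."""
    H, W, S = U.shape
    V = U.reshape(-1, S)                       # V = {v_1, ..., v_N}
    idx = [r * W + c for r, c in train_coords]
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    clf.fit(V[idx], train_labels)
    return clf.predict(V).reshape(H, W)        # classification map C
```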
Optimizing the classification map
First, we convert the classification map C into a probability map P = {p_{1}, p_{2}, ⋯, p_{n}}, where n denotes the number of classes and p_{i, n} is the initial probability, with a value of 0 or 1. If a pixel i belongs to the nth class, p_{i, n} is set to 1; otherwise, it is set to 0.
Then, we employ the guided filter of Section 3.1.1 to filter each probability map, which gives a new P as the final probability map. For each pixel, we obtain n probability values and choose the class label with the largest probability value as the final label.
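The optimization step above can be sketched as follows, assuming a grayscale guided filter (here inlined compactly) and a single-band guide image; the paper uses a color guidance image, so this is a simplified illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _gf(g, p, r, eps):
    # compact grayscale guided filter: fit q = a*g + b per window
    m = lambda x: uniform_filter(x, size=2 * r + 1)
    a = (m(g * p) - m(g) * m(p)) / (m(g * g) - m(g) ** 2 + eps)
    b = m(p) - a * m(g)
    return m(a) * g + m(b)

def refine_labels(cmap, guide, n_classes, r=3, eps=1e-3):
    """Turn the hard classification map into one-hot probability maps,
    smooth each map with the guided filter, and take the arg-max class."""
    P = [(cmap == k).astype(np.float64) for k in range(n_classes)]
    Q = np.stack([_gf(guide, p, r, eps) for p in P])
    return Q.argmax(axis=0)
```

Isolated misclassified pixels receive low smoothed probability for their (wrong) class and are reassigned to the surrounding class, while class borders follow the edges of the guide.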
Other methods
In order to verify the effectiveness of our method, we also propose three further methods combining SVM and the guided filter. To study the effect of spatial feature extraction alone, we propose the GFSVM method, which first extracts spatial features and then classifies them by SVM; the implementation follows steps 1 and 2 in Section 3.1.
If spatial features and spectral features are fused together, can the increase in information improve the classification accuracy of the HSI? To answer this, we propose another method called CoSVM (Connected SVM). We choose the first half of [g_{1}, g_{2}, ⋯, g_{S}] as the original feature and the first half of U as the spatial feature, and join them into a fused feature for classification. After classification by SVM, we obtain the final result. By optimizing the result of CoSVM as in step 3 of Section 3.1, we obtain another method called CoSVMEPF.
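The CoSVM fusion described above amounts to a simple concatenation; in this sketch the "original" part is taken from the PCA components and the spatial part from the filtered bands, both assumed to be cubes of shape (H, W, S). The halving follows the description; the rest is an illustrative assumption.

```python
import numpy as np

def cosvm_features(g_components, u_bands):
    """CoSVM fusion: join the first half of the components
    [g1, ..., gS] with the first half of the guided-filtered bands U,
    giving a fused feature cube of the original dimensionality S."""
    S = g_components.shape[-1]
    half = S // 2
    return np.concatenate(
        [g_components[..., :half], u_bands[..., :half]], axis=-1)
```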
Results and discussion
Experimental setup
Datasets
Three hyperspectral datasets, including Indian Pines, University of Pavia, and Salinas, are employed to draw a convincing conclusion. The Indian Pines dataset was gathered by the AVIRIS sensor. The image scene, with a spatial coverage of 145 × 145 pixels, covers woods, grass-pasture, and other land covers. We choose 200 spectral channels from the 220 bands in the 0.4 to 2.45 μm region of the visible and infrared spectrum.
The University of Pavia dataset was captured by the ROSIS (Reflective Optics System Imaging Spectrometer) sensor. The dataset contains 610 × 340 pixels with 115 spectral bands. After removing water absorption and low-SNR bands, 103 bands were used for the analysis. There are nine categories to be classified.
The third dataset was collected by the AVIRIS sensor, capturing an area over Salinas Valley, California, with a high spatial resolution of 3.7 m. Salinas comprises 512 × 217 pixels in all and contains vegetables, bare soils, and vineyard fields. We also selected 200 bands for the experiments by discarding the 20 water absorption bands.
Evaluation metrics
Three widely used indices for HSI classification are adopted to evaluate the performance of the experimental methods: the overall accuracy (OA), the average accuracy (AA), and the kappa coefficient (KA).
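All three metrics can be computed from a confusion matrix; a minimal sketch (assuming every class occurs in the ground truth):

```python
import numpy as np

def oa_aa_kappa(y_true, y_pred, n_classes):
    """Overall accuracy, average per-class accuracy, and the kappa
    coefficient from the confusion matrix M (rows: truth, cols: pred)."""
    M = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        M[t, p] += 1
    n = M.sum()
    oa = np.trace(M) / n
    aa = (np.diag(M) / M.sum(axis=1)).mean()
    # expected chance agreement from the marginal class frequencies
    pe = (M.sum(axis=0) * M.sum(axis=1)).sum() / n ** 2
    return oa, aa, (oa - pe) / (1 - pe)
```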
Parameter settings
In this experiment, we use LIBSVM, designed by Chang and Lin [25]. LIBSVM has two main parameters, C and g, which are determined by cross-validation: C ranges from 10^{−2} to 10^{4}, and g ranges from 2^{−1} to 2^{4}.
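The cross-validation over C and g can be sketched with scikit-learn, which wraps the same LIBSVM library; the grid steps (powers of 10 and 2) are an assumption, since only the ranges are quoted above.

```python
import itertools
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_params(X, y, cv=3):
    """Pick (C, gamma) maximizing cross-validated accuracy over
    C in 1e-2..1e4 and gamma in 2^-1..2^4."""
    Cs = 10.0 ** np.arange(-2, 5)
    gammas = 2.0 ** np.arange(-1, 5)
    score = lambda C, g: cross_val_score(
        SVC(kernel="rbf", C=C, gamma=g), X, y, cv=cv).mean()
    return max(itertools.product(Cs, gammas), key=lambda cg: score(*cg))
```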
For the guided filter, there are two key parameters to set. One is the radius r of the filter, which represents the spatial feature scale. The other is the regularization parameter ɛ, which controls the degree of smoothness. In our experiments, we set r = 3 and ɛ = 0.001.
Experimental results and discussion
In this section, the proposed methods are compared with two widely used classification methods, SVM [18] and EPF [18], which are typical examples of pixel-wise methods and spectral-spatial methods, respectively.
Experimental results and discussion on Indian Pines
The Indian Pines dataset is the most commonly used and the most difficult to classify. In this experiment, the classification accuracy for each class, OA, AA, and KA are adopted to evaluate the classification performance. Figure 2 shows the classification maps obtained by the different methods, together with the corresponding OA scores. From this figure, we can see that the classification map obtained by SVM is the worst, since many noisy estimations are visible. The best is the map obtained by GFSVMEPF, which is almost the same as the ground truth.
The classification performance for each class is shown in Table 1. All our proposed methods outperform SVM and EPF significantly on all the indices; they exceed EPF by 2%, 2.5%, 3.5%, and 4.6%, respectively. In particular, the proposed GFSVMEPF achieves 99.22%, which is the best result we have seen so far, and 11 of the 16 categories reach their highest results. Contrary to our expectations, the result obtained by CoSVM, which fuses the guided information with the spectral information, is worse than that obtained by GFSVM, which employs only the guided information. This shows that simply stacking information cannot bring better classification results: because each pixel in the filtered image is a linear transformation of itself and its neighboring pixels, the filtered feature alone is sufficient for HSI classification. We can conclude that the spectral-spatial fusion method (CoSVM) improves classification accuracy compared with SVM and EPF, and that using the guided filter twice, as in GFSVMEPF, improves it further.
Experimental results and discussion on the University of Pavia dataset
The University of Pavia dataset has only nine categories and is easier to classify. Classification maps of the different methods are illustrated in Fig. 3. It can be seen from this figure that the proposed methods (CoSVM, GFSVM, and GFSVMEPF) achieve better classification performance than the compared approaches. In particular, the map of GFSVMEPF is hardly distinguishable from the ground truth.
The results of the different methods are shown in Table 2. It can be seen that our proposed methods (CoSVM, CoSVMEPF, GFSVM) perform similarly and are slightly higher than EPF (98.51%). GFSVMEPF obtains a result of 99.7%, which outperforms the state-of-the-art methods, and six of the nine categories reach their highest results. In this experiment, there is a curious phenomenon: CoSVM is better than CoSVMEPF. This is because some pixels on thin edges are assigned to the background, as seen in Fig. 3. As in the previous experiment, the method that applies the guided filter twice (GFSVMEPF) is the most effective classification method.
Experimental results and discussion of the Salinas dataset
The last experiment is performed on the Salinas dataset, the largest of the three. The qualitative results are shown in Fig. 4. It is apparent from this figure that the map of GFSVMEPF has the fewest noise points and obtains the best result.
The detailed results are given in Table 3. All the methods perform well on this dataset; the worst result is about 92.21%, by SVM. The proposed methods (CoSVMEPF, GFSVM, and GFSVMEPF) all exceed 99%, and GFSVMEPF in particular reaches 99.8%, greatly outperforming the other methods; 12 of the 16 categories achieve their best results. The conclusion of this experiment is consistent with that for Indian Pines: fusing spectral features with spatial features improves classification accuracy, and extracting spatial features twice is better than extracting them once.
From the above three experiments, we can conclude that SVM with the guided filter is well suited for HSI classification. The guided filter is an effective way to fuse spatial and spectral information, and it is especially effective at extracting spatial features for datasets with regular shapes. Because the filtered pixels contain not only neighborhood information but also their own information, they can be used directly for classification without adding other information.
Conclusion
In this paper, we propose several spectral-spatial HSI classification methods that combine SVM with the guided filter. Two spectral and spatial fusion methods are adopted for the SVM. Moreover, the guided filter is used both for extracting spatial information and for optimizing the classification results. Our proposed methods improve the classification accuracy significantly in a short time. Consequently, the proposed methods can be effective in real applications.
From this work, we can draw the following conclusions: (a) the guided filter is an effective way to extract spatial information in HSI; (b) the features extracted by the guided filter are good enough for HSI classification, without the original information; (c) SVM with two guided filtering passes is a simple and effective way to classify HSI.
Abbreviations
AA: Average Accuracy
CoSVM: Connected SVM
GFSVMEPF: Guided Filter SVM Edge-Preserving Filter
HSI: Hyperspectral Image
KA: Kappa Coefficient
KNN: k-Nearest Neighbors
OA: Overall Accuracy
PCA: Principal Component Analysis
ROSIS: Reflective Optics System Imaging Spectrometer
SVM: Support Vector Machine
References
 1.
A.F.H. Goetz, Three decades of hyperspectral remote sensing of the earth: A personal view. Remote Sens. Environ. 113, S5–S16 (2009)
 2.
G.P. Petropoulos, C. Kalaitzidis, K. Prasad Vadrevu, Support vector machines and object-based classification for obtaining land-use/cover cartography from Hyperion hyperspectral imagery. Comput. Geosci. 41(2), 99–107 (2012)
 3.
R.L. Lawrence, S.D. Wood, R.L. Sheley, Mapping invasive plants using hyperspectral imagery and Breiman Cutler classifications (RandomForest). Remote Sens. Environ. 100(3), 356–362 (2006)
 4.
L.M. Dale, A. Thewis, C. Boudry, et al., Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: A review. Appl. Spectrosc. Rev. 48(2), 142–159 (2013)
 5.
L. Bruzzone, R. Cossu, A multiple-cascade-classifier system for a robust and partially unsupervised updating of land-cover maps. IEEE Trans. Geosci. Remote Sens. 40(9), 1984–1996 (2002)
 6.
F. Melgani, L. Bruzzone, Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 42(8), 1778–1790 (2004)
 7.
Y. Chen, N.M. Nasrabadi, T.D. Tran, Hyperspectral image classification via kernel sparse representation. IEEE Trans. Geosci. Remote Sens. 51(1), 217–231 (2013)
 8.
G. Camps-Valls, L. Bruzzone, Kernel-based methods for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 43(6), 1351–1362 (2005)
 9.
G. Hughes, On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory 14(1), 55–63 (1968)
 10.
A. Plaza, J. Plaza, G. Martin, Incorporation of spatial constraints into spectral mixture analysis of remotely sensed hyperspectral data, in 2009 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), vol 1 (IEEE, 2009), pp. 1–6
 11.
M. Dalla Mura, J.A. Benediktsson, B. Waske, et al., Morphological attribute profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote Sens. 48(10), 3747–3762 (2010)
 12.
M. Fauvel, J. Chanussot, J.A. Benediktsson, A spatial–spectral kernel-based approach for the classification of remote-sensing images. Pattern Recogn. 45(1), 381–392 (2012)
 13.
Y. Chen, N.M. Nasrabadi, T.D. Tran, Hyperspectral image classification using dictionary-based sparse representation. IEEE Trans. Geosci. Remote Sens. 49(10), 3973–3985 (2011)
 14.
G. Camps-Valls, N. Shervashidze, K.M. Borgwardt, Spatio-spectral remote sensing image classification with graph kernels. IEEE Geosci. Remote Sens. Lett. 7(4), 741–745 (2010)
 15.
G. Camps-Valls, L. Gomez-Chova, J. Muñoz-Marí, et al., Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 3(1), 93–97 (2006)
 16.
J. Li, P.R. Marpu, A. Plaza, et al., Generalized composite kernel framework for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 51(9), 4816–4829 (2013)
 17.
Y. Tarabalka, M. Fauvel, J. Chanussot, et al., SVM- and MRF-based method for accurate classification of hyperspectral images. IEEE Geosci. Remote Sens. Lett. 7(4), 736–740 (2010)
 18.
X.D. Kang, S. Li, J.A. Benediktsson, Spectral–spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans. Geosci. Remote Sens. 52(5), 2666–2677 (2014)
 19.
C. Cortes, V. Vapnik, Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)
 20.
G. Camps-Valls, L. Gomez-Chova, J. Calpe-Maravilla, J.D. Martin-Guerrero, E. Soria-Olivas, L. Alonso-Chorda, J. Moreno, Robust support vector method for hyperspectral data classification and knowledge discovery. IEEE Trans. Geosci. Remote Sens. 42(7), 1530–1542 (2004)
 21.
D. Tuia, F. Ratle, A. Pozdnoukhov, G. Camps-Valls, Multisource composite kernels for urban-image classification. IEEE Geosci. Remote Sens. Lett. 7(1), 88–92 (2010)
 22.
K.M. He, J. Sun, X.O. Tang, Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013)
 23.
L. Wang, J. Zhang, P. Liu, Spectral-spatial multi-feature-based deep learning for hyperspectral remote sensing image classification. Soft. Comput. 21(1), 213–221 (2017)
 24.
Y. Guo, H. Cao, S. Han, Spectral-spatial hyperspectral image classification with K-nearest neighbor and guided filter. IEEE Access 6, 18582–18591 (2018)
 25.
C.C. Chang, C.J. Lin, LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2(3), 27–27 (2011)
Funding
The project was supported by the Open Research Fund of the Shandong Provincial Key Laboratory of Infectious Disease Control and Prevention, Shandong Center for Disease Control and Prevention (No. 2017KEYLAB01), the Science and Technology Project for the Universities of Shandong Province (No. J18KB171), and the Laboratory of Data Analysis and Prediction, Shandong Women's University.
Availability of data and materials
The simulations were performed using MATLAB 2014a and LIBSVM 3.0 on an Intel Core i7 (64-bit). All data generated or analyzed during this study are included in this published article. The hyperspectral image datasets can be downloaded at http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes
Author information
Contributions
YHG is the main writer of this paper. He proposed the main idea and designed the experiment. XJY and XCZ completed the analysis of the results. DXY assisted in the collection and preprocessing of the data. YB refined the idea and designed the structure of the whole manuscript. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Guo, Y., Yin, X., Zhao, X. et al. Hyperspectral image classification with SVM and guided filter. J Wireless Com Network 2019, 56 (2019). https://doi.org/10.1186/s13638-019-1346-z
Keywords
 Support vector machine
 Guided filter
 Hyperspectral image classification