by Abigail Schaaf, USDA Forest Service, Don Atwood, Alaska Satellite Facility, and Mark Riley, USDA Forest Service
Passive optical, remotely sensed imagery has a long and successful history as a key data layer in the USDA Forest Service vegetation image classification and mapping process. With the capability to penetrate cloud and fog, active SAR is not bound by the same physical and temporal restrictions as passive optical sensors.

In cooperation with ASF, the Forest Service evaluated the integration of ALOS PALSAR data into its existing vegetation mapping process for a current project in the Copper River Delta (CRD) region of the Chugach National Forest in Alaska (Figure 3). The project area consists of a portion of the CRD, with a total of 552,356 acres used for the analysis. One of the strengths of the PALSAR data is its medium-resolution cell size, which supports a thorough comparison with passive optical datasets.
The Forest Service employs the R-based ensemble classifier Random Forest to statistically extract the most accurate information possible from robust image-based geospatial datasets and known field data points. For this study, existing vegetation map types were characterized by dominance cover type and basic lifeform cover type and were classified at the stand level using multiple combinations of the input data layers. Following the classification process, the contribution of the PALSAR data was evaluated.
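As an illustration of this step, the sketch below fits a Random Forest model in R and applies it across a stack of predictor rasters. It assumes the randomForest and raster packages; the file names, layer contents, and the cover_type field are hypothetical placeholders, not the project's actual inputs.

```r
# Minimal sketch of a Random Forest cover-type classification in R.
# File names, layers, and the cover_type field are hypothetical.
library(randomForest)
library(raster)

# Stack the predictor layers (e.g., SPOT-5 bands, DSM, PALSAR polarimetry).
predictors <- stack("spot5.tif", "dsm.tif", "palsar_polarimetry.tif")

# Field plots with known cover types (x, y coordinates plus a class label).
plots <- read.csv("field_plots.csv")
coordinates(plots) <- ~x + y

# Build a training table: predictor values at each plot location.
train <- data.frame(cover_type = as.factor(plots$cover_type),
                    extract(predictors, plots))

# Fit the ensemble classifier and apply it across the full raster stack.
rf <- randomForest(cover_type ~ ., data = train, ntree = 500, importance = TRUE)
cover_map <- predict(predictors, rf, type = "response")
writeRaster(cover_map, "cover_type_map.tif", overwrite = TRUE)
```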

In addition to the PALSAR data, the project used SPOT-5 multispectral satellite imagery and a 20-meter SPOT High-Resolution Stereo (HRS) Digital Surface Model (DSM). Image derivatives, indices, and other dataset transformations provided additional data-mining inputs. The two quad-pol PALSAR scenes used for this study were acquired on May 27 and July 12, 2009, nearly concurrent with the collection of the multispectral SPOT-5 data. Both scenes had a range and azimuth pixel size of 10 meters and were processed to Level 1.1. After the polarimetry data layers were generated, they were resampled from 10 meters to 5 meters with bilinear interpolation, using ArcMap's Resample tool, to match the spatial resolution of the SPOT-5 imagery and the DSM.
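The resampling itself was done with ArcMap's Resample tool; for readers working in R, an equivalent step with the raster package might look like the following sketch, in which the file names are hypothetical.

```r
# Resample the 10 m PALSAR polarimetry layers onto the 5 m SPOT-5 grid
# with bilinear interpolation so that all inputs align.
library(raster)

palsar_10m <- stack("palsar_polarimetry_10m.tif")   # 10 m polarimetry layers
spot_5m    <- raster("spot5_multispectral.tif")     # 5 m reference grid

palsar_5m <- resample(palsar_10m, spot_5m, method = "bilinear")
writeRaster(palsar_5m, "palsar_polarimetry_5m.tif", overwrite = TRUE)
```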
Established techniques for pre-processing SAR data were essential to the successful extraction of physical properties. Polarimetric processing was performed using two free, open-source tools provided by the European Space Agency (PolSARpro) and ASF (MapReady Remote Sensing Toolkit). These software tools enabled the derivation of PALSAR polarimetry layers that accurately expressed the scattering matrix of surface features.
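The polarimetric layers for this project came from PolSARpro and MapReady, but as a purely conceptual illustration of the information carried by a quad-pol scattering matrix, the well-known Pauli decomposition can be written in a few lines of R. The inputs are assumed to be complex-valued matrices of single-look scattering elements.

```r
# Conceptual illustration only; not the project's actual processing chain.
# S_hh, S_hv, S_vv are assumed complex matrices of scattering-matrix elements.
pauli_decomposition <- function(S_hh, S_hv, S_vv) {
  list(
    surface       = Mod(S_hh + S_vv) / sqrt(2),  # odd-bounce (surface) scattering
    double_bounce = Mod(S_hh - S_vv) / sqrt(2),  # even-bounce (double-bounce) scattering
    volume        = sqrt(2) * Mod(S_hv)          # cross-pol (volume) scattering
  )
}
```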
To evaluate the contribution of the PALSAR information, the mapping was performed using three combinations of input predictor layers: 1) standalone PALSAR data layers; 2) a combination of PALSAR data layers, SPOT-5, and DSM data; and 3) only the SPOT and DSM data. For each combination of predictor layers, a Random Forest classification was performed at three hierarchical cover-type levels: 1) vegetation dominance; 2) vegetation sub-cover; and 3) vegetation lifeform. Image-classification training data, essential to the data-mining process, were collected in the field and from high-resolution digital orthorectified aerial photography.
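A hedged sketch of that evaluation design in R is shown below; the predictor groupings, column names, and cover-type level names are illustrative assumptions layered on the training table from the earlier example.

```r
# Illustrative layout of the three predictor combinations and three cover-type
# levels. Column names are hypothetical; `train` is assumed to hold one column
# of predictor values per layer and one class column per cover-type level.
library(randomForest)

palsar_layers <- c("pauli_1", "pauli_2", "pauli_3")
spot_layers   <- c("spot_b1", "spot_b2", "spot_b3", "spot_b4")
dsm_layers    <- c("dsm_elev", "dsm_slope")

combos <- list(
  palsar_only     = palsar_layers,
  palsar_spot_dsm = c(palsar_layers, spot_layers, dsm_layers),
  spot_dsm        = c(spot_layers, dsm_layers)
)
cover_levels <- c("dominance", "sub_cover", "lifeform")

models <- list()
for (combo in names(combos)) {
  for (lvl in cover_levels) {
    f <- reformulate(combos[[combo]], response = lvl)
    models[[paste(combo, lvl, sep = ".")]] <-
      randomForest(f, data = train, ntree = 500)
  }
}
```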
The integration of the PALSAR data into the vegetation mapping and classification process was found to improve the accuracy of the classification results, as shown in the table. Percent-error calculations used Kappa statistics and errors of omission and commission.
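For reference, the accuracy measures named here can be computed from a confusion matrix as in the sketch below; `cm` is a hypothetical matrix with reference classes in columns and mapped classes in rows.

```r
# Overall accuracy, the Kappa statistic, and per-class omission/commission
# errors derived from a confusion matrix `cm` (reference in columns,
# mapped classes in rows).
accuracy_summary <- function(cm) {
  n  <- sum(cm)
  po <- sum(diag(cm)) / n                         # observed (overall) agreement
  pe <- sum(rowSums(cm) * colSums(cm)) / n^2      # agreement expected by chance
  list(
    overall    = po,
    kappa      = (po - pe) / (1 - pe),
    omission   = 1 - diag(cm) / colSums(cm),      # reference pixels missed per class
    commission = 1 - diag(cm) / rowSums(cm)       # mapped pixels mislabeled per class
  )
}
```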
With fewer classes and less spectral variability, the lifeform cover type had the least classification error. This level is the broadest of the three, affords the least detail, and does not differentiate vegetation species. Figure 4 illustrates the mapped classification results at the lifeform level.

Figure 4a represents the SPOT, DSM, and PALSAR classification result, Figure 4b represents the SPOT and DSM classification result, and 4c represents the classification results from the PALSAR data alone. Visually, there is very little difference between the classification results of 4a and 4b. Figure 4c, the PALSAR data alone, is significantly different from the combinations that included the SPOT and DSM data: the results in 4c indicate more water, less sparsely vegetated/unvegetated area and shrub, and more forest in comparison to 4a and 4b. The sparsely vegetated, unvegetated, and water classes were found to be very mixed when only the PALSAR data layers were used as inputs. Figure 4 demonstrates that the PALSAR data does less to drive the classification than the optical and topographic data; when included in the classification with the SPOT and DSM data, the PALSAR data enhances, but does not control, the results.
While the inclusion of PALSAR in the classification process produced only a marginal statistical increase in accuracy, that increase is aggregated across the whole project area. A classification-accuracy improvement of only a few percent may seem insignificant, but measured across an entire project area it can add up to several thousand hectares; for example, a 2 percent improvement over the 552,356-acre study area corresponds to roughly 11,000 acres, or about 4,500 hectares, of additional correctly classified land.
For future research and analysis, evaluating each of the SAR polarimetry datasets, both independently and with auxiliary data layers, may determine which perform better than others. Additionally, it may be beneficial to use the SAR data, rather than only the optical data, in generating the image segments; including the radar imagery at the segmentation stage may improve species-level delineations.