History of Image Classification

Image classification techniques have evolved alongside remote sensing technology. Understanding this history helps you appreciate why different methods exist and when each approach makes sense.

Learning objectives

  • Trace the evolution of classification from manual interpretation to machine learning.
  • Understand how sensor improvements enabled new classification techniques.
  • Recognize the shift from pixel-based to object-based methods.
  • Appreciate current trends in deep learning for remote sensing.

Why it matters

Knowing the history prevents reinventing the wheel. Many "new" ideas are refinements of older concepts. Understanding the progression helps you choose appropriate methods for your data resolution and research questions.

Evolution timeline

Era | Data Characteristics | Dominant Methods
--- | --- | ---
1970s-1980s | Low spectral/spatial resolution (early Landsat MSS: 4 bands, 80 m) | Manual interpretation, unsupervised clustering (ISODATA, K-means)
1980s-1990s | Improved spectral resolution (Landsat TM: 7 bands, 30 m) | Supervised classification (Maximum Likelihood, Minimum Distance)
1990s-2000s | High spatial resolution (IKONOS, QuickBird: 1-4 m) | Object-Based Image Analysis (OBIA), decision trees
2000s-2010s | Multi-sensor fusion (optical + radar + LiDAR) | Random Forest, Support Vector Machines, ensemble methods
2010s-Present | Big data (Sentinel, Planet daily), cloud computing | Deep learning (CNNs, U-Net), time-series classification

Key paradigm shifts

1. Pixel-based to Object-based

Early classifiers treated each pixel independently. As resolution improved, individual pixels became smaller than the objects of interest (a building might span 100 pixels). Object-Based Image Analysis (OBIA) first segments the image into homogeneous regions, then classifies those objects using shape, texture, and context in addition to spectral properties.
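The segment-then-classify workflow can be sketched in Earth Engine using SNIC segmentation. This is a minimal illustration, not a full OBIA pipeline: the dataset, location, and parameter values (seed size, compactness) are illustrative choices, and a real analysis would tune them and feed the segment statistics into a classifier.

// Hedged sketch: the OBIA segment-then-classify idea via SNIC segmentation.
// Location, dates, and parameters are illustrative.
var s2 = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
  .filterDate('2023-06-01', '2023-09-01')
  .filterBounds(ee.Geometry.Point([-82.3, 29.6]))
  .median()
  .select(['B2', 'B3', 'B4', 'B8']);

var segments = ee.Algorithms.Image.Segmentation.SNIC({
  image: s2,
  size: 20,        // approximate seed spacing, in pixels
  compactness: 1,
  connectivity: 8
});

// 'clusters' labels each segment; the *_mean bands hold per-segment
// spectral means that a classifier would use instead of raw pixels,
// optionally alongside shape and texture metrics.
Map.setCenter(-82.3, 29.6, 12);
Map.addLayer(segments.randomVisualizer(), {}, 'SNIC segments');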

2. Single-date to Time Series

Limited data availability meant most classifications used a single "best" image. With dense time series now available (Landsat archive, Sentinel-2), classification can leverage phenological patterns - how vegetation changes through seasons - dramatically improving accuracy for agricultural mapping and forest type discrimination.
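One way to expose those phenological patterns in Earth Engine is to stack a per-month vegetation index into a single multi-band image, which a time-series classifier can then use as its feature vector. This is a hedged sketch: the location, year, and monthly-median compositing scheme are illustrative assumptions, and months without cloud-free observations would need additional handling.

// Hedged sketch: building a monthly NDVI stack as a phenological feature vector.
// Location and dates are illustrative.
var point = ee.Geometry.Point([-82.3, 29.6]);
var s2 = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
  .filterDate('2023-01-01', '2024-01-01')
  .filterBounds(point);

// One NDVI composite per month; toBands() flattens the collection
// into a single 12-band image for use in classification.
var monthlyNdvi = ee.ImageCollection(
  ee.List.sequence(1, 12).map(function(m) {
    return s2.filter(ee.Filter.calendarRange(m, m, 'month'))
      .median()
      .normalizedDifference(['B8', 'B4'])
      .rename(ee.String('NDVI_').cat(ee.Number(m).format('%02d')));
  })
).toBands();

print('Monthly NDVI stack', monthlyNdvi);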

3. Desktop to Cloud

Processing petabytes of imagery was impossible on desktop computers. Platforms like Google Earth Engine moved computation to the cloud, enabling continental and global-scale classification that would have taken years on a single machine.

Quick win: Compare classification eras in GEE

Run the following in the Earth Engine Code Editor to see how both older and newer methods can be applied:

// Define the area of interest once so it can be reused for filtering
// and for centering the map (a median composite has no footprint,
// so Map.centerObject(image) would fail; center on the point instead).
var point = ee.Geometry.Point([-82.3, 29.6]);

// Load a Landsat 8 surface reflectance composite and apply the
// Collection 2 scaling factors to the spectral bands
var bands = ['SR_B2', 'SR_B3', 'SR_B4', 'SR_B5', 'SR_B6', 'SR_B7'];
var image = ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
  .filterDate('2023-01-01', '2023-12-31')
  .filterBounds(point)
  .median()
  .select(bands)
  .multiply(0.0000275).add(-0.2);

// 1980s approach: Unsupervised K-means clustering (5 classes, 100 iterations)
var clustered = ee.Algorithms.Image.Segmentation.KMeans(image, 5, 100)
  .select('clusters');

// 2010s approach: Random Forest would need training data
// (see the supervised classification lab for a full implementation)

Map.centerObject(point, 9);
Map.addLayer(image, {bands: ['SR_B4', 'SR_B3', 'SR_B2'], min: 0, max: 0.3}, 'True Color');
Map.addLayer(clustered.randomVisualizer(), {}, 'K-means Clusters');

print('This unsupervised approach mirrors 1980s methods');

What you should see

A true color image and a clustered version showing 5 spectral classes without training data.

Current frontiers

  • Deep Learning: Convolutional Neural Networks (CNNs) automatically learn spatial features, reducing the need for hand-crafted indices.
  • Transfer Learning: Models pre-trained on large datasets can be fine-tuned for specific applications with limited training data.
  • Self-Supervised Learning: Learning useful representations from unlabeled data, then applying to classification.
  • Multi-sensor Fusion: Combining optical, radar, and LiDAR data for improved discrimination of complex land covers.

Quick self-check

  1. Why did unsupervised methods dominate in the 1970s-80s?
  2. What technological change enabled object-based image analysis?
  3. How has cloud computing changed what is possible in land cover mapping?
  4. What advantage does time-series classification offer over single-date classification?

Further reading

  • Li, M., et al. (2014). A review of remote sensing image classification techniques. International Journal of Remote Sensing.
  • Blaschke, T. (2010). Object based image analysis for remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing.
  • Zhu, X.X., et al. (2017). Deep learning in remote sensing: A comprehensive review. IEEE Geoscience and Remote Sensing Magazine.

Next steps