BISAG, Gandhinagar
Bhaskaracharya Institute For Space Applications and Geo-Informatics
During the summer of 2015, I interned at BISAG. Along with my friends Shantanu Shrivastava and Dharmil Shah, I worked on a project titled "Object Based Classification of Satellite Images".
Object Based Classification of Satellite Images
We researched and worked on identifying individual objects such as cars, aircraft, and roads in high-resolution satellite images. Rather than using the conventional pixel-based approach to image classification, we used a context-sensitive, object-based approach.
I did this project as part of my internship under the mentorship of Dr. Manoj Pandya and Dr. M. B. Poddar.
The major topics involved in the project include:
- Image Processing
- Machine Learning
Image Processing
A major question that arises is why object-based classification is needed rather than pixel-based classification:
- Object-based classification outperforms both unsupervised and supervised pixel-based classification methods.
- The higher accuracy is attributed to the fact that object-based image classification takes advantage of both spectral and contextual information in the remotely sensed imagery.
We can divide the complete process into 5 major steps:
- Image pre-processing
- Multi-Resolution Segmentation
- Feature extraction
- Classification
- Post-classification processing
Image pre-processing
Errors may be introduced by:
- Environment
- Clouds
- Atmospheric Scattering
- Random or systematic malfunction of the remote sensing system (e.g., an uncalibrated detector creates striping)
- Miscellaneous other noise sources
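As a minimal sketch of one common pre-processing step, the snippet below applies a median filter to suppress impulse ("salt-and-pepper") noise. This is an illustrative numpy-only implementation, not the project's actual pipeline; in practice a library routine such as `scipy.ndimage.median_filter` would typically be used.

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size neighbourhood.

    Impulse noise (isolated outlier pixels) is removed because a single
    outlier cannot dominate the median of the window.
    """
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")  # replicate edges at the border
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

For example, a single bright noise pixel in an otherwise uniform patch is replaced by the surrounding value.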
Multi-Resolution Segmentation
- Motivation: The ability to describe image properties that span a range of scales, which directly reflects the nature of most images.
- The process starts with each pixel forming one image object or region. At each step a pair of image objects is merged into one larger object. The merging decision is based on local homogeneity criteria, describing the similarity of adjacent image objects.
- Several research papers are available on this algorithm for interested users.
- The scale of segmentation defines how large the objects are allowed to grow: a smaller scale produces smaller, more homogeneous objects.
Below are the original image, the image at a multi-resolution scale of 5, and the image at a multi-resolution scale of 40.
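The merging idea above can be sketched with a much simpler stand-in: greedy region growing on a single-band image, where the `scale` parameter plays the role of the homogeneity threshold. This is an illustrative toy, not the multi-resolution segmentation algorithm itself (which merges object pairs under a more elaborate local homogeneity criterion).

```python
import numpy as np

def segment(img, scale):
    """Greedy region growing on a 2-D intensity image.

    Each unvisited pixel starts a new region; 4-connected neighbours are
    absorbed while their intensity differs from the running region mean
    by less than `scale`. A larger `scale` yields fewer, larger regions.
    Returns an integer label map.
    """
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    current = 0
    for si in range(h):
        for sj in range(w):
            if labels[si, sj] != -1:
                continue
            labels[si, sj] = current
            total, count = float(img[si, sj]), 1
            stack = [(si, sj)]
            while stack:
                i, j = stack.pop()
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if (0 <= ni < h and 0 <= nj < w
                            and labels[ni, nj] == -1
                            and abs(img[ni, nj] - total / count) < scale):
                        labels[ni, nj] = current
                        total += float(img[ni, nj])
                        count += 1
                        stack.append((ni, nj))
            current += 1
    return labels
```

On an image with two flat halves (intensities 0 and 100), a small scale keeps them as two regions while a very large scale merges everything into one, mirroring the effect of the scale parameter in the figures above.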
Image Features
These are the quantities used in the feature-extraction step.
- A feature is a piece of information which is relevant for solving the computational task related to a certain application.
- Features may be specific structures in the image such as points, edges or objects or may be the result of a general neighbourhood operation or feature detection applied to the image.
- Other examples of features are related to motion in image sequences, to shapes defined in terms of curves or boundaries between different image regions, or to properties of such a region.
Feature Extraction
- Building derived data values that are intended to be non-redundant and informative, facilitating the subsequent learning and generalization steps and leading to a better interpretation.
- Generally involves reducing the amount of resources required to describe a large set of data.
- Major algorithms used are Linear Discriminant Analysis and Principal Component Analysis.
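Of the two algorithms named above, Principal Component Analysis is the simpler to sketch: project the per-object feature vectors onto the directions of greatest variance, keeping only the top few components. The numpy implementation below is a generic illustration, not the project's exact code.

```python
import numpy as np

def pca(features, n_components):
    """Reduce each row of `features` to `n_components` values.

    Centers the data, takes its SVD (rows of vt are the principal axes,
    ordered by explained variance), and projects onto the leading axes.
    """
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

The reduced representation keeps the most informative directions while shrinking the number of values per object, which is exactly the resource reduction described above.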
Classification
- Allocating classes to each region/segment.
- Need to define decision boundaries.
- Need to define a decision rule, i.e. the algorithm that assigns a class to a region based on the decision boundaries.
- Use of machine learning techniques such as decision trees.
Using decision trees as trained learners, we achieved the classification results below.
For 2-class classification: classified the original image into two classes, aircraft and ground.
For 4-class classification: classified the original image into four classes: aircraft, ground, shadow, and road.
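A minimal sketch of the 2-class case: train a decision tree on per-segment feature vectors and predict classes for new segments. The feature choice here (mean brightness, area in pixels) and the label encoding are assumptions for illustration, not the project's actual features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-segment features: [mean brightness, area in pixels].
# Assumed label encoding: 0 = ground, 1 = aircraft.
X_train = np.array([[30, 500], [35, 480], [200, 40], [210, 35]])
y_train = np.array([0, 0, 1, 1])

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)

# Classify two previously unseen segments.
X_new = np.array([[205, 38], [32, 490]])
pred = clf.predict(X_new)
```

The tree's learned splits are the decision boundaries, and traversing the tree for a segment is the decision rule described above.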
Statistics
Statistics for any classified image can be obtained by creating a mask image for a particular colour and extracting that coloured segment from the image. The number of pixels in the resulting image then gives the approximate percentage of that colour in the original classified image.
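The colour-mask counting described above reduces to a few lines of numpy; this is a generic sketch of the idea, not the project's code:

```python
import numpy as np

def colour_fraction(classified, colour):
    """Fraction of pixels in a classified RGB image matching `colour`.

    Builds a boolean mask where all three channels equal `colour`,
    then counts the True pixels relative to the image size.
    """
    mask = np.all(np.asarray(classified) == np.asarray(colour), axis=-1)
    return mask.sum() / mask.size
```

Multiplying the returned fraction by 100 gives the approximate percentage of that class in the classified image.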
Accuracy Assessment
The accuracy of our training model can be computed by:
- Comparing the results with those of industry-standard tools such as e-Cognition.
- Performing tone-based statistical operations on both the original image and the classified image.
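One simple way to score the comparison in the first bullet is per-pixel agreement between our class map and a reference class map (e.g. one produced by e-Cognition). This is a generic sketch of that measure, under the assumption that both maps use the same label encoding:

```python
import numpy as np

def agreement(ours, reference):
    """Fraction of pixels where our class map matches the reference map."""
    ours = np.asarray(ours)
    reference = np.asarray(reference)
    return (ours == reference).mean()
```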