Real-time Forest Fire Detection System using Computer Vision and Deep Learning

Contributors

Nishant Bharali, Amal Sujith

Abstract

Global forest ecosystems, wildlife habitats, and human settlements are increasingly endangered by forest fires. The early identification of incipient forest fires is a crucial element in preventing their uncontrolled spread over extensive geographical regions. Nevertheless, the present surveillance infrastructure for forest fire detection exhibits notable deficiencies. For instance, satellite-based systems such as MODIS are often impeded by their coarse spatial resolution and delayed response times, typically ranging from one to two days, as identified in the study by Li et al. (2018). Additionally, traditional methods such as watchtowers and ground patrols are constrained by their limited area coverage, leaving vast forest areas unprotected. In response to these challenges, the current project is directed towards the development of a comprehensive, real-time forest fire detection system, leveraging the latest advancements in computer vision, deep learning, and unmanned aerial systems for continuous, wide-area monitoring. This research endeavors to create a highly accurate and efficient detection system utilizing advanced computer vision and deep learning methodologies, encompassing object detection and image classification. The core of this system is based on custom-developed machine learning algorithms that analyze collected data to ascertain the probability of imminent fire events, drawing upon recognized fire signatures. In parallel, the aerial imagery obtained is analyzed by deep convolutional neural networks specifically tailored for fire detection. These networks categorize image segments into 'fire' or 'non-fire' classes, achieving an accuracy rate exceeding 80%. Current research efforts are concentrated on refining the model's accuracy, particularly in conditions of reduced visibility and over extended detection ranges.
The proposed system represents a substantial advancement over traditional satellite-based and manual monitoring methods, which are limited in terms of responsiveness and coverage. With ongoing enhancements, this system has the potential to be scaled up for widespread application, offering a proactive solution to this pressing global issue.

Index Terms—Communication Bridge, Coverage Path Planning, Detection Algorithm, Fixed-Wing, Ground Control Station, Image Processing, Mission Planner, Navigation, Unmanned Aerial Vehicle, Convolutional Neural Networks

Introduction

The foundational motivation behind the conception and execution of this model was to extend support to the tribal communities who are heavily reliant on forest resources for their sustenance. Catastrophic events like forest fires have a devastating impact on these ecosystems, leading to the loss of both natural resources and wildlife habitats within these sanctuaries. Such incidents disrupt the ecological balance in these regions significantly.


In this context, the capacity for real-time detection emerges as a crucial tool for forest management authorities, offering immediate insights into unfolding situations. Traditional satellite-based detection methods are becoming increasingly obsolete due to their prolonged relay times, which hamper timely response to such emergencies. In contrast, drones emerge as a superior alternative for the rapid detection of these perilous events.


Furthermore, the ethos promoted by our university, which emphasizes compassion towards those in need and the importance of contributing positively to society, was a critical factor in our decision to undertake this project. This initiative aligns with our institution's commitment to fostering social responsibility and aiding communities that depend on natural resources for their livelihood.

Proposed Workflow

Figure 1. Proposed Workflow

Approach

This study will utilize advanced Convolutional Neural Network (CNN) architectures, with a focus on integrating Faster R-CNN Inception models into Raspberry Pi 4 systems for the purpose of forest fire detection. The training of these models will involve a dataset comprising images of forest fires, sourced from the National Disaster Management Authority of India's (NDMA) online repository, alongside images depicting standard forest landscapes without fire presence. To enhance the efficacy of these models, we will implement transfer learning techniques, leveraging the capabilities of pre-existing neural networks such as Inception and MobileNet. This approach involves the modification and fine-tuning of both the classifier and regressor components within these networks, specifically tailoring them for fire detection tasks. Additionally, data augmentation strategies—including image flipping, rotation, and adjustments in brightness and contrast—will be employed to broaden the variability within the training dataset.
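The augmentation strategies described above (flipping, rotation, and brightness/contrast adjustment) can be sketched as follows. This is an illustrative NumPy-only version under assumed parameter values; a real training pipeline would more likely use a library such as tf.image, and the offset and contrast factor here are hypothetical.

```python
import numpy as np

def augment(image, seed=0):
    """Generate simple augmented variants of a square (H, W, 3) uint8 image.

    Illustrative sketch only: the brightness offset range and contrast
    factor are assumed values, not the project's tuned settings.
    """
    rng = np.random.default_rng(seed)
    variants = []
    variants.append(np.fliplr(image))   # horizontal flip
    variants.append(np.flipud(image))   # vertical flip
    variants.append(np.rot90(image))    # 90-degree rotation
    # Brightness: add a random offset, then clip to the valid pixel range.
    offset = rng.integers(-40, 40)
    variants.append(np.clip(image.astype(int) + offset, 0, 255).astype(np.uint8))
    # Contrast: scale pixel values about the image mean.
    factor = 1.3
    mean = image.mean()
    variants.append(np.clip((image - mean) * factor + mean, 0, 255).astype(np.uint8))
    return variants

img = np.full((8, 8, 3), 128, dtype=np.uint8)
print(len(augment(img)))  # five augmented variants
```

Each variant keeps the original image shape, so the augmented set can be appended directly to the training batch.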

Our model development and testing phases will be conducted on a high-capacity NVIDIA GPU server, ensuring accelerated computational processing. Following the determination of the most effective model, we will proceed with its conversion and subsequent deployment on compact computing devices, such as the Jetson Nano and Raspberry Pi. This enables real-time fire detection capabilities when integrated with aerial vehicles like fixed-wing gliders or Tilt Rotor VTOLs in a controlled simulation environment. The deployed model will be responsible for processing live video feeds from mounted cameras on these vehicles, identifying and delineating fire-affected areas through generated bounding boxes.
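The onboard loop described above can be sketched as follows. The detector here is a hypothetical stand-in (a crude bright-red heuristic) for the deployed CNN, frames are NumPy arrays in place of a live camera feed, and the frame-skipping interval and score threshold are assumed values chosen for illustration.

```python
import numpy as np

def detect_fire(frame):
    """Hypothetical stand-in for the deployed network: returns a list of
    (x, y, w, h, score) boxes, here by flagging bright-red regions only."""
    red = (frame[:, :, 0] > 200) & (frame[:, :, 1] < 100)
    if not red.any():
        return []
    ys, xs = np.nonzero(red)
    return [(xs.min(), ys.min(), xs.max() - xs.min() + 1,
             ys.max() - ys.min() + 1, 0.9)]

def draw_box(frame, box):
    """Draw a one-pixel green rectangle onto the frame in place."""
    x, y, w, h, _ = box
    frame[y, x:x + w] = (0, 255, 0)
    frame[y + h - 1, x:x + w] = (0, 255, 0)
    frame[y:y + h, x] = (0, 255, 0)
    frame[y:y + h, x + w - 1] = (0, 255, 0)

def process_feed(frames, every_nth=3, threshold=0.5):
    """Run detection on every Nth frame, annotate hits, return detections."""
    detections = []
    for i, frame in enumerate(frames):
        if i % every_nth:
            continue  # skip frames to hold real-time throughput on the Pi
        for box in detect_fire(frame):
            if box[-1] >= threshold:
                draw_box(frame, box)
                detections.append((i, box))
    return detections
```

Skipping frames between inferences is one common way to keep a heavy detector within the frame budget of a small embedded board.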

For a comprehensive implementation, our project is structured into several distinct phases.

The Coverage Strategy we are employing:

Code Implementation:
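A minimal sketch of a boustrophedon ("lawnmower") coverage pattern over a rectangular survey area follows. It assumes a flat rectangle and a fixed swath width derived from the camera footprint; both the local-coordinate convention and the swath value are illustrative assumptions, and a real mission would be uploaded to the autopilot via a ground control station such as Mission Planner.

```python
def lawnmower_waypoints(width_m, height_m, swath_m):
    """Boustrophedon (back-and-forth) coverage of a width x height
    rectangle, with passes spaced one camera swath apart.

    Returns waypoints as (x, y) metres from the south-west corner.
    """
    waypoints = []
    y = 0.0
    leftward = False
    while y <= height_m:
        if leftward:
            waypoints += [(width_m, y), (0.0, y)]
        else:
            waypoints += [(0.0, y), (width_m, y)]
        leftward = not leftward  # reverse direction on the next pass
        y += swath_m
    return waypoints

wps = lawnmower_waypoints(100, 60, 20)  # 4 passes, 8 waypoints
```

Alternating the pass direction minimises dead-heading between passes, which matters for the limited endurance of a fixed-wing glider.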

I. Proposed Methodology for UAV Based Object Detection

Figure 2. Expected Flight Operation (UAV based implementation)

The figure indicates the work cycle of the drone, including its logic blocks and detection conditions. The ultimate goal is to detect fires with maximum accuracy during the mission and, in the event of critical battery failure or depletion, to return safely to base with minimal damage. This workflow builds on the idea presented in paper [13], where swarm drones return to base for repairs; it is used predominantly for autonomous flights, with manual flight control retained for human intervention.
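The decision logic of this work cycle can be sketched as a simple priority rule: safety (battery) overrides the mission, and a detection pauses the survey for reporting. The threshold and action names below are illustrative assumptions, not values from the deployed system.

```python
def next_action(battery_pct, fire_detected, rtl_threshold=25):
    """Decide the UAV's next action in the work cycle.

    rtl_threshold (percent remaining) is a hypothetical failsafe level.
    """
    if battery_pct <= rtl_threshold:
        return "RETURN_TO_LAUNCH"   # safety overrides the mission
    if fire_detected:
        return "LOITER_AND_REPORT"  # hold position, stream to ground station
    return "CONTINUE_MISSION"
```

Checking the battery first guarantees the drone heads home even if a fire is detected at the same moment, matching the "return with minimal damage" goal.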

Deviations:

Since igniting a large-scale fire to test the system is not feasible in a real-world scenario, we prefer simulations to replicate such events, restricting real-world testing to small, controlled area-of-effect scenarios.

Required Tools




Working

The glider carries two cameras, a video telemetry unit, a GPS module, a smoke sensor (MQ2), and a microcontroller (Raspberry Pi 3B+ / NVIDIA Jetson Nano). TensorFlow is installed on the microcontroller and runs an R-CNN-based machine learning model for fire detection, trained on 3000-4000 images of fire in different scenarios. If an open flame is detected, the communication module is activated directly; if a smoky atmosphere is detected instead, the glider's sensor module is activated to measure the gas concentration with the MQ2 smoke sensor, and the communication module is activated only if the reading exceeds a set threshold. The drone's second camera provides live streaming of the location to forest officials to rule out false alarms.
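The alerting rule described above can be sketched as follows. The MQ2 value is treated as a raw ADC reading, and the threshold of 400 is a hypothetical calibration value, not the deployed setting.

```python
def should_alert(flame_detected, smoke_detected, mq2_reading, mq2_threshold=400):
    """Decide whether to activate the communication module.

    mq2_reading is a raw ADC value from the MQ2 sensor; the default
    threshold is an assumed calibration, not the system's tuned value.
    """
    if flame_detected:
        return True                         # visible fire: alert immediately
    if smoke_detected:
        return mq2_reading > mq2_threshold  # confirm smoke with the gas sensor
    return False
```

Gating smoke-only detections on the sensor reading is what suppresses false alarms from fog or dust, while visible flames bypass the check entirely.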

II. Proposed Idea for Ground Based Detection Module using WSN

Working

Implementation Images

Experiments and Results

1. Fire Detection Using Deep Neural Networks

1.1. Forest Fire Detection Algorithm

A systematic two-phase methodology was adopted for developing and optimizing a vision-based deep neural network model capable of reliable wildfire identification from imagery under challenging real-world conditions.

Phase 1: Rule-Based Method

This intuitive and computationally inexpensive approach serves as a fast method for fire identification. However, it showed poor tolerance to noise, shadows, and outdoor luminance variation, which deteriorated detection accuracy.

Phase 2: Deep Learning Model

To overcome the limitations of the color-rule-based method, a more robust deep convolutional neural network architecture, customized for fire recognition, was developed by leveraging transfer learning.

Among the candidate models, the SSD framework offered the optimal accuracy-latency trade-off by simplifying detection to a single-shot prediction network, compared to the two-stage region-proposal mechanism of Faster R-CNN derivatives. The final model operates at over 80% precision in identifying diverse fire events within aerial imagery while maintaining 12 FPS throughput on the Jetson module. Table I summarizes the frame-by-frame analysis rate and accuracy achieved by the DNN models on the target embedded hardware, alongside the software-based rule technique. Figures 3.1 and 3.2 showcase sample detections using the deep network architecture.

1.2. Comparative Analysis

Table I. Performance of Experimental Fire Detection Models

Figure 3.1. SSD mobilenet v2 model

Figure 3.2. Faster RCNN inception v2 Model 

By harnessing deep neural network architectures combined with well-annotated datasets and model optimization strategies, the solution demonstrates reliable automated fire spotting from the sky, allowing early alerts to ground response teams monitoring expansive territories.

Figure 3.3. Pre-processed Image and Original Image

Figure 3.4. Live detection using Faster RCNN inception v2 - Test Case 1

Figure 3.5. Maximum FOV for Test Case 1

Figure 3.6. Live fire detection test for a distinct scenario

1.3. Detection Through Computer Vision

In the detection part, sample fire images are captured at 30 fps using a standard camera, and the captured frames are converted from the RGB color space to HSV format for better detection. HSV has the upper hand here because it separates the luma (brightness) component of a frame from the chroma (color) component, which yields more consistent output under different lighting conditions. To set up a filter, a mask is created over a particular range of the color spectrum covering combinations of yellow and red. This mask and the original image are then passed to a filter that performs a bitwise AND operation. The output of the AND operation in the black/white domain:

Filter response:

This operation gives the final filter output; a non-zero pixel counter is then applied to the filter response to produce the final fire prediction.
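The pipeline above (RGB to HSV, red/yellow mask, bitwise AND, non-zero count) can be sketched in NumPy as follows. The hue, saturation, and value thresholds and the minimum pixel count are illustrative assumptions, not the tuned values used in the experiments; in practice OpenCV's cvtColor/inRange would replace the hand-rolled conversion.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorised RGB -> HSV for a uint8 image; H in [0, 360), S and V in [0, 1]."""
    rgb = img.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)
    c = v - rgb.min(axis=-1)            # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-9), 0.0)
    h = np.zeros_like(v)
    mask = c > 0
    rm = mask & (v == r)                # red channel is the maximum
    gm = mask & (v == g) & ~rm
    bm = mask & ~rm & ~gm
    h[rm] = (60 * (g - b)[rm] / c[rm]) % 360
    h[gm] = 60 * (b - r)[gm] / c[gm] + 120
    h[bm] = 60 * (r - g)[bm] / c[bm] + 240
    return h, s, v

def fire_mask(img, hue_max=65, sat_min=0.4, val_min=0.5):
    """Binary mask over the red-to-yellow hue band (assumed thresholds)."""
    h, s, v = rgb_to_hsv(img)
    return (h <= hue_max) & (s >= sat_min) & (v >= val_min)

def predict_fire(img, min_pixels=50):
    """AND the mask with the frame, then count non-zero pixels."""
    filtered = img * fire_mask(img)[..., None]   # bitwise-AND equivalent
    return np.count_nonzero(filtered.any(axis=-1)) >= min_pixels
```

A frame is declared "fire" only when enough masked pixels survive the AND, which is the non-zero bit counter described above.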

1.4. Comparative Results

We further tested both models, gathered the following results, and performed a comparative analysis between them:


Qualitative Results

Challenges faced and success/failure cases

Fire Detection success message and Output Image verification is presented as follows:

Figure 3.7. Indicates the Successful fire detection and Output Image

Video footage of test cases:

Untitled video - Made with Clipchamp (4).mp4
Untitled video - Made with Clipchamp (5).mp4

Additional Footage of UAV based Implementation:

CV UAV.mp4

Conclusion

Recent research proposes an integrated system for forest fire detection that appears to offer faster and more reliable outcomes compared to existing methods. The use of unmanned aerial vehicles (UAVs) enables extensive monitoring of hard-to-reach forest areas, while a wireless sensor network provides complementary detection capacity when drones are grounded. Preliminary tests suggest the combined system can reduce false alarms through cross-validation. Additionally, refinements to the image processing algorithms show promise for detecting both smoke and flames in challenging real-world conditions. While early results are encouraging, the persistent issue of high false alarm rates indicates that further research is needed to realize the full potential of this integrated approach. With additional training data and algorithm refinement, there may be opportunities to significantly improve accuracy. By addressing existing limitations, next-generation versions of the system could become a versatile component of forest fire prevention and response.

Future Scope

Identifying operating conditions that provide viable alternatives for reduced power consumption:

References