# Yolov5 for Fire Detection

The fire detection task aims to identify fire or flame in a video and draw a bounding box around it. This repo includes a demo of how to build a fire detector using YOLOv5.
## Install

Clone this repo and use the following script to install YOLOv5.

```bash
# Clone this repo
git clone https://github.com/spacewalk01/Yolov5-Fire-Detection
cd Yolov5-Fire-Detection

# Install YOLOv5 and its dependencies
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
```
## Training

I set up the `train.ipynb` notebook for training the model from scratch. To train the model, download Fire-Dataset and put it in the `datasets` folder. This dataset contains samples from both the Fire & Smoke and Fire & Guns datasets on Kaggle. I filtered out images and annotations that contain smoke and guns, as well as low-resolution images, and then relabeled the fire annotations in the annotation files.
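As a rough sketch, a run matching the setup reported in the Results section below (YOLOv5s, 640x640 input, 10 epochs) could be launched with YOLOv5's `train.py`; the dataset config path `fire.yaml` and the batch size are assumptions about how the dataset is laid out:

```bash
# Train YOLOv5s on the fire dataset
# (the fire.yaml path and --batch 16 are assumptions)
python train.py --img 640 --batch 16 --epochs 10 --data ../datasets/fire.yaml --weights yolov5s.pt
```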
## Prediction

If you train your own model, use the following command for detection:
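A minimal sketch, assuming the best weights from your run end up under `runs/train/exp/weights/` and the input video is named `input.mp4` (both paths are assumptions):

```bash
# Detect fire with your own trained weights (paths are assumptions)
python detect.py --source ../input.mp4 --weights runs/train/exp/weights/best.pt --conf 0.2
```

Or you can use the pretrained model located in the `models` folder for detection as follows:

```bash
# Detect fire with the pretrained weights shipped in the repo
# (the weight filename best.pt is an assumption)
python detect.py --source ../input.mp4 --weights ../models/best.pt --conf 0.2
```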
## Results

The following charts were produced after training YOLOv5s with an input size of 640x640 on the fire dataset for 10 epochs.

*(Figures: P curve, PR curve, and R curve.)*
### Prediction Results

The fire detection results were fairly good even though the model was trained for only a few epochs. However, I observed that the trained model tends to predict the red emergency lights on top of police cars as fire. This might be because the training dataset contains only a few hundred negative samples. We may fix this problem and further improve the model's performance by adding images with no labeled fire objects as negative samples (see the sketch below). The YOLOv5 authors recommend that about 0-10% of the training data be background images to help reduce false positives.
*(Figures: ground truth vs. prediction examples.)*
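A minimal sketch of adding background negatives, assuming the dataset follows the usual YOLO layout (`images/train` plus a parallel `labels/train`; the exact folder structure is an assumption about this repo). In YOLO format, an image with no label file (or an empty one) is treated as pure background:

```bash
# Copy non-fire images into the training images folder (paths are assumptions).
# No matching .txt files are created under labels/train, so YOLOv5 treats
# these images as background-only negative samples.
cp path/to/non_fire_images/*.jpg ../datasets/fire/images/train/
```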
## Feature Visualization

It is desirable for AI engineers to understand what happens under the hood of object detection models. Visualizing features in deep learning models can help us understand how they make predictions. In YOLOv5, you can visualize features using the `--visualize` argument as follows:
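A sketch of such a run; the input file name and weight path are assumptions:

```bash
# Save intermediate feature maps alongside the detections
python detect.py --source ../input.mp4 --weights ../models/best.pt --visualize
```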
## Reference

I borrowed and modified the YOLOv5-Custom-Training.ipynb script for training the YOLOv5 model on the fire dataset. For more information on training YOLOv5, please refer to its homepage.