Journal article

Cloud/haze detection in airborne videos using a convolutional neural network

Abstract

In airborne video surveillance, moving object detection and target tracking are key steps. However, under bad weather conditions, the presence of clouds and haze, or even smoke rising from buildings, can make processing these videos very challenging. Current cloud detection and classification methods consider only a single image. Moreover, the images they use are often captured by satellites or high-altitude aircraft at very long ranges from the clouds, which helps distinguish cloudy regions from non-cloudy ones. In this paper, a new approach for cloud and haze detection is proposed that exploits both spatial and temporal information in airborne videos. In this method, several consecutive frames are divided into patches. Co-located patches from consecutive frames are then collected into patch sets and fed into a deep convolutional neural network. The network is trained to learn both the appearance of clouds and their motion characteristics. Therefore, instead of relying on single-frame patches, the decision on a patch in the current frame is made based on patches from the preceding and subsequent frames. This approach avoids discarding temporal information about clouds in videos, which may contain important cues for discriminating between cloudy and non-cloudy regions. Experimental results show that using temporal information alongside the spatial characteristics of haze and clouds can greatly increase detection accuracy.
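The patch-set construction the abstract describes — dividing several consecutive frames into patches and stacking co-located patches into one spatio-temporal input for the network — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation; the function name, patch size, and array shapes are assumptions.

```python
import numpy as np

def make_patch_sets(frames, patch_size):
    """Stack co-located patches from consecutive frames into patch sets.

    frames: array of shape (T, H, W) -- T consecutive grayscale frames.
    Returns an array of shape (num_patches, T, patch_size, patch_size),
    i.e. one spatio-temporal patch set per grid location, which could
    then be fed to a CNN that sees both appearance and motion.
    """
    T, H, W = frames.shape
    rows, cols = H // patch_size, W // patch_size
    sets = []
    for r in range(rows):
        for c in range(cols):
            # Same spatial window cut from every frame in the sequence.
            stack = frames[:,
                           r * patch_size:(r + 1) * patch_size,
                           c * patch_size:(c + 1) * patch_size]
            sets.append(stack)
    return np.stack(sets)

# Example: 5 consecutive 64x64 frames with 16x16 patches -> 16 patch sets.
frames = np.random.rand(5, 64, 64)
patch_sets = make_patch_sets(frames, 16)
print(patch_sets.shape)  # (16, 5, 16, 16)
```

The classifier then labels the center-frame patch at each grid location using the whole stack, rather than that single patch alone.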

Authors

Fazlali H; Shirani S; McDonald M; Kirubarajan T

Journal

Multimedia Tools and Applications, Vol. 79, No. 39-40, pp. 28587–28601

Publisher

Springer Nature

Publication Date

October 1, 2020

DOI

10.1007/s11042-020-09359-7

ISSN

1380-7501
