Optimizing Satellite Image Analysis: Leveraging Variational Autoencoders Latent Representations for Direct Integration
Journal Articles
Abstract
Variational autoencoders (VAEs) have emerged as powerful tools for data compression and representation learning. In this study, we explore the application of VAE-based neural compression models for compressing satellite images and leveraging the latent space directly for downstream machine learning tasks, such as classification. Traditional approaches to image compression require decoding the compressed format before any subsequent analysis. However, we propose that the latent representation constructed by these models can be utilized directly by another machine learning model, without explicit reconstruction or an inverse transform. We utilize latent spaces derived from Sentinel-2 images encoded by neural compression models for downstream classification tasks. We demonstrate the viability and flexibility of this approach, showcasing the impact of fine-tuning the neural compression models to further increase classification performance, achieving the same accuracy as state-of-the-art models at lower bitrates. By training these models to compress satellite images into a low-dimensional latent space, we show that the latent representations capture meaningful information about the original images, facilitating accurate classification without the overhead of reconstruction. Our results highlight the potential of neural compression methods for direct satellite image analysis, offering a promising avenue for efficient data transmission and processing in remote sensing applications.
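The pipeline the abstract describes — encode an image once with the compression model's encoder, then run the downstream classifier directly on the latent vector, with no decoder in the loop — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual models: the linear encoder, all dimensions, and the classifier weights are hypothetical stand-ins for a trained VAE encoder and a trained classification head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's real patch size and latent size differ.
IMG_DIM = 64 * 64 * 3    # flattened stand-in for a Sentinel-2 patch
LATENT_DIM = 16          # low-dimensional latent space
NUM_CLASSES = 4          # stand-in land-cover classes

# Stand-in for a trained VAE encoder: linear maps to latent mean and log-variance.
W_mu = rng.normal(scale=0.01, size=(IMG_DIM, LATENT_DIM))
W_logvar = rng.normal(scale=0.01, size=(IMG_DIM, LATENT_DIM))

def encode(x):
    """Map an image to its latent representation (the compressed form)."""
    return x @ W_mu, x @ W_logvar

# Stand-in for a downstream classifier that consumes latents directly.
W_clf = rng.normal(scale=0.1, size=(LATENT_DIM, NUM_CLASSES))

def classify_latent(mu):
    """Predict a class from the latent mean; no reconstruction is performed."""
    logits = mu @ W_clf
    return int(np.argmax(logits))

x = rng.normal(size=IMG_DIM)   # stand-in satellite image patch
mu, logvar = encode(x)         # compression step: encoder only
label = classify_latent(mu)    # downstream task operates on the latent itself
```

The point of the sketch is structural: the decoder never appears, so classification cost scales with the latent dimension rather than the image dimension, which is what makes transmitting and analyzing the compressed representation directly attractive.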