Intelligent real-time video surveillance system developed for airports

Scientists from the research group ‘Video and Image Processing (VIP)’ at the Department of Computer Architecture of the University of Malaga (UMA) have developed an intelligent video surveillance system that detects and identifies objects and people in large spaces in real time. The novelty of this method is that it eliminates the need for direct human supervision in almost the entire process, enhancing surveillance and control tasks. To demonstrate its effectiveness, the experts tested the model at a European airport as a real-world scenario.

Another key aspect of this study, which is funded by the Department of University, Research and Innovation of the Andalusian Government, the Ministry of Education and Vocational Training, and the University of Malaga, is the adaptation of the system to a low-power computing device, i.e., a smaller data processor that requires little energy to operate.

Currently, automatic video surveillance systems typically apply object detection techniques in an initial phase before performing more complex tasks. They also require constant supervision by a person or team to ensure that elements are labeled and identified correctly. "These models are built using a supervised learning approach, where images of the object classes, along with their labels, must be available before training," explains Paula Ruiz Barroso, a researcher at the University of Malaga and the lead author of the study.

This human supervision demands long periods of time, as well as material and human resources. "The system used in this study allows us to identify the movement of large objects such as airplanes and fire trucks and, at the same time, detect the presence of smaller objects such as luggage carts, working personnel, and maintenance cars and vans, among others, while requiring minimal human supervision compared with supervised approaches," explains Ruiz.
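For readers less familiar with the distinction, the sketch below shows the general idea behind unsupervised, motion-based detection: moving objects are found from the video itself, with no labeled training images. It is only a minimal illustration using OpenCV background subtraction, not the UMA group's actual pipeline, and the video file name and blob-size threshold are hypothetical.

```python
# Minimal illustration (not the researchers' method): unsupervised, motion-based
# detection via background subtraction, which needs no labeled training images.
# "apron_camera.mp4" and the 500-pixel area threshold are hypothetical choices.
import cv2

cap = cv2.VideoCapture("apron_camera.mp4")        # hypothetical airport apron footage
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)                # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
    print(f"frame {frame_idx}: {len(boxes)} moving objects {boxes}")
    frame_idx += 1

cap.release()
```

Anything larger than the area threshold, from a luggage cart to an airplane, is reported as a bounding box without the sketch ever having been shown labeled examples of those objects.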

More processing in less time

With this system, the experts fed the model images recorded on a real aircraft parking apron, i.e., the area where airplanes park to board passengers and load luggage.

They also measured the time the system needs to process the video it analyzes and worked on optimizing the process so that objects are detected more quickly. "We have reduced the times, going from 7.4 seconds per frame, which is a very slow rate, to 0.2 seconds per frame," indicates the study's lead author.
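To put those figures in context, 7.4 seconds per frame is roughly 0.14 frames per second, while 0.2 seconds per frame corresponds to about 5 frames per second. The snippet below is a simple, hedged example of how such a per-frame time can be measured; `detect_objects` and the video file name are placeholders, not the team's detector.

```python
# Illustrative benchmark sketch: average processing time per frame and the
# resulting frames per second. `detect_objects` is a hypothetical stand-in
# for whichever detector is being measured.
import time
import cv2

def detect_objects(frame):
    return []  # placeholder: replace with the model under test

cap = cv2.VideoCapture("apron_camera.mp4")        # hypothetical video file
times = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    start = time.perf_counter()
    detect_objects(frame)
    times.append(time.perf_counter() - start)
cap.release()

if times:
    avg = sum(times) / len(times)
    print(f"average: {avg:.3f} s/frame  (~{1.0 / avg:.1f} FPS)")
```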

Paula Ruiz Barroso, researcher at the University of Malaga and lead author of the study.

During the tests, the researchers used reference models that were later optimized in order to evaluate the impact of that optimization on energy consumption and inference time, i.e., the time the artificial intelligence needs, once trained, to make decisions on new data.

The results of this study, entitled ‘Real-time unsupervised video object detection on the edge’ and published in the journal ‘Future Generation Computer Systems’, demonstrate the accuracy and efficiency of this system, especially with small elements in large areas, such as people.

Low-power device

Until now, due to the high computational complexity of conventional identification models, processing had to be carried out on accelerators mounted on servers, in other words, high-capacity computer processors, to meet performance requirements.

In this work, the experts used a device that saves both computation time and energy in identification tasks. "In addition to reducing working times and energy expenditure, the processor safeguards the privacy of the data it works with, as the data does not need to be sent to the cloud," adds Ruiz.
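As a hedged sketch of what adapting a model for such an edge device can involve, the example below exports a lightweight network to the portable ONNX format so that the device's local runtime can run it without uploading frames anywhere; the chosen network, input size, and file name are assumptions for illustration, not the toolchain described in the paper.

```python
# Hedged sketch, assuming a PyTorch workflow (the paper's toolchain may differ):
# export a lightweight network to ONNX so an edge device's local runtime can
# execute it, keeping the video data on the device itself.
import torch
import torchvision

backbone = torchvision.models.mobilenet_v3_small(weights=None)   # example lightweight network
backbone.eval()

dummy_frame = torch.randn(1, 3, 224, 224)    # one RGB frame at the network's input resolution
torch.onnx.export(
    backbone,
    dummy_frame,
    "edge_model.onnx",                       # hypothetical output file
    opset_version=13,
    input_names=["frame"],
    output_names=["output"],
)
print("exported edge_model.onnx")
```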

Another advantage of this processor is its low energy consumption. "We have managed to reduce the energy it uses for operation. Specifically, we have reduced it from 9.6 joules to 0.4 joules, which is equivalent to a consumption 24 times lower than that of a 10-watt LED bulb," points out the researcher from the University of Malaga.


