NoobaVSS: Video Processing Framework to Enhance Processing and Automated Manipulation of Surveillance Videos
Date
2013
Publisher
Uva Wellassa University of Sri Lanka
Abstract
Surveillance cameras are becoming artificial eyes capable of monitoring behaviors, activities, and
other visual information with the purpose of influencing, managing, directing, or protecting.
However, they still depend on human assistance to interpret any anomalies in the scenes they
capture. Next-generation smart surveillance systems are expected to detect
anomalies by themselves, releasing human operators from constant manual observation of
video feeds. In the recent past, Sri Lanka has shown a rapid increase in the use of CCTV
surveillance systems in different types of environments, including the commercial, non-commercial,
and government sectors. Most of these, however, are used only for post-incident investigation,
mainly due to the high effort and cost required for real-time analysis. The
unavailability of video analysis platforms in the public domain and the absence of open-source
video analysis software have deterred their use for pre-incident investigation and real-time
analysis. Our research effort is to develop a software framework that will act as a testing
framework and software foundation for automated surveillance video analysis, with the aim of
improving the quality and level of security provided by video surveillance systems. A sample
scenario for a banking environment is studied extensively to guide the development process.
Methodology
The framework is developed as a component-based model. A set of individual plugins has been
developed separately and connected to the main engine, where each plugin is
responsible for a separate feature-extraction task. A plugin is basically capable of processing a
given sequence of image frames from a video and extracting designated features (e.g., the number of
faces in the scene, or the speed of an object in the scene). To identify the key features to be
extracted from the video imagery, a scenario analysis is conducted over the capture domain (in
our extensive study, a banking environment). Scenario analysis is useful in identifying which
features need to be extracted from the input video and which do not. Since the
approach to writing scenarios is not restricted to any formal method or constrained by any event
sequence, more free-flowing and varied scenarios are captured. These scenarios ultimately
make it easier to identify the nature of the environment and give more insight into identifying the
computer vision techniques that need to be used.
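The per-feature plugin structure described above could be sketched as follows. This is a minimal, hypothetical interface, not the framework's actual API: the `Frame`, `FeaturePlugin`, and `FaceCountPlugin` names are illustrative, and the face detector is stubbed rather than running real computer vision code.

```cpp
#include <map>
#include <string>

// Illustrative stand-in for a video frame; a real plugin would wrap image data.
struct Frame {
    int index;  // position of this frame in the video sequence
};

// Hypothetical abstract plugin: one plugin per feature-extraction task.
class FeaturePlugin {
public:
    virtual ~FeaturePlugin() = default;
    // Process one frame and return the extracted features keyed by name.
    virtual std::map<std::string, double> process(const Frame& frame) = 0;
};

// Example plugin: reports the number of faces in the scene.
// The value is stubbed here; a real detector would run a CV algorithm.
class FaceCountPlugin : public FeaturePlugin {
public:
    std::map<std::string, double> process(const Frame&) override {
        return {{"face_count", 2.0}};  // stubbed detection result
    }
};
```

The engine would hold a collection of `FeaturePlugin*` and feed each incoming frame to every registered plugin, collecting the named features they emit.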
Next, to extract each of these features from the video, a separate plugin has been developed.
The knowledge representation platform has been developed using the Qt framework, which
offers the capability of loosely coupling functions through its signal/slot mechanism.
Each processing plugin essentially has the same structure: it may or may not subscribe to the
outputs of other plugins, it processes its inputs within the given time frame,
and it emits its output, if any. All of these plugins are feature detectors that take input from a
surveillance video feed. A global timing signal is used to keep track of time, and an abstract
processing node facilitates the signal/slot mechanism. The abstract node declares an abstract
process method, so that each processing module inheriting from it can implement its own
functionality for that method. Nodes labelled D, however, do not subscribe to any other nodes. They
can be feature detectors that take input directly from a video feed. In the testing environment,
such a node can read from a file and emit its content as an event for the given time frame.
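The loose coupling described above can be sketched with plain callbacks standing in for Qt's signal/slot mechanism. This is an assumption-laden sketch, not the framework's implementation: `ProcessingNode`, `DetectorNode`, and the event format are all illustrative names, and the detector emits a canned event rather than analyzing a real feed.

```cpp
#include <functional>
#include <string>
#include <vector>

// Sketch of the abstract processing node: subscribers register a slot,
// and emitOutput fans each event out to them (mimicking Qt signals/slots).
class ProcessingNode {
public:
    virtual ~ProcessingNode() = default;

    // Another node (or the engine) subscribes to this node's output.
    void subscribe(std::function<void(const std::string&)> slot) {
        slots_.push_back(std::move(slot));
    }

    // Each concrete node implements its own processing for the time frame;
    // a global timing signal would invoke this once per tick.
    virtual void process(int timeFrame) = 0;

protected:
    // Emit an output event to all subscribed slots.
    void emitOutput(const std::string& event) {
        for (auto& slot : slots_) slot(event);
    }

private:
    std::vector<std::function<void(const std::string&)>> slots_;
};

// A "D" node: subscribes to no other nodes. In a test environment it could
// replay recorded events; here it emits a canned event for the time frame.
class DetectorNode : public ProcessingNode {
public:
    void process(int timeFrame) override {
        emitOutput("motion@t=" + std::to_string(timeFrame));
    }
};
```

Wiring nodes this way keeps producers unaware of their consumers: a downstream plugin only calls `subscribe` on the nodes whose features it needs, which matches the "may or may not subscribe" structure described above.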
Keywords
Science and Technology, Technology, Automation, Automated, Computer Science