COMPLICATED SHAPES ESTIMATION METHOD FOR OBJECTS ANALYSIS IN VIDEO SURVEILLANCE SYSTEMS

Background. The evaluation of video image objects is a relatively difficult task. While solving the task of the geometric representation of a surveillance object, the following additional factors should be considered: possible overlapping of objects, similarity of complex elements, similarity of object elements and background, etc. Objective. The development of a method of shape evaluation for complicated objects, to be applied in video surveillance systems for estimation of the dynamics of an object's movement, examination of the object's behavior for a probable execution of unauthorized actions, and other tasks. Methods. The background subtraction procedure is used to identify the raster shape of the surveillance object. To detect the vector shape of the object contours, the DEI approach is applied. Sorting procedures are used to identify the reference contour points and to form smooth curves. Results. The proposed method includes the following stages: color space conversion and normalization, object shape detection, contours detection and analysis, sorting of vector data, forming of a smooth contour curve, and object area computing. When the number of contour points is reduced by a factor of 1.5, the average error of the proposed method compared with the DEI approach is 0.75 % for the accuracy rate, 8.43 % for the performance rate, and 3.09 % for the resource consumption rate. Conclusions. The proposed method allows defining an array of vector contour points which represents an "approximate" surveillance object of a complicated shape, and it minimizes the data volume to be used in further analysis of a motion trajectory and other similar tasks without decreasing the accuracy. In addition, this method enables describing the surveillance object by an equal quantity of contour points, which in turn can simplify the task of surveillance object classification.


Introduction
Video surveillance is an important task which can be focused on visual control of road traffic [1-5], monitoring of hazardous situations [6, 7], monitoring in different areas of manufacturing [8, 9], health care [10], or other applications. The essential part of the video surveillance procedure is the detection of the presence of an object under observation. This detection is based on a certain specific feature of the object such as color, shape, behavior, etc. In our research we consider the object shape as the main feature.
The object shape is determined by a contour which outlines the object as accurately as possible. This contour can be represented as either a raster or a vector curve. In the second case, the shape analysis is more flexible because we can scale the curve and change the number of points defining the contour. The latter is very important both for maximizing the analysis accuracy and for minimizing the consumption of computing resources used for further data processing: if the data set about the object shape is insufficient, its further analysis can lead to a wrong result; at the same time, if the data about the object shape are redundant, the need for increased computing resources can appear. Therefore, it is desirable to find a relatively optimal amount of data which enables an object shape analysis with minimal requirements to the computing resources.
The proposed method enables estimation of an object shape with a desirable level of accuracy and, thus, it can help to obtain an amount of data about the object shape which can be considered as optimal for a certain task.

Problem Statement
The research objective is the development of a method of shape evaluation for a complicated object. The method is supposed to be used in video surveillance systems as a preprocessing step for further analysis of the object's movement dynamics, examination of the object's behavior for a probable execution of unauthorized actions, and other similar tasks.

Related Work
The literature review shows that there is a relatively large number of studies related to tasks similar to the one we solve in our investigation. However, the approaches used for solving these related tasks are quite different.
Thus, in [11], Moghaddam and Enright compared three interpolants defined for three-dimensional elliptic Partial Differential Equations (PDEs) over an unstructured mesh. The authors assert that the obtained results showed that the pure tri-cubic interpolant generates more accurate results than tri-quadratic interpolants.
In [12], the authors proposed three fast contouring algorithms for visualizing the solution of PDEs based on the Pure Cubic Interpolant. These algorithms do not need a fine structured approximation and the authors assert that their algorithms work efficiently with the original scattered data. The basic idea of the proposed approach is to identify the intersection points between contour curves and the sides of each triangle and then draw smooth contour curves connecting these points.
Enright developed in [13] the Differential Equation Interpolant (DEI) approach, which efficiently approximates the values of the solution of a PDE at off-mesh points. This approach allows reaching high precision at the resulting off-mesh points.
In [14], Barequet and Sharir presented a technique for piecewise-linear surface reconstruction from a series of parallel polygonal cross-sections. The proposed algorithm uses a partial curve matching technique.
Lee, Wolberg, and Shin presented in [15] a fast algorithm for scattered data interpolation and approximation. The authors assert that their algorithm makes use of a coarse-to-fine hierarchy of control lattices to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function.
These and other similar methods enable producing an accurate contour of an object. However, in our opinion, they can be complemented by additional procedures which enable minimization of the amount of data about an object shape without an appreciable change in accuracy; in this way, we can minimize the requirements to the computing resources of a video surveillance system.

Method Description
The proposed method of video image analysis includes the following steps:
A. Color space conversion and normalization.
B. Object shape detection.
C. Contours detection and analysis.
D. Sorting of vector data.
E. Forming of smooth contour curve.
F. Object area computing.

A. Color Space Conversion and Normalization
Video data can be represented as a multi-frame array. Array elements are images, which can be binary, gray-scale, or color.
Binary and gray-scale images of size M×N can be described by a two-dimensional matrix; a color image is described by three such matrices, whose values (r, g, b) in the color space of the RGB model identify the pixel color. However, the most important information for the object shape estimation is luminance, and we need to separate it from the other color data. This can be fulfilled by converting the image from the RGB model into HSV, YCbCr, YIQ, XYZ, Lab, or another similar model. In our research, we use conversion into the color model YCbCr [16], which is defined by the standard formula:

Y = 0.299 R + 0.587 G + 0.114 B,
Cb = 0.564 (B − Y),
Cr = 0.713 (R − Y).
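The conversion above follows the standard ITU-R BT.601 relations. A minimal sketch of the luminance separation is given below; the function name `rgb_to_ycbcr` and the float [0, 1] pixel range are illustrative assumptions, not part of the original method:

```python
import numpy as np

def rgb_to_ycbcr(image):
    """Convert an RGB image (float values in [0, 1]) to YCbCr (ITU-R BT.601).

    Returns an array of the same shape; the Y plane carries the luminance
    used for shape estimation, Cb/Cr carry the chrominance.
    """
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)          # blue-difference chroma
    cr = 0.713 * (r - y)          # red-difference chroma
    return np.stack([y, cb, cr], axis=-1)

# A pure-white pixel has maximal luminance and (near-)zero chrominance.
white = np.ones((1, 1, 3))
y, cb, cr = rgb_to_ycbcr(white)[0, 0]
```

Only the Y component is passed to the subsequent contour detection stages.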

B. Object Shape Detection
Initial data for this stage are two color images: the background image I_b and the image I_t to be tested for the presence of a moving object. The background image is the image of the ordinary view of the scene under surveillance; it does not include any moving objects. The color vectors I_b(i, j, k) and I_t(i, j, k), k = 1, 2, 3, represent the pixel intensity values of these images, where (i, j) are the pixel coordinates, 1 ≤ i ≤ M, 1 ≤ j ≤ N, for images of size M×N.
In order to calculate the logical negation operation for a color image, we use an additional matrix which is considered as complementary to the color image to be analyzed. In this research we use the matrix V, which corresponds to the white color:

V(i, j, k) = 1, k = 1, 2, 3.

Alternatively, the element values of the matrix V can belong to the range [0.8083; 1]. The logical negation operation is implemented in the following way:

I′_t(i, j, k) = V(i, j, k) − I_t(i, j, k),

where I′_t(i, j, k) are the intensity values of the image pixels resulting from the logical negation operation, k = 1, 2, 3. The moving object that has appeared against the background is detected by thresholding the difference between the test and background images; in the case of color input images, the same operation is applied to every color component.
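The shape-detection stage above can be sketched as follows, assuming a simple per-channel absolute difference with a fixed threshold; the function names (`detect_object`, `negate`) and the threshold value 0.1 are illustrative assumptions rather than the authors' exact formulas:

```python
import numpy as np

def detect_object(background, test, threshold=0.1):
    """Return a binary mask of pixels where the test frame differs
    from the background frame (images as float arrays in [0, 1]).

    The per-channel absolute difference is reduced over the color
    axis, then thresholded.
    """
    diff = np.abs(test.astype(float) - background.astype(float))
    if diff.ndim == 3:                 # color input: combine channels
        diff = diff.max(axis=-1)
    return diff > threshold

def negate(image):
    """Logical negation of a color image via the complementary
    all-ones matrix V (white color), as described above."""
    v = np.ones_like(image, dtype=float)
    return v - image

background = np.zeros((4, 4, 3))
test = background.copy()
test[1:3, 1:3, :] = 0.9                # a 2x2 "object" appears
mask = detect_object(background, test)
```

The resulting raster mask is the input for the contour detection stage.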

C. Contours Detection and Analysis
The input data for the contour detection procedure are the luminance data, which correspond to the Y-component of the color vector.
We use the DEI approach [13, 16] to determine the vector shape of the image contour. As a result, a matrix of the image contours is obtained. Let us consider the q-contour, which outlines the area that the moving object occupies in the two-dimensional space, as the main contour; R_q denotes the quantity of points of the main contour.
For the implementation of the sorting and approximation, it is necessary to normalize the obtained data by performing the procedure of redistribution of the contour points relative to the coordinate origin: the point of the Contour matrix with the minimum distance to the origin is chosen as the starting one. The next step is the redistribution of the Contour matrix values (Fig. 1), which can be fulfilled either clockwise (Fig. 1, a) or counterclockwise (Fig. 1, b). The contour image obtained as a result of the actions described above can include excess contour points, which are visualized as loops, wave-like irregularities, etc., and can be considered as noise. To smooth the obtained contour, median filtration can be used [17-21].
Since vector data processing, including image processing, significantly improves the speed index compared to raster data processing, an alternative approach to contour smoothing is applied in the proposed method. This approach is a post-processing procedure for the detected contour points.
Let the image contour be described by the matrix of point coordinates Contour = {(x_r, y_r)}, r = 1, …, R_q.
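The normalization step above (choosing the starting point nearest to the coordinate origin and fixing the traversal direction) can be sketched as follows; `normalize_contour` and the toy square are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def normalize_contour(points, clockwise=True):
    """Redistribute contour points so that the point nearest to the
    coordinate origin becomes the first one (cf. Fig. 1).

    `points` is an (R, 2) array of (x, y) contour coordinates assumed
    to be stored in traversal order.
    """
    start = np.argmin(np.hypot(points[:, 0], points[:, 1]))
    rolled = np.roll(points, -start, axis=0)
    if not clockwise:                  # reverse the traversal direction
        rolled = np.concatenate([rolled[:1], rolled[1:][::-1]])
    return rolled

square = np.array([[2, 2], [0, 0], [2, 0], [0, 2]], dtype=float)
norm = normalize_contour(square)       # the origin point comes first
```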

D. Sorting of Vector Data
The data sorting procedure allows both detecting the "reference" contour points and removing the uninformative points, thus solving the task of vector data compression.
The initial stage of sorting is to determine the quantity of lines or segments that will collectively shape the object contour. The next stage is to form a template of the segment, by the value range of which the "reference" contour points are defined.
A contour of the surveillance object is divided into segments l_p, p = 1, …, P, where P is a parameter whose value indicates the quantity of segments. The range of values of the template segment is limited by the size of the input images and is determined by the vector T = (0, D, 2D, …, N), where D is a refinement parameter, 0 < D < N, and N is the size of the vector image by the x coordinates or of the raster image by the j coordinates.
The procedure of sorting is implemented for h = 1, …, int(N / D), where int is the operation of integer-valued rounding.
The x coordinates of the "reference" vector points of the set of L segments are found by determining, for each element of T, the minimum distance to the elements of the l_px vectors. The y coordinate of each "reference" point remains unchanged and is taken from the corresponding l_py set. Thus, the set of segments which connects the "reference" points of the initial Contour is formed.
A distinctive feature of this data sorting algorithm is that it gives an opportunity to describe any surveillance object and any of its contours by an equal quantity of points, determined only by the parameters P and D. Representation of all surveillance objects by an equal quantity of contour points allows solving the tracing task effectively, since further processing and analysis are performed on similarly represented objects. It is worth mentioning that after a point falls into the L′ set, it is removed from the L set.
The quantity of segments and the value of the refinement parameter directly influence the accuracy of reproduction of the initial contour. The greater the quantity of segments and/or the smaller the refinement parameter, the more accurate the results of sorting. At the same time, both parameters significantly influence the performance rate; it is expedient to disregard this indicator only if the surveillance object has a complex shape. As a result, additional procedures, which allow determining the value ranges of these parameters, appear and complicate the proposed method.
An alternative approach for solving this task is to apply the data sorting algorithm on the assumption that the points which fall into the L′ set are not removed from the L set. It enables reproducing the initial contour of the object in a maximally accurate way (depending on the parameter D). However, a negative aspect, the presence of a certain quantity of duplicated contour points, appears in this case. Another alternative approach is to set a more accurate range of the T template vector by applying the sorting procedure twice, but it decreases the performance rate of the proposed method. Therefore, it is expedient to use the first approach of data sorting, since the result of its application is satisfactory for a wide range of input data.
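The template-based selection of "reference" points can be sketched for a single segment as follows; the helper `reference_points` and the toy segment are assumptions, and in the method this selection is applied to every segment l_p of the contour:

```python
import numpy as np

def reference_points(segment, n, d):
    """Select "reference" points of one contour segment: for every
    template value T_h (a multiple of the refinement step d) the
    segment point whose x coordinate is nearest to T_h is taken;
    its y coordinate is kept unchanged.

    `segment` is an (m, 2) array of (x, y) points, `n` the image size
    along x, `d` the refinement parameter (0 < d < n).
    """
    template = np.arange(0, n + 1, d)           # the T vector, step d
    refs = []
    for t in template:
        idx = np.argmin(np.abs(segment[:, 0] - t))
        refs.append(segment[idx])
    return np.array(refs)

seg = np.array([[0.0, 1.0], [2.2, 1.5], [4.1, 2.0], [6.0, 2.5]])
refs = reference_points(seg, n=6, d=2)          # template: 0, 2, 4, 6
```

Because the template length depends only on n and d, every contour yields the same quantity of reference points, which is the property the sorting stage relies on; duplicated points are possible, as discussed above.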

E. Forming of Smooth Contour Curve
For further processing and analysis of the surveillance object, it is necessary to represent the L′ set by a certain figure which can be described by lines that consistently connect a strictly ordered array of points. For this, it is necessary to apply the sorting procedure to the l′_p segments by the x and y coordinates. The location of a point on the l′_p segments is determined by the rule of the minimum distance, where (x_i, y_i) are the coordinates of the "reference" contour points.
In order to avoid an additional processing procedure, the first points of the l_p segments are set as the initial points. These points must be removed once the array L of ordered points is formed.
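The minimum-distance ordering rule above can be sketched as a greedy nearest-neighbor pass; `order_points` is an illustrative name, and in the method the input would be the "reference" points produced by the sorting stage:

```python
import numpy as np

def order_points(points):
    """Order scattered "reference" points into a polyline by the
    minimum-distance rule: starting from the first point, repeatedly
    append the nearest not-yet-used point.
    """
    remaining = list(range(1, len(points)))
    order = [0]
    while remaining:
        last = points[order[-1]]
        dists = [np.hypot(*(points[i] - last)) for i in remaining]
        nearest = remaining[int(np.argmin(dists))]
        order.append(nearest)
        remaining.remove(nearest)
    return points[order]

pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
ordered = order_points(pts)   # x becomes monotone: 0, 1, 2, 3
```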

F. Object Area Computing
In order to fulfill the preliminary classification of the moving object, we calculate its area using the formula proposed in [16]:

S = (1/2) |Σ_{r=1}^{R} (x_r y_{r+1} − x_{r+1} y_r)|, (x_{R+1}, y_{R+1}) = (x_1, y_1),

where R is the total quantity of "reference" contour points.
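Assuming the area computation is the standard Gauss (shoelace) formula over the ordered "reference" points, a minimal sketch is:

```python
import numpy as np

def polygon_area(points):
    """Area of the polygon spanned by ordered contour points,
    computed with the Gauss (shoelace) formula."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

unit_square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
area = polygon_area(unit_square)   # 1.0
```

`np.roll` closes the contour implicitly, so the last point is paired with the first.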

Results
In our experiments, we used MP4 and 3GP videos obtained from CCTV cameras installed on roads and in shopping centers, supermarkets, and offices. Each video file was converted into a multi-frame array of JPEG images. The analysis was performed for each surveillance object detected by using a pair of frames: the background frame, where the object is absent, and every subsequent test frame, where the object can be present. The experiment included approximately 200 frames.
The task of contour smoothing can be solved in two ways. The first way is to apply median filtering as preprocessing for the contour detection procedure. The second way is to apply vector filtering as post-processing of the contour detection procedure (Fig. 2). Since the proposed method uses vector data processing, the second way was used in the experiments.
The initial quantity of contour points in one of the frames of the test videos is 2093. An example of contour detection by the proposed method is shown in Fig. 3. The final quantity of contour points is 1405; thus, the number of points is reduced by a factor of approximately 1.5.
Due to the accuracy parameter used in the proposed method, the quantity of contour points can be reduced several times while still correctly representing the surveillance object shape (Fig. 4).
The criteria for estimation of the proposed method include accuracy, performance rate, and computing resource consumption (Fig. 5). The accuracy rate is defined by the error value of the surveillance object area in comparison with the DEI approach. Delaunay triangulation [22] and the determination of the total area of the obtained triangles are used as additional load procedures in the estimation of the performance rate and computing resource consumption, in order to make the gain in performance more evident.
As alternative approaches, two algorithms of contour detection of the surveillance object (Vectoring of Raster Method 1, Vectoring of Raster Method 2) are applied for comparison with the proposed method. The first technique includes the following processing stages: detection of a surveillance object, extraction of the Y-component data, threshold processing, contour detection by the DEI approach, detection of a main contour, and redistribution of contour points relative to the coordinate origin. The results in Fig. 5 represent the mean values of the chosen criteria for certain frames of a test video. The table demonstrates the good results of the proposed method in comparison with the DEI approach for accuracy, performance rate, and computing resource consumption. The contour point quantity was reduced more than 1.5 times.

Conclusions
The proposed method of complicated shapes estimation for objects analysis in a video surveillance system consists of the following stages: color space conversion and normalization, object shape detection, contours detection and analysis, sorting of vector data, forming of smooth contour curve, object area computing.
When the contour points number is reduced by a factor of 1.5, the average error of the proposed method compared with the DEI approach is 0.75 % for the accuracy rate, 8.43 % for the performance rate, and 3.09 % for the resource consumption rate. When the contour points number is reduced by more than 2.5 times, the average error of the proposed method decreases. The proposed method allows defining an array of vector contour points which represents an "approximate" surveillance object of complicated shape, and it decreases the data volume to be used in further analysis of a motion trajectory. In addition, this method enables describing the surveillance object by an equal quantity of contour points, which in turn can simplify the task of surveillance object classification.
Further research concerns the transformation of a surveillance object subspace for the analysis of a motion trajectory, including examination of the object's behavior for a probable execution of unauthorized actions.