[Target detection] Video moving object tracking based on a MATLAB GUI background difference algorithm [including MATLAB source code 1915]

1, Introduction to vehicle moving target detection based on background difference and inter-frame difference

1 Introduction
Moving object detection is the task of detecting moving objects in an image sequence; the moving objects in the scene are obtained as the output of the detection. Moving target detection plays an important role in medical aided diagnosis, aerospace, precision guidance of military missiles, visual navigation of mobile robots, intelligent transportation and other fields. Because image sequences are disturbed by illumination, weather, noise, shadows and other factors, accurately detecting moving targets is particularly difficult.

At present, the most effective target detection methods are the optical flow method, the background difference method and the inter-frame difference method. The optical flow method has high time complexity, a large amount of computation and poor real-time performance, and it is sensitive to light: changing illumination is incorrectly recognized as optical flow. The background difference method and the inter-frame difference method are widely used because they are simple to implement and have strong real-time performance. The inter-frame difference method depends only on a few adjacent image frames; it has good stability and strong adaptability to environmental changes, and can still achieve good results even when the external environment changes drastically. Its disadvantage is that the choice of the inter-frame interval has a great impact on the target recognition result. The background difference method is relatively demanding on the background image: the background image must not contain moving objects, and it has to be updated in real time to adapt to changes in the environment. However, traditional moving target detection algorithms cannot remove the shadow of a moving vehicle; the shadow is often mistakenly detected as part of the vehicle, which corrupts the vehicle target information. In this paper, a vehicle moving target detection method combining the background difference method and the inter-frame difference method is proposed to obtain vehicle moving targets with shadows removed.

1 Algorithm introduction
1.1 Background difference method
Background subtraction is the simplest method of moving target detection. Its principle is that, because there is an obvious difference between the pixel values of the moving target and the background, the current image and the background image are differenced; the value of each pixel of the difference image is then compared with a preset threshold. If the pixel value is less than the threshold, the pixel is classified as background, otherwise it belongs to the moving target region. The process of the background difference method is as follows:

First, the difference image between the current image and the background image is obtained through equation (1), namely:

Ek(x,y)=|Pk(x,y)−Bk(x,y)|         (1)

Then, the pixel values of the difference image Ek(x,y) are compared with the given threshold according to equation (2), and the difference image is binarized, where T is the given threshold, 1 represents the moving target region and 0 represents the background region:

Rk(x,y) = 1 if Ek(x,y) ≥ T, otherwise 0         (2)
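
As a minimal MATLAB sketch of equations (1) and (2) (the file names and the threshold value below are illustrative assumptions, not taken from the source code listed later):

% Background difference and binarization, a minimal sketch of equations (1) and (2)
% frame_k.png and background.png are hypothetical grayscale images of equal size
Pk = im2double(imread('frame_k.png'));      % current frame Pk(x,y)
Bk = im2double(imread('background.png'));   % background image Bk(x,y)
T  = 0.15;                                  % example threshold
Ek = abs(Pk - Bk);                          % equation (1): difference image
Rk = Ek >= T;                               % equation (2): 1 = moving target, 0 = background
imshow(Rk)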

1.2 Inter-frame difference method
The inter-frame difference method obtains the moving target region by subtracting two consecutive frames. When a moving object appears in an image sequence, there is an obvious change between the two frames. The difference image obtained from the two frames is compared with a set threshold and binarized to decide whether a moving target is present in the image sequence.

First, the difference image Ek(x,y) between the k-th frame Pk(x,y) and the (k−1)-th frame Pk−1(x,y) is obtained by equation (3):

Ek(x,y)=|Pk(x,y)−Pk−1(x,y)|         (3)

Then, the pixel values of the difference image Ek(x,y) are compared with the given threshold according to equation (4), and the difference image is binarized, where T is the given threshold, 1 represents the moving target region and 0 represents the background region:

Dk(x,y) = 1 if Ek(x,y) ≥ T, otherwise 0         (4)
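
A similar sketch of equations (3) and (4), again with hypothetical file names and an illustrative threshold:

% Inter-frame difference and binarization, a minimal sketch of equations (3) and (4)
% frame_k.png and frame_k_minus_1.png are hypothetical adjacent grayscale frames
Pk   = im2double(imread('frame_k.png'));         % frame k
Pkm1 = im2double(imread('frame_k_minus_1.png')); % frame k-1
T  = 0.1;                                        % example threshold
Ek = abs(Pk - Pkm1);                             % equation (3): inter-frame difference
Dk = Ek >= T;                                    % equation (4): binarized moving-target mask
imshow(Dk)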

2 Algorithm improvement
2.1 Background modeling
The histogram-statistics background model is a common background modeling method. It is little affected by noise, so the extracted background image is of high quality, but the amount of computation is large and the time complexity is high. Moreover, distant scenery in the image can become connected with the moving target, so the moving target is incorrectly counted as background, which degrades the quality of the background image.

The idea of the single-Gaussian background model is to assume that the pixel value of each pixel in the image follows a Gaussian distribution, regarding the colour value of a pixel as a random process X. Let I(x,y,t) denote the value of pixel (x,y) at time t; then:

P(I(x,y,t)) = (1 / (√(2π)·σt)) · exp(−(I(x,y,t) − μt)² / (2σt²))         (5)

where μt and σt are the expected value and the standard deviation of the Gaussian distribution of that pixel at time t. The single-Gaussian background model is only suitable for situations where the background is single and unchanging. Based on the defects of the above two background modeling methods and on experimental results, the mean background model is used here to obtain the background. The idea of the algorithm is to treat the moving vehicles as noise and to eliminate this noise by cumulative averaging: the background image is obtained by averaging an image sequence that contains moving targets. In short, all frames of the image sequence are summed pixel by pixel and the average is taken to represent the background. The average is then compared with a threshold to obtain the background model: if the pixel value at a given position deviates from the background model by more than the threshold, that pixel is considered to belong to a vehicle moving target. The formula is as follows:

Bk(x,y) = (1/N) · Σ(i=1..N) imagei(x,y)         (6)

where Bk denotes the background image, N is the number of frames, and imagei(x,y) denotes the value of pixel (x,y) in the i-th frame of the image sequence.
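
Since the mean background model is just a cumulative average, it can be sketched as follows (video.avi and the frame count N are assumptions; the GUI code listed later instead acquires frames from a camera or a user-selected AVI file):

% Mean background model (equation (6)): average N frames so that moving
% vehicles act as noise and are averaged out of the background
v = VideoReader('video.avi');            % hypothetical input video
N = 100;                                 % illustrative number of frames
acc = zeros(v.Height, v.Width);
for i = 1:N
    frame = im2double(rgb2gray(readFrame(v)));   % assumes RGB frames
    acc = acc + frame;                           % accumulate pixel values
end
B = acc / N;                             % mean background image Bk(x,y)
imshow(B)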

2.2 Background difference binarization
After the background model has been obtained with the mean background method, the background difference of equation (7) is computed between the current frame Pk(i,j) and the background image Bk(i,j). The pixel grey values are then compared with the threshold through equation (8), and the vehicle moving target is obtained by binarization. The expressions are:

Dk(i,j) = |Pk(i,j) − Bk(i,j)|         (7)

Rk(i,j) = 1 if Dk(i,j) ≥ T, otherwise 0         (8)

The choice of the threshold T affects the sensitivity of the detection to vehicle moving targets.

2.3 Image edge detection
Common edge detection operators include the Prewitt operator, the Roberts operator, the LoG operator and the Canny operator. Among these, the Roberts operator is the simplest: it uses a 2 × 2 local difference operator to find edges, using the difference between two adjacent pixels in the diagonal direction to approximate the gradient magnitude. The Roberts operator detects vertical edges better than oblique edges and has high localization accuracy. Experiments show that this operator is effective for segmenting images with sharp edges and little noise, and its edge localization is accurate; compared with the other 3 × 3 operators it gives relatively thin edges without image post-processing.

For each pixel of the image function f(x,y), the gradient value is obtained through formula (9):

G(x,y) = √((Δxf)² + (Δyf)²)         (9)

where Δxf and Δyf are:

Δxf = f(x,y) − f(x+1,y+1)         (10)

Δyf = f(x+1,y) − f(x,y+1)         (11)
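
For reference, the Roberts detection described above can be done either with MATLAB's built-in edge function or by writing out the 2 × 2 diagonal differences of equations (9) to (11) directly; the file name and the threshold 0.2 are illustrative assumptions:

% Roberts edge detection of a grayscale image f, following equations (9)-(11)
f  = im2double(imread('frame_gray.png'));       % hypothetical grayscale frame
E1 = edge(f, 'roberts');                        % built-in Roberts detector

dxf = f(1:end-1, 1:end-1) - f(2:end, 2:end);    % equation (10): f(x,y) - f(x+1,y+1)
dyf = f(2:end, 1:end-1)   - f(1:end-1, 2:end);  % equation (11): f(x+1,y) - f(x,y+1)
G   = sqrt(dxf.^2 + dyf.^2);                    % equation (9): gradient magnitude
E2  = G >= 0.2;                                 % example threshold
imshowpair(E1, E2, 'montage')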

2.4 Edge image difference
Two edge images, EG1(x,y) and EG2(x,y), are obtained by edge detection of the background-difference binarized image and of the background-difference image respectively. The difference between the two edge images, EG(x,y), is then computed:

EG(x,y)=|EG2(x,y)−EG1(x,y)|         (12)
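
A short sketch of equation (12); the file names stand in for the two intermediate images and are not part of the original code:

% Difference of the two edge images, equation (12)
% diff_binary.png and diff_gray.png stand in for the two intermediate
% images named above; both are assumed to be grayscale
EG1 = edge(im2double(imread('diff_binary.png')), 'roberts');
EG2 = edge(im2double(imread('diff_gray.png')),  'roberts');
EG  = abs(double(EG2) - double(EG1));   % equation (12); xor(EG2, EG1) gives the same result for logical maps
imshow(EG)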

2.5 Image post-processing
Because of noise and edge discontinuities, the vehicle moving target image obtained so far is not ideal, so the difference image needs post-processing. A median filter is used to remove part of the interference, and erosion and dilation are used to remove some of the redundant edge lines, so that the moving target region becomes closed and complete.
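
The post-processing can be sketched with standard Image Processing Toolbox functions; the filter window and structuring-element sizes below are illustrative choices rather than values from the source:

% Post-processing of the edge-difference image EG from equation (12):
% median filtering, then erosion and dilation, so the target region is closed
EG  = imread('edge_difference.png') > 0;             % hypothetical binary input
EGf = medfilt2(double(EG), [3 3]) > 0.5;             % median filter suppresses isolated noise
se  = strel('square', 3);                            % example structuring element
EGo = imdilate(imerode(EGf, se), se);                % erosion then dilation removes thin spurious edges
EGc = imfill(imclose(EGo, strel('square', 7)), 'holes');  % close and fill the target region
imshow(EGc)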

2, Partial source code

function varargout = Main_object_tracking(varargin)
% _____________________________________________________
% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
    'gui_Singleton',  gui_Singleton, ...
    'gui_OpeningFcn', @Main_object_tracking_OpeningFcn, ...
    'gui_OutputFcn',  @Main_object_tracking_OutputFcn, ...
    'gui_LayoutFcn',  [] , ...
    'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before Main_object_tracking is made visible.
function Main_object_tracking_OpeningFcn(hObject, eventdata, handles, varargin)
movegui(hObject,'center')
imaqreset
% ID of video source
handles.fuente=2;
%Disable "Start" and "Stop" buttons
set(handles.inicio,'Enable','off');
set(handles.parar,'Enable','off');
set(hObject,'UserData',0)
set(handles.axes1,'XTickLabel',[],'YTickLabel',[])
% Choose default command line output for Main_object_tracking
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);

% --- Outputs from this function are returned to the command line.
function varargout = Main_object_tracking_OutputFcn(hObject, eventdata, handles)
% Get default command line output from handles structure
varargout{1} = handles.output;

% --- FUNCTION TO GET BACKGROUND
function cap_fondo_Callback(hObject, eventdata, handles)
% Reset imaq device
imaqreset
set(hObject,'UserData',0) %User data 0 (1 stop capture)
% Enable "Start" and "Stop" buttons
set(handles.inicio,'Enable','off');
set(handles.parar,'Enable','off');
% Disable current button
set(hObject,'Enable','off');
% Get default source
sel_fuente=handles.fuente;
switch sel_fuente
    % _________________________________________________________________
    case 1 %WEB CAM        
        % Open GUI to select the camera to use
        sel_camera
        %
        uiwait
        % Bring the camera features
        % id= Camera ID
        % es_web_ext= indicator if laptop or external camera
        global id es_web_ext
        % Determine format depending on the type of camera to use
        if es_web_ext==0
            formato='YUY2_176x144';
        else
            formato='RGB24_320x240';
        end
        try
            % Create video object
            vid = videoinput('winvideo',id,formato);
            % Update handles
            guidata(hObject, handles);            
        catch
            % Message on error
            msgbox('Check the connection of the camera','Camera')
            % Remove axis labels
            set(handles.axes1,'XTick',[ ],'YTick',[ ])
        end
        % Specify how often to acquire frame from video stream
        vid.FrameGrabInterval = 1;
        set(vid,'TriggerRepeat',Inf);
        % Start capture
        % _______Get Background_________
        vid.FramesPerTrigger=50;
        start(vid);
        data = getdata(vid,50);
        if es_web_ext==0
            fondo=double(ycbcr2rgb(data(:,:,:,50)));
        else
            fondo=double(data(:,:,:,50));
        end
        % Set last image as background
        % Show background
        imshow(uint8(fondo))
        % Reset video object
        stop(vid);
        clear vid
        imaqreset
    case 2 % VIDEO AVI
        [nombre, ruta]=uigetfile('*.avi','SELECT VIDEO AVI');
        if nombre == 0 %If press cancel button, return
            set(hObject,'Enable','on');
            set(handles.inicio,'Enable','on');
            set(handles.parar,'Enable','on');
            return
        end     

3, Running results

4, MATLAB version and references

1 MATLAB version
2014a

2 References
[1] Cai Limei. MATLAB Image Processing: Theory, Algorithms and Example Analysis [M]. Tsinghua University Press, 2020.
[2] Yang Dan, Zhao Haibin, Long Zhe. MATLAB Image Processing Examples in Detail [M]. Tsinghua University Press, 2013.
[3] Zhou Pin. MATLAB Image Processing and Graphical User Interface Design [M]. Tsinghua University Press, 2013.
[4] Liu Chenglong. Proficient in MATLAB Image Processing [M]. Tsinghua University Press, 2015.
[5] Luo Min, Liu Dongbo, Wen Haoxuan, Chen Xinhai, Song Dan. Vehicle moving target detection based on background difference method and inter-frame difference method [J]. Journal of Hunan Institute of Engineering (Natural Science Edition), 2019, 29(04).

3 Remarks
The introduction above is excerpted from material on the Internet and is provided for reference only. In case of infringement, please contact us and it will be removed.

