Math Forum » Discussions » Software » comp.soft-sys.matlab

Topic: How can I provide real-time video input?
Replies: 2   Last Post: Dec 6, 2012 9:31 AM

Posts: 28
Registered: 1/22/12
How can I provide real-time video input?
Posted: Mar 28, 2012 3:59 AM

%% Tracking Cars Using Gaussian Mixture Models
% This demo illustrates how to detect cars in a video sequence
% using foreground detection based on Gaussian mixture models (GMMs).

% Copyright 2004-2010 The MathWorks, Inc.
% $Revision: $ $Date: 2010/11/22 03:06:06 $

%% Introduction
% This demo illustrates the use of Gaussian mixture models to detect
% the foreground in a video. After foreground detection, the demo processes
% the binary foreground images using blob analysis. Finally,
% bounding boxes are drawn around the detected cars.

%% Initialization
% Use these next sections of code to initialize the required variables and
% System objects.

% Create a System object to read video from a binary file.
hbfr = vision.BinaryFileReader('Filename', 'viptraffic.bin');

% Create a System object to upsample the chrominance components of the
% video.
hcr = vision.ChromaResampler( ...
    'Resampling', '4:2:0 (MPEG1) to 4:4:4', ...
    'InterpolationFilter', 'Pixel replication');

% Create color space converter System objects to convert the image from
% YCbCr to RGB format.
hcsc = vision.ColorSpaceConverter('Conversion', 'YCbCr to RGB');

% Create a System object to detect the foreground using Gaussian mixture models.
hof = vision.ForegroundDetector( ...
    'NumTrainingFrames', 5, ...     % only 5 because of the short video
    'InitialVariance', (30/255)^2); % initial standard deviation of 30/255

% Create a blob analysis System object to segment cars in the video.
hblob = vision.BlobAnalysis( ...
    'CentroidOutputPort', false, ...
    'AreaOutputPort', true, ...
    'BoundingBoxOutputPort', true, ...
    'OutputDataType', 'single', ...
    'NumBlobsOutputPort', false, ...
    'MinimumBlobAreaSource', 'Property', ...
    'MinimumBlobArea', 250, ...
    'MaximumBlobAreaSource', 'Property', ...
    'MaximumBlobArea', 3600, ...
    'FillValues', -1, ...
    'MaximumCount', 80);

% Create and configure two System objects that insert shapes, one for
% drawing the bounding box around the cars and the other for drawing the
% motion vector lines.
hshapeins1 = vision.ShapeInserter( ...
    'BorderColor', 'Custom', ...
    'CustomBorderColor', [0 255 0]);
hshapeins2 = vision.ShapeInserter( ...
    'Shape', 'Lines', ...
    'BorderColor', 'Custom', ...
    'CustomBorderColor', [255 255 0]);

% Create and configure a System object to write the number of cars being
% tracked.
htextins = vision.TextInserter( ...
    'Text', '%4d', ...
    'Location', [0 0], ...
    'Color', [255 255 255], ...
    'FontSize', 12);

% Create System objects to display the results.
sz = get(0,'ScreenSize');
pos = [20 sz(4)-300 200 200];
hVideoOrig = vision.VideoPlayer('Name', 'Original', 'Position', pos);
pos(1) = pos(1)+220; % move the next viewer to the right
hVideoFg = vision.VideoPlayer('Name', 'Foreground', 'Position', pos);
pos(1) = pos(1)+220;
hVideoRes = vision.VideoPlayer('Name', 'Results', 'Position', pos);

line_row = 22; % Define region of interest (ROI)

%% Stream Processing Loop
% Create a processing loop to track the cars in the input video. This
% loop uses the previously instantiated System objects.
% When the BinaryFileReader object detects the end of the input file, the loop
% stops.
while ~isDone(hbfr)
    [y, cb, cr] = step(hbfr);           % Read input video frame
    [cb, cr] = step(hcr, cb, cr);       % Upsample chroma to construct YCbCr 4:4:4
    image = step(hcsc, cat(3,y,cb,cr)); % Convert image from YCbCr to RGB
                                        % for display purposes

    % Remove the effect of sudden intensity changes due to the camera's
    % auto white balancing algorithm.
    y = im2single(y);
    y = y - mean(y(:));

    fg_image = step(hof, y); % Foreground

    % Estimate the area and bounding box of the blobs in the foreground
    % image.
    [area, bbox] = step(hblob, fg_image);
    Idx = bbox(1,:) > line_row; % Select boxes which are in the ROI.

    % Based on dimensions, exclude objects which are not cars. When the
    % ratio between the area of the blob and the area of its bounding box
    % is above 0.4 (40%), classify it as a car.
    ratio = zeros(1, length(Idx));
    ratio(Idx) = single(area(1,Idx))./single(bbox(3,Idx).*bbox(4,Idx));
    ratiob = ratio > 0.4;
    count = int32(sum(ratiob)); % Number of cars
    bbox(:, ~ratiob) = -1;      % Mark non-car boxes with the fill value

    % Draw bounding rectangles around the detected cars.
    y2 = step(hshapeins1, image, bbox);

    % Display the number of cars tracked and a white line showing the ROI.
    y2(22:23,:,:) = 255;  % White line
    y2(1:15,1:30,:) = 0;  % Black background for displaying count
    image_out = step(htextins, y2, count);

    step(hVideoOrig, image);    % Original video
    step(hVideoFg, fg_image);   % Foreground
    step(hVideoRes, image_out); % Bounding boxes around cars
end

% Close the video file
release(hbfr);

%% Summary
% The output video displays the bounding boxes around the cars. It also
% displays the number of cars in the upper left corner of the video.


Hi, this file loads an already-present *.avi or *.bin file to perform the tracking. Can I make it work in real time by providing a live video input?

I even tried the 'videoinput' function, but it is not supported by the Computer Vision System Toolbox objects. Correct me if I'm wrong. Please suggest an alternative to make this run in real time.
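One possible route, sketched below under assumptions: the Image Acquisition Toolbox provides the `imaq.VideoDevice` System object, which can feed live camera frames into the same `vision.*` pipeline in place of `vision.BinaryFileReader`. The adaptor name (`'winvideo'`) and device ID (`1`) here are placeholders that depend on your hardware; run `imaqhwinfo` to list what is actually available on your machine.

```matlab
% Minimal sketch: replace the binary file reader with a live camera source.
% Requires the Image Acquisition Toolbox; adaptor name and device ID are
% assumptions -- check imaqhwinfo for your system's values.
hcam = imaq.VideoDevice('winvideo', 1);

for k = 1:200                       % process a fixed number of frames
    rgbFrame = step(hcam);          % acquire one frame from the camera

    % Most cameras deliver RGB directly, so the chroma resampling and
    % YCbCr-to-RGB conversion steps of the file-based demo are not needed.
    % Feed the intensity image into the foreground detector instead:
    y = rgb2gray(im2single(rgbFrame));
    y = y - mean(y(:));             % same illumination normalization as the demo

    % ... run the vision.ForegroundDetector / vision.BlobAnalysis steps
    % from the demo here, exactly as in the file-based loop ...
end

release(hcam);                      % free the camera when finished
```

Note that a live camera has no end-of-file condition, so the `isDone` test of the file-based loop is replaced by a fixed frame count (or any other stopping condition you choose).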



© The Math Forum at NCTM 1994-2018. All Rights Reserved.