
Human detection on a mobile camera using HOG and tracking with a Kalman filter

This is part I of the work I did for my master thesis (part II). In this work, I first computed HOG (histogram of oriented gradients) features on my images and then fed the computed histograms to a linear SVM (support vector machine). The SVM was trained on human and non-human images. The output of the classifier was a bounding box wherever there was a human in the image.

Feature extraction and object detection with HOG: tiling the detection window in an overlapping grid of HOG descriptors and then using an SVM-based window classifier gives the human detection chain. Image acquired from [1].

Overview of HOG: the detector window is tiled with a grid of overlapping blocks, and each block contains a grid of spatial cells. For each cell, the weighted votes of the image gradients are accumulated in an orientation histogram. These histograms are locally normalized and collected into one large feature vector. Images acquired from [2].
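The per-cell voting step can be sketched in a few lines. This is a minimal pure-Python illustration, assuming an 8×8 grayscale cell and 9 unsigned orientation bins as in the original HOG formulation; block normalization and the full descriptor assembly are omitted:

```python
import math

def cell_histogram(cell, n_bins=9):
    """Accumulate gradient-magnitude votes into an unsigned
    orientation histogram (0-180 degrees) for one HOG cell."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    bin_width = 180.0 / n_bins
    for y in range(1, h - 1):          # skip the border pixels
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # horizontal gradient
            gy = cell[y + 1][x] - cell[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            # linear interpolation of the vote between the two nearest bins
            pos = ang / bin_width - 0.5
            lo = math.floor(pos)
            frac = pos - lo
            hist[lo % n_bins] += mag * (1.0 - frac)
            hist[(lo + 1) % n_bins] += mag * frac
    return hist
```

In the full detector, these per-cell histograms are normalized per block and concatenated into the feature vector that goes to the SVM.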

Next, I used a Kalman filter to track the detected humans. To check the accuracy of my work, I created a ground truth based on a color tracker; you can read about and download a similar one on my website here.
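As a minimal illustration of the tracking step, here is one predict/update cycle of a constant-velocity Kalman filter for a single coordinate (pure Python; the thesis tracks 2-D bounding-box positions, and the noise parameters q and r below are arbitrary placeholder values, not the ones used in the thesis):

```python
def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter
    for one coordinate; x = [pos, vel], P = 2x2 covariance,
    z = scalar position measurement."""
    # predict: x' = F x, with F = [[1, dt], [0, 1]]
    px, pv = x[0] + dt * x[1], x[1]
    # P' = F P F^T + Q  (Q = q * I as a simple process-noise model)
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    # update with the position measurement (H = [1, 0])
    y = z - px                  # innovation
    s = p00 + r                 # innovation variance
    k0, k1 = p00 / s, p10 / s   # Kalman gain
    x_new = [px + k0 * y, pv + k1 * y]
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, P_new
```

Fed with the HOG detections as measurements, the filter smooths the track and keeps predicting the position when a detection is missed.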


The bounding box shows the Kalman filter prediction, the numbers 1 and 2 indicate the humans detected by HOG, and the letters R and Y mark the player locations given by the color tracker.

All text and images in this article are taken from my master thesis or the respective publications; the full document can be downloaded here.

[1] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 886–893, June 2005. doi: 10.1109/CVPR.2005.177.

[2] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection, 2005.

Colour-Based Object Tracking with OpenCV

In many applications you need to track an object, and one simple method is colour-based tracking. I have developed a simple tool for that with OpenCV. All you have to do is adjust the high and low HSV sliders in the left window until the image is filtered so that only your desired object remains; here I am tracking a green pen, a blue water container, and a red bottle top. The code is pretty easy and straightforward, but I found the different pieces of it scattered all over the internet and changed and adapted them so they work together.
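The core of the colour filter can be sketched with the standard library alone. This is an illustration using Python's colorsys with H, S, and V normalized to [0, 1]; the actual OpenCV tool would do the same thing with cvtColor and inRange, where H ranges over 0–179:

```python
import colorsys

def hsv_mask(pixels, lo, hi):
    """Return a binary mask of the RGB pixels whose HSV value falls
    between lo and hi (each an (h, s, v) tuple with components in [0, 1])."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        inside = all(l <= c <= u for c, l, u in zip((h, s, v), lo, hi))
        mask.append(1 if inside else 0)
    return mask
```

Tuning the sliders simply amounts to choosing lo and hi; the centroid of the surviving mask pixels gives the tracked object position.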

The code is on my GitHub account.

Learning From Demonstration

In this work, I first recognize the object in the scene and estimate its 6-DOF pose. Then I track the object using a particle filter. The RGB-D data are acquired from a Kinect 2 and turned into a PCL point cloud.
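To illustrate the particle-filter idea in its simplest form, here is a 1-D sketch in pure Python (the actual tracker works on the full 6-DOF pose in the point cloud, and the noise levels here are arbitrary placeholders):

```python
import math
import random

def particle_filter_step(particles, z, motion_std=0.5, meas_std=1.0):
    """One predict/weight/resample cycle of a particle filter that
    tracks a scalar position from noisy measurements z."""
    # predict: diffuse each particle with random motion noise
    moved = [p + random.gauss(0.0, motion_std) for p in particles]
    # weight: Gaussian likelihood of the measurement given each particle
    weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resample: draw particles proportional to their weights
    return random.choices(moved, weights=weights, k=len(moved))
```

The state estimate at each step is simply the (weighted) mean of the particle set; in the 6-DOF case each particle carries a full pose instead of a scalar.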
I demonstrate a task to the robot several times. In this case, I move an object (a detergent bottle) along an "S"-shaped path to get an "S"-shaped trajectory.

In the following, you can see the result of repeating the task six times. The trajectories are very noisy, and each repetition gives a new "S" shape.
Then I compute a GMM (Gaussian mixture model) of the trajectory in each dimension. The number of kernels can be set by the user or determined automatically based on the BIC (Bayesian information criterion).
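This model-selection step can be sketched in pure Python for 1-D data (each trajectory dimension is fitted separately); the EM initialization and convergence checks are simplified here, and the actual thesis code is C++:

```python
import math

def fit_gmm_1d(data, k, iters=50):
    """Fit a 1-D Gaussian mixture with k kernels by EM; returns
    (weights, means, variances, log_likelihood)."""
    n = len(data)
    mu = [data[i * (n - 1) // max(k - 1, 1)] for i in range(k)]  # spread initial means
    var = [1.0] * k
    w = [1.0 / k] * k
    ll = 0.0
    for _ in range(iters):
        # E-step: responsibilities of each kernel for each point
        resp, ll = [], 0.0
        for x in data:
            dens = [w[j] * math.exp(-0.5 * (x - mu[j]) ** 2 / var[j])
                    / math.sqrt(2 * math.pi * var[j]) for j in range(k)]
            s = sum(dens)
            ll += math.log(s)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var[j] = max(sum(r[j] * (x - mu[j]) ** 2
                             for r, x in zip(resp, data)) / nj, 1e-6)
    return w, mu, var, ll

def bic(ll, k, n):
    """Bayesian information criterion: lower is better.
    A 1-D GMM with k kernels has 3k - 1 free parameters."""
    return (3 * k - 1) * math.log(n) - 2 * ll
```

Automatic selection then just fits the mixture for a range of k and keeps the k with the lowest BIC, which penalizes extra kernels that do not improve the likelihood enough.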
After that, I compute Gaussian mixture regression (GMR) to generalize the task and obtain the learned trajectory.
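The regression step can be sketched as follows, assuming a joint GMM over (time, position) with hypothetical parameters: GMR conditions each kernel on the time t and blends the per-kernel conditional means with the kernels' responsibilities for that t:

```python
import math

def gmr(t, weights, means, covs):
    """Gaussian mixture regression: condition a joint GMM over (t, x)
    on time t and return the expected x.  means[k] = (mu_t, mu_x),
    covs[k] = ((s_tt, s_tx), (s_xt, s_xx))."""
    num, den = 0.0, 0.0
    for w, (mt, mx), ((stt, stx), (sxt, sxx)) in zip(weights, means, covs):
        # responsibility of this kernel for time t
        h = w * math.exp(-0.5 * (t - mt) ** 2 / stt) / math.sqrt(2 * math.pi * stt)
        # conditional mean of x given t for this kernel
        xk = mx + sxt / stt * (t - mt)
        num += h * xk
        den += h
    return num / den
```

Sweeping t over the duration of the demonstrations produces the smooth, generalized trajectory.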
DTW (dynamic time warping) can be used to align the trajectories to a reference trajectory, removing the timing differences between the demonstrations.
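A minimal DTW sketch (pure Python, 1-D sequences, no path constraints or windowing):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # a[i-1] maps to an earlier b
                              D[i][j - 1],      # b[j-1] maps to an earlier a
                              D[i - 1][j - 1])  # match the two samples
    return D[n][m]
```

Backtracking through the same table gives the warping path used to resample each demonstration onto the reference timeline.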

Finally, you can see the learned trajectory in black.

All the code was written in C++ with ROS (Indigo).