There are several approaches for estimating the probability density function of given data: 1) parametric, 2) semi-parametric, and 3) non-parametric. A parametric example is the Gaussian Mixture Model (GMM), fitted via an algorithm such as Expectation Maximization (EM); here is my other post on Expectation Maximization. An example of a non-parametric approach is the histogram, where each data point is assigned to only one bin and, depending on the number of bins that fall within […]
I found a really good repository on GitHub for fitting a Gaussian Mixture Model (GMM) with Expectation Maximization (EM) in ROS. It exposes many parameters that you can change; some of the most important ones are:
// minimum number of gaussians
#define PARAM_NAME_GAUSSIAN_COUNT_MIN "gaussian_count_min"
#define PARAM_DEFAULT_GAUSSIAN_COUNT_MIN 1
// search will terminate when the gaussian count reaches this
#define PARAM_NAME_GAUSSIAN_COUNT_MAX "gaussian_count_max"
#define PARAM_DEFAULT_GAUSSIAN_COUNT_MAX 10
To find the optimal number of components, it uses the Bayesian Information Criterion (BIC). There are other methods to find […]
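To illustrate the criterion itself, here is a sketch of how BIC can pick the component count; this shows the general formula BIC = k·ln(n) − 2·ln(L), not the repository's actual code, and the function names are hypothetical:

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// BIC = k * ln(n) - 2 * ln(L); lower is better.
double bic(int num_params, std::size_t n, double log_likelihood) {
    return num_params * std::log(static_cast<double>(n)) - 2.0 * log_likelihood;
}

// Free parameters of a K-component GMM in d dimensions with full
// covariances: (K - 1) mixing weights + K*d means + K*d*(d+1)/2
// covariance entries.
int gmm_param_count(int K, int d) {
    return (K - 1) + K * d + K * d * (d + 1) / 2;
}

// Given the fitted log-likelihood for each candidate count 1..K_max,
// return the count with the smallest BIC.
int pick_gaussian_count(const std::vector<double>& log_likelihoods,
                        std::size_t n, int d) {
    int best_K = 1;
    double best_bic = bic(gmm_param_count(1, d), n, log_likelihoods[0]);
    for (int K = 2; K <= static_cast<int>(log_likelihoods.size()); ++K) {
        double b = bic(gmm_param_count(K, d), n, log_likelihoods[K - 1]);
        if (b < best_bic) { best_bic = b; best_K = K; }
    }
    return best_K;
}
```

The k·ln(n) term penalizes extra Gaussians, so the search stops rewarding more components once the likelihood gain no longer justifies the added parameters.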
In many applications, you need to compare two or more data sets to see how similar or different they are. For instance, suppose you have measured the heights of men and women in Japan and the Netherlands and would now like to know how much they differ. Two commonly used methods […]
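Since the excerpt is cut off before naming the two methods, here is one common option as an illustrative sketch (my assumption, not necessarily one of the methods the post discusses): the two-sample Kolmogorov-Smirnov statistic, the largest vertical distance between the two empirical CDFs:

```cpp
#include <algorithm>
#include <vector>
#include <cmath>
#include <cstddef>

// Two-sample Kolmogorov-Smirnov statistic: max |F_a(x) - F_b(x)| over x,
// where F_a and F_b are the empirical CDFs of the two samples.
double ks_statistic(std::vector<double> a, std::vector<double> b) {
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    std::size_t i = 0, j = 0;
    double d = 0.0;
    while (i < a.size() && j < b.size()) {
        double x = std::min(a[i], b[j]);
        // Advance past all ties so both CDFs step together at equal values.
        while (i < a.size() && a[i] == x) ++i;
        while (j < b.size() && b[j] == x) ++j;
        double fa = static_cast<double>(i) / a.size();
        double fb = static_cast<double>(j) / b.size();
        d = std::max(d, std::fabs(fa - fb));
    }
    return d;
}
```

A statistic near 0 means the two samples look alike; near 1 means they barely overlap.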
In this work, I first recognize the object in the scene and estimate its 6-DOF pose. Then I track the object using a particle filter. RGB-D data are acquired from a Kinect 2 and converted into a PCL point cloud. I demonstrate a task to the robot several times; in this case, I move an object (a detergent) over
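The tracking step can be sketched as a minimal 1-D particle filter; this is only an illustration of the predict/update/resample loop, not the actual PCL-based 6-DOF tracker used here, and all names and noise parameters are hypothetical:

```cpp
#include <vector>
#include <random>
#include <cmath>
#include <numeric>
#include <cstddef>

// Minimal 1-D particle filter: predict with process noise, weight by a
// Gaussian measurement likelihood, then resample.
struct ParticleFilter {
    std::vector<double> particles, weights;
    std::mt19937 rng{42};

    ParticleFilter(int n, double init)
        : particles(n, init), weights(n, 1.0 / n) {}

    // Predict: propagate each particle through the motion model plus noise.
    void predict(double motion, double noise_std) {
        std::normal_distribution<double> noise(0.0, noise_std);
        for (double& p : particles) p += motion + noise(rng);
    }

    // Update: weight each particle by how well it explains measurement z.
    void update(double z, double meas_std) {
        double sum = 0.0;
        for (std::size_t i = 0; i < particles.size(); ++i) {
            double e = (z - particles[i]) / meas_std;
            weights[i] = std::exp(-0.5 * e * e);
            sum += weights[i];
        }
        for (double& w : weights) w /= sum;
    }

    // Resample (multinomial) to concentrate particles on likely states.
    void resample() {
        std::discrete_distribution<int> pick(weights.begin(), weights.end());
        std::vector<double> next(particles.size());
        for (double& p : next) p = particles[pick(rng)];
        particles = std::move(next);
        std::fill(weights.begin(), weights.end(), 1.0 / weights.size());
    }

    // State estimate: mean of the particle set.
    double estimate() const {
        return std::accumulate(particles.begin(), particles.end(), 0.0)
               / particles.size();
    }
};
```

In the real tracker the state is a 6-DOF pose and the likelihood comes from matching the object model against the point cloud, but the loop structure is the same.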