For this work, I first loaded the RRBot in Gazebo and launched its joint controllers, then I sent a periodic signal to the robot so that the laser scanner mounted on it swings back and forth.
I then assembled the incoming laser scans, using the transforms from tf, into a PCL point cloud.
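A minimal sketch of that assembly step, assuming the laser_assembler node (laser_scan_assembler) is already running and listening to the robot's scan topic; the node name and the time window below are illustrative:

// Client for the laser_assembler "assemble_scans2" service; converts the
// assembled sensor_msgs/PointCloud2 into a PCL point cloud.
#include <ros/ros.h>
#include <laser_assembler/AssembleScans2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_assembler_client");
  ros::NodeHandle nh;
  ros::service::waitForService("assemble_scans2");
  ros::ServiceClient client =
      nh.serviceClient<laser_assembler::AssembleScans2>("assemble_scans2");

  laser_assembler::AssembleScans2 srv;
  srv.request.begin = ros::Time(0);     // assemble everything received so far
  srv.request.end   = ros::Time::now();

  if (client.call(srv))
  {
    pcl::PointCloud<pcl::PointXYZ> cloud;
    pcl::fromROSMsg(srv.response.cloud, cloud);   // ROS message -> PCL cloud
    ROS_INFO("Assembled cloud with %zu points", cloud.size());
  }
  return 0;
}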
In this tutorial, I'm going to show you how to do object recognition and 6 DOF pose estimation in real time based on the Linemod algorithm, with ROS and PCL point clouds. First, you need to install ORK (Object Recognition Kitchen):
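If I remember correctly from the ORK documentation, on Indigo the prebuilt Debian packages can be installed with something like the following (the package name pattern is an assumption, so verify it for your distribution):

sudo apt-get install ros-indigo-object-recognition-kitchen-*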
You also need to install the Kinect driver; if you don't know how, follow my tutorial on that. Then launch the driver:
roslaunch openni_launch openni.launch
Then start RViz to visualize the point cloud:

rosrun rviz rviz
To enable depth registration, open rqt_reconfigure:

rosrun rqt_reconfigure rqt_reconfigure
Select /camera/driver from the drop-down menu and enable the depth_registration checkbox. Now go back to RViz and change your PointCloud2 topic to /camera/depth_registered/points.
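From here, the usual ORK workflow (as I recall it from the ORK documentation; the configuration file paths below are assumptions, so adapt them to your setup) is to train the Linemod pipeline on your object models and then run the detection node:

rosrun object_recognition_core training -c `rospack find object_recognition_linemod`/conf/training.ork
rosrun object_recognition_core detection -c `rospack find object_recognition_linemod`/conf/detection.ros.ork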
In my other tutorial, I showed you how to install your code in an arbitrary location on Unix/Linux systems. In this tutorial, I'm going to show you how to find those libraries after installation, using two examples: OpenCV and PCL.
I assume that you have compiled and installed them with the following command:
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=ON -DCMAKE_INSTALL_PREFIX:PATH=~/usr .. && make -j8 all install
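Once the libraries live under a non-standard prefix such as ~/usr, a simple way to let a downstream CMake project find them is to put that prefix on CMAKE_PREFIX_PATH when configuring your own project (a sketch; ~/usr is just the prefix used above):

cmake -DCMAKE_PREFIX_PATH=$HOME/usr ..
# or export it once per shell so every find_package() call sees it:
export CMAKE_PREFIX_PATH=$HOME/usr:$CMAKE_PREFIX_PATH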
Registration is the process of aligning 3D point clouds onto each other so that together they form a complete model. To achieve this, you need to find the relative position and orientation of each point cloud such that the overlapping areas between them are maximized [1].
I got the idea from there and implemented software based on it. Below you can see the main idea, the steps I took, and finally the results:
Main flowchart of pairwise point cloud registration (image source: http://pointclouds.org/documentation/tutorials/registration_api.php)
1) Import the point clouds acquired from different angles, downsample them, and select a keypoint extraction method (SIFT, NARF, Harris, or SUSAN) and its parameters.
2) The selected keypoints are highlighted in green; for each keypoint a descriptor (PFH or FPFH) is estimated.
3) Correspondences between keypoint descriptors are estimated (by histogram distance) and corresponding points are connected.
4) Bad correspondences are rejected via several algorithms, and from the remaining correspondences a 4×4 transformation matrix is computed.
5) The 4×4 transformation matrix is used as the initial estimate for the ICP algorithm, and the two point clouds are merged into one (see the sketch below).
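Here is a condensed sketch of steps 3-5 with PCL, assuming the keypoint clouds and their FPFH descriptors from steps 1-2 have already been computed; the function name and threshold values are illustrative:

// Steps 3-5: descriptor matching, RANSAC correspondence rejection, ICP refinement.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/correspondence_estimation.h>
#include <pcl/registration/correspondence_rejection_sample_consensus.h>
#include <pcl/registration/icp.h>

typedef pcl::PointCloud<pcl::PointXYZ> Cloud;
typedef pcl::PointCloud<pcl::FPFHSignature33> Features;

Eigen::Matrix4f alignPair(const Cloud::Ptr& src_keypoints,
                          const Cloud::Ptr& tgt_keypoints,
                          const Features::Ptr& src_features,
                          const Features::Ptr& tgt_features,
                          const Cloud::Ptr& src_full,
                          const Cloud::Ptr& tgt_full,
                          Cloud& merged)
{
  // 3) Match descriptors by histogram distance.
  pcl::registration::CorrespondenceEstimation<pcl::FPFHSignature33,
                                              pcl::FPFHSignature33> est;
  est.setInputSource(src_features);
  est.setInputTarget(tgt_features);
  pcl::CorrespondencesPtr all(new pcl::Correspondences);
  est.determineReciprocalCorrespondences(*all);

  // 4) Reject bad correspondences with RANSAC; this also yields a 4x4 transform.
  pcl::registration::CorrespondenceRejectorSampleConsensus<pcl::PointXYZ> rej;
  rej.setInputSource(src_keypoints);
  rej.setInputTarget(tgt_keypoints);
  rej.setInlierThreshold(0.05);          // metres, tune for your sensor
  rej.setInputCorrespondences(all);
  pcl::Correspondences inliers;
  rej.getCorrespondences(inliers);
  Eigen::Matrix4f initial = rej.getBestTransformation();

  // 5) Use the initial guess to seed ICP on the full clouds, then merge.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setInputSource(src_full);
  icp.setInputTarget(tgt_full);
  icp.setMaximumIterations(50);
  Cloud aligned;
  icp.align(aligned, initial);           // 'initial' seeds the ICP iterations
  merged = *tgt_full;
  merged += aligned;                     // concatenate into one model
  return icp.getFinalTransformation();
}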
In this work, I first recognize the object in the scene and estimate its 6 DOF pose. Then I track the object using a particle filter. RGB-D data is acquired from a Kinect 2 and turned into a PCL point cloud.
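A rough sketch of how the tracking part can be set up with pcl::tracking, assuming PCL 1.7 (ROS Indigo); the parameter values and the helper names setupTracker/trackFrame are illustrative:

// Particle filter tracking of a recognized object model.
#include <boost/shared_ptr.hpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/tracking/particle_filter_omp.h>
#include <pcl/tracking/approx_nearest_pair_point_cloud_coherence.h>
#include <pcl/tracking/distance_coherence.h>
#include <pcl/search/octree.h>

typedef pcl::PointXYZRGBA PointT;
typedef pcl::tracking::ParticleXYZRPY ParticleT;

boost::shared_ptr<pcl::tracking::ParticleFilterOMPTracker<PointT, ParticleT> >
    tracker(new pcl::tracking::ParticleFilterOMPTracker<PointT, ParticleT>(4));

void setupTracker(const pcl::PointCloud<PointT>::ConstPtr& ref_cloud,
                  const Eigen::Affine3f& initial_pose)
{
  // Motion model noise and particle count (values are illustrative).
  tracker->setStepNoiseCovariance(std::vector<double>(6, 0.015 * 0.015));
  tracker->setInitialNoiseCovariance(std::vector<double>(6, 0.0001));
  tracker->setInitialNoiseMean(std::vector<double>(6, 0.0));
  tracker->setParticleNum(600);

  // Likelihood: nearest-neighbour distance coherence between model and scene.
  boost::shared_ptr<pcl::tracking::ApproxNearestPairPointCloudCoherence<PointT> >
      coherence(new pcl::tracking::ApproxNearestPairPointCloudCoherence<PointT>);
  boost::shared_ptr<pcl::tracking::DistanceCoherence<PointT> >
      dist(new pcl::tracking::DistanceCoherence<PointT>);
  coherence->addPointCoherence(dist);
  boost::shared_ptr<pcl::search::Octree<PointT> >
      search(new pcl::search::Octree<PointT>(0.01));
  coherence->setSearchMethod(search);
  coherence->setMaximumDistance(0.01);
  tracker->setCloudCoherence(coherence);

  // Object model and its 6 DOF pose from the recognition step.
  tracker->setReferenceCloud(ref_cloud);
  tracker->setTrans(initial_pose);
}

// Called for every new Kinect frame; returns the tracked object pose.
Eigen::Affine3f trackFrame(const pcl::PointCloud<PointT>::ConstPtr& scene)
{
  tracker->setInputCloud(scene);
  tracker->compute();
  return tracker->toEigenMatrix(tracker->getResult());
}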
I demonstrate a task to the robot several times. In this case, I move an object (a detergent bottle) along an "S"-shaped path to obtain an "S"-shaped trajectory.
Below you can see the result of repeating the task six times. The trajectories are very noisy, and each repetition gives a new "S" shape.
Then I compute a GMM (Gaussian mixture model) of the trajectory in each dimension. The number of kernels can be set by the user or determined automatically based on the BIC (Bayesian information criterion).
After that, I compute Gaussian mixture regression (GMR) to generalize the task and obtain the learned trajectory.
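A minimal sketch of the GMR step, assuming the GMM has already been fitted over [t, x, y, z]; the struct layout and the fixed 3-D output are illustrative:

// Gaussian mixture regression: conditional expectation E[x,y,z | t].
#include <Eigen/Dense>
#include <vector>
#include <cmath>

struct GaussianComponent
{
  double prior;            // pi_k
  Eigen::Vector4d mean;    // mean over [t, x, y, z]
  Eigen::Matrix4d cov;     // joint covariance over [t, x, y, z]
};

Eigen::Vector3d gmr(const std::vector<GaussianComponent>& gmm, double t)
{
  Eigen::Vector3d out = Eigen::Vector3d::Zero();
  std::vector<double> h(gmm.size());
  double h_sum = 0.0;

  // Responsibility of each component for this time step.
  for (size_t k = 0; k < gmm.size(); ++k)
  {
    const double mu_t  = gmm[k].mean(0);
    const double var_t = gmm[k].cov(0, 0);
    const double gauss = std::exp(-0.5 * (t - mu_t) * (t - mu_t) / var_t)
                         / std::sqrt(2.0 * M_PI * var_t);
    h[k] = gmm[k].prior * gauss;
    h_sum += h[k];
  }

  // Weighted sum of the per-component conditional means given t.
  for (size_t k = 0; k < gmm.size(); ++k)
  {
    const Eigen::Vector3d mu_x   = gmm[k].mean.tail<3>();
    const Eigen::Vector3d cov_xt = gmm[k].cov.block<3, 1>(1, 0);
    const double var_t           = gmm[k].cov(0, 0);
    out += (h[k] / h_sum) * (mu_x + cov_xt * ((t - gmm[k].mean(0)) / var_t));
  }
  return out;
}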
DTW (dynamic time warping) can be used to align the trajectories to a reference trajectory, removing the timing differences between demonstrations.
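For reference, a minimal DTW sketch between two 3-D trajectories, assuming each trajectory is a vector of Eigen::Vector3d samples; backtracking through the accumulated cost matrix D recovers the warping path used to resample a demonstration onto the reference time axis:

// Dynamic time warping: accumulated alignment cost between two trajectories.
#include <Eigen/Dense>
#include <vector>
#include <limits>
#include <algorithm>

double dtwDistance(const std::vector<Eigen::Vector3d>& a,
                   const std::vector<Eigen::Vector3d>& b)
{
  const size_t n = a.size(), m = b.size();
  const double inf = std::numeric_limits<double>::infinity();
  // (n+1) x (m+1) cost matrix with an infinite border row/column.
  std::vector<std::vector<double> > D(n + 1, std::vector<double>(m + 1, inf));
  D[0][0] = 0.0;

  for (size_t i = 1; i <= n; ++i)
    for (size_t j = 1; j <= m; ++j)
    {
      const double cost = (a[i - 1] - b[j - 1]).norm();   // local distance
      D[i][j] = cost + std::min(D[i - 1][j],               // insertion
                       std::min(D[i][j - 1],               // deletion
                                D[i - 1][j - 1]));         // match
    }
  return D[n][m];
}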
Finally, you can see the learned trajectory in black.
All code was written in C++ with ROS (Indigo).
The relationship between ROS and the PCL point cloud library is a bit complicated: at first PCL was part of ROS, then it became a separate project, and you have always needed a serializer and deserializer to send and receive point clouds as ROS messages. There are some old, deprecated ways to do that which still linger in the ROS documentation and confuse programmers, so here is the right API to use:
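A minimal sketch of the non-deprecated conversion path on Indigo, using the pcl_conversions package; the callback and topic handling around it are illustrative:

// Converting between sensor_msgs/PointCloud2 and pcl::PointCloud.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  // ROS message -> PCL cloud
  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  pcl::fromROSMsg(*msg, cloud);

  // ... process the cloud with PCL ...

  // PCL cloud -> ROS message (e.g. for publishing the result)
  sensor_msgs::PointCloud2 out;
  pcl::toROSMsg(cloud, out);
  out.header = msg->header;   // keep the original frame and timestamp
}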