Tag Archives: point cloud

Autonomous navigation of a two-wheel differential drive robot in Gazebo

A two-wheel differential drive robot (with two caster wheels).
List of installed sensors:
• Velodyne VLP-16.
• Velodyne HDL-32E.
• Hokuyo laser scanner.
• IMU.
• Microsoft Kinect/Asus Xtion Pro.
• RGB Camera.

You can manually control the robot with a joystick controller to map the robot’s environment.
Autonomous navigation is possible by setting a goal pose, as sketched below.
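The post itself only demonstrates this in a video; as a rough sketch of how a goal pose is typically sent, assuming the standard ROS navigation stack (move_base) is running on the robot:

    #include <ros/ros.h>
    #include <move_base_msgs/MoveBaseAction.h>
    #include <actionlib/client/simple_action_client.h>

    typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "send_goal");
      MoveBaseClient ac("move_base", true);
      ac.waitForServer();

      move_base_msgs::MoveBaseGoal goal;
      goal.target_pose.header.frame_id = "map";
      goal.target_pose.header.stamp = ros::Time::now();
      goal.target_pose.pose.position.x = 1.0;   // example goal, 1 m ahead
      goal.target_pose.pose.orientation.w = 1.0;

      ac.sendGoal(goal);
      ac.waitForResult();
      return 0;
    }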


Converting sensor_msgs::PointCloud2 to sensor_msgs::PointCloud and reverse
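The conversion code is missing from this archive; a minimal sketch using the conversion helpers that ship with sensor_msgs:

    #include <sensor_msgs/PointCloud.h>
    #include <sensor_msgs/PointCloud2.h>
    #include <sensor_msgs/point_cloud_conversion.h>

    sensor_msgs::PointCloud2 cloud2;  // filled in elsewhere
    sensor_msgs::PointCloud cloud1;

    // PointCloud2 -> legacy PointCloud
    sensor_msgs::convertPointCloud2ToPointCloud(cloud2, cloud1);

    // legacy PointCloud -> PointCloud2
    sensor_msgs::convertPointCloudToPointCloud2(cloud1, cloud2);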


Converting pcl::PCLPointCloud2 to pcl::PointCloud<PointT> and reverse
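Again the code itself is missing; a minimal sketch using the conversion functions from pcl/conversions.h:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/conversions.h>

    pcl::PCLPointCloud2 blob;              // binary blob representation
    pcl::PointCloud<pcl::PointXYZ> cloud;  // typed point cloud

    // PCLPointCloud2 -> PointCloud<PointT>
    pcl::fromPCLPointCloud2(blob, cloud);

    // PointCloud<PointT> -> PCLPointCloud2
    pcl::toPCLPointCloud2(cloud, blob);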


More about PCL point clouds and conversions

Assembling laser scans into a PCL point cloud using Gazebo and ROS

For this work, I first loaded the RRBot in Gazebo and launched its joint controllers, then I sent a periodic signal to the robot so that the laser scanner mounted on it swings.

In the following, I assemble the incoming laser scans using the transforms from tf and create a PCL point cloud.

Install the necessary package:
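The command is missing from this archive; since the page mentions ROS Indigo, it was presumably something like:

    sudo apt-get install ros-indigo-laser-assembler
    # RRBot lives in the gazebo_ros_demos repository
    git clone https://github.com/ros-simulation/gazebo_ros_demos.git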

Add the path to ROS_PACKAGE_PATH:
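For example (the checkout location is my assumption):

    export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$HOME/gazebo_ros_demos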

Set the second joint command topic (/rrbot/joint2_position_controller/command) to (pi/4)+(pi/4)*sin(i/40)*sin(i/40) and the publishing frequency to 50 Hz, so the joint swings between pi/4 and pi/2.
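The publisher code itself is missing from this archive; here is a minimal sketch of such a node, assuming the std_msgs/Float64 command interface used by ros_control position controllers:

    #include <ros/ros.h>
    #include <std_msgs/Float64.h>
    #include <cmath>

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "joint2_swing_publisher");
      ros::NodeHandle nh;
      ros::Publisher pub =
          nh.advertise<std_msgs::Float64>("/rrbot/joint2_position_controller/command", 1);

      ros::Rate rate(50);  // 50 Hz, as stated above
      int i = 0;
      while (ros::ok())
      {
        std_msgs::Float64 cmd;
        // (pi/4) + (pi/4)*sin^2(i/40): swings the joint between pi/4 and pi/2
        cmd.data = M_PI / 4.0 + (M_PI / 4.0) * std::sin(i / 40.0) * std::sin(i / 40.0);
        pub.publish(cmd);
        ++i;
        rate.sleep();
      }
      return 0;
    }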

Laser Assembler:
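The client code is missing here; a minimal sketch in the spirit of the laser_assembler tutorials, periodically calling the assemble_scans service and publishing the result (the output topic name is my choice):

    #include <ros/ros.h>
    #include <sensor_msgs/PointCloud.h>
    #include <laser_assembler/AssembleScans.h>

    int main(int argc, char **argv)
    {
      ros::init(argc, argv, "periodic_snapshotter");
      ros::NodeHandle nh;
      ros::service::waitForService("assemble_scans");
      ros::ServiceClient client =
          nh.serviceClient<laser_assembler::AssembleScans>("assemble_scans");
      ros::Publisher pub = nh.advertise<sensor_msgs::PointCloud>("assembled_cloud", 1);

      ros::Rate rate(1);  // request one assembled cloud per second
      while (ros::ok())
      {
        laser_assembler::AssembleScans srv;
        srv.request.begin = ros::Time(0);   // everything in the rolling buffer
        srv.request.end = ros::Time::now();
        if (client.call(srv))
          pub.publish(srv.response.cloud);
        rate.sleep();
      }
      return 0;
    }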

Finally, run:
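The exact command is missing from this archive; if the client above lives in a package called, say, laser_to_pcl (a hypothetical name), it would be:

    rosrun laser_to_pcl periodic_snapshotter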

Create a launch file and save the following lines to it and run it with roslaunch:
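The launch file contents are missing here; a plausible minimal version, where the scan topic and fixed frame are my assumptions for the RRBot setup:

    <launch>
      <!-- buffers incoming scans and offers the assemble_scans service -->
      <node pkg="laser_assembler" type="laser_scan_assembler" name="laser_scan_assembler">
        <remap from="scan" to="/rrbot/laser/scan"/>
        <param name="fixed_frame" value="world"/>
        <param name="max_scans" value="400"/>
      </node>
    </launch>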

Source code is available on my GitHub.

Real-time object recognition and 6DOF pose estimation with PCL pointcloud and ROS

Real-time object recognition and 6DOF pose estimation with PCL pointcloud and ROS. This approach is based on normal coherence.

Real-time object recognition and 6DOF pose estimation based on the Linemod algorithm with ROS and PCL pointcloud

In this tutorial, I’m going to show you how to do object recognition and 6DOF pose estimation in real time based on the Linemod algorithm with ROS and PCL pointcloud. First, you need to install ORK (Object Recognition Kitchen):
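The install commands are missing from this archive; assuming the Debian packages that were released for ROS Indigo, it was presumably along these lines (building ORK from source is the alternative):

    sudo apt-get install ros-indigo-object-recognition-core \
                         ros-indigo-object-recognition-linemod \
                         ros-indigo-object-recognition-msgs \
                         ros-indigo-object-recognition-ros \
                         ros-indigo-object-recognition-ros-visualization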

Then add the model of your object for tracking to CouchDB:
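The commands are missing here; following the ORK documentation they look roughly like this (object name, description, and mesh path are placeholders):

    rosrun object_recognition_core object_add.py -n detergent -d "detergent bottle" --commit
    # use the object id printed by the previous command
    rosrun object_recognition_core mesh_add.py <object_id> /path/to/detergent.stl --commit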

You need to install the Kinect driver; if you don’t know how, follow my tutorial about that.

Run rqt_reconfigure (rosrun rqt_reconfigure rqt_reconfigure) and select /camera/driver from the drop-down menu. Enable the depth_registration checkbox. Now go back to RViz and change your PointCloud2 topic to /camera/depth_registered/points.


How to find packages with CMake from arbitrary install locations

In my other tutorial, I showed you how to install your code in an arbitrary location on Unix/Linux systems. In this tutorial, I’m going to show you how to find the packages after installation. Here I have two examples: OpenCV and PCL.

I assume that you have compiled and installed them using the following commands:
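The commands are missing from this archive; presumably something like the following, where $HOME/usr is my stand-in for the arbitrary install prefix:

    cd build
    cmake -DCMAKE_INSTALL_PREFIX=$HOME/usr ..
    make -j4
    make install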

1) PCL point cloud

put the following lines in your CMakeLists.txt:
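A sketch of what belongs there, assuming PCL 1.8 installed under $HOME/usr (prefix and version are my assumptions):

    # point CMake at the custom install prefix
    set(PCL_DIR "$ENV{HOME}/usr/share/pcl-1.8")
    find_package(PCL 1.8 REQUIRED)
    include_directories(${PCL_INCLUDE_DIRS})
    link_directories(${PCL_LIBRARY_DIRS})
    add_definitions(${PCL_DEFINITIONS})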

and check if everything is correct:
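For example, by printing what CMake actually found:

    message(STATUS "PCL_FOUND: ${PCL_FOUND}")
    message(STATUS "PCL_VERSION: ${PCL_VERSION}")
    message(STATUS "PCL_INCLUDE_DIRS: ${PCL_INCLUDE_DIRS}")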

2) OpenCV

put the following lines in your CMakeLists.txt:
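Analogously for OpenCV (the prefix is again my assumption):

    set(OpenCV_DIR "$ENV{HOME}/usr/share/OpenCV")
    find_package(OpenCV REQUIRED)
    include_directories(${OpenCV_INCLUDE_DIRS})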


and check if everything is correct:
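Again, printing the variables is enough to verify:

    message(STATUS "OpenCV_FOUND: ${OpenCV_FOUND}")
    message(STATUS "OpenCV_VERSION: ${OpenCV_VERSION}")
    message(STATUS "OpenCV_INCLUDE_DIRS: ${OpenCV_INCLUDE_DIRS}")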

If you don’t want to make changes to your CMakeLists.txt, you can pass <PackageName>_DIR as a CMake parameter instead. Example:
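For example (paths are my assumptions):

    cmake -DOpenCV_DIR=$HOME/usr/share/OpenCV -DPCL_DIR=$HOME/usr/share/pcl-1.8 ..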


PCL pointcloud pairwise registration

Registration means aligning 3D point clouds with each other so that together they give you a complete model. To achieve this, you need to find the relative positions and orientations of the point clouds such that the overlapping areas between them are maximized [1].

So I got the idea from here and implemented software based on it. In the following, you can see the main idea, the steps I took, and finally the results:

Main Flowchart of pairwise point cloud registration

Image source: http://pointclouds.org/documentation/tutorials/registration_api.php


1) Import the point clouds acquired from different angles, downsample them, and select a keypoint extraction method (SIFT, NARF, Harris, or SUSAN) and its respective parameters.

2) The selected keypoints are highlighted in green; for each keypoint a descriptor (PFH or FPFH) is estimated.

3) Correspondences between keypoint descriptors are estimated (via histogram distance) and corresponding points are connected.

4) Bad correspondences are rejected via several algorithms, and from the remaining correspondences a 4×4 transformation matrix is computed.

5) The 4×4 transformation matrix is used as the initial estimate for the ICP algorithm, and the two point clouds are merged into one (see the sketch after this list).
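The post’s own implementation is not shown in this archive; below is a compact sketch of the same pipeline with stock PCL classes, where a RANSAC-based coarse alignment over FPFH features stands in for steps 2-4 and ICP does the final refinement. Leaf sizes and search radii are assumptions:

    #include <pcl/point_types.h>
    #include <pcl/filters/voxel_grid.h>
    #include <pcl/features/normal_3d.h>
    #include <pcl/features/fpfh.h>
    #include <pcl/registration/ia_ransac.h>
    #include <pcl/registration/icp.h>

    typedef pcl::PointXYZ PointT;

    // Step 1: downsample with a voxel grid (1 cm leaves, assumed)
    pcl::PointCloud<PointT>::Ptr downsample(const pcl::PointCloud<PointT>::Ptr &cloud)
    {
      pcl::PointCloud<PointT>::Ptr filtered(new pcl::PointCloud<PointT>);
      pcl::VoxelGrid<PointT> grid;
      grid.setLeafSize(0.01f, 0.01f, 0.01f);
      grid.setInputCloud(cloud);
      grid.filter(*filtered);
      return filtered;
    }

    // Step 2: estimate normals, then FPFH descriptors
    pcl::PointCloud<pcl::FPFHSignature33>::Ptr computeFPFH(const pcl::PointCloud<PointT>::Ptr &cloud)
    {
      pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
      pcl::NormalEstimation<PointT, pcl::Normal> ne;
      ne.setInputCloud(cloud);
      ne.setRadiusSearch(0.03);  // assumed search radius
      ne.compute(*normals);

      pcl::PointCloud<pcl::FPFHSignature33>::Ptr features(new pcl::PointCloud<pcl::FPFHSignature33>);
      pcl::FPFHEstimation<PointT, pcl::Normal, pcl::FPFHSignature33> fpfh;
      fpfh.setInputCloud(cloud);
      fpfh.setInputNormals(normals);
      fpfh.setRadiusSearch(0.05);  // must be larger than the normal radius
      fpfh.compute(*features);
      return features;
    }

    Eigen::Matrix4f align(pcl::PointCloud<PointT>::Ptr source, pcl::PointCloud<PointT>::Ptr target)
    {
      // Steps 3-4: coarse alignment from feature correspondences (RANSAC-based)
      pcl::SampleConsensusInitialAlignment<PointT, PointT, pcl::FPFHSignature33> sac_ia;
      sac_ia.setInputSource(source);
      sac_ia.setSourceFeatures(computeFPFH(source));
      sac_ia.setInputTarget(target);
      sac_ia.setTargetFeatures(computeFPFH(target));
      pcl::PointCloud<PointT> coarse;
      sac_ia.align(coarse);

      // Step 5: refine with ICP, seeded by the coarse 4x4 transformation
      pcl::IterativeClosestPoint<PointT, PointT> icp;
      icp.setInputSource(source);
      icp.setInputTarget(target);
      pcl::PointCloud<PointT> refined;
      icp.align(refined, sac_ia.getFinalTransformation());
      return icp.getFinalTransformation();
    }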


Learning From Demonstration

In this work, I first recognize the object in the scene and estimate its 6DOF pose. Then I track the object using a particle filter. RGB-D data is acquired from a Kinect 2 and turned into a PCL pointcloud.
I demonstrate a task to the robot several times. In this case, I move an object (a detergent bottle) along an “S”-shaped path to get an “S”-shaped trajectory.

In the following, you can see the result of repeating the task 6 times. The trajectories are very noisy, and each repetition gives a new “S” shape.
Then I compute a GMM (Gaussian mixture model) of the trajectory in each dimension. The number of kernels can be set by the user or determined automatically based on the BIC (Bayesian information criterion).
After that, I compute Gaussian mixture regression (GMR) to generalize the task and obtain the learned trajectory.
DTW (dynamic time warping) can be used to align the trajectories to a reference trajectory, which removes the timing differences between the demonstrations.
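For reference, this is the standard GMR formula (my addition, not from the original post): with priors \pi_k, means \mu_k, and covariances \Sigma_k of a GMM fitted jointly over time t and position x, the regressed trajectory is the conditional expectation

    \hat{x}(t) = \sum_{k=1}^{K} h_k(t)\,\bigl(\mu_{k,x} + \Sigma_{k,xt}\,\Sigma_{k,tt}^{-1}\,(t - \mu_{k,t})\bigr),
    \qquad
    h_k(t) = \frac{\pi_k\,\mathcal{N}(t \mid \mu_{k,t}, \Sigma_{k,tt})}{\sum_{j=1}^{K} \pi_j\,\mathcal{N}(t \mid \mu_{j,t}, \Sigma_{j,tt})}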

Finally, you can see the learned trajectory in black.

All code is written in C++ with ROS (Indigo).


Converting a PCL pointcloud to a ROS PCL message / a ROS PCL message to a PCL pointcloud

The relationship between ROS and PCL is a little complicated: at first PCL was part of ROS, then it became a separate project, and the whole time you have needed a serializer and deserializer to send and receive point cloud ROS messages. There are some old, deprecated ways of doing this that still show up in the ROS documentation and confuse programmers, so here is the right API to use:

converting a PCL point cloud to a ROS PCL message

first, include the corresponding headers:
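The code is missing from this archive; a minimal sketch, assuming the pcl_conversions package is available:

    #include <sensor_msgs/PointCloud2.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl_conversions/pcl_conversions.h>

    // PCL point cloud -> ROS message
    pcl::PointCloud<pcl::PointXYZ> cloud;
    sensor_msgs::PointCloud2 msg;
    pcl::toROSMsg(cloud, msg);
    msg.header.frame_id = "base_link";  // example frame name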



converting a ROS PCL message to a PCL pointcloud:
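And the reverse direction, e.g. inside a subscriber callback:

    #include <pcl_conversions/pcl_conversions.h>

    void cloudCallback(const sensor_msgs::PointCloud2ConstPtr &msg)
    {
      pcl::PointCloud<pcl::PointXYZ> cloud;
      pcl::fromROSMsg(*msg, cloud);
    }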

More about PCL point clouds and conversions