Tag Archives: pcl

Simulating A Virtual Camera With OpenCV and Reverse Projection From 2D to 3D

Projection of points from 3D onto the camera plane:

Rays computed from the camera origin in the direction of the points:


The points are stored in a CSV file like this:

The content of “reverse_projection.hpp”:

The content of “transformation.hpp”:


Eigenvectors of PCL pointcloud (moment of inertia)

The snippet below shows how to compute the PCA eigenvectors and eigenvalues (which in this case can be interpreted as the moments of inertia).


A GUI ROS-package for cropping pcl pointcloud with dynamic reconfigure

This ROS package enables you to crop the scene from a Kinect (input topic type: PCL pointcloud). You can also enable fitting a plane to remove the ground from the scene, and by adjusting the parameters you can extract the desired object from the scene. The code is available on my GitHub.

White dots are the original scene and the RGB dots are from the cropped cloud. The values for the volume of the cuboid come from the sliders.

How to create octomap tree from mesh

I was looking for a way to create an octomap tree from an arbitrary mesh file. At first I tried to convert my data into a PCL pointcloud and then convert that into an octomap tree. But the problem was that, for instance, when you have a cuboid mesh you only have 8 vertices, which gives you only 8 points, and the walls between them won’t appear in the final octomap tree. So I found the following solution:

1) First, download the latest version of binvox from http://www.patrickmin.com/binvox/ (more about binvox: the source code is available here, or you can try this).
2) Convert your mesh file into a binvox file, e.g.:

3) Grab binvox2bt.cpp from octomap on GitHub and compile it, then:

4) To visualize the bt file, install octovis:
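The exact commands are not reproduced in this excerpt; assuming binvox’s -d flag for the voxel resolution and an apt-packaged octovis, steps 2–4 look roughly like this:

```shell
# 2) Voxelize the mesh (-d sets the voxel grid resolution)
./binvox -d 256 Dude.ply           # produces Dude.binvox

# 3) Convert the voxel grid into an octomap bonsai tree
./binvox2bt Dude.binvox            # produces Dude.binvox.bt

# 4) Install octovis and visualize the result
sudo apt-get install octovis
octovis Dude.binvox.bt
```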


mesh file (Dude.ply).

octomap bt file (Dude.binvox.bt).

The sample files can be downloaded here: Dude.ply, Dude.binvox.bt.


Autonomous navigation of a two-wheeled differential drive robot in Gazebo

A two-wheeled differential drive robot (with two caster wheels).
List of installed sensors:
• Velodyne VLP-16.
• Velodyne HDL-32E.
• Hokuyo Laser scanner.
• IMU.
• Microsoft Kinect/Asus Xtion Pro.
• RGB Camera.

You can manually control the robot with a joystick controller to map the robot’s environment.
Autonomous navigation is possible by setting a goal pose.


Converting sensor_msgs::PointCloud2 to sensor_msgs::PointCloud and back
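The conversion snippet is not reproduced in this excerpt. Assuming ROS 1, both directions are provided by sensor_msgs/point_cloud_conversion.h; a minimal sketch:

```cpp
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud_conversion.h>

void convertBothWays(const sensor_msgs::PointCloud2 &cloud2_in) {
    // PointCloud2 -> the legacy PointCloud message
    sensor_msgs::PointCloud cloud;
    sensor_msgs::convertPointCloud2ToPointCloud(cloud2_in, cloud);

    // ...and back again
    sensor_msgs::PointCloud2 cloud2_out;
    sensor_msgs::convertPointCloudToPointCloud2(cloud, cloud2_out);
}
```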


Converting pcl::PCLPointCloud2 to pcl::PointCloud and back
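The snippet is not reproduced here; a minimal sketch using pcl::fromPCLPointCloud2 and pcl::toPCLPointCloud2 (declared in pcl/conversions.h):

```cpp
#include <pcl/conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void convertBothWays(const pcl::PCLPointCloud2 &blob) {
    // pcl::PCLPointCloud2 -> templated pcl::PointCloud
    pcl::PointCloud<pcl::PointXYZ> cloud;
    pcl::fromPCLPointCloud2(blob, cloud);

    // ...and back again
    pcl::PCLPointCloud2 blob_out;
    pcl::toPCLPointCloud2(cloud, blob_out);
}
```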



More about PCL point clouds and conversions.

Assembling Laser scans into PCL point cloud Using Gazebo and ROS

For this work, I first loaded the RRBot in Gazebo and launched its joint controllers. Then I sent a periodic signal to the robot so that the laser scanner mounted on it swings.

In the following, I assembled the incoming laser scans using the transformation from tf and created a PCL point cloud.

Install the necessary package:

Get gazebo_ros_demos from GitHub:

Add the path to your ROS_PACKAGE_PATH:
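The commands for the three steps above are not reproduced in this excerpt; assuming installation via apt, they look roughly like this (replace <distro> with your ROS distribution, e.g. indigo):

```shell
# Install the laser_assembler package
sudo apt-get install ros-<distro>-laser-assembler

# Get gazebo_ros_demos from GitHub
git clone https://github.com/ros-simulation/gazebo_ros_demos.git

# Add the path to ROS_PACKAGE_PATH
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:$(pwd)/gazebo_ros_demos
```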

Due to recent updates, you need to make a change in the file rrbot.gazebo: you have to add the line <legacyModeNS>true</legacyModeNS>.

Now run the following:
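The launch commands are not reproduced in this excerpt; assuming the standard gazebo_ros_demos launch files, they would be:

```shell
# Start the RRBot world in Gazebo
roslaunch rrbot_gazebo rrbot_world.launch

# In a second terminal, start the joint controllers
roslaunch rrbot_control rrbot_control.launch
```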

You need to install some plugins for rqt. These plugins will enable you to send messages with rqt.

Now launch rqt_gui:
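Assuming the standard rqt packages (rqt_common_plugins provides the Message Publisher), the install and launch commands would be roughly:

```shell
# Install rqt and its common plugins (replace <distro> with your ROS distro)
sudo apt-get install ros-<distro>-rqt ros-<distro>-rqt-common-plugins

# Launch rqt_gui
rosrun rqt_gui rqt_gui
```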

Set the second joint command topic (/rrbot/joint2_position_controller/command) to

(pi/4)+(1*pi/4)*sin(i/40)*sin(i/40)

with the frequency set to 50 Hz, and set /rrbot/joint1_position_controller/command to 0.

Laser Assembler:

Finally, create a launch file named laser_assembler.launch with the following lines:
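A minimal sketch based on the laser_assembler documentation; the scan topic, fixed frame, and buffer size here are assumptions you may need to adapt:

```xml
<launch>
  <node type="laser_scan_assembler" pkg="laser_assembler" name="my_assembler">
    <!-- topic name assumed; remap to wherever your scans are published -->
    <remap from="scan" to="/rrbot/laser/scan"/>
    <param name="max_scans" type="int" value="400"/>
    <param name="fixed_frame" type="string" value="world"/>
  </node>
</launch>
```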

and run it with roslaunch:

You should get this:

The source code is on my GitHub.

Real time object recognition and 6DOF pose estimation with PCL pointcloud and ROS

Real-time object recognition and 6DOF pose estimation with PCL pointcloud and ROS. This approach is based on normal coherence.

Real-time object recognition and 6DOF pose estimation based on Linemod algorithm with ROS and PCL pointcloud

In this tutorial, I’m gonna show you how to do object recognition and 6DOF pose estimation in real time based on the Linemod algorithm with ROS and PCL pointcloud. First, you need to install ORK (Object Recognition Kitchen):
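The install command is not reproduced in this excerpt; for ROS Indigo the ORK packages would be installed roughly like this (package names assumed; adapt to your distro):

```shell
# Install the Object Recognition Kitchen packages (names assumed for Indigo)
sudo apt-get install ros-indigo-object-recognition-core \
                     ros-indigo-object-recognition-linemod \
                     ros-indigo-object-recognition-msgs \
                     ros-indigo-object-recognition-ros \
                     ros-indigo-object-recognition-ros-visualization
```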

Then add the model of your object for tracking to CouchDB:
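The commands are not reproduced here; the ORK tutorials add an object record and its mesh with object_add.py and mesh_add.py. The object name, description, and mesh file below are placeholders:

```shell
# Create a new object entry in the local CouchDB (name/description are placeholders)
rosrun object_recognition_core object_add.py -n coke -d "A can of coke" --commit

# Attach the mesh to the object id printed by the previous command
rosrun object_recognition_core mesh_add.py <YOUR_OBJECT_ID> coke.stl --commit
```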

You need to install the Kinect driver; if you don’t know how, follow my tutorial about that.

In rqt_reconfigure, select /camera/driver from the drop-down menu. Enable the depth_registration checkbox. Now go back to RViz, and change your PointCloud2 topic to /camera/depth_registered/points.