Tag Archives: ROS

RGBD PCL point cloud from Stereo vision with ROS and OpenCV

In my other tutorial, I showed you how to calibrate your stereo camera. After calibration, we can get a disparity map and an RGBD PCL point cloud from our stereo camera. Cool, huh? 🙂

1) Save the following text under “stereo_usb_cam_stream_publisher.launch”:
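The launch file itself is not reproduced in this archive view; below is a minimal sketch using the usb_cam driver. The device paths, resolution, and the camera_info_url locations are assumptions and must match your own setup and calibration files.

```xml
<launch>
  <!-- Left camera -->
  <node name="left" pkg="usb_cam" type="usb_cam_node" ns="stereo" output="screen">
    <param name="video_device" value="/dev/video0"/>
    <param name="image_width" value="640"/>
    <param name="image_height" value="480"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_frame_id" value="left_camera"/>
    <param name="camera_info_url" value="file://$(env HOME)/.ros/stereo_camera_info/left.yaml"/>
  </node>
  <!-- Right camera -->
  <node name="right" pkg="usb_cam" type="usb_cam_node" ns="stereo" output="screen">
    <param name="video_device" value="/dev/video1"/>
    <param name="image_width" value="640"/>
    <param name="image_height" value="480"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_frame_id" value="right_camera"/>
    <param name="camera_info_url" value="file://$(env HOME)/.ros/stereo_camera_info/right.yaml"/>
  </node>
</launch>
```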

2) Then run the following node to publish both cameras and camera info (calibration matrix)
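Assuming the file above is saved in your current directory, this is simply:

```bash
roslaunch stereo_usb_cam_stream_publisher.launch
```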

3) Run the following to rectify image and compute the disparity map:

Super important: If your USB cameras have some delay, you should add the following: “_approximate_sync:=true”
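Rectification and disparity computation are done by the standard stereo_image_proc pipeline run inside the /stereo namespace; with the approximate-sync flag from the note above appended, the command is:

```bash
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc _approximate_sync:=true
```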

4) Let’s view everything:

Super important: If your USB cameras have some delay, you should add the following: “_approximate_sync:=True _queue_size:=10”
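The usual viewer for this is image_view’s stereo_view, e.g. (with the flags from the note above):

```bash
rosrun image_view stereo_view stereo:=/stereo image:=image_rect_color _approximate_sync:=True _queue_size:=10
```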

5) Running rqt_graph should give you the following:
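rqt_graph is started with:

```bash
rosrun rqt_graph rqt_graph
```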

6) Run the following to configure the matching algorithm parameters:
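The block-matcher parameters of stereo_image_proc can be tuned live with dynamic reconfigure:

```bash
rosrun rqt_reconfigure rqt_reconfigure
```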

7) PCL point cloud in RViz:
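Start RViz and add a PointCloud2 display; with the /stereo namespace used above, the point cloud should be published on /stereo/points2.

```bash
rosrun rviz rviz
```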

Stereo Camera Calibration with ROS and OpenCV

In this tutorial, I’m going to show you stereo camera calibration with ROS and OpenCV. You need a pair of cameras; I bought a pair of this USB webcam, which is okay for this task.

1) Save the following text under “stereo_usb_cam_stream_publisher.launch”:
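As in the stereo-vision post above, the original launch file is not included here; a minimal sketch with the usb_cam driver could look like this (device paths are assumptions, and no camera_info_url is set yet since we are about to calibrate):

```xml
<launch>
  <node name="left" pkg="usb_cam" type="usb_cam_node" ns="stereo" output="screen">
    <param name="video_device" value="/dev/video0"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_frame_id" value="left_camera"/>
  </node>
  <node name="right" pkg="usb_cam" type="usb_cam_node" ns="stereo" output="screen">
    <param name="video_device" value="/dev/video1"/>
    <param name="pixel_format" value="yuyv"/>
    <param name="camera_frame_id" value="right_camera"/>
  </node>
</launch>
```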

2)Then run the following node to publish both cameras.
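From the directory where you saved the file:

```bash
roslaunch stereo_usb_cam_stream_publisher.launch
```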

3)Now call the calibration node:

Super important:

If your USB cameras have some delay, you should add the following: “--no-service-check --approximate=0.1”
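The calibration call with cameracalibrator.py would then look something like this (the checkerboard size and square size are assumptions; use your own board’s values):

```bash
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 \
    left:=/stereo/left/image_raw right:=/stereo/right/image_raw \
    left_camera:=/stereo/left right_camera:=/stereo/right \
    --no-service-check --approximate=0.1
```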

4) Pose the chessboard in different positions, and then click on the calibrate button and then the save button.

5) The results will be stored at /tmp/calibrationdata.tar.gz. Unzip the file and save the contents under “/home/<username>/.ros/stereo_camera_info”.
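Something along these lines (the tarball also contains the captured images, and the YAML file names can differ between camera_calibration versions):

```bash
mkdir -p ~/.ros/stereo_camera_info
tar -xvzf /tmp/calibrationdata.tar.gz -C ~/.ros/stereo_camera_info
```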

A GUI ROS-package for cropping pcl pointcloud with dynamic reconfigure

This ROS package enables you to crop the scene from a Kinect (input topic type: PCL point cloud). You can even enable fitting a plane to remove the ground from the scene, and by adjusting the parameters you can extract the desired object from the scene. The code is available on my GitHub.
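The package itself is on GitHub; conceptually, the cropping boils down to a PCL CropBox filter whose min/max corners come from the dynamic_reconfigure sliders, optionally followed by RANSAC plane removal for the ground. A minimal sketch of that idea (not the actual package code):

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/filters/crop_box.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>

// Crop the cloud to a cuboid and (optionally) remove the dominant plane (the ground).
pcl::PointCloud<pcl::PointXYZRGB>::Ptr
cropAndRemoveGround(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& input,
                    const Eigen::Vector4f& min_pt, const Eigen::Vector4f& max_pt,
                    bool remove_ground)
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::CropBox<pcl::PointXYZRGB> box;
  box.setInputCloud(input);
  box.setMin(min_pt);   // cuboid corners coming from the reconfigure sliders
  box.setMax(max_pt);
  box.filter(*cropped);

  if (remove_ground && !cropped->empty())
  {
    pcl::SACSegmentation<pcl::PointXYZRGB> seg;
    pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
    pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
    seg.setModelType(pcl::SACMODEL_PLANE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.01);
    seg.setInputCloud(cropped);
    seg.segment(*inliers, *coeffs);

    pcl::ExtractIndices<pcl::PointXYZRGB> extract;
    extract.setInputCloud(cropped);
    extract.setIndices(inliers);
    extract.setNegative(true);   // keep everything except the fitted plane
    extract.filter(*cropped);
  }
  return cropped;
}
```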

White dots are the original scene and RGB dots are from the cropped cloud. The values for the cropping cuboid’s volume come from the sliders.

Car Detection Using Single Shot MultiBox Detector (SSD Convolutional Neural Network) in ROS Using Caffe

This work is similar to the previous work here, but this time I used the Single Shot MultiBox Detector (SSD) for car detection. Installation is similar; clone SSD Caffe:
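SSD lives in the ssd branch of Wei Liu’s Caffe fork:

```bash
git clone https://github.com/weiliu89/caffe.git ssd-caffe
cd ssd-caffe
git checkout ssd
```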

add the following lines to your Makefile.config

and build it:
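A typical Caffe build (job count is up to you):

```bash
cp Makefile.config.example Makefile.config   # then apply your Makefile.config edits
make -j8
make pycaffe
```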

Use video_stream_opencv to stream your video:
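video_stream_opencv ships launch files where you point video_stream_provider at a video file or device; argument names can differ between versions, but a typical invocation is:

```bash
roslaunch video_stream_opencv camera.launch camera_name:=camera video_stream_provider:=/path/to/video.mp4
```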

Download the trained model from here and put it in the model directory.

In my ssd.launch, I have changed my trained network into:

Now run the following to open rviz:

In RViz, go to add a panel, and add integrated_viewer > ImageViewerPlugin.

Now correct the topic in the added panel and you should see detected cars:

Car Detection Using Fast Region-based Convolutional Networks (R-CNN) in ROS with Caffe

To run this, you need to install Fast-RCNN and Autoware. Just in case you get an error regarding hdf5 when making Fast-RCNN, add the following lines to your Makefile.config:
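On Ubuntu the usual hdf5 fix is to point the build at the serial hdf5 headers and libraries; the exact paths below are what I would expect on a stock Ubuntu install and may differ on your system:

```make
# Makefile.config: help the build find libhdf5 on Ubuntu
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial
```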

Now run the following command to start:

If you get an error like:

That means your graphics card is not ready or accessible. In my case, every time I suspend my notebook I get that error and need a restart :/

Now you should publish your video stream on the topic “image_raw”. For that purpose I used video_stream_opencv. Here is my launch file:
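My original launch file is not reproduced here; a minimal sketch that streams a video file with video_stream_opencv and relays the output to /image_raw could look like this (the video path, launch arguments, and the relay are assumptions about the package layout):

```xml
<launch>
  <include file="$(find video_stream_opencv)/launch/camera.launch">
    <arg name="camera_name" value="camera"/>
    <arg name="video_stream_provider" value="/path/to/video.mp4"/>
  </include>
  <!-- the detector listens on /image_raw -->
  <node pkg="topic_tools" type="relay" name="relay_image" args="/camera/image_raw /image_raw"/>
</launch>
```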

Now run the following to open rviz:

In RViz, go to add a panel, and add integrated_viewer > ImageViewerPlugin.

Now correct the topic in the added panel and you should see detected cars:

Octomap Explained

In this tutorial, I explain the concept, probabilistic sensor fusion model and the sensor model used in Octomap library.

Related publication: OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees

1) OctoMap Volumetric Model

An octree storing free (shaded white) and occupied (black) cells. Image is taken from Ref [1].


2) Probabilistic Sensor Fusion Model
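The fusion model used in the paper is the standard recursive binary Bayes filter applied per voxel n; with measurements z_{1:t} it reads:

```latex
P(n \mid z_{1:t}) =
\left[ 1 + \frac{1 - P(n \mid z_t)}{P(n \mid z_t)}
         \cdot \frac{1 - P(n \mid z_{1:t-1})}{P(n \mid z_{1:t-1})}
         \cdot \frac{P(n)}{1 - P(n)} \right]^{-1}
```

With the common uniform prior P(n) = 0.5, this becomes a simple addition in log-odds form, L(n | z_{1:t}) = L(n | z_{1:t-1}) + L(n | z_t), where L(x) = log[ P(x) / (1 - P(x)) ].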

3) Sensor Model for Laser Range Data

Image is taken from Ref [1].
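For laser range data, the inverse sensor model in the paper assigns a fixed log-odds update to every voxel touched by a beam, and the per-voxel estimate is clamped so the map can still adapt to changes:

```latex
L(n \mid z_t) =
\begin{cases}
 l_{\mathrm{occ}}  & \text{if the beam endpoint lies in voxel } n \\
 l_{\mathrm{free}} & \text{if the beam traverses voxel } n
\end{cases}
\qquad
L(n \mid z_{1:t}) = \max\!\bigl(\min\!\bigl(L(n \mid z_{1:t-1}) + L(n \mid z_t),\; l_{\max}\bigr),\; l_{\min}\bigr)
```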

 

Expectation Maximization algorithm to obtain Gaussian mixture models for ROS

I found really good code on GitHub for fitting a Gaussian Mixture Model (GMM) with Expectation Maximization (EM) for ROS. There are many parameters that you can change. Some of the most important ones are:

To find the optimal number of components, it uses the Bayesian information criterion (BIC). There are other methods to find the optimal number of components: minimum description length (MDL), Akaike information criterion (AIC), and minimum message length (MML).
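For reference, with k free parameters, n data points, and maximized likelihood L̂, BIC is

```latex
\mathrm{BIC} = k \ln n - 2 \ln \hat{L}
```

and the number of mixture components with the lowest BIC is selected.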

Here is my code for generating two Gaussians and sending them to this node:
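My original snippet is not shown in this archive view; below is a minimal sketch of the sampling part only (the means and covariances are arbitrary, and the message/topic the GMM node expects is not covered here):

```cpp
#include <array>
#include <cstddef>
#include <random>
#include <vector>

// Draw samples from two 2-D Gaussians (axis-aligned covariances for simplicity).
std::vector<std::array<double, 2>> sampleTwoGaussians(std::size_t n_per_component)
{
  std::mt19937 gen(std::random_device{}());
  std::normal_distribution<double> g1_x(0.0, 0.5), g1_y(0.0, 0.5);   // component 1
  std::normal_distribution<double> g2_x(4.0, 1.0), g2_y(3.0, 0.3);   // component 2

  std::vector<std::array<double, 2>> samples;
  samples.reserve(2 * n_per_component);
  for (std::size_t i = 0; i < n_per_component; ++i)
  {
    samples.push_back({g1_x(gen), g1_y(gen)});
    samples.push_back({g2_x(gen), g2_y(gen)});
  }
  return samples;
}
```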

 

and you need to put them into a message in order to send them to the node:

 

and the results are what we expect:

It also makes it possible to visualize the data in RViz, but first you have to publish your tf data and set the frame name and topic names correctly in gmm_rviz_converter.h

and add a MarkerArray in RViz and set the topic to “gmm_rviz_converter_output”.

 


 

Ackermann steering car robot model with simulation in Gazebo

Most of the wheeled robots in ROS use move_base to move the robot. The move_base geometry model is based on a differential drive, which basically transforms a velocity command (twist message) into commands for rotating the left and the right wheels at different speeds, which enables the robot to turn right or left or go straight.
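Concretely, for a wheel separation L, a twist with linear velocity v and angular velocity ω maps to wheel speeds

```latex
v_{\text{left}} = v - \frac{\omega L}{2}, \qquad v_{\text{right}} = v + \frac{\omega L}{2}
```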

Differential drive wheel model. Image Courtesy

But cars have Ackermann steering geometry.

 

Ackermann steering geometry. Image Courtesy.

I was looking for a car robot model with such geometry so that I could test it in Gazebo and ROS. I didn’t find exactly what I was looking for, but I found several packages, and with some adaptations I managed to build and control a car with Ackermann steering geometry with a joystick.

As you can see in the following graph, I’m reading my joystick data and translating it into twist messages (the topic is cmd_vel). Then I translate these messages into Ackermann messages (the topic is ackermann_cmd), as sketched below.
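The translation itself is essentially one line once the wheelbase is fixed; here is a sketch of such a converter node (the wheelbase value is an assumption, and the topic names follow the graph above), using the kinematic bicycle-model steering angle delta = atan(L·ω / v):

```cpp
#include <cmath>
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <ackermann_msgs/AckermannDriveStamped.h>

ros::Publisher ackermann_pub;
const double kWheelbase = 1.0;  // metres; assumed, use your robot's value

// Convert a twist (cmd_vel) into an Ackermann command (ackermann_cmd).
void twistCallback(const geometry_msgs::Twist& twist)
{
  ackermann_msgs::AckermannDriveStamped cmd;
  cmd.header.stamp = ros::Time::now();
  cmd.drive.speed = twist.linear.x;
  cmd.drive.steering_angle =
      (std::fabs(twist.linear.x) > 1e-6)
          ? std::atan(kWheelbase * twist.angular.z / twist.linear.x)
          : 0.0;
  ackermann_pub.publish(cmd);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "twist_to_ackermann");
  ros::NodeHandle nh;
  ackermann_pub = nh.advertise<ackermann_msgs::AckermannDriveStamped>("ackermann_cmd", 1);
  ros::Subscriber sub = nh.subscribe("cmd_vel", 1, twistCallback);
  ros::spin();
  return 0;
}
```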

 


The robot in the video was downloaded from here, with some modifications for this work.

Autonomous navigation of two wheels differential drive robot in Gazebo

A two-wheel differential drive robot (with two caster wheels).
List of installed sensors:
• Velodyne VLP-16.
• Velodyne HDL-32E.
• Hokuyo Laser scanner.
• IMU.
• Microsoft Kinect/Asus Xtion Pro.
• RGB Camera.

You can manually control the robot with a joystick controller to map the robot’s environment.
Autonomous navigation is possible by setting a goal pose.

 

Converting sensor_msgs::PCLPointCloud2 to sensor_msgs::PointCloud and reverse
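The body of this post is not included in this archive view. The conversions it refers to are typically done with sensor_msgs/point_cloud_conversion.h (PointCloud2 ↔ the legacy PointCloud message) and pcl_conversions (PointCloud2 ↔ pcl::PCLPointCloud2); a minimal sketch, not the post’s original code:

```cpp
#include <sensor_msgs/PointCloud.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud_conversion.h>
#include <pcl_conversions/pcl_conversions.h>

void convertExamples(const sensor_msgs::PointCloud2& cloud2)
{
  // sensor_msgs::PointCloud2 -> sensor_msgs::PointCloud (legacy message) and back
  sensor_msgs::PointCloud cloud1;
  sensor_msgs::convertPointCloud2ToPointCloud(cloud2, cloud1);

  sensor_msgs::PointCloud2 cloud2_again;
  sensor_msgs::convertPointCloudToPointCloud2(cloud1, cloud2_again);

  // sensor_msgs::PointCloud2 <-> pcl::PCLPointCloud2
  pcl::PCLPointCloud2 pcl_cloud;
  pcl_conversions::toPCL(cloud2, pcl_cloud);

  sensor_msgs::PointCloud2 back_to_msg;
  pcl_conversions::fromPCL(pcl_cloud, back_to_msg);
}
```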