# Roll, pitch, yaw using Eigen and KDL Frame

From Eigen documentation:

If you are working with OpenGL 4×4 matrices then Affine3f and Affine3d are what you want.
Since Eigen defaults to column-major storage, you can directly use the Transform::data() method to pass your transformation matrix to OpenGL.

You can construct a Transform from an abstract transformation like this:
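A sketch following the example in the Eigen documentation (assuming `<Eigen/Geometry>` is included and `angle` and `axis` are already defined):

```cpp
// construct the transform directly from an angle-axis rotation
Eigen::Affine3f t(Eigen::AngleAxisf(angle, axis));
```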

or like this:
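Equivalently, default-construct and then assign (same assumptions: `<Eigen/Geometry>` included, `angle` and `axis` defined):

```cpp
Eigen::Affine3f t;
t = Eigen::AngleAxisf(angle, axis);
```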

But note that unfortunately, because of how C++ works, you can not do this:
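The direct-initialization form the documentation warns about looks like this:

```cpp
// does NOT compile: this form of initialization is not supported
Eigen::Affine3f t = Eigen::AngleAxisf(angle, axis);
```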

and with KDL Frame:
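A sketch with KDL (from the orocos_kdl package), where the rotation of a `KDL::Frame` is set from roll, pitch, and yaw; the angle values here are illustrative:

```cpp
#include <kdl/frames.hpp>

double roll = 0.1, pitch = 0.2, yaw = 0.3;

// build a frame whose rotation is given by fixed-axis RPY angles
KDL::Frame f(KDL::Rotation::RPY(roll, pitch, yaw));

// read the angles back out of the rotation
double r, p, y;
f.M.GetRPY(r, p, y);
```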

# ROS packages for Dynamic Time Warping

Dynamic Time Warping (DTW) is a method for aligning two sequences under certain constraints, for instance, two trajectories that are very similar but where one was performed over a longer time. The alignment should be done in a way that minimizes the distance between the two sequences. I have done DTW between two time series in Python. I also found a good C++ library for fast Dynamic Time Warping and wrote a wrapper to turn it into a ROS package. You can download my code on GitHub.

The number of possible warping paths through the grid grows exponentially with the length of the sequences. The image is taken from [1].

A good warping function should have the following characteristics:

• monotonicity: The alignment path does not go back in the “time” index.

• continuity: The alignment path does not jump in the “time” index.

• boundary conditions: The alignment path starts at the bottom left and ends at the top right.

• warping window: A good alignment path is unlikely to wander too far from the diagonal.

• slope constraint: The alignment path should not be too steep or too shallow.

The images illustrating these constraints are taken from [1].

References: [1]

# Expectation Maximization algorithm to obtain Gaussian mixture models for ROS

I found a really good code on GitHub for fitting a Gaussian Mixture Model (GMM) with Expectation Maximization (EM) in ROS. There are many parameters that you can change.

To find the optimal number of components, it uses the Bayesian information criterion (BIC). There are other methods for finding the optimal number of components, such as minimum description length (MDL), the Akaike information criterion (AIC), and minimum message length (MML).

Here is my code for generating samples from two Gaussians and sending them to this node:

and you need to pack them into a message to send them to the node:

and the results are what we expect:

It also makes it possible to visualize the data in RViz, but first you have to publish your tf data and set the frame name and the topic names correctly in gmm_rviz_converter.h, then add a MarkerArray in RViz and set the topic to “gmm_rviz_converter_output”.

References: [1], [2]

# Ackermann steering car robot model with simulation in Gazebo

Most of the wheeled robots in ROS use move_base to move the robot. The move_base geometry model is based on a differential drive, which basically transforms a velocity command (a Twist message) into commands for rotating the left and the right wheels at different speeds, which enables the robot to turn right or left or go straight.

Differential drive wheel model. Image Courtesy

But cars have Ackermann steering geometry.

Ackermann steering geometry. Image Courtesy.

I was looking for a car robot model with such geometry so I could test it in Gazebo and ROS. I didn’t find exactly what I was looking for, but I found several packages, and with some adaptations I managed to build and control a car with Ackermann steering geometry using a joystick.

As you can see in the following graph, I read my joystick data and translate it into Twist messages (on the topic cmd_vel). Then I translate these messages into Ackermann messages (on the topic ackermann_cmd).


The robot model in the video was downloaded from here, with some modifications for this work.

# Autonomous navigation of two wheels differential drive robot in Gazebo

Two-wheel differential drive robot (with two caster wheels).
List of installed sensors:
• Velodyne VLP-16.
• Velodyne HDL-32E.
• Hokuyo Laser scanner.
• IMU.
• Microsoft Kinect/Asus Xtion Pro.
• RGB Camera.

You can manually control the robot with a joystick controller to map the robot's environment.
Autonomous navigation is possible by setting a goal pose.

# Converting pcl::PCLPointCloud2 to pcl::PointCloud and reverse

more about pcl point cloud and conversion
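A sketch of both directions using PCL's conversion helpers, `pcl::toPCLPointCloud2` and `pcl::fromPCLPointCloud2`:

```cpp
#include <pcl/PCLPointCloud2.h>
#include <pcl/conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int main() {
    pcl::PointCloud<pcl::PointXYZ> cloud;
    cloud.push_back(pcl::PointXYZ(1.0f, 2.0f, 3.0f));

    // pcl::PointCloud -> pcl::PCLPointCloud2
    pcl::PCLPointCloud2 blob;
    pcl::toPCLPointCloud2(cloud, blob);

    // pcl::PCLPointCloud2 -> pcl::PointCloud
    pcl::PointCloud<pcl::PointXYZ> restored;
    pcl::fromPCLPointCloud2(blob, restored);
    return 0;
}
```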

# Assembling Laser scans into PCL point cloud Using Gazebo and ROS

For this work, I first loaded the RRBot in Gazebo and launched its joint controllers. Then I sent a periodic signal to the robot so that the laser scanner mounted on it swings.

In the following, I assembled the incoming laser scans, using the transformation from tf, into a PCL point cloud.

Install the necessary package:
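The assembler lives in the laser_assembler package; assuming a Debian-based install (replace kinetic with your ROS distribution):

```shell
sudo apt-get install ros-kinetic-laser-assembler
```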

Set the second joint value (topic /rrbot/joint2_position_controller/command) to (pi/4)+(1*pi/4)*sin(i/40)*sin(i/40), and set the publishing frequency to 50 Hz.
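A sketch of a node publishing that signal (the topic name is taken from above; the loop counter i plays the role of time):

```cpp
#include <cmath>
#include <ros/ros.h>
#include <std_msgs/Float64.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "joint2_swinger");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<std_msgs::Float64>(
        "/rrbot/joint2_position_controller/command", 1);
    ros::Rate rate(50);  // 50 Hz, as described above
    for (int i = 0; ros::ok(); ++i) {
        std_msgs::Float64 msg;
        double s = std::sin(i / 40.0);
        msg.data = (M_PI / 4.0) + (M_PI / 4.0) * s * s;  // (pi/4)+(pi/4)*sin^2(i/40)
        pub.publish(msg);
        rate.sleep();
    }
    return 0;
}
```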

Laser Assembler:
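The assembler buffers scans and hands them back through the assemble_scans service; a client sketch following the pattern in the laser_assembler documentation (the output topic name is my own choice):

```cpp
#include <laser_assembler/AssembleScans.h>
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "assemble_client");
    ros::NodeHandle nh;
    ros::service::waitForService("assemble_scans");
    ros::ServiceClient client =
        nh.serviceClient<laser_assembler::AssembleScans>("assemble_scans");
    ros::Publisher pub =
        nh.advertise<sensor_msgs::PointCloud>("assembled_cloud", 1);

    laser_assembler::AssembleScans srv;
    srv.request.begin = ros::Time(0);    // everything buffered so far
    srv.request.end   = ros::Time::now();
    if (client.call(srv)) {
        ROS_INFO("Got cloud with %lu points",
                 (unsigned long)srv.response.cloud.points.size());
        pub.publish(srv.response.cloud);
    }
    return 0;
}
```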

Finally, run:

Create a launch file and save the following lines to it and run it with roslaunch:
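A launch file along the lines of the laser_assembler documentation; the scan topic and fixed frame below are guesses for the RRBot setup, so adjust them to your robot:

```xml
<launch>
  <node type="laser_scan_assembler" pkg="laser_assembler" name="my_assembler">
    <remap from="scan" to="/rrbot/laser/scan"/>
    <param name="max_scans" type="int" value="400"/>
    <param name="fixed_frame" type="string" value="world"/>
  </node>
</launch>
```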

The source code is available on my GitHub.

# Control your robot with a joystick in ROS

In other tutorials, I showed how to get access to the joystick and how to write code with it. In this tutorial, I'm gonna show you how to do that without writing a single line of code.

First, install the required packages:
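Assuming a Debian-based setup (package names may vary by distribution; joy_teleop comes from the teleop_tools stack, and kinetic should be replaced with your ROS distribution):

```shell
sudo apt-get install ros-kinetic-joy ros-kinetic-teleop-tools
```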

Now call the following
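Run the joystick driver node from the joy package:

```shell
rosrun joy joy_node
```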

This will publish the topic “/joy”, which is of type “sensor_msgs/Joy”.

Now you need to create your desired message from that; for instance, if you want to move your robot you need to create a “Twist” message and publish it over /cmd_vel. To do that, create a yaml file called “joystick_param.yaml”, put the following in the file, and save it:
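A sketch of what such a file can look like, following the joy_teleop configuration format (the axis numbers, scales, button index, and the chatter string are illustrative; check your own controller's mapping):

```yaml
teleop:
  # publish a Twist on /cmd_vel driven by the left stick
  cmd_vel:
    type: topic
    message_type: geometry_msgs/Twist
    topic_name: cmd_vel
    axis_mappings:
      - axis: 1            # left stick, up/down
        target: linear.x
        scale: 0.8
      - axis: 0            # left stick, left/right
        target: angular.z
        scale: 1.5
  # publish a fixed string on /chatter while button 0 is held
  chatter:
    type: topic
    message_type: std_msgs/String
    topic_name: chatter
    deadman_buttons: [0]
    message_value:
      - target: data
        value: "hello"
```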

Now load it to ROS param:
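Load the file onto the parameter server:

```shell
rosparam load joystick_param.yaml
```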

Then you can call joy_teleop.py; it will read the values that you set under teleop and publish /chatter and /cmd_vel.

Alternatively, you can just call the following, which does all of the above at once: