
Car Detection Using Single Shot MultiBox Detector (SSD Convolutional Neural Network) in ROS Using Caffe

This work is similar to the previous work here, but this time I used the Single Shot MultiBox Detector (SSD) for car detection. Installation is similar; first, clone SSD Caffe:
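Assuming the standard weiliu89 fork (its ssd branch is the official SSD implementation), the clone step is something like:

    git clone https://github.com/weiliu89/caffe.git ssd-caffe
    cd ssd-caffe
    git checkout ssd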

Add the following lines to your Makefile.config
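These are presumably the same HDF5 paths as in the Fast-RCNN post below; on Ubuntu:

    INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
    LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial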

and build it:
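Caffe's usual build targets, including the Python bindings that the ROS node needs, are:

    make -j8
    make py
    make test -j8   # optional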

I used video_stream_opencv to stream the video:
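A minimal launch file for it, based on the package's camera.launch (the video path is a placeholder):

    <launch>
      <!-- publish a video file as a ROS image stream; replace the path -->
      <include file="$(find video_stream_opencv)/launch/camera.launch">
        <arg name="camera_name" value="camera" />
        <arg name="video_stream_provider" value="/path/to/cars.mp4" />
        <arg name="fps" value="30" />
      </include>
    </launch>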

Download the trained model from here and put it in the model directory.

In my ssd.launch, I changed the trained network to:
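The parameter names below are an assumption following Autoware's SSD detector node, and the paths are placeholders (the file names are those of the released SSD300 VOC0712 model):

    <!-- assumed Autoware parameter names; adjust the paths to your model directory -->
    <arg name="network_definition_file" default="/path/to/models/deploy.prototxt" />
    <arg name="pretrained_model_file" default="/path/to/models/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel" />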

Now run the following to open rviz:
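    rosrun rviz rviz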

In rviz, go to Panels > Add New Panel and add integrated_viewer > ImageViewerPlugin.

Now set the correct topic in the added panel and you should see the detected cars:

Car Detection Using Fast Region-based Convolutional Networks (R-CNN) in ROS with Caffe

To run this, you need to install Fast-RCNN and Autoware. In case you get an error regarding HDF5 when making Fast-RCNN, add the following lines to your Makefile.config
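On Ubuntu, the usual fix is to point Caffe at the serial HDF5 layout:

    INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
    LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial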

Now run the following command to start:
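The exact command depends on your Autoware version; purely as a hypothetical example:

    # hypothetical package/launch file names; check your Autoware checkout
    roslaunch cv_tracker rcnn.launch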

If you get an error like:
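With Caffe this is typically a CUDA check failure along the lines of:

    Check failed: error == cudaSuccess (30 vs. 0)  unknown error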

that means your graphics card is not ready or accessible. In my case, every time I suspend my notebook I get that error and need a restart. :/

Now you should publish your video stream on the topic “image_raw”; for that purpose, I used video_stream_opencv. Here is my launch file:
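A minimal version can reuse the package's camera.launch; the video path is a placeholder, and since camera.launch publishes under the camera name, a relay republishes the stream on image_raw:

    <launch>
      <!-- publish a video file as a ROS image stream; replace the path -->
      <include file="$(find video_stream_opencv)/launch/camera.launch">
        <arg name="camera_name" value="camera" />
        <arg name="video_stream_provider" value="/path/to/cars.mp4" />
        <arg name="fps" value="30" />
      </include>
      <!-- camera.launch publishes /camera/image_raw; republish it as /image_raw -->
      <node pkg="topic_tools" type="relay" name="image_relay"
            args="/camera/image_raw /image_raw" />
    </launch>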

Now run the following to open rviz:
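    rosrun rviz rviz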

In rviz, go to Panels > Add New Panel and add integrated_viewer > ImageViewerPlugin.

Now set the correct topic in the added panel and you should see the detected cars:

Ackermann steering car robot model with simulation in Gazebo

Most of the wheeled robots in ROS use move_base to move the robot. The move_base geometry model is based on a differential drive, which transforms a velocity command (a Twist message) into commands that rotate the left and right wheels at different speeds, enabling the robot to turn right or left or go straight.

Differential drive wheel model. Image Courtesy
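Concretely, for a wheel separation L, the Twist fields map to wheel speeds as in this Python sketch (the value of L is a placeholder):

    # Differential-drive mapping implied by a Twist command.
    # v: forward speed (linear.x), w: yaw rate (angular.z), L: wheel separation.
    def diff_drive(v, w, L=0.5):
        v_left = v - w * L / 2.0   # left wheel forward speed
        v_right = v + w * L / 2.0  # right wheel forward speed
        return v_left, v_right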

But cars have Ackermann steering geometry.

 

Ackermann steering geometry. Image Courtesy.

I was looking for a car robot model with such geometry so I could test it in Gazebo and ROS. I didn’t find exactly what I was looking for, but I found several packages, and with some adaptations I managed to build and control a car with Ackermann steering geometry using a joystick.

As you can see in the following graph, I read the joystick data and translate it into Twist messages (the topic is cmd_vel). Then I translate these messages into Ackermann messages (the topic is ackermann_cmd).

 

Node graph: joystick → cmd_vel (Twist) → ackermann_cmd (Ackermann).
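The heart of the second translation is one line of geometry: the steering angle that yields the commanded yaw rate at the commanded speed, given the wheelbase. A minimal rospy sketch (the wheelbase value is a placeholder; the topic names match the graph above):

    # Sketch: translate Twist (cmd_vel) into AckermannDriveStamped (ackermann_cmd).
    # Assumes rospy and ackermann_msgs are installed; WHEELBASE is a placeholder.
    import math
    import rospy
    from geometry_msgs.msg import Twist
    from ackermann_msgs.msg import AckermannDriveStamped

    WHEELBASE = 1.0  # metres; use your robot's real axle-to-axle distance

    def on_twist(twist):
        msg = AckermannDriveStamped()
        msg.header.stamp = rospy.Time.now()
        msg.drive.speed = twist.linear.x
        if twist.linear.x != 0.0:
            # steering angle that produces the commanded yaw rate at this speed
            msg.drive.steering_angle = math.atan(WHEELBASE * twist.angular.z / twist.linear.x)
        pub.publish(msg)

    rospy.init_node('cmd_vel_to_ackermann')
    pub = rospy.Publisher('ackermann_cmd', AckermannDriveStamped, queue_size=1)
    rospy.Subscriber('cmd_vel', Twist, on_twist)
    rospy.spin()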

The robot in the video was downloaded from here, with some modifications for this work.

Autonomous navigation of a two-wheel differential drive robot in Gazebo

A two-wheel differential drive robot (with two caster wheels).
List of installed sensors:
• Velodyne VLP-16.
• Velodyne HDL-32E.
• Hokuyo Laser scanner.
• IMU.
• Microsoft Kinect/Asus Xtion Pro.
• RGB Camera.

You can manually control the robot with a joystick controller to map the robot’s environment.
Autonomous navigation is possible by setting a goal pose.
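With a standard move_base setup, the goal pose can be set with rviz’s 2D Nav Goal tool, or published from the command line (the frame and coordinates here are placeholders):

    rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped \
      '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5}, orientation: {w: 1.0}}}'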