# Sampling-based optimisation planner with a signed-distance-field cost map

Rapidly-exploring random trees (RRT) and their variants are a powerful solution to motion-planning problems in robotics, but they struggle to find an optimal solution, and the generated path is usually jerky with redundant movements. Sampling-based optimisation planners combine the robustness of RRT with the possibility of imposing a cost function. In this work, I set a cost function that combines the length and the clearance of the trajectory.

To start, I define the environment as a black-and-white image (black: obstacles, white: collision-free area). From this image I obtain a signed distance field, in which every pixel shows the distance to the closest obstacle (the closer, the darker).

Image 1: environment representation, white free area, black obstacles. Image 2: cost map obtained from the signed distance field.
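The SDF code itself is not listed here; the idea can be sketched with a brute-force version on a small binary occupancy grid (the function name and grid layout below are my own, not from the original code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Brute-force signed distance field for a binary occupancy grid:
// free cells get +distance to the nearest obstacle cell, obstacle
// cells get -distance to the nearest free cell (the usual convention).
std::vector<double> signedDistanceField(const std::vector<int>& occ,
                                        int w, int h) {
    std::vector<double> sdf(occ.size(), 0.0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            const bool inside = occ[y * w + x] != 0;
            double best = 1e9;
            // Scan every cell of the opposite type and keep the closest.
            for (int v = 0; v < h; ++v)
                for (int u = 0; u < w; ++u)
                    if ((occ[v * w + u] != 0) != inside)
                        best = std::min(best, std::hypot(double(u - x),
                                                         double(v - y)));
            sdf[y * w + x] = inside ? -best : best;
        }
    return sdf;
}
```

This O(n²) scan is only for illustration; in practice a linear-time distance transform (e.g. OpenCV's `cv::distanceTransform` on the image and its inverse) produces the same field efficiently.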

After this, I used RRT-Connect to find a collision-free path between start and goal; as can be seen, the trajectory has unusual turns.

Next, I defined a cost function that combines clearance and length but puts more weight on clearance, so the planner avoids the obstacles as much as it can while still trying to shorten the trajectory. The result is a path with high clearance at the cost of some extra length (longer but safe).
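As a sketch of how such a weighted cost might look (the names and the exponential clearance penalty are illustrative assumptions, not the exact cost used in this project):

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

struct Pt { double x, y; };

// Weighted trajectory cost: wLen scales the path length, wClear scales
// a clearance penalty taken from the signed distance field (sdf > 0 in
// free space, so exp(-sdf) grows as a waypoint approaches an obstacle).
double pathCost(const std::vector<Pt>& path,
                const std::function<double(const Pt&)>& sdf,
                double wLen, double wClear) {
    double cost = 0.0;
    for (std::size_t i = 1; i < path.size(); ++i)
        cost += wLen * std::hypot(path[i].x - path[i - 1].x,
                                  path[i].y - path[i - 1].y);
    for (const Pt& p : path)
        cost += wClear * std::exp(-sdf(p));
    return cost;
}
```

Raising `wClear` relative to `wLen` yields the "longer but safe" behaviour; raising `wLen` yields the short paths described below.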

Finally, I changed the cost function to put more weight on the length, so the planner still avoids the obstacles but minimizes the trajectory length as much as possible.

# Motion planning under Cartesian constraints

In this task, an orientation constraint is imposed on the planner so that the orientation of the end effector stays fixed during the task. The planner checks for self-collisions and for collisions between the robot and the environment.
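A minimal way to express such a constraint check, assuming end-effector orientations are stored as unit quaternions (the struct and tolerance handling are illustrative, not the planner's actual interface):

```cpp
#include <cmath>

struct Quat { double w, x, y, z; };  // assumed unit-norm

// Angular distance between two unit quaternions, in radians.
double quatAngle(const Quat& a, const Quat& b) {
    double dot = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
    dot = std::fmin(1.0, std::fabs(dot));  // |dot| handles the double cover
    return 2.0 * std::acos(dot);
}

// A sampled state satisfies the orientation constraint if the
// end-effector orientation stays within tolRad of the fixed reference.
bool orientationOk(const Quat& current, const Quat& ref, double tolRad) {
    return quatAngle(current, ref) <= tolRad;
}
```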

The corresponding paper was published at IROS 2015.

# Manipulability Map

It often happens that you want to know in which areas your robot has high manipulability and in which areas it has low manipulability. There are several approaches to this. In this work, I implemented the Yoshikawa manipulability index: I discretized the joint space for each joint, computed the manipulability index at every sample, and dumped the data. Then I normalized the data and used an OpenCV color map for better visualization.
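For a 2-link planar arm the Yoshikawa index w = sqrt(det(J Jᵀ)) has a closed form, which makes a compact self-contained illustration (the actual implementation evaluates the full LWA 4D Jacobian; the function below is my own reduction):

```cpp
#include <cmath>

// Yoshikawa manipulability index w = sqrt(det(J * J^T)).
// For a 2-link planar arm the position Jacobian is square, so
// w = |det(J)|, which simplifies to l1 * l2 * |sin(q2)|.
double manipulability2Link(double l1, double l2, double q1, double q2) {
    const double j11 = -l1 * std::sin(q1) - l2 * std::sin(q1 + q2);
    const double j12 = -l2 * std::sin(q1 + q2);
    const double j21 =  l1 * std::cos(q1) + l2 * std::cos(q1 + q2);
    const double j22 =  l2 * std::cos(q1 + q2);
    return std::fabs(j11 * j22 - j12 * j21);
}
```

Sweeping the joints over their discretized ranges and normalizing the resulting values gives exactly the data that is fed to the color map. Note that w goes to zero at singularities (here, when the arm is fully stretched or folded).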

In the following, you can see a pair of Schunk LWA 4D manipulators and the corresponding manipulability map around the right arm. In this case, the COLORMAP_HOT profile has been used for visualization, which means yellow areas have high manipulability and brown areas have lower manipulability.

# Learning From Demonstration

In this work, I first recognize the object in the scene and estimate its 6-DOF pose. Then I track the object using a particle filter. RGB-D data is acquired from a Kinect 2 and converted into a PCL point cloud.
I demonstrate a task several times to the robot. In this case, I move an object (a detergent bottle) along an “S”-shaped path to obtain an “S”-shaped trajectory.

In the following, you can see the result of repeating the job six times. The trajectories are very noisy, and each repetition gives a new “S” shape.
Then I compute a GMM (Gaussian mixture model) of the trajectory in each dimension. The number of kernels can be set by the user or determined automatically based on the BIC (Bayesian information criterion).
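The BIC itself is simple to state: BIC = k·ln(n) − 2·ln(L̂), where k is the number of free parameters of the mixture, n the number of samples, and L̂ the maximised likelihood; the kernel count with the lowest BIC wins. A minimal sketch (the function name is mine):

```cpp
#include <cmath>

// Bayesian information criterion: lower is better. Adding kernels
// raises the log-likelihood but is penalised through nParams, so the
// minimum of the BIC balances fit quality against model complexity.
double bic(double logLik, int nParams, int nSamples) {
    return nParams * std::log(double(nSamples)) - 2.0 * logLik;
}
```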
After that, I apply Gaussian mixture regression (GMR) to generalize the task and obtain the learned trajectory.
DTW (dynamic time warping) can be used to align the trajectories to a reference trajectory, which removes the timing differences between demonstrations.
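The DTW distance between two 1-D trajectories can be written in a few lines; this is the standard textbook algorithm, not the project's exact code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Classic O(n*m) dynamic time warping distance between two signals.
// D[i][j] holds the cheapest alignment of a[0..i) with b[0..j).
double dtw(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size(), m = b.size();
    std::vector<std::vector<double>> D(n + 1,
        std::vector<double>(m + 1, 1e18));
    D[0][0] = 0.0;
    for (std::size_t i = 1; i <= n; ++i)
        for (std::size_t j = 1; j <= m; ++j) {
            const double cost = std::fabs(a[i - 1] - b[j - 1]);
            // Extend the best of: insertion, deletion, or match.
            D[i][j] = cost + std::min({D[i - 1][j],
                                       D[i][j - 1],
                                       D[i - 1][j - 1]});
        }
    return D[n][m];
}
```

Backtracking through `D` yields the warping path used to resample each demonstration onto the reference timeline before fitting the GMM.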

Finally, you can see the learned trajectory in black.

All code was written in C++ with ROS (Indigo).
