# Extended Kalman Filter Explained with Python Code

In the following code, I have implemented an Extended Kalman Filter for modeling the movement of a car with constant turn rate and velocity. The code is mainly based on this work; I fixed some bugs and adapted it so that it runs similarly to the Kalman filter I implemented earlier.
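The original code is not reproduced here, but the core of such a filter can be sketched as below. This is a minimal, illustrative EKF for the constant turn rate and velocity (CTRV) model; the noise matrices and the measurement model (noisy position fixes) are assumptions, not the post's exact values.

```python
import numpy as np

# State: x = [px, py, v, yaw, yaw_rate] (CTRV model)

def f(x, dt):
    """CTRV motion model."""
    px, py, v, yaw, yd = x
    if abs(yd) > 1e-6:
        # turning: integrate along the circular arc
        px += v / yd * (np.sin(yaw + yd * dt) - np.sin(yaw))
        py += v / yd * (np.cos(yaw) - np.cos(yaw + yd * dt))
    else:
        # driving straight
        px += v * dt * np.cos(yaw)
        py += v * dt * np.sin(yaw)
    return np.array([px, py, v, yaw + yd * dt, yd])

def jacobian(x, dt, eps=1e-6):
    """Numerical Jacobian of f; finite differences keep the sketch short."""
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        J[:, i] = (f(x + d, dt) - f(x - d, dt)) / (2 * eps)
    return J

# Measurement model: we observe position only (e.g. GPS-like fixes)
H = np.array([[1., 0., 0., 0., 0.],
              [0., 1., 0., 0., 0.]])

def ekf_step(x, P, z, dt, Q, R):
    # predict
    F = jacobian(x, dt)
    x = f(x, dt)
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(5) - K @ H) @ P
    return x, P
```

To use it, simulate (or read) a trajectory, call `ekf_step` once per noisy position measurement, and collect the state estimates.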

Trajectory of the car.


# Particle Filter Explained With Python Code From Scratch

In the following code I have implemented a localization algorithm based on particle filter.

I have used conda to run my code; you can run the following to install the dependencies:
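The exact environment is not shown in this excerpt; a minimal setup along these lines should work (the environment name `pf` and package list are assumptions):

```shell
# create and activate a conda environment with the usual dependencies
conda create -n pf python=3 numpy matplotlib
conda activate pf
```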

And the code:
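The original listing is not included in this excerpt. As a sketch of the idea, here is a minimal landmark-based particle filter: the world size, landmark positions, and noise levels are illustrative assumptions, and the measurement is the distance to each known landmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# three known landmarks in a 100x100 world (illustrative)
LANDMARKS = np.array([[20., 20.], [80., 20.], [50., 80.]])

def measure(pos, noise=0.0):
    """Distances from pos to each landmark, optionally noisy."""
    d = np.linalg.norm(LANDMARKS - pos, axis=1)
    return d + rng.normal(0, noise, size=d.shape)

def particle_filter(true_path, n=1000, motion_noise=1.0, meas_noise=2.0):
    particles = rng.uniform(0, 100, size=(n, 2))   # uniform prior
    weights = np.ones(n) / n
    for step_move, true_pos in true_path:
        # motion update: apply the commanded move plus noise
        particles += step_move + rng.normal(0, motion_noise, particles.shape)
        # measurement update: weight by likelihood of the observed distances
        z = measure(true_pos, meas_noise)
        d = np.linalg.norm(particles[:, None, :] - LANDMARKS[None], axis=2)
        w = np.exp(-0.5 * ((d - z) / meas_noise) ** 2).prod(axis=1)
        weights = w + 1e-300                        # avoid all-zero weights
        weights /= weights.sum()
        # systematic resampling
        idx = np.searchsorted(np.cumsum(weights),
                              (rng.random() + np.arange(n)) / n)
        particles = particles[idx]
        weights = np.ones(n) / n
    return particles.mean(axis=0)                   # position estimate
```

Each iteration of `true_path` is a `(move, true_position)` pair; the returned mean of the particle cloud is the localization estimate.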

# Kalman Filter Explained With Python Code From Scratch

This snippet shows tracking the mouse cursor with Python code from scratch and comparing the result with OpenCV. The CSV files that are used were created with the C++ code below. Samples can be downloaded from here 1, 2, 3.
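The Python listing is not included in this excerpt. The heart of such a tracker is a constant-velocity Kalman filter over the cursor's (x, y) position; the sketch below is illustrative (the time step and noise magnitudes are assumptions), and its output can be compared against OpenCV's `cv2.KalmanFilter`.

```python
import numpy as np

dt = 1.0  # one step per CSV row (assumption)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we measure position only
Q = 0.01 * np.eye(4)                        # process noise (illustrative)
R = 1.0 * np.eye(2)                         # measurement noise (illustrative)

def kalman(measurements):
    x = np.zeros(4)
    P = 100.0 * np.eye(4)                   # large initial uncertainty
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return np.array(estimates)
```

Feed it the cursor positions read from the CSV file and plot the raw points against the filtered estimates.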

# How to develop GUI Application with PyQt (python Qt)

There are two main methods for developing a GUI application with Qt: building the interface entirely in code, or designing it in a .ui file and loading it at runtime.

Here is the snippet for adding all widgets and their slots in code:
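The original snippet is not included in this excerpt; a minimal sketch of the "all in code" approach could look like the following (the widgets and the counter slot are illustrative, not the post's exact UI):

```python
import sys
from PyQt5.QtWidgets import (QApplication, QMainWindow, QPushButton,
                             QLabel, QVBoxLayout, QWidget)

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("MainWindow")
        # create the widgets in code
        self.label = QLabel("0")
        self.button = QPushButton("Increment")
        # connect the button's clicked signal to our slot
        self.button.clicked.connect(self.on_click)
        # lay them out in a central widget
        layout = QVBoxLayout()
        layout.addWidget(self.label)
        layout.addWidget(self.button)
        central = QWidget()
        central.setLayout(layout)
        self.setCentralWidget(central)

    def on_click(self):
        self.label.setText(str(int(self.label.text()) + 1))
```

To run it, create a `QApplication`, instantiate `MainWindow`, call `show()`, and enter the event loop with `app.exec_()`.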

Now let’s do what we did in the first method with a UI file and load it. First, create a text file, put the following in it, and save it as “mainwindow.ui”:
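The file contents are not shown in this excerpt; a minimal Qt Designer XML file with the same label and button could look like this (widget names are assumptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
 <class>MainWindow</class>
 <widget class="QMainWindow" name="MainWindow">
  <property name="windowTitle"><string>MainWindow</string></property>
  <widget class="QWidget" name="centralwidget">
   <layout class="QVBoxLayout" name="verticalLayout">
    <item>
     <widget class="QLabel" name="label">
      <property name="text"><string>0</string></property>
     </widget>
    </item>
    <item>
     <widget class="QPushButton" name="pushButton">
      <property name="text"><string>Increment</string></property>
     </widget>
    </item>
   </layout>
  </widget>
 </widget>
 <resources/>
 <connections/>
</ui>
```

In practice you would generate this file with Qt Designer rather than writing it by hand.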

Now call it in your python file like this:
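The loading code is not included in this excerpt; a minimal sketch using `PyQt5.uic.loadUi` follows. It assumes `mainwindow.ui` sits in the current directory and contains widgets named `label` and `pushButton`:

```python
from PyQt5 import uic
from PyQt5.QtWidgets import QMainWindow

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        # populates self with the widgets declared in the .ui file
        uic.loadUi("mainwindow.ui", self)
        # widgets are now attributes, named as in the .ui file
        self.pushButton.clicked.connect(self.on_click)

    def on_click(self):
        self.label.setText(str(int(self.label.text()) + 1))
```

The surrounding `QApplication` boilerplate is the same as in the first method.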

The results should be the same as what you got in the first method.

# Installing NVIDIA DIGITS on Ubuntu 16.04

### caffe

Install Caffe as explained in my other post here.

### DIGITS

#### Open in the browser:

http://localhost:5000/

# Installing Caffe on Ubuntu 16.04

### Protocol Buffers

Or you can install protobuf v3 from source:
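The original commands are not included in this excerpt; the standard autotools build of that era went roughly like this (check the protobuf releases page for the exact v3 tag you want):

```shell
# build prerequisites
sudo apt-get install autoconf automake libtool curl make g++ unzip
git clone https://github.com/google/protobuf.git
cd protobuf
./autogen.sh
./configure
make -j"$(nproc)"
sudo make install
sudo ldconfig   # refresh the shared library cache
```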

BFS traversal:
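The original snippet is missing from this excerpt; a standard breadth-first search over an adjacency-list graph looks like this (the graph representation is an assumption):

```python
from collections import deque

def bfs(graph, start):
    """Visit nodes level by level, returning them in visit order."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:   # enqueue each neighbour once
                visited.add(nb)
                queue.append(nb)
    return order
```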

DFS traversal:
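Likewise, an iterative depth-first search on the same adjacency-list representation (this sketch stands in for the missing snippet):

```python
def dfs(graph, start):
    """Visit nodes depth-first using an explicit stack."""
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # push neighbours in reverse so they are visited in listed order
        stack.extend(reversed(graph[node]))
    return order
```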

# Hierarchical Clustering in Python



Hierarchical clustering is a method of clustering which builds a hierarchy of clusters. It can be agglomerative or divisive.

1. Agglomerative: At the first step, every item is its own cluster; clusters are then merged based on their distances until all data ends up in one cluster (bottom up). The complexity is $$O(n^2 \log n)$$.
2. Divisive: At the beginning, all items are in one big cluster, which is then iteratively broken into smaller clusters (top down). The complexity is $$O(2^n)$$.

To merge or divide the clusters we need to know the shortest distance between clusters. The common metrics for the distance between clusters are:

- Single link: the smallest distance between points.
- Complete link: the largest distance between points.
- Average link: the average distance between points.
- Centroid: the distance between centroids.

Depending on the definition of ‘shortest distance’ (single, complete, average, or centroid link) we get a different hierarchical clustering method.

Hierarchical Algorithms:

1. Single Link: at each iteration, two clusters that have the closest pair of elements will be merged into a bigger cluster.
2. Average Link: distance between clusters is the average distance between all points in between clusters. Clusters with the minimum of these distances merge into a bigger cluster.
3. Complete Link: distance between clusters is the distance between those two points that are farthest away from each other. Two clusters with the minimum of these distances merge into a bigger cluster.
4. Minimum spanning tree (MST): In a connected graph without any cycles, a spanning tree is a subgraph in which all vertices are still connected. If the edges have weights, the MST is the spanning tree whose edges have the minimum total weight. The MST may not be unique.
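The MST described in item 4 can be computed with Prim's algorithm; the sketch below uses a weighted adjacency list (the representation and example graph are assumptions):

```python
import heapq

def prim_mst(graph, start):
    """Grow the MST from `start`, always taking the cheapest frontier edge."""
    visited = {start}
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)
        if v in visited:
            continue                  # edge would form a cycle
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for nxt, nw in graph[v]:
            if nxt not in visited:
                heapq.heappush(edges, (nw, v, nxt))
    return mst, total
```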

To visualize the outcome of hierarchical clustering we often use a “dendrogram”.
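For example, assuming SciPy is available, the linkage methods above map directly onto `scipy.cluster.hierarchy` (the five points here are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# two tight pairs plus one outlier (illustrative data)
points = np.array([[0., 0.], [0.1, 0.1],
                   [5., 5.], [5.1, 5.0],
                   [10., 0.]])

# 'single' = single link; 'complete', 'average', and 'centroid' also work
Z = linkage(points, method='single')

# cut the tree into three flat clusters
labels = fcluster(Z, t=3, criterion='maxclust')

# scipy.cluster.hierarchy.dendrogram(Z) draws the tree
# when matplotlib is available
```

Each row of `Z` records one merge (the two clusters joined, their distance, and the new cluster's size), which is exactly what the dendrogram plots.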

The following graph represents the following matrix:

Minimum spanning tree of the graph.