Category Archives: Machine Learning

Human detection and Pose Estimation with Deep Learning for Sport Analysis

Pose estimation and human tracking are key steps in sports analysis. In this work I used OpenPose to analyze players in a Bundesliga game, HSV Hamburg vs. Bayern München. Warning: the video might be disturbing for HSV fans 🙂

 

[Videos: several original vs. analyzed clips]

[Video: Vaganova_Ballet_Academy from Behnam Asadi on Vimeo]

[Video: Thiem_Zverev from Behnam Asadi on Vimeo]

Deep Dreams with Caffe on Ubuntu 16.04

First, install Caffe as explained in my other post here.

GoogLeNet Model

Download bvlc_googlenet.caffemodel from https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet and put it in:

caffe/models/bvlc_googlenet/

PIP

IPython, scipy, Jupyter, protobuf, scikit-image

Always install in the user space with --user.

Running Jupyter Notebook

Open a new notebook, paste the code into it, and correct the “model_path” and

img = np.float32(PIL.Image.open('/home/behnam/Downloads/fractal.jpg'))

according to your setup.
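
Below is a condensed sketch of the standard GoogLeNet Deep Dream loop, adapted from Google's public deepdream notebook; the model path, the target layer 'inception_4c/output', the step count, and the image path are assumptions to adjust for your setup:

import numpy as np
import PIL.Image
import caffe

model_path = '/home/behnam/caffe/models/bvlc_googlenet/'  # assumed location

# Patch the model definition so gradients are propagated back to the input.
with open('tmp.prototxt', 'w') as f:
    f.write('force_backward: true\n' + open(model_path + 'deploy.prototxt').read())

net = caffe.Classifier('tmp.prototxt',
                       model_path + 'bvlc_googlenet.caffemodel',
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet BGR mean
                       channel_swap=(2, 1, 0))                  # RGB -> BGR

def preprocess(net, img):
    # HWC RGB -> CHW BGR, mean-subtracted
    return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']

def deprocess(net, img):
    return np.dstack((img + net.transformer.mean['data'])[::-1])

def make_step(net, step_size=1.5, end='inception_4c/output'):
    # One gradient-ascent step that amplifies activations of layer `end`.
    src, dst = net.blobs['data'], net.blobs[end]
    net.forward(end=end)
    dst.diff[:] = dst.data            # L2 objective on the target layer
    net.backward(start=end)
    g = src.diff[0]
    src.data[:] += step_size / np.abs(g).mean() * g

img = np.float32(PIL.Image.open('/home/behnam/Downloads/fractal.jpg'))
h, w = img.shape[:2]
net.blobs['data'].reshape(1, 3, h, w)
net.blobs['data'].data[0] = preprocess(net, img)
for _ in range(20):
    make_step(net)
out = deprocess(net, net.blobs['data'].data[0])
PIL.Image.fromarray(np.uint8(np.clip(out, 0, 255))).save('dream.jpg')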


 

Installing NVIDIA DIGITS on Ubuntu 16.04

Prerequisites

Protobuf 3

Caffe

Install Caffe as explained in my other post here.

DIGITS

Visit https://github.com/NVIDIA/DIGITS/

Dependencies

# Install repo packages

Building DIGITS

Open in the browser:

http://localhost:5000/

Installing Caffe on Ubuntu 16.04

CUDA Toolkit 9.1

Visit https://developer.nvidia.com/cuda-downloads, download the correct .deb file, and then:

Basic Linear Algebra Subprograms (BLAS)

Protocol Buffers

Or you can install protobuf v3 from source:

Lightning Memory-Mapped Database

LevelDB

HDF5

gflags

glog

Snappy

Caffe

Breadth-first search (BFS) and Depth-first search (DFS) Algorithms in Python

BFS traversal:
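
A minimal BFS sketch over an adjacency-list graph (the example graph is made up):

from collections import deque

def bfs(graph, start):
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']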

DFS traversal:
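
And a matching recursive DFS sketch over the same representation:

def dfs(graph, node, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)
    for neighbor in graph[node]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited, order)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C']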



RANSAC Algorithm parameter explained

In this tutorial I explain the RANSAC algorithm, its parameters, and how to choose the number of samples:

\(N\) = number of samples
\(e\) = probability that a point is an outlier
\(s\) = number of points in a sample
\(p\) = desired probability that we get a good sample

\( N = \dfrac{\log(1-p)}{\log\left(1-(1-e)^{s}\right)} \)

ref: 1
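
As a quick illustration, a small helper (hypothetical, not from the tutorial) that evaluates this formula:

import math

def ransac_iterations(p, e, s):
    # N = log(1-p) / log(1-(1-e)^s), rounded up
    return math.ceil(math.log(1 - p) / math.log(1 - (1 - e) ** s))

# e.g. line fitting (s = 2) with 30% outliers and 99% success probability:
print(ransac_iterations(p=0.99, e=0.3, s=2))  # -> 7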

Examples of Dynamic Programming with C++ and Matlab

In this tutorial, I will give you examples of using dynamic programming for solving the following problems:

1) Minimum number of coins summing to X.
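
A minimal Python sketch of this DP (the post's examples are in C++ and Matlab; the coin set and target below are made up):

def min_coins(coins, target):
    # dp[x] = fewest coins needed to sum to x
    INF = float('inf')
    dp = [0] + [INF] * target
    for x in range(1, target + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1
    return dp[target]

print(min_coins([1, 3, 4], 6))  # 2 (3 + 3)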

 

2) The most (or least) costly path on a grid (dynamic time warping).
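
A Python sketch of the least-costly-path DP on a hypothetical cost grid, allowing only right and down moves:

def min_path_cost(grid):
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # cheapest way to reach (i, j) from above or from the left
            best_prev = min(
                dp[i-1][j] if i > 0 else float('inf'),
                dp[i][j-1] if j > 0 else float('inf'),
            )
            dp[i][j] = grid[i][j] + (best_prev if best_prev != float('inf') else 0)
    return dp[-1][-1]

print(min_path_cost([[1, 3, 1],
                     [1, 5, 1],
                     [4, 2, 1]]))  # 7 (1+3+1+1+1)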

 

3) Levenshtein edit distance.
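
A Python sketch of the classic edit-distance table:

def edit_distance(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i-1] == b[j-1] else 1
            dp[i][j] = min(dp[i-1][j] + 1,       # deletion
                           dp[i][j-1] + 1,       # insertion
                           dp[i-1][j-1] + cost)  # substitution
    return dp[m][n]

print(edit_distance('kitten', 'sitting'))  # 3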

 

4) Seam carving. I have written a tutorial on it here; the recursive part is in the following lines:

 

The examples are taken from “Competitive Programmer’s Handbook” written by Antti Laaksonen.

Seam Carving Algorithm for Content-Aware Image Resizing with Matlab Code

Seam carving is an algorithm for resizing images while keeping the most prominent and conspicuous pixels. The important pixels in an image are usually those located on horizontal or vertical edges, so to decide which pixels to discard, we first find horizontal and vertical edges and store their gradient magnitude as pixel energy. Then we use dynamic programming to find the path with the minimum total energy and remove those pixels. We repeat this until the image reaches the desired size.
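
A minimal Python sketch of one vertical-seam removal (the post's actual implementation is in Matlab; the energy here is a simple gradient magnitude, and the function assumes a grayscale image):

import numpy as np

def remove_vertical_seam(gray):
    h, w = gray.shape
    # pixel energy: sum of absolute vertical and horizontal gradients
    gy, gx = np.gradient(gray.astype(float))
    energy = np.abs(gx) + np.abs(gy)
    # M[i, j] = energy[i, j] + min of the three neighbors in the row above
    M = energy.copy()
    for i in range(1, h):
        left = np.r_[np.inf, M[i-1, :-1]]
        right = np.r_[M[i-1, 1:], np.inf]
        M[i] += np.minimum(np.minimum(left, M[i-1]), right)
    # backtrack from the cheapest bottom cell, one pixel per row
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i+1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(M[i, lo:hi]))
    # drop the seam pixels
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return gray[mask].reshape(h, w - 1)

Calling it repeatedly removes one column per call; transposing the image first removes rows instead.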

Our image is 720×480 (rows × columns).

Let's drop some rows and resize it to 640×480:

We can drop some columns and resize it to 720×320:

 

Hierarchical Clustering in Python


Hierarchical clustering is a method of clustering which builds a hierarchy of clusters. It can be either agglomerative or divisive.

  1. Agglomerative: at the start, every item is its own cluster; clusters are then merged based on their distances into bigger clusters until all the data is in one cluster (bottom-up). The complexity is \( O(n^2 \log n) \).
  2. Divisive: at the beginning, all items are in one big cluster; we then iteratively break it into smaller clusters (top-down). The complexity is \( O(2^n) \).

To merge or divide clusters we need a notion of distance between clusters. The common metrics are:

  • Single Link: smallest distance between points.
  • Complete Link: largest distance between points.
  • Average Link: average distance between points.
  • Centroid: distance between centroids.

Depending on the definition of the distance between clusters (single, complete, average, or centroid link), we get different hierarchical clustering methods.

 

Hierarchical Algorithms:

  1. Single Link: at each iteration, the two clusters that have the closest pair of elements are merged into a bigger cluster.
  2. Average Link: the distance between clusters is the average distance between all pairs of points across the two clusters. The two clusters with the minimum such distance are merged into a bigger cluster.
  3. Complete Link: the distance between clusters is the distance between their two farthest points. The two clusters with the minimum such distance are merged into a bigger cluster.
  4. Minimum spanning tree (MST): in a connected graph without cycles, a spanning tree is a subgraph in which all vertices are still connected. If the edges are weighted, the MST is the spanning tree with minimum total edge weight; it may not be unique (see the sketch after this list).
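
As a small illustration of the MST mentioned in item 4, SciPy's csgraph module can compute it directly; the weighted adjacency matrix below is hypothetical:

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# adjacency matrix of an undirected weighted graph (0 = no edge)
W = np.array([[0, 2, 0, 6, 0],
              [2, 0, 3, 8, 5],
              [0, 3, 0, 0, 7],
              [6, 8, 0, 0, 9],
              [0, 5, 7, 9, 0]])
mst = minimum_spanning_tree(W)
print(mst.toarray())  # nonzero entries are the edges kept by the MST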

To visualize the outcome of hierarchical clustering, we often use a dendrogram.
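
A minimal SciPy sketch of agglomerative clustering plus a dendrogram; the distance matrix and labels are made-up examples, and the method argument switches between 'single', 'complete', and 'average' link (centroid linkage expects raw observations rather than a precomputed distance matrix):

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# symmetric pairwise-distance matrix for five items (hypothetical values)
D = np.array([[ 0,  2,  6, 10,  9],
              [ 2,  0,  5,  9,  8],
              [ 6,  5,  0,  4,  5],
              [10,  9,  4,  0,  3],
              [ 9,  8,  5,  3,  0]], dtype=float)

Z = linkage(squareform(D), method='single')
dendrogram(Z, labels=['A', 'B', 'C', 'D', 'E'])
plt.show()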

[Figure: an example graph and the distance matrix it represents]

[Figure: the minimum spanning tree of the graph]



 

Maximum likelihood estimation explained

In this tutorial, I explain maximum likelihood and MLE (maximum likelihood estimation) for the binomial and Gaussian distributions.
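
The standard closed-form estimators are \( \hat{p} = k/n \) for the binomial, and the sample mean and (biased) sample variance for the Gaussian. A quick numeric check with made-up data:

import numpy as np

# Binomial: observing k successes in n trials -> MLE is p_hat = k / n
k, n = 37, 100
p_hat = k / n

# Gaussian: MLEs are the sample mean and the biased sample variance
x = np.random.normal(loc=2.0, scale=1.5, size=10000)
mu_hat = x.mean()
var_hat = ((x - mu_hat) ** 2).mean()
print(p_hat, mu_hat, var_hat)  # ~0.37, ~2.0, ~2.25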