Category Archives: Python

How to Develop a GUI Application with PyQt (Python Qt)

There are two main methods for developing a GUI application with Qt:
1) Adding all widgets in your code (your C++ or Python code)
2) Creating Qt UI files, adding widgets there, and loading everything into your application.

1) Adding all widgets in your code

Here is the snippet for adding all widgets and their slots in code:
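The following is a minimal sketch assuming PyQt5; the widget names and the on_button_clicked slot are illustrative:

import sys
from PyQt5.QtWidgets import (QApplication, QLabel, QMainWindow,
                             QPushButton, QVBoxLayout, QWidget)

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("PyQt Example")
        # Build the whole UI in code: a label and a button in a vertical layout.
        self.label = QLabel("Not clicked yet")
        self.button = QPushButton("Click me")
        layout = QVBoxLayout()
        layout.addWidget(self.label)
        layout.addWidget(self.button)
        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)
        # Connect the button's clicked signal to its slot.
        self.button.clicked.connect(self.on_button_clicked)

    def on_button_clicked(self):
        self.label.setText("Button clicked")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())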

2) Creating Qt UI files, adding widgets there, and loading everything into your application

Now let’s do what we did in the first method in a UI file and load it. First, create a text file, put the following in it, and save it as “mainwindow.ui”:
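A minimal sketch of such a mainwindow.ui, describing the same label and button as in the first method (the geometry values are arbitrary):

<?xml version="1.0" encoding="UTF-8"?>
<ui version="4.0">
 <class>MainWindow</class>
 <widget class="QMainWindow" name="MainWindow">
  <property name="windowTitle">
   <string>PyQt Example</string>
  </property>
  <widget class="QWidget" name="centralwidget">
   <widget class="QLabel" name="label">
    <property name="geometry">
     <rect><x>20</x><y>20</y><width>200</width><height>20</height></rect>
    </property>
    <property name="text">
     <string>Not clicked yet</string>
    </property>
   </widget>
   <widget class="QPushButton" name="button">
    <property name="geometry">
     <rect><x>20</x><y>60</y><width>100</width><height>30</height></rect>
    </property>
    <property name="text">
     <string>Click me</string>
    </property>
   </widget>
  </widget>
 </widget>
 <resources/>
 <connections/>
</ui>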

Now load it in your Python file like this:
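A sketch of the loading side using PyQt5’s uic module; the widgets declared in the .ui file become attributes named after their "name" fields (label and button above):

import sys
from PyQt5 import uic
from PyQt5.QtWidgets import QApplication, QMainWindow

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        # Load the widgets defined in the Designer file onto this window.
        uic.loadUi("mainwindow.ui", self)
        self.button.clicked.connect(self.on_button_clicked)

    def on_button_clicked(self):
        self.label.setText("Button clicked")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())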

The result should be the same as what you got with the first method.

Installing NVIDIA DIGITS on Ubuntu 16.04

Prerequisite

Protobuf 3

Caffe

Install Caffe as explained in my other post here.

DIGITS

Visit https://github.com/NVIDIA/DIGITS/.

Dependencies

A sketch of the repo-package step, with the package list based on the DIGITS build documentation of the time (it varies by DIGITS version):
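# Install repo packages
sudo apt-get install --no-install-recommends git graphviz python-dev python-flask \
    python-flaskext.wtf python-gevent python-h5py python-numpy python-pil \
    python-pip python-scipy python-tk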

Building DIGITS
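A sketch of the build steps; DIGITS itself is Python, so “building” is mostly cloning the repository and installing its requirements, after which digits-devserver serves on port 5000 by default:

git clone https://github.com/NVIDIA/DIGITS.git
cd DIGITS
sudo pip install -r requirements.txt

# run the development server
./digits-devserver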

Open DIGITS in the browser:

http://localhost:5000/

Installing Caffe on Ubuntu 16.04

CUDA Toolkit 9.1

Visit https://developer.nvidia.com/cuda-downloads, download the correct .deb file, and then:
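A sketch of the .deb installation; the exact filename and repository path come from the download page (the ones below are what the CUDA 9.1 local installer for Ubuntu 16.04 used):

sudo dpkg -i cuda-repo-ubuntu1604-9-1-local_9.1.85-1_amd64.deb
sudo apt-key add /var/cuda-repo-9-1-local/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda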

Basic Linear Algebra Subprograms (BLAS)
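Caffe can use ATLAS, OpenBLAS, or MKL; a sketch using the ATLAS package from the Ubuntu repositories:

sudo apt-get install libatlas-base-dev

If you prefer OpenBLAS (sudo apt-get install libopenblas-dev), set BLAS := open in Caffe’s Makefile.config.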

Protocol Buffers
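From the Ubuntu repositories (note that Ubuntu 16.04 ships protobuf 2.x, which is why the source build below is offered as an alternative):

sudo apt-get install libprotobuf-dev protobuf-compiler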

Or you can install protobuf v3 from source:
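A sketch of a source build, following the protobuf C++ installation instructions of that era (the repository has since moved under github.com/protocolbuffers):

sudo apt-get install autoconf automake libtool curl make g++ unzip
git clone https://github.com/google/protobuf.git
cd protobuf
./autogen.sh
./configure
make -j"$(nproc)"
sudo make install
sudo ldconfig   # refresh the shared-library cache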

Lightning Memory-Mapped Database (LMDB)
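Presumably installed from the Ubuntu repositories:

sudo apt-get install liblmdb-dev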

LevelDB
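Likewise a repo package:

sudo apt-get install libleveldb-dev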

HDF5
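The serial development package should suffice:

sudo apt-get install libhdf5-serial-dev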

gflags
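From the repositories:

sudo apt-get install libgflags-dev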

glog
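Also a repo package:

sudo apt-get install libgoogle-glog-dev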

Snappy
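And finally:

sudo apt-get install libsnappy-dev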

Caffe
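A sketch of the Makefile-based build; Makefile.config is where the CUDA path, BLAS choice, and Python settings live, so adjust it for your machine:

git clone https://github.com/BVLC/caffe.git
cd caffe
cp Makefile.config.example Makefile.config
# edit Makefile.config (CUDA path, BLAS choice, Python support), then:
make all -j"$(nproc)"
make test -j"$(nproc)"
make runtest
make pycaffe   # the Python bindings used by DIGITS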

Breadth-first search (BFS) and Depth-first search (DFS) Algorithms in Python

BFS traversal:
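A minimal sketch, assuming the graph is stored as a dict mapping each node to a list of its neighbors:

from collections import deque

def bfs(graph, start):
    # Visit nodes level by level, using a FIFO queue.
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))   # ['A', 'B', 'C', 'D']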

DFS traversal:
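A recursive sketch over the same adjacency-dict format, going as deep as possible before backtracking:

def dfs(graph, start, visited=None):
    if visited is None:
        visited = []
    visited.append(start)
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A'))   # ['A', 'B', 'D', 'C']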

Populating directed graph in networkx from CSV adjacency matrix
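As a sketch of the idea, assume a hypothetical adjacency.csv whose header row names the nodes and whose following rows each hold a source node and its edge weights, with 0 meaning “no edge”:

import csv
import networkx as nx

G = nx.DiGraph()
with open('adjacency.csv') as f:
    reader = csv.reader(f)
    targets = next(reader)[1:]               # header row: node names
    for row in reader:
        source = row[0]
        for target, weight in zip(targets, row[1:]):
            if float(weight) != 0:           # 0 means "no edge"
                G.add_edge(source, target, weight=float(weight))

print(G.edges(data=True))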

Drawing graphs in Python with networkx
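A sketch with made-up edges, using a spring layout and drawing the edge weights as labels:

import networkx as nx
import matplotlib.pyplot as plt

G = nx.DiGraph()
G.add_weighted_edges_from([('A', 'B', 2), ('A', 'C', 1), ('B', 'C', 3)])

pos = nx.spring_layout(G)   # force-directed node positions
nx.draw(G, pos, with_labels=True, node_size=800, node_color='lightblue')
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, 'weight'))
plt.show()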

Hierarchical Clustering in Python


Hierarchical clustering is a method of clustering which builds a hierarchy of clusters. It can be agglomerative or divisive.

  1. Agglomerative: at the first step every item is its own cluster; then, based on their distances, clusters are merged into bigger clusters until all the data is in one cluster (bottom-up). The complexity is \( O(n^2 \log n) \).
  2. Divisive: at the beginning all items are in one big cluster, which is then iteratively broken into smaller clusters (top-down). The complexity is \( O(2^n) \).

To merge or divide clusters we need to measure the distance between clusters. The common metrics are:

  • Single Link: smallest distance between points in the two clusters.
  • Complete Link: largest distance between points in the two clusters.
  • Average Link: average distance between points in the two clusters.
  • Centroid: distance between the centroids of the clusters.

Depending on the definition of “shortest distance” (single/complete/average/centroid link), we get a different hierarchical clustering method.

Hierarchical Algorithms:

  1. Single Link: at each iteration, the two clusters that have the closest pair of elements are merged into a bigger cluster.
  2. Average Link: the distance between clusters is the average distance between all pairs of points across the two clusters. The two clusters with the minimum such distance are merged into a bigger cluster.
  3. Complete Link: the distance between clusters is the distance between the two points that are farthest away from each other. The two clusters with the minimum such distance are merged into a bigger cluster.
  4. Minimum spanning tree (MST): a spanning tree of a connected graph is a subgraph that is a tree and still connects all the vertices. If the edges have weights, the MST is a spanning tree whose edges have the minimum total weight. The MST may not be unique.

To visualize the outcome of hierarchical clustering we often use a “dendrogram”.
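A sketch using SciPy on made-up 2-D points; the method argument selects one of the linkage criteria listed above:

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

X = np.array([[1, 1], [1.5, 1], [5, 5], [5.5, 5], [9, 9]])

Z = linkage(X, method='single')   # also: 'complete', 'average', 'centroid'
dendrogram(Z, labels=['a', 'b', 'c', 'd', 'e'])
plt.show()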

The following graph represents the following matrix:

Minimum spanning tree of the graph.
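Since the graph itself isn’t reproduced here, the following sketch computes an MST with networkx on made-up weighted edges:

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([('a', 'b', 1), ('a', 'c', 4), ('b', 'c', 2),
                           ('b', 'd', 6), ('c', 'd', 3)])

T = nx.minimum_spanning_tree(G)   # Kruskal's algorithm by default
print(sorted(T.edges(data='weight')))   # [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 3)]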


Naive Bayes Classifier Example with Python Code

In the example below I implemented a “Naive Bayes classifier” in Python, and then I used the “sklearn” package to solve the same problem:
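As a sketch of the sklearn approach, with made-up, well-separated training data:

import numpy as np
from sklearn.naive_bayes import GaussianNB

# Two features, two classes.
X = np.array([[1.0, 2.1], [1.3, 0.4], [2.2, 2.9],
              [7.5, 8.1], [8.0, 7.7], [6.9, 8.3]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = GaussianNB()
clf.fit(X, y)
print(clf.predict([[1.5, 1.0], [7.0, 8.0]]))   # expect: [0 1]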



Density-Based Spatial Clustering (DBSCAN) with Python Code

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a data clustering algorithm. It is called density-based because it finds clusters starting from the estimated density distribution of the corresponding nodes.

It starts with an arbitrary starting point that has not been visited.

This point’s epsilon-neighborhood is retrieved, and if it contains sufficiently many points, a cluster is started. Then, a new unvisited point is retrieved and processed, leading to the discovery of a further cluster or noise. DBSCAN requires two parameters: epsilon (eps) and the minimum number of points required to form a cluster (minPts). If a point is found to be part of a cluster, its epsilon-neighborhood is also part of that cluster.

I implemented the pseudocode from the DBSCAN Wikipedia page:
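A sketch of that pseudocode, with eps and minPts as defined above; the test points at the bottom are made up:

import numpy as np

UNVISITED, NOISE = 0, -1

def region_query(points, i, eps):
    # Indices of all points within eps of point i (including i itself).
    dists = np.linalg.norm(points - points[i], axis=1)
    return list(np.where(dists <= eps)[0])

def dbscan(points, eps, min_pts):
    labels = [UNVISITED] * len(points)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] != UNVISITED:
            continue
        neighbors = region_query(points, i, eps)
        if len(neighbors) < min_pts:
            labels[i] = NOISE                  # may be claimed by a cluster later
            continue
        cluster_id += 1                        # start a new cluster at this core point
        labels[i] = cluster_id
        seeds = list(neighbors)
        j = 0
        while j < len(seeds):                  # expand the cluster
            q = seeds[j]
            if labels[q] == NOISE:
                labels[q] = cluster_id         # border point
            if labels[q] == UNVISITED:
                labels[q] = cluster_id
                q_neighbors = region_query(points, q, eps)
                if len(q_neighbors) >= min_pts:
                    seeds.extend(q_neighbors)  # q is also a core point
            j += 1
    return labels

points = np.array([[1, 1], [1.1, 1], [1, 1.2],
                   [5, 5], [5.1, 5], [5, 5.2], [9, 9]])
print(dbscan(points, eps=0.5, min_pts=2))   # [1, 1, 1, 2, 2, 2, -1]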
