Mohamed Haseeb

Software and machine learning engineer, interested in applying machine learning techniques to build innovative solutions.

Wisture: Touch-less Hand Gesture Classification in Unmodified Smartphones Using Wi-Fi Signals


Our paper (together with Dr. Ramviyas Parasuraman) was accepted for publication in the IEEE Sensors Journal (2018). A subset of the reported work was part of my master’s thesis. A preprint can be found here, and the paper can be found here.

The paper introduces Wisture, a solution for recognizing touch-less dynamic hand gestures on smartphones from the Wi-Fi Received Signal Strength (RSS). Unlike other Wi-Fi based gesture recognition methods, the proposed method does not require modifying the smartphone hardware or the operating system, and it performs the gesture recognition without interfering with the normal operation of other smartphone applications. A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) is trained to predict hand gestures from a pre-processed Wi-Fi RSS input sequence.
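The paper's actual pipeline is not reproduced here, but the general idea of turning a raw RSS stream into fixed-length, normalized windows that a sequence classifier could consume can be sketched as below. The window length, step size, and per-window standardization are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def rss_to_windows(rss, window_len=200, step=50):
    """Slice a raw RSS stream (dBm readings) into overlapping,
    per-window standardized sequences, one candidate gesture each."""
    windows = []
    for start in range(0, len(rss) - window_len + 1, step):
        w = np.asarray(rss[start:start + window_len], dtype=float)
        # standardize so the classifier sees signal shape, not absolute power
        w = (w - w.mean()) / (w.std() + 1e-8)
        windows.append(w)
    return np.stack(windows)

# simulated trace: background noise plus a gesture-like attenuation dip
rng = np.random.default_rng(0)
trace = -45 + rng.normal(0, 1, 1000)
trace[400:500] -= 10
X = rss_to_windows(trace)
print(X.shape)  # (17, 200)
```

Each row of `X` would then be fed to the LSTM as one input sequence.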

Below is a video demonstration of Wisture.


Deep Q-learning presentation at Ericsson


Last week I presented the topic of deep Q-learning to a group of engineers who work with machine learning at Ericsson (where I am currently working). You can access the presentation by clicking here.

In relation to this, I’ve previously built a Pac-Man player that uses Q-learning to learn how to play the game. You can read more about it here.
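For readers unfamiliar with the technique, here is a minimal tabular Q-learning sketch on a toy corridor world. Nothing here is Pac-Man specific; the environment and all names are made up for illustration:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# toy corridor: states 0..3, reward 1 for stepping onto terminal state 3
actions = ['left', 'right']
Q = defaultdict(float)
for _ in range(200):                  # replay every transition repeatedly
    for s in range(3):
        for a in actions:
            s_next = min(s + 1, 3) if a == 'right' else max(s - 1, 0)
            r = 1.0 if s_next == 3 else 0.0
            q_update(Q, s, a, r, s_next, actions)

greedy = max(actions, key=lambda a2: Q[(0, a2)])
print(greedy)  # right
```

Deep Q-learning replaces the table `Q` with a neural network that generalizes across states, which is what makes the approach workable for large state spaces like Atari-style games.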


Python implementation of the Learning Time-series Shapelets method (LTS)



Jump to usage if you have no patience to hear my life story :)

A month or so ago, I was looking for an implementation of a time-series classification/clustering method that uses shapelets, a widely researched topic within the time-series research community, and to my surprise, I found none 1. So I thought it would be great to do an implementation and make it available to whoever is interested. I was also motivated by the interesting idea behind the LTS method, and I imagined it would be fun to spend the time to fully grasp the method and implement it.

A shapelet is a time-series sub-sequence that is discriminative of the members of one class (or more). LTS learns a time-series classifier (that uses a set of shapelets) with stochastic gradient descent. Refer to the LTS paper for details.
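To make the distance concrete, here is a sketch of the feature LTS computes per shapelet: the minimum mean squared distance between the shapelet and every aligned sub-sequence of a series. This sketch uses a hard minimum; the actual LTS method uses a differentiable soft minimum so it can train with gradient descent:

```python
import numpy as np

def min_shapelet_distance(series, shapelet):
    """Minimum mean squared distance between a shapelet and every
    aligned sub-sequence of the series."""
    L = len(shapelet)
    dists = [np.mean((series[j:j + L] - shapelet) ** 2)
             for j in range(len(series) - L + 1)]
    return min(dists)

series = np.array([0., 0., 1., 2., 1., 0., 0.])
bump = np.array([1., 2., 1.])   # matches the middle of the series exactly
flat = np.array([5., 5., 5.])   # matches nothing in the series
print(min_shapelet_distance(series, bump))  # 0.0
print(min_shapelet_distance(series, flat))
```

A small distance means the shapelet's pattern occurs somewhere in the series, which is what makes these features discriminative.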

The implementation I did, found here, views the model as a layered network (the shown diagram), where each layer implements a forward, a backward, and a parameter-update method. This abstraction makes the method easier to understand and implement (especially when things get hairy and one needs to debug the code). It also helps if one decides to port the implementation to frameworks like Torch or TensorFlow. A bunch of unit tests were also implemented for the forward and backward methods (so I can rest assured that the gradients are calculated correctly).
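The forward/backward/update contract can be illustrated with a toy layer. This is not the repo's actual code, just the abstraction it follows:

```python
class Layer:
    """Interface each layer in the stack exposes."""
    def forward(self, x): ...
    def backward(self, grad_out): ...  # returns grad wrt input, stores param grads
    def update(self, lr): ...          # SGD step on the layer's own parameters

class Scale(Layer):
    """Toy layer y = w * x, just to show the contract."""
    def __init__(self, w):
        self.w = w
    def forward(self, x):
        self.x = x                     # cache the input for the backward pass
        return self.w * x
    def backward(self, grad_out):
        self.grad_w = grad_out * self.x  # dL/dw
        return grad_out * self.w         # dL/dx, passed to the layer below
    def update(self, lr):
        self.w -= lr * self.grad_w

# stacking layers: forward left-to-right, backward right-to-left
net = [Scale(2.0), Scale(3.0)]
x = 1.5
for layer in net:
    x = layer.forward(x)
grad = 1.0                             # dL/dy for the loss L = y
for layer in reversed(net):
    grad = layer.backward(grad)
for layer in net:
    layer.update(lr=0.1)
print(x, grad)  # 9.0 6.0
```

Because each layer only knows its local derivative, finite-difference gradient checks (like the repo's unit tests) can be written per layer.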

Note that the loss in my implementation is an updated version of the one in the paper, to enable training a single network for all the classes in the dataset (rather than one network per class, as I understood from the paper). The impact of this deviation on performance was not investigated. For details, check shapelets/network/ in the implementation.


Usage

See the example below, and have a look at the implementation. For stable training, make sure all the features in the dataset are standardized (i.e. each has zero mean and unit variance).

from shapelets.classification import LtsShapeletClassifier
# create an LtsShapeletClassifier instance
classifier = LtsShapeletClassifier(K=20, R=3, L_min=30, epocs=2, regularization_parameter=0.01,
                                       learning_rate=0.01, shapelet_initialization='segments_centroids')
# train the classifier. train_data (a numpy matrix) shape is (# train samples X time-series length), train_label (a numpy matrix) shape is (# train samples X 1)
classifier.fit(train_data, train_label, plot_loss=True)
# evaluate on test data. test_data (a numpy matrix) shape is (# test samples X time-series length)
prediction = classifier.predict(test_data)
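Regarding the standardization requirement mentioned above, a minimal sketch (under one reading of it: each time series standardized to zero mean and unit variance) could look like this:

```python
import numpy as np

def standardize(data):
    """Zero-mean, unit-variance standardization per time series (row)."""
    mean = data.mean(axis=1, keepdims=True)
    std = data.std(axis=1, keepdims=True)
    return (data - mean) / (std + 1e-8)  # epsilon guards against flat series

train_data = np.array([[1., 2., 3., 4.],
                       [10., 0., 10., 0.]])
train_data = standardize(train_data)
print(train_data.mean(axis=1))  # ~[0. 0.]
```

The same transformation should be applied to the test data before calling `predict`.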

Although I believe the architecture is good, the implementation is far from optimal, and there is plenty of room for improvement. Off the top of my head, the usage of Python arrays/lists has to be improved.

1 At the beginning of my search, I tried to contact the author of the LTS method asking for access to his implementation. The author granted me access, but by then I was already done with the Python implementation :)


Automated hunting of rental apartments


If you live in (or have been to) Stockholm, I bet you know how fierce the competition is to rent a place, and you would appreciate the piece of software I wrote. Using this software, I got a first-hand contract within a month … YESSS! This was in 2014.

Rental homes in Sweden are owned either by private companies or by the municipalities. The process of renting them out to the public is controlled by the owners, or in many cases by broker companies. Due to the huge demand and the supply shortage, rental seekers have to wait in queues for years (~6 to get a place in a Stockholm suburb). Most housing brokers use first-apply-first-served queues (bostadssnabben in Swedish) to rent out some of the less favourable apartments: short-term rentals, places far from the city center, and some newly built apartments (I guess because these are more expensive). Such apartments are scarce, though, and hugely sought after. I targeted one such queue, managed by the biggest housing broker in Stockholm.

The plan was to build an agent that monitors the housing broker’s first-apply-first-served queue and notifies me as soon as new apartments are posted there. So I wrote a piece of software with: 1) a web crawler that crawls the queue web page and returns the list of available apartments, 2) logic that identifies whether the returned list contains a new apartment (this logic relies on a database for persistence), and 3) a notifier that sends an email if a new apartment is found. Since I was only interested in whether new apartments were added or not, I calculated the hash value of the returned apartment list and compared it to the hash value of the previously returned list (which is stored in the database). The application was written in Java, and jsoup was used for the crawler. To have the app running 24/7, it was deployed on Google App Engine, and hence the App Engine datastore was used for persistence.
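The hash-comparison step can be sketched as follows. The original application was Java; this is a Python illustration with made-up names, not the actual code:

```python
import hashlib

def listing_fingerprint(apartments):
    """Hash the sorted listing so mere reordering doesn't trigger an alert."""
    canonical = "\n".join(sorted(apartments))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def check_for_new(apartments, stored_hash):
    """Return (changed, new_hash); the caller persists new_hash and,
    when changed is True, fires the email notifier."""
    h = listing_fingerprint(apartments)
    return h != stored_hash, h

seen = ["Apt A, Solna", "Apt B, Kista"]
changed1, h = check_for_new(seen, stored_hash=None)        # first run
changed2, h2 = check_for_new(seen, stored_hash=h)          # unchanged list
changed3, _ = check_for_new(seen + ["Apt C"], stored_hash=h)
print(changed1, changed2, changed3)  # True False True
```

Comparing one hash instead of diffing full listings keeps the persisted state tiny, which suits a datastore-backed app.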

I always observe the rules, so at the time of writing the software, I checked and did not find any policy by that broker preventing robots from crawling that page (i.e. there was no robots.txt rule against it).


Predicting human face attributes from images with deep learning


Deep learning has proven powerful in solving many problems, but it requires plenty of data and computational resources, which unfortunately few possess (the likes of Google, Facebook, Amazon, etc.). The good news is that those who lack these resources can still benefit from deep learning through transfer learning techniques. Transfer learning allows one to take a model trained on a task X and fine-tune it for another task Y (not that different from X) using a small dataset related to task Y. One still needs to find a pre-trained deep model available for use.

For the final project of an image recognition course (at KTH), I used deep Convolutional Neural Networks (CNNs) to predict human attributes like skin tone, hair color and age from a face image. I used a dataset of face images annotated with facial attributes (40 attributes per image), called CelebA, to fine-tune a CNN pre-trained on a general object classification task (a VGG CNN trained for the ImageNet challenge) to predict facial attributes given a face image. As a baseline, a linear classifier was trained for attribute prediction using representations extracted from the pre-trained CNN. When trained on 20% of the CelebA dataset (~40K images), the fine-tuned CNN achieved an average accuracy of ~89.9% predicting 40 different attributes per image. The MatConvNet deep learning framework was used.
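The baseline setup (a linear classifier on fixed CNN representations) can be sketched as independent logistic regressions, one per binary attribute. The random features below are a toy stand-in for the extracted CNN representations, not real data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_attribute_classifier(features, labels, lr=0.1, epochs=1000):
    """One independent logistic regression per binary attribute,
    trained by full-batch gradient descent on fixed features."""
    n, d = features.shape
    k = labels.shape[1]                      # number of attributes
    W = np.zeros((d, k))
    b = np.zeros(k)
    for _ in range(epochs):
        p = sigmoid(features @ W + b)        # (n, k) predicted probabilities
        W -= lr * features.T @ (p - labels) / n   # gradient of mean BCE loss
        b -= lr * (p - labels).mean(axis=0)
    return W, b

# toy stand-ins for pooled CNN features and 3 binary attributes
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
true_W = rng.normal(size=(8, 3))
labels = (feats @ true_W > 0).astype(float)  # linearly separable targets
W, b = train_attribute_classifier(feats, labels)
acc = ((sigmoid(feats @ W + b) > 0.5) == labels).mean()
print(round(acc, 3))
```

Fine-tuning goes further than this baseline by also updating the CNN's own weights, which is where the accuracy gain reported above comes from.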

Check this document for the deep network details, the fine-tuning procedure, the conducted experiments and a discussion of the results.
