Reality is merely an illusion, albeit a very persistent one.

Blog Entries and Personal Journal


Benchmarking AlphaZero

One experiment is simply to give AlphaZero an old-fashioned examination on test positions for which the perfect answers are known. These could even be generated in a controlled fashion from chess endgames with 7 or fewer pieces on the board, for which perfect play was tabulated by Victor Zakharov and Vladimir Makhnichev using the Lomonosov supercomputer of Moscow State University. Truth in those tables is often incredibly deep—in some positions the win takes over 500 moves, many of which no current chess program (not equipped with the tables), let alone a human player, would find. Or one can set checkmate-in-N problems that have stumped programs to varying degrees.
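The examination idea reduces to a scoring harness: feed the engine positions whose tablebase-perfect moves are known and count agreements. A minimal sketch follows; the `engine` callable, the position labels, and the toy answers are all placeholders, since a real harness would probe actual tablebases and a real engine.

```python
# Hedged sketch: score an engine against positions with known perfect answers.
# The positions and the stand-in "engine" below are placeholders, not real chess data.

def benchmark(engine, test_suite):
    """Return the fraction of positions where the engine's move
    matches the known tablebase-perfect move."""
    correct = sum(1 for pos, best in test_suite if engine(pos) == best)
    return correct / len(test_suite)

# Toy test suite: (position, known-perfect move) pairs.
SUITE = [
    ("pos1", "Kb6"),
    ("pos2", "Rh8#"),
    ("pos3", "Qe7"),
]

# A stand-in engine that gets two of the three right.
answers = {"pos1": "Kb6", "pos2": "Rh8#", "pos3": "Qd7"}
toy_engine = answers.get

print(benchmark(toy_engine, SUITE))  # 2 of 3 correct
```

The same loop works whether the gold answers come from 7-piece tables or from mate-in-N problem sets; only the test suite changes.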

Sequence-to-Sequence Conversation Learning: seq2seq RNN

This work tries to reproduce the results of A Neural Conversational Model (aka the Google chatbot). It uses an RNN (seq2seq model) for sentence prediction, implemented in Python with TensorFlow.
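The seq2seq shape is worth spelling out: an encoder folds the input tokens into a context state, and a decoder emits one reply token per step from that state until it produces an end-of-sequence marker. In the real model both step functions are learned RNN cells; the stubs below are hand-written placeholders that only illustrate the interface.

```python
# Hedged toy sketch of the seq2seq interface. The encode/decode stubs are
# hand-written stand-ins for learned RNN cells, not the chatbot's model.

EOS = "<eos>"

def encode(tokens):
    """Fold the whole input into a single context state (here: just a list).
    An RNN encoder would update a hidden vector per token instead."""
    state = []
    for tok in tokens:
        state.append(tok)
    return state

def decode(state, max_len=10):
    """Emit tokens one at a time, conditioned on the context state.
    This stub 'replies' by echoing the input reversed, then <eos>."""
    out = []
    for tok in reversed(state):
        if len(out) >= max_len:
            break
        out.append(tok)
    out.append(EOS)
    return out

reply = decode(encode(["how", "are", "you"]))
print(reply)  # ['you', 'are', 'how', '<eos>']
```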

A Recurrent Latent Variable Model for Sequential Data

This work incorporates latent random variables into the hidden state of a recurrent neural network (RNN) by combining elements of the variational autoencoder. The high-level latent random variables of the resulting variational RNN (VRNN) are used to model the kind of variability observed in highly structured sequential data such as natural speech.
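The VRNN's training objective is a timestep-wise evidence lower bound; sketched in LaTeX below as I understand it from the paper (Chung et al., 2015), with notation assumed rather than quoted, so check against the original:

```latex
\mathbb{E}_{q(z_{\le T} \mid x_{\le T})}\!\left[
  \sum_{t=1}^{T} \Big(
    \log p(x_t \mid z_{\le t}, x_{<t})
    - \mathrm{KL}\big( q(z_t \mid x_{\le t}, z_{<t}) \,\|\, p(z_t \mid x_{<t}, z_{<t}) \big)
  \Big)
\right]
```

At each step the reconstruction term and the KL term are conditioned on the RNN hidden state, which summarizes $x_{<t}$ and $z_{<t}$; that conditioning of the prior is what distinguishes the VRNN from a per-step VAE.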

Sequence Generative Adversarial Nets with Policy Gradient: SeqGAN

This work tries to reproduce the results of SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient (aka SeqGAN). It uses an RNN as the sequence generator, trained against a discriminator. By modeling the generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by performing the policy gradient update directly.
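The policy-gradient trick can be written out explicitly. This is a hedged reconstruction of the SeqGAN generator gradient (symbols assumed: generator $G_\theta$, discriminator $D_\phi$, and an action value $Q$ obtained from discriminator scores via Monte Carlo rollouts), so verify against the paper:

```latex
\nabla_\theta J(\theta) \;=\; \sum_{t=1}^{T}
  \mathbb{E}_{Y_{1:t-1} \sim G_\theta}\!\left[
    \sum_{y_t \in \mathcal{Y}} \nabla_\theta\, G_\theta(y_t \mid Y_{1:t-1})
    \cdot Q^{G_\theta}_{D_\phi}(Y_{1:t-1},\, y_t)
  \right]
```

Because the reward enters only through $Q$, the update never needs to differentiate through the discrete token sampling, which is exactly the differentiation problem the blurb mentions.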

Human Behaviour Prediction - PGM vs ML vs Siamese LSTMs

This work compares three approaches to an NLP verification task (deciding whether a sample belongs to a certain group): a probabilistic graphical model (PGM) method (Bayesian networks / Markov chains), simple machine learning (POS tagging, word2vec feature counts, etc.), and a deep learning method (RNN / LSTM).
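The verification task itself can be sketched independently of the three methods: embed the sample and the group's reference samples, then threshold an average similarity. The sketch below uses bag-of-words counts and cosine similarity as a stand-in for the word2vec or LSTM embeddings; the threshold and sentences are made up for illustration.

```python
# Hedged toy sketch of the verification task: does a sample "belong" to a
# group? Bag-of-words cosine similarity stands in for learned embeddings.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    num = sum(a[w] * b[w] for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def verify(sample, group_samples, threshold=0.5):
    """True if the sample's mean similarity to the group exceeds threshold."""
    feats = Counter(sample.lower().split())
    sims = [cosine(feats, Counter(g.lower().split())) for g in group_samples]
    return sum(sims) / len(sims) >= threshold

group = ["the cat sat on the mat", "a cat on a mat"]
print(verify("the cat is on the mat", group))    # similar wording -> True
print(verify("quantum chromodynamics rules", group))  # no overlap -> False
```

Each of the three approaches in the comparison effectively swaps out `feats` (Bayesian net likelihoods, engineered features, or LSTM hidden states) while keeping this same accept/reject decision.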

Logistic Regression - MNIST + USPS Dataset

MNIST-Logistic-Regression-MLP-CNN

Logistic regression, an MLP with one hidden layer, and a CNN on both MNIST and USPS

Basic requirements:

Logistic regression, an MLP with one hidden layer, and a CNN on both MNIST and USPS, using a publicly available library (such as TensorFlow, …), are required. There is no need to tune hyperparameters for the CNN. Implementing backpropagation is not required; however, implementing backpropagation yourself independently can earn bonus points (up to an extra 10%). If you choose to do this extra credit, submit the code in a separate file, proj3code_bp.zip.
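The bonus part (hand-implemented gradients rather than library autodiff) is easiest to see in the logistic-regression case, where backpropagation collapses to one chain-rule step. A minimal sketch, assuming cross-entropy loss and plain SGD; the 2-D toy data stands in for MNIST/USPS feature vectors:

```python
# Hedged sketch of the bonus: logistic regression trained with a hand-derived
# gradient (the one-layer case of backpropagation), no ML library involved.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=200):
    """SGD on cross-entropy loss; for sigmoid + cross-entropy, dL/dz = p - y."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                                   # backprop step
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5

# Linearly separable toy data: class 1 when x0 + x1 is large.
X = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8)]
y = [0, 0, 1, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [False, False, True, True]
```

The MLP case adds one more chain-rule layer (propagate `err` through the hidden activations); the structure of the update loop is otherwise the same.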

RANSAC, Homography and Fundamental Matrix Estimation

Implementation of robust homography and fundamental matrix estimation to register pairs of images related by a 2D or 3D projective transformation.
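The "robust" part is the RANSAC loop: repeatedly fit a model to a minimal random sample, count inliers, and keep the best hypothesis. Fitting a homography needs a linear-algebra solver (e.g. SVD over four point correspondences), so the sketch below fits a 2-D line instead; the sample/fit/score/keep-best structure is identical, and all constants are illustrative.

```python
# Hedged sketch of the RANSAC loop, using line fitting as a stand-in for
# homography estimation (which would need SVD on point correspondences).
import random

def fit_line(p, q):
    """Line y = m*x + c through two points (assumes distinct x)."""
    m = (q[1] - p[1]) / (q[0] - p[0])
    return m, p[1] - m * p[0]

def ransac_line(points, iters=200, tol=0.1, seed=0):
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        p, q = rng.sample(points, 2)      # minimal sample: 2 points
        if p[0] == q[0]:
            continue                      # degenerate sample, skip
        m, c = fit_line(p, q)
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + c)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (m, c), inliers
    return best, best_inliers

# Points on y = 2x + 1, plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, -5.0), (7, 40.0)]
model, inliers = ransac_line(pts)
print(model, len(inliers))  # ~(2.0, 1.0) with 10 inliers
```

For homographies the minimal sample is 4 correspondences and the residual is the symmetric transfer error; for fundamental matrices it is 7 or 8 correspondences with the epipolar (e.g. Sampson) distance.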

Application of a Laplacian Blob Detector

The goal of the assignment is to implement a Laplacian blob detector: a generalized Laplacian of Gaussian (gLoG) filter for detecting general elliptical blob structures in images. The gLoG filter can not only accurately locate the blob centers but also estimate the scales, shapes, and orientations of the detected blobs. These functions are realized by generalizing the common 3-D LoG scale-space blob detector to a 5-D gLoG scale-space one, where the five parameters are the image-domain coordinates (x, y), the scales (σ_x, σ_y), and the orientation (θ).
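The extra two dimensions come from replacing the isotropic Gaussian with a rotated anisotropic one. A hedged sketch of the kernel in its standard form (verify the exact normalization against the gLoG paper):

```latex
G(x, y;\, \sigma_x, \sigma_y, \theta) = \exp\!\big( -(a x^2 + 2 b x y + c y^2) \big), \quad
\begin{aligned}
a &= \tfrac{\cos^2\theta}{2\sigma_x^2} + \tfrac{\sin^2\theta}{2\sigma_y^2}, \\
b &= -\tfrac{\sin 2\theta}{4\sigma_x^2} + \tfrac{\sin 2\theta}{4\sigma_y^2}, \\
c &= \tfrac{\sin^2\theta}{2\sigma_x^2} + \tfrac{\cos^2\theta}{2\sigma_y^2},
\end{aligned}
```

with the gLoG filter given by its Laplacian, $\nabla^2 G = \partial^2 G / \partial x^2 + \partial^2 G / \partial y^2$. Setting $\sigma_x = \sigma_y$ makes $b = 0$ and recovers the ordinary isotropic LoG.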
