Blog Entries and Personal Journal
One experiment is simply to give AlphaZero an old-fashioned examination on test positions for which the perfect answers are known. These could even be generated in a controlled fashion from chess endgames with seven or fewer pieces on the board, for which perfect play was tabulated by Victor Zakharov and Vladimir Makhnichev using the Lomonosov supercomputer of Moscow State University. Truth in those tables is often incredibly deep: in some positions the win takes over 500 moves, many of which no current chess program (unless equipped with the tables), let alone human player, would find. Alternatively, one can set checkmate-in-N problems that have stumped programs to varying degrees.
This work attempts to reproduce the results of A Neural Conversational Model (a.k.a. the Google chatbot). It uses an RNN (seq2seq model) for sentence prediction and is implemented in Python with TensorFlow.
This work incorporates latent random variables into the hidden state of a recurrent neural network (RNN) by combining elements of the variational autoencoder. The resulting variational RNN (VRNN) uses high-level latent random variables to model the kind of variability observed in highly structured sequential data such as natural speech.
This work compares NLP methods by solving the same verification task (deciding whether a sample belongs to a certain group) with three distinct approaches: a probabilistic graphical model (Bayesian networks / Markov chains), classical machine learning (POS tagging, word2vec features, counts, etc.), and a deep learning method (RNN, LSTM).
MNIST-Logistic-Regression-MLP-CNN
Logistic regression, an MLP with one hidden layer, and a CNN on both MNIST and USPS
Basic requirements:
Logistic regression, an MLP with one hidden layer, and a CNN on both MNIST and USPS, using a publicly available library (such as TensorFlow), are required. There is no need to tune hyperparameters for the CNN. Implementing backpropagation is not required; however, implementing backpropagation yourself independently can earn bonus points (up to an extra 10%). If you choose to do this extra credit, submit the code in a separate file, proj3code_bp.zip.
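As a starting point for the logistic-regression requirement, here is a minimal sketch of multinomial logistic regression trained with batch gradient descent. Synthetic Gaussian clusters stand in for MNIST so the snippet is self-contained; swapping in the real 784-dimensional images only changes the data loading.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features, n_per_class = 3, 10, 100

# Three well-separated Gaussian clusters, one per class (MNIST stand-in).
X = np.vstack([rng.normal(loc=c * 2.0, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
lr = 0.1

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):
    probs = softmax(X @ W + b)               # (N, C) class probabilities
    onehot = np.eye(n_classes)[y]
    grad_logits = (probs - onehot) / len(X)  # gradient of mean cross-entropy
    W -= lr * X.T @ grad_logits
    b -= lr * grad_logits.sum(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

The same gradient update is what a library such as TensorFlow computes automatically; writing it out by hand is essentially the backpropagation bonus for the single-layer case.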
The goal of the assignment is to implement a Laplacian blob detector: a generalized Laplacian of Gaussian (gLoG) filter for detecting general elliptical blob structures in images. The gLoG filter can not only accurately locate the blob centers but also estimate the scales, shapes, and orientations of the detected blobs. These functions are realized by generalizing the common 3-D LoG scale-space blob detector to a 5-D gLoG scale-space one, where the five parameters are the image-domain coordinates (x, y), the scales (σ_x, σ_y), and the orientation θ.
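The gLoG kernel described above can be sketched as an anisotropic Gaussian with scales (σ_x, σ_y) rotated by θ, followed by a Laplacian. The function name and the finite-difference (5-point stencil) Laplacian below are illustrative assumptions; an actual implementation might use the analytic second derivatives instead.

```python
import numpy as np

def glog_kernel(sigma_x, sigma_y, theta, half=None):
    """Rotated anisotropic Gaussian followed by a discrete Laplacian."""
    if half is None:
        half = int(3 * max(sigma_x, sigma_y))  # ~3-sigma support
    y, x = np.mgrid[-half:half + 1, -half:half + 1]

    # Quadratic-form coefficients of the rotated Gaussian exp(-(a x^2 + 2 b x y + c y^2)).
    a = np.cos(theta)**2 / (2 * sigma_x**2) + np.sin(theta)**2 / (2 * sigma_y**2)
    b = -np.sin(2 * theta) / (4 * sigma_x**2) + np.sin(2 * theta) / (4 * sigma_y**2)
    c = np.sin(theta)**2 / (2 * sigma_x**2) + np.cos(theta)**2 / (2 * sigma_y**2)
    g = np.exp(-(a * x**2 + 2 * b * x * y + c * y**2))
    g /= g.sum()  # normalize the Gaussian to unit mass

    # 5-point-stencil Laplacian approximates d2/dx2 + d2/dy2 of the Gaussian.
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0)
           + np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    return lap

# One kernel from the 5-D family: scales (3, 6), orientation 45 degrees.
k = glog_kernel(3.0, 6.0, np.pi / 4)
```

Convolving the image with kernels over a grid of (σ_x, σ_y, θ) values and finding local extrema of the responses in all five dimensions yields the blob centers together with their estimated scales and orientations.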
Resume.
Background.
Projects.