Broadly, my research interests include Machine Learning, Computer Vision and Analytic Combinatorics.
This project applies state-of-the-art image segmentation techniques to high-noise cellular microscopy data. My work explores current approaches, including Conditional Random Fields, edge detection, the watershed algorithm, and spectral clustering, as well as open-source segmentation pipelines such as CellProfiler.
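As a rough illustration of one of these approaches, here is a minimal watershed-style segmentation sketch using scikit-image; the file name and parameter values are placeholders, not the actual microscopy pipeline.

```python
# Minimal watershed segmentation sketch (illustrative only; file name and
# thresholds are placeholders, not the actual microscopy pipeline).
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, morphology, segmentation, measure

image = io.imread("cells.tif", as_gray=True)      # hypothetical input image

# Denoise and threshold to get a rough foreground mask.
smoothed = filters.gaussian(image, sigma=2)
mask = smoothed > filters.threshold_otsu(smoothed)
mask = morphology.remove_small_objects(mask, min_size=64)

# Use the distance transform to place one marker per cell.
distance = ndi.distance_transform_edt(mask)
markers = measure.label(morphology.local_maxima(distance))

# Watershed splits touching cells along low-distance ridges.
labels = segmentation.watershed(-distance, markers, mask=mask)
print("segments found:", labels.max())
```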
Existing research on automated essay scoring has relied heavily on hand-crafted statistical features and on training over huge datasets, which puts such systems out of reach for primary school teachers. Moreover, until recently, even systems trained on huge datasets yielded only average results. Using LSTMs and feed-forward neural networks, we devise a supervised system that improves on the state of the art, as well as an unsupervised system that performs as well as many recent supervised systems.
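A minimal sketch of the kind of LSTM scorer involved is below, written in Keras; the vocabulary size, dimensions, and dummy data are placeholder assumptions, not the actual system.

```python
# Minimal LSTM essay-scoring sketch in Keras (dimensions, vocabulary size and
# the data below are placeholder assumptions, not the actual system).
import numpy as np
from tensorflow.keras import layers, models

vocab_size, max_len = 20000, 500           # assumed vocabulary and essay length

model = models.Sequential([
    layers.Embedding(vocab_size, 128),      # word-index embedding
    layers.LSTM(64),                        # sequence encoder over the essay
    layers.Dense(32, activation="relu"),    # small feed-forward head
    layers.Dense(1, activation="sigmoid"),  # normalized essay score in [0, 1]
])
model.compile(optimizer="adam", loss="mse")

# Dummy data just to show the expected shapes.
essays = np.random.randint(0, vocab_size, size=(8, max_len))
scores = np.random.rand(8)
model.fit(essays, scores, epochs=1, verbose=0)
```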
This research project used deep neural networks to build a classification model. I extracted feature vectors from a pre-trained VGG16 convolutional network and trained a logistic regression classifier to map them to the scene categories of a famous TV show.
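A sketch of this pipeline, assuming Keras for VGG16 and scikit-learn for the classifier, might look like the following; the frame paths and scene labels are hypothetical.

```python
# Sketch of VGG16 feature extraction feeding a logistic-regression classifier
# (frame paths and labels are placeholders; the real data is TV-show scenes).
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.linear_model import LogisticRegression

extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def features(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x, verbose=0)[0]    # 512-d feature vector

# Hypothetical training frames and scene labels.
X = np.stack([features(p) for p in ["frame1.jpg", "frame2.jpg"]])
y = np.array(["kitchen", "office"])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))
```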
Climate change poses a real and present danger to the future of the world, and to practically every major life form on this planet. We devised a time-series model of the socio-economic factors contributing to climate change and produced several visualizations that establish relations between these factors. More broadly, we built an educational interface whose visualizations are meant to prompt climate-aware thinking, and which provides useful insights through basic statistical and machine learning methods.
The goal of this project was to build a system that finds correspondences between two images, warps one image to align with a reference image, and merges the two into a mosaic with a larger field of view (FOV).
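A minimal homography-based version of this idea, using OpenCV, is sketched below; the file names are placeholders and the blending is a simple paste rather than a full pipeline.

```python
# Minimal homography-based mosaicking sketch with OpenCV (file names are
# placeholders; blending here is a simple paste, not a full pipeline).
import cv2
import numpy as np

ref = cv2.imread("reference.jpg")
mov = cv2.imread("moving.jpg")

# Detect and match keypoints between the two images.
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(mov, None)
k2, d2 = sift.detectAndCompute(ref, None)
matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate the homography mapping the moving image into the reference frame.
src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp onto a wider canvas and paste the reference image over it.
h, w = ref.shape[:2]
mosaic = cv2.warpPerspective(mov, H, (w * 2, h))
mosaic[:h, :w] = ref
cv2.imwrite("mosaic.jpg", mosaic)
```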
The goal of this research project was to compare the performance of different neural network architectures, including feed-forward networks and competitive networks, on a supervised classification problem using a range of metrics.
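As a rough sketch of the evaluation side only, the snippet below scores a small feed-forward network on several metrics; the competitive network would be scored the same way, and the synthetic data here is not the dataset used in the project.

```python
# Sketch: score one architecture (a small feed-forward network) on several
# metrics. Synthetic data only; the competitive network is evaluated likewise.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=500, n_features=20, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print(classification_report(y_te, net.predict(X_te)))  # accuracy, precision, recall, F1
```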
The aim of this project was to identify simple feature measurements of a conjecture and its axioms that provide enough information to determine a good choice of heuristic. Our dataset consists of 5 heuristics, each with results from 14 static and 39 dynamic feature measurements. The basic units affecting these features are the set of processed clauses, the set of unprocessed clauses, and the axioms.
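One hypothetical way to frame this as a learning problem is sketched below: a classifier predicting the best of the 5 heuristics from the 14 static and 39 dynamic measurements. The random data is a placeholder, not the project's dataset, and the random-forest choice is only an illustration.

```python
# Hypothetical sketch: predict the best of the 5 heuristics from the
# 14 static + 39 dynamic feature measurements (random placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

n_problems, n_features = 200, 14 + 39
X = np.random.rand(n_problems, n_features)        # feature measurements
y = np.random.randint(0, 5, size=n_problems)      # index of the best heuristic

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())     # rough selection accuracy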
This project is part of a larger research effort I am currently involved in. Using state-of-the-art big data analytics and data mining techniques over a dataset of around 77,414 megabytes, it analyzes commuting patterns, neighborhoods, traffic, tipping patterns, taxi fares, and more in urban communities. The purpose is to extract useful insights so that the solutions we derive can be mapped to other large cities.
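An illustrative PySpark sketch of one slice of this analysis, average tip fraction by pickup hour, is below; the column names are assumptions about the schema rather than the actual fields.

```python
# Illustrative PySpark sketch: average tip fraction by pickup hour.
# Column names are assumptions about the schema, not the actual fields.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("taxi-analysis").getOrCreate()
trips = spark.read.csv("trips/*.csv", header=True, inferSchema=True)

tips_by_hour = (
    trips.withColumn("hour", F.hour("pickup_datetime"))
         .withColumn("tip_frac", F.col("tip_amount") / F.col("fare_amount"))
         .groupBy("hour")
         .agg(F.avg("tip_frac").alias("avg_tip_fraction"),
              F.count("*").alias("trips"))
         .orderBy("hour")
)
tips_by_hour.show()
```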
Analyzed over 2,500,000 megabytes of Common Crawl data using Amazon EMR over an S3 bucket. The project involved finding patterns where people express their feelings with the phrase "I feel". Words indicating particular sentiments were collected and grouped using Spark.
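A rough sketch of the "I feel ..." extraction with PySpark is shown below; the input path is hypothetical and the grouping step is reduced to a simple word count.

```python
# Rough sketch of the "I feel ..." extraction over crawled text with PySpark;
# the input path is a placeholder and grouping is reduced to a word count.
import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("i-feel").getOrCreate()
lines = spark.sparkContext.textFile("s3://commoncrawl-sample/*.txt")  # hypothetical path

pattern = re.compile(r"\bI feel (\w+)", re.IGNORECASE)

feelings = (
    lines.flatMap(lambda line: pattern.findall(line))
         .map(lambda word: (word.lower(), 1))
         .reduceByKey(lambda a, b: a + b)
)
print(feelings.takeOrdered(20, key=lambda kv: -kv[1]))   # most common feelings
```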
The goal of this experiment was to implement path-finding algorithms and record their running times and the depth they reach in the search space. Using the Manhattan distance and NMT heuristics, our results provide a holistic comparison of both algorithms on the classic 8-Puzzle and 15-Puzzle problems.
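For flavor, here is a minimal A* sketch on the 8-puzzle with the Manhattan-distance heuristic; A* is an assumption on my part here, as a stand-in for the algorithms compared.

```python
# Sketch of A* on the 8-puzzle with the Manhattan-distance heuristic
# (A* is assumed here as a stand-in for the algorithms compared).
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank

def manhattan(state):
    # Sum of |dx| + |dy| for every tile from its goal position.
    return sum(abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
               for i, t in enumerate(state) if t != 0)

def neighbors(state):
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            s = list(state)
            s[b], s[nr * 3 + nc] = s[nr * 3 + nc], s[b]
            yield tuple(s)

def astar(start):
    frontier = [(manhattan(start), 0, start)]
    seen = {start: 0}
    expanded = 0
    while frontier:
        f, g, state = heapq.heappop(frontier)
        expanded += 1
        if state == GOAL:
            return g, expanded            # solution depth, nodes expanded
        for nxt in neighbors(state):
            if nxt not in seen or g + 1 < seen[nxt]:
                seen[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))

print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))   # a shallow example instance
```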
The automated detection of diseases using machine learning techniques has become a key research area. Although the computational cost of analyzing a huge dataset can be extremely high, the value of the results justifies the complexity of the task. In this paper we adopt the K-means clustering algorithm, with a single mean vector of centroids, to group patients into clusters reflecting how likely they are to have chronic kidney disease (CKD). The results are obtained from a real-case dataset from the UCI Machine Learning Repository.
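A minimal K-means sketch in scikit-learn is shown below; the CSV path, the numeric-only column handling, and k = 2 are assumptions for illustration, not the paper's exact setup.

```python
# Minimal K-means sketch in scikit-learn (the CSV path, column handling and
# k=2 are assumptions; the original used the UCI CKD dataset).
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("chronic_kidney_disease.csv")   # hypothetical export of the UCI data
X = data.select_dtypes("number").dropna()          # numeric features only, for simplicity

X_scaled = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)

data.loc[X.index, "cluster"] = km.labels_          # tentative CKD / not-CKD grouping
print(data["cluster"].value_counts())
```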
I attended high school at Delhi Public School and graduated with a Scholar Badge for academic excellence for seven consecutive years.
I believe a tag cloud provides a great description.
At Stony Brook University, I have taken CSE-527 (Computer Vision) by Prof. Minh Hoai Nguyen, CSE-564 (Visualization & Visual Analytics) by Prof. Klaus Mueller, CSE-628 (Natural Language Processing), CSE-537 (Artificial Intelligence) by Prof. Niranjan Balasubramanian, CSE-545 (Big Data Analytics) by Prof. Andrew Schwartz, CSE-534 (Fundamentals of Computer Networks) by Prof. Aruna Balasubramanian, and CSE-548 (Analysis of Algorithms) by Prof. Rezaul Chowdhury.
Here is a list of courses I took at Birla Institute of Applied Sciences. I took Soft Computing as my elective course.
I try my best to learn outside the classroom as well. I really like taking MOOCs from time to time; they let me explore different areas without academic constraints.
I love art. Sometimes I wish I had gone to an art school instead of a math school. But that's fine; maybe I'll go to one in the future. For now, I paint whenever I feel like it.
I like to write and read. Check out my blogs:
Get In Touch