I’m a graduate student at the University of Southern California studying Computer Science, advised by Professor Aiichiro Nakano and Professor Emilio Ferrara. My research interests are in network analysis, geometric and quantum deep learning, and high-performance computing. In particular, I focus on developing deep learning techniques for network analysis problems by exploiting high-performance and quantum computing architectures.
Before joining USC, I was a research assistant at the Indian Statistical Institute with Professor Saroj Kumar Meher, where I worked on feature engineering of geological data. I was also a research associate at M. S. Ramaiah Institute of Technology with Professor Krishnaraj P M, where I collaborated on a textbook focusing on the practical aspects of social network analysis.
MS in Computer Science, 2020
University of Southern California
BE in Information Science and Engineering, 2016
M. S. Ramaiah Institute of Technology
Nov  2020 - One new preprint submitted to arXiv 
Aug  2020 - Successfully defended my master's thesis! 
Nov  2019 - One paper accepted in Computational Materials Science 
Aug  2018 - I’ve moved to USC! 
Aug  2018 - Practical Social Network Analysis is now available! See the website for details 
Jul  2017 - One paper accepted in Social Network Analysis and Mining 
Most datasets contain some form of noise that affects the downstream machine learning task. Given a clean training dataset, a deep neural network trained on that clean data, and a noisy test dataset, we explore the possibility of denoising the test data without retraining the model, by exploiting the denoising capabilities of restricted Boltzmann machines and the representations learned in the hidden layers of the deep neural network.
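A minimal sketch of the RBM-based denoising idea, assuming the inputs are scaled to [0, 1] so that scikit-learn's BernoulliRBM applies; the data, layer sizes, and number of Gibbs steps are illustrative rather than the settings used in the project.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)

# Illustrative data: a clean training set and a noisy test set, both in [0, 1].
X_clean = (rng.random((1000, 64)) > 0.5).astype(float)
X_noisy = np.clip(X_clean[:100] + 0.2 * rng.standard_normal((100, 64)), 0.0, 1.0)

# Fit the RBM on clean data only, so it models the clean data distribution.
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X_clean)

# Denoise by running a few Gibbs sampling steps starting from the noisy inputs;
# each step pulls the samples toward configurations the RBM finds probable.
X_denoised = X_noisy.copy()
for _ in range(10):
    X_denoised = rbm.gibbs(X_denoised)

# X_denoised can then be fed to the already-trained network without retraining it.
```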
Implementation of “Estimating Differential Entropy under Gaussian Convolutions” (Goldfeld, Greenewald, and Polyanskiy, 2019). Here we estimate the mutual information between the input layer and each hidden-layer representation of a noisy deep neural network, in which additive white Gaussian noise (AWGN) is injected into each of these representations. We further extend this work to estimate information flow in graph neural networks.
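A rough sketch of the entropy-estimation step, assuming the sampled layer representations are available as a NumPy array T of shape (n, d) and that AWGN with standard deviation sigma is injected at that layer; the distribution of T + Z is approximated by a Gaussian mixture centred at the samples and its differential entropy is estimated by Monte Carlo. Variable names and sample counts are illustrative, not the paper's reference implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import logsumexp

def mixture_entropy_mc(T, sigma, n_mc=5000, seed=0):
    """Monte Carlo estimate (in nats) of h(T + Z) with Z ~ N(0, sigma^2 I),
    approximating T + Z by a Gaussian mixture with one component per sample."""
    rng = np.random.default_rng(seed)
    n, d = T.shape
    # Draw samples from the mixture: pick a centre, then add Gaussian noise.
    idx = rng.integers(0, n, size=n_mc)
    samples = T[idx] + sigma * rng.standard_normal((n_mc, d))
    # Log-density of each sample under the full n-component mixture.
    sq_dists = cdist(samples, T, metric="sqeuclidean")          # (n_mc, n)
    log_comp = -0.5 * sq_dists / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)
    log_p = logsumexp(log_comp, axis=1) - np.log(n)
    return -np.mean(log_p)

def mutual_information(T, sigma):
    """I(X; T + Z) = h(T + Z) - h(Z) when T = f(X) is a deterministic layer map."""
    d = T.shape[1]
    h_noise = 0.5 * d * np.log(2 * np.pi * np.e * sigma**2)
    return mixture_entropy_mc(T, sigma) - h_noise
```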
We attempt to solve the game of Pommerman using deep reinforcement learning, combining curriculum learning and reward-engineering methods to progressively train the game agent.
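An illustrative sketch of the reward-engineering and curriculum ideas, with purely hypothetical event names, bonus values, and stage thresholds; the actual shaping terms and schedule used in the project may differ.

```python
# Hypothetical curriculum: start against easy opponents on sparse boards and
# promote the agent to harder stages as its rolling win rate improves.
CURRICULUM = [
    {"opponent": "static",     "wood_density": 0.1, "promote_at_win_rate": 0.8},
    {"opponent": "random",     "wood_density": 0.2, "promote_at_win_rate": 0.7},
    {"opponent": "rule_based", "wood_density": 0.3, "promote_at_win_rate": 0.6},
]

def shaped_reward(event: str) -> float:
    """Dense reward layered on top of the sparse win/loss outcome.
    Event names and magnitudes are illustrative assumptions."""
    bonuses = {
        "destroyed_wood":     0.02,
        "picked_up_powerup":  0.05,
        "enemy_eliminated":   0.50,
        "won_episode":        1.00,
        "died":              -1.00,
    }
    return bonuses.get(event, 0.0)

# Example: accumulate the shaped return over a sequence of in-game events.
episode_events = ["destroyed_wood", "picked_up_powerup", "enemy_eliminated", "won_episode"]
episode_return = sum(shaped_reward(e) for e in episode_events)
```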