Most datasets contain some form of noise that affects the downstream machine learning task. Given a clean training dataset, a deep neural network trained on that clean data, and a noisy test dataset, we explore denoising the test data without retraining the model, by exploiting the denoising capabilities of restricted Boltzmann machines and the hidden-layer representations of the deep neural network.
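The RBM-based denoising idea can be sketched as follows: fit an RBM on the clean training data only, then map a corrupted test sample through the hidden layer and back (a mean-field reconstruction pass). The toy data, hyperparameters, and the `denoise` helper below are illustrative assumptions, not the project's actual setup.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Toy "clean" binary dataset: two repeated 8-bit patterns (illustrative only).
patterns = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
                     [0, 0, 0, 0, 1, 1, 1, 1]])
X_clean = np.repeat(patterns, 200, axis=0).astype(float)

# Fit the RBM on the clean training data only; the noisy test data is never seen.
rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(X_clean)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def denoise(v, steps=5):
    """Mean-field reconstruction v -> h -> v, repeated a few times."""
    for _ in range(steps):
        h = rbm.transform(v)  # P(h = 1 | v)
        v = sigmoid(h @ rbm.components_ + rbm.intercept_visible_)
    return v

# Corrupt one test sample by flipping two bits, then reconstruct it.
v_noisy = X_clean[:1].copy()
v_noisy[0, [0, 5]] = 1 - v_noisy[0, [0, 5]]
v_denoised = denoise(v_noisy)
```

The reconstruction stays near the training manifold because the RBM's free energy was shaped only by clean samples, which is the property the project relies on.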
Implementation of "Estimating Differential Entropy under Gaussian Convolutions" (Goldfeld, Greenewald, and Polyanskiy, 2019). We estimate the mutual information between the input layer and each hidden-layer representation of a noisy deep neural network, in which additive white Gaussian noise (AWGN) is injected into each of these representations. We further extend this work to estimate information flow in graph neural networks.
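The core quantity in this setting is the differential entropy of a hidden representation S convolved with Gaussian noise, h(S + Z) with Z ~ N(0, σ²I), since the mutual information decomposes as I(X; T) = h(S + Z) − h(Z). A minimal Monte Carlo sketch, assuming the sample-propagation style of estimator (the function name and sample sizes are illustrative): the density of S + Z is approximated by a Gaussian mixture centered at the observed samples, and the entropy by the negative mean log-density over noisy draws.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def entropy_gaussian_convolution(samples, sigma, n_mc=2000):
    """Monte Carlo estimate of h(S + Z), Z ~ N(0, sigma^2 I), using the
    Gaussian-mixture density (1/n) sum_i N(x; s_i, sigma^2 I)."""
    n, d = samples.shape
    # Draw noisy copies S + Z by resampling the data and adding AWGN.
    noisy = samples[rng.integers(0, n, n_mc)] + sigma * rng.standard_normal((n_mc, d))
    # Log of the mixture density evaluated at each noisy draw.
    diffs = noisy[:, None, :] - samples[None, :, :]          # (n_mc, n, d)
    log_kernel = (-np.sum(diffs ** 2, axis=2) / (2 * sigma ** 2)
                  - 0.5 * d * np.log(2 * np.pi * sigma ** 2))
    log_density = logsumexp(log_kernel, axis=1) - np.log(n)
    return -np.mean(log_density)

# Sanity check: if all samples are zero, S + Z is pure noise, so the
# estimate should approach the Gaussian entropy (d/2) * log(2*pi*e*sigma^2).
h_est = entropy_gaussian_convolution(np.zeros((500, 1)), sigma=1.0)
```

Subtracting h(Z) = (d/2) log(2πeσ²) from such an estimate then yields the mutual information between the input and the noisy representation.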
We attempt to solve the game of Pommerman with deep reinforcement learning, designing both curriculum learning and reward-engineering methods to progressively train the game agent.
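The two ideas named above can be sketched generically, without the Pommerman API: a curriculum that advances the agent to a harder stage once its recent success rate crosses a threshold, and a shaped reward that adds small intermediate bonuses on top of the sparse win/loss signal. The stage names, threshold, and bonus values below are hypothetical placeholders, not the project's actual settings.

```python
from collections import deque

class Curriculum:
    """Advance to the next stage when the recent success rate is high enough."""
    def __init__(self, stages, threshold=0.8, window=100):
        self.stages = stages        # e.g. ["static_enemies", "one_opponent", "full_game"]
        self.threshold = threshold
        self.window = deque(maxlen=window)
        self.idx = 0

    @property
    def stage(self):
        return self.stages[self.idx]

    def record(self, won):
        """Record one episode outcome and advance the stage if warranted."""
        self.window.append(won)
        full = len(self.window) == self.window.maxlen
        if (full and sum(self.window) / len(self.window) >= self.threshold
                and self.idx < len(self.stages) - 1):
            self.idx += 1
            self.window.clear()     # restart the running average on the new stage

def shaped_reward(outcome, powerups_picked, walls_destroyed):
    """Sparse win/loss reward plus hand-engineered bonuses (illustrative values)."""
    r = {"win": 1.0, "loss": -1.0, "ongoing": 0.0}[outcome]
    r += 0.05 * powerups_picked + 0.01 * walls_destroyed
    return r

cur = Curriculum(["static_enemies", "full_game"])
for _ in range(100):
    cur.record(True)                # 100 consecutive wins fill the window
```

The running-window design keeps the curriculum robust to early lucky streaks: the agent must sustain the success rate over a full window before the environment gets harder.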