To this day, the theoretical mechanisms of learning with Deep Neural Networks (DNNs) are not well understood. One remarkable contribution is the concept of the Information Bottleneck (IB) introduced by Naftali Tishby [1,2], a computer scientist and neuroscientist at the Hebrew University of Jerusalem. His theory claims to offer a way to understand the impressive success of neural networks across a huge variety of applications. Hence, I'm presenting a general view of DNNs through the lens of information theory, and also building my own experimental setup to test the IB hypothesis.
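The IB analysis in [2] tracks the mutual information between a layer's activations and the inputs/labels, typically estimated by discretizing the activations into bins. As a minimal sketch of such a binned estimator (the function name, binning scheme, and bin count are my own assumptions, not taken from the papers):

```python
import numpy as np

def binned_mutual_information(x_ids, t, n_bins=30):
    """Estimate I(X; T) in bits by discretizing activations T.

    x_ids: integer id per sample for the discrete variable X
    t:     (n_samples, n_units) array of layer activations T
    """
    # Discretize each activation into equal-width bins, then map each
    # row of bin indices to a single discrete symbol per sample.
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    digitized = np.digitize(t, edges[1:-1])
    t_ids = np.unique(digitized, axis=0, return_inverse=True)[1]

    def entropy(ids):
        # Shannon entropy (in bits) of a discrete sample.
        p = np.bincount(ids) / len(ids)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Encode the joint variable (X, T) as a single discrete id,
    # then use I(X; T) = H(X) + H(T) - H(X, T).
    joint_ids = x_ids * (t_ids.max() + 1) + t_ids
    return entropy(x_ids) + entropy(t_ids) - entropy(joint_ids)
```

For example, if the activations determine the label exactly, the estimate equals H(X); if they are constant, it is zero. Estimators of this kind are what make it possible to place each layer on the information plane I(X; T) vs. I(T; Y).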
[1] N. Tishby, F. C. Pereira, and W. Bialek, The Information Bottleneck Method, In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing, 1999.
[2] R. Shwartz-Ziv and N. Tishby, Opening the Black Box of Deep Neural Networks via Information (2017), doi:10.48550/ARXIV.1703.00810.