
Explainable AI

Deep neural networks (DNNs) often outperform classical machine learning algorithms on real-world problems, which has driven significant interest in the field. However, this performance comes with two drawbacks: on the one hand, a lack of robustness to perturbations undermines it; on the other hand, higher performance generally requires more parameters, which increases computational complexity and storage consumption. Addressing these problems has given rise to two lines of work: network robustness and network compression. Much research has been conducted on network robustness and on various compression techniques, including pruning and low-rank decomposition, yet the theoretical explanation of such techniques has received comparatively little attention. We aim to explain the idea behind such methods using an information-theoretic divergence in a latent space of DNN weights.
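As a minimal illustrative sketch (not the method described above), the snippet below compresses a single weight matrix in two of the ways mentioned, magnitude pruning and low-rank (SVD) decomposition, and then measures how far each compressed weight distribution drifts from the original one with a histogram-based KL divergence. All function names, the layer size, and the histogram estimator are assumptions chosen for illustration only.

```python
# Illustrative sketch: prune or low-rank-compress a weight matrix, then
# compare weight distributions before and after with a KL divergence.
import numpy as np

def magnitude_prune(W, sparsity=0.9):
    """Zero out the fraction `sparsity` of entries with smallest magnitude."""
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0)

def low_rank_approx(W, rank=8):
    """Best rank-`rank` approximation of W via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def kl_divergence(p_samples, q_samples, bins=64, eps=1e-12):
    """KL(P || Q) between histogram estimates of two sets of weights."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))        # stand-in for a trained layer's weights
W_pruned = magnitude_prune(W)
W_lowrank = low_rank_approx(W)

print("KL(original || pruned)  :", kl_divergence(W.ravel(), W_pruned.ravel()))
print("KL(original || low-rank):", kl_divergence(W.ravel(), W_lowrank.ravel()))
```

A divergence of this kind can serve as a simple proxy for how strongly a compression step distorts the distribution of a layer's weights; the research described above pursues this idea in a latent space rather than on raw weights.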

 
