@InProceedings{Padmanabhan_2024_CVPR,
  author    = {Padmanabhan, Namitha and Gwilliam, Matthew and Kumar, Pulkit and Maiya, Shishira R and Ehrlich, Max and Shrivastava, Abhinav},
  title     = {Explaining the Implicit Neural Canvas: Connecting Pixels to Neurons by Tracing their Contributions},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {10957-10967}
}
Explaining the Implicit Neural Canvas: Connecting Pixels to Neurons by Tracing their Contributions
Abstract
The many variations of Implicit Neural Representations (INRs), where a neural network is trained as a continuous representation of a signal, have tremendous practical utility for downstream tasks including novel view synthesis, video compression, and image super-resolution. Unfortunately, the inner workings of these networks are seriously understudied. Our work, eXplaining the Implicit Neural Canvas (XINC), is a unified framework for explaining properties of INRs by examining the strength of each neuron's contribution to each output pixel. We call the aggregate of these contribution maps the Implicit Neural Canvas, and we use this concept to demonstrate that the INRs we study learn to "see" the frames they represent in surprising ways. For example, INRs tend to have highly distributed representations. While lacking high-level object semantics, they have a significant bias for color and edges, and are almost entirely space-agnostic. We arrive at our conclusions by examining how objects are represented across time in video INRs, using clustering to visualize similar neurons across layers and architectures, and show that this representation is dominated by motion. These insights demonstrate the general usefulness of our analysis framework.
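The core idea of a per-neuron, per-pixel contribution map can be illustrated on a toy coordinate MLP. The sketch below is a simplified illustration, not the paper's exact XINC method: for a tiny INR mapping (x, y) coordinates to a grayscale value, it takes each last-hidden-layer neuron's contribution to a pixel to be its activation times its outgoing weight, so that summing the maps (plus the output bias) recovers the network's output. The network sizes and weights are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, hidden = 8, 8, 16  # illustrative image size and hidden width

# Toy INR weights: coords (2) -> hidden layer -> 1 value per pixel.
W1 = rng.normal(size=(2, hidden))
b1 = rng.normal(size=hidden)
w2 = rng.normal(size=hidden)  # outgoing weights of the last hidden layer
b2 = 0.1

# Pixel coordinate grid, normalized to [0, 1].
ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (H*W, 2)

acts = np.maximum(coords @ W1 + b1, 0.0)  # ReLU activations, (H*W, hidden)
contribs = acts * w2                      # contribution of each neuron to each pixel
canvas = contribs.reshape(H, W, hidden)   # one HxW contribution map per neuron

# Sanity check: summing contributions (plus bias) recovers the INR's output.
recon = canvas.sum(axis=-1) + b2
output = (acts @ w2 + b2).reshape(H, W)
assert np.allclose(recon, output)
```

The `canvas` array stacks one spatial contribution map per neuron; aggregating and clustering such maps across layers and frames is the kind of analysis the abstract describes.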