Latent Inspector: An Interactive Tool for Probing Neural Network Behaviors Through Arbitrary Latent Activation
Daniel Geißler, Bo Zhou, Paul Lukowicz
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Demo Track. Pages 7127-7130.
https://doi.org/10.24963/ijcai.2023/832
This work presents an active software instrument that allows deep learning architects to interactively inspect a neural network model's output behavior in response to user-manipulated values in any latent layer. Latent Inspector offers multiple dimension-reduction techniques to visualize the model's high-dimensional latent-layer output in human-perceptible, two-dimensional plots. The system is implemented with a Node.js front end for interactive user input and a Python back end for interacting with the model. By utilizing a general and modular architecture, our proposed solution dynamically adapts to a versatile range of models and data structures. Compared to existing tools, our asynchronous approach of separating the training process from the inspection offers additional possibilities, such as interactive data generation, by actively working with the model instead of merely visualizing training logs. Overall, Latent Inspector demonstrates both the possibilities and the emerging limits of providing a generalized, tool-based concept for enhancing model insight in terms of explainable and transparent AI.
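The core loop the abstract describes, projecting high-dimensional latent activations to a 2D plot and mapping a user-manipulated point back into latent space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`project_latents`, `lift_to_latent`) and the choice of PCA via SVD as the dimension-reduction technique are assumptions for the example.

```python
import numpy as np

# Hypothetical sketch of the inspect-and-manipulate loop: project latent
# activations to 2D for display, let the user edit a point, and lift the
# edited point back to latent space to feed through the rest of the model.
# PCA is one of several possible dimension-reduction choices.

def project_latents(latents):
    """PCA via SVD: map high-dimensional latent vectors to 2D coordinates."""
    mean = latents.mean(axis=0)
    centered = latents - mean
    # Rows of vt are the principal directions; keep the first two.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:2]
    coords_2d = centered @ components.T
    return coords_2d, components, mean

def lift_to_latent(point_2d, components, mean):
    """Map a (possibly user-edited) 2D point back into latent space."""
    return point_2d @ components + mean

rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 16))        # stand-in for recorded activations
coords, comps, mu = project_latents(latents)

edited = coords[0] + np.array([0.5, -0.2])  # simulated user drag in the 2D plot
new_latent = lift_to_latent(edited, comps, mu)
print(new_latent.shape)                     # → (16,)
```

In the tool itself, `new_latent` would be injected into the chosen latent layer so the back end can compute and return the model's resulting output to the front end, which is what enables interactive data generation without retraining.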
Keywords:
Machine Learning: ML: Explainable/Interpretable machine learning
AI Ethics, Trust, Fairness: ETF: Trustworthy AI
Data Mining: DM: Data visualization
Data Mining: DM: Exploratory data mining
Humans and AI: HAI: Human-computer interaction
Machine Learning: ML: Feature extraction, selection and dimensionality reduction