Facebook today introduced Captum, a library for explaining decisions made by neural networks built with the deep learning framework PyTorch. Captum is designed to implement state-of-the-art interpretability algorithms like Integrated Gradients, DeepLIFT, and Conductance. Captum allows researchers and developers to interpret decisions made in multimodal environments that combine, for example, text, images, and video, and lets them compare results to existing models within the library.
Developers can also use Captum to understand feature importance or dive deeper into neural networks to examine neuron and layer attributions.
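In practice, Captum exposes these algorithms as classes that wrap a PyTorch model. The sketch below, using a hypothetical toy classifier rather than anything from Facebook's announcement, shows roughly how feature-level attributions with Integrated Gradients and layer-level attributions with Conductance can be computed:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, LayerConductance

# Hypothetical toy classifier standing in for a real PyTorch model.
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))
model.eval()

inputs = torch.randn(4, 10)  # illustrative batch of 4 examples

# Feature-level attributions with Integrated Gradients.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)
print(attributions.shape)  # (4, 10): one attribution score per input feature

# Layer-level attributions via Conductance on the first linear layer.
lc = LayerConductance(model, model[0])
layer_attrs = lc.attribute(inputs, target=0)
print(layer_attrs.shape)   # (4, 20): one score per neuron in that layer
```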
The tool also launches with Captum Insights, a companion tool for visualizing Captum results. Insights launches with support for Integrated Gradients, with support for additional models coming soon, Facebook said in a blog post.
“There are other libraries out there that are more context focused, but deep learning is really the hardest of hard problems in trying to interpret what the model was actually thinking, so to speak, especially when it comes to these multimodal tech problems,” PyTorch product manager Joe Spisak told VentureBeat in a phone interview.
The news is being announced today at the PyTorch Developer Conference, which takes place at The Midway in San Francisco.
Other new releases today include PyTorch 1.3 with quantization and Google Cloud TPU support, PyTorch Mobile for embedded devices, starting with Android and iOS, and the object detection library Detectron2.
Before being open-sourced today, Captum was used internally at Facebook to better understand decisions made in multimodal environments, Spisak said.
“You can look at any Facebook page and it’s got text, it’s got audio, it’s got video and links, and there’s a number of different types of modalities embedded. And so we basically started with that premise of we want to understand why models are predicting what they’re predicting, but we wanted to do it in a way that was visual, that gave an intuition for users as well as concrete statistics and information that allow them to say confidently this is why the model is making this prediction,” he said.
Interpretability, the ability to understand why an AI model made a decision, matters because it lets developers explain why a model reached a particular conclusion, and it enables the application of AI in businesses that require explainability in order to comply with regulation.
The inability to understand decisions made by deep learning models has popularized the term “black box.”
In a conversation with VentureBeat’s Kyle Wiggers this summer, OpenAI CTO Greg Brockman and chief scientist Ilya Sutskever suggested that future model making should be informed by explainability and reason.
Other tools released this year to help interpret AI inference include IBM’s AI Explainability 360 toolkit and Microsoft’s InterpretML, released in May.