OpenAI has launched Microscope, a library of visualizations of nine popular, frequently studied neural networks, hosting millions of images. The aim is to help AI researchers understand the behaviour of the tens of thousands of neurons that make up a neural network.

In a blog post, OpenAI said Microscope will help AI researchers and learners understand neural networks by reverse-engineering the workings of the neurons that make up those networks.

“Our models are composed of a graph of “nodes” (the neural network layers), which are connected to each other through “edges.” Each op contains hundreds of “units”, which are roughly analogous to neurons. Most of the techniques we use are useful only at a specific resolution. For instance, feature visualization can only be pointed at a “unit”, not its parent “node”,” explain the makers at OpenAI.
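
As a concrete illustration of pointing feature visualization at a single unit, the quickstart for OpenAI's Lucid library (which underpins Microscope's visualizations) looks roughly like the sketch below. It assumes a TensorFlow 1.x environment with the lucid package installed; the layer name and unit index are arbitrary illustrative choices.

```python
# Minimal sketch of feature visualization with OpenAI's Lucid library,
# assuming TensorFlow 1.x and `pip install lucid`. The target
# ("mixed4a_pre_relu", unit 476) is an arbitrary example.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()   # GoogLeNet / Inception V1, one of the Microscope models
model.load_graphdef()

# Optimize an input image so that it maximally excites one unit ("layer:unit").
images = render.render_vis(model, "mixed4a_pre_relu:476")
```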

“While we’re making this available to anyone who’s interested in exploring how neural networks work, we think the primary value is in providing persistent, shared artifacts to facilitate long-term comparative study of these models. We also hope that researchers with adjacent expertise — neuroscience, for instance — will find value in being able to more easily approach the internal workings of these vision models,” OpenAI said on the Microscope website.

Microscope's library includes commonly studied computer vision models such as AlexNet, which has been cited over 50,000 times in research, along with GoogLeNet (Inception V1) and ResNet v2. Each model comes with a handful of visualizations, and the images, generated with the OpenAI Lucid library, are available for reuse under a Creative Commons license.

Microscope offers two main visualization techniques: feature visualization and DeepDream. It also includes dataset examples and synthetic tuning curves, which show how units respond to families of synthetic images. The group plans to add more images over time.
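
For a rough sense of how the two techniques differ, here is a hypothetical PyTorch sketch (not OpenAI's code) using a torchvision GoogLeNet: feature visualization starts from noise and optimizes the image to excite one chosen unit, while DeepDream starts from an existing image and amplifies whatever the layer already responds to. The layer name inception4a and channel index 97 are arbitrary choices for illustration.

```python
# Hypothetical sketch (not OpenAI's code): the two visualization styles,
# approximated with a torchvision GoogLeNet and plain gradient ascent.
import torch
import torchvision.models as tv

model = tv.googlenet(weights=tv.GoogLeNet_Weights.IMAGENET1K_V1).eval()

activations = {}
model.inception4a.register_forward_hook(   # "inception4a" is an arbitrary example layer
    lambda module, inputs, output: activations.update(target=output)
)

def gradient_ascent(image, steps=50, lr=0.05, channel=None):
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model(image)
        act = activations["target"]
        # Feature visualization: maximize one unit (channel), starting from noise.
        # DeepDream: maximize whatever already fires strongly, starting from a photo.
        loss = act[:, channel].mean() if channel is not None else (act ** 2).mean()
        loss.backward()
        with torch.no_grad():
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

# Feature visualization of one (arbitrarily chosen) unit, starting from random noise:
feature_vis = gradient_ascent(torch.rand(1, 3, 224, 224), channel=97)

# DeepDream-style amplification; a random tensor stands in for a real, normalized photo:
dream = gradient_ascent(torch.rand(1, 3, 224, 224))
```

In practice, Lucid and Microscope apply additional regularization and image parameterizations, so naive gradient ascent like this tends to produce noisier images than the published visualizations.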

The nine models by themselves yield hundreds of thousands of neuron images to study, each with its own peculiarities, which is why OpenAI has restricted the collection to these models for the time being.

Interpreting the vast amounts of data that deep learning models process is a challenge for most researchers.

The deep connections and complexity of neural networks, and of the neurons that make them up, make understanding and unravelling their workings a near-impossible task. So most researchers use visualization techniques to understand the functioning and decision-making processes of these networks.

To interpret a deep learning model reliably, one has to tie interpretability techniques to the key building blocks of the model being studied. The whole exercise depends on getting a macro picture of how the neurons in a specific part of the network interact with each other.

The concept is similar to how we study the cells of an organism and how its various parts form and interact with each other. Microscope does the same with neural networks: it examines the neurons in these networks and how they connect to one another. Microscope and the Lucid library will be a major help in model interpretability; understanding these neuron relationships is fundamental to understanding deep learning models, and the two tools are a solid step in the right direction.

OpenAI is a research laboratory based in San Francisco, California. It works mainly in the artificial general intelligence (AGI) area, aiming to build systems that benefit humankind. It was co-founded by Elon Musk in 2015. He left the company's board in 2018, apparently to focus on his other ventures, though it seems he disagreed with the direction OpenAI was taking, and its research was clashing with what was being done at Tesla and SpaceX, Musk's other companies.