Artificial intelligence is everywhere, from the personal assistant in your pocket to self-driving cars navigating busy streets. Yet, for all their apparent intelligence, the neural networks powering these technologies have long been a mystery. AI developers don’t write explicit rules for these systems. Instead, they feed them vast quantities of data, and the networks learn on their own to spot patterns. Because these systems offer little insight into how they reach their decisions, they have often been called “black boxes”.
A team of researchers at Western University has now taken a major step toward opening these AI black boxes. By applying advanced mathematics, they’ve developed a method to trace the step-by-step operation of these algorithms and understand their decision-making processes.
Their work marks a milestone in the quest for so-called explainable AI.
“We create neural networks that can perform specific tasks, while also allowing us to solve the equations that govern the networks’ activity,” said Lyle Muller, director of Western’s Fields Lab for Network Science. “This mathematical solution lets us ‘open the black box’ to understand precisely how the network does what it does.”
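To get a feel for what “solving the equations” of a network can mean, consider a toy recurrent network whose state is a vector of complex numbers updated by a single matrix at every step. Because that update is linear, the state at any future time can be written down in one closed-form expression using the matrix’s eigenvalues and eigenvectors, instead of being discovered only by running the simulation forward. The short sketch below illustrates the idea in plain NumPy; the matrix, its size, and the number of steps are arbitrary placeholders, not the weights of the team’s actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear recurrent network: z(t+1) = W @ z(t).
# W and z0 are random stand-ins, not the published cv-RNN weights.
n = 5
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W /= np.abs(np.linalg.eigvals(W)).max()      # keep the dynamics from blowing up
z0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Option 1: simulate the network step by step.
t = 20
z_sim = z0.copy()
for _ in range(t):
    z_sim = W @ z_sim

# Option 2: solve the equations once.
# z(t) = V diag(lambda**t) V^-1 z(0), with V the eigenvectors
# and lambda the eigenvalues of W.
lam, V = np.linalg.eig(W)
z_solved = V @ (lam ** t * np.linalg.solve(V, z0))

print(np.allclose(z_sim, z_solved))          # True: the formula predicts the simulation
```

Having a formula for the state at any time, rather than only a recording of it, is what lets researchers say precisely why a network ends up where it does.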
A New Way of Seeing
The researchers demonstrated their method by tackling image segmentation, a key process in computer vision. Image segmentation divides an image into distinct parts, such as separating a polar bear from its snowy background. But how does a computer decide where one object ends and another begins?
Starting with basic shapes like squares and triangles, the team created a recurrent neural network (RNN) capable of identifying these components. Using a mathematical approach developed for studying other types of networks, they mapped how the neural network processed each image step by step.
The innovation introduced by the researchers hinges on the neural network’s ability to operate with complex numbers. This design, inspired by natural processes like brain waves, creates dynamic, flowing patterns across an image. These patterns, or “traveling waves,” separate different regions, such as a triangle from its background.
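One way to picture such a wave: a row of complex-valued units that all oscillate at the same rate, with each unit lagging its neighbour by a fixed phase, so the crest sweeps steadily across the row. The few lines below are only a cartoon of that idea; the frequency, spacing, and number of units are arbitrary choices, not values from the study.

```python
import numpy as np

# A 1-D row of complex-valued units carrying a traveling wave:
# every unit oscillates at the same frequency omega, but each lags
# its neighbour by a fixed phase k, so the pattern moves along the row.
positions = np.arange(32)
omega, k = 0.4, 0.3                      # arbitrary frequency and phase lag per unit

def wave(t):
    return np.exp(1j * (omega * t - k * positions))

# The crest (the unit whose phase is closest to zero) moves rightward over time.
for t in (0, 5, 10):
    print(t, int(np.argmin(np.abs(np.angle(wave(t))))))   # prints 0, then 7, then 13
```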
Surprisingly, the same network also handled far more complex scenes, such as photographs of wildlife in natural settings.
“By simplifying the process to gain mathematical insight, we were able to construct a network that was more flexible than previous approaches and also performed well on new inputs it had never seen,” said Muller.
The flexibility of this new network comes with a significant bonus: it can be explained. Unlike traditional neural networks, whose operations are often opaque even to their creators, this system reveals exactly how its computations unfold.
From the Cortex to the Computer
The team drew inspiration from neuroscience. The visual cortex, the part of the brain that processes visual information, exhibits traveling waves of activity during sensory tasks. These waves aren’t just noise; they carry computational meaning.
By modeling these waves mathematically, the team created a network that doesn’t require repeated training to handle different images. Instead, a single set of parameters guides the RNN to identify objects in simple geometric patterns, natural scenes, and everything in between.
The network, called a complex-valued RNN (cv-RNN), uses two key features:
- Amplitude and Phase: Each pixel is represented as a complex number, with its brightness encoded in amplitude and its group (object) determined by phase.
- Recurrent Connectivity: Pixels influence their neighbors, much like neurons in the brain. This interaction generates waves that flow through regions belonging to the same object.
These dynamics unfold over two layers. The first separates objects from the background, while the second distinguishes individual objects. The results are astonishingly clear, even for overlapping shapes or detailed natural scenes.
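Those two ingredients can be sketched in a few dozen lines of code. The toy below is not the published cv-RNN: its connectivity, update rule, and phase readout are simplified stand-ins, and the grid size, rotation rates, and thresholds are arbitrary. It only illustrates the encoding described above, with each pixel stored as a complex number whose amplitude carries brightness, neighbours coupled more strongly when they look alike, and phases drifting apart between regions while staying aligned within them.

```python
import numpy as np

# Toy image: a bright 6x6 square on a dark 16x16 background.
H = W = 16
img = np.zeros((H, W))
img[4:10, 4:10] = 1.0
flat = img.ravel()
n = flat.size

# Amplitude and phase: each pixel becomes a complex number whose
# amplitude encodes brightness; every phase starts at zero.
z = (flat + 0.2) * np.exp(1j * 0.0)

# Recurrent connectivity: nearest neighbours are coupled, and the
# coupling is strong only when the two pixels have similar brightness.
C = np.zeros((n, n))
for r in range(H):
    for c in range(W):
        i = r * W + c
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                j = rr * W + cc
                C[i, j] = 1.0 - abs(flat[i] - flat[j])

# Each pixel's phase advances at a rate set by its brightness, while the
# coupling pulls it toward the average phase of its similar neighbours.
# Brighter (object) pixels therefore drift away from the background in
# phase, and coupled pixels stay in step with each other.
steps, base, gain, pull = 40, 0.10, np.pi / 40.0, 0.3
for _ in range(steps):
    neighbour_mean = C @ z
    shift = np.angle(neighbour_mean / np.where(z == 0, 1, z))
    z = np.abs(z) * np.exp(1j * (np.angle(z) + base + gain * flat + pull * shift))

# Read out the segmentation from phase alone: pixels out of phase with a
# known background corner are labelled as the object.
reference = z.reshape(H, W)[0, 0]
mask = (np.cos(np.angle(z) - np.angle(reference)) < 0).astype(int).reshape(H, W)
print(mask)   # prints a block of 1s where the square sits
```

In this clean two-tone image the brightness-driven phase drift does most of the work; the neighbour coupling matters when regions are noisy or textured, keeping each object’s pixels rotating together. The actual cv-RNN achieves the grouping through its connectivity and an exact mathematical solution rather than a hand-tuned rule like this one.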
The team even connected their network to a living brain cell in collaboration with physiology professor Wataru Inoue. The result was a hybrid system that merged artificial and biological neural networks, opening the door to futuristic possibilities in neuroscience and AI.
“This kind of fundamental understanding is crucial as we continue to develop more sophisticated AI systems that we can trust and rely on,” said Roberto Budzinski, a post-doctoral scholar on the team.
Why It Matters
Neural networks have revolutionized industries, but they remain a double-edged sword: their power comes with a lack of transparency that has raised concerns about trust and safety. By creating systems that are both effective and explainable, researchers are paving the way for AI that is both more capable and more responsible.
“This is just the beginning,” Muller said. “We believe this mathematical understanding can be useful far beyond this first example.”
As AI continues to shape the world, understanding how it thinks is more than an academic exercise — it’s a necessity. This work is a step toward ensuring that the technologies of tomorrow are not only smarter but also safer and more accountable.
The findings were reported in the journal PNAS.