Researchers have developed an extremely compact artificial intelligence model inspired by the brain’s visual system, using data from macaque monkeys to dramatically shrink the size of the AI while preserving performance. Traditional AI vision models often require millions of variables and large computational power, but this new model was compressed from around 60 million variables to just about 10,000 — shrinking it to well under a thousandth of its original size (a nearly 6,000-fold reduction) without a proportionate loss of ability.
The breakthrough stems from mimicking how the brain’s visual processing areas — particularly neurons known as V4 cells — respond to shapes, edges and textures. By training the AI on patterns gleaned from monkey neural recordings and then eliminating redundant components, the researchers created a highly efficient, biologically informed system that performs visual recognition tasks far more compactly than typical deep neural networks.
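The paper’s exact compression procedure is not detailed here, but “eliminating redundant components” is commonly done by magnitude pruning: after training, the smallest-magnitude weights are zeroed out and only the most influential ones are kept. A minimal NumPy sketch of that generic idea (toy sizes and names, not the researchers’ actual method):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, keep: int) -> np.ndarray:
    """Keep only the `keep` largest-magnitude weights; zero the rest.

    This is a generic magnitude-pruning sketch, not the study's
    specific compression algorithm.
    """
    flat = np.abs(weights).ravel()
    if keep >= flat.size:
        return weights.copy()
    # Value of the keep-th largest magnitude becomes the cutoff.
    threshold = np.partition(flat, flat.size - keep)[flat.size - keep]
    mask = np.abs(weights) >= threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(600, 100))          # 60,000 toy "parameters"
pruned = magnitude_prune(w, keep=100)    # retain only the top 100
print(np.count_nonzero(pruned))
```

In practice, pruning is usually interleaved with retraining so the surviving weights can compensate for those removed, which is how large reductions can be achieved with little loss of accuracy.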
One key insight is that living brains are vastly more energy-efficient than current AI systems; a human brain consumes less power than a small household light bulb, while conventional AI can consume enormous amounts of energy on similar tasks. The compact model not only suggests how AI can be made more efficient but also offers a simplified way to inspect what the artificial neurons are doing, potentially helping scientists better understand visual processing mechanisms.
Beyond technical performance, the research may help scientists test theories about brain function and could inform future AI designs that are both smaller and more human-like in capability. Experts suggest that adopting insights from biological neural systems might lead to AI that runs on much less power, enabling applications — such as in self-driving cars or portable devices — to operate with lower computational cost while better handling real-world visual recognition challenges.