Making AI More Brain‑Like: New Research from Johns Hopkins University

Recent research from Johns Hopkins shows that designing artificial intelligence systems with architectures inspired by the human brain can produce AI that behaves — even before training — in ways more similar to human or primate brains than conventional models.

The study tested different neural-network blueprints — including transformer-based, fully connected, and convolutional architectures — and found that convolutional networks modified to be more brain-like (e.g., with scaled-up artificial neurons) produced activation patterns in response to images that closely matched those recorded in real human and non-human primate brains viewing the same images.

This finding challenges a dominant assumption in AI development: that success depends primarily on training large models with massive amounts of data and compute. The results suggest that architecture can matter as much as raw data scale — selecting the right blueprint could accelerate learning, reduce resource requirements, and yield systems that learn more efficiently, closer to how humans do.

In other words: the future of AI may lie not just in “bigger models” but in carefully designed, biologically inspired models that capture the structural strengths of the brain. If scaled and refined, this approach could reshape how AI is built — with benefits in efficiency, performance, and possibly even safety or interpretability.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
