Researchers from Aalto University have developed a breakthrough method of computing that uses a single pass of light to perform complex tensor operations—core computations in modern AI systems. Instead of relying on traditional electronic circuits and sequential processing, this approach encodes data into the amplitude and phase of light waves and allows the optical interactions to carry out matrix and tensor multiplications all at once.
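To make the idea concrete, here is a minimal NumPy sketch of the principle described above. It is purely illustrative, not the Aalto team's actual design: data is modeled as complex numbers (amplitude times a phase factor), and a passive optical element is modeled as a fixed complex matrix, so one "pass" of the field through it computes a full matrix–vector product in a single step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Encode a data vector into the amplitude and phase of a light field:
# each mode carries a complex value a * exp(i * phi).
amplitude = np.array([1.0, 0.5, 2.0])
phase = np.array([0.0, np.pi / 4, np.pi / 2])
x = amplitude * np.exp(1j * phase)  # complex-valued "optical field"

# A passive linear optical element acts as a fixed complex transfer
# matrix W (illustrative random values here).
W = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# "Single-shot" computation: the output field after one pass of light
# through the element is the matrix-vector product W @ x.
y = W @ x

# A detector measures intensity |field|^2; recovering the phase would
# need an interferometric readout.
intensity = np.abs(y) ** 2
```

The point of the sketch is that no iteration or clocked switching appears anywhere: the multiply-accumulate work is done entirely by the linear transformation itself, which in the optical system happens as light propagates.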
This “single-shot tensor computing” technique allows deep-learning tasks (such as convolutions and attention layers) to be executed at the speed of light and with very low energy consumption. The system works passively—no active electronic switching is required during computation—and is designed to be compatible with photonic chips.
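Why would a single optical matrix multiplication cover deep-learning layers like convolutions? Because a convolution is itself a linear map, it can always be rewritten as one matrix multiplication. The sketch below (an illustrative 1-D example, not the paper's construction) builds the Toeplitz matrix of a convolution and checks that multiplying by it matches the direct convolution:

```python
import numpy as np

def conv_matrix(kernel, n):
    """Toeplitz matrix of a 'valid' 1-D convolution over an input of length n."""
    k = len(kernel)
    m = n - k + 1  # number of valid output positions
    M = np.zeros((m, n))
    for i in range(m):
        # Each row holds the flipped kernel, shifted by one position,
        # so M @ signal performs a true convolution.
        M[i, i:i + k] = kernel[::-1]
    return M

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])

direct = np.convolve(signal, kernel, mode="valid")
as_matmul = conv_matrix(kernel, len(signal)) @ signal

print(np.allclose(direct, as_matmul))  # True
```

Since the convolution reduces to a single matrix product, the same one-pass optical primitive that multiplies matrices can, in principle, execute it directly.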
The implications for AI hardware are profound. As AI models continue to demand more compute power and energy (especially for training large language models and other deep-learning systems), this optical method could represent a new class of ultra-efficient processors. Integrating light-based hardware might help overcome current bottlenecks in speed, power usage and scalability.
However, the research is still at an early stage. The researchers estimate that the technique could be integrated into hardware within 3–5 years, but challenges remain, such as manufacturing photonic chips at scale, integrating them with existing infrastructure, and verifying their performance on real-world AI workloads.