Scientists have developed a new optical computing architecture called Parallel Optical Matrix–Matrix Multiplication (POMMM), which could dramatically accelerate AI processing. Traditional optical computing uses light instead of electricity but has been limited by its inability to run many operations in parallel — a major barrier compared to GPU-based systems.
With POMMM, researchers can perform an entire matrix–matrix multiplication in a single pass of one laser pulse, encoding the data into the amplitude and phase of the light. The process is passive: the calculation happens as the light propagates, which greatly reduces energy consumption because no active switching or control is needed during computation.
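To make the parallelism claim concrete, here is a minimal numerical sketch (my own illustration, not the authors' optical implementation): a matrix–matrix product C = A·B decomposes into one matrix–vector product per column of B. Earlier optical setups typically handled roughly one such matrix–vector product per pass, whereas POMMM's claim is that the whole product forms during a single propagation. The complex-field encoding line shows, purely schematically, how real data could ride on the amplitude and phase of light; all variable names here are hypothetical.

```python
import numpy as np

# Hypothetical input data standing in for the values encoded on the light.
rng = np.random.default_rng(42)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))

# Schematic encoding: each entry of A becomes a complex optical amplitude
# E = a * exp(i * phi). Here we use phase 0, so the field carries A directly.
phase = np.zeros(A.shape)
field = A * np.exp(1j * phase)

# Sequential view: one matrix-vector product per column of B,
# analogous to prior optical systems doing one pass per column.
C_sequential = np.column_stack([field @ B[:, j] for j in range(B.shape[1])])

# "Single-shot" view: the full matrix-matrix product at once,
# analogous to the whole result forming in one propagation.
C_parallel = field @ B

# Both views yield the same product (taking the real part undoes
# the trivial zero-phase encoding above).
assert np.allclose(C_sequential.real, A @ B)
assert np.allclose(C_parallel.real, A @ B)
```

The point of the sketch is only the equivalence: the same result that takes many sequential passes in the column-by-column view arrives all at once in the single-shot view, which is where the claimed speed and energy advantage comes from.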
The breakthrough promises not only higher speed, but also compatibility with standard optical platforms — meaning it could be integrated into next-generation AI hardware within 3–5 years, according to the researchers. This could be a game-changer for scaling AI: faster processing, lower power, and potentially more efficient training and inference.
Ultimately, the development brings us closer to scalable, general-purpose optical AI computing, and possibly even a step toward artificial general intelligence (AGI). By overcoming a long-standing optical bottleneck, POMMM offers a fundamentally new path for building high-performance, energy-efficient AI systems.