Scientists have developed a new optical computing architecture that uses light — instead of electricity — to perform the core tensor calculations required by large AI models. By encoding numerical data in the amplitude and phase of light waves, their system can carry out complex operations like matrix multiplications in a single laser pulse, vastly improving speed.
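The encoding idea can be sketched numerically. This is a hypothetical illustration, not the researchers' actual setup: numbers are represented as complex light fields (amplitude carries magnitude, phase carries sign), and propagation through an optical element with a given transmission matrix physically performs the multiplication in one pass.

```python
import numpy as np

# Hypothetical illustration: encode each number as a complex light field,
# with amplitude |x| as the magnitude and phase as the sign/argument.
values = np.array([2.0, -1.5, 0.5])
fields = np.abs(values) * np.exp(1j * np.angle(values + 0j))

# T stands in for the transmission matrix of an optical element; passing
# the input fields through it computes a matrix-vector product "for free",
# as a side effect of light propagation rather than clocked arithmetic.
T = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.4, 0.6]])
output = T @ fields  # one "pass" of light yields the whole product
```

The result matches an ordinary electronic multiply (`T @ values`); the point is that the optical version produces it in a single propagation rather than many sequential multiply-accumulate steps.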
The key innovation, called Parallel Optical Matrix-Matrix Multiplication (POMMM), enables multiple tensor operations to run in parallel during a single light propagation. This overcomes a major limitation of earlier optical systems, which processed operations one at a time and so could not scale the way GPU-based systems do.
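The parallelism gap can be made concrete with a sketch (my own illustration, using ordinary NumPy rather than optics): a matrix-matrix product A @ B is just many independent matrix-vector products A @ B[:, j]. Earlier optical schemes effectively ran these column by column, one light pass each; computing all of them in one propagation is the speedup POMMM targets.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # e.g. a layer's weight matrix
B = rng.standard_normal((3, 5))   # e.g. a batch of 5 input vectors

# Sequential scheme: one column of B per "light pass"
sequential = np.stack([A @ B[:, j] for j in range(B.shape[1])], axis=1)

# Parallel scheme: all columns in a single pass, which is what
# POMMM does optically during one propagation
parallel = A @ B

assert np.allclose(sequential, parallel)
```

Both paths give the same answer; the difference is that the sequential version needs as many passes as there are columns, while the parallel version needs one.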
Because the optical operations happen passively — the calculations occur as light travels without requiring active switching — the energy requirements are extremely low. This could lead to AI hardware that is not just faster, but also far more power-efficient.