Meta has started testing its first in-house AI training chip, a significant milestone in its effort to reduce reliance on external suppliers such as Nvidia. The company has begun a small deployment of the chip and plans to ramp up production if the test succeeds. The move is part of Meta's long-term plan to rein in its infrastructure costs: the company has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure driven largely by AI infrastructure.
The new chip is a dedicated accelerator built to handle only AI-specific tasks, which can make it more power-efficient than the general-purpose GPUs typically used for AI workloads. Meta is working with Taiwan-based chip manufacturer TSMC to produce the chip, which belongs to the company's Meta Training and Inference Accelerator (MTIA) series.
The test comes as Meta aims to begin using its own chips to train AI models by 2026, focusing first on recommendation systems and later expanding to generative AI products. The company has already made progress on this front, having begun using an MTIA chip for inference in its recommendation systems last year.