Researchers Enable Privacy-Preserving AI Training on Everyday Devices

A new technique from MIT researchers makes it easier to train AI models without compromising user privacy. The work significantly improves federated learning, a method that lets devices such as smartphones, sensors, and wearables collaboratively train AI models while sensitive data stays stored locally. Instead of sending raw data to central servers, devices share only model updates, so personal information never leaves the device.
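The update-sharing idea can be sketched with federated averaging (FedAvg), a common baseline for federated learning. The tiny linear model, learning rate, and two-device setup below are illustrative assumptions for the sketch, not details from the MIT work:

```python
# Minimal federated-averaging (FedAvg) sketch: each device trains a tiny
# linear model y = w * x on its own private data, and only the updated
# weight -- never the raw (x, y) pairs -- is sent back for averaging.

def local_update(w, data, lr=0.01):
    """One pass of SGD on a single device's private data."""
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x  # gradient of the squared error
    return w  # only the weight leaves the device

def federated_round(w, devices):
    """Server step: average the weights returned by each device."""
    updates = [local_update(w, d) for d in devices]
    return sum(updates) / len(updates)

# Two devices holding private samples of the same relation y = 3 * x
device_a = [(1.0, 3.0), (2.0, 6.0)]
device_b = [(3.0, 9.0), (4.0, 12.0)]

w = 0.0
for _ in range(50):
    w = federated_round(w, [device_a, device_b])
# after enough rounds, w converges toward 3.0 without either device
# ever revealing its data
```

Real systems average gradients or weight deltas from many clients per round, but the privacy property is the same: the server only ever sees aggregated model parameters.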

One of the biggest challenges with federated learning has been inefficiency, especially when dealing with devices that have limited computing power or connectivity. The new method developed by MIT researchers addresses these bottlenecks and improves training speed by around 81%. By optimizing how devices handle memory and communication, the approach enables smoother coordination across a diverse network of devices, making privacy-preserving AI far more practical.

This advancement has important implications for industries that rely on sensitive data, such as healthcare and finance. By allowing AI models to be trained directly on-device, organizations can unlock valuable insights without exposing confidential information. It also opens the door for AI to run more effectively on everyday devices rather than relying solely on large, centralized data centers.

Ultimately, the research signals a shift toward more decentralized and secure AI systems. As concerns around data privacy continue to grow, innovations like this could redefine how AI is developed and deployed—balancing performance with trust, and bringing powerful AI capabilities closer to where data is actually generated.


SAS Makes AI Governance the Centerpiece of Its Agent Strategy

SAS is placing AI governance at the core of its strategy for building and deploying AI agents, reflecting a broader shift in the industry toward responsible and controlled AI use. As organizations increasingly adopt autonomous AI systems, SAS argues that governance frameworks—covering transparency, accountability, and risk management—are becoming essential rather than optional.

The company emphasizes that the future of AI will not be defined solely by innovation, but by how well organizations manage trust and compliance. With regulations like the EU AI Act introducing strict requirements, businesses are under pressure to ensure their AI systems are explainable, ethical, and aligned with legal standards. SAS positions governance as a competitive advantage, suggesting that companies with strong oversight will outperform those that prioritize speed over responsibility.

A key element of SAS’s approach is embedding governance directly into AI agents themselves. Instead of treating governance as a separate layer, the idea is to build systems that can monitor, document, and regulate their own behavior in real time. This is especially important as AI agents become more autonomous, making decisions and taking actions without constant human intervention.
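The "governance inside the agent" idea can be illustrated with a simple wrapper in which every proposed action passes through a policy check and is written to an audit log before anything executes. This is a generic sketch of the pattern, not SAS's actual implementation; the policy function and action names are hypothetical:

```python
# Illustrative sketch of governance embedded in an agent: each action is
# checked against a policy and recorded in an audit log before execution.
import datetime

class GovernedAgent:
    def __init__(self, policy, audit_log):
        self.policy = policy        # callable: action -> (allowed, reason)
        self.audit_log = audit_log  # list collecting decision records

    def act(self, action, execute):
        allowed, reason = self.policy(action)
        self.audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": action,
            "allowed": allowed,
            "reason": reason,
        })
        if not allowed:
            return None             # blocked, but still documented
        return execute(action)

# Hypothetical policy: block actions that touch raw personal data
def no_pii_policy(action):
    if "pii" in action:
        return False, "action requires access to personal data"
    return True, "within policy"

log = []
agent = GovernedAgent(no_pii_policy, log)
agent.act("summarize_report", lambda a: f"done: {a}")
agent.act("export_pii_table", lambda a: f"done: {a}")
# the log now documents both decisions, including the blocked one
```

The point of the pattern is that monitoring and regulation are not a separate layer bolted on afterwards: the agent cannot act without producing an audit record, which maps directly onto the transparency and accountability requirements the article describes.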

Overall, the strategy highlights a turning point in enterprise AI adoption. As AI systems grow more powerful and independent, governance is emerging as the foundation that enables safe scaling. Companies that integrate governance into their AI architecture from the start are more likely to build sustainable, trustworthy systems—while those that neglect it may face regulatory, operational, and reputational risks.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
