Science Discoveries

MIT Develops Faster Federated Learning for AI on Low-Power Devices

MIT researchers have developed a new framework that speeds up federated learning, a privacy-preserving AI training method, by approximately 81 percent. This advancement enables devices with limited computational power and memory—such as smartwatches, sensors, and older smartphones—to collaboratively train AI models more efficiently while safeguarding user data.

Federated Learning Challenges on Edge Devices

Federated learning allows a network of connected devices to train a shared AI model without exposing local data: model updates—rather than raw data—are sent back to a central server. However, many devices face constraints in memory, processing capability, and network stability that can delay training or cause it to fail. The central server typically waits for updates from all devices before proceeding, creating bottlenecks, especially in heterogeneous networks of edge devices.
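Conceptually, one synchronous round works as sketched below: each device trains on its own data and sends back only a parameter delta, and the server averages the deltas once every device has replied. This is a toy one-parameter illustration of the general scheme, not MIT's code; all names are hypothetical.

```python
def local_update(model, local_data, lr=0.1):
    """Each device trains on its own data and returns only a
    parameter delta -- the raw data never leaves the device."""
    # Toy one-parameter model: move toward the local data mean.
    target = sum(local_data) / len(local_data)
    grad = model - target
    return -lr * grad  # the delta sent to the server, not the data

def synchronous_round(model, all_device_data):
    """The server blocks until *every* device responds -- the
    bottleneck that slows training on heterogeneous networks."""
    deltas = [local_update(model, data) for data in all_device_data]
    return model + sum(deltas) / len(deltas)  # federated averaging

model = 0.0
device_data = [[1.0, 2.0], [3.0], [2.0, 2.0, 2.0]]  # stays on-device
for _ in range(50):
    model = synchronous_round(model, device_data)
```

Note that the server only ever sees the three deltas, never the three data lists; in the synchronous scheme, though, one slow or disconnected device stalls every round.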

Innovations in the Federated Tiny Training Engine (FTTE)

To overcome these limitations, the MIT researchers created FTTE (Federated Tiny Training Engine), a framework designed for heterogeneous devices with varying capabilities. FTTE introduces three key innovations:

  • Instead of sending the full AI model to all devices, FTTE distributes a smaller subset of model parameters tailored to fit the most memory-constrained devices, optimizing accuracy within memory limits.
  • The server aggregates updates asynchronously, processing them once a fixed capacity is reached rather than waiting for all device responses.
  • Updates are weighted based on their arrival time, minimizing the impact of outdated information that can slow training and reduce accuracy.
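The second and third innovations can be illustrated together as a buffered, staleness-weighted aggregator: the server collects updates into a buffer, aggregates as soon as the buffer fills rather than waiting for every device, and down-weights updates that were computed against an older model version. This is a hedged sketch only; the class, parameter names, and the particular decay formula are assumptions, not details from the FTTE paper.

```python
def staleness_weight(current_round, update_round, decay=0.5):
    """Older updates count less (hypothetical decay formula)."""
    staleness = current_round - update_round
    return 1.0 / (1.0 + decay * staleness)

class AsyncServer:
    def __init__(self, model, buffer_capacity=3):
        self.model = model
        self.round = 0
        self.buffer = []            # pending (delta, round_sent) pairs
        self.capacity = buffer_capacity

    def receive(self, delta, round_sent):
        """Buffer an update; aggregate once capacity is reached,
        instead of waiting for every device to respond."""
        self.buffer.append((delta, round_sent))
        if len(self.buffer) >= self.capacity:
            self._aggregate()

    def _aggregate(self):
        weights = [staleness_weight(self.round, r) for _, r in self.buffer]
        total = sum(weights)
        self.model += sum(w * d for w, (d, _) in zip(weights, self.buffer)) / total
        self.buffer.clear()
        self.round += 1

server = AsyncServer(model=0.0, buffer_capacity=2)
server.receive(1.0, round_sent=0)   # fresh update, buffered
server.receive(1.0, round_sent=0)   # buffer full -> aggregate now
# A later round mixes a stale update with a fresh one:
server.receive(3.0, round_sent=0)   # stale: computed one round ago
server.receive(0.0, round_sent=1)   # fresh: computed this round
# The weighted average leans toward the fresher update
# (model moves to 2.2 rather than the unweighted 2.5).
```

The design choice mirrors what the article describes: aggregation latency is bounded by the buffer capacity rather than by the slowest device, and stale contributions still help but cannot dominate.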

Performance and Real-World Testing

Simulation tests involving hundreds of devices demonstrated that FTTE reduced on-device memory use by 80 percent and communication overhead by 69 percent, while maintaining near-equivalent accuracy to conventional federated learning methods. This resulted in training completion roughly 81 percent faster. The approach also scaled effectively, providing greater benefits as the number of devices increased.

FTTE was further tested on a small network of real devices with diverse computational capabilities, highlighting its potential for deployment in environments with older or less powerful hardware, including developing countries.

Why it matters

This breakthrough facilitates privacy-preserving AI training on everyday devices previously unable to support such techniques due to hardware limitations and connectivity issues. It holds promise for expanding the use of AI in sensitive fields like healthcare and finance, where data privacy is critical, and for democratizing AI benefits to users worldwide irrespective of device sophistication.

Background

Federated learning is an emerging AI training approach that keeps user data on local devices to protect privacy. Traditional methods require consistent device capabilities and network connectivity, assumptions that do not hold true in diverse real-world scenarios involving edge devices. Enhancing the efficiency and inclusiveness of federated learning networks remains a key challenge for researchers aiming to broaden AI accessibility without compromising security.

The research team includes graduate student Irene Tenison from MIT’s Electrical Engineering and Computer Science department and senior researcher Lalana Kagal from the Computer Science and Artificial Intelligence Laboratory (CSAIL). Their work, partly funded by a Takeda PhD Fellowship, will be presented at the IEEE International Joint Conference on Neural Networks.

Read more Science Discoveries stories on Goka World News.

Sources

This article is based on reporting and publicly available information from the following source:

About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, U.S. news, politics, business, climate, science, technology, health, security, and public-interest stories. He focuses on clear, factual, reader-first coverage based on credible reporting, official statements, publicly available information, and relevant source material.
