A research team led by the Tandon School of Engineering at New York University has created a method for autonomous cars to indirectly exchange information about road conditions. This allows each vehicle to benefit from the experiences of other cars, even if they do not often cross paths on the road.

Presented in a paper at the Association for the Advancement of Artificial Intelligence Conference on February 27, 2025, the study addresses a recurring issue in artificial intelligence: how to help vehicles learn from one another while protecting the privacy of their data. Because vehicles typically exchange what they have learned only during brief, direct encounters, the speed at which they adapt to novel situations is limited.
Think of it like creating a network of shared experiences for self-driving cars.
Yong Liu, Professor, Electrical and Computer Engineering Department, Tandon School of Engineering, New York University
Liu is also a member of its Center for Advanced Technology in Telecommunications and Distributed Information Systems and of NYU WIRELESS.
Liu added, “A car that has only driven in Manhattan could now learn about road conditions in Brooklyn from other vehicles, even if it never drives there itself. This would make every vehicle smarter and better prepared for situations it hasn't personally encountered.”
The researchers developed a novel method known as Cached Decentralized Federated Learning (cached-DFL). This method allows vehicles to train their own AI models locally and communicate those models with others immediately, in contrast to traditional Federated Learning, which depends on a central server to coordinate updates.
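To make the contrast with server-coordinated Federated Learning concrete, here is a minimal sketch of one decentralized training round, assuming simple parameter averaging over the local model and any cached neighbor models. The function name, the SGD step, and the averaging rule are illustrative assumptions, not the paper's exact aggregation scheme.

```python
import numpy as np

def local_round(local_params, cached_params, local_grad, lr=0.01):
    """One decentralized round: train locally, then merge cached models.

    local_params: this vehicle's model parameters (np.ndarray)
    cached_params: list of parameter arrays received from other vehicles
    local_grad: gradient computed on this vehicle's own data
    """
    # Local SGD step on the vehicle's private data (data never leaves the car).
    local_params = local_params - lr * local_grad
    # Aggregate with models received from (possibly indirect) contacts
    # by plain averaging; no central server is involved.
    if cached_params:
        stacked = np.stack([local_params] + list(cached_params))
        local_params = stacked.mean(axis=0)
    return local_params
```

The key design point mirrored here is that only model parameters cross the wire; the raw driving data stays on each vehicle.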
Within 100 meters of one another, vehicles exchange trained models rather than raw data via high-speed device-to-device communication. Importantly, they can also pass along models received from earlier contacts, allowing information to propagate far beyond face-to-face exchanges. Every 120 seconds, each vehicle updates its own AI model and maintains a cache of up to ten external models.
The method keeps each vehicle's knowledge current and pertinent by automatically evicting models once they exceed a staleness threshold, preventing obsolete information from degrading performance.
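A per-vehicle cache with a capacity limit and staleness-based eviction, as described above, might look like the following sketch. The class name, the specific threshold value, and the eviction-by-oldest rule are illustrative assumptions; only the ten-model capacity and the existence of a staleness threshold come from the article.

```python
CACHE_CAPACITY = 10          # max external models kept per vehicle (from the article)
STALENESS_THRESHOLD = 600.0  # seconds before a cached model is discarded (assumed value)

class ModelCache:
    """Cache of models received from other vehicles, one slot per sender."""

    def __init__(self, capacity=CACHE_CAPACITY, staleness=STALENESS_THRESHOLD):
        self.capacity = capacity
        self.staleness = staleness
        self.entries = {}  # sender_id -> (timestamp, model_params)

    def add(self, sender_id, model_params, timestamp):
        """Store or refresh a model received from another vehicle."""
        self.entries[sender_id] = (timestamp, model_params)
        # Over capacity: evict the entry with the oldest timestamp.
        if len(self.entries) > self.capacity:
            oldest = min(self.entries, key=lambda v: self.entries[v][0])
            del self.entries[oldest]

    def fresh_models(self, now):
        """Drop stale models, then return those still usable for aggregation."""
        self.entries = {v: (t, m) for v, (t, m) in self.entries.items()
                        if now - t <= self.staleness}
        return [m for _, m in self.entries.values()]
```

In this sketch, `fresh_models` would be called before each 120-second local update so that only non-stale models enter the aggregation step.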
Using Manhattan's street layout as a template, the researchers ran computer simulations to test their approach. In their tests, virtual cars traveled roughly 14 meters per second through the city's grid, choosing a direction at each intersection probabilistically: a 50% chance of continuing straight, with the remaining probability split equally among the other available routes.
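The mobility model in the simulation can be sketched as a random walk on a grid. The block length, function names, and restriction to left/right turns are assumptions for illustration; only the ~14 m/s speed and the 50% straight-ahead probability come from the article.

```python
import random

SPEED = 14.0  # meters per second, as stated in the article

def choose_direction(heading, rng=random):
    """Pick the next heading at an intersection.

    Headings are unit grid vectors, e.g. (0, 1) for north. Straight
    ahead keeps probability 0.5; the other routes (here, left and
    right turns) share the remaining probability equally.
    """
    dx, dy = heading
    left = (-dy, dx)
    right = (dy, -dx)
    if rng.random() < 0.5:
        return (dx, dy)
    return rng.choice([left, right])

def step(position, heading, block_length=100.0):
    """Advance one block; return the new position and the time taken."""
    x, y = position
    dx, dy = heading
    new_pos = (x + dx * block_length, y + dy * block_length)
    return new_pos, block_length / SPEED
```

Running many such walkers and recording when any two come within 100 meters of each other would reproduce the contact pattern that drives model exchange in the simulation.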
Cached-DFL enables models to move indirectly through the network, similar to how messages spread in delay-tolerant networks, which manage sporadic connectivity by storing and forwarding data until a connection is available. This contrasts with traditional decentralized learning techniques, which suffer when vehicles do not meet frequently. Vehicles can transmit information by serving as relays, even if they have never encountered particular circumstances firsthand.
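The store-and-forward effect can be shown with a toy example (not from the paper): vehicle A's model reaches vehicle C through B, even though A and C never meet directly.

```python
def exchange(caches, a, b):
    """Two vehicles within range merge each other's cached model sets."""
    shared = caches[a] | caches[b]
    caches[a] = set(shared)
    caches[b] = set(shared)

# Each vehicle starts knowing only its own model.
caches = {"A": {"model_A"}, "B": {"model_B"}, "C": {"model_C"}}

exchange(caches, "A", "B")   # A meets B; B caches model_A
exchange(caches, "B", "C")   # later, B meets C; B relays model_A to C
# "model_A" is now in C's cache although A and C never met.
```

This is the delay-tolerant-networking pattern in miniature: B carries A's knowledge until a usable contact with C appears.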
“It is a bit like how information spreads in social networks. Devices can now pass along knowledge from others they've met, even if those devices never directly encounter each other,” Liu explained.
This multi-hop transmission mechanism overcomes the drawbacks of conventional model-sharing techniques, which depend on instantaneous, one-to-one interactions. By letting vehicles serve as relays, cached-DFL allows learning to spread through an entire fleet more effectively than if each vehicle could share only through direct encounters.
The system protects data privacy while enabling connected cars to learn about obstacles, signals, and road conditions. This is particularly helpful in cities, where cars encounter widely varying conditions yet rarely stay in one place long enough for conventional, centralized training approaches to work.
The study demonstrates how learning efficiency is affected by model expiration, cache size, and vehicle speed. Outdated models decrease accuracy, while higher speeds and more frequent communication yield better outcomes. A group-based caching technique improves learning further by prioritizing a diverse set of models from different areas over only the most recent ones.
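The group-based caching idea can be sketched as a selection rule: when deciding which models to keep, first keep the freshest model from each group (for example, the area a model was trained in), rather than the globally newest models. The grouping criterion and function below are illustrative assumptions, not the paper's exact policy.

```python
def group_based_select(candidates, capacity):
    """Pick up to `capacity` models, favoring diversity across groups.

    candidates: list of (group, timestamp, model_id) tuples.
    Keeps the freshest model per group, then ranks groups by recency.
    """
    freshest = {}
    for group, ts, model_id in candidates:
        if group not in freshest or ts > freshest[group][0]:
            freshest[group] = (ts, model_id)
    # Rank one-per-group picks by recency and truncate to capacity.
    ranked = sorted(freshest.items(), key=lambda kv: -kv[1][0])
    return [model_id for _, (_, model_id) in ranked[:capacity]]
```

Compared with keeping only the newest models, this rule trades a little freshness for coverage of more driving environments, which is the benefit the study reports.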
Cached-DFL offers a safe and effective means for self-driving cars to learn collectively, becoming more intelligent and adaptable as AI shifts from centralized servers to edge devices. Beyond vehicles, it can support reliable and efficient decentralized learning, a step toward swarm intelligence, in other networked systems of intelligent mobile agents such as robots, drones, and satellites.
The researchers have released the code to the public, and their technical report provides further details. In addition to Liu and Wang, the study team includes Houwei Cao of the New York Institute of Technology and Guojun Xiong and Jian Li of Stony Brook University.
The research was supported by multiple National Science Foundation grants, by the Resilient & Intelligent NextG Systems (RINGS) program, which receives funding from the Department of Defense and the National Institute of Standards and Technology, and by NYU's computing resources.