FedDC: Federated Learning from Small Datasets

Abstract. Federated learning allows multiple parties to collaboratively train a joint model without having to share any local data. It enables applications of machine learning in settings where data is inherently distributed and undisclosable, such as in the medical domain. Joint training is usually achieved by aggregating local models. When local datasets are small, however, locally trained models can differ greatly from a globally good model. Such poor local models can arbitrarily degrade the quality of the aggregate, causing federated learning to fail in these settings.

We propose a novel approach that avoids this problem by interleaving model aggregation and permutation steps. During a permutation step, we redistribute local models across clients via the server, while preserving data privacy, so that each local model is trained on a daisy chain of local datasets. This enables successful training in data-sparse domains. Combined with model aggregation, we thereby achieve effective learning even when local datasets are extremely small, while retaining the privacy benefits of federated learning.

Implementation

The PyTorch source code (April 2023) by Michael Kamp.
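To illustrate how the permutation (daisy-chaining) steps interleave with aggregation, here is a minimal, self-contained PyTorch sketch. It is not the reference implementation: the function names, the random permutation schedule, and the periods d (daisy-chaining) and b (aggregation) are illustrative assumptions.

```python
import copy
import random
import torch

def train_local(model, loader, epochs=1, lr=0.01):
    # One client's local update on its own (small) private dataset.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

def aggregate(models):
    # Aggregation step: average all clients' parameters (FedAvg-style)
    # and send the averaged model back to every client.
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in avg.named_parameters():
            stacked = torch.stack([m.state_dict()[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return [copy.deepcopy(avg) for _ in models]

def daisy_chain(models):
    # Permutation step: the server redistributes the local models
    # across clients at random; no raw data ever leaves a client,
    # each model simply continues training on another small dataset.
    perm = list(range(len(models)))
    random.shuffle(perm)
    return [models[i] for i in perm]

def feddc(models, loaders, rounds=100, d=2, b=10):
    # Interleave local training with daisy-chaining every d rounds
    # and aggregation every b rounds (d and b chosen for illustration).
    for t in range(1, rounds + 1):
        models = [train_local(m, dl) for m, dl in zip(models, loaders)]
        if t % b == 0:
            models = aggregate(models)
        elif t % d == 0:
            models = daisy_chain(models)
    return models
```

A natural choice is to daisy-chain more often than aggregating (d < b), so that between averaging steps each model effectively trains on many small local datasets rather than overfitting a single one.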

Related Publications

Kamp, M., Fischer, J. & Vreeken, J. Federated Learning from Small Datasets. In: Proceedings of the International Conference on Learning Representations (ICLR), OpenReview, 2023. (31.8% acceptance rate)