Abstract
Federated learning (FL) rests on the notion of training a global model in a decentralized manner. In this setting, mobile devices perform computations on their local data and upload only the resulting model updates to a central aggregator, which uses them to improve the global model. A key challenge, however, is to maintain communication efficiency (i.e., to keep the number of communications per iteration low) when participating clients adopt uncoordinated computation strategies during the aggregation of model parameters. To tackle this difficulty, we formulate a utility maximization problem and propose a novel crowdsourcing framework in which a number of participating clients with local training data are recruited to carry out FL. We study the incentive-based interaction between the crowdsourcing platform and the participating clients' independent strategies for training a global learning model, where each side maximizes its own benefit. We formulate a two-stage Stackelberg game to analyze this scenario and find the game's equilibria. Further, we illustrate the efficacy of our proposed framework with simulation results. The results show that the proposed mechanism outperforms a heuristic approach, with up to a 22% improvement in the offered reward required to attain a target level of accuracy.
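To make the two-stage structure concrete, below is a minimal numerical sketch of a Stackelberg leader-follower interaction of the kind the abstract describes. It is not the paper's actual model: the proportional reward sharing, linear computation costs, logarithmic accuracy proxy, and all constants are illustrative assumptions, and the Stage II equilibrium is found by simple best-response iteration over an effort grid.

```python
# Illustrative two-stage Stackelberg sketch (hypothetical utilities, not the
# paper's model). Leader = crowdsourcing platform choosing a reward; followers
# = FL clients choosing local computation effort (e.g., local iterations).
import numpy as np

COSTS = np.array([0.8, 1.0, 1.2])  # hypothetical per-unit computation costs

def client_best_response(r, efforts, i):
    # Client i maximizes its proportional share of reward r minus its
    # computation cost, scanning a grid of candidate effort levels.
    grid = np.linspace(0.0, 5.0, 501)
    others = efforts.sum() - efforts[i]
    shares = grid / np.maximum(grid + others, 1e-9)
    utils = r * shares - COSTS[i] * grid
    return grid[utils.argmax()]

def followers_equilibrium(r, iters=50):
    # Stage II: iterate best responses until clients' efforts stabilize.
    efforts = np.full(len(COSTS), 0.1)
    for _ in range(iters):
        for i in range(len(COSTS)):
            efforts[i] = client_best_response(r, efforts, i)
    return efforts

def platform_utility(r):
    # Stage I: the platform values aggregate local computation (a crude
    # proxy for model accuracy) and pays for the offered reward.
    total_effort = followers_equilibrium(r).sum()
    return np.log1p(total_effort) - 0.1 * r

# Backward induction: for each candidate reward, compute the followers'
# Stage II equilibrium, then let the leader pick its best reward.
rewards = np.linspace(0.1, 10.0, 100)
best_r = max(rewards, key=platform_utility)
print(f"leader's equilibrium reward: {best_r:.2f}")
print(f"followers' equilibrium efforts: {followers_equilibrium(best_r).round(3)}")
```

Backward induction shows up as the outer search over rewards: the followers' subgame is solved first for every candidate reward, and the leader then optimizes against those anticipated responses, which is exactly the solution order of a two-stage Stackelberg game.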
| Original language | English |
| --- | --- |
| Article number | 9014329 |
| Journal | Proceedings - IEEE Global Communications Conference, GLOBECOM |
| DOIs | |
| Publication status | Published - 2019 |
| Event | 2019 IEEE Global Communications Conference, GLOBECOM 2019, Waikoloa, United States, 9-13 Dec 2019 |
Bibliographical note
Publisher Copyright: © 2019 IEEE.
Keywords
- Decentralized machine learning
- Federated learning
- Mobile crowdsourcing
- Stackelberg game