Abstract
Allocating the limited wireless resources in dense radio access networks (RANs) remains challenging. By leveraging a software-defined control plane, independent base stations (BSs) are virtualized into a centralized network controller (CNC). Such virtualization decouples the CNC from the wireless service providers (WSPs). We investigate a virtualized RAN in which the CNC auctions channels to the mobile terminals (MTs) at the beginning of each scheduling slot, based on bids from their subscribing WSPs. Each WSP aims to maximize its expected long-term payoff from bidding for channels to serve the packet transmissions of its MTs. We formulate this problem as a stochastic game, in which the channel auction and packet scheduling decisions of a WSP depend on the network state as well as the control policies of its competitors. To approach the equilibrium solution, we propose an abstract stochastic game with a bounded regret, and model the decision-making process of each WSP as a Markov decision process (MDP). To address the signalling overhead and computational complexity, we decompose the MDP into a series of single-agent MDPs with reduced state spaces, and derive an online localized algorithm to learn the state value functions. Our results show significant performance improvements in terms of per-MT average utility.
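To make the online learning step concrete, below is a minimal, hypothetical sketch of a temporal-difference update for one WSP's single-agent MDP over a reduced state space. The state encoding (queue backlog and auction outcome), the reward shape, and all parameter values are illustrative assumptions; the paper's actual localized algorithm, auction rules, and payoff definitions are not reproduced here.

```python
import random
from collections import defaultdict

def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One online temporal-difference step toward the state value function,
    i.e., the expected discounted long-term payoff starting from `state`."""
    V[state] += alpha * (reward + gamma * V[next_state] - V[state])

# Toy single-WSP MDP with a reduced state space: (queue backlog, channel won).
# All dynamics below are assumptions for illustration only.
V = defaultdict(float)          # state value function, learned online
state = (0, 0)
for slot in range(10_000):      # one iteration per scheduling slot
    won = random.random() < 0.5                      # hypothetical auction outcome
    arrivals = random.randint(0, 1)                  # hypothetical packet arrivals
    backlog = min(max(0, state[0] + arrivals - won), 10)
    next_state = (backlog, int(won))
    reward = float(won) - 0.2 * backlog              # utility minus backlog cost
    td_update(V, state, reward, next_state)
    state = next_state
```

Under the decomposition described in the abstract, each WSP would run an update of this flavor over its own reduced state space, sidestepping the joint state of all competing WSPs.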
| Original language | English |
| --- | --- |
| Pages (from-to) | 961-974 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Mobile Computing |
| Volume | 17 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1 Apr 2018 |
Bibliographical note
Publisher Copyright: © 2002-2012 IEEE.
Keywords
- Learning
- Markov decision process
- Multi-user resource scheduling
- Network virtualization
- Radio access networks
- Software-defined networking
- Stochastic games