Learning-based resource allocation in D2D communications with QoS and fairness considerations

Shahbazian R.
2018-01-01

Abstract

In device-to-device (D2D) communications, D2D users establish a direct link by reusing the cellular users' spectrum to increase the network spectral efficiency. However, because cellular users have higher priority, the interference that D2D users impose on them must be controlled by channel and power allocation algorithms. Since the distribution of the dynamic channel parameters is unknown, learning-based resource allocation algorithms work more efficiently than classic optimization methods. In this paper, the problem of joint channel and power allocation for D2D users in realistic scenarios is formulated as an interactive learning problem, in which the channel state information of the selected channels is unknown to the decision center and is learned during the allocation process. A recency-based Q-learning method is introduced that selects an action (a channel and power level) for each D2D pair so as to maximize the reward function. The proposed method is shown to achieve an asymptotically logarithmic regret, which makes it an order-optimal policy, and it converges to a stable equilibrium solution. Simulation results confirm that the proposed method outperforms conventional learning methods and random allocation in terms of network sum rate and fairness.
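To make the action space concrete, the sketch below shows a minimal stateless Q-learning agent for one D2D pair, where each action is a joint (channel, power level) choice and a visit-count bonus stands in for the paper's recency mechanism. The abstract does not specify the exact recency-based update, reward model, or parameter values, so the channel gains, power levels, QoS threshold, learning rate, and bonus form here are all illustrative assumptions, not the authors' method.

```python
import random
from itertools import product

# Hypothetical problem size; the paper's actual setup is not given here.
N_CHANNELS = 4          # cellular channels a D2D pair may reuse
N_POWER_LEVELS = 3      # discrete transmit power levels
ACTIONS = list(product(range(N_CHANNELS), range(N_POWER_LEVELS)))

ALPHA = 0.1             # learning rate (assumed)
EPS = 0.1               # exploration probability (assumed)
BONUS = 0.5             # weight of the visit-count exploration bonus (assumed)

class D2DAgent:
    """One agent per D2D pair; a bandit-style Q-table over joint
    (channel, power) actions, with a bonus for rarely tried actions."""
    def __init__(self):
        self.q = {a: 0.0 for a in ACTIONS}
        self.visits = {a: 0 for a in ACTIONS}

    def select_action(self):
        if random.random() < EPS:
            return random.choice(ACTIONS)
        # Actions tried less often get a boost, encouraging
        # re-exploration of stale estimates (recency-style heuristic).
        def score(a):
            return self.q[a] + BONUS / (1 + self.visits[a])
        return max(ACTIONS, key=score)

    def update(self, action, reward):
        self.visits[action] += 1
        self.q[action] += ALPHA * (reward - self.q[action])

def reward(channel, power_level):
    """Toy stand-in for the reward: achieved D2D rate minus a penalty
    when the chosen power would exceed an assumed interference limit
    protecting the cellular user on that channel."""
    gain = [1.0, 0.6, 0.8, 0.4][channel]           # assumed channel gains
    power = [0.2, 0.5, 1.0][power_level]           # assumed power levels
    rate = gain * power
    penalty = 0.8 * max(0.0, power - 0.7)          # assumed QoS threshold
    return rate - penalty + random.gauss(0, 0.05)  # noisy observation

agent = D2DAgent()
for _ in range(5000):
    a = agent.select_action()
    agent.update(a, reward(*a))

best = max(ACTIONS, key=lambda a: agent.q[a])
print("learned best (channel, power level):", best)
```

In this toy setting the agent settles on the action whose rate-minus-penalty trade-off is best, which mirrors the abstract's goal of balancing D2D sum rate against interference to cellular users; the logarithmic-regret guarantee claimed in the paper depends on the authors' specific recency-based exploration schedule, which this sketch only approximates.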

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.11770/381048

Citations
  • Scopus: 11
  • Web of Science: 6