Policy Gradient SMDP for Resource Allocation and Routing in Integrated Services Networks

Ngo Anh VIEN, Nguyen Hoang VIET, SeungGwan LEE, TaeChoong CHUNG

Publication
IEICE TRANSACTIONS on Communications   Vol.E92-B   No.6   pp.2008-2022
Publication Date: 2009/06/01
Online ISSN: 1745-1345
DOI: 10.1587/transcom.E92.B.2008
Print ISSN: 0916-8516
Type of Manuscript: PAPER
Category: Network
Keyword: call admission control (CAC), communication system control, communication system routing, Markov decision processes, dynamic programming, semi-Markov decision processes

Summary: 
In this paper, we solve the call admission control (CAC) and routing problem in an integrated network that handles several classes of calls with different values and different resource requirements. The problem of maximizing the average reward (or cost) of admitted calls per unit time is naturally formulated as a semi-Markov Decision Process (SMDP), but it is too complex to allow an exact solution. Therefore, we propose a policy gradient algorithm, combined with a decomposition approach, to find a dynamic (state-dependent) optimal CAC and routing policy within a parameterized policy space. To implement this gradient algorithm, we approximate the gradient of the average reward. We then present a simulation-based algorithm, called the GSMDP algorithm, that estimates this approximate gradient using only a single sample path of the Markov chain underlying the SMDP of the CAC and routing problem. The algorithm improves performance in terms of convergence speed, rejection probability, robustness to changing arrival statistics, and overall average received revenue. Experimental simulations compare our method's performance with that of existing methods and demonstrate its robustness.
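
The general idea behind the summary can be illustrated with a short, self-contained Python sketch. This is not the paper's GSMDP algorithm; it is a minimal single-sample-path policy-gradient (GPOMDP-style) estimator for a simplified two-class admission problem on a single link, and every quantity in it (the logistic admission policy, the link capacity, the arrival/holding/reward parameters, the discount factor beta, the step size) is an illustrative assumption rather than a value taken from the paper.

# Illustrative single-sample-path policy-gradient sketch for a simplified
# call admission control (CAC) problem. All parameters and the policy form
# are assumptions for illustration, not the paper's exact GSMDP algorithm.
import numpy as np

rng = np.random.default_rng(0)

CAPACITY = 10                      # total link bandwidth units (assumed)
CLASSES = [                        # per-class arrival rate, service rate, bandwidth, reward (assumed)
    dict(lam=0.6, mu=1.0, bw=1, reward=1.0),
    dict(lam=0.3, mu=0.5, bw=2, reward=4.0),
]

def admit_prob(theta, occupancy, k):
    """Logistic admission probability for class k given current link occupancy."""
    x = theta[k, 0] + theta[k, 1] * occupancy / CAPACITY
    return 1.0 / (1.0 + np.exp(-x))

def grad_log_policy(theta, occupancy, k, admitted, p):
    """Gradient of log pi(action | state) w.r.t. theta; only class-k rows are nonzero."""
    g = np.zeros_like(theta)
    feat = np.array([1.0, occupancy / CAPACITY])
    g[k] = (1.0 - p) * feat if admitted else (-p) * feat
    return g

def run_episode(theta, horizon=5000.0, beta=0.99):
    """Simulate one sample path; return (average reward per unit time, gradient estimate)."""
    t, in_service = 0.0, []                      # in_service: list of (departure time, bandwidth)
    total_reward = 0.0
    z, grad = np.zeros_like(theta), np.zeros_like(theta)
    rates = np.array([c["lam"] for c in CLASSES])
    while t < horizon:
        t += rng.exponential(1.0 / rates.sum())  # next arrival of the superposed Poisson streams
        in_service = [(d, b) for (d, b) in in_service if d > t]
        occupancy = sum(b for _, b in in_service)
        k = rng.choice(len(CLASSES), p=rates / rates.sum())
        c = CLASSES[k]
        feasible = occupancy + c["bw"] <= CAPACITY
        p = admit_prob(theta, occupancy, k)
        admitted = feasible and (rng.random() < p)
        r = c["reward"] if admitted else 0.0
        if admitted:
            in_service.append((t + rng.exponential(1.0 / c["mu"]), c["bw"]))
        # discounted eligibility trace; forced blocking carries no policy information
        z = beta * z
        if feasible:
            z += grad_log_policy(theta, occupancy, k, admitted, p)
        grad += r * z
        total_reward += r
    return total_reward / t, grad / t            # time-normalized reward and (biased) gradient estimate

theta = np.zeros((len(CLASSES), 2))
for it in range(50):                             # simple gradient ascent on the policy parameters
    avg_reward, g = run_episode(theta)
    theta += 0.5 * g
    if it % 10 == 0:
        print(f"iter {it}: average reward per unit time = {avg_reward:.3f}")

The point mirrored from the summary is that the gradient estimate is accumulated along a single simulated trajectory and normalized by elapsed time, so no explicit transition-probability model of the network is required.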