A Deep Reinforcement Learning Algorithm for Mobile Edge Computing Task Offloading in Ultra-Dense Network Environments

MOBILE EDGE COMPUTING TASK OFFLOADING ALGORITHM BASED ON DEEP REINFORCEMENT LEARNING IN ULTRA-DENSE NETWORK ENVIRONMENT

  • Abstract: To address the overly static scenarios that arise when research on mobile edge computing task offloading ignores the time-varying nature of communication networks and user mobility, this paper considers a task offloading scenario in an ultra-dense network environment with multiple base stations, providing mobile users with real-time offloading decisions without any prior information. Leveraging the strong environment-interaction capability of reinforcement learning, the problem is formulated as a Markov decision process with redefined state and action spaces; a binary online task offloading algorithm is proposed based on a double deep Q-network with prioritized sampling, while jointly optimizing the device's CPU frequency; simulation experiments verify the effectiveness of the proposed algorithm.

     

    Abstract: To address the overly static scenarios caused by ignoring the time-varying characteristics of communication networks and user mobility in research on mobile edge computing task offloading, this paper considers an edge computing task offloading scenario in an ultra-dense network environment with multiple base stations, which provides mobile users with real-time task offloading decisions without any prior information. Leveraging the strong environment-interaction capability of reinforcement learning, the problem is formulated as a Markov decision process, and the state and action spaces are redefined. A binary online task offloading algorithm based on a double deep Q-network with prioritized sampling is proposed, and the CPU frequency of the device is jointly optimized. The effectiveness of the proposed algorithm is verified by simulation experiments.
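The two core mechanisms the abstract names, the double-DQN update and prioritized (priority-based) sampling, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy Q-tables, state/action sizes, and hyperparameters below are assumptions chosen only to show how the target value and sampling probabilities are computed; the binary action space (local compute vs. offload) mirrors the binary offloading decision described in the abstract.

```python
import numpy as np

# Toy stand-ins for the online and target Q-networks (assumed shapes,
# not the paper's actual network architecture).
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2   # action 0: compute locally, action 1: offload
gamma = 0.9                  # discount factor (assumed value)

q_online = rng.normal(size=(n_states, n_actions))
q_target = rng.normal(size=(n_states, n_actions))

def double_dqn_target(reward, next_state, done):
    """Double DQN: the online network selects the next action,
    the target network evaluates it, reducing overestimation bias."""
    if done:
        return reward
    a_star = int(np.argmax(q_online[next_state]))          # action selection
    return reward + gamma * q_target[next_state, a_star]   # action evaluation

def priority_probs(td_errors, alpha=0.6, eps=1e-3):
    """Proportional prioritized sampling: p_i ∝ (|δ_i| + ε)^α,
    so transitions with larger TD error are replayed more often."""
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

# One transition's target value and a small buffer of TD errors.
y = double_dqn_target(reward=1.0, next_state=3, done=False)
probs = priority_probs(np.array([0.5, -2.0, 0.1, 1.0]))
```

In a full agent, `probs` would drive minibatch sampling from the replay buffer and `y` would serve as the regression target for the online network, with the target network periodically synchronized from the online one.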

     
