Abstract:
Research on task offloading in mobile edge computing often assumes overly static scenarios, ignoring the time-varying characteristics of communication networks and user mobility. To address this, this paper considers an edge computing task-offloading scenario in an ultra-dense network with multiple base stations, in which mobile users receive real-time offloading decisions without any prior information. Leveraging the strong environment-interaction capability of reinforcement learning, the problem is formulated as a Markov decision process, and the state and action spaces are redefined. A binary online task-offloading algorithm based on priority sampling in a double deep Q network is proposed, and the CPU frequency of the device is jointly optimized. Simulation experiments verify the effectiveness of the proposed algorithm.
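To make the core mechanism concrete, the following is a minimal, self-contained sketch of the two ingredients the abstract names: a double Q-learning update (the online table selects the next action, the target table evaluates it) combined with replay sampling whose probability is proportional to the absolute TD error. Everything here is illustrative: the tabular form, the 4-state toy channel model, the binary action set {0: compute locally, 1: offload to edge}, and all hyperparameters are assumptions, not the paper's actual deep-network design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper):
# 4 channel states x binary action {0: local compute, 1: offload}.
N_STATES, N_ACTIONS = 4, 2
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2

q_online = np.zeros((N_STATES, N_ACTIONS))  # selects the argmax action
q_target = np.zeros((N_STATES, N_ACTIONS))  # evaluates that action

buffer, priorities = [], []

def toy_env_step(s, a):
    """Hypothetical reward: offloading pays off only in good channel states."""
    reward = 1.0 if (a == 1) == (s >= 2) else -1.0
    return int(rng.integers(N_STATES)), reward

s = 0
for t in range(500):
    # Epsilon-greedy binary offloading decision.
    a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(np.argmax(q_online[s]))
    s2, r = toy_env_step(s, a)
    buffer.append((s, a, r, s2))
    priorities.append(1.0)  # new transitions start at maximum priority

    # Priority sampling: draw a transition with probability ~ |TD error|.
    p = np.array(priorities)
    idx = int(rng.choice(len(buffer), p=p / p.sum()))
    bs, ba, br, bs2 = buffer[idx]

    # Double-DQN target: online table picks a*, target table scores it.
    a_star = int(np.argmax(q_online[bs2]))
    td_err = br + GAMMA * q_target[bs2, a_star] - q_online[bs, ba]
    q_online[bs, ba] += ALPHA * td_err
    priorities[idx] = abs(td_err) + 1e-3  # refresh the sampled priority

    if t % 20 == 0:              # periodic target synchronization
        q_target = q_online.copy()
    s = s2
```

After training, the greedy policy offloads only in the good channel states (s >= 2), matching the toy reward; the same update structure carries over when the tables are replaced by neural networks, as in the paper's algorithm.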