Abstract
Mobile edge computing (MEC) has been envisioned as a promising paradigm that could effectively enhance the computational capacity of wireless user devices (WUDs) and the quality of experience of mobile applications. One of the most crucial issues in MEC is computation offloading, which decides how to offload WUDs' tasks to edge servers for further intensive computation. Conventional mathematical-programming-based offloading approaches can face difficulties in dynamic MEC environments due to time-varying channel conditions (caused primarily by WUD mobility). To address this problem, reinforcement learning (RL) based offloading approaches have been proposed, which develop offloading policies by mapping MEC states to offloading actions. However, these approaches can fail to converge in large-scale MEC due to the exponentially growing state and action spaces. In this article, we propose a novel online computation offloading approach that can effectively reduce task latency and energy consumption in dynamic MEC with large-scale WUDs. First, an RL-based computation offloading and energy transmission algorithm is proposed to accelerate the learning process. Then, a joint optimization method is adopted to develop the allocation algorithm, which obtains near-optimal solutions for energy and computation resource allocation. Simulation results show that the proposed approach converges efficiently and achieves significant performance improvements over baseline approaches.
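The core RL idea summarized above, mapping MEC states to offloading actions, can be illustrated with a toy sketch. This is not the paper's method (the paper uses deep RL precisely because tabular methods break down as the state/action space grows); the channel states, cost table, and function names below are illustrative assumptions only.

```python
import random

# Illustrative tabular Q-learning sketch of binary offloading.
# NOTE: all values here (channel levels, cost model) are hypothetical,
# chosen only to show the state -> action mapping idea from the abstract.

CHANNEL_STATES = ["poor", "fair", "good"]   # hypothetical channel-quality levels
ACTIONS = ["local", "offload"]              # compute locally vs. offload to an edge server

# Hypothetical latency/energy cost: offloading pays off only on a good channel.
COST = {
    ("poor", "local"): 5.0, ("poor", "offload"): 9.0,
    ("fair", "local"): 5.0, ("fair", "offload"): 5.5,
    ("good", "local"): 5.0, ("good", "offload"): 2.0,
}

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in CHANNEL_STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(CHANNEL_STATES)          # channel varies with WUD mobility
        if rng.random() < epsilon:              # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = -COST[(s, a)]                       # reward = negative latency/energy cost
        q[(s, a)] += alpha * (r - q[(s, a)])    # one-step (contextual-bandit) update
    return q

def policy(q, s):
    """Greedy offloading decision for channel state s."""
    return max(ACTIONS, key=lambda a: q[(s, a)])
```

With three channel states the table has only six entries; with thousands of WUDs the joint state/action space grows exponentially, which is why the paper turns to deep RL with function approximation instead of a lookup table.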
| Original language | English |
|---|---|
| Pages (from-to) | 669-683 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Services Computing |
| Volume | 15 |
| Issue | 2 |
| DOI | |
| Publication status | Published - 2022 |
UN Sustainable Development Goals
This output contributes to the following Sustainable Development Goal(s):
- SDG 7: Affordable and Clean Energy
Fingerprint
Dive into the research topics of 'An Efficient Online Computation Offloading Approach for Large-Scale Mobile Edge Computing via Deep Reinforcement Learning'. Together they form a unique fingerprint.