
Hierarchical Reinforcement Learning with Self-Distillation for Resource Scheduling in Complex Resource Networks

  • Kexin Zhang*
  • Qing Gao
  • *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed

Abstract

Traditional methods for the resource scheduling problem still face various limitations when dealing with a large number of resource packages. This paper proposes a novel hierarchical reinforcement learning with self-distillation (HRLSD) model to address the resource scheduling problem in complex service networks with large-scale resource packages. To reduce the complexity of the policy space, we introduce a student-teacher encoder as a hierarchical reinforcement learning model. The student Q-value and the teacher Q-value are obtained from two separate Q-networks trained with the deep Q-learning method. By distilling knowledge from the teacher Q-value to the student Q-value, the student encoder learns from the teacher encoder to improve effectiveness without increasing computational complexity. We evaluate the effectiveness of the proposed method through numerical examples involving large-scale resource packages.
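The core idea of the abstract, distilling knowledge from a teacher Q-value into a student Q-value during Q-learning, can be illustrated with a minimal sketch. The paper uses deep Q-networks; here, purely for illustration, tabular Q-values stand in for the two networks, and the state/action sizes, learning rate, and distillation weight `LAMBDA_DISTILL` are all hypothetical choices, not values from the paper. Each student update combines the usual temporal-difference error with a pull toward the teacher's Q-value:

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 10, 4   # toy scheduling problem (hypothetical sizes)
ALPHA, GAMMA = 0.1, 0.9       # learning rate and discount factor (assumed)
LAMBDA_DISTILL = 0.5          # weight of the distillation term (assumed)

# Teacher Q-table: stands in for the pre-trained teacher Q-network.
q_teacher = rng.normal(size=(N_STATES, N_ACTIONS))
# Student Q-table: trained with a TD error plus a distillation term.
q_student = np.zeros((N_STATES, N_ACTIONS))

def student_update(s, a, r, s_next):
    """One Q-learning step with an added pull toward the teacher's Q-value."""
    td_target = r + GAMMA * q_student[s_next].max()
    td_error = td_target - q_student[s, a]
    distill_error = q_teacher[s, a] - q_student[s, a]
    q_student[s, a] += ALPHA * (td_error + LAMBDA_DISTILL * distill_error)

# Drive the update with random transitions from a toy environment.
for _ in range(5000):
    s = rng.integers(N_STATES)
    a = rng.integers(N_ACTIONS)
    s_next = rng.integers(N_STATES)
    r = float(rng.normal())
    student_update(s, a, r, s_next)

# Mean absolute gap between student and teacher Q-values after training.
gap = float(np.abs(q_student - q_teacher).mean())
```

The distillation term adds no extra forward passes through a larger model at deployment time, which is one reading of the abstract's claim that effectiveness improves "without increasing computational complexity": only the lightweight student is used to act.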

Original language: English
Title of host publication: ISCAS 2025 - IEEE International Symposium on Circuits and Systems, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350356830
DOI
Publication status: Published - 2025
Event: 2025 IEEE International Symposium on Circuits and Systems, ISCAS 2025 - London, United Kingdom
Duration: 25 May 2025 → 28 May 2025

Publication series

Name: Proceedings - IEEE International Symposium on Circuits and Systems
ISSN (Print): 0271-4310

Conference

Conference: 2025 IEEE International Symposium on Circuits and Systems, ISCAS 2025
Country/Territory: United Kingdom
City: London
Period: 25/05/25 → 28/05/25

