Huanlai Xing, Associate Professor

Supervisor of Doctorate Candidates

Supervisor of Master's Candidates

  

  • Education Level: PhD graduate

  • Professional Title: Associate Professor

  • Alma Mater: University of Nottingham, UK

  • School/Department: School of Computing and Artificial Intelligence

  • Discipline: Communications and Information Systems; Computer Science and Technology

Paper Publications

Offloading Dependent Tasks in Multi-Access Edge Computing: A Multi-Objective Reinforcement Learning Approach

Impact Factor: 7.307

DOI number: 10.1016/j.future.2021.10.013

Affiliation of Author(s): School of Computing and Artificial Intelligence, Southwest Jiaotong University

Journal: Future Generation Computer Systems

Key Words: Computation offloading, Dynamic preferences, Multi-access edge computing, Multi-objective reinforcement learning, Task dependency

Abstract: This paper studies the problem of offloading an application consisting of dependent tasks in multi-access edge computing (MEC). This problem is challenging because multiple conflicting objectives exist, e.g., the completion time, energy consumption, and computation overhead should be optimized simultaneously. Recently, some reinforcement learning (RL) based methods have been proposed to address the problem. However, these methods, called single-objective RLs (SORLs), define the user utility as a linear scalarization and ignore the conflict between objectives. This paper formulates a multi-objective optimization problem to simultaneously minimize the application completion time, the energy consumption of the mobile device, and the usage charge for edge computing, subject to dependency constraints. Moreover, the relative importance (preferences) between the objectives may change over time in MEC, which is quite challenging for traditional SORLs to handle. To overcome this, we first model a multi-objective Markov decision process, where the scalar reward is extended to a vector-valued reward; each element of the reward corresponds to one of the objectives. Then, we propose an improved multi-objective reinforcement learning (MORL) algorithm, in which a tournament selection scheme is designed to select important preferences so that previously learned policies are effectively maintained. The simulation results demonstrate that the proposed algorithm obtains a good tradeoff between the three objectives and achieves significant performance improvement over a number of existing algorithms.

Co-author: Fuhong Song, Huanlai Xing*, Xinhan Wang, Shouxi Luo, Penglin Dai, Ke Li

Document Code: 10.1016/j.future.2021.10.013

Volume: 128

Page Number: 333-348

ISSN No.: 0167-739X

Translation or Not: no

Date of Publication: 2021-09-06

Included Journals: SCI
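
The abstract above centers on two ideas: a vector-valued reward with one element per objective (completion time, device energy, edge usage charge) and a tournament selection scheme over preference vectors. The sketch below is a minimal, hypothetical Python illustration of those two ideas only; the function names, weights, and scores are made up for illustration and are not taken from the paper's implementation.

```python
import numpy as np

# Hypothetical sketch: vector-valued reward plus tournament selection of
# preference vectors. Not the paper's code; all values are placeholders.

NUM_OBJECTIVES = 3  # completion time, device energy, edge usage charge


def vector_reward(completion_time, energy, charge):
    """Vector-valued reward: one (negated) element per objective to minimize."""
    return np.array([-completion_time, -energy, -charge])


def scalarize(reward_vec, preference):
    """Scalarize a vector reward with a preference (weight) vector."""
    return float(np.dot(preference, reward_vec))


def tournament_select(preferences, scores, k=2, rng=None):
    """Pick one preference via a k-way tournament on its recorded score.

    `scores` stands in for whatever importance measure the learner keeps
    per preference (an assumption made for this sketch).
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(preferences), size=k, replace=False)
    best = max(idx, key=lambda i: scores[i])
    return preferences[best]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A small pool of candidate preferences over the three objectives.
    prefs = [np.array(p) for p in ([0.6, 0.2, 0.2], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6])]
    scores = [1.0, 0.5, 0.8]  # placeholder importance scores

    pref = tournament_select(prefs, scores, rng=rng)
    r = vector_reward(completion_time=1.2, energy=0.4, charge=0.1)
    print("chosen preference:", pref, "scalarized reward:", scalarize(r, pref))
```

Keeping the reward as a vector and choosing which preference to train on, rather than fixing one weighted sum up front, is what lets such a learner cope with preferences that change over time.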
