Abstract
Mobile Edge Cloud Computing has been developed and introduced to provide low-latency services in close proximity to users. In this environment, resource-constrained UE (user equipment) that cannot execute complex applications (e.g., VR/AR, deep learning, and image-processing applications) can dynamically offload computationally demanding tasks to neighboring MEC nodes. To process tasks even faster with MEC nodes, a task can be divided into several sub-tasks that are offloaded to multiple MEC nodes simultaneously, so that each sub-task is processed in parallel. In this paper, we predict the total processing duration of each task on each candidate MEC node using linear regression. According to the previously observed state of each MEC node, we offload sub-tasks to their respective edge nodes. We also developed a monitoring module at the core cloud. The results show a decrease in execution duration when an entire application is offloaded to one edge node compared with local execution.
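The abstract does not give implementation details, but the core idea of the prediction step can be illustrated with a minimal sketch: fit one linear-regression model per candidate MEC node on previously observed samples (e.g., input size and node load mapped to processing duration), then assign each sub-task to the node with the lowest predicted duration. The feature choice, sample values, node names, and greedy assignment rule below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code) of linear-regression-based
# prediction of per-node processing duration and greedy sub-task assignment.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical observation history per MEC node:
# each row is [input_size_MB, cpu_load], target is observed duration (ms).
history = {
    "mec-node-1": (np.array([[10, 0.2], [25, 0.5], [40, 0.7], [60, 0.9]]),
                   np.array([120.0, 310.0, 520.0, 840.0])),
    "mec-node-2": (np.array([[10, 0.1], [25, 0.3], [40, 0.4], [60, 0.6]]),
                   np.array([90.0, 200.0, 330.0, 510.0])),
}

# Fit one linear-regression model per candidate node.
models = {node: LinearRegression().fit(X, y) for node, (X, y) in history.items()}

def predict_duration(node, size_mb, cpu_load):
    """Predict processing duration (ms) of a sub-task on a given node."""
    return float(models[node].predict(np.array([[size_mb, cpu_load]]))[0])

def assign_subtasks(subtask_sizes, node_loads):
    """Greedy assignment: each sub-task goes to the node with the lowest
    predicted duration given that node's currently observed CPU load."""
    plan = []
    for size_mb in subtask_sizes:
        best = min(node_loads, key=lambda n: predict_duration(n, size_mb, node_loads[n]))
        plan.append((size_mb, best, predict_duration(best, size_mb, node_loads[best])))
    return plan

if __name__ == "__main__":
    # One task split into three sub-tasks of different input sizes (MB).
    for size, node, eta in assign_subtasks([15, 30, 50],
                                           {"mec-node-1": 0.4, "mec-node-2": 0.5}):
        print(f"sub-task {size} MB -> {node} (predicted {eta:.0f} ms)")
```

In this sketch the models would be refreshed as new observations arrive from the monitoring module, so predictions track the current state of each edge node; how often the paper retrains, and which features it uses, is not specified in the abstract.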
Original language | English |
---|---|
Title of host publication | 33rd International Conference on Information Networking, ICOIN 2019 |
Publisher | IEEE Computer Society |
Pages | 448-452 |
Number of pages | 5 |
ISBN (Electronic) | 9781538683507 |
DOIs | |
Publication status | Published - 17 May 2019 |
Event | 33rd International Conference on Information Networking, ICOIN 2019 - Kuala Lumpur, Malaysia Duration: 9 Jan 2019 → 11 Jan 2019 |
Publication series
Name | International Conference on Information Networking |
---|---|
Volume | 2019-January |
ISSN (Print) | 1976-7684 |
Conference
Conference | 33rd International Conference on Information Networking, ICOIN 2019 |
---|---|
Country/Territory | Malaysia |
City | Kuala Lumpur |
Period | 9/01/19 → 11/01/19 |
Bibliographical note
Publisher Copyright: © 2019 IEEE.
Keywords
- Deep Learning
- Linear Regression
- Mobile Edge Computing
- Task Offloading