Abstract
Human activity recognition (HAR), driven by large deep learning models, has received considerable attention in recent years due to its applicability across diverse domains; such models process time-series data to infer activities. Meanwhile, the cloud 'as-a-service' paradigm has reshaped the information technology market over the last decade. These two trends converge to inspire a new model for assistive living applications: HAR as a service in the cloud. However, frequent updates to deep learning frameworks in open source communities, together with the release of new hardware features, create a significant software management challenge for deep learning model developers. To address this problem, container techniques are widely employed to facilitate the deep learning software development cycle. In addition, models and the available datasets are becoming larger and more complex, so an expanding amount of computing resources is needed to train these models in a feasible amount of time. This motivates an emerging distributed training approach, called data parallelism, to make better use of available resources and shorten training time. Therefore, in this paper, we apply data parallelism to build an assistive living HAR application based on an LSTM model, deployed in containers within a Kubernetes cluster, to enable real-time recognition as well as prediction of changes in human activity patterns. We then systematically measure the influence of this technique on the performance of the HAR application. First, we evaluate system performance on CPU and GPU when deployed in containers and in the host environment, and then analyze the outcomes to assess the differences in model learning performance. Through the experiments, we find that the data parallelism strategy is effective in improving model learning performance. In addition, this technique helps to increase the scaling efficiency of our system.
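The deep learning framework and dataset are not specified in this record; the following is a minimal sketch of the data-parallel LSTM training setup the abstract describes, assuming TensorFlow/Keras with `MirroredStrategy` and hypothetical UCI-HAR-style input shapes (128 time steps, 9 sensor channels, 6 activity classes).

```python
import tensorflow as tf

# Hypothetical dataset dimensions (UCI-HAR-like layout):
# sliding windows of 128 time steps over 9 sensor channels, 6 activity classes.
TIME_STEPS, CHANNELS, NUM_CLASSES = 128, 9, 6

# Data parallelism: replicate the model on every visible device
# and split each global batch across the replicas.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(TIME_STEPS, CHANNELS)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

# Placeholder random data; in practice this would be the sensor time series.
x = tf.random.normal((1024, TIME_STEPS, CHANNELS))
y = tf.random.uniform((1024,), maxval=NUM_CLASSES, dtype=tf.int32)

# Each replica processes global_batch_size / num_replicas samples per step,
# so the global batch scales with the number of devices in the cluster node.
model.fit(x, y, batch_size=64 * strategy.num_replicas_in_sync, epochs=2)
```

Packaged in a container image, such a training job can be scheduled as a pod in the Kubernetes cluster described in the paper, with the number of replicas determined by the devices exposed to the container.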
Original language | English |
---|---|
Title of host publication | Proceedings - 11th IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2019, 19th IEEE International Conference on Computer and Information Technology, CIT 2019, 2019 International Workshop on Resource Brokering with Blockchain, RBchain 2019 and 2019 Asia-Pacific Services Computing Conference, APSCC 2019 |
Editors | Jinjun Chen, Laurence T. Yang |
Publisher | IEEE Computer Society |
Pages | 387-391 |
Number of pages | 5 |
ISBN (Electronic) | 9781728150116 |
DOIs | |
Publication status | Published - Dec 2019 |
Event | 11th IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2019, 19th IEEE International Conference on Computer and Information Technology, CIT 2019, 2019 International Workshop on Resource Brokering with Blockchain, RBchain 2019 and 2019 Asia-Pacific Services Computing Conference, APSCC 2019 - Sydney, Australia Duration: 11 Dec 2019 → 13 Dec 2019 |
Publication series
Name | Proceedings of the International Conference on Cloud Computing Technology and Science, CloudCom |
---|---|
Volume | 2019-December |
ISSN (Print) | 2330-2194 |
ISSN (Electronic) | 2330-2186 |
Conference
Conference | 11th IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2019, 19th IEEE International Conference on Computer and Information Technology, CIT 2019, 2019 International Workshop on Resource Brokering with Blockchain, RBchain 2019 and 2019 Asia-Pacific Services Computing Conference, APSCC 2019 |
---|---|
Country/Territory | Australia |
City | Sydney |
Period | 11/12/19 → 13/12/19 |
Bibliographical note
Publisher Copyright: © 2019 IEEE.
Keywords
- Containers
- Data parallelism
- Human activity recognition
- LSTM
- Machine learning