Performance analysis of data parallelism technique in machine learning for human activity recognition using LSTM

Tri D.T. Nguyen, Jae Ho Park, Md Imtiaz Hossain, Md Delowar Hossain, Seung Jin Lee, Jin Woong Jang, Seo Hui Jo, Luan N.T. Huynh, Trong Khanh Tran, Eui Nam Huh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)

Abstract

Human activity recognition (HAR), driven by large deep learning models that process time-series data to infer activities, has received considerable attention in recent years owing to its applicability across diverse domains. Meanwhile, the cloud 'as-a-service' paradigm has fundamentally reshaped the information technology market over the last ten years. These two trends are converging to inspire a new model for assistive-living applications: HAR as a service in the cloud. However, deep learning frameworks are updated frequently in open-source communities, and new hardware features are released regularly, which creates a significant software-management challenge for deep learning model developers. To address this problem, container techniques are widely employed to streamline the deep learning software development cycle. In addition, models and the available datasets are becoming larger and more complicated, so a growing amount of computing resources is required to train these models in a feasible amount of time. This calls for an emerging distributed training approach, data parallelism, which improves resource utilization and shortens training time. In this paper, we therefore apply data parallelism to build an assistive-living HAR application based on an LSTM model, deployed in containers within a Kubernetes cluster to enable real-time recognition and prediction of changes in human activity patterns. We then systematically measure the influence of this technique on the performance of the HAR application. First, we evaluate system performance with respect to CPU and GPU when the application is deployed in containers and in the host environment; we then analyze the outcomes to verify the differences in model learning performance. The experiments show that the data parallelism strategy is effective at improving model learning performance and also increases the scaling efficiency of our system.
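The core idea of synchronous data parallelism described in the abstract can be sketched in a few lines: each worker computes gradients on its own shard of a mini-batch, the gradients are averaged (an all-reduce step), and the result equals the gradient over the full batch. The snippet below is an illustrative toy with a 1-D linear model and hand-written gradients; it is not the paper's LSTM, containerized, or Kubernetes-based setup, and all names in it are invented for illustration:

```python
def grad_mse(w, xs, ys):
    """Gradient of mean squared error for a 1-D linear model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_grad(w, xs, ys, num_workers):
    """Split the batch into equal shards, one per simulated worker,
    compute each worker's local gradient, then average them
    (the synchronous all-reduce step of data parallelism)."""
    shard = len(xs) // num_workers
    grads = []
    for k in range(num_workers):
        sx = xs[k * shard:(k + 1) * shard]
        sy = ys[k * shard:(k + 1) * shard]
        grads.append(grad_mse(w, sx, sy))  # local gradient on this shard
    return sum(grads) / num_workers        # averaged (all-reduced) gradient

# With equal-sized shards, the averaged gradient matches the full-batch one.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5
full = grad_mse(w, xs, ys)
parallel = data_parallel_grad(w, xs, ys, num_workers=2)
```

Because the averaged shard gradients are mathematically identical to the full-batch gradient, the model update is unchanged while the per-worker computation shrinks, which is the source of the training-time speedup the paper measures.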

Original language: English
Title of host publication: Proceedings - 11th IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2019, 19th IEEE International Conference on Computer and Information Technology, CIT 2019, 2019 International Workshop on Resource Brokering with Blockchain, RBchain 2019 and 2019 Asia-Pacific Services Computing Conference, APSCC 2019
Editors: Jinjun Chen, Laurence T. Yang
Publisher: IEEE Computer Society
Pages: 387-391
Number of pages: 5
ISBN (Electronic): 9781728150116
DOIs
Publication status: Published - Dec 2019
Event: 11th IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2019, 19th IEEE International Conference on Computer and Information Technology, CIT 2019, 2019 International Workshop on Resource Brokering with Blockchain, RBchain 2019 and 2019 Asia-Pacific Services Computing Conference, APSCC 2019 - Sydney, Australia
Duration: 11 Dec 2019 - 13 Dec 2019

Publication series

Name: Proceedings of the International Conference on Cloud Computing Technology and Science, CloudCom
Volume: 2019-December
ISSN (Print): 2330-2194
ISSN (Electronic): 2330-2186

Conference

Conference: 11th IEEE International Conference on Cloud Computing Technology and Science, CloudCom 2019, 19th IEEE International Conference on Computer and Information Technology, CIT 2019, 2019 International Workshop on Resource Brokering with Blockchain, RBchain 2019 and 2019 Asia-Pacific Services Computing Conference, APSCC 2019
Country/Territory: Australia
City: Sydney
Period: 11/12/19 - 13/12/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.

Keywords

  • Containers
  • Data parallelism
  • Human activity recognition
  • LSTM
  • Machine learning
