TY - JOUR
T1 - Robustness of Workload Forecasting Models in Cloud Data Centers
T2 - A White-Box Adversarial Attack Perspective
AU - Mahbub, Nosin Ibna
AU - Hossain, Md Delowar
AU - Akhter, Sharmen
AU - Hossain, Md Imtiaz
AU - Jeong, Kimoon
AU - Huh, Eui Nam
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - Cloud computing has become the cornerstone of modern technology, propelling industries to unprecedented heights with its remarkable recent advances. However, a fundamental challenge for cloud service providers is real-time workload prediction and management for optimal resource allocation. Cloud workloads are heterogeneous, unpredictable, and fluctuating, which makes this task even more challenging. Following the remarkable achievements of deep learning (DL) algorithms across diverse fields, researchers have begun to embrace this approach to address such challenges, and it has become the de facto standard for cloud workload prediction. Unfortunately, DL algorithms are widely recognized as vulnerable to adversarial examples, which poses a significant challenge to DL-based forecasting models. In this study, we utilize established white-box adversarial attack generation methods from the field of computer vision to construct adversarial cloud workload examples for four cutting-edge deep learning regression models: Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and 1D Convolutional Neural Network (1D-CNN), as well as attention-based models. We evaluate our study on three widely recognized cloud benchmark datasets: Google trace, Alibaba trace, and Bitbrain. The findings of our analysis unequivocally indicate that DL-based cloud workload forecasting models are highly vulnerable to adversarial attacks. To the best of our knowledge, we are the first to conduct systematic research exploring the vulnerability of DL-based workload forecasting models in cloud data centers, highlighting the inherent hazards to both security and cost-effectiveness. By raising awareness of these vulnerabilities, we advocate the urgent development of robust defensive mechanisms to enhance the security of cloud workload forecasting in a constantly evolving technical landscape.
AB - Cloud computing has become the cornerstone of modern technology, propelling industries to unprecedented heights with its remarkable recent advances. However, a fundamental challenge for cloud service providers is real-time workload prediction and management for optimal resource allocation. Cloud workloads are heterogeneous, unpredictable, and fluctuating, which makes this task even more challenging. Following the remarkable achievements of deep learning (DL) algorithms across diverse fields, researchers have begun to embrace this approach to address such challenges, and it has become the de facto standard for cloud workload prediction. Unfortunately, DL algorithms are widely recognized as vulnerable to adversarial examples, which poses a significant challenge to DL-based forecasting models. In this study, we utilize established white-box adversarial attack generation methods from the field of computer vision to construct adversarial cloud workload examples for four cutting-edge deep learning regression models: Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and 1D Convolutional Neural Network (1D-CNN), as well as attention-based models. We evaluate our study on three widely recognized cloud benchmark datasets: Google trace, Alibaba trace, and Bitbrain. The findings of our analysis unequivocally indicate that DL-based cloud workload forecasting models are highly vulnerable to adversarial attacks. To the best of our knowledge, we are the first to conduct systematic research exploring the vulnerability of DL-based workload forecasting models in cloud data centers, highlighting the inherent hazards to both security and cost-effectiveness. By raising awareness of these vulnerabilities, we advocate the urgent development of robust defensive mechanisms to enhance the security of cloud workload forecasting in a constantly evolving technical landscape.
KW - Cloud computing
KW - adversarial attack
KW - cloud security
KW - deep learning
KW - workload prediction
UR - http://www.scopus.com/inward/record.url?scp=85190726288&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3385863
DO - 10.1109/ACCESS.2024.3385863
M3 - Article
AN - SCOPUS:85190726288
SN - 2169-3536
VL - 12
SP - 55248
EP - 55263
JO - IEEE Access
JF - IEEE Access
ER -