Abstract
Online learning is widely used in production to refine model parameters after initial deployment. This opens several vectors for covertly launching attacks against deployed models. To detect these attacks, prior work developed black-box and white-box testing methods. However, these methods leave a prohibitive open challenge: how is the investigator supposed to recover the model (uniquely refined on an in-the-field device) for testing in the first place? We propose a novel memory forensic technique, named AiP, that automatically recovers the unique deployment model and rehosts it in a lab environment for investigation. AiP navigates through both main memory and GPU memory spaces to recover complex ML data structures, using recovered Python objects to guide the recovery of lower-level C objects, ultimately leading to the recovery of the uniquely refined model. AiP then rehosts the model on the investigator's device, where the investigator can apply various white-box testing methodologies. We have evaluated AiP using three versions of TensorFlow and PyTorch with the CIFAR-10, LISA, and IMDB datasets. AiP recovered 30 models from main memory and GPU memory with 100% accuracy and successfully rehosted them into a live process.
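The paper's pipeline is not reproduced here, but the core idea the abstract describes, using the layout of recovered Python objects to locate lower-level C buffers, can be sketched. Below is a minimal illustration in Python, assuming a 64-bit CPython process and two hypothetical values: `TENSOR_TYPE_ADDR` (the dump address of the framework's tensor type object) and `DATA_PTR_OFFSET` (where the C-level data pointer sits inside a tensor object). Real offsets vary by framework and build; the actual traversal in AiP is more involved.

```python
import struct

# Illustrative constants -- real values depend on the CPython build,
# the ML framework version, and the dump's virtual-address layout.
POINTER_SIZE = 8                    # assume a 64-bit process
TENSOR_TYPE_ADDR = 0x7F3A12345678   # hypothetical address of the tensor PyTypeObject
DATA_PTR_OFFSET = 0x30              # hypothetical offset of the C data pointer
                                    # inside the tensor object

def find_tensor_objects(dump: bytes, base_addr: int):
    """Scan a raw memory dump for Python objects whose ob_type pointer
    matches the tensor type, then follow a fixed offset to the underlying
    C buffer -- mirroring the idea of letting recovered Python objects
    guide the recovery of lower-level C objects."""
    needle = struct.pack("<Q", TENSOR_TYPE_ADDR)
    hits = []
    pos = dump.find(needle)
    while pos != -1:
        # In CPython, ob_type sits one pointer past the start of a PyObject
        # (after ob_refcnt), so the object begins POINTER_SIZE bytes earlier.
        obj_off = pos - POINTER_SIZE
        if obj_off >= 0 and obj_off + DATA_PTR_OFFSET + POINTER_SIZE <= len(dump):
            (data_ptr,) = struct.unpack_from("<Q", dump, obj_off + DATA_PTR_OFFSET)
            hits.append({
                "object_addr": base_addr + obj_off,  # virtual address of the Python object
                "data_ptr": data_ptr,                # candidate address of the raw weight buffer
            })
        pos = dump.find(needle, pos + 1)
    return hits
```

A candidate `data_ptr` that falls outside the dumped main-memory range would then have to be resolved against the GPU memory space, consistent with the abstract's description of navigating both memory spaces.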
Original language | English |
---|---|
Title of host publication | Proceedings of the 33rd USENIX Security Symposium |
Publisher | USENIX Association |
Pages | 1687-1704 |
Number of pages | 18 |
ISBN (Electronic) | 9781939133441 |
Publication status | Published - 2024 |
Event | 33rd USENIX Security Symposium, USENIX Security 2024 - Philadelphia, United States; Duration: 14 Aug 2024 → 16 Aug 2024 |
Publication series
Name | Proceedings of the 33rd USENIX Security Symposium |
---|---|
Conference
Conference | 33rd USENIX Security Symposium, USENIX Security 2024 |
---|---|
Country/Territory | United States |
City | Philadelphia |
Period | 14/08/24 → 16/08/24 |
Bibliographical note
Publisher Copyright: © USENIX Security Symposium 2024. All rights reserved.