Noise-Resilient Neural Network-Based Adversarial Attack Modeling for XOR Physical Unclonable Functions

Authentication plays an essential role in preventing unauthorized access to data and resources in the Internet of Things (IoT). Classical security mechanisms fall short of meeting security requirements against many physical attacks due to the resource-constrained nature of IoT devices. Physical Unclonable Functions (PUFs) have been successfully used for lightweight security applications such as device authentication and secret key generation. PUFs exploit the inevitable variations of integrated circuits during the fabrication process to produce a unique response for each individual PUF instance, which is therefore not reproducible even by the manufacturer itself. However, PUFs can be mathematically cloned by machine learning-based methods. XOR arbiter PUFs are one group of PUFs that can withstand existing attack methods unless exceedingly long training times and large datasets are applied. In this paper, large XOR PUFs with 64-bit and 128-bit challenges are efficiently and effectively attacked using a carefully engineered neural network-based method. Our fine-tuned neural network-based adversarial models achieve 99% prediction accuracy on noise-free datasets and at least 96% prediction accuracy on noisy datasets, while using up to 55% less data than existing works known to us. Revealing such vulnerabilities is essential for PUF developers to re-evaluate existing PUF designs and thereby avoid potential risks for IoT devices.
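The attack described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual model or hyperparameters: it simulates a k-XOR arbiter PUF using the standard additive delay model (each arbiter chain as a linear function of the challenge's parity feature vector, with the XOR PUF response being the XOR of the chain responses), then trains a small multilayer perceptron on the collected challenge-response pairs.

```python
# Hypothetical sketch of a neural-network attack on a simulated XOR arbiter
# PUF. All sizes and network settings are illustrative assumptions, not the
# configuration used in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def parity_features(challenges):
    # Map 0/1 challenge bits to the +/-1 parity feature vector used in the
    # standard linear additive delay model of an arbiter PUF.
    signs = 1 - 2 * challenges                              # 0/1 -> +1/-1
    # phi_i = product of signs from bit i to the last bit; append a bias 1.
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def xor_puf_responses(weights, challenges):
    # Each row of `weights` models one arbiter chain's delay parameters.
    # In +/-1 encoding, XOR of chain outputs is the product of their signs.
    phi = parity_features(challenges)
    chain_resp = np.sign(phi @ weights.T)                   # shape (N, k)
    return (np.prod(chain_resp, axis=1) < 0).astype(int)    # back to 0/1

# Illustrative sizes: a 64-bit, 4-XOR PUF and 40k challenge-response pairs.
n_bits, k_xor, n_crps = 64, 4, 40_000
weights = rng.normal(size=(k_xor, n_bits + 1))              # secret delays
challenges = rng.integers(0, 2, size=(n_crps, n_bits))
responses = xor_puf_responses(weights, challenges)

# The adversary trains a small MLP on the parity features of observed CRPs.
X_train, X_test, y_train, y_test = train_test_split(
    parity_features(challenges), responses, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=60,
                      random_state=0)
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
```

Feeding the network parity features rather than raw challenge bits reflects the linear delay model's structure; how close the resulting accuracy comes to the paper's reported figures depends on the network size, training budget, and dataset size, none of which are specified here.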
