Blake Wilson, Yuheng Chen, Daksh Kumar Singh, Rohan Ojha, Jaxon Pottle, Michael Bezick, Alexandra Boltasseva, Vladimir M. Shalaev, Alexander V. Kildishev, "Authentication through residual attention-based processing of tampered optical responses," Adv. Photon. 6, 056002 (2024)


Fig. 1. PUF sampling process. An overview of the PUF tamper detection method using distance matrices of randomly positioned gold nanoparticles. The process consists of four primary stages. (i) Gold nanoparticles are randomly introduced, serving as a distinct physical system. (ii) The nanoparticles’ distance matrix is recorded and archived in a reference database. (iii) The system may experience external tampering or natural degradation that can modify its initial state. (iv) The distance matrix is reassessed and cross-referenced with the initial database to identify any potential tampering or other changes.

Fig. 2. Distance matrix extraction from dark-field images. Nanoparticle dark-field images are prepared using dark-field microscopy. Then, the segmentation process classifies pixels as belonging to either a nanoparticle pattern or the dark-field background. Next, nanoparticle pattern pixel regions are clustered into local particle patterns, and their centers of mass (purple points) are extracted. Finally, the distance matrix is generated by evaluating all pairwise distances between these nanoparticle patterns. We visualize the distance matrix using its minimum spanning tree, although the full distance matrix is all-to-all. All scale bars represent .
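A minimal sketch of the pairwise distance-matrix computation described in this caption, assuming a segmentation mask has already been produced; the function names and the use of SciPy's connected-component labeling are illustrative choices, not the paper's implementation:

```python
# Hypothetical sketch of the Fig. 2 pipeline after segmentation: cluster foreground
# pixels into particles, take their centers of mass, and build the distance matrix.
import numpy as np
from scipy.ndimage import label, center_of_mass
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def extract_distance_matrix(mask: np.ndarray) -> np.ndarray:
    """Cluster segmented pixels into particles and return all pairwise distances."""
    # Group connected foreground pixels into individual nanoparticle patterns.
    labeled, n_particles = label(mask > 0)
    # Center of mass (row, col) of each labeled particle region.
    centers = np.array(center_of_mass(mask, labeled, range(1, n_particles + 1)))
    # All-to-all Euclidean distance matrix between particle centers.
    return squareform(pdist(centers))

def mst_edges(dist: np.ndarray) -> np.ndarray:
    """Minimum spanning tree of the full distance matrix, used only for visualization."""
    return minimum_spanning_tree(dist).toarray()
```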

Fig. 3. The machine-learning-assisted authentication model is trained by classifying synthetic posttamper measurements as either adversarially tampered or naturally degraded. We use a pretrained segmentation model, along with a labeled clustering algorithm, to compute the distance matrix and radii of the nanoparticles for both samples. The discriminator network is then trained by randomly choosing a synthetic tampering type according to the tampering Bernoulli distribution.
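A hedged sketch of the training step outlined above, assuming a PyTorch discriminator that maps pretamper/posttamper features to a tamper probability; `tamper_fn`, `degrade_fn`, and `p_tamper` are placeholders for the paper's synthetic tampering and natural-degradation models:

```python
# Minimal sketch of the Fig. 3 training step: draw the tampering type from a
# Bernoulli distribution, generate the synthetic posttamper sample, and train the
# discriminator with binary cross-entropy.
import torch
import torch.nn.functional as F

def training_step(model, optimizer, pre_features, tamper_fn, degrade_fn, p_tamper=0.5):
    # Synthetic tampering type: 1 = adversarially tampered, 0 = naturally degraded.
    label = torch.bernoulli(torch.tensor([p_tamper]))
    post_features = tamper_fn(pre_features) if label.item() == 1 else degrade_fn(pre_features)
    # Discriminator outputs the probability that the sample was adversarially tampered.
    pred = model(pre_features, post_features)
    loss = F.binary_cross_entropy(pred.view(-1), label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```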

Fig. 4. Adversarial tampering is introduced by tearing the substrate, which separates the gold nanoparticles according to their distance from the tear line, and then filling the tear with new nanoparticles distributed uniformly to match the original distribution. The tearing of the substrate is modeled as a random cut that shifts the nanoparticles based on the inverse square root of their perpendicular distance to the cut. (a), (b) Two tearing coefficients demonstrate how the particle separation increases with the tearing coefficient. (c) The normalized expected distance between nanoparticles is plotted for natural degradation, adversarial tearing without filling, and adversarial tearing with filling.
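An illustrative sketch of one way such a tearing model could be realized for particle positions in a unit square; the cut parameterization, the tearing coefficient `c`, and the fill density below are assumptions, not values from the paper:

```python
# Sketch of the Fig. 4 tearing model: shift particles away from a random cut line
# by an amount scaled by the inverse square root of their perpendicular distance,
# then fill the opened gap with uniformly placed new particles.
import numpy as np

def adversarial_tear(positions, c=0.05, n_fill=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Random cut line through the unit square: a point on the line and a unit normal.
    point = rng.uniform(0.0, 1.0, size=2)
    theta = rng.uniform(0.0, np.pi)
    normal = np.array([np.cos(theta), np.sin(theta)])
    # Signed perpendicular distance of every particle to the cut line.
    d = (positions - point) @ normal
    # Shift particles away from the cut; opposite sides move in opposite directions.
    shift = c * np.sign(d)[:, None] * normal / np.sqrt(np.abs(d)[:, None] + 1e-6)
    torn = positions + shift
    # Fill the opened gap with new particles drawn uniformly near the cut line.
    tangent = np.array([-normal[1], normal[0]])
    t = rng.uniform(-1.0, 1.0, size=n_fill)        # position along the cut line
    offsets = rng.uniform(-c, c, size=n_fill)       # small offsets across the gap
    fill = point + t[:, None] * tangent + offsets[:, None] * normal
    return np.vstack([torn, fill])
```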

Fig. 5. RAPTOR uses an attention mechanism to prioritize nanoparticle correlations across pretamper and posttamper samples before passing them into a residual, attention-based deep convolutional classifier. (a) RAPTOR takes the top 56 nanoparticles in descending order of radius to construct the distance matrices and radii of the pretamper and posttamper samples. (b) The radii and distance matrices form the query and value embeddings of an attention mechanism. The attention output is then used alongside the raw distance matrices, the soft weight matrix, and the matrix generated from the radii vectors as input to the classifier. (c) The classifier applies GELU activations and attention layers, followed by a convolutional kernel layer and a max-pooling layer. The output is then flattened and passed to a multilayer perceptron to compute the final classification.
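A simplified, hedged sketch of the input preparation and attention step in panel (b), assuming 56 particles per sample; RAPTOR's actual embedding dimensions and layer layout are not reproduced here, and the module below is only one possible reading of the caption:

```python
# Sketch of Fig. 5(a)-(b): keep the 56 largest particles, then form an attention
# weighting in which the radii supply the queries/keys and the distance matrices
# supply the values. Names and dimensions are illustrative.
import torch
import torch.nn as nn

N = 56  # number of nanoparticles kept, in descending order of radius

def top_n_by_radius(dist, radii, n=N):
    """Keep the n largest particles and the matching rows/columns of the distance matrix."""
    idx = torch.argsort(radii, descending=True)[:n]
    return dist[idx][:, idx], radii[idx]

class RadiiDistanceAttention(nn.Module):
    """Attention over particle correlations: radii -> queries/keys, distances -> values."""
    def __init__(self, n=N):
        super().__init__()
        self.q = nn.Linear(1, n)
        self.k = nn.Linear(1, n)

    def forward(self, r_pre, r_post, d_pre, d_post):
        # Soft weight matrix computed from the pre/post radii vectors.
        q = self.q(r_pre.unsqueeze(-1))               # (N, N)
        k = self.k(r_post.unsqueeze(-1))              # (N, N)
        weights = torch.softmax(q @ k.T / N ** 0.5, dim=-1)
        # Attention-weighted combination of the raw distance matrices.
        attended = weights @ torch.stack([d_pre, d_post]).mean(0)
        # Stack channels for the downstream convolutional classifier.
        return torch.stack([d_pre, d_post, weights, attended])
```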
Table 1. Overall performance comparison of each method on the distance matrix extraction and discrimination tasks. For all results in the table, a 1000-sample tensor was loaded onto an NVIDIA T4 GPU (except Procrustes, which used CPU RAM) and batched at the maximum capacity for the particular model. Accuracy is measured as the fraction of correctly classified pixels (segmentation) or authentication decisions (discrimination). For semantic segmentation, we include the BCE loss to show a marginal advantage of ResNet over Gaussian blur. Computation time is measured by preloading all data onto the NVIDIA T4 GPU or into CPU RAM before recording the start time.
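A short sketch of the timing convention described in the caption, where data are preloaded onto the device before the clock starts so that only batched inference is timed; the batch size and model are placeholders:

```python
# Illustrative timing harness matching the Table 1 convention: preload data,
# synchronize the GPU, then time only the batched forward passes.
import time
import torch

def time_batched_inference(model, samples, batch_size=250, device="cuda"):
    data = samples.to(device)                  # preload the full 1000-sample tensor
    model = model.to(device).eval()
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for batch in data.split(batch_size):
            model(batch)
    if device == "cuda":
        torch.cuda.synchronize()               # wait for all GPU kernels to finish
    return time.perf_counter() - start
```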
