
One Less Reason for Filter-Pruning: Gaining Free Adversarial Robustness with Structured Grouped Kernel Pruning

Zach LeClaire, Shaochen Zhong, Zaichuan You, Jiamu Zhang, Sebastian Zhao, Zirui Liu, Daochen Zha, Vipin Chaudhary, Shuai Xu, and Xia Hu

Latest revision published Sep 21, 2023

Paper · Peer Reviewed · Revision 1

Abstract

Densely structured pruning methods that rely on simple pruning heuristics can deliver immediate compression and acceleration benefits with acceptable benign performance. However, empirical findings indicate that such naively pruned networks are extremely fragile under simple adversarial attacks. Naturally, we would like to know whether this phenomenon also holds for carefully designed modern structured pruning methods. If so, how severe is the degradation? And what kinds of remedies are available? Unfortunately, both the questions and the solution remain largely unaddressed: no prior art provides a thorough investigation of the adversarial performance of modern structured pruning methods (spoiler: it is not good), and the few works that attempt to provide mitigation often do so at various extra costs and with performance that leaves much to be desired. In this work, we answer both questions by fairly and comprehensively investigating the adversarial performance of 10+ popular structured pruning methods. Solution-wise, we take advantage of Grouped Kernel Pruning (GKP)'s recent success in pushing densely structured pruning freedom to a more fine-grained level. By mixing kernel smoothness, a classic robustness-related kernel-level metric, into a modified GKP procedure, we present a one-shot, post-train, data-free GKP method capable of advancing SOTA performance on both the benign and adversarial fronts, while requiring no extra (in fact, often less) cost than a standard pruning procedure.
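To make the general idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of how a kernel-smoothness term could be mixed into a grouped-kernel pruning score. The total-variation smoothness proxy, the consecutive-input-channel grouping, the mixing weight `alpha`, and all function names are illustrative assumptions and are not taken from the paper's actual procedure.

```python
# Illustrative sketch only: grouping scheme, smoothness proxy (total
# variation), and mixing weight `alpha` are assumptions, not the
# authors' method.
import torch


def kernel_total_variation(weight: torch.Tensor) -> torch.Tensor:
    """Per-kernel total variation; lower values indicate smoother kernels.

    weight: (out_channels, in_channels, k, k)
    returns: (out_channels, in_channels)
    """
    dh = (weight[..., 1:, :] - weight[..., :-1, :]).abs().sum(dim=(-2, -1))
    dw = (weight[..., :, 1:] - weight[..., :, :-1]).abs().sum(dim=(-2, -1))
    return dh + dw


def grouped_kernel_scores(weight: torch.Tensor, group_size: int = 4,
                          alpha: float = 0.5) -> torch.Tensor:
    """Score kernel groups by L1 magnitude minus a roughness penalty."""
    out_c, in_c, _, _ = weight.shape
    magnitude = weight.abs().sum(dim=(-2, -1))   # (out_c, in_c)
    roughness = kernel_total_variation(weight)   # (out_c, in_c)
    # Group consecutive input-channel kernels inside each filter.
    magnitude = magnitude.reshape(out_c, in_c // group_size, group_size).sum(-1)
    roughness = roughness.reshape(out_c, in_c // group_size, group_size).sum(-1)
    return magnitude - alpha * roughness         # higher score = keep


# Usage: mask out the lowest-scoring half of the kernel groups in one layer.
weight = torch.randn(64, 32, 3, 3)                    # a 3x3 conv layer
scores = grouped_kernel_scores(weight, group_size=4)  # (64, 8)
prune_idx = scores.flatten().argsort()[: scores.numel() // 2]
mask = torch.ones(scores.numel())
mask[prune_idx] = 0.0
mask = mask.reshape(scores.shape).repeat_interleave(4, dim=1)  # (64, 32)
pruned_weight = weight * mask[:, :, None, None]
```

In this sketch, groups of smoother (lower total-variation) kernels are favored for retention alongside high-magnitude ones, reflecting the general intuition behind robustness-related smoothness criteria rather than the paper's specific one-shot, data-free procedure.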

Topics: Electrical Engineering and Computer Science, Artificial Intelligence

Reviewed by: NeurIPS 2023 Conference Area Chair

DOI: 10.5281/zenodo.10677218

Cite as: eeXiv:3293w2ss8yaj