[ACSAC’24] Z. Zhang, X. Zhang, Y. Zhang, L. Y. Zhang, C. Chen, S. Hu, A. Gill, S. Pan, “Stealing Watermarks of Large Language Models via Mixed Integer Programming,” in ACSAC, 2024. (CORE A, CCF B) PDF
[ICML’24] L. Hou, R. Feng, Z. Hua, W. Luo, L. Y. Zhang, Y. Li, “IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency,” in ICML, 2024. (CORE A*, CCF A) PDF
[Oakland’24a] Y. Zhang, S. Hu, L. Y. Zhang, J. Shi, M. Li, X. Liu, W. Wan, H. Jin, “Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training,” in Oakland, 2024. PDF CODE SLIDES
[Oakland’24b] X. Mo, Y. Zhang, L. Y. Zhang, W. Luo, N. Sun, S. Hu, S. Gao, Y. Xiang, “Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics,” in Oakland, 2024. PDF CODE SLIDES
[Oakland’24c] Z. Zhou, M. Li, W. Liu, S. Hu, Y. Zhang, W. Wan, L. Xue, L. Y. Zhang, D. Yao, H. Jin, “Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples,” in Oakland, 2024. PDF CODE
[IJCAI’24a] H. Zhang, S. Hu, Y. Wang, L. Y. Zhang, Z. Zhou, X. Wang, Y. Zhang, C. Chen, “Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness,” in IJCAI, 2024. PDF CODE DEMO
[IJCAI’24b] W. Wan, Y. Ning, S. Hu, L. Xue, M. Li, L. Y. Zhang, Y. Wang, “DarkFed: A Data-Free Backdoor Attack in Federated Learning,” in IJCAI, 2024. PDF CODE
[AAAI’24a] Q. Duan, Z. Hua, Q. Liao, Y. Zhang, L. Y. Zhang, “Conditional Backdoor Attack via JPEG Compression,” in AAAI, 2024. PDF CODE
[AAAI’24b] D. Mi, Y. Zhang, L. Y. Zhang, S. Hu, Q. Zhong, H. Yuan, S. Pan, “Towards Model Extraction Attacks in GAN-based Image Translation via Domain Shift Mitigation,” in AAAI, 2024. PDF CODE
[AAAI’24c] L. Xue, S. Hu, R. Zhao, L. Y. Zhang, X. Hu, L. Sun, D. Yao, “Revisiting Gradient Pruning: A Dual Realization for Defending Against Gradient Attacks,” in AAAI, 2024. PDF CODE
[ACM MM’23] W. Wan, S. Hu, M. Li, J. Lu, L. Zhang, L. Y. Zhang, H. Jin, “A Four-Pronged Defense Against Byzantine Attacks in Federated Learning,” in ACM MM, 2023. PDF
[ICCV’23] Z. Zhou, S. Hu, R. Zhao, Q. Wang, L. Y. Zhang, J. Hou, H. Jin, “Downstream-agnostic Adversarial Examples,” in ICCV, 2023. PDF
[IJCAI’23] H. Zhang, Z. Yao, L. Y. Zhang, S. Hu, C. Chen, A. W.-C. Liew, Z. Li, “Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning,” in IJCAI, 2023.
[AsiaCCS’23a] X. Zhang, Z. Zhang, Q. Zhong, X. Zheng, Y. Zhang, S. Hu, L. Y. Zhang, “Masked Language Model Based Textual Adversarial Example Detection,” in AsiaCCS, 2023. PDF
[AsiaCCS’23b] M. Ma, Y. Zhang, P. C. M. Arachchige, L. Y. Zhang, M. B. Chhetri, G. Bai, “LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks,” in AsiaCCS, 2023. PDF
[TIFS’23] Z. Gong, L. Shen, Y. Zhang, L. Y. Zhang, J. Wang, G. Bai, Y. Xiang, “AGRAMPLIFIER: Defending Federated Learning Against Poisoning Attacks Through Local Update Amplification,” IEEE Transactions on Information Forensics and Security, 2023.
[CVPR’22] S. Hu, X. Liu, Y. Zhang, M. Li, L. Y. Zhang, H. Jin, L. Wu, “Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer,” in IEEE/CVF CVPR, 2022.
[IJCAI’22] W. Wan, S. Hu, J. Lu, L. Y. Zhang, H. Jin, Y. He, “Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection,” in IJCAI, 2022.