I am a Senior Lecturer in the School of ICT and the Program Director for the Bachelor of Cybersecurity at Griffith University. Prior to this, I was a faculty member in the School of IT at Deakin University (2018-2023). I obtained my Ph.D. in 2016 from the Department of Electrical Engineering at City University of Hong Kong, where I later worked as a research fellow in the Department of Computer Science before joining Deakin. My research interests span a wide range of topics in cybersecurity, with a particular focus on trustworthy AI and applied cryptography.
I am a core member of the TrustAGI Lab, whose goal is to endow machines with human-level intelligence while ensuring trustworthiness and transparency. I am a member of IEEE and ACM, and an Associate Editor of IEEE Transactions on Dependable and Secure Computing and IEEE Transactions on Multimedia.
I am always looking for self-motivated students. If you are interested in my research topics, please email me your CV, transcript, and English test scores. Information about Griffith PhD admission and scholarships can be found here.
I am recruiting two PhD students with full scholarships to work on quantum-safe algorithms and protocols. See here for more information.
One CSC PhD position and two visiting positions are available for 2025! See here for more information.
News
- [Oct-24] I have joined the editorial team of IEEE Transactions on Multimedia as an associate editor. Please submit your best work!
- [Sep-24] Our new work DarkSAM: Fooling Segment Anything Model to Segment Nothing has been accepted by the Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024)! Congrats, Ziqi!
- [Aug-24] Our new work Stealing Watermarks of Large Language Models via Mixed Integer Programming has been accepted by the 2024 Annual Computer Security Applications Conference (ACSAC 2024)! Congrats, Zhaoxi and Xiaomei!
- [Aug-24] I will serve as a Program Committee member for ASIACCS-25. Looking forward to reviewing your excellent submissions!
- [Jul-24] Our new work DERD: Data-free Adversarial Robustness Distillation through Self-adversarial Teacher Group has been accepted by ACM Multimedia (MM 2024)! Congrats, Yuhang!
- [Jun-24] Our new work ECLIPSE: Expunging Clean-label Indiscriminate Poisons via Sparse Diffusion Purification has been accepted to the Spring Cycle of ESORICS 2024! Congrats, Xianlong!
- [May-24] Our new work IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency has been accepted by the 2024 International Conference on Machine Learning (ICML 2024)! Congrats, Linshan!
- [Apr-24] I will serve as a Program Committee member for PETS-25. Looking forward to reviewing your excellent submissions!
- [Apr-24] Our new works Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness and DarkFed: A Data-Free Backdoor Attack in Federated Learning have been accepted by the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024)! Congrats, Hangtao and Wei!
- [Mar-24] Our new work Exploiting Class-Wise Rotation for Availability Poisoning Attacks in 3D Point Clouds has been accepted by the 29th European Symposium on Research in Computer Security (ESORICS 2024)! Congrats, Xianlong!
- [Mar-24] Our new work CryptGraph: An Efficient Privacy-Enhancing Solution for Accurate Shortest Path Retrieval in Cloud Environments has been accepted by the 19th ACM ASIA Conference on Computer and Communications Security (ASIACCS 2024)! Congrats, Fuyi!
- [Mar-24] Our new work Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples has been accepted with shepherding by the winter round of the 45th IEEE Symposium on Security and Privacy (Oakland 2024)! Congrats, Ziqi!
Older News
[Dec-23] Glad to share that our three papers, Towards Model Extraction Attacks in GAN-based Image Translation via Domain Shift Mitigation, Conditional Backdoor Attack via JPEG Compression, Revisiting Gradient Pruning: A Dual Realization for Defending Against Gradient Attacks, have been accepted by the 38th AAAI Conference on Artificial Intelligence (AAAI-24)!
[Nov-23] I have joined the editorial team of IEEE Transactions on Dependable and Secure Computing as an associate editor. Please submit your best work!
[Oct-23] Our new work Robust Backdoor Detection for Deep Learning via Topological Evolution Dynamics has been accepted with shepherding by the 45th IEEE Symposium on Security and Privacy (Oakland 2024)!
[Sep-23] Our paper titled Towards Self-Interpretable Graph-Level Anomaly Detection has been accepted by the Conference on Neural Information Processing Systems (NeurIPS 2023)!
[Jul-23] Our two papers titled PointCRT: Detecting Backdoor in 3D Point Cloud via Corruption Robustness, and A Four-Pronged Defense Against Byzantine Attacks in Federated Learning have been accepted by ACM Multimedia 2023 (ACM MM 2023)!
[Jul-23] Our paper titled Downstream-agnostic Adversarial Examples has been accepted by the International Conference on Computer Vision 2023 (ICCV 2023)!
[Jul-23] Glad to share that our paper Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training has been accepted with shepherding by the 45th IEEE Symposium on Security and Privacy (Oakland 2024)!
[Jul-23] Our paper titled SigA: rPPG-based Authentication for Virtual Reality Head-mounted Display has been accepted by the 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2023)!
[Apr-23] Our paper titled Denial-of-Service or Fine-Grained Control: Towards Flexible Model Poisoning Attacks on Federated Learning has been accepted by the 32nd International Joint Conference on Artificial Intelligence (IJCAI 2023)!
[Apr-23] Our paper titled PriGenX: Privacy-preserving Query With Anonymous Access Control for Genomic Data has been accepted by IEEE TDSC!
[Mar-23] Our paper titled Predicate Private Set Intersection With Linear Complexity has been accepted by the 21st International Conference on Applied Cryptography and Network Security (ACNS 2023)!
[Mar-23] Our two papers titled LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks, and Masked Language Model Based Textual Adversarial Example Detection, have been accepted by the 18th ACM ASIA Conference on Computer and Communications Security (ASIACCS 2023)!
[Mar-23] I joined Griffith University as a Senior Lecturer (in the Commonwealth system).