I am an Assistant Professor in the Department of Computer Science at the University of Alabama at Birmingham (UAB). I received my Ph.D. in Computer Science from Penn State University under the guidance of Dr. George Kesidis and Dr. David Miller, with a dissertation on poisoning attacks and defenses for machine learning models. I earned my M.S. in Computer Science, also from Penn State, in 2018, and my B.S. from the School of Information Science and Engineering at Southeast University in Nanjing, China, in 2016.
Research Interests: My research broadly focuses on trustworthy AI. Recent topics include backdoor attacks and defenses for deep neural networks and large language models, defenses against data poisoning, and the security of federated learning systems that integrate foundation models.
My resume is available here, and my most recent publications are listed on Google Scholar.
AAAR-1.0: Assessing AI's Potential to Assist Research
R. Lou, H. Xu, S. Wang, J. Du, R. Kamoi, X. Lu, J. Xie, Y. Sun, Y. Zhang, J. J. Ahn, H. Fang, Z. Zou, W. Ma, X. Li, K. Zhang, C. Xia, L. Huang, W. Yin
Under review
Securing Federated Learning Against Novel and Classic Backdoor Threats During Foundation Model Integration
X. Bi, X. Li
Under review
Chain-of-Scrutiny: Detecting Backdoor Attacks for Large Language Models
X. Li, Y. Zhang, R. Lou, C. Wu, J. Wang
Under review
Position Paper: Assessing Robustness, Privacy, and Fairness in Federated Learning Integrated with Foundation Models
X. Li, J. Wang
Under review
CEPA: Consensus Embedded Perturbation for Agnostic Detection and Inversion of Backdoors
G. Yang, X. Li, H. Wang, D. J. Miller, G. Kesidis
Under review
Vulnerabilities of Foundation Model Integrated Federated Learning Systems Under Adversarial Threats
X. Li, C. Wu, J. Wang
Under review
Backdoor Mitigation by Correcting Distribution of Neural Activation
X. Li, Z. Xiang, D. J. Miller, G. Kesidis
Neurocomputing, 2024
BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers: A Comprehensive Study
X. Li, D. J. Miller, Z. Xiang, G. Kesidis
IEEE Transactions on Knowledge and Data Engineering (TKDE), 2024
Unveiling Backdoor Risks Brought by Foundation Models in Heterogeneous Federated Learning
X. Li, C. Wu, J. Wang
Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2024
Temporal-Distributed Backdoor Attack Against Video-Based Action Recognition
X. Li, S. Wang, R. Huang, M. Gowda, G. Kesidis
AAAI Conference on Artificial Intelligence (AAAI), 2024
Backdoor Threats from Compromised Foundation Models to Federated Learning
X. Li, S. Wang, C. Wu, H. Zhou, J. Wang
FL@FM Workshop at NeurIPS, 2023
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
X. Li, D. J. Miller, Z. Xiang, G. Kesidis
IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2023
Test-Time Detection of Backdoor Triggers of Poisoned Deep Neural Networks
X. Li, D. J. Miller, Z. Xiang, G. Kesidis
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022
Detecting Backdoor Attacks Against Point Cloud Classifiers
Z. Xiang, D. J. Miller, S. Chen, X. Li, G. Kesidis
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022
A Backdoor Attack against 3D Point Cloud Classifiers
Z. Xiang, D. J. Miller, S. Chen, X. Li, G. Kesidis
IEEE/CVF International Conference on Computer Vision (ICCV), 2021
I'm looking for highly motivated Ph.D. students to join my research group at CS@UAB starting in Fall 2025. Please check the Recruitment page. If you are interested, apply to the CS Ph.D. program and mention my name in your application; additionally, send your CV and transcript to this email with the subject line [25Fall PhD Application].
Instructor, University of Alabama at Birmingham (UAB)
Teaching Assistant, Penn State University (PSU)
Professional service: conference program committee member, conference reviewer, journal reviewer, and student volunteer.