Hi, my name is Shuang Zeng. Welcome to my homepage! I received my bachelor's degree in Engineering from Peking University, Beijing, China, in 2021. I am currently a Biomedical Engineering Ph.D. student in the Peking University - Georgia Institute of Technology - Emory University joint program. I am very fortunate to be advised by Prof. Qiushi Ren and Assistant Professor Dr. Yanye Lu at MILab, College of Future Technology, PKU, and by Prof. May Dongmei Wang at Bio-MIBLab, Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University. My research interests focus on self-supervised contrastive learning, large language models, explainable AI, and medical image processing. Feel free to reach out for communication and collaboration!

Shuang Zeng, Lei Zhu, Xinliang Zhang, Qian Chen, Hangzhou He, Lujia Jin, Zifeng Tian, Zhaoheng Xie, Micky C Nnamdi, Wenqi Shi, J Ben Tamo, May D. Wang, Yanye Lu# (# corresponding author)
IEEE Journal of Biomedical and Health Informatics 2026 (CAS Q2 Top journal, IF: 6.8)
We propose a novel Multi-level Asymmetric Contrastive Learning framework named MACL by introducing an asymmetric CL structure and a multi-level CL strategy to realize one-stage encoder-decoder synchronous pre-training for medical image segmentation.

Shuang Zeng, Lei Zhu, Xinliang Zhang, Hangzhou He, Yanye Lu# (# corresponding author)
IEEE Transactions on Image Processing 2026 (CAS Q1 Top journal, IF: 13.7)
We propose SuperCL, a superpixel-guided contrastive learning framework for medical image segmentation pre-training, which exploits the structural prior and pixel correlation of images by introducing two novel contrastive pairs generation strategies: Intra-image Local Contrastive Pairs (ILCP) Generation and Inter-image Global Contrastive Pairs (IGCP) Generation.

Shuang Zeng, Chee Hong Lee, Kaiwen Li, Boxu Xie, Ourui Fu, Hangzhou He, Lei Zhu#, Yanye Lu#, Fangxiao Cheng# (# corresponding author)
Expert Systems With Applications 2025 (CAS Q1 Top journal, IF: 7.5)
We design a novel loss named Channel-Coupled Vessel Consistency Loss to enforce coherence and consistency among vessel, artery, and vein predictions, as well as a regularization term named intra-image pixel-level contrastive loss to extract more discriminative, fine-grained feature-level representations for accurate retinal A/V classification.

Shuang Zeng*, Chee Hong Lee*, Micky C. Nnamdi, Wenqi Shi, J. Ben Tamo, Hangzhou He, Xinliang Zhang, Qian Chen, May D. Wang, Lei Zhu#, Yanye Lu#, Qiushi Ren# (* equal contribution, # corresponding author)
Image and Vision Computing 2025
We propose a new retinal vessel segmentation model named AttUKAN to selectively filter skip connection features and a Label-guided Pixel-wise Contrastive Loss (LPCL) to extract more discriminative features by distinguishing between foreground vessel-pixel sample pairs and background sample pairs.