
Shuang Zeng, Lei Zhu, Xinliang Zhang, Qian Chen, Hangzhou He, Lujia Jin, Zifeng Tian, Zhaoheng Xie, Micky C Nnamdi, Wenqi Shi, J Ben Tamo, May D. Wang, Yanye Lu# (# corresponding author)
IEEE Journal of Biomedical and Health Informatics, 2026 (CAS Zone 2 Top journal, IF: 6.8)
We propose a novel Multi-level Asymmetric Contrastive Learning framework, named MACL, which introduces an asymmetric CL structure and a multi-level CL strategy to realize one-stage, synchronous encoder-decoder pre-training for medical image segmentation.

Shuang Zeng, Lei Zhu, Xinliang Zhang, Hangzhou He, Yanye Lu# (# corresponding author)
IEEE Transactions on Image Processing, 2026 (CAS Zone 1 Top journal, IF: 13.7)
We propose SuperCL, a superpixel-guided contrastive learning framework for medical image segmentation pre-training. It exploits the structural prior and pixel correlation of images through two novel contrastive pair generation strategies: Intra-image Local Contrastive Pairs (ILCP) Generation and Inter-image Global Contrastive Pairs (IGCP) Generation.

Shuang Zeng, Chee Hong Lee, Kaiwen Li, Boxu Xie, Ourui Fu, Hangzhou He, Lei Zhu#, Yanye Lu#, Fangxiao Cheng# (# corresponding author)
Expert Systems With Applications, 2025 (CAS Zone 1 Top journal, IF: 7.5)
We design a novel Channel-Coupled Vessel Consistency Loss to enforce coherence and consistency among vessel, artery, and vein predictions, together with a regularization term, an intra-image pixel-level contrastive loss, to extract more discriminative, fine-grained feature-level representations for accurate retinal A/V classification.

Shuang Zeng*, Chee Hong Lee*, Micky C. Nnamdi, Wenqi Shi, J. Ben Tamo, Hangzhou He, Xinliang Zhang, Qian Chen, May D. Wang, Lei Zhu#, Yanye Lu#, Qiushi Ren# (* equal contribution, # corresponding author)
Image and Vision Computing, 2025
We propose a new retinal vessel segmentation model, named AttUKAN, which selectively filters skip-connection features, along with a Label-guided Pixel-wise Contrastive Loss (LPCL) that extracts more discriminative features by distinguishing foreground vessel-pixel sample pairs from background sample pairs.