Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2 
DTPI, 2021
Sicheng Yang, Yaping Dai, Simin Li, Kaixin Zhao. (2021). "An Automatic Analysis and Evaluation System Used for Teaching Quality in MOOC Environment." DTPI, 38-41, doi: 10.1109/DTPI52967.2021.9540117.
Interspeech, 2022
Sicheng Yang*, Methawee Tantrawenith*, Haolin Zhuang*, Zhiyong Wu, Aolan Sun, Jianzong Wang, Ning Cheng, Huaizhen Tang, Xintao Zhao, Jie Wang, Helen Meng. (2022). "Speech Representation Disentanglement with Adversarial Mutual Information Learning for One-shot Voice Conversion." Interspeech, 2553-2557, doi: 10.21437/Interspeech.2022-571.
ICMI, 2022
Sicheng Yang, Zhiyong Wu, Minglei Li, Mengchen Zhao, Jiuxin Lin, Liyang Chen, Weihong Bao. (2022). "The ReprGesture entry to the GENEA Challenge 2022." ICMI, 758–763, doi: 10.1145/3536221.3558066.
Laboratory Science, 2023
Xuanyi Gao, Sicheng Yang, Yongxin Liu. (2023). "Innovative Thinking Training in Light Control Circuit Open Experiment." Laboratory Science, 26(01):107-110, CNKI: SUN:YSKT.0.2023-01-025.
ICASSP, 2023
Haolin Zhuang, Shun Lei, Long Xiao, Weiqin Li, Liyang Chen, Sicheng Yang, Zhiyong Wu, Shiyin Kang, Helen Meng. (2023). "GTN-Bailando: Genre Consistent Long-Term 3D Dance Generation based on Pre-trained Genre Token Network." ICASSP, 1-5, doi: 10.1109/ICASSP49357.2023.10095203.
ICASSP, 2023
Weihong Bao, Liyang Chen, Chaoyong Zhou, Sicheng Yang, Zhiyong Wu. (2023). "Wavsyncswap: End-To-End Portrait-Customized Audio-Driven Talking Face Generation." ICASSP, 1-5, doi: 10.1109/ICASSP49357.2023.10094807.
CVPR (Highlight), 2023
Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Haolin Zhuang. (2023). "QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation." CVPR, 2321-2330, doi: 10.1109/CVPR52729.2023.00230.
IJCAI (The 4th Outstanding Science and Technology Academic Papers of Shenzhen Association for Science and Technology, 2024), 2023
Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Ming Cheng, Long Xiao. (2023). "DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models." IJCAI, 5860–5868, doi: 10.24963/ijcai.2023/650.
ACM MM (Oral), 2023
Sicheng Yang*, Zilin Wang*, Zhiyong Wu, Minglei Li, Zhensong Zhang, Qiaochu Huang, Lei Hao, Songcen Xu, Xiaofei Wu, Changpeng Yang, Zonghong Dai. (2023). "UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons." ACM MM, 1033–1044, doi: 10.1145/3581783.3612503.
ICMI (Reproducibility Award), 2023
Sicheng Yang*, Haiwei Xue*, Zhensong Zhang, Minglei Li, Zhiyong Wu, Xiaofei Wu, Songcen Xu, Zonghong Dai. (2023). "The DiffuseStyleGesture+ entry to the GENEA Challenge 2023." ICMI, 779–785, doi: 10.1145/3577190.3616114.
AAAI, 2024
Zunnan Xu, Yachao Zhang, Sicheng Yang, Ronghui Li, Xiu Li. (2024). "Chain of Generation: Multi-Modal Gesture Synthesis via Cascaded Conditional Control." AAAI.
ICASSP, 2024
Sicheng Yang, Zunnan Xu, Haiwei Xue, Yongkang Cheng, Shaoli Huang, Mingming Gong, Zhiyong Wu. (2024). "Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness." ICASSP.
ICASSP, 2024
Haiwei Xue, Sicheng Yang, Zhensong Zhang, Zhiyong Wu, Minglei Li, Zonghong Dai, Helen Meng. (2024). "Conversational Co-Speech Gesture Generation via Modeling Dialog Intention, Emotion and Context with Diffusion Models." ICASSP.
CVPR, 2024
Xu He, Qiaochu Huang, Zhensong Zhang, Zhiwei Lin, Zhiyong Wu, Sicheng Yang, Minglei Li, Zhiyi Chen, Songcen Xu, Xiaofei Wu. (2024). "Co-Speech Gesture Video Generation via Motion-Decoupled Diffusion Model." CVPR.
NeurIPS, 2024
Zunnan Xu, Yukang Lin, Haonan Han, Sicheng Yang, Ronghui Li, Yachao Zhang, Xiu Li. (2024). "MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models." NeurIPS.
arXiv, 2025
Boxuan Zhu, Sicheng Yang, Zhuo Wang, Haining Liang, Junxiao Shen. (2025). "Duo Streamers: A Streaming Gesture Recognition Framework." arXiv preprint.
arXiv, 2025
Likun Zhang, Sicheng Yang, Zhuo Wang, Haining Liang, Junxiao Shen. (2025). "AutoMR: A Universal Time Series Motion Recognition Pipeline." arXiv preprint.
arXiv, 2025
Rajmund Nagy, Hendric Voss, Thanh Hoang-Minh, Mihail Tsakov, Teodor Nikolov, Zeyi Zhang, Tenglong Ao, Sicheng Yang, Shaoli Huang, Yongkang Cheng, M Hamza Mughal, Rishabh Dabral, Kiran Chhatre, Christian Theobalt, Libin Liu, Stefan Kopp, Rachel McDonnell, Michael Neff, Taras Kucherenko, Youngwoo Yoon, Gustav Eje Henter. (2025). "Towards Reliable Human Evaluations in Gesture Generation: Insights from a Community-Driven State-of-the-Art Benchmark." arXiv preprint.
arXiv, 2025
Fengyi Fang, Sicheng Yang, Wenming Yang. (2025). "CoordSpeaker: Exploiting Gesture Captioning for Coordinated Caption-Empowered Co-Speech Gesture Generation." arXiv preprint.
AAAI, 2026
Sicheng Yang, Yukai Huang, Weitong Cai, Shitong Sun, You He, Jiankang Deng, Hang Zhang, Jifei Song, Zhensong Zhang. (2026). "Plug-and-Play Clarifier: A Zero-Shot Multimodal Framework for Egocentric Intent Disambiguation." AAAI.
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.