Jialong Li

Dr. Jialong Li

Address
Max-Planck-Institut für Informatik
Saarland Informatics Campus
Campus E1 4
66123 Saarbrücken
Location
E1 4 - 518
Phone
+49 681 9325 3535
Fax
+49 681 9325 5719

Personal Information

I am a postdoctoral researcher at the Max Planck Institute for Informatics (MPI-INF). My research interests include optical networks, optical communications, and computer networks. Before joining MPI-INF, I received my B.E. and Ph.D. degrees in Electronic Engineering from Tsinghua University in 2016 and 2021, respectively.

For more information, please see my personal website: https://franklee94.github.io/.

Publications

2024
De Marchi, F., Bai, W., Li, J., & Xia, Y. (2024). Rethinking Transport Protocols for Reconfigurable Data Centers: An Empirical Study. In HotOptics ’24, 1st SIGCOMM Workshop on Hot Topics in Optical Technologies and Applications in Networking. Sydney, Australia: ACM. doi:10.1145/3672201.367412
De Marchi, F., Li, J., Bai, W., & Xia, Y. (2024). POSTER: Opportunistic Credit-Based Transport for Reconfigurable Data Center Networks with Tidal. In ACM SIGCOMM Posters and Demos ’24. Sydney, Australia: ACM. doi:10.1145/3672202.3673714
Li, J., Gong, H., De Marchi, F., Gong, A., Lei, Y., Bai, W., & Xia, Y. (2024). Uniform-Cost Multi-Path Routing for Reconfigurable Data Center Networks. In ACM SIGCOMM ’24. Sydney, Australia: ACM. doi:10.1145/3651890.3672245
Lei, Y., De Marchi, F., Joshi, R., Li, J., Chandrasekaran, B., & Xia, Y. (2024). DEMO: An Open Research Framework for Optical Data Center Networks. In ACM SIGCOMM Posters and Demos ’24. Sydney, Australia: ACM. doi:10.1145/3672202.3673712
Lei, Y., Li, J., Liu, Z., Joshi, R., & Xia, Y. (2024). Nanosecond Precision Time Synchronization for Optical Data Center Networks. Retrieved from https://arxiv.org/abs/2410.17012
(arXiv: 2410.17012)
Abstract
Optical data center networks (DCNs) are renovating the infrastructure design for the cloud in the post Moore's law era. The fact that optical DCNs rely on optical circuits of microsecond-scale durations makes nanosecond-precision time synchronization essential for the correct functioning of routing on the network fabric. However, current studies on optical DCNs neglect the fundamental need for accurate time synchronization. In this paper, we bridge the gap by developing Nanosecond Optical Synchronization (NOS), the first nanosecond-precision synchronization solution for optical DCNs general to various optical hardware. NOS builds clock propagation trees on top of the dynamically reconfigured circuits in optical DCNs, allowing switches to seek better sync parents throughout time. It predicts drifts in the tree-building process, which enables minimization of sync errors. We also tailor today's sync protocols to the needs of optical DCNs, including reducing the number of sync messages to fit into short circuit durations and correcting timestamp errors for higher sync accuracy. Our implementation on programmable switches shows 28ns sync accuracy in a 192-ToR setting.
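
For illustration only, the short Python sketch below shows the kind of two-way timestamp exchange and linear drift extrapolation that nanosecond-scale synchronization protocols build on; the function names and structure are hypothetical and do not reproduce the NOS implementation.

    # Illustrative sketch only (not from the NOS paper): a classic two-way
    # timestamp exchange plus a simple linear drift estimate used to
    # extrapolate the clock offset between sync rounds.

    def estimate_offset_and_delay(t1, t2, t3, t4):
        """Child sends at t1, parent receives at t2, parent replies at t3,
        child receives at t4 (nanoseconds on the respective local clocks)."""
        offset = ((t2 - t1) - (t4 - t3)) / 2.0  # child clock minus parent clock
        delay = ((t2 - t1) + (t4 - t3)) / 2.0   # estimated one-way path delay
        return offset, delay

    def estimate_drift(samples):
        """Estimate a drift rate from (local_time, offset) samples, so a switch
        can extrapolate its offset while no circuit to its sync parent is up."""
        if len(samples) < 2:
            return 0.0
        (t_a, o_a), (t_b, o_b) = samples[0], samples[-1]
        return (o_b - o_a) / (t_b - t_a) if t_b != t_a else 0.0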
Li, J., Tripathi, S., Rastogi, L., Lei, Y., Pan, R., & Xia, Y. (2024). Optimizing Mixture-of-Experts Inference Time Combining Model Deployment and Communication Scheduling. Retrieved from https://arxiv.org/abs/2410.17043
(arXiv: 2410.17043)
Abstract
As machine learning models scale in size and complexity, their computational requirements become a significant barrier. Mixture-of-Experts (MoE) models alleviate this issue by selectively activating relevant experts. Despite this, MoE models are hindered by high communication overhead from all-to-all operations, low GPU utilization due to the synchronous communication constraint, and complications from heterogeneous GPU environments.
This paper presents Aurora, which optimizes both model deployment and all-to-all communication scheduling to address these challenges in MoE inference. Aurora achieves minimal communication times by strategically ordering token transmissions in all-to-all communications. It improves GPU utilization by colocating experts from different models on the same device, avoiding the limitations of synchronous all-to-all communication. We analyze Aurora's optimization strategies theoretically across four common GPU cluster settings: exclusive vs. colocated models on GPUs, and homogeneous vs. heterogeneous GPUs. Aurora provides optimal solutions for three cases, and for the remaining NP-hard scenario, it offers a polynomial-time sub-optimal solution with only a 1.07x degradation from the optimal.
Aurora is the first approach to minimize MoE inference time via optimal model deployment and communication scheduling across various scenarios. Evaluations demonstrate that Aurora significantly accelerates inference, achieving speedups of up to 2.38x in homogeneous clusters and 3.54x in heterogeneous environments. Moreover, Aurora enhances GPU utilization by up to 1.5x compared to existing methods.
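
As a purely illustrative sketch (not Aurora's actual algorithm), the Python snippet below conveys the flavor of the underlying scheduling problem: assigning the largest expert-to-expert transfers first so that no single link dominates the all-to-all completion time. All names and the greedy heuristic itself are hypothetical.

    # Illustrative only: a longest-first greedy assignment of transfers to links,
    # balancing per-link load so the all-to-all makespan stays small.
    import heapq

    def schedule_transfers(transfer_sizes, num_links):
        """transfer_sizes maps a transfer id to its size in bytes; returns the
        link assignment and the resulting makespan (busiest-link load)."""
        loads = [(0, link) for link in range(num_links)]
        heapq.heapify(loads)
        assignment = {}
        for tid, size in sorted(transfer_sizes.items(), key=lambda kv: -kv[1]):
            load, link = heapq.heappop(loads)       # currently least-loaded link
            assignment[tid] = link
            heapq.heappush(loads, (load + size, link))
        makespan = max(load for load, _ in loads)
        return assignment, makespan

    # Example (hypothetical sizes):
    # schedule_transfers({"gpu0->gpu3": 40, "gpu1->gpu2": 25, "gpu0->gpu2": 10}, num_links=2)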
2022
Li, J., Zhu, K., Hua, N., Zhao, C., Li, Y., Zheng, X., & Zhou, B. (2022). Joint Optimization of Multidimensional Resources Allocation in Cloud Networking. In The 7th Optoelectronics Global Conference (OGC 2022). Shenzhen, China: IEEE. doi:10.1109/OGC55558.2022.10050986
Li, J., Lei, Y., De Marchi, F., Joshi, R., Chandrasekaran, B., & Xia, Y. (2022). Hop-On Hop-Off Routing: A Fast Tour across the Optical Data Center Network for Latency-Sensitive Flow. In 6th Asia-Pacific Workshop on Networking (APNet 2022). Fuzhou, China: ACM. doi:10.1145/3542637.3542647
Pan, R., Lei, Y., Li, J., Xie, Z., Yuan, B., & Xia, Y. (2022). Efficient Flow Scheduling in Distributed Deep Learning Training with Echelon Formation. In HotNets ’22, 21st ACM Workshop on Hot Topics in Networks. Austin, TX, USA: ACM. doi:10.1145/3563766.3564096

Research Interests

  • Optical networks
  • Optical communications
  • Computer networks
  • Data center networks

Reviewing Activity & Workshop/Conference Positions

Conferences

Reviewer, The 11th International Conference on Wireless Communications and Signal Processing (WCSP 2019)

Journals

Reviewer, IEEE Communications Letters
Reviewer, IEEE/OSA Journal of Optical Communications and Networking

Teaching

At MPI-INF / Saarland University:

  • Winter 2022: Advanced Topics on Data Networks; Co-lecturer
  • Summer 2021: Data Networks; Co-lecturer
  • Winter 2021: Advanced Topics on Data Networks; Co-lecturer

At Tsinghua University:

  • Fall 2019, Teaching Assistant, Introduction to Information Science and Technology
  • Fall 2018, Teaching Assistant, Introduction to Information Science and Technology

Recent Positions

November 2021 - present:
Postdoctoral Researcher at Max Planck Institute for Informatics

Education

August 2016 - October 2021:
Ph.D. in Electronic Engineering, Tsinghua University

 

August 2012 - July 2016:
B.S. in Electronic Engineering, Tsinghua University