Jia Wei

Currently traveling in Edinburgh
Office details coming soon
Position: Postdoctoral Fellow at Tsinghua University with Prof. Xuehai Qian.
Address: Ziqiang Building, Tsinghua University, Beijing, China
Contacts: weijia4473 AT gmail.com
Jia Wei will be available on the job market in April 2027.
Jia Wei received his Ph.D. in December 2024 from the School of Computer Science and Technology, Xi’an Jiaotong University, under the supervision of Prof. Xingjun Zhang (Dean). He has worked closely with IEEE Fellow Witold Pedrycz as a visiting scholar at the University of Alberta, Canada.

His current research focuses on optimizing the training and inference performance of frontier large models (e.g., DeepSeek-R1 and Kimi v1.5), with an emphasis on machine learning systems, deep learning, graph neural networks, and data transfer optimization. An overview of his research topics, honors, and awards is available on this homepage; his academic contributions are listed in the “Publications” section, and his open-source work can be browsed in the “Projects and Resources Library”.
Jia Wei’s personal homepage is still a work in progress.
news
- Jul 05, 2025: I received funding from Tsinghua University’s Huiyan Fund (a salary-type fund of 300,000 RMB per year for two years; only 10 recipients across the entire university).
- Jan 06, 2025: My paper “Dynamic Fuzzy Sampler for Graph Neural Networks” was accepted by IEEE Transactions on Fuzzy Systems (SCI Q1, Impact Factor 10.7, CCF-B). This is the first work to combine fuzzy systems with graph sampling.
- Dec 24, 2024: I officially received my Ph.D. in Engineering from Xi’an Jiaotong University!
- Jan 15, 2024: My paper Fastensor, accepted by ACM TACO (CCF-A journal), was formally presented at the HiPEAC conference (CCF-B).
selected publications
- [TFS] Dynamic Fuzzy Sampler for Graph Neural Networks. IEEE Transactions on Fuzzy Systems, 2025.
- [TACO] Fastensor: Optimise the tensor I/O path from SSD to GPU for deep learning training. ACM Transactions on Architecture and Code Optimization, 2023.
- [Under Review] Dual-pronged deep learning preprocessing on heterogeneous platforms with CPU, GPU and CSD. arXiv preprint arXiv:2407.00005, 2024.
- [ICDE] How much storage do we need for high performance server. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), 2022.
- [IS] Leader population learning rate schedule. Information Sciences, 2023.