Zechu (Steven) Li
I received my master's degree from TU Darmstadt (Oct. 2023 to May 2024), advised by Prof. Georgia Chalvatzaki.
My research interests lie in reinforcement learning, especially its applications (e.g., robotics, finance, and transportation) and high-performance, scalable learning systems.
Prior to this, I was a research assistant at MIT CSAIL, advised by Prof. Pulkit Agrawal,
where I conducted research on massively parallel simulation and sim-to-real in robotics.
I received my bachelor's degree from Columbia University in May 2022, majoring in computer science.
During my undergraduate studies, I was fortunate to work with Prof. Xiaodong Wang, Prof. Anwar Walid, and Prof. Sharon (Xuan) Di.
Email / Google Scholar / GitHub / LinkedIn
Selected Publications [* Equal contribution]
Reconciling Reality through Simulation: A Real-to-Sim-to-Real Approach for Robust Manipulation
Marcel Torne,
Anthony Simeonov,
Zechu Li,
April Chan,
Tao Chen,
Abhishek Gupta*,
Pulkit Agrawal*
Robotics: Science and Systems (RSS), 2024
paper /
website /
code
A system for robustifying real-world imitation learning policies via reinforcement learning in "digital twin" simulation environments constructed on the fly from small amounts of real-world data.
Parallel Q-Learning: Scaling Off-policy Reinforcement Learning under Massively Parallel Simulation
Zechu Li*,
Tao Chen*,
Zhang-Wei Hong,
Anurag Ajay,
Pulkit Agrawal
ICML, 2023
paper /
code
A parallel Q-learning framework that scales off-policy reinforcement learning to more than 10,000 parallel environments.
Homomorphic Matrix Completion
Xiao-Yang Liu*,
Zechu Li*,
Xiaodong Wang
NeurIPS, 2022
paper
A homomorphic matrix completion algorithm that satisfies the differential privacy property and improves on the best-known error bound, achieving exact recovery at the cost of more samples.
FinRL: Financial Reinforcement Learning
project page /
code
The first open-source framework to demonstrate the potential of reinforcement learning in finance.
ElegantRL “小雅”: Massively Parallel Library for Cloud-native Deep Reinforcement Learning
project page /
code
A massively parallel library for cloud-native deep reinforcement learning (DRL) applications.
As a lead of this project, I have contributed by
- developing a series of large-scale training frameworks,
- implementing SOTA algorithms and techniques,
- building the documentation website.
Since Mar. 2021, I have also written tutorial blogs for the community:
- ElegantRL: Much More Stable Deep Reinforcement Learning Algorithms than Stable-Baseline3,
MLearning.ai, Mar. 3, 2022.
- ElegantRL-Podracer: A Scalable and Elastic Library for Cloud-Native Deep Reinforcement Learning,
Towards Data Science, Dec. 11, 2021.
- ElegantRL: Mastering PPO Algorithms,
Towards Data Science, May 3, 2021.
- ElegantRL Demo: Stock Trading Using DDPG (Part II),
MLearning.ai, Apr. 19, 2021.
- ElegantRL Demo: Stock Trading Using DDPG (Part I),
MLearning.ai, Mar. 28, 2021.
- ElegantRL-Helloworld: A Lightweight and Stable Deep Reinforcement Learning Library,
Towards Data Science, Mar. 4, 2021.
High-performance Tensor Decompositions for Compressing and Accelerating Deep Neural Networks
Xiao-Yang Liu,
Yiming Fang,
Liuqing Yang,
Zechu Li,
Anwar Walid
Tensors for Data Processing, Elsevier, 2021
chapter /
book
This chapter takes a practical approach to achieving a better efficiency-accuracy trade-off, using high-performance tensor decompositions to compress and accelerate deep neural networks by exploiting low-rank structure in the network weight matrices.