Cheng Luo

I am an independent researcher focusing on LLM optimization. Before that, I was a researcher at Microsoft Research. I also collaborate with Anima Anandkumar at Caltech and Beidi Chen at CMU. Soon, I will join Caltech as a postdoctoral researcher.

I am interested in bridging hardware constraints with the principles of learning in neural networks. I focus on developing hardware-efficient learning algorithms that are principled and scalable for large-scale training, such as training large language models (LLMs). Check out my research for more details.

news

Jul 2024 MsT is released. Try it out 🙌
Mar 2024 RTP is released. Try it out 🙌
Google Scholar GitHub