Jiayu (Mila) Wang
Contact: milawang [at] cs [dot] wisc [dot] edu

I am Jiayu (pronunciation: “Jee-ah-yü Wahng”), a PhD student in Computer Sciences at UW-Madison. I am fortunate to be advised by Prof. Aws Albarghouthi and Prof. Fred Sala (Sprocket Lab).
I am passionate about building efficient and intelligent agentic systems. My recent work focuses on:
- Data- and compute-efficient Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) (e.g., cost-effective adaptation by augmenting model routing with an expanded pool of adaptation strategies)
- Logical and mathematical reasoning for LLMs and VLMs (e.g., dissecting reasoning under RL, grammar-aligned decoding, spatial reasoning)
I’m always happy to discuss research, answer questions, or just chat! Feel free to reach out through my socials.
Outside of research, I love playing tennis🎾 and try to get on the court as often as I can—usually 4–5 times a week.
news
- Jun 5, 2025: 🚀 SPARKLE preprint is now live on arXiv! Reinforcement learning has driven impressive gains in LLM reasoning, but what exactly does RL improve? SPARKLE answers this question with a fine-grained evaluation framework that dissects reasoning into plan following, problem decomposition, and knowledge use. The results are surprising: explicit plans can actually hurt on the hardest problems, yet RL-tuned models remain far more robust and flexible in handling them. We also find that RL clearly enhances knowledge integration. And we push back on a common myth: hard problems can be useful for RL, even when they seem unrewarding. SPARKLE shows how to turn those tough cases into real training signal.
- Apr 30, 2025: 🚀 COSMOS preprint is now available on arXiv! With training-time and test-time adaptation strategies for LLMs exploding in number, figuring out the best one can feel like a wild goose chase. COSMOS makes it easy: it predicts performance and cost accurately and efficiently, so you don’t have to burn GPU hours testing every option. Smarter choices, fewer experiments.
- Apr 23, 2025: I passed my qualifying exam!
- Dec 9, 2024: Attended NeurIPS 2024 in Vancouver and presented two papers.
- Sep 25, 2024: Two first/co-first-authored papers were accepted to NeurIPS 2024! 🎉🎉
publications (*equal contribution)
2025
- Beyond Accuracy: Dissecting Mathematical Reasoning for LLMs Under Reinforcement Learning. arXiv preprint, Jun 2025
2024
- Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models. NeurIPS 2024, Jun 2024