Yijia Dai

CS PhD @ Cornell. A clear conscience. (问心无愧)



Learning is key to any type of intelligence. I love thinking about how humans learn, and whether that transfers to machines. Questions like "How does the sequential ordering of knowledge affect human learning? What about datasets for machines?" intrigue me.

My research interests include developing self-improvement mechanisms for large models, interpreting machine intelligence, and creating AI systems that enhance people’s daily lives.

I’m fortunate to be working with Prof. Sarah Dean, Prof. Thorsten Joachims, and Prof. Jennifer Sun at Cornell. Currently, I study reinforcement learning, human-in-the-loop dynamical systems, and LLMs for scientific discovery. Examples of what I work on include sample-efficient low-rank representation learning for dynamical and unobserved user states, optimal learning of logistic bandits for recommender systems, and LLMs for understanding decision trajectories.

Publications

2025

  1. Pre-trained Large Language Models Learn Hidden Markov Models In-context
    Yijia Dai, Zhaolin Gao, Yahya Sattar, and 2 more authors
    2025

2024

  1. End-to-end Training for Recommendation with Language-based User Profiles
    Zhaolin Gao, Joyce Zhou, Yijia Dai, and 1 more author
    2024

2023

  1. Representation Learning in Low-rank Slate-based Recommender Systems
    Yijia Dai and Wen Sun
    2023