Lightning Lessons

Context Tuning for LLMs

Hosted by Amir Feizpour, Jack Lu, and Dr. Mengye Ren

328 students
Go deeper with a course

Build Multi-Agent Applications - A Bootcamp - LangGraph, Cursor, n8n
Amir Feizpour, PhD

What you'll learn

Smarter Prompt Initialization

Context Tuning initializes prompts with real task demonstrations rather than random tokens, tapping directly into LLMs’ in-context learning ability.

Boosted Few-Shot Performance

By leveraging task-specific context, models adapt faster and achieve higher accuracy across diverse benchmarks.

Efficiency Without Fine-Tuning

Competitive results are achieved without updating model weights, making training lighter and more resource-efficient (a minimal code sketch of the idea follows below).
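
To make the core idea concrete, here is a minimal sketch of context tuning in PyTorch with Hugging Face transformers. Everything here is an illustrative assumption rather than the lesson's exact method: GPT-2 stands in for the LLM, the demonstration text and hyperparameters are made up, and training is reduced to a single step. The mechanic is what matters: the tunable prompt is initialized from the embeddings of real demonstrations, the model weights stay frozen, and only the prompt embeddings receive gradient updates.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; any causal LM exposes a similar interface (assumption).
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()                 # disable dropout in the frozen backbone
model.requires_grad_(False)  # freeze every model weight

# Initialize the tunable prompt from REAL task demonstrations, not from
# random vectors. The demos below are hypothetical examples for a toy
# sentiment task.
demos = ("Review: great film. Sentiment: positive\n"
         "Review: dull plot. Sentiment: negative\n")
demo_ids = tok(demos, return_tensors="pt").input_ids
with torch.no_grad():
    init = model.get_input_embeddings()(demo_ids).clone()
prompt = torch.nn.Parameter(init)  # the only trainable tensor

optimizer = torch.optim.Adam([prompt], lr=1e-3)

def step(text: str) -> float:
    # One gradient step on the prompt embeddings for one training example.
    ids = tok(text, return_tensors="pt").input_ids
    embeds = model.get_input_embeddings()(ids)
    inputs = torch.cat([prompt, embeds], dim=1)
    # Compute loss only on the example tokens; -100 masks out the
    # positions occupied by the tunable prompt.
    pad = torch.full(prompt.shape[:2], -100, dtype=torch.long)
    labels = torch.cat([pad, ids], dim=1)
    loss = model(inputs_embeds=inputs, labels=labels).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(step("Review: a charming surprise. Sentiment: positive"))

Because only the prompt embeddings are optimized, the trainable parameter count is tiny compared with full fine-tuning, which is what makes this style of adaptation practical in resource-limited settings.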

Why this topic matters

Context Tuning shows that LLMs can be adapted more effectively without expensive fine-tuning. By grounding prompts in real examples, it combines efficiency with strong performance, offering a practical path to deploy LLMs in dynamic, resource-limited settings where adaptability matters most.

You'll learn from

Amir Feizpour

Founder @ Aggregate Intellect

Amir Feizpour is the founder, CEO, and Chief Scientist at Aggregate Intellect, where he is building a generative business brain for service- and science-based companies. He has built and grown a global community of 5,000+ AI practitioners and researchers gathered around topics in AI research, engineering, product development, and responsibility. Previously, Amir was an NLP Product Lead at the Royal Bank of Canada and held a research position at the University of Oxford, where his experiments on quantum computing resulted in high-profile publications and patents. He holds a PhD in Physics from the University of Toronto. Amir also serves the AI ecosystem as an advisor at MaRS Discovery District, works with several startups as a fractional chief AI officer, and engages with a wide range of community audiences, from business executives to hands-on developers, through training and educational programs. He leads Aggregate Intellect’s R&D through several academic collaborations.

Jack Lu

PhD Student @ Agentic Learning AI Lab (NYU)

I’m a second-year Computer Science Ph.D. student in the CILVR lab at NYU Courant. My research is supported by the NSERC PGS-D Scholarship. Before joining NYU, I received my Bachelor’s degree in Computer Science and Mathematics from the University of Waterloo.

I’m currently interested in efficiently adapting foundation models to out-of-distribution tasks, new knowledge, user preferences, and more. To achieve these goals, I build on methods from test-time training, in-context learning, and diffusion guidance.

During my undergraduate studies, I did a mixture of research and software engineering work at NVIDIA, Waabi/Uber-ATG, IBM, DarwinAI, and Deep Trekker. I was fortunate to work with Prof. Raquel Urtasun, Prof. Sanja Fidler, and Prof. Alexander Wong.

Dr. Mengye Ren

Head @ Agentic Learning AI Lab (NYU)

Mengye Ren is an assistant professor of computer science and data science at New York University (NYU), where he runs the Agentic Learning AI Lab. Before joining NYU, he was a visiting faculty researcher at Google Brain Toronto. From 2017 to 2021, he was a senior research scientist at Uber Advanced Technologies Group (ATG) and Waabi, working on self-driving vehicles. He received a Ph.D. in Computer Science from the University of Toronto. His research focuses on making machine learning more natural and human-like, so that AIs can continually learn, adapt, and reason in naturalistic environments.

Watch the recording for free