Shipping an Agent: Lessons from LangChain’s Own Deployment
Hosted by Jason Liu and Anika Somaia
Go deeper with a course
Featured in Lenny’s List
Systematically Improving RAG Applications

Jason Liu
Staff machine learning engineer, currently working as an AI consultant
83 students
What you'll learn
Evaluate multi-turn LLM agents in production
Learn methods to test complex reasoning chains and prevent regressions before deployment
Debug common agent failure modes effectively
Identify and resolve memory, context, and prompt drift issues using production tooling
Build feedback loops for AI system improvement
Connect evaluation metrics, logs, and user data to drive iterative product development
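The regression-prevention idea above can be sketched in a few lines. This is a minimal illustration, not course material: `run_agent`, the eval cases, and the keyword checks are all hypothetical stand-ins for a real deployed agent and a real graded dataset.

```python
# Minimal sketch of a keyword-based regression check for an agent's
# multi-turn answers. `run_agent` is a hypothetical stand-in for a call
# to your deployed agent; the cases and expected phrases are illustrative.

def run_agent(turns):
    """Hypothetical agent: returns a canned final answer for the conversation."""
    # In a real system this would call your deployed agent endpoint.
    return "Our refund window is 30 days from the date of purchase."

EVAL_CASES = [
    {
        "turns": ["What is your refund policy?", "How long do I have?"],
        "must_contain": ["30 days"],  # fact the final answer must preserve
    },
]

def evaluate(cases):
    """Return the fraction of cases whose answer contains every expected phrase."""
    passed = 0
    for case in cases:
        answer = run_agent(case["turns"])
        if all(phrase in answer for phrase in case["must_contain"]):
            passed += 1
    return passed / len(cases)

score = evaluate(EVAL_CASES)
assert score == 1.0, f"Regression: eval score dropped to {score:.2f}"
```

Running a check like this in CI before each deploy is one simple way to catch regressions in known-good behaviors before users see them.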
Why this topic matters
Deploying LLM agents in production is fundamentally different from building demos. Students will learn battle-tested practices for evaluation, debugging, and iteration that separate successful AI products from failed experiments. Understanding these real-world deployment challenges prepares you to build reliable AI systems that users trust and companies depend on.
You'll learn from
Jason Liu
Consultant at the intersection of Information Retrieval and AI
Jason has built search and recommendation systems for the past six years. Over the last year he has consulted for and advised dozens of startups on improving their RAG systems. He is the creator of the Instructor Python library.
Anika Somaia
Software Engineer, LangChain