Principal ML Engineer @ Disney, ex-Meta

Build a production-ready Recommender System Blueprint for a media surface of your choice. This will be an artifact you can take back to work to guide implementation, align stakeholders, and make smarter roadmap calls.
Most teams working on recommender systems don’t struggle because they lack algorithms. They struggle because they lack a practical, end-to-end playbook for shipping.
You might know the building blocks (embeddings, retrieval, ranking, bandits), but still feel stuck on the hard parts:
which metrics actually reflect what product wants
how to architect the system from logging to serving
what model to choose under real constraints
how to handle cold start and sparse data
and how to prove ML is worth it over heuristics
I’ve built recommendation systems in production across news, feeds, and sports over the last 10 years (NYT, Meta, ESPN). This course turns that experience into a clear, reusable process you can apply to your own product without needing FAANG-scale infrastructure.
By the end, you’ll be able to confidently design and evaluate production-ready recommenders, defend trade-offs, and use your Blueprint as a durable reference for future iterations.
Learn to design and evaluate a production-ready media recommender: metrics, architecture, cold start, and drafting an effective roadmap.
Understand the full lifecycle of a production recommender
Learn the systems view behind any recommender decision
Be able to explain, critique, and improve recommender designs across different domains
Translate vague goals into a metrics spec
Learn the most common RecSys metric traps and how to defend against them
Make roadmap decisions that are grounded in measurable outcomes
Design real product recommender architectures
Decide which fallbacks keep the system reliable
Discover where LLMs actually add value and how to evaluate cost/latency trade-offs
Build a cold-start strategy for new users, new items, and new surfaces
Learn techniques that help users/items “graduate” out of cold start
Validate impact with the smallest credible test
Gain confidence pushing back on “just add another heuristic” and defending ML-driven solutions
Learn to make roadmap calls based on concrete trade-offs

Principal ML Engineer @ Disney, ex-Meta, ex-NYT
Senior ML Engineers & Data Scientists who can build models but want sharper judgment on what to build next and how to defend it
Software Engineers transitioning into ML who want to think in systems + measurement, not just training code
Tech Leads who need an end-to-end playbook to review designs, align metrics with product, and avoid expensive detours
You are able to interpret (not compute) common product metrics like retention/engagement and discuss tradeoffs.
You can follow an architecture diagram and understand concepts like services/APIs, latency, and reliability constraints.
We will be working with specs, diagrams, and decision frameworks (not code).

Live sessions
Learn directly from Katerina Zanos in a real-time, interactive format.
Lifetime access to 5 self-paced lessons with practical exercises
Absorb core concepts on a schedule that works for you. Walk into live sessions ready to dive deeper, apply lessons, share progress, and workshop challenges.
Lifetime membership to a community of ambitious builders
Stay accountable and share insights with like-minded builders throughout the course and beyond, in a community led by Katerina on Slack.
Show & Tell sessions focused on problem solving & brainstorms
Spend high-ROI live time with Katerina and your peers, presenting how you solved the weekly assignment for your use case and discussing critical concepts and theories in RecSys.
Direct 1:1 async access to Katerina for Q&A
Throughout the 5-week course, message Katerina at any time with questions specific to your own problem sets and goals.
Certificate of completion
Share your new skills and signal that you've committed to being the best in your craft.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund through the second week of the course.
9 live sessions • 25 lessons • 5 projects
Feb 25 · Feb 27 · Mar 4 · Mar 6
Learn how to go from a vague objective to a concrete metrics stack.
Understand how your metrics spec drives core engineering choices in recommendation systems.
Practice using a metrics spec as a shared language with PMs and leadership when building a recommendation system.
Live sessions
2 hrs / week
Wed, Feb 25
4:30 PM – 6:00 PM (UTC)
Fri, Feb 27
5:00 PM – 6:00 PM (UTC)
Wed, Mar 4
4:30 PM – 6:00 PM (UTC)
Assignments
1-2 hrs / week
Use a real use case from your work for the assignments, or, if you don’t have one ready, use the provided media case study instead.
$1,300 USD