
How to test AI when you don't have any data yet

Hosted by Madalina Turlea and Catalina Turlea

In this video

What you'll learn

How to create test cases when you're starting from zero

Learn how to create test user inputs for your golden dataset to validate your AI features when starting from scratch

What 'good' looks like for your specific use case

Learn how to manually evaluate AI responses to define the expected output for your feature

Build a reusable test dataset (no engineering)

Learn how to run your prompt across multiple models, select the best outputs, and save them as a reusable test dataset

Why this topic matters

You'll learn how to go from zero to a working test dataset. No engineering dependency. No waiting for production. Just you, your understanding of what the feature should do, and a systematic way to validate it.

This is for you if:

- You're building an AI feature but don't have test data yet
- You're stuck waiting for production traces
- You want to test before you ship
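Under the hood, the test dataset you'll build is conceptually just a list of input/expected-output pairs that can be re-run against new prompts or models. A minimal sketch in Python (the field names and example cases are illustrative, not Lovelaice's actual format):

```python
import json

# A "golden dataset" is a list of test cases: an input you expect
# users to send, plus the output you've judged to be correct.
golden_dataset = [
    {
        "input": "Summarize this refund policy in one sentence.",
        "expected_output": "Customers may return items within 30 days for a full refund.",
    },
    {
        "input": "What currencies do you support?",
        "expected_output": "We currently support EUR, USD, and GBP.",
    },
]

# Persist it so the same cases can be re-run as the prompt or model changes.
with open("golden_dataset.json", "w") as f:
    json.dump(golden_dataset, f, indent=2)
```

In practice you don't need to write this yourself; the point is that the artifact is simple and reusable, whether it lives in a spreadsheet, a JSON file, or a tool.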

You'll learn from

Madalina Turlea

Co-founder @Lovelaice, 10+ years in Product

I'm co-founder of Lovelaice and a product leader with 10+ years building products across fintech, payments, and compliance. I hold a CFA charter and have led AI product development in highly regulated environments — where AI failures aren't just embarrassing, they're liabilities.

I've watched smart teams make the same mistakes: choosing models based on benchmarks that don't reflect their use case, writing prompts that work in testing but fail in production, and leaving domain experts out of the loop. These aren't edge cases — they're why 80% of AI projects underperform.

Through these failures (my own included), I developed a systematic approach to AI experimentation that puts domain expertise at the center. I teach what I've learned building Lovelaice: how to test, evaluate, and iterate on AI — before it reaches your users.

Catalina Turlea

Founder @Lovelaice

I bring over 14 years of software development expertise and a decade of startup experience to help teams build AI products that actually work. Since founding my first company six years ago, I've run a consultancy specializing in helping startups build MVPs, solve complex technical challenges, and integrate AI effectively.

I've seen firsthand how AI projects fail due to a lack of systematic experimentation — teams treat AI like traditional software and struggle with inconsistent results. That's why I co-created Lovelaice, a platform designed for non-technical professionals to experiment with AI agents systematically.

Go deeper with a course

Build AI features for confident Product Managers
Madalina Turlea and Catalina Turlea
View syllabus