AI isn't magic. It's math.
You're not "falling behind" if you haven't built an AI feature yet.
You're falling behind if you're still throwing random prompts at GPT and calling it innovation.
Look, AI is having its SaaS moment, again. But this time, it's wearing a cool new jacket labeled "LLM." And if you're a product manager trying to figure out how to use this surge of machine-powered momentum to actually improve your onboarding, guidance, or product experience, you're in the right place.
I sat down with Eleanor Manley, AI Lead at Hibernia Labs, TEDx speaker, and data science OG (before it was trendy), to talk about what PMs get wrong about AI, and how to get it right.
Spoiler: it's less about building models and more about understanding where you sit on the AI map.
"AI" isn't one thing. And ChatGPT is just the tip of the iceberg.
Here's how Eleanor breaks it down:
Artificial intelligence is the umbrella. Under that, you've got machine learning. And under that, you've got things like deep learning and natural language processing. Generative AI, which is what most people are talking about right now, is just one branch.
Translation: If you're building onboarding flows or nudges and thinking "should we be using AI?" the answer is... maybe. But not like that.
Instead of building toward the buzzword, work backward from the problem.
Are your users dropping out mid-way through setup? Is your product tour converting like a PowerPoint deck from 2004?
Then AI isn't the strategy; it's just one tool in the box.
And like any good tool, it needs a well-scoped job.
Stop trying to "build an LLM." Use the Buy-Borrow-Build cheat sheet.
If you're thinking of hiring ML engineers or "rolling your own model," pause. Here's Eleanor's sanity-saving framework:
1. Buy: Use a plug-and-play product
Think: off-the-shelf onboarding tools with smart defaults, or analytics platforms with built-in AI tags.
2. Borrow: API your way to MVP
Think: OpenAI, Claude, or Perplexity. Cheap-ish, powerful, and easy to experiment with (see the sketch after this section).
3. Build: Only if you're a bank, defense contractor, or tech giant
Think: fine-tuning an open-source model like LLaMA with your own data, hosting it, maintaining it, explaining it to security... and good luck with that.
Only 1% of engineers are actually engineering models from scratch. Everyone else is layering on top.
So unless your compliance team makes you encrypt Slack messages with a paper shredder, you're probably in the Borrow phase. And that's not only okay; it's smart.
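To make the Borrow phase concrete, here's a minimal sketch of what "API your way to MVP" can look like, assuming the official OpenAI Python SDK and an API key in your environment. The model name, prompt, and onboarding copy are just illustrative placeholders, not a recommendation.

```python
# A minimal "Borrow" sketch: one hosted-API call, nothing to host or fine-tune.
# Assumes OPENAI_API_KEY is set; model name, prompt, and copy are illustrative.
from openai import OpenAI

client = OpenAI()

onboarding_step = "Connect your data source to start seeing live dashboards."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any hosted chat model would do here
    messages=[
        {
            "role": "system",
            "content": "You rewrite onboarding copy to be short, friendly, and action-oriented.",
        },
        {
            "role": "user",
            "content": f"Rewrite this onboarding step in under 15 words: {onboarding_step}",
        },
    ],
)

print(response.choices[0].message.content)
```

Swap Claude or another hosted model behind the same kind of call and the shape of the work barely changes; that's the whole point of borrowing.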
The model doesn't matter. Your data does.
Let's clear something up: GPT-4, Claude 3, Gemini Ultra... the lines between them are getting blurry.
"People ask, 'What model should I use?' But it's the wrong question," Eleanor told us. "You'll go further by investing in better, more relevant data."
This hit home for us at Chameleon. We used AI to auto-tag thousands of in-app experiences created by our customers. That data now powers our onboarding suggestions and guides new users toward best-practice flows.
Could we have swapped models? Sure. But clean, structured data was the unlock, not the model itself.
Quick win: If you've got a pile of messy onboarding flows, start by tagging them. Even a simple "use case" tag helps you analyze what's working, and where AI suggestions could actually help.
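If you want to try that quick win without building anything, here's a rough sketch of one way to auto-tag flows with a hosted LLM. The tag list, flow descriptions, and prompt are illustrative assumptions, not a prescribed taxonomy.

```python
# Rough sketch: auto-tag onboarding flows with a "use case" label via a hosted LLM.
# Assumes OPENAI_API_KEY is set; the tag list and flow descriptions are made up.
from openai import OpenAI

client = OpenAI()

USE_CASE_TAGS = ["activation", "feature announcement", "upsell", "survey", "education"]

flows = [
    "A four-step tour introducing the dashboard to new admins",
    "A tooltip nudging users to upgrade when they hit their seat limit",
]

def tag_flow(description: str) -> str:
    """Ask the model to pick exactly one tag from the allowed list."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the onboarding flow. Reply with exactly one tag from: "
                    + ", ".join(USE_CASE_TAGS)
                ),
            },
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content.strip().lower()

for flow in flows:
    print(f"{tag_flow(flow)} <- {flow}")
```

Once every flow has even a rough tag, you can start asking which use cases convert and which stall, and that's where the data starts paying off.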
Hosting your own LLM? Unless you've got a server room... don't.
Sure, you can host your own model. You can buy GPUs. You can download LLaMA 3 and load it onto your own infra. You can also try to churn your own butter.
"Most organizations are going to stick with cloud," Eleanor said. "It's secure, flexible, and way easier to scale."
There are edge cases (highly regulated industries, ultra-sensitive data), but if you're shipping SaaS at scale, cloud APIs are still the move.
And unless you enjoy waking up at 3 a.m. because your self-hosted fine-tuning script ate all your RAM... stay in the cloud lane.
Should you hire an ML engineer?
Maybe. But ask yourself these three questions first:
Do we have proprietary data we can use?
Is the problem urgent enough to solve with AI?
Could this be solved faster with smart no-code tools and APIs?
If the answer to the first two is yes and the third is no, hiring might make sense.
If not? Contract it. Or better yet, partner with a studio that lives and breathes AI workflows.
"The PM's role becomes even more critical here," Eleanor said. "Your job isn't to code the model; it's to make sure what's built actually solves the problem."
Yes. That. Frame the problem. Scope the dataset. Keep the engineers honest.
Try this next week
Want to dip your toes in without writing a single line of model code? Try one of these:
Auto-tag your onboarding flows using OpenAI and product usage data.
Experiment with prompt-powered tooltips that adapt based on user activity.
Use Claude or ChatGPT to summarize NPS feedback and group it into feature categories (see the sketch after this list).
Run a "no-code LLM hack day" with your team. Set a timer. Use only public APIs. See who ships what.
Or just plug Chameleon into your data stack and start making your onboarding smarter without a single fine-tuned layer.
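As a starting point for the NPS idea above, here's a hedged sketch using the OpenAI API. The sample comments and category names are made up, and swapping in Claude via Anthropic's SDK would look much the same.

```python
# Sketch: summarize NPS verbatims and group them into feature categories.
# Assumes OPENAI_API_KEY is set; the comments and category names are made up.
from openai import OpenAI

client = OpenAI()

nps_comments = [
    "Setup was easy but I can't find the export button.",
    "Love the dashboards, wish the mobile app loaded faster.",
    "Support took two days to get back to me.",
]

categories = ["onboarding", "reporting", "performance", "support", "other"]

prompt = (
    "Group each piece of feedback under one of these categories: "
    + ", ".join(categories)
    + ". Then write a one-sentence summary per category.\n\n"
    + "\n".join(f"- {comment}" for comment in nps_comments)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```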
Resources that aren't hype
Eleanor's shortlist for staying sharp without drowning in whitepapers:
DeepLearning.AI: Great free training
Emergent Mind: AI research without the jargon
Hugging Face: The playground for open-source models
Kaggle: Learn by doing, with a global data community
TL;DR: You don't need "an AI strategy." You need better questions.
If you're in product, here's the real checklist:
✅ Are you clear on the problem you're solving?
✅ Do you have unique data to power smarter experiences?
✅ Are you leveraging tools that help, not distract?
✅ Are you resisting the urge to bolt "AI" onto your landing page?
If you can check all four, congrats. You're thinking like a product leader, not a hype follower.
AI isn't going away. But neither is onboarding. And if you want to guide users better, personalize experiences, and boost activation, then smart product decisions will always beat smart models.
Craving the full convo? Watch the Product Lunchbox episode with Eleanor Manley to go deeper into AI strategy, model myths, and what product teams actually need to know.