"Explainability" in AI Isn’t Just a Technical Problem, It’s a Product Problem
What Every PM Needs to Know About Explainable AI Before Shipping...
Imagine this: you’ve just launched an AI agent that’s performing brilliantly. It makes fast, accurate decisions, and the metrics look good. Then a customer, stakeholder, or regulator asks the one question that changes the mood instantly:
“Why did the system make that decision?”
And suddenly, there’s no clear answer.
That’s where explainability comes in.
Explainability (sometimes called explainable AI) is about making AI’s decision-making transparent and understandable. On the surface, it sounds straightforward. But the deeper you dig into how it works, when it’s needed, and what trade-offs it brings, the more complex it becomes.
At its core, explainability is about trust and usability. Without it, users hesitate to adopt AI-driven features, debugging turns into blind guessing, regulators won’t let things slide, and it becomes hard to justify decisions to stakeholders or customers. For a PM, that’s a tough place to be.
A Human Analogy
Think of it this way: when a human expert makes a decision, they can usually explain their reasoning. A doctor might say, “I recommended this treatment because the patient showed symptoms A, B, and C, which usually indicate condition X.”
By contrast, many AI systems, especially deep learning models, are like brilliant consultants who give excellent advice but, when asked why, simply say: “trust me.” As a client, that answer would quickly make you suspicious.
That gap between “I can explain” and “just trust me” is where explainability lives.
Because we can’t peer directly into a complex AI model’s inner workings, explainability relies on tools that act like investigators. Think of these methods as a kind of “Detective Toolkit”: they run experiments, test hypotheses, and produce clues about why the AI made a decision.
A few of the common methods:
LIME (Local Interpretable Model-agnostic Explanations) works by making many small tweaks to a single input, watching how the prediction changes, and fitting a simple local model that explains that one decision. Example: if a recommendation engine suggests Product A, LIME might reveal it’s mostly because the customer is aged 25–30 and previously bought electronics.
SHAP (Shapley Additive Explanations) assigns “credit” to each input, like calculating how much each player contributed to a team’s win. In a loan approval system, SHAP might show that purchase history pushed the decision strongly toward approval, while income nudged it slightly the other way.
Permutation Importance asks: what happens if the AI can’t see one piece of information? If hiding purchase history drops accuracy from 85% to 60%, then purchase history clearly mattered.
Different tools give different perspectives; it is like having multiple witnesses describing the same event.
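To make that concrete, here is a minimal sketch of how a data science team might run one of these tools. It assumes scikit-learn and an entirely invented customer dataset; the permutation-importance call shuffles each feature in turn and reports how much accuracy drops without it.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Invented customer data: who ends up buying Product A?
rng = np.random.default_rng(0)
n = 1_000
customers = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "income": rng.normal(50_000, 15_000, n),
    "purchase_history": rng.random(n),  # prior electronics spend, normalised 0-1
})
bought_product_a = (customers["purchase_history"] + rng.normal(0, 0.2, n) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    customers, bought_product_a, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, drop in sorted(
    zip(customers.columns, result.importances_mean), key=lambda kv: -kv[1]
):
    print(f"{name}: average accuracy drop {drop:.3f}")

# LIME and SHAP offer per-decision views through similar calls, e.g.
# lime.lime_tabular.LimeTabularExplainer(...) or shap.TreeExplainer(model).
```

The output is a ranked list of features by how much the model leans on them, which is often the first clue a team gets about why a “black box” behaves the way it does.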
Choosing how to approach explainability is rarely just a technical call. It’s usually a joint decision across teams:
Engineering and Data Science focus on what’s technically possible.
Compliance and Legal ensure regulatory requirements are met.
UX considers how explanations are shown to users.
Product balances user needs, business requirements, and trade-offs.
PMs often find themselves bridging these viewpoints: translating business priorities into technical asks, advocating for user clarity, and helping teams decide what to prioritise when not everything is possible.
The Trade-Offs
The key trade-off you’ll often face is that the most accurate AI models (like large neural networks) tend to be the least explainable, while simpler, more explainable models might sacrifice some accuracy. And this is where things get interesting and challenging.
Take loan approvals, for instance: a decision tree that approves or rejects is easy to explain step by step, but it may miss subtle patterns. A neural network with millions of parameters can detect complex interactions (say, customers who shop on weekends, use mobile apps, and have certain spending patterns), but the reasoning is so layered that no human can follow it.
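For illustration, here is a hedged sketch of that contrast on invented loan data: a shallow decision tree whose rules can be printed and read line by line, next to a small neural network that can only report a score.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented loan data with an interaction effect (weekend + mobile shoppers).
rng = np.random.default_rng(42)
n = 2_000
loans = pd.DataFrame({
    "income": rng.normal(55_000, 20_000, n),
    "weekend_shopper": rng.integers(0, 2, n),
    "mobile_app_user": rng.integers(0, 2, n),
    "monthly_spend": rng.normal(1_500, 600, n),
})
approved = (
    (loans["income"] > 40_000)
    & ((loans["weekend_shopper"] * loans["mobile_app_user"] == 1)
       | (loans["monthly_spend"] < 2_000))
).astype(int)

X_train, X_test, y_train, y_test = train_test_split(loans, approved, random_state=0)

# The shallow tree can print its reasoning verbatim; the network cannot.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
net = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
).fit(X_train, y_train)

print(export_text(tree, feature_names=list(loans.columns)))  # human-readable rules
print(f"tree accuracy: {tree.score(X_test, y_test):.2f}")
print(f"neural net accuracy: {net.score(X_test, y_test):.2f}  (no rule printout available)")
```

The point isn’t which model scores higher on this toy data; it’s that only one of them can hand you its reasoning in a form a compliance officer or customer could read.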
That’s not so different from how we make decisions ourselves. Simple choices are easy to explain: “I chose the red shirt because it matches my pants.” But complex ones? We fall back on intuition: “I just had a good feeling about this candidate.” Behind that gut feeling is a mash-up of signals (body language, tone, past experiences, micro-expressions) that our brain integrates but can’t fully articulate.
Complex AI models work in a similar way. They sift through massive amounts of data, detect patterns too subtle for us to see, and produce outcomes that are accurate but not easily explained in human terms. In both cases, the reasoning is buried in layers of learned intuition.
For PMs, understanding this parallel helps you set realistic expectations. Don’t assume every AI system can be made transparent. Instead, decide what level of explanation makes sense for the product and its users.
Questions That Help Frame the Conversation
Rather than asking “do we need explainability?”, it’s often more useful to ask:
Which decisions are critical enough that users or regulators will demand explanations?
What’s the cost of a wrong decision versus the cost of slower, more transparent systems?
Who actually needs these explanations? Is it end users, customer service reps, or regulators?
How much detail is useful without overwhelming people?
What’s the trade-off between accuracy, speed, and clarity?
These questions don’t have universal answers, but they help teams align on what’s important in their context.
A Practical Example
Consider an e-commerce recommendation system.
A vague requirement like “we need explainability” isn’t very helpful. But reframing it can make the impact clearer:
Problem: Users don’t trust recommendations, leading to lower conversions.
Hypothesis: If users understand why products are recommended, they’ll be more likely to purchase.
Requirement: Show the top 3 reasons for each recommendation, with an option to adjust preferences.
Success Metric: Increase click-through rate by 15%.
Trade-off: Accept a 2% accuracy loss in exchange for better user trust.
That’s explainability made practical.
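To make the “top 3 reasons” requirement concrete, here is a minimal sketch of how the surfaced explanation might be assembled. The feature names and contribution scores are invented for illustration; in practice they could come from a tool like SHAP, and the wording would be shaped with UX.

```python
import numpy as np

# Invented per-feature contributions to one recommendation's match score.
feature_names = [
    "bought electronics recently",
    "aged 25-30",
    "weekend shopper",
    "price sensitivity",
]
contributions = np.array([0.42, 0.31, 0.08, -0.12])

# Pick the three largest contributions by absolute size and phrase them simply.
top3 = np.argsort(-np.abs(contributions))[:3]
print("Recommended because:")
for i in top3:
    direction = "raised" if contributions[i] > 0 else "lowered"
    print(f"  - {feature_names[i]} {direction} the match score by {abs(contributions[i]):.2f}")
```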
Explainability isn’t about making every model fully transparent. It’s about finding the right level of clarity for the right audience at the right time.
Sometimes that’s a detailed technical explanation for compliance officers. Sometimes it’s a confidence score for end users. And sometimes it’s acknowledging that “gut-feeling” accuracy is more valuable than perfect traceability.
For PMs, the challenge is to frame the trade-offs, define success in context, and guide teams toward solutions that balance user needs, business priorities, and technical constraints.
Because in the end, the future of AI products won’t just be measured by how powerful the models are but by whether people can understand, trust, and integrate them into their decisions and workflows.