Every pitch deck now mentions AI. Every product roadmap includes “AI-powered features.” Every founding team claims they’re “leveraging machine learning to transform” their industry.
The problem? Most of it is nonsense.
AI washing—the practice of exaggerating or fabricating AI capabilities—has become the new greenwashing. And it’s costing investors millions in overvalued acquisitions and failed portfolio companies.
At Osparna, we’ve evaluated hundreds of companies claiming AI sophistication. The gap between marketing claims and technical reality has never been wider. Here’s how we separate genuine AI innovation from elaborate PowerPoint fiction.
The Technical Criteria That Actually Matter
Real AI companies demonstrate specific technical characteristics that can’t be faked in a demo.
Data infrastructure reveals truth. Companies building legitimate ML systems have robust data pipelines, not hastily assembled datasets. Ask about their data collection methodology, labeling processes, and validation frameworks. If they’re using off-the-shelf datasets with minimal customization, they’re not doing AI—they’re doing integration.
We look for companies that can articulate their data strategy as clearly as their business model. How do they handle data drift? What’s their approach to model retraining? How do they validate predictions against ground truth? Vague answers here indicate surface-level AI implementation.
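One way to make the data-drift question concrete: ask whether the team monitors anything like a population stability index (PSI) between the training distribution and live traffic. The sketch below is a minimal, self-contained illustration — the bin count, smoothing constant, and synthetic data are our assumptions, not any particular vendor's pipeline.

```python
import math
import random

def psi(reference, live, bins=10):
    """Population Stability Index between two samples of one feature.

    Bins are derived from the reference sample's range; bin probabilities
    are lightly smoothed so an empty bin never produces log(0).
    """
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins or 1.0
    def histogram(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[max(idx, 0)] += 1
        total = len(sample)
        return [(c + 1e-6) / (total + bins * 1e-6) for c in counts]
    p, q = histogram(reference), histogram(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]    # training-time feature
stable = [random.gauss(0, 1) for _ in range(5000)]   # live traffic, no drift
shifted = [random.gauss(0.8, 1) for _ in range(5000)]  # live traffic, drifted

print(round(psi(train, stable), 3))   # near zero: distributions agree
print(round(psi(train, shifted), 3))  # much larger: the feature has moved
```

A team doing real ML operations can usually show something like this running on a schedule, with a threshold that triggers retraining; a team doing AI theater has never measured it.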
Model architecture matters more than buzzwords. Genuine AI companies can explain why they chose specific architectures for specific problems. They don’t just say “we use transformers” or “our neural networks are deep learning.” They explain trade-offs between model complexity and inference speed, discuss their approach to hyperparameter optimization, and acknowledge limitations of their chosen approach.
Red flag: Teams that can’t explain their models in technical detail but speak fluently about AI’s transformative potential. Real builders focus on engineering constraints, not just possibilities.
Deployment infrastructure indicates maturity. How are they serving predictions? What’s their latency tolerance? How do they handle model versioning and rollback? Companies with production-grade AI have answers to these questions that reveal deep technical consideration. Companies with AI theater have answers that sound like they were copied from a blog post.
The difference shows up in specifics. “We use Kubernetes for model serving with custom autoscaling based on prediction latency” versus “We have a scalable cloud infrastructure” tells you everything about whether AI is core to their product or bolted on for fundraising.
Common AI Washing Techniques and Detection Methods
Technique 1: Rules-based systems marketed as AI. If-then logic wrapped in AI terminology. The tell: Ask about their training data. If they don’t have training data, they don’t have machine learning—they have traditional software with marketing.
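The distinction is easy to see in code. A rules engine ships decision logic written entirely by hand; a learned model, however small, derives its parameters from training data. A toy contrast (the risk rule and the numbers are illustrative, not any vendor's system):

```python
# Hand-written rule: every threshold here was typed in by a developer.
def rule_based_risk(income, debt):
    if debt / income > 0.4:
        return "high"
    return "low"

# Learned model: the single parameter w comes from data, not a developer.
def fit_slope(xs, ys):
    """Closed-form least-squares slope for y ≈ w * x (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

training_x = [1.0, 2.0, 3.0, 4.0]
training_y = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x
w = fit_slope(training_x, training_y)

print(rule_based_risk(50_000, 30_000))  # pure if-then logic, no training step
print(round(w, 2))                      # parameter estimated from the data
```

The probing question follows directly: a system like `rule_based_risk` has no training data because it has nothing to train, and no amount of terminology changes that.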
Technique 2: Third-party API dependency disguised as proprietary technology. They’re calling OpenAI’s API through their wrapper and claiming it as innovation. The tell: Ask about model training costs and compute infrastructure. If they’re spending more on cloud APIs than on compute for training, they’re building middleware, not AI.
Technique 3: Human-in-the-loop systems misrepresented as autonomous. Their “AI” is actually offshore workers making decisions. The tell: Test edge cases during demos. AI systems have consistent failure modes. Human workers have creative failures that reveal themselves under stress testing.
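One way to operationalize that stress test during a demo: replay the identical input several times and watch for variance. A deterministic model at fixed settings returns the same output with tight latency; hidden human labelers drift on both. A sketch against a hypothetical `predict` callable standing in for the vendor's endpoint:

```python
import statistics
import time

def consistency_probe(predict, payload, trials=5):
    """Send the same payload repeatedly; report output agreement and latency spread."""
    outputs, latencies = [], []
    for _ in range(trials):
        start = time.perf_counter()
        outputs.append(predict(payload))
        latencies.append(time.perf_counter() - start)
    return {
        "identical_outputs": len(set(outputs)) == 1,
        "latency_stdev": statistics.stdev(latencies),
    }

# Stand-in for a deterministic model endpoint (hypothetical).
def fake_model(payload):
    return "approve" if len(payload) % 2 == 0 else "deny"

report = consistency_probe(fake_model, "loan-application-7742")
print(report["identical_outputs"])  # True for a deterministic system
```

Divergent outputs on identical inputs, or latencies that swing from seconds to minutes, are hard to explain with a model and easy to explain with a person.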
Technique 4: Statistical models rebranded as machine learning. Linear regression doesn’t become AI because you call it “predictive analytics powered by machine learning.” The tell: Ask about model complexity. If they can’t describe layers, attention mechanisms, or ensemble methods, they’re doing statistics—which is fine, but isn’t AI.
Technique 5: Future capabilities presented as current features. The roadmap is marketed as the product. The tell: Request API access or model performance metrics from production. If everything is “in beta” or “coming soon,” the AI exists primarily in marketing materials.
The Osparna Evaluation Framework
Our due diligence process for AI companies follows three principles:
First, we assume deception until proven otherwise. Not because founders are malicious, but because AI terminology has become so diluted that even well-intentioned teams conflate basic automation with artificial intelligence.
Second, we evaluate AI claims like we evaluate code quality—through direct examination. We request technical architecture documentation, review training pipelines, and assess model performance on holdout data. If they can’t provide these, we assume the AI is aspirational.
Third, we assess whether AI is core to the business model or window dressing. Can this company exist without ML? If yes, then AI is a feature, not a foundation. That’s not necessarily bad—but it changes valuation dramatically.
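The second principle — direct examination on holdout data — can be as simple as scoring the vendor's predictions against labels they never saw. A minimal sketch with hypothetical fraud labels:

```python
def holdout_accuracy(predictions, labels):
    """Fraction of holdout examples the model got right."""
    if len(predictions) != len(labels):
        raise ValueError("prediction/label length mismatch")
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Ground-truth labels withheld from the vendor, scored against what
# their system returned for the same examples.
labels      = ["fraud", "ok", "ok", "fraud", "ok", "ok", "ok", "fraud"]
predictions = ["fraud", "ok", "fraud", "fraud", "ok", "ok", "ok", "ok"]
print(holdout_accuracy(predictions, labels))  # 0.75
```

The point is not the metric's sophistication. It's whether the company can survive an evaluation it doesn't control; a team with real models agrees to this in minutes.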
What Real AI Innovation Looks Like
Companies genuinely innovating in AI demonstrate measurable improvements over non-AI baselines, articulate clear paths from research to production, and acknowledge limitations openly. They discuss failure modes, edge cases, and known problems with the same fluency they discuss capabilities.
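"Measurable improvement over a non-AI baseline" has a concrete meaning: the model must beat the dumbest sensible predictor on the same holdout set. A sketch of that comparison on toy labels — the majority-class baseline used here is one common choice, not the only one:

```python
from collections import Counter

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def majority_baseline(train_labels, n):
    """Predict the most common training label for every holdout example."""
    most_common = Counter(train_labels).most_common(1)[0][0]
    return [most_common] * n

train_labels   = ["ok"] * 90 + ["fraud"] * 10   # imbalanced toy data
holdout_labels = ["ok"] * 9 + ["fraud"]
model_preds    = ["ok"] * 9 + ["fraud"]         # hypothetical model output

base = accuracy(majority_baseline(train_labels, len(holdout_labels)), holdout_labels)
lift = accuracy(model_preds, holdout_labels) - base
print(base, round(lift, 2))  # baseline accuracy, then the model's lift over it
```

On imbalanced problems the naive baseline is often already 90%+ accurate, which is exactly why a headline accuracy number without a baseline comparison tells you nothing.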
They invest heavily in data infrastructure because they understand that models are only as good as their training data. They have ML engineers who can discuss loss functions, optimization challenges, and the unglamorous work of debugging training runs.
Most importantly, they can explain why AI is necessary for their solution. Not just that it’s cool or trending, but why alternative approaches fail and how ML specifically addresses those failures.
The Cost of Getting This Wrong
Overvaluing AI capabilities leads to overpaying for acquisitions, misallocating R&D budgets, and building strategic plans on technical foundations that don’t exist. The correction comes eventually—usually when promised capabilities fail to materialize post-investment.
In 2025, AI due diligence isn’t optional. It’s the difference between backing genuine innovation and funding expensive theater. The companies building real AI systems welcome technical scrutiny. The ones doing AI washing deflect, obfuscate, and redirect conversations to market opportunity instead of technical capability.
At Osparna, we’re builders who became investors specifically to bridge this gap. We know what real AI infrastructure looks like because we’ve built it. We know the difference between legitimate technical challenges and excuses for underperformance.
If you’re evaluating an AI company and can’t get clear technical answers, you’re not looking at an AI company. You’re looking at a marketing problem disguised as a technology opportunity.
And that’s a bet we’ll never make.