The 6% Rule: What AI High Performers Do Differently
McKinsey's State of AI 2025 shows only 6% of organizations are achieving meaningful business value from AI.
The Gap Between Adoption and Value
Despite the constant AI noise, very few companies have moved past surface-level implementation—the chatbot on the corporate website, the CV screening tool, the "AI-powered" label slapped on existing products. According to McKinsey's State of AI 2025, 88% of organizations are using AI in at least one business function, but only 6% report significant business impact: EBIT contribution above 5% and measurable business value.
The other 94% are stuck in an endless loop: pilots that never scale, experiments without conclusion, ROI projections that never materialize. This isn't a technology problem—it's a strategy problem.
When you analyze what separates the 6% from everyone else, six patterns emerge. None of them are about better models or bigger budgets. All of them are about how organizations think about AI.
1. They Transform Instead of Optimize
What McKinsey found: High performers are over three times more likely to use AI for transformative business change rather than incremental efficiency gains.
What the 94% do: They approach AI as a cost-cutting tool. "We want to save on customer support." "Can you build us a chatbot?" "Let's automate some emails."
The problem: When you optimize a broken process, you get a faster broken process. AI amplifies whatever you feed it—including dysfunction.
What actually works:
Transformation means questioning the process itself. Not "how do we answer tickets faster?" but "why are customers submitting tickets?" Not "automate data entry" but "eliminate the need for data entry."
At Kenaz, we build AI as infrastructure—integrated with project management, metrics, knowledge bases, and real workflows. Our Atlas platform offers orchestration, unlimited context management, and multi-model routing. That's transformation-level architecture.
We charge for discovery because understanding your actual problem is work—and it's the work that matters most. If you automate chaos, you'll get faster chaos.
2. They Redesign Workflows From Scratch
What McKinsey found: High performers are nearly three times more likely to fundamentally redesign workflows rather than overlay AI on existing processes. This is one of the strongest predictors of AI success.
What the 94% do: They take a process designed in 1995 (or 1965), add an AI layer on top, and wonder why nothing changes.
The problem: Most enterprise workflows were designed for a world without AI. They assume that humans are the bottleneck, that information flows via email, and that decisions happen in meetings. Overlaying AI on these workflows is like strapping a jet engine to a horse cart.
What actually works:
Start from the outcome, not the existing process. Map how work actually flows (not what the org chart claims). Identify where AI can replace entire steps, not just accelerate them.
Our discovery phase goes deep—we talk to the people actually doing the work, not just management. We document reality, not aspirations. Then we design workflows that are native to AI capabilities: parallel processing, instant context retrieval, 24/7 availability.
This approach requires admitting the current way isn't working, which can be uncomfortable. But a €50,000 AI investment on a fundamentally broken workflow returns nothing, while the same investment on a redesigned workflow can return 10x.
3. They Scale Agents Across Functions
What McKinsey found: 62% of organizations are experimenting with AI agents, but only 23% are scaling them. High performers are at least three times more likely to deploy agents across multiple business functions.
What the 94% do: They run a pilot in one department, declare victory (or defeat), and never expand.
The problem: A pilot that doesn't scale is just an expensive lesson. Agents that only work in one function miss the compounding effects—when your sales agent can simultaneously query your product database, check inventory, and draft contracts, that's exponentially more valuable than three separate pilots.
What actually works:
Build for production from day one—not demos, but systems. Proper error handling, monitoring, SOLID architecture, integration points designed for expansion.
We've built over 15 MCP servers in production, for ourselves and clients. Our Faceless Agent orchestrates other agents, running 24/7 without human oversight. Atlas provides the platform layer for multi-agent coordination. When we build something, it's designed to connect to everything else.
Pilots that don't scale are wasted money. We build production-ready systems from the start because we know you'll want to expand. The marginal cost of adding a new function to a well-architected system is minimal; the cost of retrofitting a pilot into production is enormous.
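To make the "minimal marginal cost" point concrete, here is a small sketch of the design idea: a platform where shared concerns (error handling, unknown-tool checks) are built once, so adding a new business function is a single registration. The names and structure are illustrative assumptions, not the actual Atlas or MCP API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentPlatform:
    """Toy multi-function agent platform.

    Shared infrastructure (dispatch, error handling) is written once;
    each new business function plugs into it instead of being a new pilot.
    """
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., str]) -> None:
        # Expansion is one line at the call site -- the marginal
        # cost of adding a function stays minimal.
        self.tools[name] = fn

    def run(self, name: str, **kwargs) -> str:
        if name not in self.tools:
            return f"error: unknown tool '{name}'"
        try:
            return self.tools[name](**kwargs)
        except Exception as exc:  # one tool failing never crashes the platform
            return f"error: {name} failed ({exc})"

platform = AgentPlatform()
platform.register("check_inventory", lambda sku: f"{sku}: 12 units in stock")
platform.register("draft_contract", lambda client: f"Draft for {client} ready")
```

Retrofitting a demo means rebuilding this shared layer after the fact, which is where the "enormous cost" shows up.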
4. Their Leaders Actually Use AI
What McKinsey found: High performers are three times more likely to have senior leaders who demonstrate ownership and engagement—and critically, who personally model AI usage.
What the 94% do: They delegate AI to IT. Leadership sees it as a technology project, not a business transformation.
The problem: If the CEO doesn't use AI, why would anyone else take it seriously? Culture flows from the top. When leadership treats AI as someone else's problem, teams learn that AI adoption is optional—or worse, career-risky.
What actually works:
Start at the top. Literally. Our Tier 1 engagement includes personal AI setup for founders and executives—not a demo, but a configuration they use every day. When a CEO experiences AI answering their specific questions, drafting their documents, managing their workflows, the conversation changes entirely.
Weekly demos to leadership maintain visibility. They see progress, ask questions, make decisions. AI becomes a business conversation, not a tech report.
We've seen this pattern repeatedly: companies where leadership uses AI daily transform faster than those with bigger budgets but disengaged executives. The founder's assistant using Claude is worth more than a €500,000 enterprise contract that nobody champions.
5. They Invest Enough to Succeed
What McKinsey found: More than a third of high performers allocate over 20% of their digital budget to AI. Approximately three-quarters of them have reached the scaling phase.
What the 94% do: They test the waters with minimal commitment. "Let's try something for €3,000 and see what happens." "Can you build this in a week?"
The problem: Underfunded AI projects fail—not because AI doesn't work, but because there's not enough runway to iterate, learn, and adapt. A €5,000 budget might get you a demo, but not a system that survives contact with reality.
What actually works:
Our minimum engagement is €5,000, not because we're expensive, but because we've learned that less isn't enough to deliver real results. Tier 2 at €50,000 delivers a fully automated workflow in 6-8 weeks. That's real transformation, not a PowerPoint about transformation.
That said, if a simple script solves your problem better than AI, we'll tell you. We're ROI-focused, not AI-obsessed. Sometimes the answer is "you don't need us."
Cheap AI experimentation usually becomes an expensive lesson. We don't take projects where the budget doesn't allow for a real outcome—that saves your money and our time.
6. They Know When Humans Should Decide
What McKinsey found: High performers have defined processes for determining when and how model outputs require human validation. This is one of the top factors distinguishing successful AI implementations.
What the 94% do: They oscillate between extremes—either reviewing everything manually (losing all time savings) or trusting AI outputs blindly (accumulating errors and liability).
The problem: Both extremes fail. Over-validation means you've just added an expensive layer without gaining efficiency. Under-validation means you're building technical debt and legal risk with every automated decision.
What actually works:
Clear architecture: what runs autonomously, what needs review, what requires approval. We build Telegram and Slack integrations where agents flag decisions that require human judgment. The system knows its own limits.
Compliance automation with human-in-the-loop where regulations require it—not as an afterthought, but as core architecture.
AI without proper human oversight becomes a liability. AI with excessive oversight is wasted potential. We calibrate the balance for your risk tolerance, regulatory environment, and team capacity. There's no universal answer—just the right answer for your specific context.
The Path Forward
McKinsey's data is clear: 94% of organizations are stuck. Not because AI doesn't work, but because they're approaching it wrong.
The 6% who succeed share six patterns:
- Transform, don't optimize
- Redesign workflows from scratch
- Scale agents across functions
- Engage leadership personally
- Invest enough to succeed
- Build intelligent human-AI boundaries
None of these require cutting-edge technology. All of them require changing how you think about AI.
At Kenaz, we don't sell AI strategy in PowerPoint—we build systems that move companies into the 6%. We start with discovery to understand where it actually hurts. Then we build what actually works.
The question isn't whether you should use AI. It's whether you want to be in the 6% or the 94%.
Data source: McKinsey State of AI 2025
