
Why 95% of AI Projects Fail (And It's Not the Technology)

After working with dozens of teams on AI adoption, the pattern is clear: the technology works fine. The failure isn't technical. It's organisational discipline.

Tim Clark

Co-founder · 27 February 2026 · 5 min read


You could hand the world’s best AI tools to your organisation and you’d still fail 95% of the time.

I know that’s a bold claim. But after working with dozens of teams on AI adoption, I’ve watched the pattern repeat: The technology works fine. The failure isn’t technical.

Rand Group research puts the figure at 87% of AI projects never making it to production. Gartner estimates that through 2025, at least 30% of generative AI projects will be abandoned after proof of concept. Whether the number is 87% or 95%, the root cause is the same: it’s almost never the technology.

The Real Problem

Here’s what actually happens:

A leader reads about ChatGPT or Claude and thinks, “We need this.” A team gets excited. Someone sets up an account. They run a few experiments. And then… silence. The project lives in one person’s head. There’s no measurement of what worked. No training for the wider team. No governance framework. No documentation of what was built. When that person leaves or moves to something else, the whole thing collapses.

Or they implement something bigger: a workflow automation or document processing system. It works brilliantly for three weeks. Then nobody maintains it. The prompts drift. The outputs degrade. No one notices because there’s no monitoring. Three months later, someone asks, “Whatever happened to that AI project?” and nobody remembers.

The pattern is consistent: Technology success + organisational chaos = project failure.

Why This Happens

It’s not stupidity. It’s that AI is new enough that most organisations haven’t figured out the discipline required to make it sustainable.

When you implement a new CRM system, you have processes. A CRM vendor gives you implementation guides. Your team gets trained. There’s a change management plan. There’s governance about who can access what. There are quarterly reviews of adoption metrics.

With AI, we’re still in the Wild West. People experiment in isolation. They build something, it works, and then they expect it to just… keep working. Without maintenance, measurement, or documentation.

MIT Sloan Management Review found that 91% of top data managers cited team challenges and change management, not technology limitations, as the main barriers holding them back with AI. McKinsey’s research reinforces this: companies with a formal AI strategy report an 80% success rate in AI adoption, compared to just 37% for those without one.

The constraint isn’t whether ChatGPT can analyse documents. It’s:

  • Does your team have a framework for finding AI opportunities systematically?
  • Once you build something, who owns maintaining it?
  • How do you measure whether it actually delivered value?
  • How do you scale from one team experimenting to the entire organisation moving systematically?
  • What happens when someone leaves and nobody knows how the system works?

These aren’t technology questions. They’re discipline questions.

[Image: Organisational chaos versus structured AI adoption pipeline]

What Actually Works

The organisations that successfully adopt AI do something different:

They treat it like any other critical business capability: discovery, planning, execution, measurement, governance, training.

They don’t ask, “Can we use AI?” They ask, “Where will systematic AI use save the most time, reduce the most risk, or improve the most?” They map opportunities. They prioritise. They run pilots with proper measurement. They scale only what works.

They build documentation as they go. They train their teams. They establish governance that enables fast, safe decision-making rather than blocking it.

They measure ROI clearly. Not to prove AI works, but to understand where it’s creating value and where it’s not.
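To make "measure ROI clearly" concrete, here's a deliberately simple sketch of the kind of calculation I mean. The formula and figures are illustrative, not from any client engagement:

```python
def ai_project_roi(hours_saved_per_month, hourly_rate, monthly_cost):
    """Return monthly ROI as a ratio: value created per dollar spent.

    A deliberately simple model: value = staff time saved at a loaded
    hourly rate; cost = tool licences plus maintenance time, combined
    into one monthly figure.
    """
    value = hours_saved_per_month * hourly_rate
    return (value - monthly_cost) / monthly_cost

# Example: 20 hours saved at $80/hr against $400/month of tooling
# gives an ROI of 3.0 - three dollars returned per dollar spent.
```

A negative result flags a project that's costing more than it saves, which is exactly the kind of thing that goes unnoticed when nobody is monitoring.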

They treat the first person who builds something as a proof point, not the final solution. They systematically spread that knowledge across the team so it survives when people move on.

This takes discipline. But it’s the difference between a single successful AI project and systematic AI adoption that compounds over time. It’s also why we built the AI Native Programme as a structured monthly partnership rather than a one-off engagement. Sustainable AI adoption requires ongoing structure.

How the Platform Helps

The AI Native Platform is built around this exact insight. It’s not trying to replace ChatGPT or Claude (those are tools you’ll use within the platform). It’s infrastructure for the discipline part.

[Image: The AI Native Platform dashboard showing Discovery Canvas, Team Training, Governance, and ROI tracking]

  • Discovery Canvas: systematically find where AI matters most in your business, rather than guessing
  • Roadmap & Prioritisation: rank opportunities by impact and effort so you invest in the right projects first
  • Task Management: keep AI project execution visible across your team
  • ROI Measurement: track actual returns, not assumptions, so you know what’s working
  • Governance Framework: move fast without fear of chaos, with built-in risk assessments and policy guardrails
  • Training Modules: build team capability so knowledge doesn’t walk out the door

Essentially, the platform exists because we realised that the bottleneck for most organisations isn’t the AI tools. It’s the discipline, measurement, and structure around them.

If you want to see what this looks like in practice, the AI Readiness Assessment is a good place to start. It helps you understand how AI-literate your team is, what tools they’ve already used, and how they feel about using AI in your organisation.

What This Means for Your Organisation

If you’re experimenting with AI today, you’ve probably got ChatGPT, Claude, maybe Copilot. Different teams using different tools. No shared framework for what’s working. No measurement of impact. No governance. You’re probably part of the 95%.

The path forward isn’t more tools. It’s systematic adoption. Discovery, measurement, training, governance, scaling.

That’s not sexy. But it’s what actually works.

Interested in understanding where AI could actually matter in your organisation? Get in touch. I’m happy to chat through your biggest opportunities and where the real bottlenecks are.

Governance as Confidence (Not Constraint)

Good AI governance doesn't slow you down. It accelerates you. Risk-based frameworks let teams move fast where it's safe and apply controls where it matters.

Tim Clark

Co-founder · 27 February 2026 · 5 min read

When I mention “governance,” most people immediately think: bureaucracy. Slow. Red tape. “We can’t do this without filling out Form 47 and waiting for committee approval.”

That’s governance done wrong.

Good governance doesn’t slow you down. It accelerates you.

Let me explain.

The Governance Problem (Done Wrong)

Bad governance looks like: “No one can use AI without executive sign-off. All AI decisions go through a committee. All prompts must be reviewed. All outputs must be audited.”

This stops experimentation. It kills momentum. It makes your organisation move like a tank through mud.

So teams bypass it. They use ChatGPT in ways you don’t know about. They build AI systems outside your frameworks. They hide things because the official process is too slow.

That’s not safety. That’s the opposite. That’s fragmentation and risk. Deloitte’s research highlights exactly this problem. Unsanctioned AI deployments by individual teams create governance blind spots that are far more dangerous than the risks governance was meant to prevent.

And the numbers back this up: while 75% of organisations have established AI usage policies, only 36% have adopted a formal governance framework, according to the 2025 AI Governance Benchmark Report. Having a policy without a framework means no consistent roles, controls, monitoring, or enforcement. It’s governance theatre.

The Governance Insight

Here’s what I’ve noticed in organisations that are winning:

They don’t have less governance. They have smarter governance.

They ask: What’s actually risky? And what’s not?

Using ChatGPT to brainstorm marketing copy? Low risk. Let the team do it. No governance needed.

Using an AI model to flag fraudulent transactions? High risk. That needs governance. That needs an audit trail, oversight, validation.

Storing customer data in a third-party AI system? High risk. Governance required.

Processing internal meeting notes? Low risk.

The organisations that move fast don’t have no governance. They have risk-based governance. They apply strict controls where it matters and trust teams where it doesn’t.

What Risk-Based Governance Looks Like

Instead of one process for everything, you have levels:

Green (Low Risk): Team uses AI tools, minimal controls needed. Completion timestamp. That’s it.

Yellow (Medium Risk): Team uses AI, but there’s documentation of the prompt, the output is reviewed by a second person, there’s an audit trail. Takes an extra 30 minutes per project.

Red (High Risk): Governance committee review. Full audit trail. Compliance checklist. Formal sign-off.

What determines the level? Risk factors:

  • Is personal data involved?
  • Is this a legal or regulatory decision?
  • Could bad output cause real harm?
  • Are we trusting AI to make a decision, or just to inform one?
  • How transparent is the AI’s reasoning?

Most projects are green or yellow. A few are red.

And here’s the key: Teams know upfront which level they’re in. They’re not surprised halfway through. They know what controls they need and plan accordingly.
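As a rough sketch, that traffic-light idea can be written down as a simple rule set teams apply before starting. The factor names and rules below are illustrative, not the platform's actual logic:

```python
def governance_level(personal_data, regulatory_decision,
                     harm_potential, ai_decides):
    """Map the risk factors above to a traffic-light governance level.

    Illustrative rules only: sensitive factors combined with the AI
    actually making the decision push a project to red; either one
    alone lands in yellow; everything else is green.
    """
    high_risk = personal_data or regulatory_decision or harm_potential
    if high_risk and ai_decides:
        return "red"      # committee review, full audit trail, sign-off
    if high_risk or ai_decides:
        return "yellow"   # documented prompt, second-person review
    return "green"        # minimal controls: a completion timestamp

# Brainstorming marketing copy: no sensitive factors -> green.
# Fraud flagging where the AI makes the call -> red.
```

In practice you'd agree the factors with your governance group up front, so teams can self-classify before they start rather than being surprised halfway through.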

This is why the AI Native Platform includes a built-in governance framework — so your team can classify projects by risk level and apply the right controls without slowing down the low-risk work. Gartner’s research found that organisations with AI governance platforms are 3.4 times more likely to achieve high effectiveness in AI governance than those without.

[Image: Risk-based governance matrix showing low-risk tasks move fast while high-risk decisions get proper oversight]

How This Enables Speed

Let me give you an example.

Tender analysis: Is it high risk?

Some organisations say yes: “We’re using AI to evaluate proposals. That’s a decision that could be wrong. Governance required.”

Other organisations think differently: “The AI reads the tender, extracts information, and makes a recommendation. A human reads the AI output and makes the decision. The human is responsible. The AI is a tool.”

Different risk assessment. Yellow governance, not red.

With yellow governance, you can:

  • Launch projects in 4 weeks, not 12 weeks
  • Scale across teams, not wait for approval
  • Iterate and improve, not freeze specs
  • Trust teams, not create bottlenecks

And you’re still safe. You still have documentation. You still have an audit trail. You still have review. You just don’t have bureaucracy.

The data supports this approach. The Cloud Security Alliance found that organisations with comprehensive AI governance are nearly twice as likely to successfully adopt advanced AI, with 46% adoption versus just 25% for those with only partial guidelines. Good governance doesn’t slow adoption. It enables it.

The Confidence Part

Here’s why this matters for your board and executives:

Bad governance makes it seem like you’re being careful, but you’re creating risk.

Good governance means your leadership can move fast with confidence.

“We evaluated tenders with AI support. The AI recommendation was reviewed by our procurement lead. Here’s the audit trail of the AI analysis. Here’s the decision. Here’s the reasoning.”

That’s auditable. That’s defensible. That’s confidence.

Compare to: “We used ChatGPT to help. I think it worked well. Here’s the outcome.”

That second one is risky. Not because you used AI. Because you can’t explain your process.

This is where structure matters, and it’s one of the reasons the AI Native Programme includes governance setup as part of the monthly partnership. You don’t need to build a framework from scratch. You need one that fits your risk profile and enables your team to move.

What This Means for Your Organisation

If you’re worried about AI risk, good governance is the answer. Not “no AI.” Not “secret AI.” Structured, risk-based governance.

It lets you move confidently. Your team knows what’s expected. Your executives know what’s auditable. Your customers (if relevant) know you’re being responsible.

It’s not bureaucracy. It’s confidence.

And it’s not “no governance vs. full governance.” It’s “smart governance that enables speed where it’s safe and controls where it matters.”

That’s how you actually win with AI.

Keen to talk about what a governance framework might look like for your organisation? Get in touch. It’s usually a shorter conversation than you’d expect.

Using AI Right Now: A Business Guide

A practical guide for New Zealand businesses on which AI tools to use and how to get real business value from them right now.

Tim Clark

Co-founder · 7 July 2025 · 8 min read

The landscape has shifted quite a bit in the last few months - it’s less about finding the “best” model and more about finding the best overall system for what your business needs. The good news is that picking an AI is easier than ever; the real challenge is understanding how to use these increasingly complex tools effectively for business outcomes.


Which AI to Use for Business

For businesses wanting to use AI seriously, you’ve got three excellent choices: Claude (Anthropic), Google’s Gemini, and OpenAI’s ChatGPT. All three give you access to advanced and fast models, voice mode, document handling, code execution, decent mobile apps, and the ability to create images and video (though Claude can’t do images yet). Some features are free, but you’ll generally need to pay around $45-55 NZD/month per user for the full feature set your business needs.

There are other options - Grok if you’re big on X, Microsoft’s Copilot through Windows, or DeepSeek’s free Chinese model - but honestly, just stick with Gemini, Claude, or ChatGPT for most business applications.

Business Impact: Where AI Actually Helps

I spend considerable time helping businesses actually use AI to get things done, and the complexity can be overwhelming. Let me start with the practical business applications that are working well right now.

Deep Research: Your New Business Intelligence Tool

Deep Research is brilliant for producing high-quality reports that genuinely impress professionals I work with - lawyers, accountants, consultants, market researchers. This is where AI can immediately impact your business.

Practical business uses:

  • Market analysis: “Analyse the current state of the New Zealand construction industry, focusing on residential building trends, key players, and regulatory changes affecting small to medium builders”

  • Competitive intelligence: “Research our top 5 competitors in the Auckland accounting services market, including their pricing models, service offerings, and recent client wins”

  • Regulatory compliance: “What are the recent changes to New Zealand employment law that affect small businesses with 10-50 employees? Include practical compliance steps”

  • Supply chain research: “Find alternative suppliers for industrial packaging materials in the South Island, focusing on companies that can handle our minimum order quantities”

  • Client research: “Research this potential client’s business model, recent news, key personnel, and industry challenges” (brilliant for sales prep)

Deep Research reports aren’t error-free but are far more accurate than just asking the AI directly, and the citations tend to be correct. Each system works slightly differently - Claude and o3 with web search enabled work as mini Deep Research tools, while Google offers options like turning reports into presentations or infographics you can use with clients.

Document and Content Creation: Scale Your Communications

All three systems can produce professional documents, presentations, analyses, and communications. This is where businesses see immediate productivity gains.

Business applications:

  • Proposals and tenders: Give it your previous successful proposals and brief details about the new opportunity

  • Client communications: “Draft a project update email for our client explaining the delay in the Auckland office fit-out due to supply chain issues, maintaining professional tone while being transparent about revised timelines”

  • Policy documents: “Create an updated remote work policy for a 25-person Wellington tech company, incorporating recent changes to employment legislation”

  • Training materials: “Develop onboarding materials for new sales staff covering our CRM system, client communication standards, and territory management”

  • Financial reporting: Upload your data and ask for executive summaries, trend analyses, or board presentation materials

To get Gemini or ChatGPT to create professional documents reliably, select the Canvas option. Claude handles document creation well on its own.

The Right Model for Business Work

Each system offers multiple AI models - think of it like choosing the right tool for the job. You’ve got three tiers: fast models for casual tasks (Claude Sonnet, GPT-4o, Gemini Flash), powerful models for serious business work (Claude Opus, o3, Gemini Pro), and ultra-powerful models for complex analysis (o3-pro, which can take 20+ minutes to think through problems).

Fast models are fine for brainstorming or quick questions. But for anything business-critical - client proposals, financial analysis, strategic planning, legal document review - switch to the powerful model. Most systems default to the fast model to save computing power, so you need to manually switch using the model selector dropdown.

The free versions don’t give you access to the most powerful models - if you don’t see these options in the model selector, that’s why. For business use, you need the paid version.

I use o3, Claude 4 Opus/Sonnet, and Gemini 2.5 Pro for serious business work - stick with these powerful models for anything important.

Privacy for Business: Claude doesn’t train future models on your data, but Gemini and ChatGPT might (unless you’re using business/enterprise versions). You can turn off training features in ChatGPT without losing functionality. For sensitive business information, this matters. Andrew wrote a good post on this on LinkedIn.

Voice Mode: Perfect for Mobile Professionals

Voice mode is brilliant for busy professionals. The best implementations are in the Gemini app and ChatGPT’s app and website. Claude’s voice mode is weaker.

The killer feature for business isn’t just the conversation - it’s sharing your screen or camera. Point your phone at contracts, spreadsheets, equipment, or site issues.

The AI sees what you see and responds in real-time. I’ve used it to:

  • Review contracts while travelling
  • Get quick analysis of financial data on screen
  • Identify equipment issues on job sites
  • Translate documents or signs when dealing with international suppliers
  • Get instant feedback on presentations before client meetings

Most people use voice mode like Siri - you’re missing the business applications.

Working with AI: Business Best Practices

The most recent AI models can often figure out what you want without complex prompts, so just approach it conversationally rather than getting too worried about exact wording.

Key principles for business use:

Give business context: Most AI models only know basic information and the current chat. Provide context: company background, industry specifics, previous successful documents, client requirements. Upload files, images, or detailed briefs using the file upload option.

Be specific about business outcomes: Instead of “Write a marketing email,” try “I’m a Wellington-based accounting firm targeting small e-commerce businesses. Write a cold outreach email addressing their GST compliance challenges and our specialist e-commerce accounting services. Here’s our service details: [paste]”

Ask for business alternatives: The AI doesn’t get tired. Ask for 20 different approaches to a client problem, or 10 ways to improve a proposal. Then push the AI to expand on the approaches that resonate. But remember, treat it like you would any human. Be nice and give it context.

Use iterative refinement: All systems let you edit prompts after getting answers, creating conversation branches. This is perfect for refining business documents - you can explore different approaches and compare results.
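One way to make "give business context" a habit is to assemble the same pieces every time before you hit send. Here's an illustrative helper - the structure and field names are mine, not any vendor's API:

```python
def build_business_prompt(task, company_context, audience, examples=None):
    """Assemble a context-rich prompt from the pieces recommended above:
    who you are, who the output is for, what good looks like, and
    finally the task itself."""
    parts = [
        f"Company context: {company_context}",
        f"Audience: {audience}",
    ]
    if examples:
        parts.append("Examples of previous successful output:")
        parts.extend(f"- {e}" for e in examples)
    parts.append(f"Task: {task}")
    return "\n".join(parts)

# Usage: paste the result into ChatGPT, Claude, or Gemini.
prompt = build_business_prompt(
    task="Write a cold outreach email about our e-commerce GST services",
    company_context="Wellington-based accounting firm",
    audience="small e-commerce business owners",
)
```

The same structure works whether you paste it into a chat window or, later on, feed it to an API.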

Common Business Pitfalls

Hallucinations: While much improved, AI still makes confident mistakes. Answers are more reliable from the powerful models that do web searches. Always use AI for business topics you understand, and verify important facts, especially for client-facing work.

Not magic: The best AIs perform like very smart consultants on many tasks, but can’t provide miraculous insights beyond human understanding. If something seems impossible, it probably isn’t actually doing that.

Always engage: Have back-and-forth conversations. Don’t just ask for responses - push the AI, ask for alternatives, request refinements. This is where the real business value comes from.

Verification: For business-critical work, click “show thinking” to see what the model was considering. Not 100% accurate, but helpful for understanding the reasoning behind recommendations.

Getting Started: Your First Business Applications

Pick a system and invest the $45/month per user (free versions are demos, not business tools). Then test these three business applications immediately:

First: Switch to the powerful model and give it a real business challenge - a client proposal, competitive analysis, or strategic planning question. Provide full context and have an interactive discussion. Ask for specific outputs and refine until you’re happy with the business quality.

Second: Try Deep Research on a business question where you need comprehensive information - market analysis for a new service line, competitive intelligence, or regulatory research relevant to your industry.

Third: Use voice mode during your commute or between meetings to think through business problems, review presentations, or brainstorm solutions to client challenges.

The tools are genuinely impressive when used well for business applications. The key is understanding what they can actually do for your specific business needs and industry context.

Cheers, Tim


Want hands-on help applying these tools to your business? Our AI Accelerator programme gives business owners personal coaching and group workshops to build practical AI skills that deliver immediate results. If you’re leading a team, explore how our AI Native Programme can upskill your entire organisation. And for a look at real business outcomes, browse our AI use cases library.

The "AI Cake Fallacy" - Why NZ Businesses can't have their AI Cake and eat it too

Stanford research reveals 41% of AI investments target the wrong areas. Learn the four zones of AI automation and why people-first adoption delivers better results.

Tim Clark

Co-founder · 25 June 2025 · 6 min read

I’ll be honest with you - I’m living this contradiction every single day.

In my role as a business owner, I’m constantly looking for ways to cut costs and improve efficiency. When I see a process that could be streamlined or a role that might be automated, there’s this immediate mental calculation: “What would this save us monthly? Could AI handle this?” It’s almost automatic now, this lens of optimisation and cost reduction.

But here’s the thing - I’m obviously a huge AI user. And when I use AI tools, I’m not trying to replace myself. I’m trying to get rid of the tedious stuff so I can focus on what actually matters. I want AI to handle my scheduling, draft my initial emails, and organise my notes so I can spend more time on strategy, relationships, and the work that genuinely energises me.

The contradiction hit me recently when I realised I had been looking at my team’s roles through that “efficiency lens” while protecting my own work from the same scrutiny. I wanted AI to free me up to do more meaningful work, but I was unconsciously viewing their roles as potentially replaceable.

Last week, I stumbled across this Stanford research that perfectly captures what I call the ‘AI Cake Fallacy.’ Everyone wants their slice of the AI productivity gains, but nobody wants to give up the parts that matter most to them. And the data proves this disconnect is real - and it’s killing AI implementations.

The Problem We’re All Dancing Around

The Stanford study surveyed 1,500 workers across 104 occupations and found something that should make every business owner pause. Yes, 46% of workers want AI automation, but only for specific tasks. They want AI to handle scheduling, data entry, and file management. You know, the stuff that’s genuinely dull. They want to focus on strategy, creativity, and meaningful work.

Makes perfect sense when you think about it. Why would anyone want to spend their workday doing stuff they don’t enjoy?

But here’s where it gets interesting. The research shows that 41% of current AI investments are going to what they call the “wrong places” - either low-priority areas or “Red Light Zones” where workers actually resist automation.

We’re literally investing in the stuff people don’t want automated while missing the opportunities where they’d welcome it with open arms.

The Four Zones That Change Everything

The researchers identified four distinct zones, and understanding them may shift how you think about AI implementation.

There’s the “Green Light Zone” where workers actually want automation and AI can deliver. Think automated scheduling, basic data processing, routine admin work. Companies operating here see both productivity gains and employee satisfaction. It’s the sweet spot we should all be aiming for.

Then there’s the “Red Light Zone” - high AI capability but low worker desire. This is where we often get seduced by what’s technically possible rather than what’s actually wanted. Creative tasks, strategic planning, client relationship management. Force AI into these areas and you’ll create resistance and disengagement faster than you can say “digital transformation.”

The study found that Arts and Design workers only wanted 17% of their tasks automated. Editorial roles consistently rated as requiring essential human involvement. The pattern is clear - workers fiercely protect tasks involving creative expression, human relationships, strategic thinking, and work they find genuinely enjoyable.
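The two axes behind those zones - what AI can do, and what workers want automated - can be sketched as a small lookup. The Green and Red labels come from the study; the other two labels and the 0.5 cut-offs are my own shorthand:

```python
def automation_zone(ai_capability, worker_desire):
    """Place a task in one of four zones using 0-1 scores on each axis.

    Green Light: AI is capable AND workers want it automated.
    Red Light:   AI is capable but workers don't want it automated.
    Opportunity: workers want it but AI can't yet deliver.
    Low priority: neither axis clears the bar.
    """
    capable = ai_capability >= 0.5
    wanted = worker_desire >= 0.5
    if capable and wanted:
        return "Green Light Zone"
    if capable:
        return "Red Light Zone"
    if wanted:
        return "Opportunity Zone"
    return "Low Priority Zone"

# Scheduling and data entry score high on both axes -> Green Light.
# Creative and strategic work scores low on desire -> Red Light.
```

The article's point in code form: 41% of investment landing outside the green quadrant means the inputs were wrong, not the model.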

The Reality Check I Needed

There’s a cycle here that’s all too familiar - we saw it with failed digital transformation projects too. Initial excitement about AI capabilities, followed by the slow realisation that implementation isn’t working as expected, then confusion about what went wrong, followed by the frustrated “just make it work” directive that leads nowhere.

We skip the critical step of understanding what our people actually value about their work before trying to automate it away. We assume that because AI can technically do something, it should do something.

What Workers Actually Want

The most telling finding from the Stanford research is that workers prefer equal partnership with AI, not replacement. They want collaboration, not elimination. This isn’t about being resistant to change - it’s about being protective of meaning.

When you dig into what creates the strongest resistance, it’s the human-centred stuff. Customer service conversations where relationship building matters. Sales interactions that require reading between the lines. Training and mentoring colleagues where empathy makes the difference. Any work requiring genuine human judgment or emotional intelligence.

Workers will gladly hand over the scheduling, the data entry, the routine administrative tasks. But ask them to surrender the creative problem-solving, the strategic thinking, the work that makes them feel human, and you’ll hit a wall.

The Skills Revolution We’re Missing

The research reveals something important about where we’re heading. There’s a clear shift happening from information-processing skills toward interpersonal and organisational competencies. The traditionally high-wage work like data analysis is becoming less valuable, while relationship building, creative thinking, and strategic judgment are becoming more important.

This isn’t just about redistribution - it’s about fundamental redefinition of value. The most valuable human skills are becoming the ones that are most distinctly human.

Getting This Right

The companies that succeed with AI won’t be the ones who automate the most. They’ll be the ones who understand that AI works best when it amplifies human capability rather than replacing it.

This means having real conversations with your team about which tasks they’d actually want automated. It means understanding what they find meaningful about their work before you try to optimise it away. It means designing AI implementation with them, not to them.

The Bigger Picture

The Stanford research proves what many of us have suspected - the real opportunity lies in that collaboration zone where AI handles the tedious while humans focus on what they do best: thinking, creating, relating, and innovating.

As business owners, we need to resist the temptation to see AI as primarily a cost-cutting tool. Yes, it can reduce costs, but its real value lies in freeing our people to do more valuable work. When we get this right, we don’t just save money - we create more value.

The question isn’t whether your business can have its AI cake and eat it too. It’s whether you’re smart enough to share the cake in a way that everyone wins. Because the research is clear - when you get this alignment right, both productivity and satisfaction go up.

And honestly, isn’t that the kind of transformation we actually want to be part of?


Ready to align your AI investment with what your team actually wants? Our AI Clarity Session helps you identify the green-light opportunities where AI delivers both productivity gains and employee satisfaction. Or explore our real-world AI use cases to see practical examples of people-first automation in action.

Leading AI adoption with a clear vision

Companies with a formal AI strategy see 80% adoption success vs 37% without one. Learn how to define, craft, and share an AI vision that brings your team along.

Phil Vinall

Co-founder · 23 June 2025 · 4 min read

Does AI spark excitement or unease in your team? As a business leader, your clear vision is needed to ensure smooth AI adoption.

For any AI effort to really hit its stride, it needs a clear purpose, right from the very top. A recent McKinsey report found that the absence of a clear vision can cause “significant division” in organisations. This kind of disconnect - where only 45% of staff feel good about AI adoption compared to 75% of leaders - shows a clear gap in understanding and alignment. Some employees might even start using their own AI tools on the sly (“shadow AI”) if they’re not happy with, or can’t access, the official ones. That’s a recipe for unmanaged risk for your business.

However, companies with a formal AI strategy report an 80% success rate in AI adoption, compared with a mere 37% for those without one. That 43 percentage-point gap highlights how vital a clear vision is to successful AI adoption.

Defining the vision

Successful AI adoption begins with envisioning what an AI-Native version of your business looks like. It’s about imagining what your business could become in the next few years with an AI-first mindset. This isn’t just about the technology you’ll use; it’s about seeing how AI fundamentally changes how your business operates, how decisions are made, and how problems are solved. Crucially, it’s about understanding what new expertise, culture, and mindset your people will also need to make that vision possible. Start with some of your key frustrations and aspirations - what do you wish you could do away with, and what do you want to do more of? What would your business look like if you started it today with AI running at the heart of it?

Crafting the vision

Once the vision is becoming clear, it’s essential to ground it in what’s achievable today - by deepening your practical understanding of AI’s capabilities (and limitations) within your unique business context. AI isn’t a magic wand - it requires thoughtful planning, thorough testing, and expert human oversight. As a result, AI works best as a collaborative partner, not just a tool. Think “humans multiplied by machines”, genuinely augmenting your team’s abilities rather than replacing them. Generative AI can boost the performance of highly skilled workers by as much as 40% across a variety of areas. This allows your team to focus on higher-value, more engaging (and satisfying) work.

Sharing the vision

Finally, and perhaps most crucially, it’s important to effectively communicate this vision to your team. Transparent communication is vital to ensure your people feel included and informed, rather than threatened by change. It’s about painting a picture where they can genuinely see themselves thriving alongside AI, not being replaced by it. Leaders who proactively educate employees, involve them in decisions, and maintain transparency can transform resistance into advocacy. This empathetic approach helps bridge the gap between management and employee perceptions, fostering a culture of trust and shared success. Remember, successful AI adoption isn’t about rushing into technology; it’s about bringing your people along thoughtfully.

What’s your initial thought or biggest question about defining a powerful AI vision for your business? We’d love to hear from you.


Need help crafting your AI vision? Our AI Clarity Session is a focused 2-hour workshop where we help you define exactly what AI could mean for your business and align your leadership team on a clear direction. For a deeper dive into the leadership skills required, read our article on essential leadership skills for AI adoption.

Essential leadership skills for successful AI adoption

McKinsey says leadership is AI's biggest hurdle. Discover the essential skills NZ business leaders need to drive successful AI adoption and empower their teams.

Phil Vinall

Co-founder · 1 June 2025 · 3 min read

The biggest hurdles when adopting AI aren’t about the tech. They’re actually about you - how will you lead AI adoption in your organisation?

A report from McKinsey said, “The biggest hurdle to success [with AI] is leadership.” And it’s a common theme in other findings. MIT Sloan Management Review (April 2025) found that 91% of top data managers said “team challenges and managing change” were the main problems holding them back with AI.

For Kiwi businesses, getting AI working well isn’t just about buying new software or hiring a tech guru. It’s about how you, as the leader, tell the story of AI, get your people ready, and weave it into your unique business.

Your role in leading AI adoption

Sharing the vision: If you don’t have a clear idea of why you’re bringing in AI, or what you want it to achieve, your efforts can end up all over the place. As Codewave pointed out (May 2025), “AI actually helps the business when the top leader leads the way.” Your team needs to understand why and how AI is coming into your company.

Turning worries into wins: It’s fair enough for people to worry about AI taking their jobs. But staff are often less scared of AI itself, and more worried about how it will change their daily work, especially when things aren’t clear. This is where your leadership is crucial. When you show that AI is here to help people, not replace them, it changes everything. For example, Microsoft’s own studies (April 2025) found that staff using an AI Copilot saw a 10-20% boost in getting things done, and 68% felt happier at work. It helped free them up for more interesting, important (and valuable) work, and boosted their confidence in their future.

Leading the journey: Bringing in AI means some changes in the workplace. You need to be the kind of leader who can guide your team through this. You can do this by setting clear expectations, letting them try out new things, and building trust. When you, as the leader, actively share in exploring AI and its benefits, you create a workplace where your team sees it as a helpful partner, not something to be afraid of.

Adopting AI is less about a tech race and more about a thoughtful, people-focused journey. For Kiwi businesses, this means looking beyond the gadgets and truly focusing on your team and your style of leadership. When you, the leader, are strong and understanding, your AI plans will have a clear purpose, get results, and grow right alongside your business.

What’s your biggest question or fear about leading AI adoption within your business? We’d really love to hear from you.


Want to develop your AI leadership skills with expert guidance? Our AI Clarity Session gives you a structured 2-hour workshop to build your AI vision and roadmap. For ongoing support, the AI Native Programme helps you lead your team through the full adoption journey. You might also enjoy our related article on leading AI adoption with a clear vision.