Stop Buying AI Features. Start Building an AI Advantage.

I help Australian mid-market leadership teams ($20M–$500M revenue) turn scattered AI experiments into a governed portfolio that compounds EBIT and reduces risk.

Most of my work is alongside CEOs and boards who are already spending on AI—and want it to stop leaking into pilots that never ship.

Frameworks and playbooks used across insurance, manufacturing and services.

What AI Is Actually For

Most mid-market organisations are using AI for the wrong thing.

  • Developers are building pilots that can't be deployed because governance, security and operating-model readiness aren't there.
  • Executives are sold tools that promise to "speed up broken processes" instead of asking how AI should change their market position, business model and capital allocation.

AI is not a productivity add-on. It's a capital allocation question.

You should be using AI to reshape how you compete, how you serve customers, and how you allocate scarce time and budget—not to simply speed up broken processes or bolt another chatbot onto an already messy workflow.

What the Research Says (Not Just My Opinion)

If this sounds strong, it's because the evidence has moved.

Independent research from McKinsey, BCG, the Australian Institute of Company Directors and the Harvard Law School Forum on Corporate Governance now says the same thing:

  • McKinsey: AI value only shows up when the CEO leads the transformation and the C-suite builds its own "AI muscle", rather than delegating it to IT.1,2
  • McKinsey's latest State of AI research shows that only a small minority of organisations are true "AI high performers"—and they win by redesigning workflows and business models, not by adding tools to existing processes.3
  • BCG: CEOs must be at the centre of data and AI conversations, leading the big strategic calls, not just approving tools or dashboards.4,5
  • AICD and governance bodies: boards now have explicit expectations to oversee AI risk and governance; they cannot outsource their duty of care to the CIO or a vendor.6,7

My practice is simply the mid-market, implementation-ready version of this: making sure your AI portfolio, capital allocation and governance match what the evidence says works—not what the loudest vendor is selling.

1 McKinsey – Building the AI muscle (2025) · 2 McKinsey – Economic potential of generative AI (2023) · 3 McKinsey – State of AI 2025 · 4 BCG – CEOs must lead data conversations (2025) · 5 BCG – As AI changes work (2025) · 6 AICD/HTI – Director's Guide to AI Governance (2024) · 7 Harvard Law – The Artificially Intelligent Boardroom (2025)

Board & C-Suite AI Briefing Partner

AI is now a board-level skill, not an IT hobby.

Boards and CEOs are being told they are personally accountable for AI risk, governance and capital allocation. They can't delegate understanding of AI's impact to a CIO who is also still learning, or to developers who are focused on pilots rather than enterprise risk.

You can safely treat your ERP as plumbing.

You can't treat AI that way. You need your own board-ready view of where AI should be used strategically, and of how it's changing profit, risk and defensibility in your business.

That's the gap I fill.

My work with leadership teams typically looks like:

  • Private working sessions with the CEO and C-suite to separate signal from noise in AI—what matters in your industry, what doesn't, and where the real risks sit
  • Long-form conversations about how AI should change your market position, not just your process diagrams
  • Regular check-ins during pilots and deployments so executive decisions on kill / fix / double-down are grounded in evidence, not fear of the first visible error

The goal isn't to turn you into prompt engineers. It's to make sure your strategy, capital allocation and risk posture reflect what AI is actually doing to your industry, rather than whatever the last vendor deck claimed.

Why leadership teams work with me

1. Translator between business, IT and AI

I speak fluent boardroom and fluent terminal, and I bridge the two so your AI strategy doesn't get lost in translation.

  • IT: AWS Solutions Architect, Salesforce Certified Application & System Architect, TOGAF-certified Enterprise Architect
  • Business: Master of Management (Innovation), two decades of delivery across insurance, manufacturing and services
  • AI: 26 articles + 15 ebooks on AI deployment and governance in 2025 [insights] [linkedin]

2. Future-focused, but grounded in things I've already built

Most consultants write about AI; I ship it. Current open-source and internal tools include:

  • SiloOS — security-first AI execution environment [article]
  • Discovery Accelerator — multi-agent decision engine (97% vs 80% single-model accuracy) [article]
  • ask CLI — terminal-based agentic system with plan/act loops [article]

3. Board-side thinking partner and communicator

I spend much of my time in the room with CEOs, CFOs and boards—framing AI in the language of capital allocation, risk appetite and competitive moat. Briefings are designed to survive the next board meeting, not just the next sprint.

4. Steward of moat and capital, not just a project helper

I'm not here to help you finish a sprint. I'm here to help you decide which experiments to kill, which to double down on, and how to turn AI spend into long-term defensibility. The goal isn't a faster chatbot—it's a compounding asset your competitors can't copy.

Why Most AI Projects Fail: The Three Traps

Industry research documents a 40–90% AI project failure rate. Most failures stem from three common patterns that mid-market organisations fall into when treating strategic transformation like technology procurement.

The First Idea Trap

Seeing AI opportunities through only one lens. Taking the first obvious idea without genuine multi-perspective debate.

Example:

Operations sees "automate customer intake = save 2,200 hours/month." Revenue sees "$300K expansion revenue at risk from lost upsell calls." HR sees "attrition risk from removing meaningful work." Net result: lose $300K to save $200K.

Impact: $30–40B in wasted AI investment. 95% of pilots fail because companies solve the wrong problem by viewing it through a single lens.

The One-Error Death Spiral

Deploying AI without baseline metrics or observability. When the first visible error happens, there is no data to prove the AI outperforms humans.

Example:

Agent makes 15 mistakes out of 1,000 tasks (98.5% success). Executive asks "How often?" No observability = no data. Project cancelled despite possibly outperforming humans at their 3.8% error rate.

Impact: The "one error = kill it" dynamic destroys projects that might be succeeding.
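With baseline metrics in place, the "how often?" question has a numeric answer. As a sketch only, here is the arithmetic from the hypothetical example above expressed as runnable code (the figures are illustrative, not client data):

```python
# Illustrative sketch: compare an agent's observed error rate against
# a human baseline. Both figures come from the hypothetical example
# above, not from a real deployment.

def error_rate(errors: int, total: int) -> float:
    """Observed error rate as a fraction of total tasks."""
    return errors / total

ai_rate = error_rate(15, 1_000)  # agent: 15 mistakes in 1,000 tasks
human_rate = 0.038               # assumed human baseline error rate

print(f"AI error rate:  {ai_rate:.1%}")     # 1.5%
print(f"Human baseline: {human_rate:.1%}")  # 3.8%
print("AI beats baseline" if ai_rate < human_rate else "AI below baseline")
```

The point is not the arithmetic; it is that without observability, neither number exists when the executive asks the question.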

Maturity Mismatch

Treating AI deployment as SaaS procurement instead of software development. Jumping to R3–R4 automation when the organisation is only ready for R0–R1.

Example:

Going straight to "AI handles customer refunds automatically" when the organisation lacks prompt version control, regression tests, or observability.

Impact: 80%+ failure rate, $50K–$300K wasted, and a team that concludes "AI doesn't work for us."
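A minimal sketch of the discipline that R0–R1 readiness implies: prompts held under version control, with a pre-deploy regression check, instead of edits made live in a vendor console. The `refund_triage` prompt family and all names here are hypothetical, for illustration only:

```python
# Hypothetical sketch: versioned prompts plus a minimal regression
# check. Prompt IDs, templates and thresholds are illustrative.

PROMPT_VERSIONS = {
    "refund_triage/v1": "Classify this refund request: {request}",
    "refund_triage/v2": (
        "Classify this refund request as APPROVE, DENY or ESCALATE.\n"
        "Request: {request}\nRespond with one word."
    ),
}

def render(prompt_id: str, **fields: str) -> str:
    """Render a versioned prompt template; fails loudly on unknown IDs."""
    return PROMPT_VERSIONS[prompt_id].format(**fields)

def regression_check(prompt_id: str) -> None:
    """Minimal pre-deploy checks every prompt version must pass."""
    text = render(prompt_id, request="sample")
    assert "{request}" not in text, "unfilled placeholder"
    assert len(text) < 2_000, "prompt exceeds length budget"

regression_check("refund_triage/v2")  # run before any deployment
```

In practice the regression suite would also replay a fixed set of labelled cases against each new prompt version; the sketch shows only the shape of the control.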

The AI Investment Steward Model

Portfolio management + governance frameworks + ownership transfer = compounding returns

Capital Allocation Discipline

Treat AI the way a CFO treats capital allocation—clear investment thesis, quarterly reviews, kill/fix/double-down decisions, explicit risk appetite. No random experiments, systematic evaluation.

Advisory and stewardship typically represent 10–30% of your AI budget so that the other 70–90% compounds instead of leaking into orphaned pilots.

Automated Controls, Not Committees

Governance lives in the system, not in a PDF. PII redaction, observability and decision logs are built into AI workflows so that every call is traceable, auditable and explainable. Safety and compliance are enforced by architecture and automation, not standing meetings and slide decks.

The underlying tools can change over time—the point is a governed pattern your teams can apply everywhere.
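To make "governance lives in the system" concrete, here is a minimal sketch of a governed model call: PII redaction before anything leaves the boundary, plus an append-only decision log. The redaction patterns, log shape and `governed_call` wrapper are illustrative assumptions, not a production policy:

```python
import re
import time
import uuid

# Illustrative sketch: every model call passes through one wrapper
# that redacts obvious PII and writes a decision-log entry.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d[\d\s-]{7,}\d\b")

def redact(text: str) -> str:
    """Mask emails and phone numbers before text leaves the boundary."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

DECISION_LOG: list[dict] = []  # stand-in for an append-only audit store

def governed_call(prompt: str, model=lambda p: f"echo: {p}") -> str:
    """Call the model only with redacted input, and log the decision."""
    safe = redact(prompt)
    reply = model(safe)
    DECISION_LOG.append({
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "input": safe,   # already redacted, so the log itself is safe
        "output": reply,
    })
    return reply

print(governed_call("Refund for jane@example.com, ph 0412 345 678"))
```

Because the control is a wrapper, not a committee, it applies to every call automatically; swapping the underlying model does not change the governed pattern.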

Build Capability, Not Dependency

The stack is composable and open—documented patterns, reference implementations and a trained internal team. After 12–18 months, most clients reduce to quarterly or board-level reviews because the organisation can run AI as part of normal operations.

You own the code, frameworks and operating model. No vendor lock-in, and no permanent dependence on me to make basic decisions.

Proven Frameworks, Research-Backed Insights

These aren't blog posts optimised for SEO. They're systematic explorations of AI deployment challenges and solutions—frameworks you can apply, research-backed insights, and counter-intuitive thinking tested against real-world implementations.

1. SiloOS: The Agent Operating System for AI You Can't Trust

AI-first, security-first, privacy-first execution environment that straps untrusted agents to the chair: policy walls, observability, and containment by design. For leaders who need superhuman AI without losing control.

2. The AI Executive Brief — November 2025

Based on McKinsey's November 2025 Global Survey on AI (1,993 respondents, 105 countries). 88% now use AI in at least one function; only 6% are high performers (>5% EBIT impact); 62% are stuck in pilot purgatory. What separates the winners? Governance cadence and operating model, not model choice.

3. Stop Automating. Start Replacing: Why Your AI Strategy Is Backwards

"You can’t automate your way to transformation. You have to rethink the work itself." — Bain & Company. Shows why assistive automation caps at 5–10% while replacement architectures deliver 60–90% gains; includes playbook to flip from process grease to process redesign.

4. Why 42% of AI Projects Fail: The Three-Lens Framework for AI Deployment Success

Unspoken misalignment between CEO, HR, and Finance kills AI deployments despite working technology. Pre-deployment checklists and phase gates ensure all stakeholders align before building—preventing the organizational readiness gaps that doom 42% of initiatives.

5. The AI Paradox: Why 68% of SMBs Are Using AI But 72% Are Failing

68% of SMBs use AI, 72% struggle with integration. The problem isn't access or education—it's "Add-On Purgatory." Introducing the AI Bridge role and economic inversion: custom AI now cheaper than SaaS subscriptions over 18–24 months. Includes 90-day implementation plan.

6. Why AI Projects Are Failing - Explained

Running 2005 procurement playbooks against 2025 technology. Only 5% of enterprises extract consistent AI value (BCG/S&P). Four mental shifts framework: requirements→RFP→install becomes hypotheses→experiments→operating model. Addresses vendor AI-washing and procurement anti-patterns.

7. The Team of One: Why AI Enables Individuals to Outpace Organizations

When thinking costs drop to near-zero, bottleneck shifts from headcount to coordination architecture. Explains why 95% of corporate AI fails: organizations can't learn fast enough. Delegation architecture (treating AI as team members) + tight learning loops beat 50-person teams. Addresses $1.3M annual coordination tax per 1,000 employees.

8. Discovery Accelerators: The Path to AGI Through Visible Reasoning Systems

80% of AI projects fail because recommendations can't be defended to boards/regulators. Multi-dimensional reasoning (chess-inspired search across HR/Risk/Revenue/Brand lenses) shows rejected alternatives with transparent reasoning. Council of AIs achieved 97% accuracy on medical exams vs. 67% for single GPT-4.

9. Why 95% of AI Pilots Fail—And How AI Think Tanks Solve the Discovery Problem

Most companies don't know what they want from AI. Multi-agent reasoning (Operations/Revenue/Risk/People brains debate ideas) produces 2× better results. The "John West Principle": showing rejected alternatives builds more trust than hiding them. Discovery problem vs. tool problem.

10. Why Most SMB AI Projects Are Designed to Fail (And How to Fix It)

A readiness framework for SMBs entering custom AI development—often without realizing it. AI software is harder to deploy and operate than standard software. Reviews recurring failure patterns and how to harden delivery before pilots stall.

Research citations: BCG, S&P, NIST AI RMF, OWASP LLM Top 10, OpenTelemetry GenAI, Gartner

Real-World Projects

Strategic architecture and innovative thinking driving business transformation

INSURANCE | STRATEGIC ARCHITECTURE

Covermore Travel Insurance

Challenge: Amadeus travel systems dominated the distribution channel, limiting direct relationships with key partners.

Strategic Solution: Designed and delivered complete disintermediation architecture—removing dependency on Amadeus while maintaining continuity.

Breakthrough Impact: Enabled enduring partnership with Flight Centre that became foundation for successful IPO.

Key Insight: Strategic architecture isn't just technical—it's about removing barriers to high-value relationships. Systematic thinking about dependencies and alternatives creates breakthrough opportunities.

TECHNOLOGY | AI-FIRST INNOVATION

Dynaquest

Challenge: HubSpot subscription costs escalating while core workflows (outbound sales, training, onboarding) needed custom automation.

Innovative Solution: AI-first architecture replacing conventional SaaS. Purpose-built workflows with systematic governance from day one.

Breakthrough Impact: Owned infrastructure, no vendor lock-in, custom workflows tuned to exact business needs. Systematic approach vs. feature accumulation.

Key Insight: Challenging conventional SaaS wisdom with systematic AI-first thinking. When you control the architecture, you control the evolution. Governance isn't bureaucracy—it's ownership.

"Strategic breakthroughs come from systematic thinking about dependencies, alternatives, and ownership—not from following vendor roadmaps."

— Pattern from two decades of solutions architecture across insurance, manufacturing, and services

Systematic Governance in 90 Days

Fast proof or fast kill—no 6-month POCs that die quietly

1. Audit & Clarity

Weeks 1–4

What: AI Portfolio Review—audit all vendors, tools and projects, plus 2–3 working sessions with the CEO and C-suite to align on where AI should and should not play in the strategy

Output: Kill/Fix/Double-Down decisions, AI waste identified ($50K–$200K typical)

Deliverables: 90-day roadmap, Readiness Scorecard, board-ready summary

2. Govern & Pilot

Weeks 5–8

What: Design and implement a Company AI Gateway plus a 10-day pilot on a single high-value workflow, with weekly executive check-ins to review results, surprises and kill/scale decisions

Output: Shadow AI brought under governance, 15–40% improvement measured against baselines

Deliverables: Observability dashboard, evaluation framework, kill/scale criteria agreed with leadership

3. Scale & Transfer

Weeks 9–12

What: Move successful pilots into production, train internal teams and hand over ownership, including an executive debrief on what you've learned about AI in your own context and how that changes the next 12–24 months of capital allocation

Output: Internal AI capability with clear metrics and governance

Deliverables: You own the code, frameworks and operating model. Ongoing advisory available where it makes economic sense.

Investment context

Most clients are already investing $150K–$1M+ annually in AI. Governance and portfolio stewardship typically represent 10–30% of that spend and pay for themselves by eliminating wasted investments in the first 90 days.

How We Work Together

Start with assessment. Then scope the right engagement based on your readiness, AI spend, and strategic priorities.

1. Assessment

10-minute Readiness Scorecard. Know your score, gaps, pathway.

FREE

2. Discovery

30-minute call. Discuss situation, challenges, current AI spend.

FREE

3. Audit

2–3 weeks. Deep review of all AI investments, plus dedicated working sessions with the CEO and C-suite.

$15K–$25K

4. Engage

Custom scope based on readiness + priorities.

CUSTOM

Typical Engagements

AI Portfolio Review & C-Suite Briefing

Audit all AI spend, vendors, tools and pilots. Classify: Kill / Fix / Double-down. Includes 2–3 executive working sessions to align on risk, ROI and strategic direction. Deliver 90-day roadmap + waste report ($50K–$200K typically identified).

$15K–$25K · 2–3 weeks

Ongoing AI Investment Stewardship

Portfolio stewardship, quarterly reviews, governance oversight, vendor evaluation, board presentations and ongoing CEO / C-suite working sessions. Scope varies: retained advisory or project-based support.

CUSTOM · Based on AI spend + readiness

Governed Implementation Support

Hands-on support from an architect who's also your strategic advisor: design the Company AI Gateway, run the 10-day pilot, move into production with governance baked in. The goal is to build durable capability, not a dependency.

CUSTOM · 3–6 months typical

Investment Depends On

Your Current AI Spend

Are you investing $150K, $500K, or $1M+ annually? Scope scales accordingly.

Readiness Score

A score of 0–10 means foundation-building comes first; 17+ is ready for systematic deployment. Different pathways, different investments.

Strategic Priorities

Governance urgency? Shadow AI risk? Failed pilots to salvage? Board pressure? Each shapes the engagement.

Context for Investment

Industry data shows 72% of AI projects fail. If you're spending $500K on AI, that's ~$360K wasted annually at the documented failure rate.

Systematic governance prevents waste and improves the spend that's working. A portfolio review typically identifies $50K–$200K in cuttable spend, often paying for itself through eliminated waste alone.

Is This For You?

This IS for you if:

  • 25–500 employees, typically $20M–$500M revenue
  • Currently investing $150K–$1M+ annually in AI (tools, pilots, vendors)
  • Have executive or board-level sponsorship and budget authority
  • Tried AI with underwhelming/failed results
  • Board/competitors pressuring for AI ROI
  • Staff using shadow AI (ChatGPT with company data)
  • Value prudent investment over hype-chasing
  • Willing to kill failed projects (no sunk-cost fallacy)
  • Want ownership and flexibility (not vendor lock-in)
  • Australian company or significant AU operations

This is NOT for you if:

  • <25 employees or <$150K annual AI spend
  • >500 employees with dedicated AI teams
  • Pure startup <2 years old (too early for governance)
  • Want "AI strategy deck" without implementation
  • Seeking cheapest vendor (my work typically sits in the $180K–$300K/year band when fully engaged)
  • Not yet investing in AI (come back when ready)
  • Expect AI to "solve everything" (we're skeptics)
  • Can't commit to killing failed projects

Not Ready to Commit? Start Here.

Are You Ready for AI?

Take the 10-minute Readiness Scorecard (32 points across 8 dimensions). Instant results: where you are, what's missing, what to fix first.

Take Free Assessment

DIY AI Governance

Step-by-step implementation plan: Audit → Gateway → Pilot → Scale. Includes checklists, tool recommendations, success criteria.

Download Free Template

See How It Works

5 documented case studies: Professional services, manufacturing, healthcare, SaaS, financial services. ROI breakdowns, timelines, lessons learned.

View Case Studies

Book 30-Minute Discovery Call

Discuss your AI challenges, explore fit, no obligation