
AI Strategy Frameworks That Actually Drive Decisions (Not Decks)

AI Consulting · Apr 13, 2026 · 5 min read · Doreid Haddad

Most AI strategy engagements produce thoughtful decks built around frameworks that nobody acts on. The framework looks rigorous. The 2x2 grid is well-populated. The opportunity heat map is comprehensive. Six months later, the company has implemented none of it because the framework forced no choices.

The published AI strategy frameworks from Microsoft, Google Cloud, and Harvard Business School Online consistently name the same seven pillars: strategic vision and alignment, use case prioritization, data strategy and governance, operating model and talent, responsible AI and ethics, technology and infrastructure, and measurement/KPIs. Coverage of all seven matters less than which subset actually drives decisions for your specific situation.

This article covers four AI strategy frameworks that actually drive decisions, with the specific outputs each one produces and the situations where each one fits.

Framework 1: Value-Effort Matrix (the workhorse)

The classic 2x2: value on the y-axis, implementation effort on the x-axis. Plot every AI opportunity. The quadrants:

  • High value, low effort: do these now (typically 3-5 quick wins)
  • High value, high effort: strategic bets (typically 1-2 over 12 months)
  • Low value, low effort: maybe (only if free capacity)
  • Low value, high effort: never (delete from the list)

Why it drives decisions: the bottom-right quadrant gets explicitly killed. Most strategy frameworks let "low value, high effort" survive as "future considerations." This one forces deletion.

When to use: starting an AI strategy from scratch. Need to narrow from 30+ ideas to a working portfolio.
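The quadrant logic above is mechanical enough to sketch in code. A minimal Python version, assuming value and effort are scored on a 1-5 scale with a cutoff of 3 (the scale and cutoff are illustrative choices, not part of the framework):

```python
# Hypothetical sketch: sort AI opportunities into value-effort quadrants.
# The 1-5 scoring scale and the cutoff of 3 are illustrative assumptions.

def classify(value: int, effort: int, cutoff: int = 3) -> str:
    """Return the portfolio action for one opportunity."""
    high_value = value > cutoff
    high_effort = effort > cutoff
    if high_value and not high_effort:
        return "do now"          # quick win
    if high_value and high_effort:
        return "strategic bet"   # 1-2 over 12 months
    if not high_value and not high_effort:
        return "maybe"           # only if free capacity
    return "delete"              # killed outright, not deferred

# Invented example opportunities with (value, effort) scores.
opportunities = {
    "invoice triage": (5, 2),
    "custom LLM pretraining": (2, 5),
}
portfolio = {name: classify(v, e) for name, (v, e) in opportunities.items()}
```

The point of the sketch is the last branch: "low value, high effort" maps to a hard `"delete"`, not to a backlog state.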

Framework 2: Capability vs Commodity Grid

Two axes: how core the capability is to your competitive advantage (commodity → core), and where the work currently lives (build vs buy). Plot each AI investment.

The quadrants:

  • Commodity + Buy: correct (LLM APIs, generic productivity tools)
  • Core + Build: correct (proprietary models trained on your data)
  • Commodity + Build: wrong — you're rebuilding what's available off-the-shelf
  • Core + Buy: wrong — you're outsourcing your competitive advantage

Why it drives decisions: it surfaces misaligned investments. Most companies have at least one item in each wrong quadrant — usually a custom-built tool that should have been bought, or a vendor product they're depending on for differentiation that should be built in-house.

When to use: rationalizing an existing AI portfolio that's grown organically. Surface the misalignments and reallocate.
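The grid reduces to a two-bit check per investment. A sketch of that check in Python, assuming each portfolio item has already been labeled core-vs-commodity and build-vs-buy during the review (the labels themselves are the hard part; the lookup is trivial):

```python
# Hypothetical sketch: flag misaligned items on the capability-vs-commodity
# grid. The is_core / is_built labels are assumed inputs from a portfolio
# review, not something code can decide for you.

def alignment(is_core: bool, is_built: bool) -> str:
    if is_core and is_built:
        return "correct: core + build"
    if not is_core and not is_built:
        return "correct: commodity + buy"
    if is_core:
        return "misaligned: outsourcing your competitive advantage"
    return "misaligned: rebuilding an off-the-shelf commodity"
```

Running every item through this and sorting by the `misaligned` prefix gives the reallocation candidate list directly.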

Framework 3: BCG AI@Scale (capability maturity)

BCG's published AI@Scale framework maps capability across five dimensions: strategy, talent, ways of working, technology, and data. Each dimension gets a maturity score 1-5.

Why it drives decisions: it surfaces specific capability gaps rather than generic "we need AI." A company that scores 4/5/3/4/2 has a clear data problem; a company that scores 5/2/3/2/4 has a talent and ways-of-working problem. Different fixes follow.

When to use: when leadership wants to understand WHY their AI investments aren't producing results. The maturity scoring identifies the binding constraint.
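Reading the binding constraint off a score profile can be expressed as a one-liner: the lowest-scoring dimensions are the constraint. A sketch, using the five dimensions named above (the "lowest score = binding constraint" heuristic is this article's reading, not BCG's published scoring method):

```python
# Hypothetical sketch: find the binding constraint(s) in an AI@Scale-style
# maturity assessment. Dimensions and 1-5 scoring follow the framework;
# treating the minimum score as the binding constraint is a simplification.

DIMENSIONS = ["strategy", "talent", "ways of working", "technology", "data"]

def binding_constraints(scores: list[int]) -> list[str]:
    """Return every dimension tied for the lowest maturity score."""
    low = min(scores)
    return [dim for dim, s in zip(DIMENSIONS, scores) if s == low]

binding_constraints([4, 5, 3, 4, 2])  # ["data"]
binding_constraints([5, 2, 3, 2, 4])  # ["talent", "technology"]
```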

Framework 4: 70/20/10 Innovation Portfolio (allocation)

Adapted from McKinsey's broader innovation framework: 70% of AI investment goes to core/proven use cases, 20% to adjacent extensions, 10% to transformative bets. Apply to AI specifically:

  • 70% (core): known-good AI patterns in your industry — fraud detection, recommendations, customer support automation
  • 20% (adjacent): newer applications of mature AI in your business — agent-based workflows, advanced personalization
  • 10% (transformative): experimental — multimodal use cases, novel research, capability-building bets

Why it drives decisions: it forces allocation discipline. Most companies skew either entirely conservative (100% core, no learning) or entirely speculative (chasing every new capability). 70/20/10 keeps both honest.

When to use: when allocating AI budget for the year. Resists the gravitational pull of either pure caution or pure novelty.
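The allocation itself is plain arithmetic, which is exactly why it enforces discipline: the split is fixed before the budget debate starts. A sketch with an invented budget figure:

```python
# Hypothetical sketch: split an annual AI budget along 70/20/10 lines.
# The $2M budget figure is invented for illustration.

SPLIT = {"core": 0.70, "adjacent": 0.20, "transformative": 0.10}

def allocate(budget: float) -> dict[str, float]:
    """Return the dollar amount per bucket, rounded to cents."""
    return {bucket: round(budget * share, 2) for bucket, share in SPLIT.items()}

allocate(2_000_000)
# {'core': 1400000.0, 'adjacent': 400000.0, 'transformative': 200000.0}
```

The useful discipline is the inverse check: if actual spend maps to something like 98/2/0, the portfolio has drifted conservative regardless of what the strategy deck says.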

What these frameworks won't do

Three things even the best frameworks can't substitute for:

Strategic clarity about what the business is becoming. Frameworks help allocate AI investment, but the deeper question — what kind of company we're building, what advantages we're trying to create — is upstream of any framework. AI strategy without business strategy produces well-organized misallocation.

Hard data on impact. Frameworks position investments based on estimates. Real impact requires measurement, which requires implementation. Strategy with no implementation can't validate its own assumptions.

Organizational courage to kill projects. The framework can identify low-value/high-effort items that should die. The organization has to actually kill them. This is usually harder than the analysis.

How to use frameworks in an engagement

For a 6-week AI strategy engagement, a workable framework sequence:

  • Week 1-2: Capability vs Commodity Grid — rationalize the existing portfolio
  • Week 2-3: BCG AI@Scale — assess capability maturity, identify binding constraints
  • Week 3-4: Value-Effort Matrix — narrow new opportunities to a working portfolio
  • Week 5-6: 70/20/10 — allocate budget across core, adjacent, transformative

The frameworks compound. The capability grid tells you what to stop. The maturity model tells you what to fix. The value-effort matrix tells you what to start. The portfolio allocation tells you in what proportion. Together they produce a strategy with explicit positions on each major decision.

The honest takeaway

Frameworks drive decisions when they force trade-offs. The four above each force a specific trade-off — what to delete (value-effort), what's misaligned (capability-commodity), what's the binding constraint (maturity), how to allocate (70/20/10). Strategy that uses frameworks this way produces investment decisions; strategy that uses frameworks decoratively produces decks.

Pick the framework that matches the question you're trying to answer. Use it to force a specific decision. Move to the next question. Most AI strategy work in 2026 needs less framework volume and more framework discipline.

Frequently Asked Questions

Why do most AI strategy frameworks fail to drive decisions?

Because they produce coverage instead of choices. A 50-cell heat map shows where everything could go but doesn't pick. The frameworks that drive decisions force trade-offs — explicit prioritization, explicit non-investment, explicit sequencing.

Should I use a framework from McKinsey/BCG or build my own?

Use established frameworks for the structural choices (centralized vs federated, portfolio allocation). Build custom criteria for the value/effort trade-offs specific to your business. Off-the-shelf frameworks for shape; custom criteria for content.

Written by Doreid Haddad

Founder, Tech10

Doreid Haddad is the founder of Tech10. He has spent over a decade designing AI systems, marketing automation, and digital transformation strategies for global enterprise companies. His work focuses on building systems that actually work in production, not just in demos. Based in Rome.

