
Building an AI Strategy: The 6 Decisions Every Leadership Team Faces

AI Consulting · Apr 11, 2026 · 5 min read · Doreid Haddad

An AI strategy isn't a roadmap document. It's a set of decisions made explicitly so the rest of the organization can act consistently. Per Deloitte's framing, "the strongest AI strategies tend to begin without ever mentioning AI. Instead, they should begin with the organization's north star: the core business strategy." Most strategies fail not because the recommendations were wrong but because the underlying decisions were never made — they were deferred to "we'll figure it out as we go," which means each team makes them individually based on local incentives. Per MIT Sloan's executive education guidance, the discipline that turns AI from experiments into enterprise-wide impact is structured leadership decisions on strategy, governance, and operating model.

This article covers the six decisions every leadership team faces in any serious AI strategy. Each is binary or near-binary. Each has consequences that compound. And each gets made implicitly if it isn't made explicitly.

Decision 1: Centralized vs federated AI capability

Centralized. A single AI/ML team serves the whole business. Hires, tools, and governance live in one place. Best fit: companies under 5,000 employees, narrow set of AI use cases, regulated industries where governance must be tight.

Federated. AI capabilities live in business units. Each unit has its own AI talent or shared services from a central CoE. Best fit: large enterprises, diverse use cases, businesses where domain knowledge matters more than ML depth.

Hybrid (the most common production pattern). Central platform team handles infrastructure, governance, and shared models. BU teams handle domain-specific applications.

The mistake: defaulting to whatever matches your existing IT structure. The decision should be made on AI fit, not org-chart inertia.

Decision 2: Build vs buy at each layer of the stack

Most companies need to make this decision separately at each layer:

  • Foundation models: buy (Claude, GPT, Gemini APIs)
  • Application layer: mostly buy mature categories (HubSpot, Salesforce with AI), build proprietary differentiators
  • Data layer: build (your data is your moat)
  • ML platform: usually buy (managed offerings on AWS, Azure, GCP)

The decision principle: build where capability is core to competitive advantage; buy where capability is commodity. Most companies overbuild commodity layers because engineering teams want to build, and underbuild the data layer because it's unglamorous.

Decision 3: Make-then-use vs use-then-make

Make-then-use. Invest in AI capability first (hires, platform, training), then identify use cases. Common in tech-forward companies.

Use-then-make. Identify specific high-value use cases first, build the minimum capability needed to serve them, and expand from there. Common in operationally focused companies.

The 2026 pattern that ships more reliably is use-then-make. Concrete use cases drive concrete investment decisions. Capability-first investments often produce platforms looking for problems.

Decision 4: Centralized governance vs distributed accountability

Where do AI risk decisions live? Three options:

Centralized governance. A single AI ethics/risk function reviews every AI deployment. Slow but consistent. Right for regulated industries.

Distributed accountability. Each business unit owns risk for its own AI. Fast but inconsistent. Right for innovation-focused companies with low regulatory exposure.

Tiered model. High-risk AI (customer-facing, decisions affecting people) goes through central governance; low-risk AI (internal productivity) is distributed. The most common production pattern in 2026.

Decision 5: What metrics determine AI success

Three classes of metric, each with different organizational implications:

Operational metrics. Time saved, cost reduced, throughput increased. Easy to measure, easy to defend, easy to game.

Outcome metrics. Revenue impact, customer satisfaction, retention. Harder to attribute but more meaningful.

Capability metrics. Number of deployed models, eval pass rates, AI-fluent employees. Internal-facing, useful for capability building, dangerous if treated as outcomes.

The mistake: leading with capability metrics ("we deployed 12 AI systems this year") rather than outcome metrics. Most boards in 2026 see through capability claims and ask for outcomes.

Decision 6: When to scale vs when to stay focused

Most companies face a moment 12-18 months into AI work where the question is: do we scale to more use cases, or deepen on the ones that work?

Scale signals: clear pattern of success across initial use cases, capability that translates beyond the original domain, market demand for broader application.

Stay focused signals: initial use cases are working but not yet compounding, edge cases still surfacing, internal capability still ramping.

The wrong call: scaling prematurely because it feels like progress. Most AI capability gets diluted faster than it grows when teams scale to too many use cases simultaneously.

How these decisions interact

The six aren't independent. Centralized governance + federated capability creates organizational friction. Build-everything + use-then-make + capability metrics produces platforms looking for problems. The right strategy makes all six choices explicit, then checks them for internal consistency.

Most AI strategy engagements in 2026 should produce documented positions on all six. If your engagement output doesn't include explicit positions on each, you don't have a strategy — you have a roadmap of activities. The two are different.

What to do this week

If you're inside an organization without clear AI strategy:

  1. List the six decisions on a single page
  2. Walk through each in a leadership team meeting
  3. Document the explicit position for each
  4. Identify which positions are most likely to change in 12 months and why

This is a two-hour session, and the output is documented decisions. Most leadership teams discover they held implicit positions that conflicted across decisions, and the act of making them explicit forces alignment.

That's how AI strategy actually shapes investment. Not through frameworks. Through decisions.

Frequently Asked Questions

Do all six decisions need to be made upfront?

Yes — at least with explicit positions, even if those positions are time-bound (e.g., 'centralized for 18 months, then re-evaluate'). Strategy that defers these decisions ends up making them implicitly, often badly, when budget pressure forces a quick call.

Which decision is hardest?

Centralized vs federated. It cuts across hiring, budget, governance, and culture, and the right answer depends on company maturity, risk profile, and existing organizational shape. Most companies get this one wrong by defaulting to whichever pattern matches their existing IT structure rather than what fits AI.

Written by Doreid Haddad

Founder, Tech10

Doreid Haddad is the founder of Tech10. He has spent over a decade designing AI systems, marketing automation, and digital transformation strategies for global enterprise companies. His work focuses on building systems that actually work in production, not just in demos. Based in Rome.

