What Generative AI Consultants Actually Build (5 Service Categories)

Per Gartner's analysis of the generative AI consulting market, generative AI consulting services cluster into five distinct categories. The same pattern appears across industry coverage from Smartbridge, LeewayHertz, and IBM Consulting Advantage. Most engagement scopes claim to cover all five but actually emphasize two or three. Knowing which categories matter for your project, and reading proposals to see what's actually included, is half the buying skill.
This article walks through the five categories with realistic scope, pricing, and what to look for in each.
Category 1: Strategy & Opportunity Assessment
What it includes: Use case discovery, prioritization, opportunity sizing, target operating model decisions, roadmap.
Realistic scope: 2-4 weeks for focused engagement, 6-12 weeks for enterprise.
Realistic pricing: $15K-$50K for focused, $100K+ for enterprise.
What to look for: Outputs are decisions and prioritized opportunities, not 50-page strategy decks. The deliverable should fit on 10-20 pages with explicit recommendations.
Common gap: Strategy work that doesn't connect to implementation. Strong engagements explicitly include the bridge to the next phase.
Category 2: Custom Model Development
What it includes: Choosing between RAG, fine-tuning, and prompt engineering. Building the chosen approach. Eval set construction. Iteration to acceptable performance.
Realistic scope: 3-6 weeks for focused use case, 8-16 weeks for complex multi-model systems.
Realistic pricing: $25K-$80K for focused, $150K-$500K for complex.
What to look for: Eval discipline embedded in the work. Specific methodology for measuring model quality. Documented approach decisions (why RAG instead of fine-tuning, etc.).
Common gap: Engagements that build the model without the eval set. Without evaluation, the team can't tell if the model is actually working in production. Insist on eval as a deliverable.
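To make "eval as a deliverable" concrete, here is a minimal sketch of what the smallest acceptable eval harness looks like. The `model_answer` function and the example cases are hypothetical stand-ins; in a real engagement the cases come from actual usage and the pass threshold is agreed in the statement of work.

```python
def model_answer(question: str) -> str:
    # Stand-in for a call to the deployed model (RAG pipeline, fine-tune, etc.)
    return {"What is our refund window?": "30 days"}.get(question, "unknown")

# An eval set is just question/expected pairs drawn from real usage.
EVAL_SET = [
    {"question": "What is our refund window?", "expected": "30 days"},
    {"question": "Do we ship internationally?", "expected": "yes"},
]

def run_eval(cases) -> float:
    """Return the fraction of cases where the expected answer appears in the output."""
    passed = sum(
        1 for c in cases
        if c["expected"].lower() in model_answer(c["question"]).lower()
    )
    return passed / len(cases)

score = run_eval(EVAL_SET)
print(f"pass rate: {score:.0%}")  # the acceptance threshold (e.g. >= 90%) belongs in the contract
```

Even a harness this small forces the question the article raises: without it, "the model works" is an opinion rather than a measurement.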
Category 3: Integration & Deployment
What it includes: Wiring the model into your existing systems. API design. Workflow integration. Production deployment. Monitoring and observability setup.
Realistic scope: 2-4 weeks if the model is already built, 4-8 weeks for full build-and-deploy.
Realistic pricing: $20K-$60K for focused, more for complex enterprise integrations.
What to look for: Integration with named systems specifically (your CRM, your help desk, your ERP). Monitoring stack setup. Production runbook.
Common gap: Demos that work in isolation but never actually integrate with the customer's stack. The integration phase is where many AI projects die. Require named integration points in the contract.
Category 4: Data Services
What it includes: Data audit, sourcing, cleaning, structuring, labeling, embedding generation, vector database setup if needed.
Realistic scope: 2-6 weeks depending on data state. Often the long pole in an engagement.
Realistic pricing: $20K-$80K for focused work, more if labeling is intensive.
What to look for: Specific data quality findings as a deliverable. Documented data preparation process the customer's team can replicate. Clean handover of the data infrastructure.
Common gap: Engagements that minimize this category to keep the total price competitive. Data preparation typically consumes 60-70% of the effort in AI implementations, so data services are rarely over-specified. If a proposal allocates only 1-2 weeks to data, ask whether the firm has actually audited your data state or assumed it's ready.
Category 5: Governance & Compliance
What it includes: Risk assessment, regulatory compliance review (HIPAA, GDPR, EU AI Act, sector-specific rules), bias and fairness audits, audit trail setup, AI policy documentation.
Realistic scope: 2-4 weeks for standard, 6-12 weeks for heavily regulated industries.
Realistic pricing: $15K-$50K standard, $80K-$200K+ heavily regulated.
What to look for: Specific regulations named, not generic "compliance review." Documented audit trail. Clear escalation procedures for AI risk events.
Common gap: Engagements that treat governance as a 1-page bolt-on. In regulated industries, governance is the work that can block deployment if done late. Require it as a parallel workstream from week 1.
How the categories relate
The five categories don't run sequentially — they overlap. A typical 6-week engagement structure:
- Week 1: Strategy + Data Audit (Category 1 + start of Category 4)
- Weeks 2-3: Model Development + Data Preparation (Categories 2 + 4)
- Weeks 3-4: Governance review running in parallel (Category 5)
- Weeks 4-5: Integration build (Category 3)
- Week 6: Deployment + final governance sign-off
Engagements that try to handle the categories sequentially (strategy fully done before any model development) end up taking 12-16 weeks for what should fit in 6.
Reading a vendor proposal
When evaluating a generative AI consulting proposal, check what percentage of the proposed work falls into each category:
Healthy distribution for a focused implementation engagement:
- Strategy: 10-15%
- Model development: 30-40%
- Integration: 15-25%
- Data services: 20-25%
- Governance: 10-15%
Warning signs:
- Strategy >30%: probably advisory engagement disguised as implementation
- Data services under 10%: probably skipping data audit
- Governance under 5%: probably treating compliance as bolt-on
- Integration under 10%: probably demo-quality output that won't ship to production
Use this distribution as a sanity check on any proposal. Significant deviation requires explanation.
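The sanity check above is mechanical enough to script. Here is a sketch using the healthy ranges and warning thresholds from this article; the category names and the example proposal are illustrative.

```python
# (min %, max %) per category for a focused implementation engagement
HEALTHY = {
    "strategy": (10, 15),
    "model_development": (30, 40),
    "integration": (15, 25),
    "data_services": (20, 25),
    "governance": (10, 15),
}

# Hard warning floors from the article's warning-signs list
WARNING_FLOORS = {
    "data_services": 10,
    "governance": 5,
    "integration": 10,
}

def check_proposal(distribution: dict) -> list:
    """Return warnings for a proposal's % allocation across the five categories."""
    warnings = []
    if distribution.get("strategy", 0) > 30:
        warnings.append("strategy >30%: advisory engagement disguised as implementation?")
    for cat, floor in WARNING_FLOORS.items():
        if distribution.get(cat, 0) < floor:
            warnings.append(f"{cat} under {floor}%: likely under-scoped")
    for cat, (lo, hi) in HEALTHY.items():
        pct = distribution.get(cat, 0)
        if not lo <= pct <= hi:
            warnings.append(f"{cat} at {pct}% is outside the healthy {lo}-{hi}% range")
    return warnings

# Example: a strategy-heavy proposal that should raise flags
proposal = {"strategy": 40, "model_development": 30, "integration": 10,
            "data_services": 10, "governance": 10}
for w in check_proposal(proposal):
    print(w)
```

A balanced proposal (say 12/35/20/22/11) returns no warnings; the strategy-heavy example above trips several. The script is a conversation starter, not a verdict: significant deviation requires explanation, as noted above.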
Modular vs bundled
Most firms offer both modular (single-category) and bundled (multi-category) engagements:
Modular makes sense when:
- You've already done strategy and need only implementation
- You have an existing system that needs only governance review
- You need a focused data audit before broader engagement
Bundled makes sense when:
- Starting from zero on a new use case
- The categories have strong dependencies (governance affects model design, data affects strategy)
- You want a single accountable firm rather than coordinated specialists
For most mid-market work, bundled fits cleanly. For enterprise work or targeted gap-filling, modular is more efficient.
The honest takeaway
Generative AI consulting engagements are built from these five categories. Reading a proposal to see how the firm's time and budget actually distribute across them tells you what they're really selling. The honest distributions look balanced; the misleading ones over-emphasize the visible categories (strategy, model development) and under-invest in the unglamorous ones (data services, governance) that determine whether the work ships and stays shipped.
Read the proposal. Check the distribution. Push back where it's skewed. The engagement that delivers covers all five at appropriate depth, even when individual categories aren't separately priced in your contract.
Frequently Asked Questions
Which categories are most often missing from engagement scopes?
Data services and governance. Vendor proposals often emphasize strategy and model development (the visible work) and minimize data preparation and compliance (the unglamorous work). Engagements that skip data services usually fail because the data wasn't ready; engagements that skip governance usually hit compliance walls late.
Can I buy individual categories or only the full bundle?
Both. Mature firms offer modular engagement scopes — strategy-only, build-only, audit-only, governance-only. Bundled engagements are more common because the categories reinforce each other, but standalone work is appropriate when you have specific gaps to fill.
Sources
- Gartner — Generative AI Consulting and Implementation Services
- LeewayHertz — Top Generative AI Consulting Companies of 2026
- IBM — IBM Consulting Advantage
- Smartbridge — Generative AI Consulting Services: What Enterprises Need
- Harvard Business Review — AI Is Changing the Structure of Consulting Firms
- McKinsey QuantumBlack — The state of AI in 2026
- Anthropic Research — Building Effective Agents
- NIST — AI Risk Management Framework

Founder, Tech10
Doreid Haddad is the founder of Tech10. He has spent over a decade designing AI systems, marketing automation, and digital transformation strategies for global enterprise companies. His work focuses on building systems that actually work in production, not just in demos. Based in Rome.


