Before You Spend a Rupee on AI, Ask These 4 Questions
Most AI decisions in small and mid-sized businesses happen the same way. Someone reads an article, gets excited, buys a tool, and hopes for transformation. Six months later, the tool is gathering dust and the team is skeptical about the next AI pitch.
The failure isn’t technical. It’s strategic. The business never had a framework for deciding where AI belongs and where it doesn’t.
This post gives you that framework. It’s simple enough to use in your next leadership meeting.
Why Most AI Decisions Go Wrong
Business owners get pulled in two directions. Vendors tell them AI can do everything. Their teams tell them AI can’t do anything useful. Both are wrong.
The real problem is that most people evaluate AI the way they evaluate software: feature lists, demos, competitor comparisons. But AI isn’t software in the traditional sense. It doesn’t follow instructions. It learns patterns. And that distinction changes everything about where it works and where it fails.
You don’t need to understand how AI learns. You need to understand what makes a task suitable for AI in the first place. That’s where the four-dimension model comes in.
The Four Dimensions: Volume, Pattern, Judgment, Data
When evaluating any task or process for AI, score it across four dimensions. The strongest AI candidates score high on Volume, Pattern, and Data, and low on Judgment.
1. Volume
Ask: Is this task done hundreds or thousands of times a month?
High-volume, repetitive work is where AI delivers the fastest returns. Processing 200 purchase orders a day? That’s a strong candidate. Each order follows a known structure, and errors from manual handling compound fast at scale.
Negotiating a partnership deal? That happens twice a year. No matter how painful the process, AI won’t meaningfully change it. The volume isn’t there.
Rule of thumb: If your team does it daily and complains about the monotony, volume is high.
2. Pattern
Ask: Does this task follow recognizable patterns, even if those patterns are complex?
AI is exceptional at recognizing patterns that humans follow but struggle to articulate. Categorizing expenses from bank statements is pattern work — the descriptions vary, but an experienced accountant knows that “AMZN MKTP” is an office supplies purchase. AI can learn that same mapping.
Deciding whether to enter a new market? That’s not pattern work. It depends on timing, relationships, competitive dynamics, and gut instinct built over decades. No dataset captures that.
Rule of thumb: If you could teach a sharp new hire to do it by showing them 50 examples, it’s pattern work.
3. Judgment
Ask: Does this task require human relationships, ethical reasoning, or creative leaps?
This is where business owners need to think carefully. High-judgment tasks should stay human — but that doesn’t mean AI has no role.
Writing a legal brief requires judgment. But AI can draft the research section, pulling relevant precedents and summarizing case law, so the lawyer spends their time on argumentation instead of retrieval.
Client relationship management requires the human touch. But AI can analyze engagement patterns across your accounts and flag clients who are at risk of churning before your team notices.
Rule of thumb: Don’t ask whether AI can replace the judgment. Ask whether AI can handle the non-judgment parts of the task, freeing your people to exercise better judgment.
4. Data
Ask: Do you have examples of this task being done well?
This is the dimension most businesses overlook. AI learns from examples. If those examples don’t exist in a usable form, AI has nothing to work with.
If your best salesperson’s closing approach lives only in their head — in intuition, in the way they read a room — AI can’t learn it. But if their call notes, email sequences, and deal progression are logged in your CRM, that’s trainable data.
If your senior CA’s ITR review process is just “I look at it and I know,” that’s locked expertise. If they’ve been annotating returns with notes for years, that’s a dataset.
Rule of thumb: The question isn’t whether you have data. It’s whether your best practices are captured somewhere outside of people’s heads.
How to Use This in Practice
Here’s an exercise you can run in your next leadership meeting. It takes 30 minutes.
1. List your top 5 operational bottlenecks. Where does work pile up? Where do you lose time, money, or quality?
2. Score each bottleneck on the four dimensions (High / Medium / Low):

   | Bottleneck | Volume | Pattern | Judgment | Data |
   |---|---|---|---|---|
   | Invoice processing | High | High | Low | High |
   | Client advisory | Low | Low | High | Medium |
   | Inventory forecasting | High | High | Medium | High |
   | Hiring decisions | Low | Medium | High | Low |
   | Compliance checks | High | High | Medium | High |

3. Prioritize. The tasks with high Volume, high Pattern, low Judgment, and high Data are your starting points. In the table above, invoice processing and compliance checks are clear first candidates. Inventory forecasting is a strong second.
4. Be honest about data gaps. If a task scores well on three dimensions but low on Data, the first step isn’t buying AI. It’s capturing data for three to six months so AI has something to learn from.
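If your team prefers a spreadsheet-free version of this exercise, the scoring step can be sketched in a few lines of Python. The numeric mapping and the simple additive formula below are illustrative assumptions, not a prescribed methodology; the point is only that Volume, Pattern, and Data add to the case while Judgment subtracts from it.

```python
# Minimal sketch of the four-dimension scoring exercise.
# Numeric values and the additive formula are illustrative assumptions.

SCORE = {"Low": 1, "Medium": 2, "High": 3}

# Example scores taken from the table above:
# (Volume, Pattern, Judgment, Data)
bottlenecks = {
    "Invoice processing":    ("High", "High", "Low", "High"),
    "Client advisory":       ("Low", "Low", "High", "Medium"),
    "Inventory forecasting": ("High", "High", "Medium", "High"),
    "Hiring decisions":      ("Low", "Medium", "High", "Low"),
    "Compliance checks":     ("High", "High", "Medium", "High"),
}

def ai_fit(volume, pattern, judgment, data):
    """Volume, Pattern, and Data raise the score; Judgment lowers it."""
    return SCORE[volume] + SCORE[pattern] + SCORE[data] - SCORE[judgment]

# Rank bottlenecks from strongest to weakest AI candidate.
ranked = sorted(bottlenecks.items(), key=lambda kv: ai_fit(*kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {ai_fit(*scores)}")
```

Run against the example table, this ranking puts invoice processing first, with compliance checks and inventory forecasting close behind, matching the prioritization described above.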
This isn’t a theoretical exercise. This is how we evaluate every business we work with. The companies that get the most from AI are the ones that pick the right starting point, not the ones that spend the most on technology.
What to Do Next
Pin this framework to your wall. Use it the next time someone brings you an AI proposal. If they can’t explain how the use case scores on all four dimensions, the proposal isn’t ready.
And if you want to run through this exercise with someone who’s done it across dozens of businesses — manufacturing firms, CA practices, wealth managers, logistics companies — we’ll do it with you.
Book a free Basecamp session — we’ll walk through this framework with your actual operations.