Every AI initiative that arrives at executive approval has already accumulated significant organizational momentum — internal champions who believe in it, vendors who have invested in it, teams who are excited about it. By the time it reaches the budget conversation, there is pressure — often subtle, sometimes not — to approve and move forward.

The ten questions below are not designed to create obstacles. They are designed to ensure that the momentum behind an initiative is grounded in clarity. If the team proposing the initiative can answer all ten questions with confidence, the organization is ready to move. If they cannot, the pre-work is more valuable than the budget.

Mudassir Saleem Malik - AI Strategy for CXOs

The Ten Questions

1. What specific business outcome will this AI initiative improve?

Not a direction or a theme — a specific, measurable outcome. Cost per application processed. Customer satisfaction score. Time to resolution. Fraud detection rate. If the outcome cannot be named specifically, the initiative is not ready.

2. What is the current baseline for that metric?

You cannot demonstrate improvement without knowing where you started. If the baseline has not been measured, the first investment should be measuring it — not building an AI system.

3. What is the target, and why is it achievable?

The target should be grounded in evidence: comparable implementations at similar organizations, pilot data, or a principled analysis of the addressable improvement. A target without evidence is an aspiration, not a commitment.

4. Does the necessary data exist, and is it ready?

What data does the AI system need to function? Is that data available, in the right format, at the right quality, at sufficient volume? Has a data audit been done? If the answer is "we think so," that is not good enough before committing budget.

5. Who is accountable for the outcome?

Not the vendor. Not the technology team. An internal business leader who will be evaluated on whether the initiative delivers the promised outcome. If accountability is diffuse, the outcome will be too.

6. What governance structure covers AI decisions?

Who is accountable when the AI system makes a wrong decision? What is the audit trail? How is performance monitored over time? What triggers a review or intervention? In regulated industries, these questions have regulatory answers. In all industries, they have risk management answers.

7. What does the full cost of ownership include?

Not just the licensing or build cost. Integration engineering, data preparation, change management, training, ongoing maintenance, model monitoring, and eventual model retraining. The total cost is typically 2–3x the technology cost alone.

8. What is the plan if the first implementation does not achieve the target?

A contingency plan is not pessimism. It is evidence that the organization has thought honestly about risk. An initiative with no contingency plan is an initiative whose advocates have not seriously considered the possibility of failure.

"The quality of the questions asked before an AI initiative begins determines more of its outcome than the quality of the technology chosen to execute it."

9. How does this initiative connect to the overall AI strategy?

Is this a standalone investment or part of a coherent AI strategy? Does it build capabilities that future initiatives will leverage? Does it create technical debt that will constrain future options? AI investments isolated from a strategy are harder to compound over time.

10. What does success look like in 90 days?

Not the full ROI case — the 90-day checkpoint. What will be true in 90 days if the implementation is on track? If the team cannot answer this question, the initiative does not have the operational specificity required for execution.

Using This Framework

These questions are most useful as a pre-approval conversation rather than a gate. The goal is not to reject initiatives that cannot immediately answer all ten. It is to identify which questions require more work before the budget is committed — and to ensure that the organization's AI investment is grounded in clarity rather than momentum.


Mudassir Saleem Malik uses this framework with executive teams before AI budget decisions are made. He is CEO of AppsGenii Technologies, based in Richardson, Texas.