Revenue forecasting has a trust problem. Leadership wants a number. Sales gives a number. Finance adjusts the number. By the time it reaches the board, nobody remembers where the original number came from.
Why gut-feel forecasting fails
The traditional approach asks reps to self-report deal confidence. "Are you going to close this?" is a terrible forecasting question. Reps are optimistic by nature — that's what makes them good at sales. But optimism is the enemy of accurate forecasting.
The alternative — having managers haircut rep forecasts — just adds another layer of subjective adjustment. You end up with a number that's been filtered through two sets of biases, neither of which is grounded in data.
The AI + human framework
Better forecasting combines three signal types:
Historical patterns. What does your actual conversion data say? If deals in stage 3 close at 40%, that's a fact — not an opinion. Start there.
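As a sketch of the "start with the data" idea, an expected-value forecast multiplies each deal's amount by the historical close rate for its stage. The stage names, rates, and deal amounts below are hypothetical examples:

```python
# Expected-value forecast from historical stage conversion rates.
# Stage names, rates, and pipeline amounts are hypothetical examples.
STAGE_CLOSE_RATES = {
    "stage_1": 0.10,
    "stage_2": 0.25,
    "stage_3": 0.40,  # "deals in stage 3 close at 40%"
    "stage_4": 0.70,
}

def expected_revenue(pipeline):
    """Sum of deal amount x historical close rate for the deal's stage."""
    return sum(amount * STAGE_CLOSE_RATES[stage] for stage, amount in pipeline)

pipeline = [("stage_3", 50_000), ("stage_3", 20_000), ("stage_4", 100_000)]
print(expected_revenue(pipeline))  # 0.4*50k + 0.4*20k + 0.7*100k = 98000.0
```

The point is that this baseline is mechanical: no one's confidence enters into it.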
Live pipeline signals. Deal velocity, engagement recency, stakeholder involvement, competitor mentions, and next-step quality. These are leading indicators that AI can score automatically.
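One minimal way to picture signal scoring: adjust the stage's historical close rate up or down based on live-signal evidence. The signal names and weights here are invented for illustration; a real model would learn them from closed-won data rather than hard-coding them:

```python
# Hypothetical signal weights; a production model would learn these
# from historical closed-won/closed-lost data, not hard-code them.
SIGNAL_WEIGHTS = {
    "days_since_last_engagement": -0.02,  # staleness hurts the deal
    "stakeholders_engaged": 0.10,         # multi-threading helps
    "competitor_mentioned": -0.15,
    "has_concrete_next_step": 0.20,
}

def score_deal(signals, base_rate):
    """Nudge a stage's historical close rate with live-signal evidence,
    clamped to [0, 1]. A logistic model is the more principled choice."""
    adjustment = sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())
    return max(0.0, min(1.0, base_rate + adjustment))

deal = {"days_since_last_engagement": 10, "stakeholders_engaged": 3,
        "competitor_mentioned": 1, "has_concrete_next_step": 1}
print(round(score_deal(deal, base_rate=0.40), 2))  # 0.55
```

A linear nudge is crude, but it makes the mechanism legible: every adjustment traces back to a named signal with a named weight.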
Rep context. The rep knows things the data doesn't: the champion just got promoted, the budget cycle shifted, or the competitor just had a security breach. This context matters — but it should supplement data, not replace it.
The weekly commit review
A good commit review takes 30 minutes, not 2 hours. The format: AI generates a draft forecast based on historical conversion and live signals. Reps review and adjust with context. Managers approve or challenge specific adjustments — not the whole number.
This flips the meeting from "tell me your number" to "here's what the data says — do you agree?" Much more productive. Much less political.
Measuring forecast accuracy
Track your forecast accuracy over time: not just "did we hit the number?" but at each stage. Was our stage-3-to-close conversion rate what we predicted? Were our velocity assumptions right? This is how you calibrate the model and build trust in the process.
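Stage-level calibration is a simple comparison of predicted conversion against realized outcomes. The quarter's numbers below are hypothetical:

```python
def conversion_calibration(predicted_rate, outcomes):
    """Compare a predicted stage conversion rate with realized outcomes.

    outcomes: list of booleans, True = deal closed-won from that stage.
    """
    actual = sum(outcomes) / len(outcomes)
    return {"predicted": predicted_rate,
            "actual": actual,
            "error": actual - predicted_rate}

# Hypothetical quarter: we predicted stage 3 closes at 40%; 9 of 20 did.
result = conversion_calibration(0.40, [True] * 9 + [False] * 11)
print(result)  # actual 0.45, error of roughly +0.05
```

A persistent error in one direction is the signal to re-fit the stage rates; an error that bounces around zero means the model is calibrated and the remaining variance is noise.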