The SaaS metrics canon is well established. Most finance teams know the names: ARR, NRR, CAC, LTV, Rule of 40. Fewer get the definitions exactly right. Fewer still understand how they connect, which ones to trust, and which ones can be gamed into telling a story the underlying business doesn't support.
This is a guide to the metrics that matter, how to calculate them, and what to watch for when the numbers stop being honest.
ARR and the gap between ARR and revenue
Annual Recurring Revenue is the foundation metric for any subscription business. It normalises all active subscriptions to an annual value: convert each active subscription to its annual equivalent and sum the results. The formula is ARR = MRR × 12 for businesses tracking monthly, or a direct sum of annualised contract values for annual contracts.
The distinction that trips up most early-stage finance functions is the relationship between ARR and GAAP revenue. ARR is a non-GAAP metric. It is not recognised under ASC 606. A company can sign a $120,000 annual contract on 1 January and book $120,000 in ARR immediately. GAAP revenue recognises $10,000 per month over the contract period. In a high-growth business where contracts are landing throughout the year, the two numbers diverge materially. ARR is a leading indicator; GAAP revenue lags by the recognition schedule. Finance teams must reconcile both and explain the gap to boards and investors. Aggressive ARR definitions — including professional services revenue, defining renewals loosely, or counting pipeline as committed — inflate the headline and obscure the real growth rate.
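The divergence can be sketched in a few lines of Python. This is an illustration, not a revenue engine: contract values and start dates are hypothetical, and recognition is simplified to straight-line monthly accrual within one calendar year.

```python
# Sketch only: illustrative ARR vs straight-line GAAP revenue for annual
# contracts signed mid-year. Contract values and start dates are hypothetical.
def arr_at_year_end(contracts):
    """ARR counts the full annualised value of every active contract."""
    return sum(c["annual_value"] for c in contracts)

def gaap_revenue_for_year(contracts):
    """Straight-line recognition: revenue accrues monthly from the start month."""
    total = 0.0
    for c in contracts:
        months_recognised = 12 - c["start_month"] + 1  # start_month runs 1-12
        total += c["annual_value"] / 12 * months_recognised
    return total

contracts = [
    {"annual_value": 120_000, "start_month": 1},  # signed 1 January
    {"annual_value": 120_000, "start_month": 7},  # signed 1 July
]
print(arr_at_year_end(contracts))        # 240000 in ARR at year end
print(gaap_revenue_for_year(contracts))  # 180000.0 recognised this year
```

The July contract carries its full $120,000 in ARR immediately but contributes only $60,000 to the year's GAAP revenue, which is exactly the gap the reconciliation has to explain.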
Monthly Recurring Revenue is the more useful unit at early stage, where contract lengths are short and the mix changes quickly. ARR becomes the primary lens once the business moves upmarket into annual contracts. Both track the same underlying health: the rate at which customers are committing to pay.
A related variant boards sometimes request is Committed ARR (CARR): ARR plus signed-but-not-yet-live contracts, net of forecasted churn. CARR is a legitimate metric for businesses with long implementation timelines — it captures topline momentum earlier than ARR does. The distinction matters: transparent CARR reporting is sound practice; including uncommitted pipeline in ARR without disclosure is manipulation.
Bookings and ARR are not interchangeable — but they are sometimes treated that way. A company signing a three-year, $360,000 subscription contract books $360,000 in total contract value, $120,000 in annual contract value, and $120,000 in ARR. The numbers diverge further when professional services, one-time fees, or usage commitments are included in the signed contract. Boards asking about bookings momentum and finance teams reporting ARR are often talking past each other. Define what is in each bucket before the conversation starts — and document it in the company's metrics glossary so the definition is stable across reporting periods.
One structural caveat: the formulas in this guide assume annual subscription contracts. Usage-based or consumption-based businesses — Snowflake, Twilio, Datadog — require different approaches. In those models, trailing twelve-month spend replaces MRR as the base unit, and ARR-style metrics are calculated from annualised consumption rather than contracted value. Applying subscription metrics to a consumption model produces misleading signals, particularly around expansion and churn.
Net New ARR: the waterfall, not the number
ARR tells you where you are. Net New ARR tells you how you got there. It breaks into five components: gross new ARR from customers who were not customers before, expansion ARR from existing customers upgrading or growing usage, restart ARR from cancelled accounts reactivating, contraction ARR lost to downgrades, and churned ARR lost to full cancellations. The formula is: Net New ARR = Gross New + Expansion + Restart − Contraction − Churn.
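The arithmetic is trivial; the value is in reporting the components, not just the total. A minimal sketch with hypothetical component values:

```python
# Net New ARR waterfall; all component values are hypothetical.
def net_new_arr(gross_new, expansion, restart, contraction, churn):
    return gross_new + expansion + restart - contraction - churn

components = dict(gross_new=500_000, expansion=300_000, restart=20_000,
                  contraction=60_000, churn=110_000)
print(net_new_arr(**components))  # 650000
```

Presenting the five inputs alongside the $650,000 output is what turns the single number into the waterfall a board can actually read.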
The composition matters as much as the total. A business where Net New ARR is dominated by gross new logo acquisition is dependent on sales capacity. A business where expansion drives 60% of the gain has a compounding base: the existing customer portfolio generates growth without requiring sales to close anything new. Early-stage companies are almost always gross-new dominant, because the customer base is too small to generate meaningful expansion. Mature businesses shift progressively toward expansion. That transition is one of the clearest indicators of business model maturity. A business still showing 90% gross-new composition at Series C has a retention or expansion problem it is papering over with sales motion.
Boards should see Net New ARR as a waterfall, not a single number. Each component tells a different story and belongs to a different team. Churn belongs to customer success. Contraction is usually a signal of either competitive pricing pressure or weak product fit. Gross new is a sales and marketing output. Expansion indicates that the product is delivering enough value for customers to buy more. Presenting a blended number hides all of that.
Gross Revenue Retention and Net Revenue Retention
GRR: Share of prior-period revenue retained, excluding expansion. Capped at 100%.
GRR = (Opening ARR − Churn − Contraction) / Opening ARR
NRR: Share of prior-period revenue retained, including expansion. Can exceed 100%.
NRR = (Opening ARR − Churn − Contraction + Expansion) / Opening ARR
Gross Revenue Retention and Net Revenue Retention are related but tell entirely different stories. Understanding both, and understanding which one is being manipulated in any given investor deck, is a core CFO skill.
GRR measures how much of last period's revenue survives without any expansion. It can only reach 100% — you cannot retain more than 100% of revenue if you exclude upsells. GRR is the floor. It reveals the combined impact of full cancellations and downgrades — both are subtracted in the formula. A business with low logo churn but heavy contraction will show a poor GRR, and the diagnosis is different: that is a pricing or packaging problem, not a retention problem. A business with 80% GRR is losing 20% of its revenue base every year before it sells a single additional dollar to anyone.
NRR includes expansion, which means it can exceed 100%. NRR above 120% means existing customers are growing spend fast enough that the business generates positive Net New ARR from the existing base alone, without acquiring a single new logo. That is the definition of a durable business model. NRR below 100% means the customer base is shrinking in revenue terms even before accounting for what sales brings in.
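Both ratios fall out of the same inputs. A minimal sketch with hypothetical figures, showing how a healthy-looking 115% NRR can sit on top of an 80% GRR:

```python
# GRR and NRR computed from the same hypothetical inputs.
def grr(opening_arr, churn, contraction):
    return (opening_arr - churn - contraction) / opening_arr

def nrr(opening_arr, churn, contraction, expansion):
    return (opening_arr - churn - contraction + expansion) / opening_arr

opening, churned, contracted, expanded = 1_000_000, 150_000, 50_000, 350_000
print(f"GRR: {grr(opening, churned, contracted):.0%}")            # GRR: 80%
print(f"NRR: {nrr(opening, churned, contracted, expanded):.0%}")  # NRR: 115%
```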
Logo churn and revenue churn are different measurements of the same event. Logo churn counts the number of customers who cancelled, expressed as a percentage of the total customer count. Revenue churn (the quantity that feeds GRR) measures the ARR lost from those cancellations as a percentage of starting ARR. A business with low logo churn can still show high revenue churn if the customers leaving are disproportionately large. Conversely, high logo churn can coexist with low revenue churn when the departing customers are disproportionately small — SMB tail churn in an otherwise enterprise-growing business is the most common version. Reading GRR alongside logo churn rate reveals which segment is actually driving attrition. A business burning through small customers while retaining large ones has a different set of problems than one that cannot hold enterprise at all.
The critical trap is that high NRR can mask a serious GRR problem. A business with 15% annual logo churn but 30% expansion from surviving accounts might show 115% NRR. The headline looks healthy. The underlying churn problem is severe and will compound. When GRR and NRR diverge significantly, the business is dependent on expansion to stay afloat, which is only sustainable as long as expansion rates hold.
The methodology used to calculate NRR matters as much as the number itself. Cohort-based NRR — identifying a year-ago cohort of customers, measuring their ARR then and now — is the only valid approach. The alternative, sometimes called lazy NRR, calculates starting ARR plus net expansion divided by starting ARR using a current-period snapshot. This includes new customers in the numerator even though they were not in the year-ago base, inflating the metric. Figma, Snowflake, and Confluent have each defined NRR slightly differently in their public filings. A company reporting 110% NRR under a cohort method might report 125% under a lazy approach. Benchmark comparisons only hold when the methodology is consistent.
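The gap between the two methodologies is easy to demonstrate. The figures below are hypothetical, chosen to mirror the 110% versus 125% illustration:

```python
# Why "lazy" NRR overstates the cohort number. All figures are hypothetical.
cohort_arr_year_ago = 1_000_000  # ARR of the customers who existed a year ago
cohort_arr_now = 1_100_000       # those same customers' ARR today
new_logo_arr = 150_000           # ARR from customers acquired during the year

cohort_nrr = cohort_arr_now / cohort_arr_year_ago                 # 1.10
lazy_nrr = (cohort_arr_now + new_logo_arr) / cohort_arr_year_ago  # 1.25
print(f"cohort NRR {cohort_nrr:.0%}, lazy NRR {lazy_nrr:.0%}")
```

Same business, same period: the lazy snapshot adds new-logo ARR to a numerator whose denominator never contained those customers.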
There is a related error in how retention is sometimes measured directionally: running the analysis backward from today's customer base overstates retention because the worst churners are already gone. The surviving cohort looks healthier than the original cohort actually was. Always measure retention forward from a historical starting point, never backward from the current book.
NRR is also one of the single strongest predictors of software revenue multiples — arguably more powerful than Rule of 40 on a standalone basis. A business with 120% NRR commands a fundamentally different valuation than one at 100% NRR, even at identical growth rates, because the compounding base changes the long-run revenue trajectory.
Gross margin: what belongs in COGS
Gross margin defines how much revenue remains after direct delivery costs. For SaaS businesses this is typically 70–85%. The number looks clean until you have to decide what belongs in cost of goods sold.
The correct test is: does this cost directly enable delivery of the product to the customer? Cloud infrastructure costs that scale with usage — AWS, GCP, Azure compute, storage — belong in COGS. Customer support that is directly tied to resolving product issues belongs in COGS. Payment processing fees for fintech-embedded products belong in COGS. The practical shorthand: if a customer churned tomorrow, would this cost disappear? If yes, it is COGS. If no, it is operating expense.
What does not belong: R&D is building the product, not delivering it. Sales and marketing are selling the product, not serving it. Customer success managers with renewal or expansion quotas are partially a sales function and their costs should reflect that. G&A — finance, legal, HR — is overhead regardless of customer volume. The boundaries matter because misclassification flows directly into gross margin. Companies that shift R&D or G&A costs into COGS report artificially lower gross margins. Companies that shift support costs into G&A inflate gross margins. Both distort comparisons and mislead valuation models.
Gross margin benchmarks vary by business model. Pure software with no services typically runs 75–85%. SaaS with meaningful implementation or professional services typically runs 65–75%. Vertical SaaS businesses with embedded payments or transaction revenue — Toast, Mindbody, ServiceTitan — often run 40–60%, because payment processing COGS are real and high. Lower gross margins are not automatically a problem. They are a structural characteristic of the business model. What matters is whether the gross margin is sufficient to fund the operating expense structure at scale.
Contribution margin: the unit economics lens
Contribution Margin = Revenue − Variable Costs
Contribution Margin % = Contribution Margin / Revenue
Gross margin and contribution margin are not the same metric. Gross margin is an accounting output — it reflects which costs the income statement classifies as cost of goods sold. Contribution margin is an economics tool — it reflects which costs actually vary when you acquire one more customer or deliver one more unit. A SaaS business paying commissions on each sale, processing payments per transaction, and provisioning cloud resources per customer has variable costs beyond what accounting typically puts in COGS. Contribution margin captures them. Gross margin does not.
This distinction matters most in three places. First, CAC payback: the formula uses gross margin as a proxy for monthly cash contribution per customer, but contribution margin is the more accurate denominator. If meaningful variable costs are sitting in operating expenses rather than COGS, the payback calculation understates the time to break even. Second, LTV: lifetime value must be calculated as contribution margin times average customer lifespan, not revenue or gross profit. Using revenue inflates LTV; using gross margin may still miss variable costs classified as operating expense. Third, pricing decisions: whether a contract is profitable at a given discount depends on whether the contribution margin covers its variable cost load, not whether it produces positive gross margin.
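A worked sketch of the gap, using one hypothetical customer paying $1,000 a month. The cost lines and their COGS-versus-OPEX classification are assumptions for illustration:

```python
# Gross vs contribution margin for one hypothetical customer at $1,000/month.
# The cost figures and their accounting classification are assumptions.
revenue = 1_000
cogs = 200           # hosting and product support booked in COGS
variable_opex = 150  # sales commissions and payment fees booked in OPEX

gross_margin = (revenue - cogs) / revenue                         # 0.80
contribution_margin = (revenue - cogs - variable_opex) / revenue  # 0.65

cac = 6_000
payback_on_gross = cac / (revenue * gross_margin)                # 7.5 months
payback_on_contribution = cac / (revenue * contribution_margin)  # ~9.2 months
print(payback_on_gross, round(payback_on_contribution, 1))
```

The same CAC looks nearly two months cheaper to recover when variable costs hiding in operating expenses are ignored.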
The marginality trap is the main risk. Marginal contribution logic — the idea that incremental revenue at a lower margin still "contributes" to fixed cost coverage — is correct for a single deal but dangerous at scale. Each low-margin deal is defensible in isolation. Enough of them and the aggregate margin profile deteriorates to the point where fixed costs are no longer covered. Contribution margin analysis must be applied at the portfolio level, not just to individual deals. Tracking it by customer segment, acquisition channel, and product line reveals which parts of the business are genuinely accretive and which are subsidised by the rest.
CAC payback period: a risk metric, not a return metric
The customer acquisition cost payback period is one of the most misunderstood metrics in SaaS. Most people treat it as a return metric. It is not. It is a risk metric. It measures how long capital remains exposed before it is recovered.
The formula: CAC Payback Period = Total S&M Spend / (New MRR × Gross Margin %). At the individual customer level: CAC ÷ Monthly Gross Profit per Customer. The output is months. Under 12 months is excellent — the business recovers acquisition investment within a year and can redeploy that capital. 12–18 months is healthy for most SaaS businesses. 18–24 months is borderline. Over 24 months is a cash flow problem, particularly for businesses with meaningful churn, because there is a real chance the customer churns before the investment is recovered.
That last point is the one most payback analyses ignore. The formula does not include churn. A business with a 20-month CAC payback period and 5% monthly logo churn has a significant probability of customers churning before month 20. A driver-based model that simulates customer survival rates against payback schedules reveals the true exposure. The back-of-envelope formula does not.
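One way to build that survival adjustment, as a sketch rather than a full driver-based model, assuming constant monthly churn and constant gross profit per customer (both hypothetical figures):

```python
# Survival-adjusted CAC payback: cumulative expected gross profit per customer,
# weighted by the probability the customer is still alive each month.
# CAC, gross profit, and churn figures are hypothetical.
def survival_adjusted_payback(cac, monthly_gross_profit, monthly_churn, horizon=60):
    cumulative = 0.0
    for month in range(1, horizon + 1):
        survival = (1 - monthly_churn) ** month
        cumulative += monthly_gross_profit * survival
        if cumulative >= cac:
            return month
    return None  # does not pay back within the horizon

naive_payback = 6_000 / 300  # 20 months if churn is ignored
print(naive_payback, survival_adjusted_payback(6_000, 300, 0.03))  # 20.0 32
```

The same inputs that produce a 20-month back-of-envelope payback produce a 32-month expected payback once survival is priced in.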
There is also a prepaid contract distortion. Enterprise SaaS businesses with annual or multi-year contracts prepaid upfront receive cash at signing. If the standard formula yields a result less than 12 months on an annual prepaid deal, payback happens effectively at invoice. The formula assumes monthly cash collection, which misrepresents the economics of prepaid businesses.
The Magic Number and sales efficiency
The SaaS Magic Number measures how efficiently the go-to-market engine converts spend into recurring revenue. Formula: Net New ARR / Prior Period S&M Spend. A result above 1.0 means every dollar spent on sales and marketing generated more than a dollar of ARR. Above 1.5 is a signal to invest more aggressively. Between 0.75 and 1.5 is healthy. Between 0.5 and 0.75 is a prompt to improve efficiency before scaling. Below 0.5 means the engine is broken and adding more budget will compound the problem.
The number has three significant limitations. First, gross margin context is essential. A magic number of 0.8 at 80% gross margin is fundamentally different from 0.8 at 50% gross margin — the first business generates much more contribution from each acquired dollar of ARR. Second, timing distorts the metric: S&M spend in one quarter generates revenue across multiple future quarters due to sales cycles and ramp time. The number is noisy in any single period. Third, the metric uses Net New ARR in the numerator, which includes the effect of churn. Sales teams do not own churn. Using Net New ARR penalises the GTM motion for churn that customer success is responsible for. Some operators prefer using Gross New ARR or New ARR (new plus expansion) as the numerator to isolate sales efficiency.
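The numerator choice is easy to illustrate with hypothetical quarterly figures:

```python
# Magic Number with alternative numerators; quarterly figures are hypothetical.
prior_quarter_sm_spend = 1_000_000
gross_new, expansion, churned, contraction = 700_000, 300_000, 150_000, 50_000

net_new = gross_new + expansion - churned - contraction       # 800000
magic_net = net_new / prior_quarter_sm_spend                  # 0.8, penalised by churn
magic_new = (gross_new + expansion) / prior_quarter_sm_spend  # 1.0, GTM motion only
print(magic_net, magic_new)
```

The same quarter scores 0.8 or 1.0 depending on whether churn the sales team does not own is charged against it.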
SaaS Quick Ratio
The Quick Ratio, defined as (New MRR + Expansion MRR) / (Churned MRR + Contraction MRR), surfaces something Net New ARR and NRR can obscure: what is happening to revenue momentum right now, in this period. NRR measures whether a historical cohort has grown; the Quick Ratio measures whether the current period's adds outpace its losses. A business can show healthy trailing NRR while its Quick Ratio is deteriorating — an early warning that churn is accelerating before it shows up in cohort data.
The ratio is most useful as a trend line, not a point-in-time number. A Quick Ratio declining from 3.5 to 2.0 over four quarters tells a different story than a stable 2.5. It belongs in the monthly operating review alongside Net New ARR.
One limitation worth noting: a very small denominator — when churn and contraction are near zero — can produce a ratio that looks spectacular but carries no information. This typically occurs in the very early stages when the customer base is too small to have generated meaningful churn yet. In that situation, track the absolute MRR flows rather than relying on the ratio itself.
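A minimal implementation with the small-denominator guard, using hypothetical MRR flows:

```python
# SaaS Quick Ratio: MRR gained over MRR lost in the same period.
# MRR flows are hypothetical; the guard handles the near-zero-loss case.
def quick_ratio(new_mrr, expansion_mrr, churned_mrr, contraction_mrr):
    lost = churned_mrr + contraction_mrr
    if lost == 0:
        return float("inf")  # no losses yet: the ratio carries no information
    return (new_mrr + expansion_mrr) / lost

print(quick_ratio(80_000, 20_000, 30_000, 10_000))  # 2.5
```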
LTV and LTV:CAC
LTV = Annual Gross Profit per Customer / Annual Churn Rate
LTV:CAC = LTV / CAC
LTV is the total gross profit a customer is expected to generate over their lifetime. The formula treats the customer relationship as a perpetuity: annual gross profit divided by annual churn rate. At 10% annual churn, the average customer lifespan is 10 years. At 25% churn, it collapses to 4 years. The formula is sensitive to churn in a way most practitioners do not fully appreciate — moving annual churn from 8% to 12% reduces implied average lifespan from 12.5 years to 8.3 years, a one-third reduction in LTV without changing a single commercial outcome.
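The sensitivity is easy to tabulate under the perpetuity assumption, here with a hypothetical $10,000 of annual gross profit per customer, no discounting, and no expansion:

```python
# Churn sensitivity of perpetuity-style LTV. Assumes a hypothetical $10,000
# of annual gross profit per customer; no discounting, no expansion.
annual_gross_profit = 10_000
ltv_by_churn = {}
for churn in (0.08, 0.10, 0.12, 0.25):
    ltv_by_churn[churn] = annual_gross_profit / churn
    print(f"churn {churn:.0%}: lifespan {1 / churn:.1f} years, "
          f"LTV ${ltv_by_churn[churn]:,.0f}")
```

Four percentage points of annual churn separate a $125,000 customer from an $83,000 one.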
LTV:CAC ratios above 3:1 are the conventional benchmark, borrowed from the SaaS growth playbook. The problem is that a 5:1 ratio with a 36-month payback period is a structurally different risk profile from a 3:1 ratio with a 10-month payback. The LTV:CAC ratio does not capture time. A customer generating $60,000 in lifetime gross profit over 12 years is less valuable than one generating the same amount in 3 years, but they look identical in LTV:CAC. This is why LTV:CAC should always be read alongside CAC payback, not as a standalone signal.
LTV also does not capture the time value of money, and the perpetuity formula ignores expansion. For businesses with meaningful NRR above 100%, the formula understates LTV because it does not include the contribution from customer growth. For early-stage businesses where churn rate estimates are unreliable, LTV:CAC is best treated as a directional signal and a segment comparison tool — useful for showing that enterprise customers are worth 4x what SMB customers cost to acquire, less useful as a precise unit economics valuation.
Rule of 40
The Rule of 40 combines revenue growth rate and profit margin into a single health score. The sum of the two should equal or exceed 40. A business growing at 30% with 10% EBITDA margin scores 40. One growing at 50% with a 15% operating loss scores 35.
The metric resolves the growth-profitability tension that defines venture-backed SaaS. Early-stage businesses sacrifice margin to acquire customers and expand into the market. Late-stage businesses trade off growth for margin. Rule of 40 provides a common language across stages: as growth slows, margin must compensate.
The profit margin component is typically EBITDA margin or free cash flow margin. The choice matters for benchmarking — EBITDA will always be higher than FCF due to capex, so comparing Rule of 40 scores across companies requires consistent methodology. A further trap specific to public company comparisons: most public SaaS companies report Rule of 40 using adjusted EBITDA, which excludes stock-based compensation. SBC at high-growth software companies typically runs 15–30% of revenue. A private company calculating Rule of 40 on a GAAP basis — which includes SBC as an expense — will show a materially lower score than an equivalent public company on an adjusted basis. When benchmarking against public comps, establish whether the comparison uses GAAP or adjusted EBITDA before drawing conclusions. Among public SaaS companies, the median Rule of 40 score sits around 42. Approximately 77% of public SaaS market cap is concentrated in businesses that meet the threshold. The correlation between Rule of 40 score and revenue multiple is one of the strongest empirical relationships in software valuation.
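The GAAP-versus-adjusted gap is mechanical. The company below is hypothetical, with SBC at 20% of revenue:

```python
# Rule of 40 on a GAAP vs adjusted basis. Hypothetical company: 30% growth,
# 5% GAAP EBITDA margin, stock-based compensation at 20% of revenue.
growth = 0.30
gaap_ebitda_margin = 0.05
sbc_pct_of_revenue = 0.20

rule40_gaap = (growth + gaap_ebitda_margin) * 100                           # 35
rule40_adjusted = (growth + gaap_ebitda_margin + sbc_pct_of_revenue) * 100  # 55
print(round(rule40_gaap), round(rule40_adjusted))
```

The same company misses the threshold on a GAAP basis and clears it comfortably on an adjusted basis, which is why the methodology question has to precede the benchmark.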
The metric is a diagnostic tool, not an absolute requirement. A business deliberately growing at 80% and losing 30% is making a rational bet that the market opportunity justifies burning cash now. Rule of 40 framing does not invalidate that. It does prompt the question: at what growth rate will the margin catch up, and does the business have the capital runway to get there?
In PE-backed SaaS, the effective bar has shifted. As EBITDA multiples compressed from ~30x to ~15x post-ZIRP, the deal economics that made Rule of 40 sufficient no longer hold. PE investors increasingly treat Rule of 60 — typically 20% growth plus 40% EBITDA margin — as the structural minimum for leverage to work. This is not a preference; it is a constraint imposed by debt servicing costs at current rates. Finance leaders in PE-backed businesses should understand which standard their investors are applying before presenting a Rule of 40 score as evidence of health.
Burn Multiple
The Burn Multiple asks how much cash a business consumes for every dollar of net new ARR it generates: Burn Multiple = Net Burn / Net New ARR. A Burn Multiple of 2.0 means $2 of cash burned per $1 of ARR added. A Burn Multiple of 0.8 means the business adds a dollar of ARR for only 80 cents of burn — exceptional capital efficiency at any stage.
The metric gained traction post-2022 as a corrective to the growth-at-all-costs era, when many businesses ran implicit Burn Multiples above 5 without scrutinising what growth was actually costing. When multiples compressed and capital became scarce, the economics of buying revenue at any price inverted sharply.
The Burn Multiple is useful precisely because it forces a comparison between the income statement and the cash flow statement. Rule of 40 can be flattered by revenue recognition timing, non-cash items, and EBITDA adjustments. The Burn Multiple uses actual cash consumed — which includes working capital movements and items the income statement obscures. Read alongside CAC payback, the two metrics tell a coherent story about whether the growth being purchased is worth the price being paid for it.
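A sketch with hypothetical annual figures:

```python
# Burn Multiple: net cash burned per dollar of net new ARR added.
# Annual figures are hypothetical.
def burn_multiple(net_burn, net_new_arr):
    return net_burn / net_new_arr

print(burn_multiple(4_000_000, 2_000_000))  # 2.0: $2 burned per $1 of ARR
print(burn_multiple(1_600_000, 2_000_000))  # 0.8: capital-efficient growth
```

Net burn here is actual cash consumed from the cash flow statement, not an EBITDA proxy, which is the whole point of the metric.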
Which metrics to track at each stage
The right metrics are not the same at every stage. Tracking 15 metrics at seed burns bandwidth that does not exist. Tracking three metrics at Series C leaves blind spots the board will find before you do.
At early stage — pre-Series A — the metrics that matter are conversion rates, early customer churn signals, and unit economics in the simplest possible form. Is the product sticky enough that customers are not immediately leaving? What does it cost to acquire a customer, and does the margin per customer make the economics viable at scale? These questions are qualitative as much as quantitative. The data sets are too small for statistical significance.
At Series A and B, the focus shifts to NRR and CAC payback. The customer base is now large enough to measure cohort retention meaningfully. The go-to-market motion is established enough to evaluate efficiency. NRR reveals whether the product delivers sufficient value for customers to stay and grow. CAC payback reveals whether the sales and marketing investment has reasonable return mechanics. Both of these metrics need to be trending in the right direction before scaling S&M aggressively.
At Series C and beyond, gross margin structure, OPEX benchmarks, and Rule of 40 trajectory matter. The business is now large enough that structural inefficiencies — high COGS, bloated G&A, R&D that is heavy on maintenance rather than growth — will cap profitability at scale. Investors are beginning to look at the path to public markets, where Rule of 40 and free cash flow margin drive multiples. Concrete opex targets at this stage: S&M at 30–50% of revenue (down from 50–80% at early stage), R&D at 20–35% (down from 30–50%), G&A at 8–15% (down from 15–25%). These are not absolute targets but directional anchors. A business at Series C with G&A still at 20% has overhead it cannot take public.
The traps
One thing the standard metric suite does not make visceral: the compounding effect of churn. A 1% monthly churn rate becomes 11.4% annually. 2% monthly becomes 21.5%. 3% becomes 30.6%. 5% becomes 46%. Monthly churn rates that look modest compound into annual losses that hollow out the business. This is why CAC payback at 20 months with 3% monthly logo churn is a serious problem — not a borderline one. The probability of that customer surviving to month 20 is roughly a coin flip.
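The arithmetic behind those figures, including the survival probability at a 20-month payback:

```python
# Monthly-to-annual churn compounding, plus the survival probability behind
# the 20-month payback example.
def annualised_churn(monthly_churn):
    return 1 - (1 - monthly_churn) ** 12

for m in (0.01, 0.02, 0.03, 0.05):
    print(f"{m:.0%} monthly -> {annualised_churn(m):.1%} annually")

survival_at_payback = (1 - 0.03) ** 20  # P(customer still active at month 20)
print(f"{survival_at_payback:.0%}")     # 54%
```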
The most common mistake is using metrics in isolation. CAC payback without considering churn probability is incomplete. NRR without examining the GRR-to-NRR gap misses fragility. Magic Number without gross margin context misleads. LTV:CAC at 3:1 can still represent a bad business if the payback period is 36 months in a market with meaningful churn. The metrics form a system. No single number tells the whole story.
The second trap is definition drift. When metrics start looking worse, there is pressure to refine them in ways that happen to improve the headline. Extending the NRR measurement window from 12 to 24 months. Switching from cohort-based to lazy NRR. Redefining what counts as churn. These moves are sometimes legitimate adjustments as the business evolves, but they require documentation and reconciliation to prior periods. A company that cannot explain why its NRR methodology changed simultaneously with a churn inflection has a credibility problem with anyone paying attention.
The third trap is assuming that what gets measured gets managed. It is the opposite. Metrics that lack consequences — where a miss requires no change in strategy or investment — are not metrics, they are decoration. A finance team that produces 30 KPIs and watches management ignore significant variances has a reporting infrastructure that adds cost but not decision quality. The right number of core metrics is five or six. They should evolve as the business matures. They should drive action.
A $3 billion market cap SaaS company once discovered that 25% of its paying users had not logged in over the prior year. The ARR looked fine. The NRR looked fine. The usage data told a different story about what would happen to the next renewal cycle. Metrics that only measure money miss what money is built on.
The point of this suite of metrics is not to make the business look good. It is to tell the truth about what is happening to revenue quality, growth efficiency, and capital productivity — before the quarterly board review forces the conversation. Finance teams that optimise the definitions rather than the business produce numbers that satisfy investors until the moment the gap between what is reported and what is real becomes impossible to bridge. Track what matters, define it consistently, and report what the data actually shows.