
Usage-Based Retention — Beyond Active Users

DAU/MAU is lying to you. Discover the five usage layers that actually predict churn, learn how Slack and Notion measure what matters, and build a retention score that flags at-risk customers before they leave.


The Vanity Metric Trap

DAU/MAU is the biggest vanity metric in SaaS. That single number blurs healthy engagement patterns and dangerous churn signals into one misleading statistic. It's time to look deeper.

If you're a PLG founder watching your DAU/MAU ratio climb, you might feel confident your product has "stickiness." But here's the uncomfortable truth: a product with 20% DAU/MAU could have either healthy, distributed usage across your user base—or 80% of users logging in once and never returning. The ratio tells you nothing about who's actually at risk.

Retention isn't engagement. Amplitude's research makes this clear: users may return to your product without engaging meaningfully. They might open the app, glance around, and leave. That counts toward DAU but does nothing for retention. To prevent churn, you need to understand what usage actually predicts long-term commitment. This requires moving beyond surface-level metrics into a multi-layered view of how your customers actually use your product.

This guide introduces the five usage layers that predict whether someone will churn—and how to build a system that lets you intervene before they do. We'll also give you a copy-pasteable engagement scoring formula and a free Google Sheet template (including an L7 Power User Curve generator) so you can start implementing this today.

Your DAU/MAU ratio is lying to you

Sequoia's benchmarks suggest 10–20% DAU/MAU is standard stickiness for SaaS, with 50%+ considered exceptional. B2B SaaS best-in-class? Around 40%+. These numbers sound authoritative. They're also dangerously reductive.

The problem: DAU/MAU collapses variance into a single number. Imagine two products, both at 20% stickiness:

  • Product A: Users spread engagement across the month. Most log in 4–6 days per week. Usage is consistent.
  • Product B: 80% of users log in exactly once. A small power-user cohort drives the entire ratio.

Both show 20% on your dashboard. Only one has a healthy business. DAU/MAU can't tell them apart.
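Here is a minimal sketch of that math, with invented numbers: two user bases that produce an identical 20% ratio even though their distributions look nothing alike.

```python
# A made-up example: two products with identical DAU/MAU but very different health.

def dau_mau(active_days_per_user, days_in_month=30):
    """active_days_per_user: list of how many days each user was active this month."""
    mau = sum(1 for d in active_days_per_user if d > 0)
    avg_dau = sum(active_days_per_user) / days_in_month
    return avg_dau / mau if mau else 0.0

# Product A: 100 users, each active 6 days out of 30 (consistent, distributed usage)
product_a = [6] * 100

# Product B: 80 users log in once; 20 power users are active 26 days out of 30
product_b = [1] * 80 + [26] * 20

print(f"Product A DAU/MAU: {dau_mau(product_a):.0%}")  # 20%
print(f"Product B DAU/MAU: {dau_mau(product_b):.0%}")  # 20%
```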

Key Insight

Retention ≠ engagement. Users may return but not engage meaningfully. Track both frequency and depth.

Duolingo learned this the hard way. By focusing on motivation features that drove actual engagement (not just opens), they increased current user retention by 21%—which led to a 4.5x increase in DAUs. The lesson: optimize for behavior that predicts retention, not vanity ratios. When you stop worshipping the ratio and start asking "what actions predict who stays?", everything changes.

The 5 usage layers that actually predict whether someone will churn

Layer 1 — Feature adoption breadth and depth

Users who adopt more features show 50–80% lower churn rates than single-feature users. This isn't theory—it's a consistent finding across PLG businesses.

Breadth means how many features someone uses. Depth means how intensively they use them. Both matter. A user who only uses one core feature is at higher risk than someone exploring three or four—even if their total session count is similar. Why? Because single-feature users haven't built switching costs. They haven't invested in learning your product. When a competitor offers a marginally better version of that one feature, they'll leave.

Target benchmarks:

  • Core features: 60–80% adoption
  • Specialized features: 20–40% adoption
  • Feature adoption breadth: Users engaging with 3+ features have 2.5x lower churn than single-feature users

In practice, this means mapping your feature set into tiers: must-have (core workflow), nice-to-have (secondary capabilities), and power-user (advanced tools). Track adoption at each tier. If 90% of users never touch a core feature, that's a sign your activation flow is broken—or that the "core" feature isn't actually core for your users. Our guide on building meaningful customer engagement covers how to structure activation around these milestones in the first 90 days.
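As a rough sketch of what tier-level tracking can look like, here is one way to compute adoption rates per tier from a per-user set of features used. The feature names and tier labels are placeholders for your own feature map.

```python
# Sketch: adoption rate per feature tier. Feature names and tiers are hypothetical.

FEATURE_TIERS = {
    "upload": "core",
    "filter": "core",
    "visualize": "nice-to-have",
    "share": "nice-to-have",
    "api_access": "power-user",
}

def adoption_by_tier(users_features, feature_tiers=FEATURE_TIERS):
    """users_features: dict of user_id -> set of feature names they've used."""
    total = len(users_features)
    rates = {}
    for tier in set(feature_tiers.values()):
        tier_features = {f for f, t in feature_tiers.items() if t == tier}
        adopters = sum(1 for used in users_features.values() if used & tier_features)
        rates[tier] = adopters / total if total else 0.0
    return rates

users = {
    "u1": {"upload", "filter", "visualize"},
    "u2": {"upload"},
    "u3": {"upload", "filter", "share", "api_access"},
}
print(adoption_by_tier(users))
# roughly {'core': 1.0, 'nice-to-have': 0.67, 'power-user': 0.33} (dict order may vary)
```

If 90% of users never reach the "core" tier in a report like this, that points straight back at your activation flow.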

Layer 2 — The Power User Curve (your new best friend)

Andrew Chen's Power User Curve reframes engagement as a histogram: how many days in the last 30 (or 7) did each user engage? Instead of averaging everyone into DAU/MAU, you see the distribution. One number becomes a shape—and the shape tells you everything.

For B2B SaaS, use the L7 version—last 7 days—aligned with workweek cycles. Put "days active in the last 7" (0 through 7) on the x-axis and the count of users at each value on the y-axis.

Healthy product: A "smile" or reverse-J shape—imagine a histogram where the right side (5–7 days) is tall and the left (1–2 days) is short. Many users at 5–7 days, fewer at 1–2. Your base is engaged. The bulk of your users are habitual.

At-risk product: Left-leaning curve. Most users cluster at 1–2 days per week. A long tail stretches to 6–7 days where a small cohort of power users lives. The majority are one step from churning—they're logging in just enough to "count" in your analytics but not enough to form a habit. If your curve looks like this, prioritize activation and re-engagement before growing top-of-funnel.

The Power User Curve is screenshot-worthy because it's instantly comprehensible. Stakeholders who glaze over at DAU/MAU will understand a histogram. We've included an L7 Power User Curve generator in our free Usage-Based Retention Scorecard—a Google Sheet template that turns raw usage data (user IDs and active-day counts) into this visualization. Paste your export from Amplitude, Mixpanel, or a simple CSV, and the chart builds itself.
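If you would rather script it than spreadsheet it, a minimal version of the same L7 histogram might look like this in Python, assuming you already have one days-active-in-the-last-7 value per user from your export.

```python
# Sketch: build an L7 Power User Curve from one days-active-in-last-7 value per user,
# the same shape of data you'd export from Amplitude, Mixpanel, or a plain CSV.

from collections import Counter

def l7_curve(active_days):
    """active_days: iterable of days-active-in-the-last-7 per user (0-7)."""
    counts = Counter(active_days)
    return {d: counts.get(d, 0) for d in range(8)}

# Hypothetical export: one value per user
export = [1, 1, 2, 3, 5, 5, 6, 6, 6, 7, 7, 7, 7]

for days, n_users in l7_curve(export).items():
    print(f"{days} days active | {'#' * n_users}")
```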

Layer 3 — Feature stacking and workflow completion

When users combine features to create compound value—upload → filter → visualize → share—they're building switching costs. They're not just "using" your product; they're running workflows that would be painful to recreate elsewhere. Each step in the chain reinforces the next. If someone has uploaded files, filtered them, visualized the results, and shared with teammates, they've built a workflow. Recreating that in another tool means re-teaching their team, migrating data, and losing context.

Feature stacking is highly predictive of long-term retention. A user who completes a full workflow once is more likely to repeat it. Someone who only ever uploads files? Easy to leave. The difference is workflow completion vs. single-action usage.

Time-to-first-use benchmarks to aim for:

  • Core workflow features: 50% of users engaged within 24–48 hours
  • Secondary features: within 5–7 days
  • Advanced tools: within 14–21 days

Map your "hero workflows"—the 2–3 workflows that define your product's value—and measure how many users complete them. Track sequences, not just events. These workflows are your retention moats. When users stack features into a repeatable process, they're investing. Investment creates stickiness.
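One way to track sequences rather than isolated events is to check whether each user's chronological event stream contains the hero workflow steps in order. A sketch, with hypothetical step names:

```python
# Sketch: measure hero-workflow completion by checking whether each user's event
# stream contains the workflow steps in order (as a subsequence). Step names are
# placeholders for your own hero flow.

HERO_WORKFLOW = ["upload", "filter", "visualize", "share"]

def completed_workflow(events, workflow=HERO_WORKFLOW):
    """events: a single user's events in chronological order."""
    step = 0
    for event in events:
        if step < len(workflow) and event == workflow[step]:
            step += 1
    return step == len(workflow)

user_events = {
    "u1": ["upload", "login", "filter", "visualize", "share"],  # completed
    "u2": ["upload", "upload", "filter"],                       # stalled mid-flow
}
completion_rate = sum(completed_workflow(e) for e in user_events.values()) / len(user_events)
print(f"Hero workflow completion: {completion_rate:.0%}")  # 50%
```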

Layer 4 — Collaborative usage patterns

Slack's internal finding is legendary: teams that reached 2,000 messages had 93% long-term retention. That's the headline stat worth sharing. Teams that adopted integrations during trial showed 3.5x higher conversion. Collaboration creates network effects—and natural retention moats. When the whole team is in the tool, churning means convincing everyone to switch. That's a different conversation.

When usage is shared (messages, comments, shared docs), leaving your product means leaving the team. That's a much harder decision than cancelling a solo tool. Notion and Figma work the same way: shared workspaces, real-time collaboration, comments, and permissions create a web of dependency. The more people in the doc or file, the stickier it gets.

If your product has collaborative features, track:

  • Team-level activation (not just individual)—how many teams hit your activation threshold?
  • Integration adoption during trial—Slack's 3.5x lift came partly from this
  • Messages, shares, or collaborative actions per team—raw collaboration volume

These metrics often outperform individual usage for predicting retention in team-based products. If you're selling to teams, team behavior is the leading indicator. Individual engagement matters, but team engagement matters more.
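Rolling individual events up to the team level can be as simple as counting collaborative actions per team against an activation threshold. A sketch, with an invented threshold and event format:

```python
# Sketch: team-level activation. Count collaborative actions per team and check
# which teams clear a Slack-style threshold. The threshold and event tuples are
# hypothetical.

from collections import defaultdict

ACTIVATION_THRESHOLD = 2000  # collaborative actions per team (e.g. messages)

def activated_teams(events):
    """events: list of (team_id, user_id, action) tuples for collaborative actions."""
    per_team = defaultdict(int)
    for team_id, _user_id, _action in events:
        per_team[team_id] += 1
    return {team for team, count in per_team.items() if count >= ACTIVATION_THRESHOLD}

events = [("acme", "u1", "message")] * 2500 + [("globex", "u2", "message")] * 300
print(activated_teams(events))  # {'acme'}
```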

Layer 5 — Engagement velocity and decay

Track not just current usage but the rate of change. A user whose engagement score drops 40% within a month signals imminent churn. Acting within days—not weeks—can save them. The direction of travel matters as much as the destination.

Velocity matters as much as level. A steady medium-engagement user is often safer than a formerly high-engagement user who's declining. Why? Because decline is predictive. Users rarely churn from a standing start—they drift. Login frequency drops. Feature use narrows. Session depth shrinks. By the time they cancel, the decay has been visible for weeks. This is why identifying early warning signs of churn before they become critical is essential—usage decay is one of the most reliable leading indicators. Combine it with other signals like support ticket patterns and payment behavior for a full picture.

Action Threshold

40% drop in engagement score within a month = high churn risk. Intervene within days.
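Detecting that 40% drop is straightforward once you have a score per user per period. A minimal sketch, assuming you store the previous and current period scores side by side:

```python
# Sketch: flag users whose engagement score dropped 40%+ versus the prior period.
# Scores are assumed to be 0-100 composite values stored per period.

DECAY_THRESHOLD = 0.40

def decaying_users(scores):
    """scores: dict of user_id -> (previous_period_score, current_period_score)."""
    return [
        user_id
        for user_id, (prev, curr) in scores.items()
        if prev > 0 and (prev - curr) / prev >= DECAY_THRESHOLD
    ]

scores = {
    "u1": (80, 75),  # steady
    "u2": (90, 45),  # 50% drop: flag
    "u3": (40, 38),  # low but stable
}
print(decaying_users(scores))  # ['u2']
```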

How Slack, Notion, and Figma measure what actually matters

The best PLG companies obsess over Product Qualified Leads (PQLs)—users who've demonstrated value through usage, not just form fills. PQL scoring combines product usage data with intent signals to identify which trial users or leads are most likely to convert. Real PQL thresholds and activation metrics from the field:

  • Slack: 2,000 messages sent—the team has built a habit of communication in the product
  • Facebook: 7 friends in 10 days—early social proof drives retention
  • Bonjoro: Reduced churn by 60% by switching focus from MQLs to PQLs—they stopped optimizing for form fills and started optimizing for usage

PQL conversion rates typically hit 25–30% versus 1–2% for MQLs. The difference is qualification criteria: MQLs are qualified by demographic and firmographic data (job title, company size). PQLs are qualified by behavior. Someone who's already getting value from your product is a fundamentally different lead.

Yet a 2019 survey found only 1 in 4 SaaS companies tracked PQLs. The gap is massive—and so is the opportunity. If you're not segmenting by product usage, you're leaving conversion and retention on the table.

These companies don't optimize for DAU/MAU. They optimize for actions that correlate with retention in their product. Slack's 2,000 messages wouldn't make sense for a design tool; Figma's magic moment is different. Your job is to find yours—the action or threshold that, once reached, dramatically increases the likelihood someone stays. Run cohort analysis: compare users who hit your candidate threshold within 14 days vs. those who didn't. If retention at 90 days is meaningfully higher for the former, you've found a PQL signal worth optimizing for. Then build your funnel around that—onboarding, email sequences, and in-app prompts should all steer users toward that threshold early.
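Here is roughly what that cohort comparison looks like in code, assuming you can export each user's threshold-hit day and a 90-day retention flag (both field names are hypothetical):

```python
# Sketch of the cohort comparison: 90-day retention for users who hit a candidate
# PQL threshold within 14 days versus those who didn't.

def pql_lift(users):
    """users: list of dicts with 'hit_threshold_day' (int or None) and 'retained_90d' (bool)."""
    hit, missed = [], []
    for u in users:
        day = u["hit_threshold_day"]
        (hit if day is not None and day <= 14 else missed).append(u)

    def retention(cohort):
        return sum(u["retained_90d"] for u in cohort) / len(cohort) if cohort else 0.0

    return retention(hit), retention(missed)

users = [
    {"hit_threshold_day": 3, "retained_90d": True},
    {"hit_threshold_day": 10, "retained_90d": True},
    {"hit_threshold_day": 30, "retained_90d": False},
    {"hit_threshold_day": None, "retained_90d": False},
]
hit_rate, miss_rate = pql_lift(users)
print(f"Hit within 14d: {hit_rate:.0%} retained vs. missed: {miss_rate:.0%} retained")
```

If the gap between the two cohorts is large and stable across signup cohorts, you have found a PQL signal worth building your funnel around.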

Building your usage-based retention score in a spreadsheet

You don't need expensive tools to start. Here's a step-by-step approach you can implement today. A spreadsheet gives you flexibility to tweak weights and thresholds until you find what works for your product—then you can codify it in a tool later.

Step 1: Define 3–5 core actions

List the actions that most predict retention in your product. Examples: logins per week, core feature use count, workflow completion (did they finish the hero flow?), collaboration events (messages sent, comments, shares). Avoid vanity metrics—every action should have a theoretical link to retention. "Opened app" is weak; "completed onboarding workflow" is strong.

Step 2: Assign weights

Weight actions by predictive strength. Run a simple correlation if you have the data: which actions correlate with users who stayed 90+ days? A "completed workflow" might be 3x more valuable than a "login." A typical starting point: Login Frequency 30%, Core Feature Use 40%, Session Depth 30%. Adjust based on your product.

Step 3: Create composite score

Use this formula (copy-paste ready):

CES = (w₁ × Login Frequency) + (w₂ × Core Feature Use) + (w₃ × Session Depth)

Where each component is normalized (e.g., 0–100 scale) and weights sum to 1. Normalize by defining min/max for each metric—e.g., 0 logins = 0, 7 logins per week = 100. Linear scale in between. This is similar to a customer health score—but usage-weighted rather than support-ticket or NPS-weighted.
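A scripted version of the same formula might look like the sketch below. The weights match the starting point above; the min/max bounds used for normalization are assumptions you should replace with your own.

```python
# Sketch of the CES formula: normalize each component to a 0-100 scale, then take
# the weighted sum. Bounds and weights are starting-point assumptions.

WEIGHTS = {"login_frequency": 0.3, "core_feature_use": 0.4, "session_depth": 0.3}
BOUNDS = {
    "login_frequency": (0, 7),    # logins per week
    "core_feature_use": (0, 20),  # core feature actions per week
    "session_depth": (0, 30),     # average minutes per session
}

def normalize(value, lo, hi):
    value = max(lo, min(hi, value))        # clamp to the defined bounds
    return (value - lo) / (hi - lo) * 100  # linear 0-100 scale

def composite_engagement_score(metrics):
    """metrics: dict of raw values, e.g. {'login_frequency': 5, ...}"""
    return sum(
        WEIGHTS[name] * normalize(metrics[name], *BOUNDS[name])
        for name in WEIGHTS
    )

score = composite_engagement_score(
    {"login_frequency": 5, "core_feature_use": 12, "session_depth": 9}
)
print(f"{score:.1f}")  # 54.4
```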

Step 4: Set traffic-light thresholds

  • Green: Score above 70—healthy, no action needed
  • Yellow: 40–70—monitor, consider light touch (e.g., in-app tip or email)
  • Red: Below 40—intervene immediately

Calibrate these by looking at historical churn: what score did churned customers have 30 days before they left? Set red below that.
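That calibration step can be a few lines: collect the scores churned customers had roughly 30 days before cancelling and anchor the red line around the median. The scores below are invented for illustration.

```python
# Sketch: calibrate the red threshold from history using churned customers'
# scores ~30 days before cancellation.

from statistics import median

def calibrate_red_threshold(scores_30d_before_churn):
    """scores_30d_before_churn: engagement scores of churned customers, 30 days pre-churn."""
    return median(scores_30d_before_churn)

churned_scores = [28, 35, 41, 22, 38]
print(f"Set red below roughly {calibrate_red_threshold(churned_scores)}")  # 35
```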

Step 5: Automate alerts

When scores drop across thresholds, trigger workflows. Don't wait for manual review. Even a weekly export into a spreadsheet with conditional formatting is better than nothing. Ideally, pipe scores into a tool that can send emails or create tasks when thresholds are breached.

Free Resource

We've created a Usage-Based Retention Scorecard—a Google Sheet template that maps your features, defines engagement weights, calculates composite health scores, and includes an L7 Power User Curve generator. Download it here—no competitor offers this specific tool.

From insight to intervention — using usage data to prevent churn before it happens

A score is useless without action. Connect your scoring system to automated triggers so risk flows into response without manual triage:

Suggested triggers by risk level:

  • Low score (40–70) → Re-engagement email: 'We noticed you haven't been in lately—here's a quick win'
  • Very low score (<40) → Personal outreach: CSM or founder reach-out to understand blockers
  • Declining score (40% drop in 30 days) → Feature education campaign: targeted tips on underused high-value features

The goal: intercept at-risk users before they make the decision to leave. Once someone has decided to churn, win-back is expensive and low-yield. Manual monitoring doesn't scale—you need systems that flag risk and route it to the right response. Playbooks should be documented: when Red, do X. When declining, do Y.
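A sketch of that routing logic, with thresholds mirroring the triggers above and placeholder action names standing in for whatever your email tool or CRM actually fires:

```python
# Sketch: route each account's score and 30-day trend to a documented playbook action.
# Action strings are placeholders for real automations.

def playbook_action(score, pct_change_30d):
    if pct_change_30d <= -0.40:
        return "feature_education_campaign"  # declining: tips on high-value features
    if score < 40:
        return "personal_outreach"           # very low: CSM or founder reach-out
    if score <= 70:
        return "reengagement_email"          # low: 'here's a quick win' nudge
    return None                              # green: no action needed

accounts = {
    "acme": (82, -0.05),
    "globex": (55, -0.10),
    "initech": (64, -0.45),
    "umbrella": (31, -0.20),
}
for name, (score, trend) in accounts.items():
    print(name, "->", playbook_action(score, trend))
```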

This is where tools like Tether tie in naturally. Tracking usage-based health across hundreds or thousands of accounts, setting thresholds, and automating playbooks when scores drop—that's the operational layer that turns retention insight into retention results. You've built the mental model (the five layers) and the score (the formula). The final step is operationalizing it: dashboards, alerts, and automated actions that run while you sleep.

Conclusion

DAU/MAU will never tell you who's about to churn. The five usage layers—feature adoption, Power User Curve, feature stacking, collaboration, and engagement velocity—will. Each layer adds a dimension that the ratio flattens out. Together, they give you a map of who's engaged, who's at risk, and who's already drifting away.

Start with the spreadsheet. Define your core actions, weight them, build your score, and set thresholds. Add an L7 Power User Curve to see your distribution—that single visualization will change how you think about stickiness. Then connect those insights to interventions: emails, outreach, education. Finally, automate so you're acting within days, not months. The companies that win at retention don't have better products; they have better systems for detecting risk and acting on it.

The best retention strategy isn't hoping users come back—it's knowing when they won't, and reaching out before they decide to leave. That's the shift from vanity metrics to usage-based retention. And it starts with the five layers. Download the scorecard, build your first version this week, and iterate. Your future retained customers will thank you.

Sources

  • Sequoia Capital, "The SaaS Metric That Matters Most" (DAU/MAU benchmarks)
  • Amplitude, "Retention vs. Engagement" (retention ≠ engagement)
  • Duolingo, product team findings (21% retention lift, 4.5x DAU)
  • Andrew Chen, "The Power User Curve" (L30/L7 histogram framework)
  • Slack, internal activation research (2,000 messages, 93% retention; 3.5x integration lift)
  • OpenView Partners / PQL adoption survey (1 in 4 SaaS, 2019)
  • Bonjoro case study (60% churn reduction via PQL focus)
  • PLG benchmarks on feature adoption and time-to-first-use

Track usage-based retention across all your customers

Tether monitors engagement scores, flags at-risk accounts, and automates interventions—so you prevent churn before it happens.

Try Tether Free
Scott Wittrock

Founder & CEO

Solo founder of Tether. Built to help SaaS founders stop losing customers in the noise. No more choosing between shipping features and customer success.
