How to calculate MAU in SQL

Sharpen SQL for your next interview
500+ SQL problems with worked solutions — joins, window functions, CTEs.
Join the waitlist

What MAU actually means

MAU stands for Monthly Active Users — the count of unique users who performed a meaningful action at least once over a month. The metric sounds trivial until your PM pings you on Friday afternoon: "Can you send me MAU by acquisition channel by Monday morning? And by platform. And growth versus last month." Suddenly you are choosing between calendar windows, rolling windows, distinct counts that do not fit into a single window function, and a definition of "active" that nobody in the room agrees on.

This guide walks through the SQL recipes that come up in every product-analyst and data-analyst interview at consumer-tech companies — places like Netflix, Uber, DoorDash, Notion, Airbnb. The patterns are the same whether you are running them on Snowflake, BigQuery, Redshift, or vanilla Postgres. We will start with the calendar and rolling definitions, then add platform and channel breakdowns, stickiness, new versus returning splits, the optimization tricks that keep these queries fast on a billion-row events table, and the common mistakes that quietly inflate the number.

One thing to settle before any SQL: the "active" event. Interviewers love to ask candidates to write a MAU query without specifying what counts as activity. The correct first response is to ask. Backend teams often define activity as an app open, marketing as any tracked event, and product as content consumption that lasts more than three seconds. Pick one definition, write it down, and use the same event_name filter everywhere downstream. We will use app_open throughout this post — substitute your own canonical event when you copy these queries.

The two ways to compute MAU

There are essentially two flavors of MAU: calendar MAU, where you bucket users by calendar month and count distinct user IDs in each bucket, and rolling MAU, where every single day you look back over the prior 28 or 30 days and count distinct users in that window. Calendar MAU is the version on board decks. Rolling MAU is the version on operational dashboards.

Calendar MAU

The straightforward version. One row per month, one distinct user count.

SELECT
    DATE_TRUNC('month', created_at) AS month,
    COUNT(DISTINCT user_id)         AS mau
FROM events
WHERE event_name = 'app_open'
GROUP BY 1
ORDER BY 1;
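The calendar bucketing is easy to verify end to end on toy data. Below is a sketch using Python's built-in sqlite3 — SQLite has no DATE_TRUNC, so strftime('%Y-%m', ...) stands in for it; the table and rows are invented for illustration:

```python
import sqlite3

# Toy events table; schema mirrors the article's events(user_id, event_name, created_at).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event_name TEXT, created_at TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "app_open", "2024-01-05"), (1, "app_open", "2024-01-20"),  # user 1 twice in Jan: still counts once
    (2, "app_open", "2024-01-10"),
    (1, "app_open", "2024-02-03"),
    (3, "purchase", "2024-02-07"),  # not the canonical event: must not count
])

# strftime('%Y-%m', ...) buckets by calendar month, like DATE_TRUNC('month', ...) in Postgres.
mau = con.execute("""
    SELECT strftime('%Y-%m', created_at) AS month,
           COUNT(DISTINCT user_id)       AS mau
    FROM events
    WHERE event_name = 'app_open'
    GROUP BY 1
    ORDER BY 1
""").fetchall()
print(mau)  # [('2024-01', 2), ('2024-02', 1)]
```

The purchase row is deliberately excluded by the event_name filter — the canonical-event discipline from the introduction, applied at the very first step.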

The flaw of calendar MAU is obvious as soon as you compare February to March: 28 days versus 31. A 10 percent drop from January to February might be entirely structural. If you compare year-over-year same-month numbers and your business has seasonality, calendar MAU is fine. If you compare month-over-month within the same year, calendar MAU lies a little.

Rolling 28-day MAU

Every day, count distinct users seen over the trailing 28 days. The reason for 28 instead of 30 is that 28 equals exactly four weeks, which cancels out day-of-week effects. Most consumer products have clear weekly seasonality — fewer logins on Sundays, spikes on Wednesdays — and a 28-day window removes that pattern from the time series.

In Postgres, COUNT(DISTINCT ...) OVER (...) is not supported as a window function. That single fact trips up half of all candidates in a SQL interview. The workaround is a correlated subquery or a LATERAL join.

WITH days AS (
    SELECT DISTINCT created_at::DATE AS day FROM events
)
SELECT
    d.day,
    (
        SELECT COUNT(DISTINCT e.user_id)
        FROM events e
        WHERE e.event_name = 'app_open'
          AND e.created_at >= d.day - INTERVAL '27 day'
          AND e.created_at <  d.day + INTERVAL '1 day'
    ) AS mau_28d
FROM days d
ORDER BY d.day;
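The same correlated-subquery pattern runs on SQLite, with date(d.day, '-27 days') standing in for Postgres's interval arithmetic. A toy-data sketch, not production code:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event_name TEXT, created_at TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "app_open", "2024-03-01"),
    (2, "app_open", "2024-03-10"),
    (1, "app_open", "2024-03-29"),
])

# For each day, count distinct users in the trailing 28-day window [day-27, day].
rows = con.execute("""
    WITH days AS (SELECT DISTINCT date(created_at) AS day FROM events)
    SELECT d.day,
           (SELECT COUNT(DISTINCT e.user_id)
            FROM events e
            WHERE e.event_name = 'app_open'
              AND date(e.created_at) BETWEEN date(d.day, '-27 days') AND d.day
           ) AS mau_28d
    FROM days d
    ORDER BY d.day
""").fetchall()
print(rows)  # [('2024-03-01', 1), ('2024-03-10', 2), ('2024-03-29', 2)]
```

Note the 2024-03-29 row: user 1's March 1 event has aged out of that window (it starts March 2), but their fresh event keeps them counted.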

In production, this is the kind of query you snapshot. Running a correlated subquery over a year of daily values on a billion-row events table will burn your warehouse credits. Materialize the daily snapshots into a separate table — daily_mau_snapshots(day, mau_28d) — and refresh once a day in your transformation pipeline.

Ad-hoc MAU for the last 30 days

When a stakeholder asks "how many users did we have last month" and they do not need a time series, the answer is a single scalar:

SELECT COUNT(DISTINCT user_id) AS mau
FROM events
WHERE event_name = 'app_open'
  AND created_at >= CURRENT_DATE - INTERVAL '30 day';

That is the entire query. The mistake here is to write WHERE DATE_TRUNC('month', created_at) = DATE_TRUNC('month', CURRENT_DATE) instead, which gives you a month-to-date count that is structurally smaller than a full-month count until the last day of the month. We will revisit that pitfall below.
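The gap between the two predicates is easy to demonstrate. In this sketch (stdlib sqlite3, invented data, a pinned "today" so the result is deterministic), ten March users vanish from the month-to-date count on April 5 but stay in the rolling window:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event_name TEXT, created_at TEXT)")
# Ten users active in mid-March, two more in early April.
rows = [(u, "app_open", f"2024-03-{15 + u:02d}") for u in range(1, 11)]
rows += [(11, "app_open", "2024-04-02"), (12, "app_open", "2024-04-04")]
con.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

today = "2024-04-05"  # pretend CURRENT_DATE is the 5th of the month

# Month-to-date: only users seen in the current calendar month.
mtd = con.execute("""
    SELECT COUNT(DISTINCT user_id) FROM events
    WHERE event_name = 'app_open'
      AND strftime('%Y-%m', created_at) = strftime('%Y-%m', ?)
""", (today,)).fetchone()[0]

# Rolling: users seen in the trailing 30 days, regardless of month boundary.
rolling = con.execute("""
    SELECT COUNT(DISTINCT user_id) FROM events
    WHERE event_name = 'app_open'
      AND created_at >= date(?, '-30 days')
""", (today,)).fetchone()[0]

print(mtd, rolling)  # 2 12 — month-to-date misses the ten March users
```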

MAU breakdowns that interviewers ask for

Once the headline number is correct, the real work begins: slicing it.

By platform

SELECT
    platform,
    COUNT(DISTINCT user_id) AS mau
FROM events
WHERE event_name = 'app_open'
  AND created_at >= CURRENT_DATE - INTERVAL '30 day'
GROUP BY platform
ORDER BY mau DESC;

The expected gotcha: the sum of mau across platforms does not equal the total MAU, because the same user can open the iOS app and the web app in the same month. Distinct counts do not add. Be ready to explain that to a non-technical PM the first time they spot it.
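A three-row toy example (Python stdlib sqlite3, made-up data) makes the non-additivity concrete:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event_name TEXT, platform TEXT, created_at TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", [
    (1, "app_open", "ios", "2024-05-01"),
    (1, "app_open", "web", "2024-05-02"),  # same user on a second platform
    (2, "app_open", "web", "2024-05-03"),
])

per_platform = dict(con.execute("""
    SELECT platform, COUNT(DISTINCT user_id)
    FROM events WHERE event_name = 'app_open' GROUP BY platform
""").fetchall())
total = con.execute(
    "SELECT COUNT(DISTINCT user_id) FROM events WHERE event_name = 'app_open'"
).fetchone()[0]

# User 1 is counted once under ios AND once under web, so the slices overcount.
print(per_platform, total)  # {'ios': 1, 'web': 2} 2 — and 1 + 2 != 2
```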

By acquisition channel

SELECT
    u.channel,
    COUNT(DISTINCT e.user_id) AS mau
FROM events e
JOIN users u USING (user_id)
WHERE e.event_name = 'app_open'
  AND e.created_at >= CURRENT_DATE - INTERVAL '30 day'
GROUP BY u.channel
ORDER BY mau DESC;

Channel attribution lives in the users dimension table — usually populated at signup from a utm_source or a campaign tag. Two failure modes hide users here. An inner join quietly drops every user who has no row in users at all, and a null channel value lumps the rest into an unlabeled NULL group. Use a LEFT JOIN from events and COALESCE(u.channel, 'unknown') so every active user lands in some bucket and the channel split reconciles with the total MAU.
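A sketch of the defensive version, on toy data with both failure modes present — one user missing from the users table entirely and one with a null channel:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (user_id INTEGER, event_name TEXT, created_at TEXT);
    CREATE TABLE users  (user_id INTEGER PRIMARY KEY, channel TEXT);
    INSERT INTO events VALUES
        (1, 'app_open', '2024-05-01'),
        (2, 'app_open', '2024-05-01'),
        (3, 'app_open', '2024-05-01');  -- user 3 has no users row at all
    INSERT INTO users VALUES (1, 'paid'), (2, NULL);  -- user 2 has a NULL channel
""")

# LEFT JOIN keeps user 3; COALESCE folds both NULL cases into 'unknown'.
rows = con.execute("""
    SELECT COALESCE(u.channel, 'unknown') AS channel,
           COUNT(DISTINCT e.user_id)      AS mau
    FROM events e
    LEFT JOIN users u ON u.user_id = e.user_id
    WHERE e.event_name = 'app_open'
    GROUP BY 1
    ORDER BY 1
""").fetchall()
print(rows)  # [('paid', 1), ('unknown', 2)] — channel MAUs now sum to the total MAU
```

With an inner join and no coalesce, user 3 would disappear and user 2 would sit in a NULL row that most BI tools render as a blank label.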

By country, app version, or cohort

The pattern repeats: group by a dimension from the users table or an event-level field such as app_version, and count distinct users. The cohort variant — MAU by signup month — requires MIN(created_at) per user to define the cohort, which is the same logic we use for the new versus returning split below.

DAU/MAU stickiness

Stickiness is the ratio of daily actives to monthly actives. It tells you what share of your monthly users come back on any given day. The benchmarks vary wildly by category. Daily-use products like messaging apps and the main feed of a social network sit above 50 percent. Regular-use products like e-commerce and social discovery land at 20–40 percent. Infrequent-use products like travel, insurance, or government services sit below 20 percent. A stickiness of 8 percent on a messaging app would be a five-alarm fire; the same 8 percent on a tax-filing app is healthy.

WITH daily AS (
    SELECT
        created_at::DATE        AS day,
        COUNT(DISTINCT user_id) AS dau
    FROM events
    WHERE event_name = 'app_open'
    GROUP BY 1
),
rolling_mau AS (
    SELECT
        d.day,
        (
            SELECT COUNT(DISTINCT e.user_id)
            FROM events e
            WHERE e.event_name = 'app_open'
              AND e.created_at >= d.day - INTERVAL '27 day'
              AND e.created_at <  d.day + INTERVAL '1 day'
        ) AS mau_28d
    FROM daily d
)
SELECT
    d.day,
    d.dau,
    m.mau_28d,
    ROUND(d.dau::NUMERIC / NULLIF(m.mau_28d, 0), 3) AS stickiness
FROM daily        d
JOIN rolling_mau  m USING (day)
ORDER BY d.day;
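On SQLite the same computation looks like this (toy data; 1.0 * dau forces float division, and the NULLIF guard is unnecessary here only because every day in daily has at least one event by construction):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event_name TEXT, created_at TEXT)")
# Five users active over three days; dau varies while the 28-day denominator accumulates.
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "app_open", "2024-06-01"), (2, "app_open", "2024-06-01"),
    (3, "app_open", "2024-06-02"),
    (4, "app_open", "2024-06-03"), (5, "app_open", "2024-06-03"),
])

rows = con.execute("""
    WITH daily AS (
        SELECT date(created_at) AS day, COUNT(DISTINCT user_id) AS dau
        FROM events
        WHERE event_name = 'app_open'
        GROUP BY 1
    ),
    rolling AS (
        SELECT d.day, d.dau,
               (SELECT COUNT(DISTINCT e.user_id)
                FROM events e
                WHERE e.event_name = 'app_open'
                  AND date(e.created_at) BETWEEN date(d.day, '-27 days') AND d.day
               ) AS mau_28d
        FROM daily d
    )
    SELECT day, dau, mau_28d, ROUND(1.0 * dau / mau_28d, 3) AS stickiness
    FROM rolling
    ORDER BY day
""").fetchall()
print(rows)
```

On this toy data dau stays small while the 28-day denominator grows, so stickiness falls from 1.0 on day one to 0.4 by day three — a miniature of the "new users arriving, not yet retained" pattern discussed next.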

When stickiness moves, decompose: did DAU drop, did MAU drop, or did both move? A flat DAU with a rising MAU usually means top-of-funnel growth without retention — new users are coming in and not coming back. A falling DAU with a flat MAU means your core users are pulling back. The interview follow-up here is almost always "which is worse?" — and the answer is the second, because acquisition can be turned up with budget while a core-user retention drop typically points to a product regression.

New vs returning MAU

Splitting MAU into newly acquired versus returning users is one of the most informative decompositions. The same headline MAU number can hide a collapsing returning cohort that is being temporarily masked by an acquisition campaign, or vice versa.

WITH user_first AS (
    SELECT user_id, MIN(created_at)::DATE AS first_seen
    FROM events
    WHERE event_name = 'app_open'
    GROUP BY user_id
),
last_30 AS (
    SELECT DISTINCT user_id
    FROM events
    WHERE event_name = 'app_open'
      AND created_at >= CURRENT_DATE - INTERVAL '30 day'
)
SELECT
    COUNT(*) FILTER (WHERE first_seen >= CURRENT_DATE - INTERVAL '30 day') AS new_mau,
    COUNT(*) FILTER (WHERE first_seen <  CURRENT_DATE - INTERVAL '30 day') AS returning_mau
FROM last_30
JOIN user_first USING (user_id);
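A runnable sketch of the split (stdlib sqlite3, invented rows, a pinned reference date; SUM(CASE ...) replaces Postgres's FILTER clause for portability across SQLite versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user_id INTEGER, event_name TEXT, created_at TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "app_open", "2023-11-01"), (1, "app_open", "2024-06-20"),  # old user, back this window
    (2, "app_open", "2024-06-25"),                                 # first ever seen this window
    (3, "app_open", "2023-12-05"),                                 # lapsed: outside the window
])

today = "2024-06-30"
new_mau, returning_mau = con.execute("""
    WITH user_first AS (
        SELECT user_id, MIN(date(created_at)) AS first_seen
        FROM events WHERE event_name = 'app_open' GROUP BY user_id
    ),
    last_30 AS (
        SELECT DISTINCT user_id FROM events
        WHERE event_name = 'app_open' AND created_at >= date(?, '-30 days')
    )
    SELECT SUM(CASE WHEN first_seen >= date(?, '-30 days') THEN 1 ELSE 0 END),
           SUM(CASE WHEN first_seen <  date(?, '-30 days') THEN 1 ELSE 0 END)
    FROM last_30 JOIN user_first USING (user_id)
""", (today, today, today)).fetchone()
print(new_mau, returning_mau)  # 1 1 — user 3 appears in neither bucket
```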

In a healthy growing product, both new and returning should grow week over week. In a saturating product, new flattens but returning grows — the existing base is becoming stickier. In a decaying product, new looks fine because of paid acquisition but returning quietly drops; total MAU stays flat for a few months and then falls off a cliff when the acquisition spend stops.


Common pitfalls

The first trap is using month-to-date MAU as if it were a full month. Early in the month the number is structurally small, and by the 28th it looks like growth. Stakeholders see a "down 40 percent versus last month" panic on the 5th and a "back to normal" report on the 30th. The fix is to never put month-to-date MAU on a trend chart without explicitly labeling it as partial, or — better — to use rolling 28- or 30-day MAU, which is comparable every day of the month.

The second trap is conflicting definitions of "active." In one company we have seen backend define MAU on any HTTP request, marketing define it on any tracked event, and product define it on a content-completion event. Three teams report three different MAU numbers in the same all-hands and spend the next two weeks arguing about whose definition is "right." The fix is to write the canonical definition in a metrics dictionary, commit it to the warehouse, and reference it from every dashboard. If two teams genuinely need different definitions, give them different metric names — mau_app_open and mau_content_consumed — instead of letting them both call their thing "MAU."

The third trap is forgetting to filter out bots and test accounts. Internal QA accounts that ping the app on every deploy can quietly add a thousand to your MAU, which sounds small until your DAU/MAU ratio looks wonderful and your retention curves look better than they should. The fix is a join against an excluded_users table or a heuristic filter that drops obvious bot IPs and accounts with the company email domain.

The fourth trap is the most SQL-specific: assuming COUNT(DISTINCT ...) OVER (...) works as a window function. In Postgres, Redshift, and MySQL it does not. Some warehouses offer partial or approximate support for distinct counts over windows, but the rules vary by vendor and by window frame — check your vendor's window-function documentation before relying on it. The cross-vendor portable pattern is the correlated subquery or LATERAL join shown above, possibly materialized.

The fifth trap is dropping the event filter entirely and just counting any user who appears in the events table. That pulls in every technical ping — silent push receipts, background sync, session heartbeats — and inflates MAU by 20 to 40 percent depending on the product. The fix is the canonical event filter from the metrics dictionary, applied at the very first step of every MAU query.

Optimization tips

Rolling-MAU queries are the most expensive analytics workload most companies have, because every day they re-scan the trailing 28 days. The single highest-leverage change is to materialize a daily snapshot table and never recompute history. A simple daily_mau_snapshot(day, mau_28d, dau, new_mau, returning_mau) updated once per day by a scheduled job in your transformation tool turns a five-minute warehouse query into a sub-second dashboard read.

Partition the events table by date. On Snowflake the equivalent is clustering by event_date; on BigQuery use partitioned tables with _PARTITIONDATE; on Postgres use declarative partitioning by month. The 28-day window scan should touch one or two partitions, not the whole table. If you ever see a MAU query scanning the entire history, the partition pruning is not firing — check the EXPLAIN plan and the predicate types.

For the distinct count itself, if approximate is acceptable, HyperLogLog-based approximations (APPROX_COUNT_DISTINCT in Snowflake and BigQuery) are an order of magnitude faster than exact distinct counts and accurate to within roughly 1 percent. Board decks and operational dashboards do not need exact precision. Reserve the exact COUNT(DISTINCT user_id) for financial reporting or compliance contexts where the number gets audited.

If you want to drill SQL questions like this until rolling-window distinct counts become muscle memory, NAILDD is launching with 500+ SQL problems built around exactly this pattern.

MAU vs WAU vs DAU

DAU is daily, WAU is weekly (7-day window), MAU is monthly (28 or 30 days). For B2B products WAU is often more informative than MAU, because the working audience naturally cycles weekly — Monday through Friday, with a dead weekend — and a 7-day window matches the natural rhythm of work software. For consumer products MAU is the standard headline.

DAU/WAU and WAU/MAU are both stickiness-style ratios. The DAU/MAU ratio is the one that appears on most growth dashboards, but WAU/MAU is sometimes more sensitive to subtle regressions because the denominator and numerator move on more similar time scales.

FAQ

Should I report calendar MAU or rolling MAU?

Use rolling 28-day MAU for any operational dashboard that an analyst, PM, or engineer looks at daily, because the values are comparable every day. Use calendar MAU when finance or the board asks for "users last month" — they want a clean, month-aligned number that matches their other reporting cadences. Most data teams maintain both and label them clearly.

Why 28 days and not 30?

Because 28 equals four full weeks, and most consumer products have strong weekly seasonality. A 30-day window contains some months with four Mondays and some with five, which introduces a noise pattern that has nothing to do with user behavior. A 28-day window cancels that out. The trade-off is that 28 sounds less natural to non-technical stakeholders, so some teams use 30 anyway and accept the noise.
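The four-full-weeks claim can be checked directly with stdlib datetime, no warehouse required:

```python
from datetime import date, timedelta

def mondays_in_trailing_window(end: date, days: int) -> int:
    # Count Mondays among the trailing `days` days ending on (and including) `end`.
    return sum(
        1 for i in range(days)
        if (end - timedelta(days=i)).weekday() == 0  # Monday == 0
    )

# Across 60 different window endpoints: a 28-day window always holds exactly
# 4 Mondays, while a 30-day window holds 4 or 5 depending on where it lands.
ends = [date(2024, 1, 1) + timedelta(days=k) for k in range(60)]
counts_28 = {mondays_in_trailing_window(e, 28) for e in ends}
counts_30 = {mondays_in_trailing_window(e, 30) for e in ends}
print(counts_28, counts_30)  # {4} {4, 5}
```

That extra Monday (or Sunday) is exactly the day-of-week noise the 28-day window is designed to cancel.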

Is a 5x MAU/DAU ratio normal?

It depends on the category. A 5x ratio means stickiness of 20 percent — somewhere between an e-commerce app and a social-discovery product. For a daily-use product like a messaging app or a feed-driven social network, 5x is alarmingly low and points to a retention problem. For a travel or finance app, 5x would actually be on the high side. Always benchmark against your category, not the global "good" number.

MAU is falling — what should I look at first?

Decompose into new versus returning MAU. If new MAU is falling, look at acquisition channels and signup conversion — top-of-funnel is leaking. If returning MAU is falling, look at retention curves, push notification deliverability, and recent product releases that may have broken core flows. The two failure modes need different fixes, and you cannot tell which is happening without the decomposition.

Should bots and internal accounts be filtered before or after the distinct count?

Before. The filter belongs in the base CTE that defines "active events," not in a post-hoc subtraction. Filtering after the count means you either have to maintain a parallel "true MAU minus bots" metric or live with inflated numbers on every dashboard. The canonical metric definition in your warehouse should have the bot filter baked in, applied once, consistently across every downstream consumer.

How do I version a MAU definition change without breaking history?

Create a new metric name — mau_v2 — and run both definitions in parallel for at least one full month. Compare the values, document the delta, and switch dashboards over only after stakeholders have seen both side-by-side. Never silently overwrite the definition; every existing trend chart and every historical comparison will quietly become wrong.