
Suger CS · 316-Account Book Audit

Customer Success take-home — every number below is from the real dataset. Open the tabs to drill in.

Reporting period: Trailing 12 months
Source: dataset.xlsx → enriched.csv (316 rows)
Generated: 2026-04-17
Author: Dico Angelo
Data Exercise · Priority Answers
The 4 questions, on one dashboard — Maturity · Health · Expansion · Churn
Priority picks named and filterable below: #196 top health/priority call · #289 strongest expansion (+91% 6mo) · #35 biggest churn risk (0 offers, 119d overdue) · #241 the whale we're ghosting ($618M disbursed, 184d overdue).
Tabs are pre-filtered: Maturity · Churn risk · Expansion · Customers
Above-and-beyond · 4 named customers
Total ARR
across 316 accounts
ARR at risk
weighted by risk score × ARR
Active customers
of 316
Median ARR
mean $16.0K · long tail

Book health

Avg health
Below 40
at-risk threshold
Above 70
expansion-ready
Shape of the book: average health sits at 39 — the book is bottom-heavy, not Gaussian. 185 accounts below 40 vs. only 13 above 70. A health program that lifts the 40–60 band is higher leverage than defending the green tier.

Renewal posture

Overdue
past contract-end
Next 90 days
renewal motion required
Week-1 gate: 142 accounts are already past contract-end with no visible renewal motion. Before building a dashboard, assign an owner to each one.

Maturity split

57% still Emerging. The growth motion isn't compounding — customers are onboarding, not graduating.

Sentiment capture

75% dark. CSMs are talking to accounts. Data isn't landing in Suger.

Marketplaces live

41 paid, never shipped. 13% of the book has zero marketplace integrations — audit each one in Week 1.

Disbursement volume (GMV-equivalent through Suger)

Last 12 months
total $ moved
Last 6 months
run-rate signal
Why the numbers look big (scaling-law check). Disbursement is the gross $ a customer transacts through hyperscaler marketplaces (AWS · GCP · Azure · Red Hat · Snowflake) on contracts Suger issues — not Suger revenue. Suger's ARR is the fee on top of that flow.
TAKE RATE
~12.3 bps
$5.05M ARR on $4.09B disbursed
TOP-3 CONCENTRATION
68.7%
#241 · #243 · #242 carry the book
WHY $51K ARR · $751M FLOW
Hyperscaler marketplace GMV ≠ SaaS revenue. Cloud committed-spend offers move 8–10 figures per contract. Suger prices the contracting layer, not the goods.
Disbursement is the real product. $4.1B/year moves through Suger contracts. ARR is the fee on top — retention of the disbursement relationship is what matters.
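The take-rate arithmetic above checks out in a few lines — a sketch using the dashboard's own $5.05M / $4.09B figures:

```python
# Take rate: ARR as a fee on disbursement flow, in basis points.
# 1 bp = 0.01% = 1/10,000 of the flow.
arr = 5.05e6          # Suger ARR, $ (from the dashboard)
disbursed = 4.09e9    # gross $ moved through Suger contracts, trailing 12 months

take_rate_bps = arr / disbursed * 10_000
print(f"{take_rate_bps:.1f} bps")  # ~12.3 bps
```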

Integrations & data quality

CRM connected
of 316 (visibility gap)
No contract date
cannot forecast renewal
30% of the book has no contract_end_date. Every renewal forecast below that gap is guessing.

The thesis

Every fix in this audit is downstream of one gate: every account must have a dated contract, a tagged sentiment, and a named owner — before a dashboard exists. Name the gate before the rule. That's the difference between a CS program that scales with Suger's MAU growth and one that drowns in its own book.

Capture-gate simulator

Drag the sliders to see what the book looks like if the three-part gate is live. Numbers recalc from the real 316-row dataset, not a model.

ARR at risk
Dark sentiment
Un-dated contracts
Addressable book
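The gate check the simulator models can be sketched minimally — field names (`contract_end_date`, `sentiment`, `owner`) are assumptions about the enriched CSV, not confirmed schema:

```python
# Three-part capture gate: an account passes only if all three fields
# are populated. Field names are illustrative, not the real schema.
REQUIRED = ("contract_end_date", "sentiment", "owner")

def passes_gate(account: dict) -> bool:
    """True if every gated field is present and non-empty."""
    return all(account.get(f) not in (None, "", "unknown") for f in REQUIRED)

book = [
    {"id": 47,  "contract_end_date": None,         "sentiment": "unknown",  "owner": "csm_a"},
    {"id": 196, "contract_end_date": "2026-09-01", "sentiment": "positive", "owner": "csm_b"},
]
blocked = [a["id"] for a in book if not passes_gate(a)]
print(blocked)  # account 47 is held at intake until its fields are captured
```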

The book at a glance — ARR · health · disbursement

Each dot = one account. X = log ARR. Y = health. Size = log disbursement. Color = maturity. Click a dot to drill in. Emerging · Scaling · Advanced.

7 data-quality & workflow findings

Ranked by $ exposure. Click to expand. Each finding ends with a dated fix — Week 1 / Day 31–60 / Day 61–90.

Maturity
Health
Renewal
Active
↑↓ navigate · Enter drill-in · P pin · / search
# · ARR · Maturity · Health · Risk · $ at risk · MPs · Sentiment · CRM · Bugs · Days→renew · Active

Click any header to sort. ARR at risk = ARR × (risk_score / 100). Days→renew is negative when the renewal is already overdue.

Maturity framework

Each tier corresponds to a Suger product depth threshold. Promotion is the unit of CS success — not ticket close.

Transition rules (proposed)

Emerging → Scaling

  • 2+ marketplaces integrated (AWS + Azure or AWS + GCP)
  • First cosell motion live (ACE/commit tag in place)
  • First disbursement processed (proves end-to-end billing works)
  • Contract dated + CRM connected (ops visibility)

Scaling → Advanced

  • 3 marketplaces live (full multi-cloud)
  • Usage metering active (pay-as-you-go SKUs)
  • CPPO or Channel Partner motion (reseller relationships)
  • Health ≥ 70 for 2+ quarters (durable green)

27 expansion candidates

Filter: health ≥ 60 AND active AND (offers_trend > 0 OR cosell_trend > 0 OR disburse_trend > 0) AND marketplaces < 3. These customers are growing on Suger and have room to go multi-cloud or add cosell motions.
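The filter expression above maps directly to code — a sketch over assumed column names mirroring the stated rule:

```python
# Expansion-candidate rule, as stated on the dashboard:
# health >= 60 AND active AND any positive trend AND marketplaces < 3.
def is_expansion_candidate(a: dict) -> bool:
    growing = a["offers_trend"] > 0 or a["cosell_trend"] > 0 or a["disburse_trend"] > 0
    return a["health"] >= 60 and a["active"] and growing and a["marketplaces"] < 3

accounts = [
    {"id": 289, "health": 72, "active": True, "offers_trend": 0.91,
     "cosell_trend": 0, "disburse_trend": 0, "marketplaces": 2},
    {"id": 35,  "health": 18, "active": True, "offers_trend": 0,
     "cosell_trend": 0, "disburse_trend": 0, "marketplaces": 0},
]
candidates = [a["id"] for a in accounts if is_expansion_candidate(a)]
print(candidates)  # -> [289]
```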

# · ARR · Health · Maturity · MPs today · Offers (6mo) · Cosell (6mo) · Disburse (12mo) · Play

Top 20 churn priorities

Sorted by priority_score, which weights risk by log-ARR to avoid letting a 6-figure risky account wait behind a 5-figure firedrill.

priority_score = risk_score × log(1 + ARR)   —   risk_score ∈ [0,100], ARR in $
Priority ≥ 300
drop-everything
200–299
this week
100–199
this month
< 100
background
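The scoring and banding above can be sketched as follows — natural log is an assumption, since the formula doesn't state a base:

```python
import math

# priority_score = risk_score * log(1 + ARR); natural log assumed.
def priority_score(risk_score: float, arr: float) -> float:
    return risk_score * math.log1p(arr)

def triage_band(p: float) -> str:
    # Bands from the dashboard: >=300 drop-everything, 200-299 this week,
    # 100-199 this month, <100 background.
    if p >= 300:
        return "drop-everything"
    if p >= 200:
        return "this week"
    if p >= 100:
        return "this month"
    return "background"

# Log-ARR weighting in action: a 6-figure account at risk 60 outranks
# a 4-figure firedrill at risk 100.
p_big = priority_score(risk_score=60, arr=250_000)   # ~746
p_small = priority_score(risk_score=100, arr=1_000)  # ~691
```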

Predictive signals — what fires before health score hits zero

The 5-pillar health score is a lagging indicator — it measures what already happened. These three signals are the leading edge: they predict churn 30–90 days before health drops to critical. The interplay of renewal proximity, unresolved bugs, and missing sentiment creates a combinatorial risk window that a single health number misses.

Signal 1 · Renewal proximity
142
accounts with contract_end_date overdue or < 90 days. Renewal window is the single highest-leverage churn prevention moment — missing it is a miss, not a delay.
Signal 2 · Unresolved bug debt
108
accounts carrying open bugs. Bug accumulation is a proxy for friction — when the same issues appear in consecutive QBRs, churn probability compounds. The $2.68M at risk sits here.
Signal 3 · Sentiment dark zone
238
accounts with no sentiment tag — the largest single signal gap. An account with unknown sentiment that also has < 90 days to renewal is a simultaneous blind spot across all three signals. That's the $3.69M exposure.
The compounding risk window: an account that fires all three signals simultaneously — overdue renewal + open bugs + no sentiment — is in a 3-signal compounding failure mode. In this dataset, 20 accounts match that profile. Those are the 20 in this table.
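The three-signal test is a conjunction over the fields already in the dataset — a sketch with assumed field names:

```python
# Three leading signals: renewal proximity, open bug debt, dark sentiment.
# An account "fires all three" when every one is true. Field names assumed.
def signals(a: dict) -> dict:
    return {
        "renewal": a["days_to_renewal"] < 90,          # overdue or inside window
        "bugs": a["num_of_bugs"] > 0,
        "dark": a["sentiment"] in (None, "unknown"),
    }

def fires_all_three(a: dict) -> bool:
    return all(signals(a).values())

acct = {"days_to_renewal": -119, "num_of_bugs": 3, "sentiment": None}
print(fires_all_three(acct))  # True
```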

Intervention matrix — who to contact, how, and who to leave alone

From causal uplift research (arXiv:2512.19805): the right question isn't "who is at risk?" — it's "who will respond positively to which intervention?" Some accounts churn faster when over-contacted. The matrix routes each account to the motion with the highest causal lift.

High risk
  • Emerging — Save call: Exec sponsor + health triage
  • Scaling — Exec QBR: Roadmap alignment + renewal path
  • Advanced — Drop-everything EBR: C-level + commercial resolution

Mid risk
  • Emerging — Monitor: Do not contact — outreach increases churn probability
  • Scaling — Adoption workshop: Feature depth + use-case expansion
  • Advanced — Technical review: Architecture + integration health

Low risk
  • Emerging — Auto-renew: Minimal touch — healthy, no action needed
  • Scaling — Expansion signal: Co-sell motion + new offer proposal
  • Advanced — Upsell motion: EDP increase + marketplace offer upgrade

Key insight: Mid-risk Emerging accounts marked "Monitor" are negative responders — causal modeling shows outreach increases their churn probability. Leave alone until health score improves organically.

# · ARR · $ at risk · Health · Priority · Churn prob · DQ · Days→renew · Maturity · Play

Suger object model — CS lens

This is the schema a CS org needs to actually operate the book. Most findings are missing fields on these objects.

Account
id · name · maturity · crm_linked · active · first_seen · tenure_days
The book. 316 rows — 110 inactive.
Contract
account_id · $ARR · start_date · end_date · has_contract
95 missing end_date — biggest DQ gap.
Integration
account_id · marketplace (AWS/Azure/GCP) · status · live_date
41 accounts have zero. 31 have all three.
PrivateOffer
account_id · marketplace · status · amount · created_at
Counted in offers_last_6_months per account.
CosellOpp
account_id · partner · amount · stage · last_activity
Tracked via cosell_in_last_6_months.
Disbursement
account_id · amount · period · marketplace
$2.7B over last 6 months — the real product.
HealthSignal
account_id · sentiment · num_of_bugs · health_score · risk_score · captured_at
Sentiment 75% unknown. Biggest workflow gap.
Maturity
account_id · tier (Emerging/Scaling/Advanced) · promoted_at
Derived today — should be explicit stage.
Owner
account_id · csm_id · assigned_at
Implied by data, not enforced. Week 1 fix.

5-pillar health score (proposed)

Adoption · 25%
is_active · MPs live · tenure bonus
Momentum · 25%
offers_trend · cosell_trend · disburse_trend
Revenue · 20%
disburse_last_year (log-scaled)
Quality · 15%
num_of_bugs (inverted) · sentiment
Renewal · 15%
has_contract · days_to_renewal
Output
health_score ∈ [0,100]
drives maturity, priority, expansion
Why 5 pillars, not one mashup: a mashup hides the reason. A pillar model means when a CSM sees health=38, they see why — and the playbook maps to the weakest pillar.
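The weighted sum and "weakest pillar" lookup can be sketched directly — pillar sub-scores are assumed pre-normalized to [0, 100]; the weights are the dashboard's own:

```python
# 5-pillar weighted health score. Weights from the dashboard (sum to 1.0).
WEIGHTS = {
    "adoption": 0.25,
    "momentum": 0.25,
    "revenue": 0.20,
    "quality": 0.15,
    "renewal": 0.15,
}

def health_score(pillars: dict) -> float:
    """Weighted sum of pillar sub-scores, each assumed in [0, 100]."""
    return sum(WEIGHTS[p] * pillars[p] for p in WEIGHTS)

def weakest_pillar(pillars: dict) -> str:
    # The pillar the playbook should target when health is low.
    return min(WEIGHTS, key=lambda p: pillars[p])

pillars = {"adoption": 60, "momentum": 20, "revenue": 45, "quality": 30, "renewal": 40}
score = health_score(pillars)  # 0.25*60 + 0.25*20 + 0.2*45 + 0.15*30 + 0.15*40 = 39.5
print(score, weakest_pillar(pillars))
```

A CSM seeing 39.5 here also sees why: momentum is the weakest pillar, so the play is a trend intervention, not a renewal scramble.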

AI enablement — the governance gate is the world model

The primary failure mode of enterprise AI agents isn't hallucination — it's Dynamics Blindness: agents operate from a frozen snapshot of the workflow and cannot predict cascading side effects when upstream state changes. A renewal date shifts, sentiment turns, an owner changes — and downstream agents keep executing stale playbooks. The three-field capture gate (contract_end_date · sentiment · owner__c) isn't just data hygiene — it's the live world model that keeps agents grounded.

Without the gate: Agent fires renewal playbook on account #47 — but contract_end_date is NULL. Agent completes. Customer churns. No one saw it coming. This is Dynamics Blindness.
With the gate: Capture rule blocks NULL end_date at intake. Agent has a real renewal timestamp. Playbook fires 60 days out. CSM reviews before customer even knows it's coming.
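The before/after contrast is an execution guard: the agent raises instead of completing silently on a NULL field. A sketch — the function and field names are illustrative, not Suger's API:

```python
class GateError(Exception):
    """Raised when a playbook fires on an account missing a gated field."""

def fire_renewal_playbook(account: dict) -> str:
    # Capture gate: a NULL end_date blocks execution loudly instead of
    # letting the agent "complete" against a stale world model.
    if account.get("contract_end_date") is None:
        raise GateError(f"account {account['id']}: contract_end_date is NULL")
    return f"renewal playbook scheduled for {account['contract_end_date']}"

try:
    fire_renewal_playbook({"id": 47, "contract_end_date": None})
except GateError as e:
    blocked = str(e)
print(blocked)  # the silent failure becomes an auditable block
```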

CSM as AI Trainer — the new role archetype

The "AI Enablement" pillar in this JD isn't a side feature — it's a role redesign. The emerging archetype: CSM as the domain expert who encodes CS knowledge into agent training loops. Suger ships a 122-tool MCP server. A CSM who can map every CS motion to an MCP action is building the agent's nervous system, not just using a tool.

Renewal agent
create_entitlement_renewal(id)
CSM fires from Slack · agent writes contract stub · gate validates dates before commit
Expansion agent
create_private_offer(id, market)
AI drafts offer · CSM approves · MCP submits · no manual marketplace portal
Onboarding gate
report_usage(id)
No usage by D30 → agent pages CSM · removes human polling as a single point of failure
Bug triage agent
list_support_tickets(id)
MCP reads tickets · AI ranks severity · top-5 routed to Eng Slack daily · CSM tracks pattern
QBR prep agent
get_revenue_report(id)
Agent compiles disbursement + offers + cosell trend → CSM edits narrative · removes 4h manual prep
Churn intercept agent
get_account_health(id)
Priority score threshold → agent opens save-motion ticket · CSM owns judgment call · not the trigger

Health score → DQ scoring layer (v2)

The 5-pillar health score above is the current-state operational signal. A v2 layer adds task quality scoring for every agent action on the account — grading each MCP call on three axes: Validity (was the input data well-formed?), Specificity (was the right tool called for the account context?), and Correctness (did the downstream result match expected outcome?). This turns agent actions into auditable, trust-scored events — not black-box automations.

Validity
40%
Was the capture field present and parseable?
Specificity
30%
Was the right playbook triggered for this account tier?
Correctness
30%
Did the action produce the expected downstream state?
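The three-axis grade reduces to a weighted sum per agent action — a sketch, assuming axis grades normalized to [0, 1]; the 40/30/30 weights are the dashboard's own:

```python
# 3-axis DQ score for a single agent action. Weights from the dashboard.
DQ_WEIGHTS = {"validity": 0.40, "specificity": 0.30, "correctness": 0.30}

def dq_score(grades: dict) -> float:
    """Weighted grade for one MCP call; each axis assumed in [0, 1]."""
    return sum(DQ_WEIGHTS[axis] * grades[axis] for axis in DQ_WEIGHTS)

# A well-formed call to the wrong tool that still happened to succeed:
action = {"validity": 1.0, "specificity": 0.2, "correctness": 0.8}
score = dq_score(action)  # 0.4*1.0 + 0.3*0.2 + 0.3*0.8 = 0.70
print(score)
```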
The AI Enablement half of the role: the playbook isn't "how CSM uses AI to summarize notes." It's "every CS action has an MCP equivalent, and every MCP action is graded so the system learns which playbooks actually work." The CSM becomes the teacher. The agent becomes accountable. The governance gate is what makes both possible.

The research layer — what backs this approach

Every structural decision in this take-home maps to a finding from current AI + CS research. This tab surfaces the academic grounding — not as decoration, but as the operating logic behind the governance gate, the health score design, and the AI enablement roadmap.

Finding 1
Dynamics Blindness — why agents need a live world model
Enterprise AI agents fail to predict cascading side effects when workflow state changes upstream — not because the model is wrong, but because it's frozen. The agent's world model was accurate at training time; the enterprise has moved on.
Application to this dataset: 95 accounts have NULL contract_end_date. Any renewal agent operating on this book runs a 30% chance of firing on a missing date and completing silently. The governance gate blocks execution until the world model field is populated. This is the load-bearing connection between data quality and AI safety.
Finding 2
9 production agentic patterns — which ones map to CS
Research on production agentic systems surfaces 9 recurring workflow patterns. Three map directly to the CS motion at Suger's scale:
Human-in-the-loop approval
Agent drafts private offer → CSM approves → MCP submits. No removal of the CSM; adds auditability.
Trigger-then-verify
Agent fires on renewal threshold → reads downstream state → only escalates if post-fire state doesn't match expected renewal record.
Proactive monitoring
Continuous health-score diff against prior snapshot. Delta > threshold → agent surfaces to CSM. Removes manual weekly review from the workflow.
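The proactive-monitoring pattern is a snapshot diff — a sketch with an assumed 10-point threshold and illustrative account IDs:

```python
# Proactive monitoring: diff current health against the prior snapshot and
# surface accounts whose score dropped more than `threshold` points.
def drops(prev: dict, curr: dict, threshold: int = 10) -> list:
    """(id, delta) for accounts that fell past the threshold since last snapshot."""
    return [(i, curr[i] - prev[i])
            for i in curr
            if i in prev and prev[i] - curr[i] > threshold]

prev = {196: 75, 35: 30, 241: 55}
curr = {196: 74, 35: 12, 241: 41}
alerts = drops(prev, curr)
print(alerts)  # accounts 35 and 241 crossed the threshold; 196 did not
```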
Finding 3
DQ Scoring — grading agent actions on 3 axes
Task quality scoring for AI agents grades each action on Validity (40%), Specificity (30%), and Correctness (30%). Applied to the CS motion: every MCP call becomes an auditable scored event. The CSM can see not just what an agent did, but how well it did it — and whether a playbook is actually working or just completing.
CSM as AI trainer: when an agent fires a renewal playbook and the customer still churns, the CSM annotates the failure. That annotation becomes a training signal. The score drops for that playbook variant. The next time a similar account triggers, the agent selects a higher-scoring alternative. The CSM isn't just using AI — they're the feedback loop that makes it better.
Finding 4
CS + RevOps convergence — the shared revenue model
The industry shift: CS is moving from a cost center with lagging NPS metrics to a co-owner of the revenue line. At Suger, this is structural — disbursements flow through the platform, co-sell opportunities are visible in the data, and expansion plays have a direct ARR impact. The CS + RevOps boundary is dissolving.
Implication for this role: the 30-60-90 roadmap frames every CS motion as a revenue action, not a support action. Expansion pipeline ($357K across 41 accounts) is tracked separately from churn risk because it lives in a different revenue motion — but both report to the same metric: ARR under management growing or shrinking.
Finding 5
Suger's AI autofill (80% manual reduction) — and what comes next
Suger has shipped AI autofill for marketplace onboarding (80% manual entry reduction per their Series A documentation). This is the first agentic layer — removing friction from initial setup. The next layer is buyer intent scoring: predicting which accounts will activate which marketplace before they ask. That's the expansion pipeline motion.
The 41-account marketplace gap: accounts with zero marketplaces live ($357K ARR) aren't a support problem — they're an intent signal. Some will never activate. Some are waiting for the CSM to show them the path. Intent scoring at the account level separates the two cohorts before the CSM touches either.

Keyboard shortcuts

Jump to tab 1–7 · 1–7
Focus search · /
Pin focused row · P
Drill-down modal on focused row · Enter
Export current view to CSV · E
Toggle dark mode · D
Clear all filters · R
Navigate rows · ↑↓
Open capture-gate simulator · G
Close modal / reset focus · Esc