REV. MAY 2026
Methodology and sources
This page documents the vendor and analyst sources cited across the site, the calculation framework for each cost category, in-scope and out-of-scope coverage, refresh cadence, limitations, and the corrections process. Verified May 2026.
Vendor and analyst sources
The sources below are the primary references used across the site. Each is labelled by source type so readers can calibrate the kind of claim the source supports. Paywalled analyst research is cited by publisher name and category context only; we do not republish paywalled figures.
| Source | Type | Used for | Refresh cadence |
|---|---|---|---|
| Gartner CASB Magic Quadrant | Analyst research (paywalled, used for vendor-landscape context only) | CASB vendor landscape and category leadership context for /discovery-methods/casb and /tools-overview. We cite Gartner as an analyst publisher; we do not republish paywalled figures. | Annual |
| Gartner SaaS Management Platform analyst research | Analyst research (Peer Insights reviews are public) | SaaS management platform (SMP) market context on /tools-overview. Peer Insights review summaries used to characterise SMP coverage range; no paywalled figures republished. | Continuous (Peer Insights), Annual (formal reports) |
| Productiv State of SaaS | Vendor-published telemetry, customer sample | Benchmark for average enterprise SaaS application count (the 269-apps figure on the homepage). Cited with vendor name and sample bias label on /statistics. | Annual |
| Zylo SaaS Spend Benchmarks | Vendor-published telemetry, customer sample | SaaS spend per employee band and by category benchmarks on /statistics. Labelled as vendor-published with sample bias caveat. | Annual |
| BetterCloud State of SaaSOps | Vendor-published practitioner survey, respondent self-selection | SaaSOps practitioner findings on shadow IT adoption and SaaS governance maturity. Cited with vendor-published survey label. | Annual |
| Cisco Umbrella DNS-layer security data | Vendor-published DNS-traffic telemetry | Context for DNS-log analysis as a lower-cost CASB substitute on /discovery-methods/casb. Cited for category framing, not as a benchmark figure source. | Continuous (real-time intelligence), periodic public reports |
| Microsoft Defender for Cloud Apps | Vendor product reference | Tool example for CASB-style shadow IT discovery (especially in Microsoft-stack organizations) on /discovery-methods/casb and /tools-overview. Listed as a category example, not a vendor endorsement. | Continuous (vendor pricing and feature surface) |
| IBM Cost of a Data Breach Report | Primary-source measurement (Ponemon Institute methodology) | Breach-cost benchmark for the C-02 annualized loss expectancy math on /cost-categories/breach-risk. Industry-split figures are preferred over global average where the reader has a clear industry. Methodology appendix in the report. | Annual |
| Verizon Data Breach Investigations Report (DBIR) | Primary-source incident pattern data | Breach probability and incident pattern context for C-02. DBIR industry incident rates are the defensible anchor for breach-probability inputs when the reader has no internal threat model. | Annual |
| EU GDPR Article 83 (regulator page) | Regulatory primary source | Statutory penalty caps (4 percent of worldwide annual turnover, or 20 million euros, whichever is higher; tiered) on /cost-categories/compliance. Cap, not expected fine. | Updated when legislation amends |
| US HHS HIPAA Civil Money Penalty page | Regulatory primary source | Statutory penalty tiers under HIPAA for /cost-categories/compliance. Tier caps cited, not enforcement averages. | Updated when regulation amends |
| European Commission EU AI Act page | Regulatory primary source | Penalty tier framework for the EU AI Act on /cost-categories/compliance. Penalty tiers cited; enforcement practice still nascent. | Updated as the Act enters effect |
For the wider reading list including academic shadow IT literature, see /industry-data-sources.
In scope
- Four-category cost decomposition (observable spend, probabilistic breach exposure, compliance fine exposure, operational overhead).
- Five discovery methods (CASB and network analysis, SSO gap analysis, expense audit, browser inventory plus survey, SaaS management platform telemetry).
- Range-based estimator math: low, expected, high per category, summed to a combined range.
- Attribution assumptions: explicit shadow IT attribution for breach probability, explicit enforcement probability for compliance fines, labelled as assumptions rather than findings.
- Source-cited statistics ledger with the trust flag (primary, analyst, vendor-published, cannot-verify) for every figure.
- Regulator-page statutory caps for GDPR, HIPAA, PCI DSS, SOC 2, ISO 27001, EU AI Act.
Out of scope
- Vendor pricing quotes for CASB, SaaS management platforms, or DSPM products. We do not republish vendor pricing; readers consult the vendor pages directly.
- Enterprise-negotiated discounts or specific procurement terms for any vendor in the SaaS management or security tooling space.
- Specific organization breach probability claims; we provide framework anchors (DBIR, internal threat models, cyber insurance carrier estimates) and let the reader assemble the input.
- Point-estimate average shadow IT cost figures. We present ranges; a single average across organizations is methodologically indefensible.
- Legal advice on compliance fines. Statutory caps are cited from regulator pages; expected enforcement depends on jurisdiction, regulator discretion, and case-specific factors outside the scope of this site.
- Substitute for an actuarial threat model. The ALE math on /cost-categories/breach-risk is a transparent planning tool, not a replacement for cyber insurance underwriting or formal threat modelling.
Calculation framework
The site treats shadow IT cost as the sum of four categories, each with a distinct measurement method. Below is the calculation method per category, the combined exposure formula, and the range-presentation discipline that keeps the result defensible.
C-01 Observable spend
Expense audit plus SSO gap analysis. For paid SaaS leaving a financial trail, low = audited subscriptions actually identified; expected = audited plus the SSO-gap apps with subscription estimates; high = expected plus the survey-disclosed paid apps that are likely under-reported. Vendor-confirmed unit pricing applied where available.
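The low/expected/high assembly above can be sketched as a few lines of arithmetic. The per-bucket totals below are illustrative placeholders; the real values come from your own expense audit, SSO gap analysis, and survey.

```python
# C-01 observable spend: assemble the range from three audit buckets.
# All figures are illustrative placeholders, not benchmarks.
audited_spend = 120_000      # subscriptions actually identified in the expense audit
sso_gap_estimate = 45_000    # SSO-gap apps priced with subscription estimates
survey_disclosed = 20_000    # survey-disclosed paid apps, likely under-reported

low = audited_spend                          # only what the audit confirmed
expected = audited_spend + sso_gap_estimate  # audit plus SSO-gap estimates
high = expected + survey_disclosed           # plus survey-disclosed paid apps
```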
C-02 Probabilistic breach exposure (ALE)
Breach probability x shadow IT attribution x breach cost. Breach probability sourced from DBIR industry incident rates, cyber insurance carrier estimate, or internal threat model. Shadow IT attribution is an explicit 10 to 30 percent assumption labelled as such, with sensitivity analysis. Breach cost from IBM industry split where the industry is clear.
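A minimal sketch of the C-02 formula with the sensitivity pass across the attribution assumption. The probability and breach-cost inputs below are illustrative placeholders, not site benchmarks; anchor the real inputs on DBIR industry rates and IBM industry-split figures as described above.

```python
def shadow_it_ale(breach_probability, attribution, breach_cost):
    """C-02: ALE = breach probability x shadow IT attribution x breach cost."""
    return breach_probability * attribution * breach_cost

breach_probability = 0.10   # illustrative; anchor on a DBIR industry incident rate
breach_cost = 4_500_000     # illustrative; prefer an IBM industry-split figure

# Sensitivity analysis across the explicit 10-30 percent attribution assumption.
sensitivity = {
    attribution: shadow_it_ale(breach_probability, attribution, breach_cost)
    for attribution in (0.10, 0.20, 0.30)
}
```

The point of the dict is the spread, not any single cell: the result is labelled scenario-dependent because it moves threefold across the attribution band alone.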
C-03 Compliance fine exposure
Statutory cap (from regulator page) x subjective annual enforcement probability. Caps are public; enforcement probability is reader-supplied (1 to 10 percent typical; higher in regulated sectors). Multiplication produces an expected-value estimate of fine exposure, with the cap as the upper bound. We cite caps from official regulator pages, not from third-party summaries.
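The multiplication above is simple enough to show directly. The cap and probability below are illustrative (a 20 million euro GDPR-style cap with a reader-supplied 2 percent annual enforcement probability), not legal estimates.

```python
def fine_exposure(statutory_cap, enforcement_probability):
    """C-03: expected exposure = statutory cap x annual enforcement probability.
    The statutory cap itself remains the upper bound of the range."""
    return statutory_cap * enforcement_probability

cap = 20_000_000                                 # illustrative regulator-page cap
expected_fine_exposure = fine_exposure(cap, 0.02)  # reader-supplied probability
upper_bound = cap                                # cap, not expected fine
```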
C-04 Operational overhead
Internal IT time audit. Estimate the FTE share of IT capacity consumed by shadow IT handling (integration rework, offboarding gaps, ticket volume from unsupported apps). Multiply by IT loaded hourly rate from internal HR data. No fabricated industry benchmark; the figure is yours from start to finish.
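As a worked example of the time-audit math, assuming a hypothetical 5 percent FTE share of a 20-person IT team at 2,000 working hours per year and a $95 loaded hourly rate from HR data (all placeholders):

```python
def operational_overhead(fte_share, it_headcount, annual_hours, loaded_hourly_rate):
    """C-04: annual cost = (FTE share x headcount x annual hours) x loaded rate."""
    return fte_share * it_headcount * annual_hours * loaded_hourly_rate

# Illustrative inputs; every one of them comes from your own time audit and HR data.
annual_cost = operational_overhead(0.05, 20, 2_000, 95)
```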
Combined exposure
Low = sum of category lows; expected = sum of per-category expected values; high = sum of category highs. The combined range will span roughly an order of magnitude for most organizations; a narrower range typically overstates certainty. The estimator on /measure-your-exposure produces this combined output as a CSV for board decks.
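The summation discipline can be sketched as follows. The per-category tuples are illustrative placeholders (the C-03 high here uses an elevated enforcement probability rather than the full statutory cap, which remains the absolute ceiling):

```python
# Per-category (low, expected, high) tuples; all values illustrative.
categories = {
    "C-01 observable spend":     (120_000, 165_000, 185_000),
    "C-02 breach exposure":      (45_000, 90_000, 135_000),
    "C-03 compliance fines":     (200_000, 400_000, 2_000_000),
    "C-04 operational overhead": (95_000, 190_000, 380_000),
}

# Sum each bound independently across categories; never average them.
combined_low = sum(v[0] for v in categories.values())
combined_expected = sum(v[1] for v in categories.values())
combined_high = sum(v[2] for v in categories.values())
```

Even with these placeholder inputs the high is several multiples of the low, which is the expected behaviour; a tight range would be the warning sign.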
Range presentation
Never collapse to a single number. Lead with the expected value and show the low and high adjacent to it. The board can engage with the assumptions driving each category; a single number gives them nothing to engage with except a yes-or-no vote.
Refresh cadence
The verified-date constant rolls forward on the first business week of every month after a source re-check pass against the primary references in the sources table above. The current verified label is May 2026, held in one constant (LAST_VERIFIED_DATE) imported by every page so footer text, schema dateModified, and visible REV chips agree.
Out-of-cycle refreshes are triggered by:
- IBM Cost of a Data Breach annual refresh (usually Q3) -> update /cost-categories/breach-risk benchmark anchor and /statistics ledger row.
- Verizon DBIR annual refresh (usually Q2) -> update incident-pattern context on /cost-categories/breach-risk and the DBIR ledger row on /statistics.
- EU AI Act phased implementation milestones -> update /cost-categories/compliance penalty section as the Act enters effect.
- GDPR enforcement-trend shift (a new regulator-published methodology, a precedent-setting case) -> update /cost-categories/compliance enforcement-probability framing.
- Vendor-published State-of-SaaS / SaaSOps refresh from Productiv, Zylo, BetterCloud -> update the corresponding /statistics ledger rows and the homepage figure cards.
Limitations
- Shadow IT attribution to breach probability is not a solved problem in the security literature. Any specific attribution percentage is a planning assumption, not a measurement. The defensible posture is to state the assumption explicitly, show a sensitivity range, and label the result as scenario-dependent.
- Vendor-published benchmarks (Productiv, Zylo, BetterCloud) are telemetry from customers who self-selected into buying SaaS management tooling. They are not random-sample studies of enterprises. Sample bias is labelled on every vendor-published figure on /statistics.
- Figures repeated across the industry that we cannot trace to a primary public source are called out separately on /statistics rather than smuggled into confident-sounding paragraphs elsewhere. The honest section is the differentiator.
- Estimator outputs are bounded estimates of current exposure, not forecasts. Forecasted reduction from a governance program is a separate calculation on /governance-roi with the assumption-driven inputs labelled.
- Analyst paywalled research (Gartner, Forrester) is cited by publisher name and category context; figures behind paywalls are not republished. Readers seeking the analyst figure consult the analyst directly.
Corrections process
Spotted a stale citation, a missing source, an outdated regulatory cap, or a methodological gap? Email [email protected] with the page URL and the primary source you would like cited. Substantive corrections are typically actioned within five business days, with the verified-date constant rolled forward to reflect the update.
For broader editorial position, refer to /about.