Google Retires &num=100: 2025 SEO Rank Tracking Accuracy Update
Google retired the &num=100 results-per-page parameter in 2025, disrupting SEO rank tracking accuracy. Learn the impacts, expert strategies, and new KPIs now.


Updated on Oct 7, 2025 — This story is evolving. We’ll monitor changes in pagination behavior, device/locale differences, and vendor adaptations. Expect brief updates over the next 2–4 weeks, then monthly summaries through Q4 2025.
What changed and why it matters
In mid-September 2025, Google stopped honoring the unofficial “&num=100” URL parameter that historically let practitioners view up to 100 organic results on one SERP. Google stated the parameter is “not something they support,” per Search Engine Land’s confirmation published September 18, 2025.
The rollout looked uneven for a few days. On September 12, 2025, early tests captured by Barry Schwartz showed “&num=100” sometimes working, sometimes not—an intermittent phase documented in SERoundtable’s testing note. By mid-September, disruptions to rank trackers and desktop impression reporting were widely covered; see Search Engine Journal’s September 15, 2025 analysis.
Why this matters: many rank tracking methodologies assumed top-100 completeness in a single request. With “&num=100” now retired, data collection requires pagination (e.g., via the start= parameter), increasing request counts, costs, and latency, especially for teams tracking large portfolios.
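To make the new cost concrete, here is a minimal sketch of the offset arithmetic, assuming Google’s historical start= offset parameter and 10 organic results per page; how your tracking vendor actually exposes pagination will differ.

```python
# Minimal sketch: one &num=100 request becomes ten paginated requests.
# Assumes the historical "start=" offset parameter and 10 results per page;
# this illustrates the request math, it is not a scraping client.

def paginated_offsets(depth: int = 100, page_size: int = 10) -> list[int]:
    """Offsets needed to cover `depth` results at `page_size` per page."""
    return list(range(0, depth, page_size))

print(paginated_offsets(100))       # [0, 10, 20, ..., 90]
print(len(paginated_offsets(100)))  # 10 requests per keyword/device/locale
```

That tenfold request multiplier per keyword, device, and locale is where the cost and latency pressure comes from.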
What this breaks in measurement
Two immediate effects surfaced across practitioner datasets:
- Completeness assumptions break: the old “one call for 100 results” workflow is gone, and traditional top-100 rank distributions can no longer be replicated as-is.
- Practical retrievability limits: vendors and agencies observed an effective focus around the first ~20 results in many scenarios. Treat this as observed behavior—not an official cap.
Quantitative disruptions were documented across cohorts. Search Engine Land summarized a vendor dataset in a September 18, 2025 piece reporting broad impression declines and term visibility losses across hundreds of properties: see Search Engine Land’s impact data. On-the-ground operational impacts (10x request multiplication, slower reporting cadences, some vendors constraining depth temporarily) are described in Logical Position’s September 16, 2025 analysis.
The takeaway: accuracy in 2025 depends on rethinking the denominator—SERP depth, features, and mixed-method validation—rather than chasing pre-change top-100 exhaustiveness.
Redesign your KPIs and collection workflow
Move beyond linear ranks to visibility-centric metrics designed for page-one reality and modern SERPs (a computation sketch follows this list):
- Share of top-3/5/10 and page-1 coverage: percentage of tracked keywords occupying those tiers.
- SERP feature occupancy: your presence across sitelinks, video, Top Stories, and AI Overview placements.
- Volatility index: week-over-week movement across top positions.
- AI answer citation share: how often your brand or pages are cited in generative answers.
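As a rough sketch of how these tiers can be computed, assume a simple mapping from tracked keyword to best observed organic rank (None when not retrieved); the names and data shapes below are illustrative, not taken from any particular tool.

```python
# Illustrative visibility-tier KPIs over one week's rank observations.
# `ranks` maps tracked keyword -> best organic rank (None = not retrieved).
from statistics import mean

ranks = {"kw-a": 2, "kw-b": 7, "kw-c": 14, "kw-d": None}

def tier_share(ranks: dict[str, int | None], tier: int) -> float:
    """Share of tracked keywords ranking at or above `tier`."""
    return sum(1 for r in ranks.values() if r is not None and r <= tier) / len(ranks)

top3_share = tier_share(ranks, 3)       # 0.25
page1_coverage = tier_share(ranks, 10)  # 0.5, assuming ~10 organic slots on page 1

def volatility(prev: dict[str, int | None], curr: dict[str, int | None]) -> float:
    """Mean absolute week-over-week movement for keywords observed in both weeks."""
    moves = [abs(prev[k] - curr[k])
             for k in prev.keys() & curr.keys()
             if prev[k] is not None and curr[k] is not None]
    return mean(moves) if moves else 0.0
```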
Collection and processing tips:
- Paginate via start=: start=0 (results 1–10), start=10 (results 11–20), and so on. Rate-limit requests; parse, deduplicate, and aggregate. Log HTTP status, location, device, and UI variants (see the collection sketch after this list).
- Depth standard: adopt a routine top‑20 reporting standard; sample deeper only for mission-critical queries.
- Cross-validate against trend direction in Google Search Console impressions/clicks rather than raw “average position,” which can appear artificially improved when lower pages aren’t captured.
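A minimal sketch of that collection loop, under stated assumptions: fetch_serp_page is a hypothetical stub standing in for whatever SERP data vendor you use, and logging is reduced to a print.

```python
import time
from dataclasses import dataclass

@dataclass
class SerpPage:
    status: int
    ui_variant: str
    organic_urls: list[str]

def fetch_serp_page(keyword: str, start: int, device: str, locale: str) -> SerpPage:
    """Stub: swap in your SERP data vendor's client here (hypothetical)."""
    raise NotImplementedError

def collect_top_n(keyword: str, device: str, locale: str,
                  depth: int = 20, page_size: int = 10, pause_s: float = 2.0) -> list[str]:
    """Paginate start=0, 10, ... up to `depth`, dedupe, and log each request."""
    seen: list[str] = []
    for start in range(0, depth, page_size):
        page = fetch_serp_page(keyword, start, device, locale)
        # Log what you'll need to defend the methodology later.
        print({"kw": keyword, "start": start, "status": page.status,
               "device": device, "locale": locale, "ui": page.ui_variant})
        for url in page.organic_urls:
            if url not in seen:        # dedupe overlapping results across pages
                seen.append(url)
        time.sleep(pause_s)            # crude rate limiting between pages
    return seen
```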
Example workflow integration: If you’re broadening measurement to include AI answers alongside SERPs, a multi-platform tool like Geneo can be used to monitor brand visibility across Google AI Overviews, ChatGPT, and Perplexity while your rank tracking pipeline adjusts. Disclosure: Geneo is our product.
Sampling and QA: accuracy without runaway costs
You can maintain accuracy while controlling costs by adopting statistical sampling and rigorous QA:
- Stratified sampling: split your keyword set by intent (informational vs. commercial), device (desktop vs. mobile), and locale. Sample each stratum sufficiently to achieve a ±5–7% margin of error at 95% confidence for portfolio-level reporting (see the sample-size sketch after this list).
- Spot audits: run manual/panel checks on high-value queries weekly to validate tool output and catch UI anomalies.
- Method documentation: record pagination parameters, request pacing, deduplication rules, sampling sizes, and QA routines in an internal methodology doc.
- Vendor methodology differences: make sure your team understands the trade-offs between tools’ collection strategies; for a neutral overview of how AI monitoring approaches differ, see our AI brand monitoring comparison.
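For the sampling targets, the standard margin-of-error formula for a proportion, with a finite-population correction, gives per-stratum sizes; this is generic statistics, not vendor guidance.

```python
import math

def stratum_sample_size(population: int, moe: float = 0.05,
                        z: float = 1.96, p: float = 0.5) -> int:
    """Keywords to sample for +/- `moe` at ~95% confidence (z=1.96), worst case p=0.5."""
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return math.ceil(n)

# e.g., a 2,000-keyword mobile/commercial stratum:
print(stratum_sample_size(2000, moe=0.05))  # 323
print(stratum_sample_size(2000, moe=0.07))  # 179
```

Loosening the margin from ±5% to ±7% nearly halves the per-stratum cost, which is often an acceptable trade for portfolio-level reporting.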
Beyond blue links: add AI answer visibility to your KPI stack
Generative answer engines are now part of discovery. Expanding KPIs to include AI citation and sentiment metrics helps offset the blind spots of traditional scraping (a minimal computation sketch follows the list):
- AI citation rate: frequency of your brand or URLs appearing in AI answers.
- Sentiment trend: tone associated with your brand mentions in answers.
- Cross-engine coverage: presence across Google AI Overviews, ChatGPT, and Perplexity.
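A minimal sketch of the citation-rate and coverage metrics, assuming you already log, per prompt and engine, whether your brand or URL was cited; the record shape is hypothetical.

```python
# Illustrative AI-answer KPI math over a (hypothetical) log of answer checks.
observations = [
    {"engine": "google_ai_overviews", "prompt": "best crm 2025", "cited": True},
    {"engine": "chatgpt",             "prompt": "best crm 2025", "cited": False},
    {"engine": "perplexity",          "prompt": "best crm 2025", "cited": True},
]

def citation_rate(obs: list[dict]) -> float:
    """Share of observed answers that cite the brand, across all engines."""
    return sum(o["cited"] for o in obs) / len(obs)

def cross_engine_coverage(obs: list[dict]) -> float:
    """Share of monitored engines where the brand was cited at least once."""
    engines = {o["engine"] for o in obs}
    cited = {o["engine"] for o in obs if o["cited"]}
    return len(cited) / len(engines)

print(citation_rate(observations))          # ~0.67
print(cross_engine_coverage(observations))  # ~0.67
```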
To visualize how AI visibility reporting can look, review this short example: AI visibility report: GDPR fines 2025.
Stakeholder communication playbook
- One-page explainer: Briefly clarify that post‑September 2025 rank fluctuations may reflect measurement changes, not demand shocks. Include the new KPI set and your sampling/QA approach.
- Dashboard updates: Re-orient performance views to top‑3/5/10 share, page‑1 coverage, SERP feature occupancy, volatility, and AI citation metrics.
- Contracts and SLAs: Explicitly define post‑&num=100 data availability constraints (depth standards, sampling error bounds, update cadence) to maintain trust.
- Cadence: Weekly mini-updates through October for any observed UI/parameter shifts; monthly summaries in Q4 2025 as vendors stabilize.
Watchlist and update cadence
- Device/locale variance: Evidence is inconclusive; monitor for differences between desktop/mobile and across geographies.
- UI changes and caps: Treat any current “top‑20” effect as observed, not guaranteed; continue spot checks and sampling.
- Google Search Console interpretation: Expect lower impressions and sometimes an apparently improved average position if lower pages aren’t captured; validate against clicks and trend direction (a worked example follows this list).
- Official documentation: Track any updates from Google’s public statements; as of September 18, 2025, Google communicated through industry outlets that the parameter is unsupported, per Search Engine Land’s reporting.
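To see why average position can “improve,” here is a worked illustration with hypothetical impression data.

```python
# Hypothetical numbers showing why "average position" can look better
# once deep placements stop being recorded.
shallow = [(4, 120)]               # (position, impressions) still captured
deep    = [(45, 60), (88, 40)]     # deep placements lost post-change

def avg_position(rows: list[tuple[int, int]]) -> float:
    return sum(pos * imps for pos, imps in rows) / sum(imps for _, imps in rows)

print(avg_position(shallow + deep))  # ~30.5  (pre-change)
print(avg_position(shallow))         # 4.0    (post-change: looks like a jump)
```

Nothing moved on page one, yet the reported average jumps from roughly 30.5 to 4.0, which is why clicks and trend direction are the safer validation signals.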
What to do next
- Adopt the top‑20 reporting standard and redefine KPIs around visibility, features, and volatility.
- Implement stratified sampling with explicit error bounds, plus weekly spot audits on priority queries.
- Add AI answer visibility to your measurement stack to avoid misreading performance in 2025.
- If you’re expanding beyond traditional SERPs, consider incorporating a multi‑platform visibility tracker like Geneo to complement your rank tracking pipeline while vendors adapt.
Sources cited
- According to Search Engine Land’s confirmation on September 18, 2025, Google does not support the results-per-page parameter: Search Engine Land confirmation.
- Early rollout behavior was documented on September 12, 2025: SERoundtable testing note.
- Rank tracker disruptions and desktop impression impacts were covered on September 15, 2025: Search Engine Journal analysis.
- Quantified impact across cohorts was summarized on September 18, 2025: Search Engine Land impact data.
- Operational repercussions for teams and tools were discussed on September 16, 2025: Logical Position analysis.
