
Decoding the Coverage Report in Google Search Console

The Coverage Report reveals which of your pages Google has indexed and which ones have problems. Understanding this report is crucial for maintaining your site's search visibility.

The Coverage Report (also known as the Index Coverage Report or Pages Report) in Google Search Console provides a comprehensive overview of your website's indexing status. It shows exactly which pages Google has indexed, which ones have issues, and why certain pages are excluded from the index.

What is the Coverage Report?

The Coverage Report is a section of Google Search Console that provides detailed information about the indexing status of all URLs that Google has discovered on your website. It's essentially your window into understanding what Google knows about your site and how successfully it can add your pages to its search index.

This report helps you answer critical questions:

  • How many of my pages are indexed by Google?
  • Which pages have errors preventing indexing?
  • Why are certain pages excluded from the index?
  • Are there technical issues I need to fix?
  • Has indexing improved or declined over time?

Daily updates: The Coverage Report is updated daily, giving you near real-time visibility into your indexing status.

The Four Status Categories

The Coverage Report categorizes all discovered URLs into four main statuses, each represented by a distinct color:

1. Valid (Green)

Pages in this category are successfully indexed and can appear in Google Search results. This is what you want for all your important content pages. When you see a URL listed as "Valid," it means:

  • Google has crawled the page
  • The page met Google's quality standards
  • No technical issues prevented indexing
  • The page is eligible to appear in search results

2. Valid with Warnings (Yellow)

These pages are indexed but have potential issues that might affect their performance or could indicate problems you should address. The page appears in search results, but you should investigate the warnings.

3. Error (Red)

URLs with errors are NOT indexed. These represent serious issues that prevent Google from adding the page to its index. Error pages require immediate attention as they're essentially invisible to Google Search.

4. Excluded (Gray)

These URLs are not indexed, but intentionally so. Exclusions can be due to your own directives (robots.txt, noindex tags) or Google's decisions about duplicate content and canonicalization. Not all exclusions are problems.

Status                 Indexed?   Action Needed?
Valid                  Yes        No
Valid with Warnings    Yes        Investigate
Error                  No         Yes, fix urgently
Excluded               No         Review if unexpected

Understanding Error Types

When URLs show error status, the Coverage Report provides specific reasons. Here are the most common error types:

Server Errors (5xx)

Google received a server error when trying to crawl the page. This could be a 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, or similar. These errors indicate your server wasn't able to respond to Google's request.

  • Cause: Server overload, configuration issues, or temporary outages
  • Solution: Check server logs, optimize server resources, contact hosting provider
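As a companion to log analysis, it can help to bucket status codes the same way the Coverage Report does. The helper below is a hypothetical sketch (the function name and bucket labels are mine, not Google's); the status-code ranges follow standard HTTP semantics.

```python
def classify_status(status: int) -> str:
    """Map an HTTP status code to a rough Coverage-style bucket."""
    if 200 <= status < 300:
        return "ok"            # crawlable; indexing still depends on content
    if 300 <= status < 400:
        return "redirect"      # follow the chain; watch for loops
    if status in (404, 410):
        return "not found"     # expected for removed pages
    if 500 <= status < 600:
        return "server error"  # the 5xx bucket described above
    return "other"

for code in (200, 301, 404, 503):
    print(code, classify_status(code))
```

Running this over status codes pulled from your server logs makes it easy to count how many crawl attempts landed in the 5xx bucket.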

Redirect Error

Problems with your redirect configuration. This includes redirect loops, redirect chains that are too long, empty or invalid redirect URLs, or redirects that exceed the maximum chain length.

  • Cause: Misconfigured redirects, circular references, broken redirect targets
  • Solution: Audit your redirects, ensure each redirect points directly to the final destination
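Auditing redirects can be automated. The sketch below assumes you can export your redirect rules as a simple source-to-target map (the `redirect_map` format and `trace_redirects` helper are illustrative, not a standard tool); it walks each chain and flags loops and over-long chains.

```python
def trace_redirects(redirect_map, start, max_hops=5):
    """Follow redirects until a final URL, a loop, or too many hops."""
    chain = [start]
    seen = {start}
    url = start
    while url in redirect_map:
        url = redirect_map[url]
        if url in seen:
            return chain, "loop"      # redirect loop: a Coverage error
        chain.append(url)
        seen.add(url)
        if len(chain) > max_hops:
            return chain, "too long"  # long chains may not be followed
    return chain, "ok"

redirects = {
    "/old": "/interim",
    "/interim": "/new",
    "/a": "/b",
    "/b": "/a",   # deliberate loop for illustration
}
print(trace_redirects(redirects, "/old"))
print(trace_redirects(redirects, "/a"))
```

Any chain longer than one hop is a candidate for flattening: point the original URL directly at the final destination.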

Submitted URL Blocked by robots.txt

A URL in your sitemap is blocked by your robots.txt file. This is contradictory: you're telling Google the page is important (via the sitemap) while simultaneously blocking it (via robots.txt).

  • Cause: Overly broad robots.txt rules, sitemap including pages that shouldn't be indexed
  • Solution: Either update robots.txt to allow crawling or remove the URL from your sitemap
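You can reproduce this check locally before Google flags it. Python's standard-library `urllib.robotparser` can parse your robots.txt and test each sitemap URL against it (the robots rules and URLs below are illustrative).

```python
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# URLs you list in your sitemap (illustrative paths)
sitemap_urls = [
    "https://example.com/blog/post-1",
    "https://example.com/private/drafts",
]

for url in sitemap_urls:
    if not rp.can_fetch("Googlebot", url):
        print("Blocked but submitted:", url)
```

Any URL this prints is one Google will report under this error type: submitted in the sitemap, yet disallowed by robots.txt.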

Submitted URL Marked 'noindex'

This is a similar contradiction: a URL in your sitemap carries a noindex directive. You're asking Google to index a page while telling it not to index that same page.

  • Cause: Noindex tag accidentally applied, sitemap not filtered properly
  • Solution: Remove the noindex tag or remove the URL from your sitemap
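Accidental noindex tags can be caught with a simple scan of each page's HTML. This is a hypothetical sketch using Python's standard-library `html.parser` (the `NoindexFinder` class is mine); note that noindex can also arrive via the `X-Robots-Tag` HTTP header, which this HTML-only check won't see.

```python
from html.parser import HTMLParser

class NoindexFinder(HTMLParser):
    """Detect a <meta name="robots" content="... noindex ..."> tag."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if d.get("name", "").lower() == "robots" and \
                "noindex" in d.get("content", "").lower():
            self.noindex = True

html = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
finder = NoindexFinder()
finder.feed(html)
print("noindex:", finder.noindex)   # noindex: True
```

Run this over every URL in your sitemap and any hit is a contradiction to resolve before Google reports it.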

Submitted URL Returns Soft 404

The page returns a 200 (OK) status but appears to be an error page with no useful content. Google treats these as "soft" 404 errors.

  • Cause: Empty pages, placeholder content, error pages returning wrong status codes
  • Solution: Add meaningful content or return a proper 404 status code
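Soft 404s can be spotted proactively with a rough heuristic: a 200 response whose body is very short or reads like an error page. The function below is a hypothetical sketch (the phrase list and word threshold are illustrative tuning knobs, not Google's criteria).

```python
import re

ERROR_PHRASES = ("page not found", "no longer available", "404")

def looks_like_soft_404(status: int, body_text: str, min_words: int = 50) -> bool:
    """Guess whether a 200 page is really an error or empty page."""
    if status != 200:
        return False   # real error codes are handled elsewhere
    text = body_text.lower()
    if any(phrase in text for phrase in ERROR_PHRASES):
        return True
    # Very thin pages often get treated as soft 404s
    return len(re.findall(r"\w+", text)) < min_words

print(looks_like_soft_404(200, "Sorry, page not found."))  # True
```

Pages this flags should either get real content or return a genuine 404/410 status code.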

Errors in the Coverage Report require immediate attention. Each error represents a page that Google cannot index, potentially costing you valuable search traffic.

Valid with Warnings Explained

Pages that are "Valid with warnings" are indexed but have issues worth investigating:

Indexed, though blocked by robots.txt

Google indexed the page despite robots.txt blocking it. This happens when Google finds enough signals (like backlinks) suggesting the page is important. However, Google can only index the URL, not the content, since it can't crawl the page.

Page indexed without content

Google indexed the URL but couldn't process the content. This might indicate rendering issues, JavaScript problems, or server issues that occurred during crawling.

Warnings don't prevent indexing, but they often indicate misconfigurations or missed optimization opportunities. Address them when possible.

Common Exclusion Reasons

Not all exclusions are problems. Many are intentional or expected. Here are the most common exclusion reasons:

Excluded by 'noindex' tag

The page has a noindex directive and Google is respecting it. This is intentional exclusion; no action is needed unless you didn't mean to noindex the page.

Blocked by robots.txt

Your robots.txt file prevents crawling. Verify this is intentional for the affected URLs.

Crawled - currently not indexed

Google crawled the page but chose not to index it. This often indicates quality concerns - the content may be thin, duplicate, or not valuable enough to warrant indexing.

Discovered - currently not indexed

Google knows about the URL but hasn't crawled it yet. This may be due to crawl budget limitations or Google determining the page isn't a priority.

Duplicate without user-selected canonical

Google detected duplicate content and selected a canonical URL that differs from the one you're inspecting. The inspected URL is not indexed; the canonical version is.

Duplicate, Google chose different canonical

You specified a canonical URL, but Google selected a different one. This indicates Google disagrees with your canonical designation.

Alternate page with proper canonical tag

This page correctly points to another page as the canonical version. This is working as intended.
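When debugging the canonical-related exclusions above, the first step is usually to confirm what canonical each page actually declares. This sketch extracts the `rel="canonical"` link with Python's standard-library `html.parser` (the `CanonicalFinder` class is mine) so you can compare your declared canonical against the one Google reports in URL Inspection.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Extract the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        d = dict(attrs)
        if d.get("rel", "").lower() == "canonical" and self.canonical is None:
            self.canonical = d.get("href")

html = '<head><link rel="canonical" href="https://example.com/page"></head>'
f = CanonicalFinder()
f.feed(html)
print(f.canonical)   # https://example.com/page
```

If the declared canonical differs from the one Google selected, look for conflicting signals: internal links, redirects, or sitemap entries pointing at the other URL.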


How to Fix Coverage Issues

Follow this systematic approach to address coverage problems:

Step 1: Prioritize by Impact

Focus first on errors affecting important pages. Use these criteria:

  • Revenue-generating pages (product, service pages)
  • High-traffic pages (according to analytics)
  • Pages with valuable backlinks
  • New content you want indexed quickly

Step 2: Investigate the Root Cause

Click on any issue type to see the affected URLs. Use URL Inspection to get detailed information about specific pages. Look for patterns: are all affected URLs in a specific directory or generated by a particular template?
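Pattern-hunting is easy to automate once you export the affected URLs from the report. This sketch groups URLs by their first path segment using only the standard library (the example URLs are illustrative); a heavy concentration in one section usually points at a shared template or configuration.

```python
from collections import Counter
from urllib.parse import urlparse

# URLs exported from the Coverage Report's affected-URLs list (illustrative)
affected = [
    "https://example.com/tag/seo",
    "https://example.com/tag/gsc",
    "https://example.com/blog/post-1",
]

# Count affected URLs per top-level path segment
sections = Counter(
    urlparse(u).path.split("/")[1] or "(root)" for u in affected
)
for section, count in sections.most_common():
    print(f"/{section}: {count} affected URL(s)")
```

Here the `/tag/` section dominates, which would suggest inspecting the tag-page template rather than each URL individually.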

Step 3: Implement Fixes

Based on your investigation:

  1. Fix server configuration issues
  2. Correct redirect problems
  3. Update robots.txt if blocking important pages
  4. Remove noindex tags from pages that should be indexed
  5. Add quality content to thin pages
  6. Implement proper canonical tags

Step 4: Request Validation

After fixing issues, use the "Validate Fix" button to have Google re-check the affected URLs.

Validating Your Fixes

The Coverage Report includes a validation feature that lets you confirm fixes have been successful:

  1. Click on the issue type you've fixed
  2. Click "Validate Fix" button
  3. Google will start recrawling the affected URLs
  4. Monitor the validation status over the following days

Validation statuses include:

  • Started: Validation has begun
  • Looking good: Initial checks passed
  • Passed: All instances fixed successfully
  • Failed: Some instances still have the issue

Ongoing Monitoring

Make Coverage Report monitoring part of your regular SEO routine:

  1. Weekly checks: Review the report at least weekly to catch new issues early
  2. Track trends: Monitor the total number of indexed pages over time
  3. Alert setup: Configure email notifications for coverage issues
  4. Document changes: Keep notes on issues you've fixed and their resolutions
  5. Correlate with updates: Check coverage after site updates or redesigns

A sudden drop in valid pages or spike in errors often indicates a technical issue that needs immediate attention. Set up regular monitoring to catch these quickly.
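Trend tracking can be scripted once you record the daily count of valid (indexed) pages, for example copied from the report or exported on a schedule. The helper below is a hypothetical sketch (the function name and 10% threshold are illustrative choices) that flags any day-over-day drop larger than a threshold.

```python
def detect_drop(history, threshold=0.10):
    """Flag day-over-day drops in indexed-page counts larger than threshold."""
    alerts = []
    for prev, curr in zip(history, history[1:]):
        if prev and (prev - curr) / prev > threshold:
            alerts.append((prev, curr))
    return alerts

counts = [1200, 1210, 1195, 980]   # illustrative daily totals of valid pages
print(detect_drop(counts))          # [(1195, 980)]
```

A flagged drop is your cue to open the Coverage Report and check whether a recent deploy, robots.txt change, or server issue is responsible.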

Conclusion

The Coverage Report is one of the most valuable tools in Google Search Console for understanding and improving your site's indexation. By regularly monitoring this report and promptly addressing issues, you ensure that Google can properly index all your important content.

Key takeaways:

  • Understand the four status categories and what they mean for your pages
  • Prioritize fixing errors on your most important pages
  • Not all exclusions are problems; many are intentional
  • Use the validation feature to confirm your fixes work
  • Monitor the report regularly to catch issues early
  • Look for patterns when investigating issues

Automate Your Indexing Workflow

While monitoring is essential, prevention is better. RSS AutoIndex ensures your new content gets to Google quickly, reducing the chance of "discovered but not indexed" issues.
