A crawler, also known as a spider or a bot, is the software Google uses to process and index the content of webpages. The content crawler visits your site to determine its content in order to provide relevant ads.
Here are some important facts to know about the content crawler:
- The crawler report is updated weekly.
  The crawl is performed automatically and we're not able to accommodate requests for more frequent crawling.
- The content crawler is different from the Google crawler.
  The two crawlers are separate, but they do share a cache. We do this to avoid both crawlers requesting the same pages, thereby helping publishers conserve their bandwidth. Similarly, the Search Console crawler is separate.
- Resolving content crawler issues will not resolve issues with the Google crawl.
  Resolving the issues listed on your Crawler access page will have no impact on your placement within Google search results. For more information on your site's ranking on Google, review the AdSense article on getting included in Google search results.
- The crawler indexes by URL.
  Our crawler will access site.com and www.site.com separately. However, our crawler will not count site.com and site.com/#anchor separately.
- The crawler won't access pages or directories prohibited by a robots.txt file.
  Both the Google and AdMob Mediapartners crawlers honor your robots.txt file. If your robots.txt file prohibits access to certain pages or directories, they will not be crawled. Note that if you're serving ads on pages that are blocked with a general User-agent: * rule, the content crawler will still crawl those pages. To prevent the content crawler from accessing your pages, you need to add a User-agent: Mediapartners-Google group (with its own Disallow rule) to your robots.txt file, as shown in the sketch after this list.
- The crawler will attempt to access URLs only where our ad tags are implemented.
  Only pages displaying Google ads should be sending requests to our systems and being crawled; a sketch of a typical ad tag appears after this list.
- The crawler will attempt to access pages that redirect.
  When you have "original pages" that redirect to other pages, our crawler must access the original pages to determine that a redirect is in place. Therefore, our crawler's visits to the original pages will appear in your access logs.
- Re-crawling sites
  At this time, we're unable to control how often our crawlers index the content on your site. Crawling is done automatically by our bots. If you make changes to a page, it may take up to 1 or 2 weeks before the changes are reflected in our index.
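
To illustrate the robots.txt point above, here is a minimal sketch of the kind of file involved. The directory name /private/ is a placeholder; adjust the Disallow paths to match your own site.

    # General rule for all crawlers. On pages that serve Google ads,
    # the content crawler does not follow this catch-all group.
    User-agent: *
    Disallow: /private/

    # Group addressed specifically to the content crawler. "Disallow: /"
    # keeps it off the entire site; use a narrower path to block only
    # part of the site.
    User-agent: Mediapartners-Google
    Disallow: /

Because a crawler follows the most specific user-agent group that matches it, the Mediapartners-Google group above is the one the content crawler obeys.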
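
For the point about ad tags, the sketch below shows roughly what a web ad unit looks like in a page's HTML; an AdSense-style display tag is assumed here, and the ca-pub client ID and ad slot ID are placeholders. Pages containing a tag like this request ads from our systems, and those are the pages the content crawler will attempt to visit.

    <!-- Loads the ads library; the client ID identifies the publisher account. -->
    <script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-0000000000000000"
            crossorigin="anonymous"></script>

    <!-- The ad unit itself; the slot ID identifies this particular placement. -->
    <ins class="adsbygoogle"
         style="display:block"
         data-ad-client="ca-pub-0000000000000000"
         data-ad-slot="1234567890"
         data-ad-format="auto"></ins>

    <!-- Asks the library to fill the ad unit above. -->
    <script>
      (adsbygoogle = window.adsbygoogle || []).push({});
    </script>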