Is your Site Audit not running properly?
There are a number of reasons why pages could be blocked from the Site Audit crawler based on your website’s configuration and structure, including:
- Robots.txt blocking crawler
- Crawl scope excluding certain areas of the site
- Website is not directly online due to shared hosting
- Pages are behind a gateway / user base area of site
- Crawler blocked by noindex tag
- Domain could not be resolved by DNS (the domain entered in setup is offline)
Follow these troubleshooting steps to see if you can make any adjustments on your own before reaching out to our support team for help.
- Check your Robots.txt file for disallow commands
- Remove noindex code on site
- Whitelist the SEMrushBot
- Check your account limits
- Make sure there are proper redirects to relevant versions of your site
- Change crawl source to sitemap
- Change the User Agent (from SEMrushBot to GoogleBot)
- Bypass disallow in robots.txt
- Crawl password protected areas with your credentials
- Contact SEMrush Support for further assistance
Check Your Robots.txt File for Disallow Commands
A robots.txt file gives instructions to bots about how to crawl (or not crawl) the pages of a website.
You can inspect your Robots.txt to see if there are any disallow commands that would prevent crawlers like ours from accessing your website.
If you do not want to modify the robots.txt file, see the workarounds described further below.
To check the robots.txt file of a website, enter the root domain of your site followed by /robots.txt. For example, the robots.txt file on target.com is found at http://www.target.com/robots.txt.
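If you would rather check the rules programmatically, Python's standard urllib.robotparser can evaluate a robots.txt file against our crawler's user agent. The robots.txt content below is a made-up example; in practice you would fetch your own file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration; in practice,
# fetch it from https://www.yoursite.com/robots.txt
robots_txt = """\
User-agent: SemrushBot-SA
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The bot may fetch the homepage...
print(parser.can_fetch("SemrushBot-SA", "https://www.example.com/"))  # True
# ...but the Disallow rule blocks anything under /private/
print(parser.can_fetch("SemrushBot-SA", "https://www.example.com/private/page"))  # False
```

If the second call returns False for a page you expect to be audited, the robots.txt file is what is blocking the crawler.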
To allow SemrushBot-SA to crawl your site, add the following to your robots.txt file:
(leave a blank space after “Disallow:”)
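Assuming you want to grant SemrushBot-SA unrestricted access, a minimal rule looks like this (the empty value after "Disallow:" means nothing is disallowed):

```text
User-agent: SemrushBot-SA
Disallow:
```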
Remove NOINDEX Code on Site
If you see the following code on the main page of a website, it tells us that we’re not allowed to index/follow links on it and our access is blocked.
<meta name="robots" content="noindex, nofollow" >
Likewise, any page whose robots meta tag contains "noindex", "nofollow", or "none" will trigger a crawl error.
To allow our bot to crawl such a page, remove the “noindex” tag from your page’s code.
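For example, the page's robots meta tag would change like this (explicitly allowing is equivalent to removing the tag, since indexing and following are the defaults):

```html
<!-- Before: blocks crawlers from indexing the page and following its links -->
<meta name="robots" content="noindex, nofollow">

<!-- After: allows indexing and following (or simply delete the tag) -->
<meta name="robots" content="index, follow">
```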
Whitelist the SEMrushBot
To whitelist the bot, contact your webmaster or hosting provider and ask them to whitelist SemrushBot-SA.
The bot's IP addresses are:
The bot uses the standard HTTP port 80 and HTTPS port 443 to connect.
If you use any plugins (WordPress, for example) or CDNs (content delivery networks) to manage your site, you will have to whitelist the bot's IP addresses within those as well.
For whitelisting on WordPress, contact WordPress support.
Common CDNs that block our crawler include:
- Cloudflare - see Cloudflare's documentation on whitelisting bots.
- Incapsula - see Incapsula's whitelisting documentation (add SEMrush as a “Good bot”).
- ModSecurity - see ModSecurity's whitelisting documentation.
- Sucuri - see Sucuri's whitelisting documentation.
Please note: If you have shared hosting, it is possible that your hosting provider may not allow you to whitelist any bots or edit the Robots.txt file.
Check Account Limits
To see how much of your current crawl budget has been used, go to Profile - Subscription Info and look for “Pages to crawl” under “My plan.”
Depending on your subscription level, you are limited to a set number of pages that you can crawl in a month (your monthly crawl budget). If you go over the number of pages allowed by your subscription, you'll have to purchase additional limits or wait until the next month, when your limits refresh.
Proper Redirects (for DNS Issues)
If the domain could not be resolved by DNS, it likely means that the domain you entered during configuration is offline. Commonly, users have this issue when entering a root domain (example.com) without realizing that the root domain version of their site doesn’t exist and the WWW version of their site would need to be entered instead (www.example.com).
To prevent this issue, the website owner could add a redirect from the non-existent root domain "example.com" to the "www.example.com" version that exists on the server. This issue can also occur the other way around: if the WWW version of a site doesn't resolve but the root domain does, you would just have to redirect the WWW version to the root domain instead.
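As a sketch, on an Apache server with mod_rewrite enabled, such a redirect could be added to the site's .htaccess file (example.com stands in for your own domain):

```apacheconf
# 301-redirect the bare root domain to the www version that exists
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

On nginx or other servers the equivalent is a permanent redirect configured for the bare-domain server block.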
Change Crawl Source to Sitemap
To make sure our crawl doesn't miss the most important pages on your website, you can change your crawl source from website to sitemap; this way, we won't miss any pages that are listed in the sitemap.
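For reference, a sitemap is an XML file that lists the URLs you want crawled; a minimal one, following the sitemaps.org protocol, looks like this (the URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/important-page/</loc>
  </url>
</urlset>
```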
Change User Agent
Your website may be blocking SEMrushBot in your robots.txt file. You can change the User Agent from SEMrushBot to GoogleBot, since your website is likely to allow Google's user agent to crawl. To make this change, find the settings gear in your Project and select User Agent.
Bypass Disallow in Robots.txt
If this option is used, the blocked internal resources and pages blocked from crawl checks will not be triggered. Keep in mind that site ownership must be verified to use it.
This is useful for sites that are currently under maintenance. It's also helpful when the site owner does not want to modify the robots.txt file.
Crawl Password Protected Areas with Your Credentials
To audit private areas of your website that are password protected, enter your credentials in the “Crawling with your credentials” option under the settings gear. This setting allows the Site Audit bot to reach those pages and audit them for you.
This is highly recommended for sites that are still under development or that are private and fully password protected.
Contact SEMrush Support
If you are still having issues running your Site Audit, send an email to email@example.com or call us at the number on the website footer to explain your problem.
Further reading: Check out our 2017 study of the most common technical SEO mistakes.