A few days ago, we received an email from Google saying that Googlebot was unable to access CSS and JS files on some of our clients’ websites. This caught our attention immediately: Google noted that the issue can result in suboptimal rankings!
So, what did we do? We simply followed Google’s instructions.
First, you need to identify the blocked resources.
To do this, log into your Google Webmaster dashboard, open the “Fetch as Google” tool under Crawl, and choose Fetch and Render for your URL.
After a few minutes, you will be able to access the results page that will show you a list of blocked resources.
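While you wait, you can also sanity-check a rule locally. Here is a minimal sketch using Python’s standard `urllib.robotparser`; the `Disallow` rule and URLs below are hypothetical examples, not our client’s actual configuration:

```python
# Check whether a given user agent may fetch a URL under a robots.txt ruleset.
# The rules and URLs here are placeholders -- substitute your own.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-includes/",
])

# The wildcard rule applies to Googlebot, so this script is blocked:
print(rp.can_fetch("Googlebot", "https://example.com/wp-includes/js/jquery/jquery.js"))  # False
# Nothing disallows this stylesheet, so it is allowed:
print(rp.can_fetch("Googlebot", "https://example.com/style.css"))  # True
```

You can also point `set_url()` at your live `robots.txt` and call `read()` instead of `parse()` to test the file you actually serve.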
After identifying the blocked resources, you should unblock them in your robots.txt.
We found out that the blocked resources on our client’s site are located in the following folders:
Take note that there are many ways to do this. In our client’s case, we unblocked the resources by removing the restrictions in their robots.txt. This is how their robots.txt looks now:
User-agent: *
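For reference, the simplest robots.txt that blocks nothing at all can be sketched as follows (an empty Disallow value means every crawler may fetch everything, so adapt this to your own site before using it):

```
User-agent: *
Disallow:
```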
If you are using a CDN like we do, you’re going to have to use a custom robots.txt setup to unblock the resources served from your CDN.
1. Log in to your MaxCDN account.
2. Click Manage beside your site’s URL, then choose SEO.
3. Tick all the checkboxes and enter your custom robots.txt.
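If you would rather keep some areas blocked while explicitly unblocking static assets, Googlebot also understands Allow directives and simple wildcards. A hypothetical custom robots.txt along these lines might look like the sketch below (the paths are placeholders, not MaxCDN defaults, and matching precedence varies by crawler, so verify with Fetch as Google afterward):

```
User-agent: *
Disallow: /wp-admin/
Allow: /*.css$
Allow: /*.js$
```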
The final step is to check if the resources are now unblocked. To do this, just repeat the first step.
If your result still comes back as “Partial”, you may be using blocked resources from other sites.
Don’t worry: Google won’t penalize you for those, because resources hosted on other sites are outside your control as the webmaster.