Google's updated guideline specifies that you should allow Googlebot access to the JavaScript, CSS, and image files that your pages use, so your site can be rendered and indexed properly. Disallowing crawling of JavaScript or CSS files in your site's robots.txt directly harms how well Google's algorithms render and index your content and can result in suboptimal rankings.
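For illustration, here is a minimal sketch of the problem and the fix in robots.txt. The /js/, /css/ and /admin/ directory names are just placeholders; your own paths will differ.

    # Problematic: these rules keep Googlebot from fetching the files it needs to render your pages
    User-agent: Googlebot
    Disallow: /js/
    Disallow: /css/

    # Safer: block only what genuinely has to stay private, and leave scripts, styles and images crawlable
    User-agent: *
    Disallow: /admin/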
Optimizing the process
Google's indexing systems used to resemble old text-only browsers such as Lynx. Now that indexing is based on page rendering, that approximation is obsolete; a more accurate picture is what a modern web browser sees, with all the CSS and JavaScript applied.
To ensure optimal crawling and indexing, here are some things you can do:
- Make sure your web design adheres to the principles of progressive enhancement, i.e. it works on a wide range of browsers, and browsers that do not support certain features can still access the content and offer basic functionality to the user (see the sketch after this list).
- Optimize the page load process, for example by eliminating unnecessary downloads and by minifying and combining your CSS and JavaScript files.
- Serving full JavaScript and CSS files to search bots puts extra demand on your servers. Make sure they can handle the load and have enough bandwidth.
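As a rough sketch of progressive enhancement, the markup below keeps the actual content in plain HTML so any crawler or older browser can read it, and loads a non-essential script with the defer attribute so it neither blocks rendering nor is required to reach the content. The file names (style.css, enhance.js) are placeholders.

    <!DOCTYPE html>
    <html>
    <head>
      <title>Post title</title>
      <link rel="stylesheet" href="/css/style.css">
      <!-- defer: downloads in parallel, runs after the HTML is parsed,
           so the page stays readable even if the script never executes -->
      <script src="/js/enhance.js" defer></script>
    </head>
    <body>
      <article>
        <h1>Post title</h1>
        <p>The full article text lives here in plain HTML, readable without JavaScript.</p>
      </article>
    </body>
    </html>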
To make sure that Googlebot can render your web pages properly, use the fetch-and-render option of the Fetch as Google tool in Google Webmaster Tools, which Google introduced a few months back. At the time, Google said you should make sure not to block these files, because Googlebot tries to render your full HTML.
The old guideline read:

Use a text browser such as Lynx to examine your site, because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site.
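If you are curious what that text-only view looks like, you can still dump a page with Lynx from a terminal (assuming Lynx is installed; replace the example URL with one of your own pages):

    lynx -dump https://www.example.com/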
Now, the updated guidelines read:
To help Google fully understand your site’s contents, allow all of your site’s assets, such as CSS and JavaScript files, to be crawled. The Google indexing system renders webpages using the HTML of a page as well as its assets such as images, CSS, and JavaScript files. To see the page assets that Googlebot cannot crawl and to debug directives in your robots.txt file, use the Fetch as Google and the robots.txt Tester tools in Webmaster Tools.
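If you do need to keep a directory blocked but it also holds scripts or stylesheets, Googlebot honours wildcard patterns and more specific Allow rules, so a sketch like the one below (the /private/ path is a placeholder) keeps those assets crawlable. The robots.txt Tester will show you which rule wins for any given URL.

    User-agent: Googlebot
    Disallow: /private/
    Allow: /private/*.css$
    Allow: /private/*.js$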
Check out the announcement on the Google Webmaster blog for more details.