Search engines crawling your AJAX site
This post begins with a particular dilemma that SEOs have often faced:
websites that use AJAX to load content into the page can be much quicker and provide a better user experience
BUT: these websites can be difficult (or impossible) for Google to crawl, and using AJAX can damage the site's SEO.
Fortunately, Google has made a proposal for how webmasters can get the best of both worlds. I'll provide links to Google documentation later in this post, but it boils down to some relatively simple concepts.
Although Google made this proposal a year ago, I don't feel that it's attracted a great deal of attention - even though it ought to be particularly useful for SEOs. This post is aimed at people who haven't explored Google's AJAX crawling proposal yet - I'll try to keep it short, and not too technical!
I'll explain the concepts and show you a famous site where they're already in action. I've also set up my own demo, which includes code that you can download and look at.
The Basics
Essentially, sites following this proposal are required to make two versions of their content available:
Content for JS-enabled users, at an 'AJAX style' URL
Content for the search engines, at a static 'traditional' URL - Google refers to this as an 'HTML snapshot'
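To make that concrete, here's a minimal sketch of the 'two versions' idea. It assumes a hypothetical Node/Express server written in TypeScript; the route names and page content are my own inventions for illustration, not part of Google's proposal:

// The same content is stored once but exposed twice: as a bare
// fragment for the AJAX front end, and as a complete static page
// (the 'HTML snapshot') for crawlers.
import express from "express";

const app = express();

const pages: Record<string, string> = {
  home: "<h1>Home</h1><p>Welcome!</p>",
  about: "<h1>About</h1><p>Who we are.</p>",
};

// AJAX-style: the client-side script fetches bare fragments from
// here and injects them into the current page without a reload.
app.get("/fragment/:name", (req, res) => {
  res.send(pages[req.params.name] ?? "Not found");
});

// Traditional: crawlers get the same content as a full HTML document.
app.get("/snapshot/:name", (req, res) => {
  res.send(`<!DOCTYPE html><html><body>${pages[req.params.name] ?? "Not found"}</body></html>`);
});

app.listen(3000);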
Historically, developers made use of the 'named anchor' part of URLs on AJAX-powered websites (this is the 'hash' symbol, #, and the text following it). For example, take a look at this demo - clicking menu items changes the named anchor and loads the content into the page on the fly. It's great for users, but search engine spiders can't deal with it.
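For the curious, a page like that demo can be driven by just a few lines of browser script. This sketch uses browser-side TypeScript with the modern fetch API (rather than the XMLHttpRequest of the era); the 'content' element ID and the /fragment/ URL are my hypothetical names from the server sketch above:

// When the named anchor changes, fetch the matching fragment and
// inject it into the page; no full page load ever happens.
async function loadFromHash(): Promise<void> {
  const page = window.location.hash.slice(1) || "home"; // text after the '#'
  const res = await fetch(`/fragment/${page}`);
  document.getElementById("content")!.innerHTML = await res.text();
}

window.addEventListener("hashchange", loadFromHash);
loadFromHash(); // also populate the page on first load

The catch, and the reason spiders struggle, is that everything after the # never reaches the server: a crawler requesting the URL simply gets the default page, with none of the AJAX-loaded content.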
Rather than using a hash, #, the new proposal requires using a hash and an exclamation point: #!
The #! combination has occasionally been called a 'hashbang' by people geekier than me; I like the sound of that term, so I'm going to stick with it.
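So what does the hashbang actually buy you? Under Google's proposal, when the crawler meets a #! URL it re-requests the page with everything after the #! moved into a special query parameter, _escaped_fragment_, and your server answers that request with the HTML snapshot. A rough sketch of the translation (I'm skipping the character-escaping rules, which are covered in the Google documentation linked later):

// Translate a hashbang URL into the URL Google's crawler fetches.
// Real crawlers also percent-encode certain special characters in
// the fragment; that detail is omitted here for simplicity.
function escapedFragmentUrl(url: string): string {
  const [base, fragment = ""] = url.split("#!");
  const sep = base.includes("?") ? "&" : "?";
  return `${base}${sep}_escaped_fragment_=${fragment}`;
}

// escapedFragmentUrl("http://example.com/#!about")
//   -> "http://example.com/?_escaped_fragment_=about"

On the server side, all that's left is to spot the _escaped_fragment_ parameter and respond with the snapshot version of the content, along the lines of the /snapshot/ route sketched earlier.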
Comments (1)
Synbitz Seven
Thanks, nice information... I decided not to go with an AJAX-powered application because of SEO... what's a site with switching content all on the same URL? Nothing... I still rewrite URLs to mass-index pages for webhost-choice... all is going well