Basically, a guy set up a test page with unique made-up words placed three ways: hardcoded in the HTML (as a control), written to the page using JavaScript's document.write() function, and written to the page by JavaScript in an externally referenced file. Here are his results:

I then searched for each of the six words at Google.

* The two HTML words both generated a search result that included the page.
* The two words inserted by JavaScript in the page generated no search results.
* The two words inserted by a remotely sourced JavaScript generated no search results.
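A minimal reconstruction of the kind of test page described above (the file name and marker words here are hypothetical, not the originals):

```html
<!-- Control: the word sits in the raw HTML, so any crawler that fetches
     the page sees it without running anything. -->
<p>qwzrtplik</p>

<!-- Inline script: the word only appears on the page after a JavaScript
     engine actually executes document.write(). -->
<script>
  document.write("vbnmkjuyh");
</script>

<!-- External script: same idea, but now the crawler would also have to
     fetch writer.js and execute it to ever see the word it writes. -->
<script src="writer.js"></script>
```

A crawler that only parses the markup it downloads will index the first word and never see the other two.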
These results are utterly unsurprising if you think about it. Google's crawler doesn't implement a JavaScript interpreter, plain and simple, because it doesn't have to. As someone whose career is researching, designing, and developing advanced web crawlers, I can tell you that JavaScript parsing and interpretation is a giant pain in the ass and a big performance killer. Things like client-side validation and image pre-loading (things most crawlers don't care about) also get in the way and slow you down. From a sheer cost-vs.-gain standpoint, it currently makes no sense for Google to interpret or index JavaScript, and Ajax apps only make crawling harder. Does Google Index Dynamic JavaScript Content? No, of course not.
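The crawler-side view can be made concrete with a toy text extractor. This is a sketch, not Google's actual pipeline, using made-up marker words: a crawler that parses raw markup without a JS engine treats the contents of `<script>` as code, not as rendered text, so script-written words never enter its index.

```python
# Sketch: why a crawler with no JavaScript interpreter misses
# document.write() content. Page and words are hypothetical.
from html.parser import HTMLParser

PAGE = """
<html><body>
<p>qwzrtplik</p>
<script>document.write("vbnmkjuyh");</script>
</body></html>
"""

class TextExtractor(HTMLParser):
    """Collects indexable words the way a simple crawler would:
    from the raw markup only, skipping <script> contents entirely."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Script bodies arrive here too, but as source code, not
        # rendered text, so a crawler ignores them.
        if not self.in_script:
            self.words.extend(data.split())

parser = TextExtractor()
parser.feed(PAGE)
print("qwzrtplik" in parser.words)   # True  -- hardcoded word gets indexed
print("vbnmkjuyh" in parser.words)   # False -- only a JS engine would produce it
```

Executing the script (or fetching and executing the external file, in the third case) is exactly the extra machinery the crawler would need, and that's the cost the paragraph above says isn't worth paying.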