If most of the HTML content is generated server-side (as in ASP.NET or PHP), I would still consider Hijax as an alternative. But if the content you want indexed is served as JSON, XML, etc., and never takes HTML form until it reaches the DOM, a hybrid Hijax-style solution becomes expensive and complicated, all the more so when client-side templating engines like mustache, dust.js, or handlebars are involved. This latter case is the one we will talk about.
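To make the scenario concrete, here is a minimal sketch of client-side rendering. The data, template, and `render` helper are all illustrative (a toy stand-in for an engine like mustache or handlebars, not any library's real API); the point is that the server ships JSON only, so no indexable HTML exists until this code runs in the browser.

```javascript
// Data as it would arrive from the server: pure JSON, no markup.
const data = { title: 'Hello', body: 'Rendered on the client.' };

// Toy templating function (hypothetical, for illustration only):
// replaces {{key}} placeholders with values from the context object.
function render(template, ctx) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => ctx[key] || '');
}

// The HTML only comes into existence here, in the browser.
const rendered = render('<h1>{{title}}</h1><p>{{body}}</p>', data);
console.log(rendered); // <h1>Hello</h1><p>Rendered on the client.</p>
```

A crawler that does not execute JavaScript sees the empty container the page shipped with, never this rendered markup.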
To become a GET request for:
That new URL is what the crawler actually requests when it encounters and follows a link with ‘#!’ in the anchor fragment. This solves an important part of the problem, but we still need to generate an HTML snapshot for the crawler to download, parse, and continue crawling from. How?
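The URL translation itself can be sketched in a few lines. This is a hypothetical helper (the name `toCrawlerUrl` is ours), and it uses `encodeURIComponent` as a simplification of the exact escaping rules in Google's specification:

```javascript
// Sketch of the '#!' -> '_escaped_fragment_' translation the crawler performs.
function toCrawlerUrl(prettyUrl) {
  const idx = prettyUrl.indexOf('#!');
  if (idx === -1) return prettyUrl; // no hashbang: nothing to translate

  const base = prettyUrl.slice(0, idx);     // everything before '#!'
  const state = prettyUrl.slice(idx + 2);   // the application state after '#!'

  // The state is escaped and passed as a query parameter, so the server
  // can see it and serve the matching HTML snapshot.
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(state);
}

console.log(toCrawlerUrl('http://example.com/page#!key=value'));
// http://example.com/page?_escaped_fragment_=key%3Dvalue
```

On the server, detecting the `_escaped_fragment_` parameter is the cue to return the pre-rendered snapshot instead of the empty JavaScript shell.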
Note that any content meant to be indexed needs to be accessible through an <a> element with a properly #!-formatted href; otherwise it cannot be followed by the crawler.
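For instance, when generating navigation on the client, each entry should be a real anchor with a ‘#!’ href rather than a click handler on a non-link element. A small illustrative sketch (the page ids are made up):

```javascript
// Hypothetical page ids for a navigation menu.
const pages = ['home', 'products', 'about'];

// Crawlable: real <a> elements with '#!' hrefs the crawler can follow.
const links = pages
  .map(id => '<a href="#!/' + id + '">' + id + '</a>')
  .join('\n');

console.log(links);
// A <span onclick="navigate('products')"> would render the same to users,
// but exposes no href, so the crawler would never discover that page.
```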