Ajax and AdSense
The combination of AdSense and modern technology such as Ajax is a source of dilemmas for webmasters. AdSense provides contextual advertising, graphical or textual, selected according to the content of the page. To achieve this, the robot parses the page when it is served, extracts keywords, selects from its database the keywords (defined in the AdWords control panel, not in the ad text) that match those on the page, and thus assigns the appropriate ads to the page.
On the other hand, Ajax is a technology (XMLHttpRequest object, JavaScript, DOM, CSS) that allows you to partially change the content of a page after it is loaded.
Web 2.0 makes life difficult for robots: text is not scanned
When robots scan a page, either to index it or to select the most relevant contextual ads, dynamic content is not available to them. Contextual advertising therefore refers only to the static part of the page.
Dynamically loaded text being invisible to search engines is not the only drawback of Ajax. Pages, or even a whole section of the site, may be completely unknown to robots and absent from search results if the site navigation is entirely dynamic. It also turns out that links are as important to the AdSense robot as the page content in determining relevance.
This is the disadvantage of Web 2.0: the content is dynamic, generated by a CMS and JavaScript scripts for attractive presentation and easy access for the user, but it excludes robots.
JavaScript makes navigation almost impossible for crawlers. Even though Matt Cutts tells us that Googlebot can sometimes find links in the code, this is not always possible.
A JavaScript link usually looks like this:
<p onclick="myfunction()"> ...text... </p>
To make it visible to robots, simply convert it to an HTML link:
<a href="mypage.html" onclick="myfunction()"> ...text... </a>
This makes no difference to the user: the onclick value is still handled by the browser, but the link itself can now be followed by the robot, which interprets the href content and ignores the onclick value. The file name will now be used to define keywords for the landing page.
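The dual-purpose link pattern above can be sketched as a small helper. This is my own illustration, not from the original article; crawlableLink is a hypothetical name. Note that returning false from onclick stops the browser from also following the href, so JavaScript users get the dynamic behavior while robots follow the plain link.

```javascript
// Sketch: generate a link that serves both audiences.
// Robots follow href; browsers run the handler and stop there
// because the onclick value ends with "return false".
function crawlableLink(href, handler, text) {
  return '<a href="' + href + '" onclick="' + handler +
         '(); return false;">' + text + '</a>';
}

console.log(crawlableLink('mypage.html', 'myfunction', 'My page'));
// <a href="mypage.html" onclick="myfunction(); return false;">My page</a>
```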
How to solve the problem
There is a way for search engines to index content even if it is dynamically loaded onto the page. The idea is to give the XMLHttpRequest object a responseHTML attribute (the object natively provides only responseText and responseXML) that allows you to dynamically add to a page text extracted from other HTML pages that are themselves indexed by search engines.
I'm not sure this applies here, since we want the text to be scanned INSIDE the page so that the ads are contextual. That can only happen if all the text is present when the page loads.
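To make the idea concrete, here is a minimal sketch of extracting one section from another page's HTML once it has been fetched. This is my own illustration under stated assumptions: extractSection is a hypothetical helper, the naive regex stands in for proper DOM parsing, and in real code the HTML string would come from the responseText of an XMLHttpRequest.

```javascript
// Sketch: pull the contents of one <div id="..."> out of the HTML
// of another (already indexed) page before injecting it here.
// A real implementation would parse the fetched document properly.
function extractSection(html, id) {
  const match = html.match(
    new RegExp('<div id="' + id + '">([\\s\\S]*?)</div>')
  );
  return match ? match[1] : '';
}

const pageHtml =
  '<div id="intro">Indexed text</div><div id="other">More</div>';
console.log(extractSection(pageHtml, 'intro')); // Indexed text
```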
Search engines are now able to index Ajax content, as explained in the article on making Ajax crawlable, but this is not easy for non-programmers to do.
The question of internal links
Robots ignore internal anchors: the # followed by the name of a chapter or section of the page.
A link to a section of another page looks like this:
<a href="mypage.html#mysection"> </a>
The robot will only see a link to mypage.html, not the section. The same goes for a dynamic link (which, remember, robots sometimes take into account and sometimes ignore):
<p onclick="myfunction('mypage.html#anchor=mysection')"> </p>
In this case, the internal reference to the section is equivalent to a parameter. We transform the link as follows:
<a href="mypage.html?anchor=mysection">...anchor...</a>
And the link will be indexed as a whole, with a page name and a parameter. The AdSense robot will then see the link, and the keywords contained in the link and in the anchor text will be taken into account, resulting in more relevant ads on the landing page.
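The transformation above amounts to a one-line rewrite, sketched here with a hypothetical helper name (anchorToQuery). The landing page would then read the anchor parameter back from location.search and scroll to the corresponding section.

```javascript
// Sketch: turn an internal anchor, which robots ignore,
// into a query-string parameter, which they index.
function anchorToQuery(href) {
  return href.replace('#', '?anchor=');
}

console.log(anchorToQuery('mypage.html#mysection'));
// mypage.html?anchor=mysection
// On mypage.html, a small script would parse location.search
// for "anchor" and jump to the matching section.
```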
Conclusion
From the very beginning, you should design your pages, and your site as a whole, not against search engines but by integrating the concept of link following: it should be possible, by clicking on links, to get from each page to any other page of the site, both for visitors and for robots.
Using Ajax may require sacrifices: this technology should be used primarily to present data that does not need to be indexed in the context of the Ajax page itself, but is instead indexed within other documents.