Google debunks false ideas about its search engine
On the sidelines of the latest roundtable between webmasters and Google specialists, during which the latter answered webmasters' questions firsthand, Google released a slideshow debunking myths and false ideas about its algorithm (since removed from the site). The list below is supplemented with information from previous publications.
Where do these false beliefs come from? Information spreads quickly on the Web, whether correct or not. Some beliefs stem from webmasters' sheer inexperience; others come from misinterpretation of the algorithm, about which Google says very little: most of what is known comes from studying filed patents and from forced disclosures.
The engine itself involuntarily contributes to the disinformation by changing its algorithm: what it declares false today can become true in later versions!
Content of the French slideshow:
Myth: You need to stuff the page with keywords, pack it with meta tags, and submit the site to as many directories as possible.
Result: nothing but lost time.
Myth: Mixing HTML and XHTML prevents a site from being verified in Webmaster Tools.
Google recognizes the "verify" meta tag regardless of the code or document type.
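For reference, Webmaster Tools verification at the time relied on a meta tag of roughly this form; the token value here is a placeholder, as each site receives its own from the Webmaster Tools interface:

```html
<!-- Placed inside the <head> of the site's home page.
     The content value is a site-specific token issued by
     Webmaster Tools (placeholder shown here). -->
<meta name="verify-v1" content="YOUR_VERIFICATION_TOKEN" />
```

Because the tag is matched by name and content, it works whether the surrounding document is HTML or XHTML.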
Myth: It is important to appear in a thousand directories and search engines.
It isn't. Search engines find pages on their own; submitting a site is useless and a complete waste of time.
Myth: Participating in AdWords, AdSense, or Analytics can help a site, or, conversely, penalize it.
It can't. But Analytics can help you get the best out of your site.
Myth: If a keyword is important, it is important to repeat it in important places on important pages so that they seem more important to important search engines (sic!).
Don't stuff pages with keywords. There is no optimal keyword density.
Myth: Using an XML sitemap can downgrade a site.
It can't. Sitemaps help engines find your content and help you get to know it better yourself.
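For illustration, a minimal XML sitemap following the sitemaps.org protocol looks like this (the domain and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- One <url> entry per page; only <loc> is required -->
    <loc>http://www.example.com/</loc>
    <lastmod>2009-01-01</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```

The file is typically placed at the site root (e.g. /sitemap.xml) and declared in Webmaster Tools or in robots.txt.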
Myth: PageRank is dead, or, conversely, it's all that matters.
PageRank measures the number and quality of links pointing to a page. Google uses it, along with some 200 other factors.
Myth: It's important to resubmit your site regularly.
It isn't. Once the engine knows a URL, it never forgets it!
Myth: Once your site reaches the top, don't touch it.
Some things improve when left unchanged, but not websites.
Give users a reason to return to the site. Positions can shift, and competing sites are not idle.
Myth: Valid HTML or XHTML gives a higher position.
It doesn't. Only 5% of websites have valid code; browsers and engines cope with it anyway.
Myth: Using Disallow in robots.txt removes pages from the index.
It doesn't; it only blocks crawling robots from fetching the pages.
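To illustrate the distinction: a Disallow rule in robots.txt only stops crawlers from fetching the matching URLs; a URL that is already indexed, or that other pages link to, can still appear in results. The path below is a placeholder:

```
# robots.txt, served from the site root.
# Blocks all crawlers from fetching anything under /private/,
# but does NOT remove already-indexed URLs from the index.
User-agent: *
Disallow: /private/
```

To actually keep a page out of the index, the usual mechanism is a "noindex" robots meta tag on a page that crawlers are allowed to fetch.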
Myth: The more links, the better.
Link quality depends on a number of criteria, and quality is what counts.
Myth: All that matters when building a site is to lead the SERPs.
A mistaken objective, perhaps?
Myth: It is better to return a 404 error for old URLs so that the new architecture is discovered naturally.
It isn't.
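The slideshow does not spell out the alternative, but the common practice is to point old URLs at their new locations with a permanent (301) redirect, which preserves the accumulated links. A sketch for Apache's .htaccess, with illustrative paths:

```apache
# .htaccess using mod_alias: permanently (301) redirect an old
# URL to its replacement so links and rankings carry over.
# Paths and domain are placeholders.
RedirectPermanent /old-page.html http://www.example.com/new-page.html
```

Other servers (nginx, IIS) have equivalent directives; the key point is the 301 status code, which tells engines the move is permanent.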
Myth: It is important to use "index, follow" in the robots meta tag.
This is completely useless: those are the default values.
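To make the point concrete, the first tag below adds nothing, since indexing and following links are what engines do by default; only restrictive values are worth declaring:

```html
<!-- Redundant: "index, follow" is already the default behavior -->
<meta name="robots" content="index, follow" />

<!-- Meaningful: opts the page out of indexing and link-following -->
<meta name="robots" content="noindex, nofollow" />
```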
Myth: Hosting a site on a shared IP leads to lost positions.
It doesn't.
Feel free to read the slideshow to the end; it contains useful links.
Myths that actually came true
Although the slideshow, dating back to 2008, describes them as myths, some of these claims have since come true!
Truth: Duplicate content harms a site.
This used to be a misconception, apart from the fact that duplication dilutes incoming links. Sites are now penalized for repeating the same (large) block of text on every page.
Truth: Backlinks from social sites carry less weight.
This wasn't true then, but it is now. Links from Twitter, for example, are ignored: they are used, but not counted as backlinks, and they do not pass PageRank.
See also the roundtable report and Google's answers to webmasters.