The evolution of Google's algorithm from the start
Changes announced or detected in Google's algorithm and its page ranking tools. A separate article is devoted to the evolution of the results pages and interface.
June 1, 2024 - Leak exposes Google algorithm
A leak of internal Google Search documents revealed the workings of the current ranking algorithm; SearchEngineLand published them. Google confirmed the authenticity of the documents.
A page may be downgraded for the following reasons:
- The link does not match the page it points to.
- User behavior indicates dissatisfaction with the page (for example, a quick return to the results).
- Product reviews.
- Location.
- Pornography.
Google saves the last 20 link changes on each page.
Google Chrome is used to get information about user behavior.
Ranking depends on the following criteria:
- Outbound links should be diversified across different sites and be relevant to the content of the page.
- Page content is evaluated through the links it receives and through behavior: good and bad (accidental) clicks, time spent on the page after clicking.
- The number of clicks on a link on the results page (even though Google denies this).
- Originality improves ranking.
- Brands are important.
- Google tries to identify the author of a document.
- Freshness of information matters (this is not new).
- Page titles matter.
- Font size is taken into account when weighing the importance of text.
- Domain information is taken into account.
There is no longer any mention of Panda, which based ranking on the "popularity" of a site, in other words mentions of the site's name, a signal that was probably widely gamed by SEOs. But there is a "SiteAuthority" criterion, which is equivalent. That Google cares about the author of a document is new.
If it really wants to use artificial intelligence in search, Google must take the validity of page content seriously.
September 12, 2019 - Promoting original sources in the news
For news-related results, Google now wants to rely less on popularity and more on originality. A site with a large audience that merely repeats the content of an original article should now take second place on the results pages. This applies primarily to articles that require substantial investigative work.
New criteria have been added to the algorithm to promote this kind of article among the relevant results. The article that is the original source of a story should come first.
August 30, 2019 - Penalization of sites renting out subdomains
Lending another company a subdomain or subdirectory so it can take advantage of the site's popularity - generally for a fee - is now penalized by the algorithm. Google has issued a warning on the subject.
Sites where this practice was profitable have seen a significant drop in traffic.
September 24, 2018 - Image positioning
Google announces a change to the image ranking algorithm, which now works more like the one for text content. The authority of the site becomes a more important factor, and the "freshness" of the page (whether it was recently created or updated) and the placement of the image within it are taken into account.
Authority is tied to relevance: if the subject of the image is also the subject of the site, the image ranks better. Placement is considered more favorable when the image is near the center and top of the page.
March 8, 2017 - "Fred" updated
Many sites are penalized for unnatural content, especially if they have pages designed to promote other commercial sites. The filtering criteria are blurred enough, even if the effect has been widely seen, and mark an attempt to lower the popularity of non-content sites, as Panda does.
February 2017 - Phantom 5
The search engine does not officially acknowledge the Phantom updates. Their dates are estimated at May 2013, May 2015, November 2015 and June 2016.
Again, vague criteria in an attempt to promote "quality," that is, to demote popularity built on a minimum of content.
January 10, 2017 - For mobile
As announced well in advance, sites that display pop-ups and interstitial ads are penalized.
September 23, 2016 - Penguin is built into the main algorithm
This is reported on Google's blog for webmasters. Penguin, an algorithm launched in 2012, penalizes sites that try to increase their PageRank artificially by creating links from satellite sites to the site being promoted. This also applies to footer links and signatures.
The algorithm used to be run periodically, but it now works in real time, each time the crawler visits the site. It has also been refined: it affects individual pages rather than the entire site.
July 1, 2016 - RankBrain, increasing share
RankBrain is a component of the general algorithm, used to find an answer when the query has no obvious answer in the stock of indexed pages.
It is a machine learning algorithm working on large amounts of data (deep learning), and therefore an algorithm that learns and takes user behavior into account to improve its results.
February 5, 2016 - The search engine turns to deep learning
The replacement of Amit Singhal by John Giannandrea at the head of Google Search is a symbol of the ongoing evolution of the search engine. The latter is a specialist in artificial intelligence and deep learning, machine learning based on large volumes of data and repeated experiments, while the former was more of a traditionalist who relied on programmers to implement link-ranking algorithms. We were used to a search engine that followed a set of page ranking rules, which made SEO specialists happy. That is disappearing, and we have no idea what happens when the results are determined by deep learning.
Artificial intelligence is already used with RankBrain to interpret some queries; it currently handles 15% of them. Under the influence of the new boss, it will gradually handle all of them. Some see in this the end of SEO, because Google's engineers themselves do not understand how the machine selects pages.
October 28, 2015 - RankBrain
This update dates back several months and has only just been revealed: it is said to be the third most important ranking criterion (after keywords and links)! Nicknamed RankBrain, this deep-learning AI algorithm tries to interpret a query it has never seen before, give it meaning, and select the pages containing the answer.
April 21, 2015 - Mobile display becomes a signal
"From April 21, we will be expanding our consideration of mobile compatibility as a positioning signal."
This is reported by the Google Webmaster Central blog, as well as the fact that it will have a big impact on the ratings for searches made from a mobile.
Google invites you to log into your Webmaster Tools account to see problems on your site, and also offers a test site.
March 2015 - Wikipedia less favored
Since the beginning of 2014, Wikipedia's audience has stopped growing and is even slowly declining. The algorithm favors the site less than before.
This may be due to competition from the Knowledge Graph, which since the beginning of 2014 has been providing more and more encyclopedic information directly, making a visit to Wikipedia less necessary.
See the Wikimedia report card.
February 27, 2015. Application indexing, a new ranking factor.
It becomes possible to have applications indexed; see the Google guide.
October 2014 - Penalization of copy sites
Sites for which Google receives multiple content removal requests (DMCA) whose validity has been verified are penalized in the results. Only 1% of requests turn out to be unfounded. Otherwise, the link to the content is removed, and the entire site is now penalized as well.
August 28, 2014 - End of author tag
Webmasters never really adopted the rel=author tag en masse to improve the ranking of their pages, so Google has decided to stop taking it into account. Likewise, the display of author information, which had already been reduced over time, will disappear completely. The photos had already disappeared by June 2014. Apparently, links with photos do not get any more clicks than others.
But Google claims that other types of structured data will remain in use.
July 24, 2014 - Pigeon Update
A change to improve local search by boosting positive signals (for example, backlinks) when there is a local source.
May 20, 2014 - New Panda
The new version of Panda's formula, which changes how sites are ranked, takes effect on May 21. This update appears to be less harsh on medium and small sites. It reportedly affected 7.5% of English-language queries (according to SearchEngineLand).
May 18, 2014 - Payday Loan algorithm
This change concerns the groups of keywords most plagued by spam, that is, pages made only to display ads. It is not related to Panda or Penguin. It is an update of an algorithm introduced the previous summer (in 2013).
February 28, 2014 - Penalization of copy sites
Google is going after copy sites that rank better in the results than the sites they copy. Apparently the algorithm is not able to identify them on its own, so users are invited to report them via a form (now closed).
For now, the most widely reported site is Google itself, which takes paragraphs from Wikipedia for its Knowledge Graph!
In fact, the entry of August 29, 2011 shows that this is not the first time Google has launched this kind of initiative.
September 27, 2013. Hummingbird takes flight
After the black and white of Panda and Penguin, here is a hummingbird. This new algorithm concerns both the front end and the back end: it can handle a question as a whole rather than as a set of keywords, and it can relate the question to the content of pages, in other words find in its index the pages that best answer it. Although Google has only just revealed it, the algorithm has been in effect for several weeks.
July 18, 2013, Panda softened
A Panda update that adds signals about a site's authority in its niche, to avoid penalizing useful sites.
The global rollout takes 10 days to complete.
May 22, 2013, Penguin 2
While the previous version of Penguin affected a site's home page, the new iteration affects all pages directly (rather than indirectly, through their dependence on the home page). It penalizes sites with artificial backlinks, usually with optimized anchors.
May 15, 2013. Program for the coming year
In a video (in English), Google's head of webspam outlines the algorithm changes to expect in the coming months:
- Better detection of authority in the various niches, thanks to better signals, to temper the negative effects of Panda.
It is said that borderline sites will be given the benefit of the doubt and will no longer be penalized. As always, it is all about signals, and it is the signals indicating competence in a given area that will be improved. Let's hope this does not mean registering on social sites!
- More sophisticated methods of link analysis, removing any value from spammers' activities.
- Fewer links in the SERPs for sites that are over-represented there.
This comes and goes at Google, with reductions and increases depending on the period, which does not inspire much confidence in this intention.
- Improving the categories of results that contain too much spam, and thus reducing the amount of spam.
- No more PageRank passed through ads.
- Better information for webmasters when a site is hacked.
This announcement confirms the weight given to "signals" in evaluating pages. Apart from extracting keywords, the algorithm ignores their actual content, which is the source of all this spam. On the whole, the list shows an intention to reduce spam, which is nothing very new, and an admission that Panda is not perfect.
November 17, 2012. Mysterious update
Nothing has been said about this change, which affected many sites, other than that it is not an iteration of Panda. My theory is that it involves taking likes on social sites and comments into account, but that is purely personal.
It is worth noting that at the same time, the Google Plus +1 count disappeared from Google Webmaster Tools.
Panda was updated on November 5 and 21.
October 10, 2012. Footer links, a penalty factor
The Webmaster Guidelines have recently evolved. They still state that exchanging and selling links is prohibited, but a new line has appeared:
Links widely distributed in the footers of various sites.
These links are not considered natural and therefore violate the webmaster guidelines.
September 28, 2012. Keyword-based domain names
Sites whose domain name was chosen for its keywords, but whose content is not judged to be of quality (see Panda), are now penalized. Less than 1% of English-language sites are affected.
September 14, 2012. Return to diversity
After the change of November 17, 2010, which allowed a single domain to flood the results pages, Google backtracks and again limits the number of results from the same domain.
August 10, 2012. Copyright infringement complaints penalize a site
Even if Google does not remove the reported content from its index, it now penalizes the pages of a site when the site is too often the subject of complaints. One wonders whether Scribd will be penalized!
Probably not, because Google clarifies that many popular sites will escape it. In particular, YouTube uses a different DMCA form, which is not taken into account by this criterion of the algorithm: it cannot be penalized.
April 24, 2012. Algorithm change against spam affecting 3% of queries: the Penguin update!
It is obviously related to the previously announced penalty for over-optimization, even if that term has since been qualified. A site is targeted for an accumulation of negative signals, such as link exchanges and self-created backlinks, and even internal links if they all carry the same keywords in their anchors. Irrelevant external links, that is, links unrelated to the text surrounding them, are also a negative signal, as is stuffing the page with the same keyword, as we already know.
Studies have shown that a site is penalized when most of its backlinks do not come from sites on the same topic, and when the anchors contain keywords unrelated to the topic of the page carrying them.
It is the whole set of negative signals that triggers penalties. These criteria have always been taken into account, as we know, but the new algorithm performs a deeper and more systematic analysis in order to better penalize black-hat methods.
Google has in fact announced the change and speaks more precisely of "black hat webspam." The change has a name, by the way: the Penguin update, an allusion to the opposition between black hat and white hat (this is not a joke).
Also in April, the algorithm was changed to no longer give a freshness bonus to new pages when the site is rated as low quality.
This is among other changes affecting primarily the presentation of results.
March 2012. Over-optimization soon penalized. Link anchors reconsidered. Graphical interface as a ranking criterion.
After penalizing sites whose content the algorithm considers insufficiently original and substantial (which most often ends up favoring verbose and rambling pages), Google is preparing to take a new step and go after sites that push optimization too far. This is what Matt Cutts has just announced at SXSW.
Which optimizations will be penalized?
- The artificial presence of many keywords that are of no use to the reader. Repeating the same sentences on a page, and across pages, should also be avoided.
- Link exchanges that produce useless backlinks. Directories that require a link back, if they are not already targeted, will be, as will paid links and footer link exchanges.
- Too much manipulation in general. For example, too many nofollow links to external sites for no reason.
All of this already went against the algorithm, but it seems Google wants to improve its detection of such practices and penalize sites more severely. The effect is expected within a few weeks.
Some already believe that de-optimizing a site (ignoring all the optimization rules) to make it more natural and avoid penalties can have a positive effect, which this planned change seems to confirm. But Google clarifies that good optimization, done only to help engines find content, is still recommended.
Google says the way it interprets link anchors has changed: a ranking criterion has been dropped, without further detail. The interpretation of the anchor according to the query has been refined. Other changes concern synonyms, the dates of forum threads, freshness, and site quality.
It has also become known that the algorithm takes the interface and mobile rendering into account for ranking. The presence of icons, stars, etc. is thought to contribute to a quality score.
February 2012. After several years, the way links are analyzed changes
"We often use link characteristics to help us understand the topic of a linked page. We have changed how we assess connections. In particular, we are abandoning the linkage analysis method that has been used for years."
Even if Google does not specify what has changed in its way of evaluating links, this formula strongly implies that these are signals related to the relevance of the link that are concerned and one of them is no longer taken into account. Here is a list of these signals:
- The link anchor, its wording.
- The text surrounding the link.
- The position on the page. A link within an article does not play the same role, nor has it the same relationship to the linked page, as a link at the bottom of the page.
- The title and rel attributes, including nofollow. For the latter, which causes the link to be ignored (while consuming its share of PageRank), the only possible change would be for it to be ignored itself.
- The PageRank of the page containing the link.
- Whether it is a social link or a link within an article.
One could go through Google's various patents on link analysis to figure out which factor has been dropped. Each of these factors could be devalued, except one: an actual link, in the text.
The announcement of changes in March contradicts the idea that it is the link anchor that is now devalued. Or perhaps:
- This criterion is replaced by other signals, including the content of the page containing the link. In practice, it is mostly SEOs who work on the choice of words in anchors.
- It is often observed that webmasters choose keywords for anchors when linking to their own sites, and put anything at all for others.
- The most correct practice is actually to put the title of the linked article in the anchor. But that brings nothing to search engines, hence the limited value of this criterion.
January 19, 2012. Excessive advertising at the top of the page is now a ranking criterion
As announced last November, pages that put ads first, pushing the content below the fold where it is visible only after scrolling, will be penalized.
This affects 1% of searches.
Users complained that to find the content matching their query, they had to scroll the page and get past the advertising.
But how do you determine what "above the fold" means, since it depends on the screen resolution? On a mobile phone, it even differs depending on whether the device is held in portrait or landscape mode. One can assume that pages stacking two ads 280 pixels high are affected. Google actually provides a statistical measurement of the page height visible without scrolling through its Browser Size tool; 550 pixels is an acceptable value.
Does a header banner have an effect? Not if it is considered part of the content.
Google's Browser Size site (now closed) gave an estimate of where the "fold" of a web page lies.
Announcement in Inside Search.
December 2, 2011. Detection of parked domains.
These are domains whose home page is filled with ads. A new algorithm is added to detect them and exclude them from the results. This is part of about ten measures announced for November, along with content freshness, which favors the most recent pages.
November 14, 2011. Bonuses for official sites.
Sites associated with a product or a person, when identified as official sites (made by the party concerned), now receive preferential treatment in the rankings, according to the algorithm change announced on November 14, 2011.
November 10, 2011. Too many ads on one page: now a direct algo criterion.
At PubCon 2011, Matt Cutts stated that too many ads on a page become a direct (negative) ranking factor.
It has always been an indirect factor, since it can prompt visitors to leave the site, increasing the bounce rate and reducing visit time. But it will now be taken into account directly.
He also confirms that this was not a Panda criterion.
Note that "too much advertising" depends heavily on the size of the page; it was also stated that placement in the upper part of the page is taken into account.
November 3, 2011. New evaluation of page freshness.
The algorithm change affects 35% of the queries made on the search engine. It concerns the freshness of pages, which can be boosted depending on the context of the search.
This applies to searches about recent events or hot topics, subjects that come up regularly (e.g. an F1 Grand Prix), and subjects that are constantly updated without being in the news (e.g. software).
Other topics, such as cooking recipes, should not be affected by this change.
August 29, 2011. Better detection of "scrapers."
The goal is to better identify sites that reuse pages from other sites in order to serve ads. It happens that they rank better on the results pages than the originals!
Google is testing a new algorithm and asking users to report these sites to help it develop it.
Report a scraper site.
This is not about copyright or infringement complaints, but about sites that use tools to retrieve content and republish it on their own pages.
August 12, 2011. Panda is rolled out to all languages.
French-language sites are now affected by this algorithm, which aims to improve the selection of pages in the results but sometimes unfairly hits quality sites. Depending on the language, 6 to 9% of queries are affected.
Panda.
Report an unfairly affected quality site.
At the same time, Google changed the way Analytics calculates the bounce rate.
June 20, 2011. The shadow of Panda.
Since June 15, many sites have seen their audience drop, while for others it has grown, which has been attributed to the extension of Panda to Google.fr.
However, major sites with poor or duplicated content were not affected, so this cannot be attributed to the Panda update, which is not a modification of the algorithm but an independent program, run manually, that assigns a score to sites.
It is possible that part of this program was included in the general algorithm.
June 8, 2011. Author attribute.
Google now recognizes two attribution tags, to be placed in the body of the page:
<a rel="author" href="pageauteur.html">Moi même</a>
<a rel="me" href="pageauteur.html">Moi même</a>
This will help classify pages by author.
The profile page must be on the same site as the page containing this attribute.
June 6, 2011. Schema.org.
This format for including metadata in pages, and thereby clarifying their meaning, is adopted by Bing, Google and Yahoo!.
It is incompatible with RDFa and should not be used on the same page, at the risk of having the page deindexed.
The format is based on the W3C microdata specification, not to be confused with microformats (hRecipe, hCard), the general term for formats defined for a specific application.
A typical use (for all these formats) is describing a recipe with data such as photos and cooking time, so that a rich snippet can be produced on the results pages.
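As an illustration, here is a minimal microdata sketch for such a recipe; the property names come from the schema.org Recipe vocabulary, and the values and file names are invented:

<!-- Hypothetical recipe marked up with schema.org microdata -->
<div itemscope itemtype="http://schema.org/Recipe">
  <h1 itemprop="name">Tarte aux pommes</h1>
  <img itemprop="image" src="tarte.jpg" alt="Tarte aux pommes">
  <!-- cookTime uses an ISO 8601 duration: 45 minutes -->
  <meta itemprop="cookTime" content="PT45M">Cooking time: 45 min
  <span itemprop="author">Moi même</span>
</div>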
Schema.org.
Rich snippets testing tool. An online tool for checking page compliance.
April 11, 2011. Panda goes worldwide.
The Panda update against poor content is generalized and extended to the whole world.
But this only applies to queries in English (on local engines).
Google is also starting to take into account the fact that some sites are blocked by users. This is an additional, but minor, criterion.
More major sites such as eHow have suffered from the update, but so have a number of less important sites, through an indirect effect: links from the affected sites are devalued, which impacts other sites that are not directly targeted.
The Panda update. What criteria does the Panda update apply?
February 24, 2011. Updated March 3, 2011. Major change targeting content farms (the Panda update).
The update, internally named "Panda" (after one of the engineers), affected 11.8% of queries, reducing the presence of poor, unoriginal or unhelpful content in the results. Conversely, sites that offer detailed articles based on original research are favored.
"We want to encourage a healthy ecosystem..." 'the Google post said.
Google clarifies that the change did not come from the new Chrome extension, which allows you to block sites. But a comparison with the data collected shows that 84% of the affected sites are on the list of blocked sites.
The effect will appear only today in the United States. The rest of the world will be affected later. One result would be more Adsense revenue for other sites, as content farms are primarily used to serve ads.
It remains to be seen how content farms will be affected, on Alexa or Google Trends and whether it will be Farmer's Day.
Finding more high-quality sites in search.
January 28, 2011. Change regarding copied content.
To combat sites that take content from other sites, or whose content has no originality, a change was made to the algorithm at the start of the week of January 24.
It affects only 2% of queries, but according to Matt Cutts that is enough to see a change in rankings (this is the case for Scriptol, whose audience grew by 10%).
This is a further improvement affecting the long tail. It could hit content farms that churn out articles that are anything but original.
This was announced by Matt Cutts.
January 21, 2011. New ranking formula.
The new algorithm is more effective at detecting spam in page content, in the form of repeated words placed with the obvious intention of ranking on those words.
These may appear in the article itself or in blog comments.
See link below.
January 21, 2011. An anti-spam algorithm better than ever.
So says a post from Google responding to criticism of the search engine's quality, particularly with regard to the fight against spam.
Google asserts that displaying AdSense ads does not prevent a site without useful content from being demoted, any more than participating in the AdWords program does.
In 2010, two major anti-spam changes were made to the algorithm. Much was said about the changes that affected the long tail, at the expense of sites without content.
Google promises to go further in 2011 and invites webmasters to have their say. The main target is the "content farms," which produce pages of no real value, stuffed with keywords, in order to rank in the results (e.g. eHow, Answerbag, Associated Content).
The algorithm for detecting copied or unoriginal content will be improved.
Google search and search engine spam.
December 2, 2010. Sentiment analysis added to the algorithm.
After an article in The New York Times denounced the fact that a merchant who mistreats his customers, generating many complaints on blogs and forums, gains an advantage in the search engines, Google reacted.
Indeed, when you denounce a site's practices or content, you link to it to give examples, and these backlinks are treated by the engines as a sign of popularity, which leads to a better position in the results!
Google has therefore developed a sentiment analysis algorithm designed to recognize, from the keywords it contains, whether the text surrounding a link is positive or negative toward it, in order to penalize the sites being complained about.
Google also recommends using the nofollow attribute to link to a site without helping its ranking.
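For example, a critical article can still cite the merchant with a link such as the following (the URL is hypothetical), which passes no ranking value:

<!-- nofollow tells the engine not to count this link for ranking -->
<a rel="nofollow" href="http://www.example.com/">the merchant in question</a>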
Being bad to your customers is bad for business.
Large-Scale Sentiment Analysis for News and Blogs. An analysis of the algorithm, in English.
November 17, 2010. The same domains more present in the results.
While a single domain used to be limited to two links in the results, that number is now higher. This can cause traffic losses for other sites.
November 5, 2010. Black Friday.
On October 21 or 22, depending on the region, a change in the ranking algorithm affected a huge number of sites, some of which lost up to 80% of their traffic. Alexa published graphs showing huge losses, or equivalent gains, for various sites.
These changes appear to be permanent. Their purpose seems to be to make the results more relevant.
August 31, 2010. SVG indexed.
SVG content is now indexed, whether it is in a separate included file or embedded in the HTML code.
August 20, 2010. Harmful internationalization?
Some webmasters are seeing traffic arrive from Google search engines other than Google.com or the one for their own country.
Americans, for example, may see visitors coming from google.fr, which suggests that the French engine is including American sites in its results.
This can reduce the audience of French sites.
June 8, 2010. Caffeine updates the index.
Google announced on June 8 that the new indexing engine, Caffeine, has been completed. It provides an index whose results are 50% fresher.
It differs from the previous system, which was updated in waves as a whole: Caffeine updates the index incrementally. New pages can be added and made searchable as soon as they are found.
The new architecture also allows a page to be associated with several countries.
Caffeine vs. the previous system.
May 27, 2010. MayDay: the long tail changes.
As confirmed by Matt Cutts at Google I/O in May, a radical change was made in April to the long-tail algorithm in order to improve the quality of content.
"This is an algorithmic change in Google, looking for higher quality sites to surface for long tail queries. It went through vigorous testing and isn't going to be rolled back."
Recall that the long tail is the set of queries with several keywords, each rare on its own, but which together account for the bulk of a site's traffic.
Webmasters have named this change MayDay. I had already called it Black Tuesday. It was disastrous for some well-established but neglected sites. It happened in late April or early May depending on the site, although some sites lost traffic for other reasons.
It increased traffic for scriptol.com and iqlevsha.ru.
MayDay is explained by Matt Cutts in a video.
It is independent of Caffeine, and permanent. Webmasters must add content to recover their traffic.
April 27, 2010. Black Tuesday: ranking changes on the long tail.
The long tail is made up of the many pages of a site that each get few visits, but which together account for substantial traffic.
Queries with several keywords belong to the long tail.
On many sites, the ranking of these pages changed starting April 27. Some lost up to 90% of their traffic.
The change was attributed to Caffeine, Google's new infrastructure that indexes more pages and thus creates more competition, but Google confirmed it was a change to its algorithm (see May 27).
April 14, 2010. Real time.
MySpace, Facebook, Buzz and Twitter are integrated into the search results. By expanding the search options on the results page and clicking the Updates button, you can see the social network activity related to the query.
Replay on Twitter.
Update 2011: Twitter and Facebook now deny access to Google's robot.
April 9, 2010. Speed officially becomes a ranking factor.
Announced a few months ago, it has become a reality: a site that is too slow is now demoted on the results pages, or at least risks being demoted in combination with other factors.
"Today we're including a new signal in our search ranking algorithms: site speed."
You can find out whether your site is too slow in Google Webmaster Tools (Labs -> Site Performance).
Using site speed in web search ranking.
December 15, 2009. Canonical across domains.
Support for the rel="canonical" attribute, introduced a few months ago to avoid duplication between pages of the same site, has just been extended to identical pages on different domain names.
When moving a site to another domain, it is better to use a 301 redirect.
Google source.
To protect your site from sites that copy its content, see how to create a generic canonical tag in PHP.
November 19, 2009. Site loading speed will become a ranking factor in 2010.
This was just stated by Matt Cutts in an interview.
"Historically, we have not factored this into our search positioning, but beacoup people at Google believe that the Web should be fast.
This should allow a more pleasant use and therefore it would be correct to say that if you have a fast site, it could get a small bonus.
If your site is very slow, there may be users who don't like it at all.
I think in 2010, many will think about how to have a fast site, how to get rich without having to write a bunch of personal JavaScript."
This should favor static sites over SQL-based ones... See our article, How to build a CMS without a database.
See also: Let's make the web faster.
August 11, 2009. Caffeine, a new search engine.
Google is testing a new search engine. It is meant to be faster and more relevant.
July 2, 2009. Less weight for irrelevant links.
This is not officially confirmed by Google (which in any case says little about its algorithm), but webmasters have noticed that the results have changed, with lost positions in the SERPs for rankings that relied on large numbers of lower-quality backlinks.
Irrelevant links here means:
- Blogrolls.
- Links from social sites.
- Links from guestbooks.
- Footer links on partner sites.
- Links in CMS templates.
Google had in fact recently announced that it would no longer take blogrolls into account; we are probably seeing the result. And this is not just a reduction in the weight of these links: they no longer count at all.
As for social sites (e.g. Delicious, StumbleUpon), on the contrary, Google said in a round table with webmasters: "They are treated like any other sites."
June 19, 2009. Flash and its resources.
Flash applications are indexed by the search engine, and from now on the resources they load, images or text, are indexed as well.
Source: Webmaster Central Blog.
June 2, 2009. Confirmation of the nofollow change - onclick links.
The nofollow attribute makes search engines ignore a link on the page, so PageRank used to be distributed among the remaining links.
It seems that PageRank is now divided among all links (with or without nofollow), and the share of the nofollow links is then simply not passed on.
Example: with 10 PageRank points and 5 links, 2 points go to each link. If two of the links are nofollow, no PageRank passes through them, but the remaining links do not receive all the points either: they get only 6 points, divided among the 3 of them.
The consequences are significant: nofollowed links in blog comments now cause PageRank to be lost for the other linked pages.
Matt Cutts quote:
"Suppose you have 10 links and 5 of them are nofollowed. There's this assumption that the other 5 links get ALL that PageRank and that may not be as true anymore."
For more details, see PageRank and nofollow.
In addition, Google now takes into account links created in an onclick event.
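Such a link typically relies on a script rather than a plain href; a minimal sketch, with a hypothetical URL:

<!-- Navigation triggered by a script instead of a normal href -->
<span onclick="document.location.href='http://www.example.com/page.html'">Read the page</span>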
April 12, 2009. Personalized search.
It is rolled out to all users of the search engine. Search results take the user's behavior into account: if someone clicks more often on the pages of a given site or type of site, those pages will be shown more prominently in their subsequent searches, for them only. Link: Personalized search for everyone.
April 4, 2009. Local search.
Google improves local search based on the IP address, which reveals the user's country and city. Using it, Google tries to show sites located as close as possible to the user.
To use this option, you must include in your search the name of the locality concerned.
Source: Google blog.
February 26, 2009. Brand names.
The algorithm increases the weight of brand names and therefore favors the corresponding sites. This is confirmed by Matt Cutts (head of the webspam team and Google spokesperson) in a video.
Video. (English).
February 25, 2009. The canonical tag.
The new tag tells the search engine robot which URL to keep when a page is accessible at several different addresses.
The duplicate content problem is thus resolved.
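As a sketch, with a hypothetical address, the tag goes in the head of each duplicate version of the page and points to the URL to keep:

<!-- Placed in the <head>: designates the preferred URL for this content -->
<link rel="canonical" href="http://www.example.com/page.html">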
July 16, 2008.
Google is experimenting with a bit of Wikia in its search engine: users can mark results as good or undesirable.
The engine takes this into account, but only for the user who rated them. For now...
July 2008.
Google announces it has 1,000 billion web page URLs in its database.
Not all pages are indexed.
June 2008. Nofollow taken into account.
Nofollow links do not pass PageRank, but their share of PageRank is no longer redistributed to the normal links.
The PageRank passed to the linked pages is thus first divided by the total number of links, and the share of any nofollow links then evaporates.
Source: PageRank Sculpting
May 17, 2007. Universal Search
A new architecture and algorithm to populate the results pages with various kinds of material: images, videos, maps, news.
October 19, 2005. Jagger Update.
This update gives more weight to the relevance of links. Large sites also seem to be favored.
Spam is targeted, especially techniques that use CSS to hide content from visitors.
May 20, 2005. Bourbon Update.
An update to penalize sites with duplicate content, irrelevant links (unrelated to the page they point to), reciprocal links in large numbers, and massive numbers of links to a related site.
It affected many sites, with collateral damage.
2003. The Florida update.
It shook up the SERPs. One of the important changes is that the algorithm works differently depending on the type of query, and the SERPs are filled with results of different, complementary types.
1998.
The Google search engine goes online.
Further information
- How Google works on its algorithm! (Video; French subtitles available by pressing cc.)
- Google algorithm. Description of the original algorithm.
- Google anatomy. Search engine infrastructure diagram.