The History of Search Engines

Owen Whitcombe

Senior SEO Specialist

I am one of the older souls in our young and hip digital marketing office. Many of my colleagues have never known life without the internet, and so have no experience of search before Google came to dominate it.

Nowadays, Google is indispensable, but when search engines first came into being, “Google” was just a misspelling of googol (a number equal to 10 to the 100th power; in other words, an unfathomably large number). It wasn’t until 1996 that Google got its first outing, under the name BackRub, because the system checked backlinks to estimate the importance of a site in order to rank it.

The first recognised search engine was called Archie, but it wasn’t quite what you would class as a search engine in the modern sense. Archie browsed folders and files in FTP directories. These formed a collection, which was sorted and processed and then made accessible to the searcher. The main limitation was that Archie didn’t search the text of documents, only file and folder names.

World Wide Web – 1994

In 1994, the first recognised crawler-based search engine was developed. That year, Brian Pinkerton, a computer science student at the University of Washington, used his spare time to create WebCrawler, the first search engine to provide full-text search. With WebCrawler, Brian published a list of the top 25 websites on March 15, 1994. Only a month later, he announced the launch of WebCrawler live on the web with a database of 4,000 sites.

On June 11, 1994, Brian declared WebCrawler available for research. On November 14, 1994, WebCrawler served its one millionth search. It was the second most visited website on the internet in February 1996, but it quickly dropped below rival search engines.

Unlike its predecessors, it let users search for any word on any web page, which has been the standard for all major search engines since. It was also the first search engine to be widely known by the public.

In June 1995, America Online (AOL) acquired WebCrawler, but just over two years later, in April 1997, AOL sold it to Excite. In 2001, Excite went bankrupt. Despite this, WebCrawler still exists today, although it no longer uses its own database to display results. The engine last had a facelift, complete with a new spider logo, in 2018.

AltaVista was launched in 1995 and established itself as the standard among search engines for a number of years. It was the first with its own index, and it remained prominent in the search market for a long time because its crawler, a robot called “Scooter”, was particularly powerful. These days, the domain redirects to Yahoo!

Lycos came onto the market in the summer of 1994. Its function and principle were ground-breaking: Lycos could not only search documents, but also had an algorithm that measured the frequency of the search term on the page and looked at the proximity of the search words to each other.
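
As an illustrative sketch (my own toy example, not Lycos’s actual code), a relevance score combining term frequency with the proximity of query words might look like this:

```python
# Toy relevance score in the spirit of early Lycos: term frequency plus a
# bonus when the query words appear close together. The exact weighting is
# an assumption for illustration only.

def score(query, document):
    """Score a document for a query using frequency + proximity."""
    words = document.lower().split()
    terms = query.lower().split()

    # Term frequency: how often the query terms appear in the document.
    tf = sum(words.count(t) for t in terms)
    if tf == 0:
        return 0.0

    # Proximity: smallest gap between occurrences of two distinct terms.
    positions = {t: [i for i, w in enumerate(words) if w == t] for t in terms}
    best_gap = None
    for a in terms:
        for b in terms:
            if a >= b:  # visit each unordered pair of distinct terms once
                continue
            for i in positions[a]:
                for j in positions[b]:
                    gap = abs(i - j)
                    if best_gap is None or gap < best_gap:
                        best_gap = gap

    # Closer query terms boost the score; a lone term gets no bonus.
    proximity_bonus = 1.0 / best_gap if best_gap else 0.0
    return tf + proximity_bonus
```

A page where “search” and “engine” sit next to each other scores higher than one where the same words are scattered apart.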

That same year, Yahoo! brought its directory service online. It started out as little more than a collection of favourite links, because Yahoo! did nothing other than collect web addresses. In this way, users could save links according to their personal interests. As the accumulation of links grew over time, categories and subcategories were added to help users find the websites they were looking for.

Fast forward two years to 1996, and my personal favourite at the time was introduced: HotBot. HotBot launched with a “new links” marketing strategy, claiming to index the entire web weekly, more often than competitors like AltaVista, to give its users far fresher and more up-to-date results. At the time, it claimed to be the “most complete Web index online”, with over 54 million documents in its database.

For a period, HotBot used search data from Direct Hit Technologies, a tool that used click-through data to shape results and find the best candidates for ranking highly. User behaviour also fed into rankings, as Direct Hit was known to use how long searchers spent viewing each web page to help determine that page’s ranking. Direct Hit was purchased by Ask Jeeves in 2000, just as HotBot was in decline.
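
To illustrate the idea behind this kind of behavioural ranking (the function, weights and page names below are my own assumptions, not Direct Hit’s actual system), a click-through and viewing-time score might be sketched like this:

```python
# Toy illustration of behaviour-based ranking: combine how often searchers
# clicked a result (click-through rate) with how long they stayed on it
# (average dwell time). Weights and the dwell cap are illustrative only.

def popularity_score(impressions, clicks, total_view_seconds):
    """Combine click-through rate with average dwell time per click."""
    if impressions == 0 or clicks == 0:
        return 0.0
    ctr = clicks / impressions               # how often searchers chose it
    avg_dwell = total_view_seconds / clicks  # how long they stayed, on average
    # Cap dwell influence so a few very long visits can't dominate.
    return ctr * min(avg_dwell, 120.0)

# Re-rank hypothetical candidate pages by observed searcher behaviour.
pages = [
    ("a.example", popularity_score(1000, 50, 3000)),   # CTR 0.05, dwell 60s
    ("b.example", popularity_score(1000, 200, 4000)),  # CTR 0.20, dwell 20s
    ("c.example", popularity_score(1000, 10, 5000)),   # CTR 0.01, dwell capped
]
ranked = sorted(pages, key=lambda p: p[1], reverse=True)
```

Here the frequently clicked page wins even though its visits are shorter, which is the trade-off any behavioural signal has to balance.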

Sadly, HotBot no longer exists as a search engine, but you can see below the colourful 90s interface which helped make it such a popular choice and brings back good memories for me. As of 2020, the HotBot domain is controlled by a VPN company.

Hello Google – 1998

Google initially entered the landscape as an underdog, with AltaVista, Lycos and Yahoo! already holding the largest market shares, but it didn’t take long for it to pick up speed. Not only was the user interface particularly well structured, but the relevance of the search results was outstanding, and the speed was extremely fast for the time. While rival search engines tried to plaster every millimetre of the page with colourful advertising banners, Google refrained from doing so, which also helped improve user satisfaction.

In the same year, Microsoft brought out its own search engine, “MSN Search”. At the time, Microsoft had beaten the market leader Netscape in the battle for the most used browser and was able to win an extremely large number of users for its search engine. However, MSN Search failed to impress around the world.

Google World Wide Web Domination

As the years went by, Google gained more and more users with its “better” product, and the air grew thin for the other search engines. In 2006, MSN Search became “Windows Live Search”, and three years later, “Bing”.

Google convinced users with its much better search results and won the race for search engine domination right from the start. This was because Google not only used the information supplied by the website itself but also other signals, such as the popularity of a website. This popularity was mainly determined by links from other websites. In this way, Google was able to ensure that not just any page for a specific search term was displayed, but the most relevant one from the most “popular” website.

Links still play a huge part in how Google ranks webpages, but it now uses many other algorithm components, such as RankBrain, alongside the original PageRank.
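
For the curious, the core PageRank idea can be sketched in a few lines. This is a simplified toy version of the published algorithm, not Google’s production system:

```python
# Minimal PageRank sketch: a page's score is fed by the scores of the pages
# linking to it, computed by repeated redistribution (power iteration).

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal rank everywhere
    for _ in range(iterations):
        # Every page keeps a small baseline (the "random surfer" jump)...
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: share its rank across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                # ...and passes the rest of its rank along its outgoing links.
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical three-page web: A and C both link to B, B links back to A.
web = {"A": ["B"], "B": ["A"], "C": ["B"]}
ranks = pagerank(web)
```

On this tiny graph, B ends up with the highest rank because two pages link to it, which is exactly the “popularity via links” intuition described above.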

Today’s Search Engine Market Landscape

As of March 2022, according to StatCounter, the market share for today’s search engines, worldwide and here in the United Kingdom, can be seen below:


Bing was launched by Microsoft in 2009 as the successor to Live Search, in order to finally be able to compete with Google, and has increasingly included social media content since 2012. In October 2010, Bing started working with Facebook. Bing’s integration with Facebook’s search engine was seen as a potentially important way for Microsoft to compete with Google. Alongside Facebook’s own semi-programmable search queries, users could plug in web searches that would return structured information like the local weather.

But as of 2014, Facebook no longer uses Bing’s search engine on its platform.

In 2009, Bing and Yahoo formed a strategic alliance to unite against the giant Google. The reach of Microsoft’s technology was extended to all Yahoo users, but Google still refused to be pushed to one side.


Yahoo! was founded in 1994 by David Filo and Jerry Yang and was initially a pure web directory.

Yahoo was also very prominent in the world of online chat. Before instant messaging services, the 1990s web was full of chat rooms, with Yahoo Chat being one of the most popular options thanks to its wide choice of subject-specific rooms.

These days, Yahoo would be described as a meta search engine: one that gets its results from one or more crawler-based search engines. Another example of a meta search engine that uses Bing’s results is AOL, an early big hitter of the internet. This formerly well-known Internet Service Provider still has a search engine in play.


DuckDuckGo was developed by Gabriel Weinberg and went live in 2008. The search engine with the unusual name is a combination of a meta search engine and its own web crawler. Search results are delivered from over 400 external sources, such as Bing, Yandex, Yahoo Search BOSS and Wikipedia. In addition, there are the results of its own web crawler, DuckDuckBot, which regularly crawls the web for information. According to Wikipedia, on November 13, 2017, DuckDuckGo handled more than 20 million searches in one day.

DuckDuckGo styles itself as “The search engine that doesn’t follow you.” It delivers the same results to all users (there is no accounting for user data or location bias when displaying rankings), does not store any data and does not pass on any personal data. The company cites ads in its search results as its source of revenue; these relate only to the search query itself and not to the user’s search behaviour, something Google uses alongside the query.

To further increase anonymity, DuckDuckGo can also be used as a Tor hidden service. This service can only be accessed via the Tor browser, which also protects your own data even further.

The search at DuckDuckGo offers various options, such as setting the country in the search query or filtering for images, videos and news. In addition, there is a filter mode that can exclude adult content.


The Russian search engine Yandex (Яндекс) was founded in 1997 and, according to Wikipedia, is a company based in Amsterdam with its operational headquarters in Moscow. There are also branches in Belarus, Switzerland, the USA, Turkey and Germany.

As of March 2022, Yandex still holds the largest market share in Russia, with 50.21%, compared to 39.09% for Google. Like Google, which has Chrome, Yandex provides users with its own browser (Yandex.Browser), an online translator (Yandex.Translate), a map service (Yandex.Maps), e-mail addresses, cloud services and an app store for Android.


Baidu is the leading search engine in China, with a staggering 84.26% of the market share. Bing holds 6.65%, and Google sees only a 1.33% share of the Chinese market.


Ecosia is the search engine that plants trees. Over 35 million trees have been planted thanks to Ecosia users.

How does that work? Ecosia is a search engine that uses its search ad revenue to plant trees. An ecological search engine, so to speak.

Ecosia GmbH was founded on December 7th, 2009 and is based in Berlin. According to Wikipedia, 80% of its surplus revenue is donated to conservation organisations. Ecosia’s search results are provided by Bing, and no personally identifiable information is stored by Ecosia. The company claims to use the search technology of the search expert Bing and make it even more efficient with its own algorithms.

A Brief History of Search Engine Optimisation

The beginnings of search engine optimisation (SEO) can be roughly traced to the birth of search engines as we know them today, in the mid-1990s. At that time, search engine optimisation was mainly the manipulation of search results, with the aim of bringing dubious pages to the top of the results or disrupting the way the search engine worked.

Common approaches were so-called black hat methods such as keyword stuffing and cloaking. Keyword stuffing generally refers to the excessive use of keywords on a page with the aim of improving the page’s ranking for those keywords; this technique worked until around the beginning of 2000, and it was also common to place keywords on the page as “hidden text”, rendered white on a white background. The basic goal of cloaking, meanwhile, is to present a page differently to the visitor and to the search engines.

Google thought it could provide better results by using its backlink system to favour websites that others had found popular enough to link to. This, however, was another area targeted by black hat SEO techniques. In the beginning, it was a matter of quantity over quality, so a website owner could buy links from retailers to boost their site’s backlink profile, or use software tools that crawled the web and left backlinks to a site automatically. These links were often of very poor, spammy quality and often had no relevance to the site they pointed at.

Google has evolved into an entity that can now understand the difference between relevant links and spammy links. In the early 2010s it was commonplace for a website owner to create a disavow file to ensure their site was not affected by spammy link building techniques, but these days Google generally recognises these links and can ignore them when it comes to ranking a website or webpage.

One of Google’s many algorithm updates, called Penguin, is the reason Google no longer penalises most websites that have some spammy backlinks. But as recently as July 2021, Google was still improving this area and launching targeted anti-spam updates to its algorithm.

Today’s SEO professionals have strayed as far from the black hat methods of the 90s and 00s as they can. On the one hand, this is due to the regular updates and anti-spam measures from Google, which uncover and punish practically every black hat method. On the other hand, it is based on the experience that a white hat methodology is worthwhile and pays off in the long term.

The success of today’s SEOs is based on a technically clean structure and programming of the website, a flawless user experience, fast and easy-to-use front ends and, most importantly, quality content which leads to high-quality, relevant backlinks.

The Future of Search Engines

Google won’t be going anywhere anytime soon. While privacy and tracking concerns remain, the fact is that “Google it” is now much more common than “search for it”, and Google’s continued improvements, not just in its search results but in its incorporated features, mean that we will be using Google for many years to come.

Personalised and location-controlled search will become more and more important. By limiting the search area for a specific user, not only can important resources be saved, but the search for information becomes more personal and potentially more relevant for the person searching. It is already normal today that, for example, previous clicks and search queries influence a person’s future queries and results.

Google is always looking to make its SERPs (Search Engine Results Pages) more personalised and tailored to the person searching, so it will start to pre-empt what a user is looking for by providing follow-up searches they may want to make after their initial search. There are already first attempts at a “predictive” search engine, but since human language is far too complex and context-dependent, it will probably remain at the level of “suggestions” for the foreseeable future.

With the rise of voice-activated virtual assistants such as Alexa and Siri, these are increasingly taking the place of traditional search engines. Context-related queries are already possible with voice assistants: for example, they show us the nearest petrol station or cash point, or tell us the quickest way to get from where we are to another desired location. With their hands-free capabilities, voice-activated search engines are only going to become more commonplace.

While knowing all this history isn’t a requirement for today’s SEO professionals, we at Liberty live and breathe this stuff, which makes us one of the UK’s leading SEO consulting agencies.



Owen Whitcombe

Senior SEO Specialist

Owen has worked primarily in Ecommerce since leaving university, spending many years in the online sports retail arena before gaining experience in the online catering business, then becoming digital marketing manager for a national toy company before finding his feet in SEO. As a results-driven individual, Owen loves nothing more than when we can give…

