SEO | December 17, 2019
SERP trends of 2020: how to stay ahead of the curve (and Google!)
Unbelievably, the way we talk about and consider search intent hasn’t changed much since 2002, even though Google’s ranking factors have matured immensely since then. But now intent has stepped into the spotlight and is the phrase on every digital marketer’s lips. Keep reading to stay ahead of the curve in 2020…
What is intent?
Intent is what search engines, particularly Google, use to determine the results they serve for a search query. Originally categorised as ‘navigational’, ‘informational’ and ‘transactional’, a search query is weighed for its intent so that the search engine pulls the most relevant results from the web to serve the user.
Intent is what determines whether a recipe is shown over an online grocery store, or a YouTube video over an article. In short, it is how search engines understand what results a user is expecting to receive.
SEOs group keywords based on search intent to build relevant distribution lists, which, through optimisation, will likely aid the success of content in organic search results. So, for example, if a search engine understands the intent of a search query to be informational, our group of keywords needs to include other relevant informational queries.
But, what if we told you there is a better way to group keywords? Interested in finding out how? Keep reading… because we can confidently say that the likelihood is you’ve been grouping keywords incorrectly, and you didn’t even know it.
How do SEOs discover the intent of a keyword?
The process of SERP analysis differs between search engine specialists, and you’ll probably find a number of articles explaining different methods for uncovering keyword intent.
The most popular method (usually because it’s the quickest and appears to be the easiest/most obvious) is to guess the intent. And it isn’t as daft as it sounds.
#1 Manually guessing search results
It may seem natural to some that a particular word or phrase would be grouped with another. I mean, f1 and formula one would obviously belong in the same keyword group, right?
Another example… a search specialist with a keen interest in music has been assigned a music client – great! They’re familiar with the words and phrases that musicians use; it’s a perfect match.
A priority keyword for this particular music client that needs grouping with other related keywords is ‘EQ’. Well, that quite obviously references an equalizer, so the specialist guesses the relevance and intent based on ‘equalizer’.
Great. Job done.
However, had the search specialist used Google to confirm their suggestion, they would have found that ‘EQ’ to Google is ‘emotional intelligence’ and is closely related to IQ; uncovering no musical connection.
Suddenly that perfectly matched search specialist has over-promised to a client by creating a keyword group to target the head term ‘EQ’ with 100,000+ searches, when the group has no relevance.
You see the problem? And believe it or not, this is a very common way to justify intent and group keywords.
#2 A manual approach without the guesswork
This is likely the most common method of topic clustering (i.e. grouping keywords): search specialists use the tools available to them within the SERPs to manually build keyword groups. The approach sees a long list of keywords searched individually, to analyse the results and uncover keyword intent.
Because it uses search result data, it is more reliable than basing intent on conjecture (it’s best practice, too). However, it’s still prone to human error and incredibly labour-intensive.
#3 Using what’s commonly known as ‘n-grams’
An ‘n-gram’ is a contiguous subsequence of n items from a given sequence; in search, the items are words. In basic terms, it’s a run of n consecutive words.
Variations of n-grams exist when two-word phrases, three-word phrases, or four-word phrases and so on are used.
Personalised gift = two-word phrase
Personalised wedding gift = three-word phrase
When using the n-gram method, a list of component terms is compiled via keyword research and analysed for frequency to identify important terms or phrases. Those terms and phrases, sometimes referred to as “hot words”, are cleansed of negative or irrelevant keywords before being organised into topic groups. Clustering then happens in a spreadsheet by matching the same word across the dataset.
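The spreadsheet step above can be sketched in a few lines of Python. This is a minimal illustration of the n-gram method, not any specific tool; the keyword list is invented for the example.

```python
from collections import Counter

def ngrams(phrase, n):
    """Return every contiguous n-word subsequence of a phrase."""
    words = phrase.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Hypothetical keyword-research list, for illustration only.
keywords = [
    "personalised gift",
    "personalised wedding gift",
    "personalised birthday gift",
    "wedding gift ideas",
]

# Count how often each two-word phrase (bigram) appears across the list;
# the most frequent bigrams are the candidate "hot words" for grouping.
counts = Counter(gram for kw in keywords for gram in ngrams(kw, 2))
common = counts.most_common(3)
```

Here “wedding gift” surfaces as the most frequent bigram, so it would seed a topic group; anything irrelevant would be cleansed from the list before clustering.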
This is slightly more automated than the manual approach above, with spreadsheet formulas doing the grouping on your behalf, but a mass of desirable keywords can still be missed.
Not only is using n-grams limiting, but it’s also restrictive in niche industries and emerging markets that are early adopters of new terminology that has no search history.
#4 Natural Language Processing
Again, this is a level-up from using an n-gram to cluster keywords, so we’re looking at a slightly slicker method here.
Natural language processing (NLP) uses search engine result page data to inform keyword groups. Search specialists will scrape data from one, two, sometimes three results pages for a given query in a bid to find the semantic keywords that work together to create context.
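In its simplest form, that semantic analysis boils down to counting which terms co-occur across the scraped result pages. The sketch below assumes you already have the snippets in hand (the snippets, and the tiny stopword list, are invented for the example); real NLP pipelines go much further.

```python
from collections import Counter

# A deliberately tiny stopword list, for illustration only.
STOPWORDS = {"the", "a", "for", "and", "to", "of", "in", "by", "our"}

# Hypothetical snippets scraped from page 1 of the results for a query.
snippets = [
    "The 15 best makeup brushes of 2019, tested by our beauty editors",
    "How to choose makeup brushes: a guide to brush shapes and bristles",
    "Best makeup brush sets for every budget",
]

def terms(text):
    """Lowercased alphabetic words, minus stopwords."""
    return [w for w in text.lower().split() if w.isalpha() and w not in STOPWORDS]

# Frequent terms hint at the semantic context the results share.
counts = Counter(t for s in snippets for t in terms(s))
common = counts.most_common(5)
```

Terms like “makeup” and “best” dominate, which is the signal an SEO would use to build context around the query.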
Using NLP techniques sounds fancy and is slightly more impressive than relying on your intuition, but does it work?
In some circumstances, yes. However, Google has already made a decision on user intent using a corpus many times larger than anything you may be able to gather as an in-house or agency SEO.
Is there a better way to determine intent?
We believe so. Put it this way: a search engine has its own understanding of every search query ever entered into its system, and it’s that understanding that plays a big part in deciding which pages to return as a result. This search query data exists, and it’s time we started using it better.
Take the phrase “makeup brush”: Google has already decided, based on rankings and tests over time, that people searching that keyword have purchase intent. That is why the front-page results are all product category pages.
Now take, for example, the phrase “best makeup brush”. There is purchase intent, but it is one step removed from “makeup brush”. The user, in Google’s eyes, is still in the research phase of their buying journey. Therefore, the SERP is now largely industry publications and lists of the top makeup brushes as compiled by the industry media. It is not retailers, brands or manufacturers that appear.
Optimising your “makeup brush” category page for “best” may seem like a natural fit; however, Google will not reward this kind of optimisation, so it will not organically benefit the category page.
So, how do we use Google’s understanding to our advantage?
A SERP refers to the page presented to a user (with default settings) when a given query has passed through a search engine. It is arguably the only reliable data you can legally get from Google.
The kind of usable data that is provided by the SERPs includes:
- Ranking URLs
- Position of ranking URL
- Number of results from a specific domain
- Number of total organic results
- SERP features, such as map pack, knowledge graph, image carousel, answer box, etc.
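The fields above map naturally onto a simple record per scraped SERP. This is just one plausible representation (the query, URLs and feature names are hypothetical), shown with the per-domain count derived from it:

```python
from collections import Counter
from urllib.parse import urlparse

# One scraped SERP, holding the fields listed above.
# Query, URLs and features are invented for illustration.
serp = {
    "query": "best makeup brush",
    "organic_results": [  # ranking URLs, in position order
        {"position": 1, "url": "https://example-magazine.com/best-brushes"},
        {"position": 2, "url": "https://example-reviews.com/brush-guide"},
        {"position": 3, "url": "https://example-magazine.com/brush-sets"},
    ],
    "total_results": 3,
    "features": ["answer_box", "image_carousel"],  # SERP features present
}

# Number of results from each specific domain on the page.
per_domain = Counter(
    urlparse(r["url"]).netloc for r in serp["organic_results"]
)
```

From records like this, every signal in the bullet list is a one-liner to extract.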
By combining this data from a search engine results page, it is possible to judge how similar the intent of one keyword is to another. That’s how a manual method to keyword grouping (process #2 listed above) works, right? Yes, but bear with us…
How closely related are “best eye makeup brushes” and “best makeup brushes” in your mind?
Looking at the example above with the SERP results side-by-side, the immediate response is that these two results look nothing alike.
However, strip out the entities powered by Google’s knowledge graph and solely compare the organic search results, and it becomes clear that they have a large degree of similarity.
In a nutshell, some of the same URLs ranking for “best eye makeup brushes” are also ranking for “best makeup brushes”, and working manually a search specialist may have missed it.
This approach can be used to provide a mathematical metric for assessing keyword similarity: an indicator of whether a keyword belongs on a page or not.
How to best assess the similarity of a keyword’s intent?
An example using the mathematical indicator…
There are two SERPs containing ten URLs each. If all the URLs are the same and share the same positions on the page in both SERPs, the similarity is complete, or 100%. If only two URLs match and they are in different positions, the similarity is 5%.
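One scoring scheme consistent with those two figures weights a same-position match fully and a different-position match at a quarter. This is our reconstruction for illustration, not necessarily the exact formula behind the indicator:

```python
def serp_similarity(serp_a, serp_b, same_pos_weight=1.0, diff_pos_weight=0.25):
    """Score two SERPs (lists of URLs in rank order) between 0 and 1.

    A URL in both SERPs at the same position scores full weight; a URL in
    both SERPs at different positions scores a reduced weight. With ten
    results per page, these illustrative weights give 100% for identical
    SERPs and 5% for two shared URLs at different ranks, matching the
    worked figures in the text.
    """
    pos_a = {url: i for i, url in enumerate(serp_a)}
    pos_b = {url: i for i, url in enumerate(serp_b)}
    shared = pos_a.keys() & pos_b.keys()
    score = sum(
        same_pos_weight if pos_a[url] == pos_b[url] else diff_pos_weight
        for url in shared
    )
    return score / max(len(serp_a), len(serp_b))
```

Identical ten-result SERPs score 1.0; two shared URLs at different positions score 2 × 0.25 / 10 = 0.05.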
This method can produce huge amounts of data, which is something of a double-edged sword: the data is needed to draw conclusions, but those conclusions are hard to draw when so much of it exists.
What we use to conclude the search intent similarity of masses of data…
Network graphs. They allow our search specialists to visualise vast data sets in interesting ways. They make analysis intuitive, and they are also a great way to present insights to our clients.
The nodes represent the keyword and/or phrase being analysed, and the lines between them represent the strength of the link; we call them weighted bi-graph clusters.
Using soft clustering of relational SERPs data in a network graph, it is possible to deduce how people search for cosmetics (for example), and how Google understands and contextualises those searches.
Topic clustering in this way can sometimes reveal opportunities we/the client didn’t know existed, and manually it would take us a lot more time and market knowledge than we currently possess to pull out the same valuable information.
How do we do it?
Now, that’s our secret.
But, in a nutshell, our Senior Technical SEO Specialist built a tool that we call “The SERPs Moz Combinator 3000” – except in front of clients!
This tool is fed data and produces a series of wonderfully formulated spreadsheets. It takes data scraped from Google’s results pages, gathered by our team using an automated system, then uses a variety of formulas to build sheets that dissect that information.
The detailed results look at the client domain, direct competitor domains and indirect competitor domains to create similarity scores between domains, pages and keywords.
From that information, our in-house tool also quantifies the weight of similarities, the keyword groupings and importance to create the data required for our visualisation tool, Gephi.
Once the data is run through the network graph generator, Gephi, the keywords with similar intent are drawn into clusters. The closer a keyword sits to the centre of a cluster, the stronger its intent and relevance. If it straddles two clusters, the intent is likely mixed.
Conclusion: is the extra work worth it?
We have an example that suggests it is.
Label Source, a client of ours who came onboard in 2018, had only 39 keywords ranking on page 1, with most not even in the top 50. With our topic-clustering, intent-focused tool, within just a few short months we increased the first-page ranking positions to 236 (+505%).
A simple example of the way the tool works: it taught us that a “brass asset tag” and a “brass number tag” both carry transactional intent. Manually, we might assume a “brass number tag” was for numbering front doors and an “asset tag” for numbering business assets, but the scraped search data proved that they are in fact one and the same in the eyes of a search engine.
Not only was it a perfect tool for processing keyword research at a grand scale, but the visual representation of weighted keyword clustering using a series of nodes and edges was the perfect way to present to the client for their buy-in too.
“Thanks for this spreadsheet, I am working my way through it and it is helping me spot a lot of opportunities I never realised were there.” – Ryan Phillips, Digital Marketing Assistant.
What’s more, almost 50% of transactions are attributable to organic traffic off the back of our keyword distribution strategy, following the data gathered by our in-house tool.
Read more about this case study here.
Interested in seeing how our topic clustering tool can help you? Get in touch!