
The Basics of Search Engine Optimization

Ever heard of Maslow's hierarchy of needs? It's a theory of psychology that


prioritizes the most fundamental human needs (like air, water, and physical
safety) over more advanced needs (like esteem and social belonging). The
theory is that you can't achieve the needs at the top without ensuring the
more fundamental needs are met first. Love doesn't matter if you don't
have food.

Our founder, Rand Fishkin, made a similar pyramid to explain the way folks
should go about SEO, and we've affectionately dubbed it "Mozlow's
hierarchy of SEO needs."

Here's what it looks like:

As you can see, the foundation of good SEO begins with ensuring crawl
accessibility, and moves up from there.

Using this beginner's guide, we can follow these seven steps to successful
SEO:
1. Crawl accessibility so engines can read your website
2. Compelling content that answers the searcher’s query
3. Keyword optimization to attract searchers & engines
4. Great user experience including a fast load speed and compelling UX
5. Share-worthy content that earns links, citations, and amplification
6. Title, URL, & description to draw high CTR in the rankings
7. Snippet/schema markup to stand out in SERPs

Chapter 1
SEO 101
What is it, and why is it important?

Welcome! We’re excited that you’re here!

If you already have a solid understanding of SEO and why it's important,
you can skip to Chapter 2 (though we'd still recommend skimming the best
practices from Google and Bing at the end of this chapter; they're useful
refreshers).

For everyone else, this chapter will help build your foundational SEO
knowledge and confidence as you move forward.

What is SEO?
SEO stands for “search engine optimization.” It’s the practice of increasing
both the quality and quantity of website traffic, as well as exposure to your
brand, through non-paid (also known as "organic") search engine results.

Despite the acronym, SEO is as much about people as it is about search


engines themselves. It’s about understanding what people are searching
for online, the answers they are seeking, the words they’re using, and the
type of content they wish to consume. Knowing the answers to these
questions will allow you to connect to the people who are searching online
for the solutions you offer.

If knowing your audience’s intent is one side of the SEO coin, delivering it
in a way search engine crawlers can find and understand is the other. In
this guide, expect to learn how to do both.

What's that word mean?


If you're having trouble with any of the definitions in this chapter, be sure to
open up our SEO glossary for reference!
Check out the SEO glossary 

Search engine basics


Search engines are answer machines. They scour billions of pieces of
content and evaluate thousands of factors to determine which content is
most likely to answer your query.

Search engines do all of this by discovering and cataloguing all available


content on the Internet (web pages, PDFs, images, videos, etc.) via a
process known as “crawling and indexing,” and then ordering it by how well
it matches the query in a process we refer to as “ranking.” We’ll cover
crawling, indexing, and ranking in more detail in Chapter 2.

Which search results are "organic"?


As we said earlier, organic search results are the ones that are earned
through effective SEO, not paid for (i.e. not advertising). These used to be
easy to spot - the ads were clearly labeled as such and the remaining
results typically took the form of "10 blue links" listed below them. But with
the way search has changed, how can we spot organic results today?

Today, search engine results pages — often referred to as “SERPs” — are


filled with both more advertising and more dynamic organic results formats
(called “SERP features”) than we've ever seen before. Some examples of
SERP features are featured snippets (or answer boxes), People Also Ask
boxes, image carousels, etc. New SERP features continue to emerge,
driven largely by what people are seeking.

For example, if you search for "Denver weather," you’ll see a weather
forecast for the city of Denver directly in the SERP instead of a link to a site
that might have that forecast. And, if you search for “pizza Denver,” you’ll
see a “local pack” result made up of Denver pizza places. Convenient,
right?

It’s important to remember that search engines make money from


advertising. Their goal is to better solve searchers’ queries (within SERPs),
to keep searchers coming back, and to keep them on the SERPs longer.

Some SERP features on Google are organic and can be influenced by


SEO. These include featured snippets (a promoted organic result that
displays an answer inside a box) and related questions (a.k.a. "People Also
Ask" boxes).

It's worth noting that there are many other search features that, even
though they aren't paid advertising, can't typically be influenced by SEO.
These features often have data acquired from proprietary data sources,
such as Wikipedia, WebMD, and IMDb.

Why SEO is important


While paid advertising, social media, and other online platforms can
generate traffic to websites, the majority of online traffic is driven by search
engines.

Organic search results cover more digital real estate, appear more credible
to savvy searchers, and receive way more clicks than paid advertisements.
For example, of all US searches, only ~2.8% of people click on paid
advertisements.

In a nutshell: SEO has ~20X more traffic opportunity than PPC on both


mobile and desktop.

SEO is also one of the only online marketing channels that, when set up
correctly, can continue to pay dividends over time. If you provide a solid
piece of content that deserves to rank for the right keywords, your traffic
can snowball over time, whereas advertising needs continuous funding to
send traffic to your site.

Search engines are getting smarter, but they still need our help.

Optimizing your site will help deliver better information to search engines so
that your content can be properly indexed and displayed within search
results.

Should I hire an SEO professional, consultant, or agency?


Depending on your bandwidth, willingness to learn, and the complexity of
your website(s), you could perform some basic SEO yourself. Or, you might
discover that you would prefer the help of an expert. Either way is okay!

If you end up looking for expert help, it's important to know that many
agencies and consultants "provide SEO services," but can vary widely in
quality. Knowing how to choose a good SEO company can save you a lot
of time and money, as the wrong SEO techniques can actually harm your
site more than they will help.

White hat vs black hat SEO


"White hat SEO" refers to SEO techniques, best practices, and strategies
that abide by search engine rule, its primary focus to provide more value to
people.
"Black hat SEO" refers to techniques and strategies that attempt to
spam/fool search engines. While black hat SEO can work, it puts websites
at tremendous risk of being penalized and/or de-indexed (removed from
search results) and has ethical implications.

Penalized websites have bankrupted businesses. It's just another reason to


be very careful when choosing an SEO expert or agency.

Search engines share similar goals with the SEO industry


Search engines want to help you succeed. In fact, Google even has
a Search Engine Optimization Starter Guide, much like the Beginner’s
Guide! They're also quite supportive of efforts by the SEO community.
Digital marketing conferences — such as Unbounce, MNsearch,
SearchLove, and Moz's own MozCon — regularly attract engineers and
representatives from major search engines.

Google assists webmasters and SEOs through their Webmaster Central


Help Forum and by hosting live office hour hangouts. (Bing, unfortunately,
shut down their Webmaster Forums in 2014.)

While webmaster guidelines vary from search engine to search engine, the
underlying principles stay the same: Don’t try to trick search engines.
Instead, provide your visitors with a great online experience. To do that,
follow search engine guidelines and fulfill user intent.

Google Webmaster Guidelines


Basic principles:

 Make pages primarily for users, not search engines.


 Don't deceive your users.
 Avoid tricks intended to improve search engine rankings. A good rule
of thumb is whether you'd feel comfortable explaining what you've
done to a website to a Google employee. Another useful test is to
ask, "Does this help my users? Would I do this if search engines
didn't exist?"
 Think about what makes your website unique, valuable, or engaging.

Things to avoid:

 Automatically generated content


 Participating in link schemes
 Creating pages with little or no original content (i.e. copied from
somewhere else)
 Cloaking — the practice of showing search engine crawlers different
content than visitors.
 Hidden text and links
 Doorway pages — pages created to rank well for specific searches to
funnel traffic to your website.

It's good to be very familiar with Google's Webmaster Guidelines. Make


time to get to know them.

See the full Google Webmaster Guidelines here 


Bing Webmaster Guidelines
Basic principles:

 Provide clear, deep, engaging, and easy-to-find content on your site.


 Keep page titles clear and relevant.
 Links are regarded as a signal of popularity and Bing rewards links
that have grown organically.
 Social influence and social shares are positive signals and can have
an impact on how you rank organically in the long run.
 Page speed is important, along with a positive, useful user
experience.
 Use alt attributes to describe images, so that Bing can better
understand the content.

Things to avoid:

 Thin content, pages showing mostly ads or affiliate links, or that


otherwise redirect visitors away to other sites will not rank well.
 Abusive link tactics that aim to inflate the number and nature of
inbound links, such as buying links or participating in link schemes, can
lead to de-indexing.
 Ensure clean, concise, keyword-inclusive URL structures are in
place. Dynamic parameters can dirty up your URLs and cause
duplicate content issues.
 Make your URLs descriptive, short, keyword rich when possible, and
avoid non-letter characters.
 Burying links in Javascript/Flash/Silverlight; keep content out of these
as well.
 Duplicate content
 Keyword stuffing
 Cloaking — the practice of showing search engine crawlers different
content than visitors.
Guidelines for representing your local business on Google
If the business for which you perform SEO work operates locally, either out
of a storefront or by traveling to customers’ locations to perform services, it
qualifies for a Google My Business listing. For local businesses like these,
Google has guidelines that govern what you should and shouldn’t do in
creating and managing these listings.

Basic principles:

 Be sure you’re eligible for inclusion in the Google My Business index;


you must have a physical address, even if it’s your home address,
and you must serve customers face-to-face, either at your location
(like a retail store) or at theirs (like a plumber)
 Honestly and accurately represent all aspects of your local business
data, including its name, address, phone number, website address,
business categories, hours of operation, and other features.

Things to avoid

 Creation of Google My Business listings for entities that aren’t eligible


 Misrepresentation of any of your core business information, including
“stuffing” your business name with geographic or service keywords,
or creating listings for fake addresses
 Use of PO boxes or virtual offices instead of authentic street
addresses
 Abuse of the review portion of the Google My Business listing, via
fake positive reviews of your business or fake negative ones of your
competitors
 Costly, novice mistakes stemming from failure to read the fine details
of Google’s guidelines

If you'll be focusing on ranking in the Bing search engine, get to know their
guidelines, as well. It's only polite (and good sense!).

See the full Bing Webmaster Guidelines here 


Local, national, or international SEO?
Local businesses will often want to rank for local-intent keywords such as
“[service] + [near me]” or “[service] + [city]” in order to capture potential
customers searching for products or services in the specific locale in which
they offer them. However, not all businesses operate locally. Many
websites do not represent a location-based business, but instead target
audiences on a national or even an international level. Be on the lookout for
more on the topic of local, national, and international SEO in Chapter 4!
Fulfilling user intent
Instead of violating these guidelines in an attempt to trick search engines
into ranking you higher, focus on understanding and fulfilling user intent.
When a person searches for something, they have a desired outcome.
Whether it’s an answer, concert tickets, or a cat photo, that desired content
is their “user intent.”

If a person performs a search for “bands," is their intent to find musical


bands, wedding bands, band saws, or something else?

Your job as an SEO is to quickly provide users with the content they desire
in the format in which they desire it.

Common user intent types:

Informational: Searching for information. Example: “What is the best type


of laptop for photography?”
Navigational: Searching for a specific website. Example: “Apple”
Transactional: Searching to buy something. Example: “good deals on
MacBook Pros”
You can get a glimpse of user intent by Googling your desired keyword(s)
and evaluating the current SERP. For example, if there's a photo carousel,
it’s very likely that people searching for that keyword search for photos.

Also evaluate what content your top-ranking competitors are providing that
you currently aren’t. How can you provide 10X the value on your website?

Providing relevant, high-quality content on your website will help you rank
higher in search results, and more importantly, it will establish credibility
and trust with your online audience.

Before you do any of that, you have to first understand your website’s goals
to execute a strategic SEO plan.

Know your website/client’s goals


Every website is different, so take the time to really understand a specific
site’s business goals. This will not only help you determine which areas of
SEO you should focus on, where to track conversions, and how to set
benchmarks, but it will also help you create talking points for negotiating
SEO projects with clients, bosses, etc.

What will your KPIs (Key Performance Indicators) be to measure the return
on SEO investment? More simply, what is your barometer to measure the
success of your organic search efforts? You'll want to have it documented,
even if it's this simple:

For the website ____________, my primary SEO KPI is ____________.


Here are a few common KPIs to get you started:

 Sales
 Downloads
 Email signups
 Contact form submissions
 Phone calls

And if your business has a local component, you’ll want to define KPIs for
your Google My Business listings, as well. These might include:

 Clicks-to-call
 Clicks-to-website
 Clicks-for-driving-directions

You may have noticed that things like “ranking” and “traffic” weren’t on the
KPIs list, and that’s intentional.

“But wait a minute!” You say. “I came here to learn about SEO because I
heard it could help me rank and get traffic, and you’re telling me those
aren’t important goals?”

Not at all! You’ve heard correctly. SEO can help your website rank higher in
search results and consequently drive more traffic to your website; it’s just
that ranking and traffic are a means to an end. There’s little use in ranking if
no one is clicking through to your site, and there’s little use in increasing
your traffic if that traffic isn’t accomplishing a larger business objective.

For example, if you run a lead generation site, would you rather have:

 1,000 monthly visitors and 3 people fill out a contact form? Or...
 300 monthly visitors and 40 people fill out a contact form?

If you’re using SEO to drive traffic to your site for the purpose of
conversions, we hope you’d pick the latter! Before embarking on SEO,
make sure you’ve laid out your business goals, then use SEO to help you
accomplish them — not the other way around.

SEO accomplishes so much more than vanity metrics. When done well, it
helps real businesses achieve real goals for their success.
Chapter 2
HOW SEARCH ENGINES WORK: CRAWLING, INDEXING,
AND RANKING
First, show up.

As we mentioned in Chapter 1, search engines are answer machines. They


exist to discover, understand, and organize the internet's content in order to
offer the most relevant results to the questions searchers are asking.

In order to show up in search results, your content needs to first be visible


to search engines. It's arguably the most important piece of the SEO
puzzle: If your site can't be found, there's no way you'll ever show up in the
SERPs (Search Engine Results Page).

How do search engines work?


Search engines have three primary functions:

1. Crawl: Scour the Internet for content, looking over the code/content


for each URL they find.
2. Index: Store and organize the content found during the crawling
process. Once a page is in the index, it’s in the running to be
displayed as a result to relevant queries.
3. Rank: Provide the pieces of content that will best answer a
searcher's query, which means that results are ordered by most
relevant to least relevant.

What is search engine crawling?


Crawling is the discovery process in which search engines send out a team
of robots (known as crawlers or spiders) to find new and updated content.
Content can vary — it could be a webpage, an image, a video, a PDF, etc.
— but regardless of the format, content is discovered by links.

What's that word mean?


Having trouble with any of the definitions in this section? Our SEO glossary
has chapter-specific definitions to help you stay up-to-speed.

See Chapter 2 definitions 


Googlebot starts out by fetching a few web pages, and then follows the
links on those webpages to find new URLs. By hopping along this path of
links, the crawler is able to find new content and add it to its index
called Caffeine — a massive database of discovered URLs — to later be
retrieved when a searcher is seeking information that the content on that
URL is a good match for.

What is a search engine index?


Search engines process and store information they find in an index, a huge
database of all the content they’ve discovered and deem good enough to
serve up to searchers.

Search engine ranking


When someone performs a search, search engines scour their index for
highly relevant content and then order that content in the hopes of solving
the searcher's query. This ordering of search results by relevance is known
as ranking. In general, you can assume that the higher a website is ranked,
the more relevant the search engine believes that site is to the query.

It’s possible to block search engine crawlers from part or all of your site, or
instruct search engines to avoid storing certain pages in their index. While
there can be reasons for doing this, if you want your content found by
searchers, you have to first make sure it’s accessible to crawlers and is
indexable. Otherwise, it’s as good as invisible.

By the end of this chapter, you’ll have the context you need to work with the
search engine, rather than against it!

In SEO, not all search engines are equal


Many beginners wonder about the relative importance of particular search
engines. Most people know that Google has the largest market share, but
how important is it to optimize for Bing, Yahoo, and others? The truth is that
despite the existence of more than 30 major web search engines, the SEO
community really only pays attention to Google. Why? The short answer is
that Google is where the vast majority of people search the web. If we
include Google Images, Google Maps, and YouTube (a Google
property), more than 90% of web searches happen on Google — that's
nearly 20 times Bing and Yahoo combined.

Crawling: Can search engines find your pages?


As you've just learned, making sure your site gets crawled and indexed is a
prerequisite to showing up in the SERPs. If you already have a website, it
might be a good idea to start off by seeing how many of your pages are in
the index. This will yield some great insights into whether Google is
crawling and finding all the pages you want it to, and none that you don’t.

One way to check your indexed pages is "site:yourdomain.com",


an advanced search operator. Head to Google and type
"site:yourdomain.com" into the search bar. This will return results Google
has in its index for the site specified:

The number of results Google displays (see “About XX results” above) isn't


exact, but it does give you a solid idea of which pages are indexed on your
site and how they are currently showing up in search results.

For more accurate results, monitor and use the Index Coverage report in
Google Search Console. You can sign up for a free Google Search
Console account if you don't currently have one. With this tool, you can
submit sitemaps for your site and monitor how many submitted pages have
actually been added to Google's index, among other things.

If you're not showing up anywhere in the search results, there are a few
possible reasons why:

 Your site is brand new and hasn't been crawled yet.


 Your site isn't linked to from any external websites.
 Your site's navigation makes it hard for a robot to crawl it effectively.
 Your site contains some basic code called crawler directives that is
blocking search engines.
 Your site has been penalized by Google for spammy tactics.
Tell search engines how to crawl your site
If you used Google Search Console or the “site:domain.com” advanced
search operator and found that some of your important pages are missing
from the index and/or some of your unimportant pages have been
mistakenly indexed, there are some optimizations you can implement to
better direct Googlebot how you want your web content crawled. Telling
search engines how to crawl your site can give you better control of what
ends up in the index.

Most people think about making sure Google can find their important
pages, but it’s easy to forget that there are likely pages you don’t want
Googlebot to find. These might include things like old URLs that have thin
content, duplicate URLs (such as sort-and-filter parameters for e-
commerce), special promo code pages, staging or test pages, and so on.

To direct Googlebot away from certain pages and sections of your site, use
robots.txt.

Robots.txt
Robots.txt files are located in the root directory of websites (ex.
yourdomain.com/robots.txt) and suggest which parts of your site search
engines should and shouldn't crawl, as well as the speed at which they
crawl your site, via specific robots.txt directives.
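As a rough sketch, a robots.txt file with a couple of hypothetical directives might look like the following. The paths and sitemap URL here are placeholders for illustration, not recommendations for your site:

# A minimal, hypothetical robots.txt file served at yourdomain.com/robots.txt
# "/staging/" and "/promo-codes/" are made-up paths used only as examples.

User-agent: *
# Please skip the staging area and special promo code pages
Disallow: /staging/
Disallow: /promo-codes/

# Point crawlers to the sitemap
Sitemap: https://www.yourdomain.com/sitemap.xml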

How Googlebot treats robots.txt files

 If Googlebot can't find a robots.txt file for a site, it proceeds to crawl


the site.
 If Googlebot finds a robots.txt file for a site, it will usually abide by the
suggestions and proceed to crawl the site.
 If Googlebot encounters an error while trying to access a site’s
robots.txt file and can't determine if one exists or not, it won't crawl
the site.

Optimize for crawl budget!


Crawl budget is the average number of URLs Googlebot will crawl on your
site before leaving, so crawl budget optimization ensures that Googlebot
isn’t wasting time crawling through your unimportant pages at the risk of
ignoring your important pages. Crawl budget is most important on very
large sites with tens of thousands of URLs, but it’s never a bad idea to
block crawlers from accessing the content you definitely don’t care about.
Just make sure not to block a crawler’s access to pages you’ve added
other directives on, such as canonical or noindex tags. If Googlebot is
blocked from a page, it won’t be able to see the instructions on that page.
Not all web robots follow robots.txt. People with bad intentions (e.g., e-mail
address scrapers) build bots that don't follow this protocol. In fact, some
bad actors use robots.txt files to find where you’ve located your private
content. Although it might seem logical to block crawlers from private pages
such as login and administration pages so that they don’t show up in the
index, placing the location of those URLs in a publicly accessible robots.txt
file also means that people with malicious intent can more easily find them.
It’s better to NoIndex these pages and gate them behind a login form rather
than place them in your robots.txt file.

You can read more details about this in the robots.txt portion of our
Learning Center.

Defining URL parameters in GSC


Some sites (most common with e-commerce) make the same content
available on multiple different URLs by appending certain parameters to
URLs. If you’ve ever shopped online, you’ve likely narrowed down your
search via filters. For example, you may search for “shoes” on Amazon,
and then refine your search by size, color, and style. Each time you refine,
the URL changes slightly:

https://www.example.com/products/women/dresses/green.htm
https://www.example.com/products/women?category=dresses&color=green
https://example.com/shopindex.php?product_id=32&highlight=green+dress&cat_id=1&sessionid=123$affid=43
How does Google know which version of the URL to serve to searchers?
Google does a pretty good job at figuring out the representative URL on its
own, but you can use the URL Parameters feature in Google Search
Console to tell Google exactly how you want them to treat your pages. If
you use this feature to tell Googlebot “crawl no URLs with ____ parameter,”
then you’re essentially asking to hide this content from Googlebot, which
could result in the removal of those pages from search results. That’s what
you want if those parameters create duplicate pages, but not ideal if you
want those pages to be indexed.

Can crawlers find all your important content?


Now that you know some tactics for ensuring search engine crawlers stay
away from your unimportant content, let’s learn about the optimizations that
can help Googlebot find your important pages.

Sometimes a search engine will be able to find parts of your site by


crawling, but other pages or sections might be obscured for one reason or
another. It's important to make sure that search engines are able to
discover all the content you want indexed, and not just your homepage.
Ask yourself this: Can the bot crawl through your website, and not just to it?

Is your content hidden behind login forms?


If you require users to log in, fill out forms, or answer surveys before
accessing certain content, search engines won't see those protected
pages. A crawler is definitely not going to log in.

Are you relying on search forms?


Robots cannot use search forms. Some individuals believe that if they
place a search box on their site, search engines will be able to find
everything that their visitors search for.

Is text hidden within non-text content?


Non-text media forms (images, video, GIFs, etc.) should not be used to
display text that you wish to be indexed. While search engines are getting
better at recognizing images, there's no guarantee they will be able to read
and understand the text within them just yet. It's always best to add text
within the HTML markup of your webpage.

Can search engines follow your site navigation?


Just as a crawler needs to discover your site via links from other sites, it
needs a path of links on your own site to guide it from page to page. If
you’ve got a page you want search engines to find but it isn’t linked to from
any other pages, it’s as good as invisible. Many sites make the critical
mistake of structuring their navigation in ways that are inaccessible to
search engines, hindering their ability to get listed in search results.

Common navigation mistakes that can keep crawlers from seeing all of your site:

 Having a mobile navigation that shows different results than your


desktop navigation
 Any type of navigation where the menu items are not in the HTML,
such as JavaScript-enabled navigations. Google has gotten much
better at crawling and understanding JavaScript, but it’s still not a
perfect process. The more surefire way to ensure something gets
found, understood, and indexed by Google is by putting it in the
HTML (see the sketch after this list).
 Personalization, or showing unique navigation to a specific type of
visitor versus others, could appear to be cloaking to a search engine
crawler
 Forgetting to link to a primary page on your website through your
navigation — remember, links are the paths crawlers follow to new
pages!

This is why it's essential that your website has a clear navigation and
helpful URL folder structures.
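To make the HTML-navigation point above concrete, a crawlable navigation can be as plain as ordinary anchor links in the page markup. The page names below are hypothetical:

<nav>
  <ul>
    <li><a href="/products/">Products</a></li>
    <li><a href="/blog/">Blog</a></li>
    <li><a href="/contact/">Contact</a></li>
  </ul>
</nav>
<!-- Plain <a href> links in the HTML give crawlers a path to follow,
     unlike menu items rendered only by JavaScript. -->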

Do you have clean information architecture?


Information architecture is the practice of organizing and labeling content
on a website to improve efficiency and findability for users. The best
information architecture is intuitive, meaning that users shouldn't have to
think very hard to flow through your website or to find something.

Are you utilizing sitemaps?


A sitemap is just what it sounds like: a list of URLs on your site that
crawlers can use to discover and index your content. One of the easiest
ways to ensure Google is finding your highest priority pages is to create a
file that meets Google's standards and submit it through Google Search
Console. While submitting a sitemap doesn’t replace the need for good site
navigation, it can certainly help crawlers follow a path to all of your
important pages.

Ensure that you’ve only included URLs that you want indexed by search
engines, and be sure to give crawlers consistent directions. For example,
don’t include a URL in your sitemap if you’ve blocked that URL via
robots.txt or include URLs in your sitemap that are duplicates rather than
the preferred, canonical version (we’ll provide more information on
canonicalization in Chapter 5!).
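For reference, a bare-bones XML sitemap might look like the sketch below; the URLs and dates are placeholders, and real sitemaps generated by your CMS or a plugin may include more fields:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- List only the canonical URLs you want indexed -->
  <url>
    <loc>https://www.yourdomain.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.yourdomain.com/important-page/</loc>
    <lastmod>2020-01-10</lastmod>
  </url>
</urlset>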

Learn more about XML sitemaps 


If your site doesn't have any other sites linking to it, you still might be able
to get it indexed by submitting your XML sitemap in Google Search
Console. There's no guarantee they'll include a submitted URL in their
index, but it's worth a try!

Are crawlers getting errors when they try to access your URLs?
In the process of crawling the URLs on your site, a crawler may encounter
errors. You can go to Google Search Console’s “Crawl Errors” report to
detect URLs on which this might be happening - this report will show you
server errors and not found errors. Server log files can also show you this,
as well as a treasure trove of other information such as crawl frequency,
but because accessing and dissecting server log files is a more advanced
tactic, we won’t discuss it at length in the Beginner’s Guide, although you
can learn more about it here.
Before you can do anything meaningful with the crawl error report, it’s
important to understand server errors and "not found" errors.

4xx Codes: When search engine crawlers can’t access your content due to a client
error
4xx errors are client errors, meaning the requested URL contains bad
syntax or cannot be fulfilled. One of the most common 4xx errors is the
“404 – not found” error. These might occur because of a URL typo, deleted
page, or broken redirect, just to name a few examples. When search
engines hit a 404, they can’t access the URL. When users hit a 404, they
can get frustrated and leave.

5xx Codes: When search engine crawlers can’t access your content due to a server
error
5xx errors are server errors, meaning the server the web page is located on
failed to fulfill the searcher or search engine’s request to access the page.
In Google Search Console’s “Crawl Error” report, there is a tab dedicated to
these errors. These typically happen because the request for the URL
timed out, so Googlebot abandoned the request. View Google’s
documentation to learn more about fixing server connectivity issues.
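If you'd rather spot-check a handful of URLs yourself, a small script can report their status codes. This is just a sketch using Python's requests library with placeholder URLs; it's no substitute for Search Console or server log analysis:

import requests

# Hypothetical URLs to check; replace with pages from your own site.
urls = [
    "https://www.yourdomain.com/",
    "https://www.yourdomain.com/old-page/",
]

for url in urls:
    # HEAD keeps the check lightweight; allow_redirects=False exposes 301s/302s.
    response = requests.head(url, allow_redirects=False, timeout=10)
    print(url, response.status_code)  # e.g. 200 (OK), 301, 404, or a 5xx error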

Thankfully, there is a way to tell both searchers and search engines that
your page has moved — the 301 (permanent) redirect.

Create custom 404 pages!


Customize your 404 page by adding in links to important pages on your
site, a site search feature, and even contact information. This should make
it less likely that visitors will bounce off your site when they hit a 404.

Learn more about custom 404 pages 

Say you move a page from example.com/young-


dogs/ to example.com/puppies/. Search engines and users need a bridge
to cross from the old URL to the new. That bridge is a 301 redirect.

When you do implement a 301:

Link Equity: Transfers link equity from the page’s old location to the new URL.
Indexing: Helps Google find and index the new version of the page.
User Experience: Ensures users find the page they’re looking for.

When you don’t implement a 301:

Link Equity: Without a 301, the authority from the previous URL is not passed on to the new version of the URL.
Indexing: The presence of 404 errors on your site alone doesn't harm search performance, but letting ranking / trafficked pages 404 can result in them falling out of the index, with rankings and traffic going with them — yikes!
User Experience: Allowing your visitors to click on dead links will take them to error pages instead of the intended page, which can be frustrating.

The 301 status code itself means that the page has permanently moved to
a new location, so avoid redirecting URLs to irrelevant pages — URLs
where the old URL’s content doesn’t actually live. If a page is ranking for a
query and you 301 it to a URL with different content, it might drop in rank
position because the content that made it relevant to that particular query
isn't there anymore. 301s are powerful — move URLs responsibly!

You also have the option of 302 redirecting a page, but this should be
reserved for temporary moves and in cases where passing link equity isn’t
as big of a concern. 302s are kind of like a road detour. You're temporarily
siphoning traffic through a certain route, but it won't be like that forever.
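How you set up a redirect depends on your server. As one hedged example, on an Apache server with mod_alias enabled you could declare permanent and temporary redirects in an .htaccess file like this (the /spring-sale/ path is hypothetical; the puppies example mirrors the one above):

# In .htaccess on an Apache server (requires mod_alias).
# Permanent move: /young-dogs/ now lives at /puppies/
Redirect 301 /young-dogs/ https://www.example.com/puppies/

# Temporary detour: a hypothetical /spring-sale/ page parked elsewhere for now
Redirect 302 /spring-sale/ https://www.example.com/sale/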

Watch out for redirect chains!


It can be difficult for Googlebot to reach your page if it has to go through
multiple redirects. Google calls these “redirect chains” and they
recommend limiting them as much as possible. If you redirect
example.com/1 to example.com/2, then later decide to redirect it to
example.com/3, it’s best to eliminate the middleman and simply redirect
example.com/1 to example.com/3.

Learn more about redirect chains 


Once you’ve ensured your site is optimized for crawlability, the next order
of business is to make sure it can be indexed.

Indexing: How do search engines interpret and store


your pages?
Once you’ve ensured your site has been crawled, the next order of
business is to make sure it can be indexed. That’s right — just because
your site can be discovered and crawled by a search engine doesn’t
necessarily mean that it will be stored in their index. In the previous section
on crawling, we discussed how search engines discover your web pages.
The index is where your discovered pages are stored. After a crawler finds
a page, the search engine renders it just like a browser would. In the
process of doing so, the search engine analyzes that page's contents. All of
that information is stored in its index.

Read on to learn about how indexing works and how you can make sure
your site makes it into this all-important database.

Can I see how a Googlebot crawler sees my pages?


Yes, the cached version of your page will reflect a snapshot of the last time
Googlebot crawled it.

Google crawls and caches web pages at different frequencies. More


established, well-known sites that post frequently
like https://www.nytimes.com will be crawled more frequently than the
much-less-famous website for Roger the Mozbot’s side
hustle, http://www.rogerlovescupcakes.com (if only it were real…)

You can view what your cached version of a page looks like by clicking the
drop-down arrow next to the URL in the SERP and choosing "Cached":
You can also view the text-only version of your site to determine if your
important content is being crawled and cached effectively.

Are pages ever removed from the index?


Yes, pages can be removed from the index! Some of the main reasons why
a URL might be removed include:

 The URL is returning a "not found" error (4XX) or server error (5XX) –
This could be accidental (the page was moved and a 301 redirect
was not set up) or intentional (the page was deleted and 404ed in
order to get it removed from the index)
 The URL had a noindex meta tag added – This tag can be added by
site owners to instruct the search engine to omit the page from its
index.
 The URL has been manually penalized for violating the search
engine’s Webmaster Guidelines and, as a result, was removed from
the index.
 The URL has been blocked from crawling with the addition of a
password required before visitors can access the page.

If you believe that a page on your website that was previously in Google’s
index is no longer showing up, you can use the URL Inspection tool to learn
the status of the page, or use Fetch as Google which has a "Request
Indexing" feature to submit individual URLs to the index. (Bonus: GSC’s
“fetch” tool also has a “render” option that allows you to see if there are any
issues with how Google is interpreting your page).

Tell search engines how to index your site


Robots meta directives
Meta directives (or "meta tags") are instructions you can give to search
engines regarding how you want your web page to be treated.

You can tell search engine crawlers things like "do not index this page in
search results" or "don’t pass any link equity to any on-page links". These
instructions are executed via Robots Meta Tags in the <head> of your
HTML pages (most commonly used) or via the X-Robots-Tag in the HTTP
header.

Robots meta tag


The robots meta tag can be used within the <head> of the HTML of your
webpage. It can exclude all or specific search engines. The following are
the most common meta directives, along with what situations you might
apply them in.
index/noindex tells the engines whether the page should be crawled and
kept in a search engines' index for retrieval. If you opt to use "noindex,"
you’re communicating to crawlers that you want the page excluded from
search results. By default, search engines assume they can index all
pages, so using the "index" value is unnecessary.

 When you might use: You might opt to mark a page as "noindex" if


you’re trying to trim thin pages from Google’s index of your site (ex:
user generated profile pages) but you still want them accessible to
visitors.

follow/nofollow tells search engines whether links on the page should be


followed or nofollowed. “Follow” results in bots following the links on your
page and passing link equity through to those URLs. Or, if you elect to
employ "nofollow," the search engines will not follow or pass any link equity
through to the links on the page. By default, all pages are assumed to have
the "follow" attribute.

 When you might use: nofollow is often used together with noindex


when you’re trying to prevent a page from being indexed as well as
prevent the crawler from following links on the page.

noarchive is used to restrict search engines from saving a cached copy of


the page. By default, the engines will maintain visible copies of all pages
they have indexed, accessible to searchers through the cached link in the
search results.

 When you might use: If you run an e-commerce site and your prices
change regularly, you might consider the noarchive tag to prevent
searchers from seeing outdated pricing.

Here’s an example of a meta robots noindex, nofollow tag:

<!DOCTYPE html>
<html>
  <head>
    <meta name="robots" content="noindex, nofollow" />
  </head>
  <body>...</body>
</html>
This example excludes all search engines from indexing the page and from
following any on-page links. If you want to exclude multiple crawlers, like
googlebot and bing for example, it’s okay to use multiple robot exclusion
tags.

Meta directives affect indexing, not crawling


Googlebot needs to crawl your page in order to see its meta directives, so if
you’re trying to prevent crawlers from accessing certain pages, meta
directives are not the way to do it. Robots tags must be crawled to be
respected.
X-Robots-Tag
The x-robots tag is used within the HTTP header of your URL, providing
more flexibility and functionality than meta tags if you want to block search
engines at scale because you can use regular expressions, block non-
HTML files, and apply sitewide noindex tags.

For example, you could easily exclude entire folders or file types (like
moz.com/no-bake/old-recipes-to-noindex):

<Files ~ "\/?no\-bake\/.*">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

The directives used in a robots meta tag can also be used in an X-Robots-Tag.

Or specific file types (like PDFs):

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
For more information on Meta Robot Tags, explore Google’s Robots Meta
Tag Specifications.

WordPress tip:
In Dashboard > Settings > Reading, make sure the "Search Engine
Visibility" box is not checked. This blocks search engines from coming to
your site via your robots.txt file!

Understanding the different ways you can influence crawling and indexing
will help you avoid the common pitfalls that can prevent your important
pages from getting found.

Ranking: How do search engines rank URLs?


How do search engines ensure that when someone types a query into the
search bar, they get relevant results in return? That process is known as
ranking, or the ordering of search results by most relevant to least relevant
to a particular query.

To determine relevance, search engines use algorithms, a process or


formula by which stored information is retrieved and ordered in meaningful
ways. These algorithms have gone through many changes over the years
in order to improve the quality of search results. Google, for example,
makes algorithm adjustments every day — some of these updates are
minor quality tweaks, whereas others are core/broad algorithm updates
deployed to tackle a specific issue, like Penguin to tackle link spam. Check
out our Google Algorithm Change History for a list of both confirmed and
unconfirmed Google updates going back to the year 2000.

Why does the algorithm change so often? Is Google just trying to keep us
on our toes? While Google doesn’t always reveal specifics as to why they
do what they do, we do know that Google’s aim when making algorithm
adjustments is to improve overall search quality. That’s why, in response to
algorithm update questions, Google will answer with something along the
lines of: "We’re making quality updates all the time." This indicates that, if
your site suffered after an algorithm adjustment, compare it
against Google’s Quality Guidelines or Search Quality Rater Guidelines,
both are very telling in terms of what search engines want.

What do search engines want?


Search engines have always wanted the same thing: to provide useful
answers to searchers’ questions in the most helpful formats. If that’s true,
then why does it appear that SEO is different now than in years past?

Think about it in terms of someone learning a new language.

At first, their understanding of the language is very rudimentary — “See


Spot Run.” Over time, their understanding starts to deepen, and they learn
semantics — the meaning behind language and the relationship between
words and phrases. Eventually, with enough practice, the student knows
the language well enough to even understand nuance, and is able to
provide answers to even vague or incomplete questions.

When search engines were just beginning to learn our language, it was
much easier to game the system by using tricks and tactics that actually go
against quality guidelines. Take keyword stuffing, for example. If you
wanted to rank for a particular keyword like “funny jokes,” you might add
the words “funny jokes” a bunch of times onto your page, and make it bold,
in hopes of boosting your ranking for that term:

Welcome to funny jokes! We tell the funniest jokes in the world. Funny


jokes are fun and crazy. Your funny joke awaits. Sit back and read funny
jokes because funny jokes can make you happy and funnier.
Some funny favorite funny jokes.
This tactic made for terrible user experiences, and instead of laughing at
funny jokes, people were bombarded by annoying, hard-to-read text. It may
have worked in the past, but this is never what search engines wanted.
The role links play in SEO
When we talk about links, we could mean two things. Backlinks or "inbound
links" are links from other websites that point to your website, while internal
links are links on your own site that point to your other pages (on the same
site).

Links have historically played a big role in SEO. Very early on, search
engines needed help figuring out which URLs were more trustworthy than
others to help them determine how to rank search results. Calculating the
number of links pointing to any given site helped them do this.

Backlinks work very similarly to real-life WoM (Word-of-Mouth) referrals.


Let’s take a hypothetical coffee shop, Jenny’s Coffee, as an example:

 Referrals from others = good sign of authority


o Example: Many different people have all told you that Jenny’s
Coffee is the best in town
 Referrals from yourself = biased, so not a good sign of authority
o Example: Jenny claims that Jenny’s Coffee is the best in town
 Referrals from irrelevant or low-quality sources = not a good
sign of authority and could even get you flagged for spam
o Example: Jenny paid to have people who have never visited
her coffee shop tell others how good it is.
 No referrals = unclear authority
o Example: Jenny’s Coffee might be good, but you’ve been
unable to find anyone who has an opinion so you can’t be sure.

This is why PageRank was created. PageRank (part of Google's core


algorithm) is a link analysis algorithm named after one of Google's
founders, Larry Page. PageRank estimates the importance of a web page
by measuring the quality and quantity of links pointing to it. The assumption
is that the more relevant, important, and trustworthy a web page is, the
more links it will have earned.
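To make that intuition concrete, here is a deliberately simplified sketch of the idea behind PageRank: importance flows along links, and each page shares its score with the pages it links to. This is not Google's actual implementation, and the tiny link graph below is invented purely for illustration:

# Toy link graph: each page lists the pages it links to (made-up example).
links = {
    "jennys-coffee.com": ["local-news.com"],
    "local-news.com": ["jennys-coffee.com", "city-guide.com"],
    "city-guide.com": ["jennys-coffee.com"],
}

damping = 0.85                                   # chance of following a link vs. jumping anywhere
pages = list(links)
rank = {page: 1 / len(pages) for page in pages}  # start with equal importance

for _ in range(20):                              # repeat until the scores settle
    new_rank = {}
    for page in pages:
        # A page's score comes from the pages linking to it,
        # each passing along a share of its own score.
        incoming = sum(
            rank[other] / len(links[other])
            for other in pages
            if page in links[other]
        )
        new_rank[page] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

print(rank)  # pages with more (and better-linked) inbound links score higher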

The more natural backlinks you have from high-authority (trusted) websites,
the better your odds are to rank higher within search results.

The role content plays in SEO


There would be no point to links if they didn’t direct searchers to something.
That something is content! Content is more than just words; it’s anything
meant to be consumed by searchers — there’s video content, image
content, and of course, text. If search engines are answer machines,
content is the means by which the engines deliver those answers.

Any time someone performs a search, there are thousands of possible


results, so how do search engines decide which pages the searcher is
going to find valuable? A big part of determining where your page will rank
for a given query is how well the content on your page matches the query’s
intent. In other words, does this page match the words that were searched
and help fulfill the task the searcher was trying to accomplish?

Because of this focus on user satisfaction and task accomplishment,


there’s no strict benchmarks on how long your content should be, how
many times it should contain a keyword, or what you put in your header
tags. All those can play a role in how well a page performs in search, but
the focus should be on the users who will be reading the content.

Today, with hundreds or even thousands of ranking signals, the top three
have stayed fairly consistent: links to your website (which serve as third-
party credibility signals), on-page content (quality content that fulfills a
searcher’s intent), and RankBrain.

What is RankBrain?
RankBrain is the machine learning component of Google’s core algorithm.
Machine learning allows a computer program to continually improve its
predictions over time through new observations and training data. In other
words, it’s always learning, and because it’s always learning, search results
should be constantly improving.

For example, if RankBrain notices a lower ranking URL providing a better


result to users than the higher ranking URLs, you can bet that RankBrain
will adjust those results, moving the more relevant result higher and
demoting the lesser relevant pages as a byproduct.

Like most things with the search engine, we don’t know exactly what
comprises RankBrain, but apparently, neither do the folks at Google.

What does this mean for SEOs?


Because Google will continue leveraging RankBrain to promote the most
relevant, helpful content, we need to focus on fulfilling searcher intent more
than ever before. Provide the best possible information and experience for
searchers who might land on your page, and you’ve taken a big first step to
performing well in a RankBrain world.
Engagement metrics: correlation, causation, or both?
With Google rankings, engagement metrics are most likely part correlation
and part causation.

When we say engagement metrics, we mean data that represents how


searchers interact with your site from search results. This includes things
like:

 Clicks (visits from search)


 Time on page (amount of time the visitor spent on a page before
leaving it)
 Bounce rate (the percentage of all website sessions where users
viewed only one page)
 Pogo-sticking (clicking on an organic result and then quickly
returning to the SERP to choose another result)

Many tests, including Moz’s own ranking factor survey, have indicated that
engagement metrics correlate with higher ranking, but causation has been
hotly debated. Are good engagement metrics just indicative of highly
ranked sites? Or are sites ranked highly because they possess good
engagement metrics?

What Google has said


While they’ve never used the term “direct ranking signal,” Google has been
clear that they absolutely use click data to modify the SERP for particular
queries.

According to Google’s former Chief of Search Quality, Udi Manber:

“The ranking itself is affected by the click data. If we discover that, for a
particular query, 80% of people click on #2 and only 10% click on #1, after
a while we figure out probably #2 is the one people want, so we’ll switch it.”
Another comment from former Google engineer Edmond Lau corroborates
this:

“It’s pretty clear that any reasonable search engine would use click data on
their own results to feed back into ranking to improve the quality of search
results. The actual mechanics of how click data is used is often proprietary,
but Google makes it obvious that it uses click data with its patents on
systems like rank-adjusted content items.”
Because Google needs to maintain and improve search quality, it seems
inevitable that engagement metrics are more than correlation, but it would
appear that Google falls short of calling engagement metrics a “ranking
signal” because those metrics are used to improve search quality, and the
rank of individual URLs is just a byproduct of that.

What tests have confirmed


Various tests have confirmed that Google will adjust SERP order in
response to searcher engagement:

 Rand Fishkin’s 2014 test resulted in a #7 result moving up to the #1


spot after getting around 200 people to click on the URL from the
SERP. Interestingly, ranking improvement seemed to be isolated to
the location of the people who visited the link. The rank position
spiked in the US, where many participants were located, whereas it
remained lower on the page in Google Canada, Google Australia,
etc.
 Larry Kim’s comparison of top pages and their average dwell time
pre- and post-RankBrain seemed to indicate that the machine-
learning component of Google’s algorithm demotes the rank position
of pages that people don’t spend as much time on.
 Darren Shaw’s testing has shown user behavior’s impact on local
search and map pack results as well.

Since user engagement metrics are clearly used to adjust the SERPs for
quality, and rank position changes as a byproduct, it’s safe to say that
SEOs should optimize for engagement. Engagement doesn’t change the
objective quality of your web page, but rather your value to searchers
relative to other results for that query. That’s why, after no changes to your
page or its backlinks, it could decline in rankings if searchers’ behavior
indicates they like other pages better.

In terms of ranking web pages, engagement metrics act like a fact-checker.


Objective factors such as links and content first rank the page, then
engagement metrics help Google adjust if they didn’t get it right.

The evolution of search results


Back when search engines lacked a lot of the sophistication they have
today, the term “10 blue links” was coined to describe the flat structure of
the SERP. Any time a search was performed, Google would return a page
with 10 organic results, each in the same format.
In this search landscape, holding the #1 spot was the holy grail of SEO. But
then something happened. Google began adding results in new formats on
their search result pages, called SERP features. Some of these SERP
features include:

 Paid advertisements
 Featured snippets
 People Also Ask boxes
 Local (map) pack
 Knowledge panel
 Sitelinks

And Google is adding new ones all the time. They even experimented with
“zero-result SERPs,” a phenomenon where only one result from the
Knowledge Graph was displayed on the SERP with no results below it
except for an option to “view more results.”
The addition of these features caused some initial panic for two main
reasons. For one, many of these features caused organic results to be
pushed down further on the SERP. Another byproduct is that fewer
searchers are clicking on the organic results since more queries are being
answered on the SERP itself.

So why would Google do this? It all goes back to the search experience.
User behavior indicates that some queries are better satisfied by different
content formats. Notice how the different types of SERP features match the
different types of query intents.

Query intent and the SERP feature it may trigger:

Informational: Featured snippet
Informational with one answer: Knowledge Graph / instant answer
Local: Map pack
Transactional: Shopping

We’ll talk more about intent in Chapter 3, but for now, it’s important to know
that answers can be delivered to searchers in a wide array of formats, and
how you structure your content can impact the format in which it appears in
search.

Localized search
A search engine like Google has its own proprietary index of local business
listings, from which it creates local search results.

If you are performing local SEO work for a business that has a physical
location customers can visit (ex: dentist) or for a business that travels to
visit their customers (ex: plumber), make sure that you claim, verify,
and optimize a free Google My Business Listing.

When it comes to localized search results, Google uses three main factors
to determine ranking:
1. Relevance
2. Distance
3. Prominence

Relevance
Relevance is how well a local business matches what the searcher is
looking for. To ensure that the business is doing everything it can to be
relevant to searchers, make sure the business’ information is thoroughly
and accurately filled out.

Distance
Google uses your geo-location to serve you better local results. Local search
results are extremely sensitive to proximity, which refers to the location of
the searcher and/or the location specified in the query (if the searcher
included one).

Organic search results are sensitive to a searcher's location, though the
effect is seldom as pronounced as in local pack results.

Prominence
With prominence as a factor, Google is looking to reward businesses that
are well-known in the real world. In addition to a business’ offline
prominence, Google also looks to some online factors to determine local
ranking, such as:

Reviews
The number of Google reviews a local business receives, and the
sentiment of those reviews, have a notable impact on its ability to rank in
local results.

Citations
A "business citation" or "business listing" is a web-based reference to a
local business' "NAP" (name, address, phone number) on a localized
platform (Yelp, Acxiom, YP, Infogroup, Localeze, etc.).

Local rankings are influenced by the number and consistency of local
business citations. Google pulls data from a wide variety of sources to
continuously build its local business index. When Google finds multiple
consistent references to a business's name, location, and phone number, it
strengthens Google's "trust" in the validity of that data, which in turn allows
Google to show the business with a higher degree of confidence. Google also
uses information from other sources on the web, such as links and articles.
Organic ranking
SEO best practices also apply to local SEO, since Google also considers a
website’s position in organic search results when determining local ranking.

In the next chapter, you’ll learn on-page best practices that will help Google
and users better understand your content.

[Bonus!] Local engagement


Although not listed by Google as a local ranking factor, the role of
engagement is only going to increase as time goes on. Google continues to
enrich local results by incorporating real-world data like popular times to
visit and average length of visits...

Curious about a certain local business' citation accuracy? Moz has a free
tool that can help out, aptly named Check Listing.

Check listing accuracy 

...and even provides searchers with the ability to ask the business
questions!

Now more than ever before, local results are being influenced by real-world
data: how searchers actually interact with and respond to local businesses,
rather than purely static (and game-able) information like links and
citations.

Since Google wants to deliver the best, most relevant local businesses to
searchers, it makes perfect sense for them to use real-time engagement
metrics to determine quality and relevance.
Chapter 3
KEYWORD RESEARCH
Understand what your audience wants to find.

Now that you’ve learned how to show up in search results, let’s determine
which strategic keywords to target in your website’s content, and how to
craft that content to satisfy both users and search engines.

The power of keyword research lies in better understanding your target


market and how they are searching for your content, services, or products.

Keyword research provides you with specific search data that can help you
answer questions like:

 What are people searching for?


 How many people are searching for it?
 In what format do they want that information?

In this chapter, you'll get tools and strategies for uncovering that
information, as well as learn tactics that'll help you avoid keyword research
foibles and build strong content. Once you uncover how your target
audience is searching for your content, you begin to uncover a whole new
world of strategic SEO!

Before keyword research, ask questions


Before you can help a business grow through search engine optimization,
you first have to understand who they are, who their customers are, and
their goals.

This is where corners are often cut. Too many people bypass this crucial
planning step because keyword research takes time, and why spend the
time when you already know what you want to rank for?

The answer is that what you want to rank for and what your audience
actually wants are often two wildly different things. Focusing on your
audience and then using keyword data to hone those insights will make for
much more successful campaigns than focusing on arbitrary keywords.

Here’s an example. Frankie & Jo’s (a Seattle-based vegan, gluten-free ice


cream shop) has heard about SEO and wants help improving how and how
often they show up in organic search results. In order to help them, you
need to first understand a little more about their customers. To do so, you
might ask questions such as:

 What types of ice cream, desserts, snacks, etc. are people searching
for?
 Who is searching for these terms?
 When are people searching for ice cream, snacks, desserts, etc.?
o Are there seasonality trends throughout the year?
 How are people searching for ice cream?
o What words do they use?
o What questions do they ask?
o Are more searches performed on mobile devices?
 Why are people seeking ice cream?
o Are individuals looking for health-conscious ice cream
specifically or just looking to satisfy a sweet tooth?
 Where are potential customers located — locally, nationally, or
internationally?

And finally — here's the kicker — how can you help provide the best
content about ice cream to cultivate a community and fulfill what all those
people are searching for? Asking these questions is a crucial planning step
that will guide your keyword research and help you craft better content.

What's that word mean?


Remember, if you're stumped by any of the terms used in this chapter, our
SEO glossary is here to help!

See Chapter 3 definitions 

What terms are people searching for?


You may have a way of describing what you do, but how does your
audience search for the product, service, or information you provide?
Answering this question is a crucial first step in the keyword research
process.

Discovering keywords
You likely have a few keywords in mind that you would like to rank for.
These will be things like your products, services, or other topics your
website addresses, and they are great seed keywords for your research, so
start there! You can enter those keywords into a keyword research tool to
discover average monthly search volume and similar keywords. We’ll get
into search volume in greater depth in the next section, but during the
discovery phase, it can help you determine which variations of your
keywords are most popular amongst searchers.

Once you enter your seed keywords into a keyword research tool, you
will begin to discover other keywords, common questions, and topics for
your content that you might have otherwise missed.

Let’s use the example of a florist that specializes in weddings.

Typing “wedding” and “florist” into a keyword research tool, you may
discover highly relevant, highly searched-for related terms such as:

 Wedding bouquets
 Bridal flowers
 Wedding flower shop

In the process of discovering relevant keywords for your content, you will
likely notice that the search volume of those keywords varies greatly. While
you definitely want to target terms that your audience is searching for, in
some cases, it may be more advantageous to target terms with lower
search volume because they're far less competitive.

Since both high- and low-competition keywords can be advantageous for


your website, learning more about search volume can help you prioritize
keywords and pick the ones that will give your website the biggest strategic
advantage.

We've got a tool for that


Moz has a free tool that can help you discover and analyze keywords.
When you're ready to get your hands dirty with keyword research, give it a
try!

Try Keyword Explorer 


Diversify!
It’s important to note that entire websites don’t rank for keywords
— pages do. With big brands, we often see the homepage ranking for
many keywords, but for most websites this isn’t usually the case. Many
websites receive more organic traffic to pages other than the homepage,
which is why it’s so important to diversify your website’s pages by
optimizing each for uniquely valuable keywords.

Learn more about optimizing a single page for multiple keywords 

How often are those terms searched?


Uncovering search volume
The higher the search volume for a given keyword or keyword phrase, the
more work is typically required to achieve higher rankings. This is often
referred to as keyword difficulty and occasionally incorporates SERP
features; for example, if many SERP features (like featured snippets,
knowledge graph, carousels, etc) are clogging up a keyword’s result page,
difficulty will increase. Big brands often take up the top 10 results for high-
volume keywords, so if you’re just starting out on the web and going after
the same keywords, the uphill battle for ranking can take years of effort.

Typically, the higher the search volume, the greater the competition and
effort required to achieve organic ranking success. Go too low, though, and
you risk not drawing any searchers to your site. In many cases, it may be
most advantageous to target highly specific, lower competition search
terms. In SEO, we call those long-tail keywords.

Understanding the long tail


It would be great to rank #1 for the keyword "shoes"... or would it?

It's wonderful to deal with keywords that have 50,000 searches a month, or
even 5,000 searches a month, but in reality, these popular search terms
only make up a fraction of all searches performed on the web. In fact,
keywords with very high search volumes may even indicate ambiguous
intent, and targeting these terms could put you at risk of drawing visitors to
your site whose goals don't match the content your page provides.
Does the searcher want to know the nutritional value of pizza? Order a
pizza? Find a restaurant to take their family? Google doesn’t know, so they
offer these features to help you refine. Targeting “pizza” means that you’re
likely casting too wide a net.

If you're searching for "pizza," Google thinks you may also be interested in
"cheese." They're not wrong...

Was your intent to find a pizza place for lunch? The "Discover more places"
SERP feature has that covered.

The rest of search demand, roughly 75% of all searches, lies in the "chunky middle" and "long tail" of search.

Don’t underestimate these less popular keywords. Long tail keywords with
lower search volume often convert better, because searchers are more
specific and intentional in their searches. For example, a person searching
for "shoes" is probably just browsing. On the other hand, someone
searching for "best price red womens size 7 running shoe" practically has
their wallet out!
Questions are SEO gold!
Discovering what questions people are asking in your space — and adding
those questions and their answers to an FAQ page — can yield incredible
organic traffic for your website.

Learn more about how to target the long tail of search 

Getting strategic with search volume


Now that you’ve discovered relevant search terms for your site and their
corresponding search volumes, you can get even more strategic by looking
at your competitors and figuring out how searches might differ by season or
location.

Keywords by competitor
You’ll likely compile a lot of keywords. How do you know which to tackle
first? It could be a good idea to prioritize high-volume keywords that your
competitors are not currently ranking for. On the flip side, you could also
see which keywords from your list your competitors are already ranking for
and prioritize those. The former is great when you want to take advantage
of your competitors’ missed opportunities, while the latter is an aggressive
strategy that sets you up to compete for keywords your competitors are
already performing well for.

Keywords by season
Knowing about seasonal trends can be advantageous in setting a content
strategy. For example, if you know that “christmas box” starts to spike in
October through December in the United Kingdom, you can prepare
content months in advance and give it a big push around those months.

Keywords by region
You can more strategically target a specific location by narrowing down
your keyword research to specific towns, counties, or states in the Google
Keyword Planner, or evaluate "interest by subregion" in Google Trends.
Geo-specific research can help make your content more relevant to your
target audience. For example, you might find out that in Texas, the
preferred term for a large truck is “big rig,” while in New York, “tractor
trailer” is the preferred terminology.

Which format best suits the searcher's intent?


In Chapter 2, we learned about SERP features. That background is going
to help us understand how searchers want to consume information for a
particular keyword. The format in which Google chooses to display search
results depends on intent, and every query has a unique one. Google
describes these intents in their Quality Rater Guidelines as either “know”
(find information), “do” (accomplish a goal), “website” (find a specific
website), or “visit-in-person” (visit a local business).

While there are thousands of possible search types, let’s take a closer look
at five major categories of intent:

1. Informational queries: The searcher needs information, such as the
name of a band or the height of the Empire State Building.

If you're enjoying this chapter so far, be sure to check out the Keyword
Research episode of our One-Hour Guide to SEO video series!

Watch the video 

2. Navigational queries: The searcher wants to go to a particular place on
the Internet, such as Facebook or the homepage of the NFL.
3. Transactional queries: The searcher wants to do something, such as
buy a plane ticket or listen to a song.

4. Commercial investigation: The searcher wants to compare products
and find the best one for their specific needs.
5. Local queries: The searcher wants to find something locally, such as a
nearby coffee shop, doctor, or music venue.
An important step in the keyword research process is surveying the SERP
landscape for the keyword you want to target in order to get a better gauge
of searcher intent. If you want to know what type of content your target
audience wants, look to the SERPs!

Google has closely evaluated the behavior of trillions of searches in an


attempt to provide the most desired content for each specific keyword
search.

Take the search “dresses,” for example:


From the shopping carousel, you can infer that Google has determined many
people who search for “dresses” want to shop for dresses online.
There is also a Local Pack feature for this keyword, indicating Google’s
desire to help searchers who may be looking for local dress retailers.
If the query is ambiguous, Google will also sometimes include the “refine
by” feature to help searchers specify what they’re looking for further. By
doing so, the search engine can provide results that better help the
searcher accomplish their task.

Google has a wide array of result types it can serve up depending on the
query, so if you’re going to target a keyword, look to the SERP to
understand what type of content you need to create.

Tools for determining the value of a keyword


How much value would a keyword add to your website? These tools can
help you answer that question, so they’d make great additions to your
keyword research arsenal:

 Moz Keyword Explorer - Input a keyword in Keyword Explorer and


get information like monthly search volume and SERP features (like
local packs or featured snippets) that are ranking for that term. The
tool extracts accurate search volume data by using live clickstream
data. To learn more about how we're producing our keyword data,
check out Announcing Keyword Explorer.
o Bonus! Keyword Explorer’s "Difficulty" score can also help you
narrow down your keyword options to the phrases you have the
best shot at ranking for. The higher a keyword’s score, the
more difficult it would be to rank for that term. More about
Keyword Difficulty.
 Google Keyword Planner - Google's AdWords Keyword Planner has
historically been the most common starting point for SEO keyword
research. However, Keyword Planner does restrict search volume
data by lumping keywords together into large search volume range
buckets. To learn more, check out Google Keyword Planner’s Dirty
Secrets.
 Google Trends - Google’s keyword trend tool is great for finding
seasonal keyword fluctuations. For example, “funny halloween
costume ideas” will peak in the weeks before Halloween.
 AnswerThePublic - This free tool populates commonly searched for
questions around a specific keyword. Bonus! You can use this tool in
tandem with another free tool, Keywords Everywhere, to prioritize
ATP’s suggestions by search volume.
 SpyFu Keyword Research Tool - Provides some really neat
competitive keyword data.
Chapter 4
ON-PAGE SEO
Use your research to craft your message.

Now that you know how your target market is searching, it’s time to dive
into on-page SEO, the practice of crafting web pages that answer
searchers’ questions. On-page SEO is multifaceted, and extends beyond
content into other things like schema and meta tags, which we’ll discuss
more at length in the next chapter on technical optimization. For now, put
on your wordsmithing hats — it’s time to create your content!

Creating your content


Applying your keyword research
In the last chapter, we learned methods for discovering how your target
audience is searching for your content. Now, it’s time to put that research
into practice. Here is a simple outline to follow for applying your keyword
research:

1. Survey your keywords and group those with similar topics and intent.
Those groups will be your pages, rather than creating individual
pages for every keyword variation.
2. If you haven’t done so already, evaluate the SERP for each keyword
or group of keywords to determine what type and format your content
should be. Some characteristics of ranking pages to take note of:
1. Are they image- or video-heavy?
2. Is the content long-form or short and concise?
3. Is the content formatted in lists, bullets, or paragraphs?
3. Ask yourself, “What unique value could I offer to make my page
better than the pages that are currently ranking for my keyword?”

On-page SEO allows you to turn your research into content your audience
will love. Just make sure to avoid falling into the trap of low-value tactics
that could hurt more than help!

What's that word mean?


There are bound to be a few stumpers in this hefty chapter on on-page
optimization — be prepared for unknown terms with our SEO glossary!

See Chapter 4 definitions 


Low-value tactics to avoid
Your web content should exist to answer searchers’ questions, to guide
them through your site, and to help them understand your site’s purpose.
Content should not be created for the purpose of ranking highly in search
alone. Ranking is a means to an end, the end being to help searchers. If
we put the cart before the horse, we risk falling into the trap of low-value
content tactics.

Some of these tactics were introduced in Chapter 2, but by way of review,


let’s take a deeper dive into some low-value tactics you should avoid when
crafting search engine optimized content.

Thin content
While it’s common for a website to have unique pages on different topics,
an older content strategy was to create a page for every single iteration of
your keywords in order to rank on page 1 for those highly specific queries.

For example, if you were selling bridal dresses, you might have created
individual pages for bridal gowns, bridal dresses, wedding gowns, and
wedding dresses, even if each page was essentially saying the same thing.
A similar tactic for local businesses was to create multiple pages of content
for each city or region from which they wanted clients. These “geo pages”
often had the same or very similar content, with the location name being
the only unique factor.

Tactics like these clearly weren’t helpful for users, so why did publishers do
it? Google wasn’t always as good as it is today at understanding the
relationships between words and phrases (or semantics). So, if you wanted
to rank on page 1 for “bridal gowns” but you only had a page on “wedding
dresses,” that may not have cut it.

This practice created tons of thin, low-quality content across the web, which
Google addressed specifically with its 2011 update known as Panda. This
algorithm update penalized low-quality pages, which resulted in more
quality pages taking the top spots of the SERPs. Google continues to
iterate on this process of demoting low-quality content and promoting high-
quality content today.

Google is clear that you should have a comprehensive page on a topic


instead of multiple, weaker pages for each variation of a keyword.
Duplicate content
Like it sounds, “duplicate content” refers to content that is shared between
domains or between multiple pages of a single domain. “Scraped” content
goes a step further, and entails the blatant and unauthorized use of content
from other sites. This can include taking content and republishing as-is, or
modifying it slightly before republishing, without adding any original content
or value.

There are plenty of legitimate reasons for internal or cross-domain


duplicate content, so Google encourages the use of a rel=canonical tag to
point to the original version of the web content. While you don’t need to
know about this tag just yet, the main thing to note for now is that your
content should be unique in word and in value.
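
As a purely illustrative sketch (the URL is hypothetical), a canonical tag is a
single line placed in the <head> of the duplicate page, pointing to the version
you want treated as the original:

<link rel="canonical" href="https://www.example.com/wedding-dresses/" />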

Debunking the "duplicate content penalty" myth


There is no Google penalty for duplicate content. That is to say, for
example, if you take an article from the Associated Press and post it on
your blog, you won’t get penalized with something like a Manual Action
from Google. Google does, however, filter duplicate versions of content
from their search results. If two or more pieces of content are substantially
similar, Google will choose a canonical (source) URL to display in its
search results and hide the duplicate versions. That’s not a penalty. That’s
Google filtering to show only one version of a piece of content to improve
the searcher’s experience.

Learn more about canonicalization 


Cloaking
A basic tenet of search engine guidelines is to show the same content to
the engine's crawlers that you'd show to a human visitor. This means that
you should never hide text in the HTML code of your website that a normal
visitor can't see.

When this guideline is broken, search engines call it "cloaking" and take
action to prevent these pages from ranking in search results. Cloaking can
be accomplished in any number of ways and for a variety of reasons, both
positive and negative. Below is an example of an instance
where Spotify showed different content to users than to Google.
Users were presented with a login screen in Spotify when searching for the
National Philharmonic orchestra.
Viewing Google's cached version of the page shows the content Spotify
provided to the search engine.

In some cases, Google may let practices that are technically cloaking pass
because they contribute to a positive user experience. For more on the
subject of hidden content and how Google handles it, see our Whiteboard
Friday entitled How Does Google Handle CSS + Javascript "Hidden" Text?

Keyword stuffing
If you’ve ever been told, “You need to include {critical keyword} on this
page X times,” you’ve seen the confusion over keyword usage in action.
Many people mistakenly think that if you just include a keyword within your
page’s content X times, you will automatically rank for it. The truth is,
although Google looks for mentions of keywords and related concepts on
your site’s pages, the page itself has to add value outside of pure keyword
usage. If a page is going to be valuable to users, it won’t sound like it was
written by a robot, so incorporate your keywords and phrases naturally in a
way that is understandable to your readers.

Below is an example of a keyword-stuffed page of content that also uses


another old method: bolding all your targeted keywords. Oy.

An example of a keyword-stuffed paragraph, bolding all the target


keywords.

Auto-generated content
Arguably one of the most offensive forms of low-quality content is the kind
that is auto-generated, or created programmatically with the intent of
manipulating search rankings and not helping users. You may recognize
some auto-generated content by how little it makes sense when read —
they are technically words, but strung together by a program rather than a
human being.

It is worth noting that advancements in machine learning have contributed


to more sophisticated auto-generated content that will only get better over
time. This is likely why in Google’s quality guidelines on automatically
generated content, Google specifically calls out the brand of auto-
generated content that attempts to manipulate search rankings, rather than
any-and-all auto-generated content.

What to do instead: 10x it!


There is no “secret sauce” to ranking in search results. Google ranks pages
highly because it has determined they are the best answers to the
searcher’s questions. In today’s search engine, it’s not enough that your
page isn’t duplicate, spammy, or broken. Your page has to provide value
to searchers and be better than any other page Google is currently serving
as the answer to a particular query. Here’s a simple formula for content
creation:

 Search the keyword(s) you want your page to rank for


 Identify which pages are ranking highly for those keywords
 Determine what qualities those pages possess
 Create content that’s better than that

We like to call this 10x content. If you create a page on a keyword that is


10x better than the pages being shown in search results (for that keyword),
Google will reward you for it, and better yet, you’ll naturally get people
linking to it! Creating 10x content is hard work, but will pay dividends in
organic traffic.

Just remember, there’s no magic number when it comes to words on a


page. What we should be aiming for is whatever sufficiently satisfies user
intent. Some queries can be answered thoroughly and accurately in 300
words while others might require 1,000 words!

A competitor analysis can help!


When you're researching how to 10x your content, performing an in-depth
competitive analysis is your edge. Luckily, we've got another guide devoted
to just that! ;-)

Read the Guide to SEO Competitor Analysis 


Don’t reinvent the wheel!
If you already have content on your website, save yourself time by
evaluating which of those pages are already bringing in good amounts of
organic traffic and converting well. Refurbish that content on different
platforms to help get more visibility to your site. On the other side of the
coin, evaluate what existing content isn’t performing as well and adjust it,
rather than starting from square one with all new content.

Learn more about refurbishing your top content 


NAP: A note for local businesses
If you’re a business that makes in-person contact with your customers, be
sure to include your business name, address, and phone number (NAP)
prominently, accurately, and consistently throughout your site’s content.
This information is often displayed in the footer or header of a local
business website, as well as on any "contact us" pages. You’ll also want to
mark up this information using local business schema. Schema and
structured data are discussed more at length in the “Other optimizations”
section of this chapter.
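
As a rough sketch only (the business details below are invented), local
business schema is commonly added to a page as a JSON-LD block that
repeats the same NAP information shown in your visible content:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Ice Cream Shop",
  "telephone": "+1-206-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave",
    "addressLocality": "Seattle",
    "addressRegion": "WA",
    "postalCode": "98101"
  }
}
</script>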

If you are a multi-location business, it’s best to build unique, optimized


pages for each location. For example, a business that has locations in
Seattle, Tacoma, and Bellevue should consider having a page for each:

example.com/seattle
example.com/tacoma
example.com/bellevue
Each page should be uniquely optimized for that location, so the Seattle
page would have unique content discussing the Seattle location, list the
Seattle NAP, and even testimonials specifically from Seattle customers. If
there are dozens, hundreds, or even thousands of locations, a store locator
widget could be employed to help you scale.

Local vs national vs international


Just remember that not all businesses operate at the local level and
perform what we call “local SEO.” Some businesses want to attract
customers on a national level (ex: the entire United States) and others want
to attract customers from multiple countries (“international SEO”). Take
Moz, for example. Our product (SEO software) is not tied to a specific
location, whereas a coffee shop’s is, since customers have to travel to the
location to get their caffeine fix.

In this scenario, the coffee shop should optimize their website for their
physical location, whereas Moz would target “SEO software” without a
location-specific modifier like “Seattle.”

How you choose to optimize your site depends largely on your audience,
so make sure you have them in mind when crafting your website content.
Hope you still have some energy left after handling the difficult-yet-
rewarding task of putting together a page that is 10x better than your
competitors’ pages, because there are just a few more things needed
before your page is complete! In the next sections, we’ll talk about the other
on-page optimizations your pages need, as well as naming and organizing
your content.

Beyond content: Other optimizations your pages need


Can I just bump up the font size to create paragraph headings?

How can I control what title and description show up for my page in search
results?

After reading this section, you’ll understand other important on-page


elements that help search engines understand the 10x content you just
created, so let’s dive in!

Header tags
Header tags are an HTML element used to designate headings on your
page. The main header tag, called an H1, is typically reserved for the title
of the page. It looks like this:

<h1>Page Title</h1>
There are also sub-headings that go from H2 to H6 tags, although using all
of these on a page is not required. The hierarchy of header tags goes from
H1 to H6 in descending order of importance.

Each page should have a unique H1 that describes the main topic of the
page; this is often automatically created from the title of the page. As the
main descriptive title of the page, the H1 should contain that page’s primary
keyword or phrase. You should avoid using header tags to mark up non-
heading elements, such as navigational buttons and phone numbers. Use
header tags to introduce what the following content will discuss.

Take this page about touring Copenhagen, for example:

<h1>Copenhagen Travel Guide</h1>
  <h2>Copenhagen by the Seasons</h2>
    <h3>Visiting in Winter</h3>
    <h3>Visiting in Spring</h3>
The main topic of the page is introduced in the main <h1> heading, and
each additional heading is used to introduce a new sub-topic. In this
example, the <h2> is more specific than the <h1>, and the <h3> tags are
more specific than the <h2>. This is just an example of a structure you
could use.
Although what you choose to put in your header tags can be used by
search engines to evaluate and rank your page, it’s important to avoid
inflating their importance. Header tags are one among many on-page SEO
factors, and typically would not move the needle like quality backlinks and
content would, so focus on your site visitors when crafting your headings.

Internal links
In Chapter 2, we discussed the importance of having a crawlable website.
Part of a website’s crawlability lies in its internal linking structure. When you
link to other pages on your website, you ensure that search engine
crawlers can find all your site’s pages, you pass link equity (ranking power)
to other pages on your site, and you help visitors navigate your site.

The importance of internal linking is well established, but there can be


confusion over how this looks in practice.

Link accessibility
Links that require a click (like a navigation drop-down to view) are often
hidden from search engine crawlers, so if the only links to internal pages on
your website are through these types of links, you may have trouble getting
those pages indexed. Opt instead for links that are directly accessible on
the page.
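
To illustrate (the markup here is hypothetical), a crawler can follow a
standard anchor, but it may never discover a page whose only link is created
by a script-driven click:

<!-- Easily crawled: a plain anchor with a real href -->
<a href="/services/wedding-bouquets/">Wedding bouquets</a>

<!-- Risky: the destination URL only exists inside a script, so crawlers may miss it -->
<span onclick="window.location='/services/wedding-bouquets/'">Wedding bouquets</span>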

Anchor text
Anchor text is the text with which you link to pages. Below, you can see an
example of what a hyperlink without anchor text and a hyperlink with
anchor text would look like in the HTML.

<a href="http://www.example.com/"></a><a
href="http://www.example.com/" title="Keyword Text">Keyword Text</a>
On live view, that would look like this:

http://www.example.com/
Keyword Text
The anchor text sends signals to search engines regarding the content of
the destination page. For example, if I link to a page on my site using the
anchor text “learn SEO,” that’s a good indicator to search engines that the
targeted page is one at which people can learn about SEO. Be careful not
to overdo it, though. Too many internal links using the same, keyword-
stuffed anchor text can appear to search engines that you’re trying to
manipulate a page’s ranking. It’s best to make anchor text natural rather
than formulaic.
Link volume
In Google’s General Webmaster Guidelines, they say to “limit the number
of links on a page to a reasonable number (a few thousand at most).” This
is part of Google’s technical guidelines, rather than the quality guideline
section, so having too many internal links isn’t something that on its own is
going to get you penalized, but it does affect how Google finds and
evaluates your pages.

The more links on a page, the less equity each link can pass to its
destination page. A page only has so much equity to go around.

So it’s safe to say that you should only link when you mean it! You
can learn more about link equity from our SEO Learning Center.

Aside from passing authority between pages, a link is also a way to help
users navigate to other pages on your site. This is a case where doing
what’s best for search engines is also doing what’s best for searchers. Too
many links not only dilute the authority of each link, but they can also be
unhelpful and overwhelming. Consider how a searcher might feel landing
on a page that looks like this:

Welcome to our gardening website! We have many articles on gardening,
how to garden, and helpful tips on herbs, fruits, vegetables, perennials,
and annuals. Learn more about gardening from our gardening blog.
Whew! Not only is that a lot of links to process, but it also reads pretty
unnaturally and doesn’t contain much substance (which could be
considered “thin content” by Google). Focus on quality and helping your
users navigate your site, and you likely won’t have to worry about too many
links.

Redirection
Removing and renaming pages is a common practice, but in the event that
you do move a page, make sure to update the links to that old URL! At the
very least, you should make sure to redirect the URL to its new location,
but if possible, update all internal links to that URL at the source so that
users and crawlers don’t have to pass through redirects to arrive at the
destination page. If you choose to redirect only, be careful to avoid redirect
chains that are too long (Google says, "Avoid chaining redirects... keep the
number of redirects in the chain low, ideally no more than 3 and fewer than
5.")

Example of a redirect chain:


example.com/location1 (original location of content) →
example.com/location2 →
example.com/location3 (current location of content)
Better:

example.com/location1 → example.com/location3

Image optimization
Images are the biggest culprits of slow web pages! The best way to solve
for this is to compress your images. There is no one-size-fits-all approach
to image compression, so test various options, like "save for web," image
sizing, and compression tools such as Optimizilla or ImageOptim for Mac
(or Windows alternatives), and evaluate what works best for your site.

Another way to help optimize your images (and improve your page speed)
is by choosing the right image format.

How to choose which image format to use:

 If your image requires animation, use a GIF.


 If you don’t need to preserve high image resolution, use JPEG (and
test out different compression settings).
 If you do need to preserve high image resolution, use PNG.
o If your image has a lot of colors, use PNG-24.
o If your image doesn’t have a lot of colors, use PNG-8.

Learn more about choosing image formats in Google's image optimization


guide.

There are different ways to keep visitors on a semi-slow loading page by


using images that produce a colored box or a very blurry/low resolution
version while rendering to help visitors feel as if things are loading faster.
We'll discuss these options in more detail in Chapter 5.

Don’t forget about thumbnails!


Thumbnails (especially for e-commerce sites) can be a huge page speed
slow down. Optimize thumbnails properly to avoid slow pages and to help
retain more qualified visitors.

Alt text
Alt text (alternative text) within images is a principle of web accessibility,
and is used to describe images to the visually impaired via screen readers.
It’s important to have alt text descriptions so that any visually impaired
person can understand what the pictures on your website depict.

Search engine bots also crawl alt text to better understand your images,
which gives you the added benefit of providing better image context to
search engines. Just ensure that your alt descriptions reads naturally for
people, and avoid stuffing keywords for search engines.

Bad:

<img src="grumpycat.gif" alt="grumpy cat, cat is grumpy, grumpy cat


gif">
Good:

<img src="grumpycat.gif" alt="A black cat looking very grumpy at a


big spotted dog">
Web accessibility and SEO
There's a great deal of intersection between web accessibility and SEO.
Much of our work can help or harm online experiences for non-sighted
Internet users. Be sure to check out our blog post series on this important
topic — we have the opportunity to help make the web a better place for
everyone!

Learn more about accessibility and SEO 


Submit an image sitemap
To ensure that Google can crawl and index your images, submit an image
sitemap in your Google Search Console account. This helps Google
discover images they may have otherwise missed.
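
As a minimal sketch (all URLs are placeholders), an image sitemap is a
standard XML sitemap that uses Google's image extension to list the images
on each page:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/desserts/chocolate-pie</loc>
    <image:image>
      <image:loc>https://example.com/images/chocolate-pie.jpg</image:loc>
    </image:image>
  </url>
</urlset>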

Formatting for readability & featured snippets


Your page could contain the best content ever written on a subject, but if
it’s formatted improperly, your audience might never read it! While we can
never guarantee that visitors will read our content, there are some
principles that can promote readability, including:

 Text size and color - Avoid fonts that are too tiny. Google
recommends 16-point font and above to minimize the need for
“pinching and zooming” on mobile. The text color in relation to the
page’s background color should also promote readability. Additional
information on text can be found in the website accessibility
guidelines and via Google’s web accessibility fundamentals.
 Headings - Breaking up your content with helpful headings can help
readers navigate the page. This is especially useful on long pages
where a reader might be looking only for information from a particular
section.
 Bullet points - Great for lists, bullet points can help readers skim
and more quickly find the information they need.
 Paragraph breaks - Avoiding walls of text can help prevent page
abandonment and encourage site visitors to read more of your page.
 Supporting media - When appropriate, include images, videos, and
widgets that would complement your content.
 Bold and italics for emphasis - Putting words in bold or italics can
add emphasis, so they should be the exception, not the rule.
Appropriate use of these formatting options can call out important
points you want to communicate.

Formatting can also affect your page’s ability to show up in featured


snippets, those “position 0” results that appear above the rest of organic
results.

An example of a featured snippet, appearing in "position 0" at the top of a


SERP.

There is no special code that you can add to your page to show up here,
nor can you pay for this placement, but taking note of the query intent can
help you better structure your content for featured snippets. For example, if
you’re trying to rank for “cake vs. pie,” it might make sense to include a
table in your content, with the benefits of cake in one column and the
benefits of pie in the other. Or if you’re trying to rank for “best restaurants to
try in Portland,” that could indicate Google wants a list, so formatting your
content in bullets could help.
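
To illustrate the "cake vs. pie" idea (the comparison rows are invented), that
content could be marked up as a simple HTML table, a structure that is easy
for both readers and search engines to parse:

<table>
  <tr><th>Cake</th><th>Pie</th></tr>
  <tr><td>Soft, spongy texture</td><td>Flaky, buttery crust</td></tr>
  <tr><td>Easy to decorate for events</td><td>Showcases seasonal fruit fillings</td></tr>
</table>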

Title tags
A page’s title tag is a descriptive, HTML element that specifies the title of a
particular web page. They are nested within the head tag of each page and
look like this:

<head>
  <title>Example Title</title>
</head>


Each page on your website should have a unique, descriptive title tag.
What you input into your title tag field will show up in search results,
although in some cases Google may adjust how your title tag appears there.

Title tag tips for better traffic


While there are no shortcuts in SEO, there are absolutely a ton of tips and
tricks that can boost a page title's clickability and attractiveness in the
SERPs. Check out our Whiteboard Friday on the subject!

Watch the video 

It can also show up in web browsers…

Or when you share the link to your page on certain external websites…
Your title tag has a big role to play in people’s first impression of your
website, and it’s an incredibly effective tool for drawing searchers to your
page over any other result on the SERP. The more compelling your title
tag, combined with high rankings in search results, the more visitors you’ll
attract to your website. This underscores that SEO is not only about search
engines, but rather the entire user experience.

What makes an effective title tag?

 Keyword usage: Having your target keyword in the title can help


both users and search engines understand what your page is about.
Also, the closer to the front of the title tag your keywords are, the
more likely a user will be to read them (and hopefully click) and the
more helpful they can be for ranking.
 Length: On average, search engines display the first 50–60
characters (~512 pixels) of a title tag in search results. If your title tag
exceeds the characters allowed on that SERP, an ellipsis "..." will
appear where the title was cut off. While sticking to 50–60 characters
is safe, never sacrifice quality for strict character counts. If you can’t
get your title tag down to 60 characters without harming its
readability, go longer (within reason).
 Branding: At Moz, we love to end our title tags with a brand name
mention because it promotes brand awareness and creates a higher
click-through rate among people who are familiar with Moz.
Sometimes it makes sense to place your brand at the beginning of
the title tag, such as on your homepage, but be mindful of what
you're trying to rank for and place those words closer toward the
beginning of your title tag.
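
Putting those three ideas together, a hypothetical title tag for the wedding
florist example from Chapter 3 might look like this (business name invented,
and the whole tag stays within the 50–60 character range):

<title>Wedding Bouquets and Bridal Flowers | Example Florist</title>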

Meta descriptions
Like title tags, meta descriptions are HTML elements that describe the
contents of the page that they’re on. They are also nested in the head tag,
and look like this:

<head>
  <meta name="description" content="Description of page here."/>
</head>
What you input into the description field is typically what shows up in search results.

For example, if you search “find backlinks,” Google may display a snippet
pulled from the page itself instead of the page’s actual meta description,
because it deems that text more relevant to the specific search. This kind of
substitution often improves descriptions for unique searches. However, don’t
let it deter you from writing a default page meta description — they're still
extremely valuable.
What makes an effective meta description?
The qualities that make an effective title tag also apply to effective meta
descriptions. Although Google says that meta descriptions are not a
ranking factor, like title tags, they are incredibly important for click-through
rate.

 Relevance: Meta descriptions should be highly relevant to the


content of your page, so it should summarize your key concept in
some form. You should give the searcher enough information to
know they've found a page relevant enough to answer their question,
without giving away so much information that it eliminates the need to
click through to your web page.
 Length: Search engines tend to truncate meta descriptions to around
155 characters. It’s best to write meta descriptions between 150–300
characters in length. On some SERPs, you’ll notice that Google gives
much more real estate to the descriptions of some pages. This
usually happens for web pages ranking right below a featured
snippet.

URL structure: Naming and organizing your pages


URL stands for Uniform Resource Locator. URLs are the locations or
addresses for individual pieces of content on the web. Like title tags and
meta descriptions, search engines display URLs on the SERPs, so URL
naming and format can impact click-through rates. Not only do searchers
use them to make decisions about which web pages to click on, but URLs
are also used by search engines in evaluating and ranking pages.

Clear page naming


Search engines require unique URLs for each page on your website so
they can display your pages in search results, but clear URL structure and
naming is also helpful for people who are trying to understand what a
specific URL is about. For example, which URL is clearer?

example.com/desserts/chocolate-pie
or

example.com/asdf/453?=recipe-23432-1123
Searchers are more likely to click on URLs that reinforce and clarify what
information is contained on that page, and less likely to click on URLs that
confuse them.

The URL is a minor ranking signal, but you cannot expect to rank on the
basis of the words in your domain/page names alone (see Google EMD
update). When naming your pages or selecting a domain name, have your
audience in mind first.
Page organization
If you discuss multiple topics on your website, you should also make sure
to avoid nesting pages under irrelevant folders. For example:

example.com/commercial-litigation/alimony
It would have been better for this fictional multi-practice law firm website to
nest alimony under “/family-law/” than to host it under the irrelevant
"/commercial-litigation/" section of the website.

The folders in which you locate your content can also send signals about
the type, not just the topic, of your content. For example, dated URLs can
indicate time-sensitive content. While appropriate for news-based websites,
dated URLs for evergreen content can actually turn searchers away
because the information seems outdated. For example:

example.com/2015/april/what-is-seo/
vs.

example.com/what-is-seo/
Since the topic “What is SEO?” isn’t confined to a specific date, it’s best to
host on a non-dated URL structure or else risk your information appearing
stale.

As you can see, what you name your pages, and in what folders you
choose to organize your pages, is an important way to clarify the topic of
your page to users and search engines.

URL length
While it is not necessary to have a completely flat URL structure, many
click-through rate studies indicate that, when given the choice between a
longer URL and a shorter URL, searchers often prefer shorter URLs. Like title tags
and meta descriptions that are too long, too-long URLs will also be cut off
with an ellipsis. Just remember, having a descriptive URL is just as
important, so don’t cut down on URL length if it means sacrificing the URL's
descriptiveness.

example.com/services/plumbing/plumbing-repair/toilets/leaks/
vs.

example.com/plumbing-repair/toilets/
Minimizing length, both by including fewer words in your page names and
removing unnecessary subfolders, makes your URLs easier to copy and
paste, as well as more clickable.
Keywords in URL
If your page is targeting a specific term or phrase, make sure to include it in
the URL. However, don't go overboard by trying to stuff in multiple
keywords for purely SEO purposes. It’s also important to watch out for
repeat keywords in different subfolders. For example, you may have
naturally incorporated a keyword into a page name, but if located within
other folders that are also optimized with that keyword, the URL could
begin to appear keyword-stuffed.

Example:

example.com/seattle-dentist/dental-services/dental-crowns/
Keyword overuse in URLs can appear spammy and manipulative. If you
aren’t sure whether your keyword usage is too aggressive, just read your
URL through the eyes of a searcher and ask, “Does this look natural?
Would I click on this?”

Static URLs
The best URLs are those that can easily be read by humans, so you should
avoid the overuse of parameters, numbers, and symbols. Using
technologies like mod_rewrite for Apache and ISAPI_rewrite for Microsoft,
you can easily transform dynamic URLs like this:

http://moz.com/blog?id=123
into a more readable static version like this:

https://moz.com/google-algorithm-change

Hyphens for word separation


Not all web applications accurately interpret separators like underscores
(_), plus signs (+), or spaces (%20). Search engines also do not
understand how to separate words in URLs when they run together without
a separator (example.com/optimizefeaturedsnippets/). Instead, use the
hyphen character (-) to separate words in a URL.

Case sensitivity
Sites should avoid case-sensitive URLs. Instead of
example.com/desserts/Chocolate-Pie-Recipe it would be better to use
example.com/desserts/chocolate-pie-recipe. If the site you're working on
has lots of mixed-case URLs indexed, don't fret — your developers can
help. Ask them about adding a rewrite formula to something known as
the .htaccess file to automatically make any uppercase URLs lowercase.
Geographic modifiers in URLs
Some local business owners omit geographic terms that describe their
physical location or service area because they believe that search engines
can figure this out on their own. On the contrary, it’s vital that local business
websites’ content, URLs, and other on-site assets make specific mention of
city names, neighborhood names, and other regional descriptors. Let both
consumers and search engines know exactly where you are and where you
serve, rather than relying on your physical location alone.

Protocols: HTTP vs HTTPS


A protocol is that “http” or “https” preceding your domain name. Google
recommends that all websites have a secure protocol (the “s” in “https”
stands for “secure”). To ensure that your URLs are using the https://
protocol instead of http://, you must obtain an SSL (Secure Sockets Layer)
certificate. SSL certificates are used to encrypt data. They ensure that any
data passed between the web server and browser of the searcher remains
private. As of July 2018, Google Chrome displays “not secure” for all HTTP
sites, which could cause these sites to appear untrustworthy to visitors and
result in them leaving the site.

Try HTTP/2 for improved efficiency


HTTP/2 is an improvement to the traditional HTTP network protocol and
makes sending your resources from your server to your browser more
efficient. This update improves the "fetch and load" part of your critical
rendering path (discussed more at length in Chapter 5), helps increase the
security of your website, and can help improve performance. You must be
on HTTPS to migrate to HTTP/2.

If you’ve made it this far, congratulations on surpassing the halfway point of


the Beginner’s Guide to SEO! So far, we’ve learned how search engines
crawl, index, and rank content, how to find keyword opportunities to target,
and now, you know the on-page SEO strategies that can help your pages
get found. Next, buckle up, because we’ll be diving into the exciting world
of technical SEO in Chapter 5!
Chapter 5
TECHNICAL SEO
Basic technical knowledge will help you optimize your site for search
engines and establish credibility with developers.

Now that you’ve crafted valuable content on the foundation of solid


keyword research, it’s important to make sure it’s not only readable by
humans, but by search engines too!

You don’t need to have a deep technical understanding of these concepts,


but it is important to grasp what these technical assets do so that you can
speak intelligently about them with developers. Speaking your developers’
language is important because you'll probably need them to carry out some
of your optimizations. They're unlikely to prioritize your asks if they can’t
understand your request or see its importance. When you establish
credibility and trust with your devs, you can begin to tear away the red tape
that often blocks crucial work from getting done.

What's that word mean?


Make sure you're ready to tackle all the new ideas in this chapter by having
the SEO glossary handy!

See Chapter 5 definitions 


SEOs need cross-team support to be effective
It’s vital to have a healthy relationship with your developers so that you can
successfully tackle SEO challenges from both sides. Don’t wait until a
technical issue causes negative SEO ramifications to involve a developer.
Instead, join forces for the planning stage with the goal of avoiding the
issues altogether. If you don’t, it can cost you time and money later.

Beyond cross-team support, understanding technical optimization for SEO


is essential if you want to ensure that your web pages are structured for
both humans and crawlers. To that end, we’ve divided this chapter into
three sections:

1. How websites work


2. How search engines understand websites
3. How users interact with websites

Since the technical structure of a site can have a massive impact on its
performance, it’s crucial for everyone to understand these principles. It
might also be a good idea to share this part of the guide with your
programmers, content writers, and designers so that all parties involved in
a site's construction are on the same page.

How websites work


If search engine optimization is the process of optimizing a website for
search, SEOs need at least a basic understanding of the thing they're
optimizing!

Below, we outline the website’s journey from domain name purchase all the
way to its fully rendered state in a browser. An important component of the
website’s journey is the critical rendering path, which is the process of a
browser turning a website’s code into a viewable page.

Knowing this about websites is important for SEOs to understand for a few
reasons:

 The steps in this webpage assembly process can affect page load
times, and speed is not only important for keeping users on your site,
but it’s also one of Google’s ranking factors.
 Google renders certain resources, like JavaScript, on a "second
pass." Google will look at the page without JavaScript first, then a few
days to a few weeks later, it will render JavaScript, meaning SEO-
critical elements that are added to the page using JavaScript might
not get indexed.

Imagine that the website loading process is your commute to work. You get
ready at home, gather your things to bring to the office, and then take the
fastest route from your home to your work. It would be silly to put on just
one of your shoes, take a longer route to work, drop your things off at the
office, then immediately return home to get your other shoe, right? That’s
sort of what inefficient websites do. This chapter will teach you how to
diagnose where your website might be inefficient, what you can do to
streamline, and the positive ramifications on your rankings and user
experience that can result from that streamlining.

Before a website can be accessed, it needs to be set up!

1. Domain name is purchased. Domain names like moz.com are purchased from a domain name registrar such as GoDaddy or
HostGator. These registrars are just organizations that manage the
reservations of domain names.
2. Domain name is linked to IP address. The Internet doesn’t
understand names like “moz.com” as website addresses without the
help of domain name servers (DNS). The Internet uses a series of
numbers called an Internet protocol (IP) address (ex: 127.0.0.1), but
we want to use names like moz.com because they’re easier for
humans to remember. We need to use a DNS to link those human-
readable names with machine-readable numbers.

How a website gets from server to browser

1. User requests domain. Now that the name is linked to an IP address via DNS, people can request a website by typing the domain
name directly into their browser or by clicking on a link to the website.
2. Browser makes requests. That request for a web page prompts the
browser to make a DNS lookup request to convert the domain name
to its IP address. The browser then makes a request to the server for
the code your web page is constructed with, such as HTML, CSS,
and JavaScript.
3. Server sends resources. Once the server receives the request for
the website, it sends the website files to be assembled in the
searcher’s browser.
4. Browser assembles the web page. The browser has now received
the resources from the server, but it still needs to put it all together
and render the web page so that the user can see it in their browser.
As the browser parses and organizes all the web page’s resources,
it’s creating a Document Object Model (DOM). The DOM is what you
can see when you right click and “inspect element” on a web page in
your Chrome browser (learn how to inspect elements in other
browsers).
5. Browser makes final requests. The browser will only show a web
page after all the page’s necessary code is downloaded, parsed, and
executed, so at this point, if the browser needs any additional code in
order to show your website, it will make an additional request from
your server.
6. Website appears in browser. Whew! After all that, your website has
now been transformed (rendered) from code to what you see in your
browser.

Talk to your developers about async!


Something you can bring up with your developers is shortening the critical
rendering path by setting scripts to "async" when they’re not needed to
render content above the fold, which can make your web pages load faster.
Async tells the DOM that it can continue to be assembled while the browser
is fetching the scripts needed to display your web page. If the DOM has to
pause assembly every time the browser fetches a script (called “render-
blocking scripts”), it can substantially slow down your page load. It would
be like going out to eat with your friends and having to pause the
conversation every time one of you went up to the counter to order, only
resuming once they got back. With async, you and your friends can
continue to chat even when one of you is ordering. You might also want to
bring up other optimizations that devs can implement to shorten the critical
rendering path, such as removing unnecessary scripts entirely, like old
tracking scripts.
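To make this concrete, here's a minimal sketch of the difference in your HTML (the file names are made up for the example):

<!-- Render-blocking: the browser pauses parsing the page to fetch and run this script -->
<script src="/js/old-tracking.js"></script>

<!-- Async: the browser keeps parsing the page while this script downloads -->
<script src="/js/analytics.js" async></script>

The async attribute is best suited to scripts that don't need to run in a particular order, like analytics; scripts that depend on one another may need a different approach, such as defer.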

Now that you know how a website appears in a browser, we’re going to
focus on what a website is made of — in other words, the code
(programming languages) used to construct those web pages.

The three most common are:

 HTML – What a website says (titles, body content, etc.)


 CSS – How a website looks (color, fonts, etc.)
 JavaScript – How it behaves (interactive, dynamic, etc.)

This image was inspired by Alexis Sanders' fantastic example in JavaScript


& SEO: Making Your Bot Experience As Good As Your User Experience

HTML: What a website says


HTML stands for hypertext markup language, and it serves as the
backbone of a website. Elements like headings, paragraphs, lists, and
content are all defined in the HTML.

Here’s an example of a webpage and what its corresponding HTML looks like:

This is a screenshot from W3schools.com, our favorite place to learn and practice HTML, CSS, and JavaScript.
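For reference, a bare-bones HTML document looks something like this (a simplified sketch, not the exact page shown in the screenshot):

<!DOCTYPE html>
<html>
  <head>
    <title>My Page Title</title>
  </head>
  <body>
    <h1>My First Heading</h1>
    <p>My first paragraph of body content.</p>
    <a href="https://example.com">A link to another page</a>
  </body>
</html>

The title, heading, paragraph, and link are all defined by HTML tags, and that's exactly the kind of content search engines crawl.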
HTML is important for SEOs to know because it’s what lives “under the
hood” of any page they create or work on. While your CMS likely doesn’t
require you to write your pages in HTML (ex: selecting “hyperlink” will allow
you to create a link without you having to type in “a href=”), it is what you’re
modifying every time you do something to a web page such as adding
content, changing the anchor text of internal links, and so on. Google
crawls these HTML elements to determine how relevant your document is
to a particular query. In other words, what’s in your HTML plays a huge role
in how your web page ranks in Google organic search!

CSS: How a website looks


CSS stands for "cascading style sheets," and this is what causes your web
pages to take on certain fonts, colors, and layouts. HTML was created to
describe content, rather than to style it, so when CSS entered the scene, it
was a game-changer. With CSS, web pages could be “beautified” without
requiring manual coding of styles into the HTML of every page — a
cumbersome process, especially for large sites.
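For instance (the file name here is illustrative), a single external stylesheet can be referenced from every page, so one set of rules styles the whole site:

In the <head> of each page:
<link rel="stylesheet" href="/css/styles.css">

Inside /css/styles.css:
h1 { color: #2d2d2d; font-family: Georgia, serif; }
p { line-height: 1.6; }

Change the stylesheet once and every page that references it updates, which is exactly the convenience described above.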

It wasn’t until 2014 that Google’s indexing system began to render web pages more like an actual browser, as opposed to a text-only browser. A
black-hat SEO practice that tried to capitalize on Google’s older indexing
system was hiding text and links via CSS for the purpose of manipulating
search engine rankings. This “hidden text and links” practice is a violation
of Google’s quality guidelines.

Components of CSS that SEOs, in particular, should care about:

 Since style directives can live in external stylesheet files (CSS files)
instead of your page’s HTML, it makes your page less code-heavy,
reducing file transfer size and making load times faster.
 Browsers still have to download resources like your CSS file,
so compressing them can make your webpages load faster, and
page speed is a ranking factor.
 Having your pages be more content-heavy than code-heavy can lead
to better indexing of your site’s content.
 Using CSS to hide links and content can get your website manually
penalized and removed from Google’s index.

JavaScript: How a website behaves


In the earlier days of the Internet, webpages were built with HTML. When
CSS came along, webpage content had the ability to take on some style.
When the programming language JavaScript entered the scene, websites
could now not only have structure and style, but they could be dynamic.
JavaScript has opened up a lot of opportunities for non-static web page
creation. When someone attempts to access a page enhanced with this
programming language, that user’s browser will execute the JavaScript
against the static HTML that the server returned, resulting in a webpage
that comes to life with some sort of interactivity.

You’ve definitely seen JavaScript in action — you just may not have known
it! That’s because JavaScript can do almost anything to a page. It could
create a pop-up, for example, or it could request third-party resources like
ads to display on your page.

Client-side rendering versus server-side rendering


JavaScript can pose some problems for SEO, though, since search
engines don’t view JavaScript the same way human visitors do. That’s
because of client-side versus server-side rendering. Most JavaScript is
executed in a client’s browser. With server-side rendering, on the other
hand, the files are executed at the server and the server sends them to the
browser in their fully rendered state.

SEO-critical page elements such as text, links, and tags that are loaded on
the client’s side with JavaScript, rather than represented in your HTML, are
invisible from your page’s code until they are rendered. This means that
search engine crawlers won’t see what’s in your JavaScript — at least not
initially.
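As a rough sketch of why that matters (the element and file names are made up), a client-side rendered section can be nothing but an empty container in the HTML the server sends:

<!-- What the server sends, and what a crawler sees before rendering -->
<div id="product-list"></div>
<script src="/js/app.js"></script>

<!-- What the user sees after the browser has executed the JavaScript -->
<div id="product-list">
  <h2>Best-selling dog houses</h2>
  <a href="/dog-houses/deluxe">Deluxe Dog House</a>
</div>

Until the JavaScript runs, the text and link in that container simply aren't in the page's source code.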

Google says that, as long as you’re not blocking Googlebot from crawling your JavaScript files, they’re generally able to render and understand your
web pages just like a browser can, which means that Googlebot should see
the same things as a user viewing a site in their browser. However, due to
this “second wave of indexing” for client-side JavaScript, Google can miss
certain elements that are only available once JavaScript is executed.

There are also some other things that could go wrong during Googlebot’s
process of rendering your web pages, which can prevent Google from
understanding what’s contained in your JavaScript:

 You’ve blocked Googlebot from JavaScript resources (ex: with robots.txt, like we learned about in Chapter 2)
 Your server can’t handle all the requests to crawl your content
 The JavaScript is too complex or outdated for Googlebot to
understand
 JavaScript doesn’t "lazy load" content into the page until after the
crawler has finished with the page and moved on.
Needless to say, while JavaScript does open a lot of possibilities for web
page creation, it can also have some serious ramifications for your SEO if
you’re not careful.

Thankfully, there's a way to check whether Google sees the same thing as
your visitors. To see a page how Googlebot views your page, use Google
Search Console's "URL Inspection" tool. Simply paste your page's URL into
the GSC search bar:

From here, click "Test Live URL".

After Googlebot has recrawled your URL, click "View Tested Page" to see
how your page is being crawled and rendered.
Clicking the "Screenshot" tab adjacent to "HTML" shows how Googlebot Smartphone renders your page.

You’ll then see how Googlebot views your page versus how a visitor (or you) sees it. In the "More Info" tab, Google will also show you a list of any resources it may not have been able to fetch for the URL you entered.

Understanding the way websites work lays a great foundation for what we’ll
talk about next: technical optimizations to help Google understand the
pages on your website better.

How search engines understand websites


Imagine being a search engine crawler scanning down a 10,000-word
article about how to bake a cake. How do you identify the author, recipe,
ingredients, or steps required to bake a cake? This is where schema
markup comes in. It allows you to spoon-feed search engines more specific
classifications for what type of information is on your page.

Schema is a way to label or organize your content so that search engines have a better understanding of what certain elements on your web pages
are. This code provides structure to your data, which is why schema is
often referred to as “structured data.” The process of structuring your data
is often referred to as “markup” because you are marking up your content
with organizational code.

JSON-LD is Google’s preferred schema markup (announced in May ‘16), which Bing also supports. To view a full list of the thousands of available
schema markups, visit Schema.org or view the Google Developers
Introduction to Structured Data for additional information on how to
implement structured data. After you implement the structured data that
best suits your web pages, you can test your markup with
Google’s Structured Data Testing Tool.
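For example, a simplified JSON-LD snippet for a recipe page (the values are placeholders) sits inside a script tag in your HTML:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Simple Chocolate Cake",
  "author": { "@type": "Person", "name": "Jane Baker" },
  "recipeIngredient": ["2 cups flour", "1 cup sugar", "3 eggs"]
}
</script>

This tells search engines explicitly which text is the recipe name, who the author is, and what the ingredients are, rather than leaving them to infer it from the surrounding copy.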

In addition to helping bots like Google understand what a particular piece of content is about, schema markup can also enable special features to
accompany your pages in the SERPs. These special features are referred
to as "rich snippets," and you’ve probably seen them in action.
They’re things like:

 Top Stories carousels
 Review stars
 Sitelinks search boxes
 Recipes

Remember, using structured data can help enable a rich snippet to be present, but does not guarantee it. Other types of rich snippets will likely be
added in the future as the use of schema markup increases.

Some last words of advice for schema success:


 You can use multiple types of schema markup on a
page. However, if you mark up one element, like a product for
example, and there are other products listed on the page, you must
also mark up those products.
 Don’t mark up content that is not visible to visitors and
follow Google’s Quality Guidelines. For example, if you add review
structured markup to a page, make sure those reviews are actually
visible on that page.
 If you have duplicate pages, Google asks that you mark up each
duplicate page with your structured markup, not just the
canonical version.
 Provide original and updated (if applicable) content on your
structured data pages.
 Structured markup should be an accurate reflection of your
page.
 Try to use the most specific type of schema markup for your
content.
 Marked-up reviews should not be written by the business. They
should be genuine unpaid business reviews from actual customers.

Tell search engines about your preferred pages with canonicalization


When Google crawls the same content on different web pages, it
sometimes doesn’t know which page to index in search results. This is why
the rel="canonical" tag was invented: to help search engines better index
the preferred version of content and not all its duplicates.

The rel="canonical" tag allows you to tell search engines where the original,
master version of a piece of content is located. You’re essentially saying,
"Hey search engine! Don’t index this; index this source page instead." So, if
you want to republish a piece of content, whether exactly or slightly
modified, but don’t want to risk creating duplicate content, the canonical tag
is here to save the day.

Proper canonicalization ensures that every unique piece of content on your website has only one URL. To prevent search engines from indexing
multiple versions of a single page, Google recommends having a self-
referencing canonical tag on every page on your site. Without a canonical
tag telling Google which version of your web page is the preferred
one, https://www.example.com could get indexed separately from
https://example.com, creating duplicates.
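Continuing that example, the canonical tag is a single line in the <head> of each version of the page, all pointing at whichever URL you prefer:

<link rel="canonical" href="https://www.example.com/" />

Both https://www.example.com and https://example.com would carry this same tag, so Google consolidates the duplicates into the www version (or the non-www version, if that's the one you choose to reference).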

"Avoid duplicate content" is an Internet truism, and for good reason! Google
wants to reward sites with unique, valuable content — not content that’s
taken from other sources and repeated across multiple pages. Because
engines want to provide the best searcher experience, they will rarely show
multiple versions of the same content, opting instead to show only the
canonicalized version, or if a canonical tag does not exist, whichever
version they deem most likely to be the original.

Distinguishing between content filtering & content penalties


There is no such thing as a duplicate content penalty. However, you
should try to keep duplicate content from causing indexing issues by using
the rel="canonical" tag when possible. When duplicates of a page exist,
Google will choose a canonical and filter the others out of search results.
That doesn’t mean you’ve been penalized. It just means that Google only
wants to show one version of your content.

Learn more about canonicalization 


It’s also very common for websites to have multiple duplicate pages due to
sort and filter options. For example, on an e-commerce site, you might
have what’s called a faceted navigation that allows visitors to narrow down
products to find exactly what they’re looking for, such as a “sort by” feature
that reorders results on the product category page from lowest to highest
price. This could create a URL that looks something like this:
example.com/mens-shirts?sort=price_ascending. Add in more sort/filter
options like color, size, material, brand, etc. and just think about all the
variations of your main product category page this would create!

To learn more about different types of duplicate content, this post by Dr. Pete helps distill the different nuances.

How users interact with websites


In Chapter 1, we said that despite SEO standing for search engine
optimization, SEO is as much about people as it is about search engines
themselves. That’s because search engines exist to serve searchers. This
goal helps explain why Google’s algorithm rewards websites that provide
the best possible experiences for searchers, and why some websites,
despite having qualities like robust backlink profiles, might not perform well
in search.
When we understand what makes their web browsing experience optimal,
we can create those experiences for maximum search performance.

Ensuring a positive experience for your mobile visitors


Given that well over half of all web traffic today comes from mobile, it’s safe
to say that your website should be accessible and easy to navigate for
mobile visitors. In April 2015, Google rolled out an update to its algorithm
that would promote mobile-friendly pages over non-mobile-friendly pages.
So how can you ensure that your website is mobile-friendly? Although there
are three main ways to configure your website for mobile, Google
recommends responsive web design.

Responsive design
Responsive websites are designed to fit the screen of whatever type of
device your visitors are using. You can use CSS to make the web page
"respond" to the device size. This is ideal because it prevents visitors from
having to double-tap or pinch-and-zoom in order to view the content on
your pages. Not sure if your web pages are mobile friendly? You can use
Google’s mobile-friendly test to check!
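At its simplest (a rough sketch, not a complete implementation), that means a viewport meta tag in your HTML plus media queries in your CSS:

In your HTML:
<meta name="viewport" content="width=device-width, initial-scale=1">

In your CSS:
.sidebar { width: 30%; }
@media (max-width: 600px) {
  .sidebar { width: 100%; } /* stack the sidebar on small screens */
}

The same HTML is served to every device; the CSS decides how it's laid out at each screen width.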

AMP
AMP stands for Accelerated Mobile Pages, and it's used to deliver content
to mobile visitors at speeds much greater than with non-AMP delivery. AMP
is able to deliver content so fast because it delivers content from its cache
servers (not the original site) and uses a special AMP version of HTML and
JavaScript.

Learn more about AMP.

Mobile-first indexing
As of 2018, Google started switching websites over to mobile-first indexing.
That change sparked some confusion between mobile-friendliness and
mobile-first, so it’s helpful to disambiguate. With mobile-first indexing,
Google crawls and indexes the mobile version of your web pages. Making
your website compatible with mobile screens is good for users and your
performance in search, but mobile-first indexing happens independently of
mobile-friendliness.

This has raised some concerns for websites that lack parity between
mobile and desktop versions, such as showing different content,
navigation, links, etc. on their mobile view. A mobile site with different links,
for example, will alter the way in which Googlebot (mobile) crawls your site
and sends link equity to your other pages.

Improving page speed to mitigate visitor frustration


Google wants to serve content that loads lightning-fast for searchers.
We’ve come to expect fast-loading results, and when we don’t get them,
we’ll quickly bounce back to the SERP in search of a better, faster page.
This is why page speed is a crucial aspect of on-site SEO. We can improve
the speed of our web pages by taking advantage of tools like the ones
we’ve mentioned below. Click on the links to learn more about each.

 Google's PageSpeed Insights tool & best practices documentation


 How to Think About Speed Tools
 GTMetrix
 Google's Mobile Website Speed & Performance Tester
 Google Lighthouse
 Chrome DevTools & Tutorial

Images are one of the main culprits of slow pages!


As discussed in Chapter 4, images are one of the number one reasons for
slow-loading web pages! In addition to image compression, optimizing
image alt text, choosing the right image format, and submitting image
sitemaps, there are other technical ways to optimize the speed and way in
which images are shown to your users. Some primary ways to improve
image delivery are as follows:

1. SRCSET: How to deliver the best image size for each device
The SRCSET attribute allows you to have multiple versions of your image
and then specify which version should be used in different situations. This
piece of code is added to the <img> tag (where your image is located in the
HTML) to provide unique images for specific-sized devices.
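A hedged sketch of what that markup can look like (the file names, widths, and breakpoint are illustrative):

<img src="/img/cake-800.jpg"
     srcset="/img/cake-400.jpg 400w, /img/cake-800.jpg 800w, /img/cake-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Chocolate cake on a cake stand">

The browser reads the srcset and sizes attributes and picks the smallest image that still looks sharp on the visitor's screen.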

This is like the concept of responsive design that we discussed earlier, except for images!

This doesn’t just speed up your image load time, it’s also a unique way to
enhance your on-page user experience by providing different and optimal
images to different device types.

There are more than just three image size versions!


It’s a common misconception that you just need a desktop, tablet, and
mobile-sized version of your image. There are a huge variety of screen
sizes and resolutions.

Learn more about SRCSET 


2. Show visitors image loading is in progress with lazy loading
Lazy loading occurs when you go to a webpage and, instead of seeing a
blank white space for where an image will be, a blurry lightweight version of
the image or a colored box in its place appears while the surrounding text
loads. After a few seconds, the image clearly loads in full resolution. The
popular blogging platform Medium does this really well.

The low resolution version is initially loaded, and then the full high resolution version. This also helps to optimize your critical rendering path!
So while all of your other page resources are being downloaded, you’re
showing a low-resolution teaser image that helps tell users that things are
happening/being loaded. For more information on how you should lazy load
your images, check out Google’s Lazy Loading Guidance.
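One simple way to defer offscreen images (a starting point rather than a full solution; many sites pair it with a low-resolution placeholder as described above) is the native loading attribute:

<img src="/img/cake-800.jpg" loading="lazy" alt="Chocolate cake on a cake stand">

Images marked this way aren't fetched until the visitor scrolls near them, which keeps them from competing with above-the-fold resources.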

Improve speed by condensing and bundling your files


Page speed audits will often make recommendations such as “minify
resource,” but what does that actually mean? Minification condenses a
code file by removing things like line breaks and spaces, as well as
abbreviating code variable names wherever possible.

“Bundling” is another common term you’ll hear in reference to improving page speed. The process of bundling combines several files written in the same coding language into one single file. For example, a bunch of JavaScript files could be combined into one larger file to reduce the number of JavaScript files a browser has to request.

By both minifying and bundling the files needed to construct your web
page, you’ll speed up your website and reduce the number of your HTTP
(file) requests.
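As a small before-and-after sketch, here is the same CSS rule unminified and minified:

Before:
.site-header {
  color: #333333;
  margin-top: 16px;
}

After:
.site-header{color:#333;margin-top:16px}

The rule behaves identically; the minified version just costs fewer bytes to download.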

Improving the experience for international audiences


Websites that target audiences from multiple countries should familiarize
themselves with international SEO best practices in order to serve up the
most relevant experiences. Without these optimizations, international
visitors might have difficulty finding the version of your site that caters to
them.

There are two main ways a website can be internationalized:

 Language
Sites that target speakers of multiple languages are considered
multilingual websites. These sites should add something called an
hreflang tag to show Google that your page has copy for another language (see the example after this list). Learn more about hreflang.
 Country
Sites that target audiences in multiple countries are called multi-
regional websites and they should choose a URL structure that
makes it easy to target their domain or pages to specific countries.
This can include the use of a country code top level domain (ccTLD)
such as “.ca” for Canada, or a generic top-level domain (gTLD) with a
country-specific subfolder such as “example.com/ca” for
Canada. Learn more about locale-specific URLs.
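For the language case, hreflang tags (the URLs here are placeholders) go in the <head> of each language version and reference one another:

<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="fr" href="https://example.com/fr/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />

Each version of the page carries the full set of tags, so Google can connect the alternates and show searchers the version that matches their language.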

Chapter 6
LINK BUILDING & ESTABLISHING AUTHORITY
Turn up the volume.

You've created content that people are searching for, that answers their
questions, and that search engines can understand, but those qualities
alone don't mean it'll rank. To outrank the rest of the sites with those
qualities, you have to establish authority. That can be accomplished by
earning links from authoritative websites, building your brand, and nurturing
an audience who will help amplify your content.

Google has confirmed that links and quality content (which we covered back in Chapter 4) are two of the three most important ranking factors for
SEO. Trustworthy sites tend to link to other trustworthy sites, and spammy
sites tend to link to other spammy sites.

But what is a link, exactly? How do you go about earning them from other
websites? Let's start with the basics.

What are links?


Inbound links, also known as backlinks or external links, are HTML
hyperlinks that point from one website to another. They're the currency of
the Internet, as they act a lot like real-life reputation. If you went on
vacation and asked three people (all completely unrelated to one another)
what the best coffee shop in town was, and they all said, "Cuppa Joe on
Main Street," you would feel confident that Cuppa Joe is indeed the best
coffee place in town. Links do that for search engines.

What's that word mean?


There's a lot to remember when it comes to the wide world of link building.
Check out more definitions for this section in the SEO glossary.
See Chapter 6 definitions 

Since the late 1990s, search engines have treated links as votes for
popularity and importance on the web.

Internal links, or links that connect internal pages of the same domain,
work very similarly for your website. A high amount of internal links pointing
to a particular page on your site will provide a signal to Google that the
page is important, so long as it's done naturally and not in a spammy way.

The engines themselves have refined the way they view links, now using
algorithms to evaluate sites and pages based on the links they find. But
what's in those algorithms? How do the engines evaluate all those links? It
all starts with the concept of E-A-T.

You are what you E-A-T


Google's Search Quality Rater Guidelines put a great deal of importance
on the concept of E-A-T — an acronym for expert, authoritative, and
trustworthy. Sites that don't display these characteristics tend to be seen as
lower-quality in the eyes of the engines, while those that do are
subsequently rewarded. E-A-T is becoming more and more important as
search evolves and increases the importance of solving for user intent.

Creating a site that's considered expert, authoritative, and trustworthy should be your guiding light as you practice SEO. Not only will it simply
result in a better site, but it's future-proof. After all, providing great value to
searchers is what Google itself is trying to do.

What is user intent?


"User intent" refers to the driving reason behind a searcher's query. A
search for "puppy" doesn't have a strong intent — are they looking for
pictures? Facts about breeds? Care information? On the other hand, a
search for "puppy training in Seattle, WA" has a very strong intent: this user
wants to train their puppy, they're probably looking for help in Seattle, and
they may wish to sign up for a class. Try to craft content that satisfies your
searchers' intent.

Learn more about user intent 


E-A-T and links to your site
The more popular and important a site is, the more weight the links from
that site carry. A site like Wikipedia, for example, has thousands of diverse
sites linking to it. This indicates it provides lots of expertise, has cultivated
authority, and is trusted among those other sites.

To earn trust and authority with search engines, you'll need links from
websites that display the qualities of E-A-T. These don't have to be
Wikipedia-level sites, but they should provide searchers with credible,
trustworthy content.

Moz has proprietary metrics to help you determine how authoritative a site
is: Domain Authority, Page Authority, and Spam Score. In general, you'll
want links from sites with a higher Domain Authority than your own site.

Followed vs. nofollowed links


Remember how links act as votes? The rel=nofollow attribute (pronounced
as two words, "no follow") allows you to link to a resource while removing
your "vote" for search engine purposes.

Just like it sounds, "nofollow" tells search engines not to follow the link.
Some engines still follow them simply to discover new pages, but these
links don't pass link equity (the "votes of popularity" we talked about
above), so they can be useful in situations where a page is either linking to
an untrustworthy source or was paid for or created by the owner of the
destination page (making it an unnatural link).

Say, for example, you write a post about link building practices, and want to
call out an example of poor, spammy link building. You could link to the
offending site without signaling to Google that you trust it.

Standard links (ones that haven't had nofollow added) look like this:

<a href="">I love Moz</a>


Nofollow link markup looks like this:

<a href="" rel="nofollow">I love Moz</a>


If follow links pass all the link equity, shouldn't that mean you want only follow
links?
Not necessarily. Think about all the legitimate places you can create links
to your own website: a Facebook profile, a Yelp page, a Twitter account,
etc. These are all natural places to add links to your website, but they
shouldn't count as votes for your website. (Setting up a Twitter profile with a
link to your site isn't a vote from Twitter that they like your site.)

It's natural for your site to have a balance between nofollowed and followed
backlinks in its link profile (more on link profiles below). A nofollow link
might not pass authority, but it could send valuable traffic to your site and
even lead to future followed links.

Use the MozBar extension for Google Chrome to highlight links on any
page to find out whether they're nofollow or follow without ever having to
view the source code!

Download the MozBar 

Your link profile


Your link profile is an overall assessment of all the inbound links your site
has earned: the total number of links, their quality (or spamminess), their
diversity (is one site linking to you hundreds of times, or are hundreds of
sites linking to you once?), and more. The state of your link profile helps
search engines understand how your site relates to other sites on the
Internet. There are various SEO tools that allow you to analyze your link
profile and begin to understand its overall makeup.

How can I see which inbound links point to my website?


Visit Moz Link Explorer and type in your site's URL. You'll be able to see
how many and which websites are linking back to you.

See your links 


What are the qualities of a healthy link profile?
When people began to learn about the power of links, they began
manipulating them for their benefit. They'd find ways to gain artificial links
just to increase their search engine rankings. While these dangerous
tactics can sometimes work, they are against Google's terms of service and
can get a website deindexed (removal of web pages or entire domains from
search results). You should always try to maintain a healthy link
profile.

A healthy link profile is one that indicates to search engines that you're
earning your links and authority fairly. Just like you shouldn't lie, cheat, or
steal, you should strive to ensure your link profile is honest and earned via
good, old-fashioned hard work.

Links are earned or editorially placed


Editorial links are links added naturally by sites and pages that want to link
to your website.

The foundation of acquiring earned links is almost always through creating high-quality content that people genuinely wish to reference. This is where
creating 10X content (a way of describing extremely high-quality content) is
essential! If you can provide the best and most interesting resource on the
web, people will naturally link to it.

Naturally earned links require no specific action from you, other than the
creation of worthy content and the ability to create awareness about it.

Earned mentions are often unlinked!


When websites are referring to your brand or a specific piece of content
you've published, they will often mention it without linking to it. To find
these earned mentions, use Moz's Fresh Web Explorer. You can then
reach out to those publishers to see if they'll update those mentions with
links.

Links are relevant and from topically similar websites


Links from websites within a topic-specific community are generally better
than links from websites that aren't relevant to your site. If your website
sells dog houses, a link from the Society of Dog Breeders matters much
more than one from the Roller Skating Association. Additionally, links from
topically irrelevant sources can send confusing signals to search engines
regarding what your page is about.

Linking domains don't have to match the topic of your page exactly, but
they should be related. Avoid pursuing backlinks from sources that are
completely off-topic; there are far better uses of your time.

Anchor text is descriptive and relevant, without being spammy


Anchor text helps tell Google what the topic of your page is about. If
dozens of links point to a page with a variation of a word or phrase, the
page has a higher likelihood of ranking well for those types of phrases.
However, proceed with caution! Too many backlinks with the same anchor
text could indicate to the search engines that you're trying to manipulate
your site's ranking in search results.

Consider this. You ask ten separate friends at separate times how their day is going, and they each respond with the same phrase:

"Great! I started my day by walking my dog, Peanut, and then had a picante beef Top Ramen for lunch."
That's strange, and you'd be quite suspicious of your friends. The same
goes for Google. Describing the content of the target page with the anchor
text helps them understand what the page is about, but the same
description over and over from multiple sources starts to look suspicious.
Aim for relevance; avoid spam.
Use the "Anchor Text" report in Moz's Link Explorer to see what anchor text
other websites are using to link to your content.

Links send qualified traffic to your site


Link building should never be solely about search engine rankings.
Esteemed SEO and link building thought leader Eric Ward used to say that
you should build your links as though Google might disappear tomorrow. In
essence, you should focus on acquiring links that will bring qualified traffic
to your website — another reason why it's important to acquire links from
relevant websites whose audience would find value in your site, as well.

Use the "Referral Traffic" report in Google Analytics to evaluate websites that are currently sending you traffic. How can you continue to build
relationships with similar types of websites?

Link building don'ts & things to avoid


Spammy link profiles are just that: full of links built in unnatural, sneaky, or
otherwise low-quality ways. Practices like buying links or engaging in a link
exchange might seem like the easy way out, but doing so is dangerous and
could put all of your hard work at risk. Google penalizes sites with spammy
link profiles, so don't give in to temptation.

A guiding principle for your link building efforts is to never try to manipulate
a site's ranking in search results.

But isn't that the entire goal of SEO? To increase a site's ranking in search
results? And herein lies the confusion. Google wants you to earn links, not
build them, but the line between the two is often blurry. To avoid penalties
for unnatural links (known as "link spam"), Google has made clear what
should be avoided.

⃠ Purchased links
Google and Bing both seek to discount the influence of paid links in their
organic search results. While a search engine can't know which links were
earned vs. paid for from viewing the link itself, there are clues it uses to
detect patterns that indicate foul play. Websites caught buying or selling
followed links risk severe penalties that will severely drop their rankings.
(By the way, exchanging goods or services for a link is also a form of
payment and qualifies as buying links.)

⃠ Link exchanges / reciprocal linking


If you've ever received a "you link to me and I'll link to you" email from
someone you have no affiliation with, you've been targeted for a link
exchange. Google's quality guidelines caution against "excessive" link
exchange and similar partner programs conducted exclusively for the sake
of cross-linking, so there is some indication that this type of exchange on a
smaller scale might not trigger any link spam alarms.

It is acceptable, and even valuable, to link to people you work with, partner
with, or have some other affiliation with and have them link back to you.

It's the exchange of links at mass scale with unaffiliated sites that can
warrant penalties.

⃠ Low-quality directory links


These used to be a popular source of manipulation. A large number of pay-
for-placement web directories exist to serve this market and pass
themselves off as legitimate, with varying degrees of success. These types
of sites tend to look very similar, with large lists of websites and their
descriptions (typically, the site's critical keyword is used as the anchor text
to link back to the submitter's site).

There are many more manipulative link building tactics that search engines have identified. In most cases, they have found algorithmic methods for
reducing their impact. As new spam systems emerge, engineers will
continue to fight them with targeted algorithms, human reviews, and the
collection of spam reports from webmasters and SEOs. By and large, it isn't
worth finding ways around them.

If your site does get a manual penalty, there are steps you can take to get it
lifted.

Links should always:

Be earned/editorial

Come from authoritative pages

Increase with time

Come from topically relevant sources


Use relevant, natural anchor text

Bring qualified traffic to your site

Be a healthy mix of follow and nofollow

Be strategically targeted or naturally earned

How to build high-quality backlinks


Link building comes in many shapes and sizes, but one thing is always
true: link campaigns should always match your unique goals. With that
said, there are some popular methods that tend to work well for most
campaigns. This is not an exhaustive list, so visit Moz's blog posts on link
building for more detail on this topic.

Find customer and partner links


If you have partners you work with regularly, or loyal customers that love
your brand, there are ways to earn links from them with relative ease. You
might send out partnership badges (graphic icons that signify mutual
respect), or offer to write up testimonials of their products. Both of those
offer things they can display on their website along with links back to you.

Publish a blog
This content and link building strategy is so popular and valuable that it's
one of the few recommended personally by the engineers at Google. Blogs
have the unique ability to contribute fresh material on a consistent basis,
generate conversations across the web, and earn listings and links from
other blogs.

Careful, though — you should avoid low-quality guest posting just for the
sake of link building. Google has advised against this and your energy is
better spent elsewhere.
Create unique resources
Creating unique, high-quality resources is no easy task, but it's well worth
the effort. High quality content that is promoted in the right ways can be
widely shared. It can help to create pieces that have the following traits:

 Elicits strong emotions (joy, sadness, etc.)


 Something new, or at least communicated in a new way
 Visually appealing
 Addresses a timely need or interest
 Location-specific (example: the most searched-for Halloween costumes by state).

Creating a resource like this is a great way to attract a lot of links with one
page. You could also create a highly-specific resource — without as broad
of an appeal — that targeted a handful of websites. You might see a higher
rate of success, but that approach isn't as scalable.

Users who see this kind of unique content often want to share it with
friends, and bloggers/tech-savvy webmasters who see it will often do so
through links. These high-quality, editorially earned votes are invaluable to
building trust, authority, and rankings potential.

Build resource pages


Resource pages are a great way to build links. However, to find them you'll
want to know some advanced Google operators to make discovering them
a bit easier.

For example, if you were doing link building for a company that made pots
and pans, you could search for:

cooking intitle:"resources"
...and see which pages might be good link targets.

This can also give you great ideas for content creation — just think about
which types of resources you could create that these pages would all like to
reference and link to.

Get involved in your local community


For a local business (one that meets its customers in person), community
outreach can result in some of the most valuable and influential links.

 Engage in sponsorships and scholarships


 Host or participate in community events, seminars, workshops, and
organizations
 Donate to worthy local causes and join local business associations
 Post jobs and offer internships
 Promote loyalty programs
 Run a local competition
 Develop real-world relationships with related local businesses to
discover how you can team up to improve the health of your local
economy

All of these smart and authentic strategies provide good local link
opportunities.

Use advanced search operators in Google


From content research to plagiarism checks to technical audits and
beyond, using advanced Google search operators can power up your SEO
research.

Learn more 
Link building for local SEO
Building linked unstructured citations — references to a business' contact
information on a non-directory platform, like a blog or a news site — is
important for moving the ranking needle for local SEO. It's also a great way
to earn valuable links when you're marketing a local business. Learn more
in our guide:

Follow the tutorial 


Refurbish top content
You likely already know which of your site's content earns the most traffic,
converts the most customers, or retains visitors for the longest amount of
time.

Take that content and refurbish it for other platforms (Slideshare, YouTube, Instagram, Quora, etc.) to expand your acquisition funnel beyond Google.

You can also dust off, update, and simply republish older content on the
same platform. If you discover that a few trusted industry websites all linked
to a popular resource that's gone stale, update it and let those industry
websites know — you may just earn a good link.

You can also do this with images. Reach out to websites that are using
your images and not citing you or linking back to you and ask if they'd mind
including a link.

Be newsworthy
Earning the attention of the press, bloggers, and news media is an
effective, time-honored way to earn links. Sometimes this is as simple as
giving something away for free, releasing a great new product, or stating
something controversial. Since so much of SEO is about creating a digital
representation of your brand in the real world, to succeed in SEO, you have
to be a great brand.

Be personal and genuine


The most common mistake new SEOs make when trying to build links is
not taking the time to craft a custom, personal, and valuable initial outreach
email. You know as well as anyone how annoying spammy emails can be,
so make sure yours doesn't make people roll their eyes.

Your goal for an initial outreach email is simply to get a response. These
tips can help:

 Make it personal by mentioning something the person is working on, where they went to school, their dog, etc.
 Provide value. Let them know about a broken link on their website or
a page that isn't working on mobile.
 Keep it short.
 Ask one simple question (typically not for a link; you'll likely want to
build a rapport first).

Earning links is resource-intensive, so measure your success to prove value


Metrics for link building should match up with the site's overall KPIs. These
might be sales, email subscriptions, page views, etc. You should also
evaluate Domain Authority and/or Page Authority scores, the ranking of
desired keywords, and the amount of traffic to your content. We'll talk more
about measuring the success of your SEO campaigns in Chapter 7.

Measuring and improving your link efforts


So far, we've gone over the importance of earning quality links to your site
over time, as well as some common tactics for doing so. Now, we’ll cover
ways to measure the returns on your link building investment and strategies
for sustaining quality backlink growth over time.

Total number of links


The most direct way to measure your link building efforts is by tracking the
growth of total links to your site or page. Moz’s Link Explorer is a great tool
for doing that. For example, say you recently published a blog post that
received a lot of attention and you want to track total links that resource
earned.

Pop the URL into Link Explorer…


And then select "linking domains" from the "metrics over time" section to
see month-over-month link growth.

You could also do this with a root domain, subdomain, or a specific page.

A note on link cleanup


Some SEOs not only need to build good links, but need to get rid of bad
ones as well. If you’re performing link cleanup while simultaneously building
good links, just keep in mind that a stagnating or declining "linking domains
over time" graph is completely normal. You might also want to check out
Link Explorer’s "Discovered and Lost" tool to keep track of exactly which
links you’ve gained and lost.

See your discovered and lost links 


If you didn’t see the number of backlinks come in that you were aiming for,
all hope is not lost! Each link building campaign is something you can learn
from. If you want to improve the total links you earn for your next campaign,
consider these questions:

Did you create content that was 10x better than anything else out there?
It’s possible that the reason your link building efforts fell flat is that your
content wasn’t substantially more valuable than anything else like it. Take a
look back at the pages ranking for that term you’re targeting and see if
there’s anything else you could do to improve.
Did you promote your content? How?
Promotion is perhaps one of the most difficult aspects of link building, but
letting people know about your content and convincing them to link to you
is what’s really going to move the needle. For great tips on content
promotion, visit Chapter 7 of our Beginner's Guide to Content Marketing.

How many links do you actually need?


Consider how many backlinks you might actually need to rank for the
keyword you were targeting. In Keyword Explorer’s "SERP Analysis" report,
you can view the pages that are ranking for the term you're targeting, as
well as how many backlinks those URLs have. This will give you a good
benchmark for determining how many links you actually need in order to
compete and which websites might be a good link target.

What was the quality of the links you received?


One link from a very authoritative source is more valuable than ten from
low-quality sites, so keep in mind that quantity isn’t everything. When
targeting sites for backlinks, you can prioritize by how authoritative they are
using Domain Authority and Page Authority metrics.

Beyond links: How awareness, amplification, and sentiment impact authority

A lot of the methods you'd use to build links will also indirectly build your
brand. In fact, you can view link building as a great way to increase
awareness of your brand, the topics on which you're an authority, and the
products or services you offer.

Once your target audience is familiar with you and you have valuable
content to share, let your audience know about it! Sharing your content on
social platforms will not only make your audience aware of your content,
but it can also encourage them to amplify that awareness to their own
networks, thereby extending your own reach.

Are social shares the same as links? No. But shares to the right people can
result in links. Social shares can also promote an increase in traffic and
new visitors to your website, which can grow brand awareness, and with a
growth in brand awareness can come a growth in trust and links. The
connection between social signals and rankings seems indirect, but even
indirect correlations can be helpful for informing strategy.

Trustworthiness goes a long way


For search engines, trust is largely determined by the quality and quantity
of the links your domain has earned, but that's not to say that there aren't
other factors at play that can influence your site's authority. Think about all
the different ways you come to trust a brand:

 Awareness (you know they exist)


 Helpfulness (they provide answers to your questions)
 Integrity (they do what they say they will)
 Quality (their product or service provides value, possibly more than
others you've tried)
 Continued value (they continue to provide value even after you've
gotten what you needed)
 Voice (they communicate in unique, memorable ways)
 Sentiment (others have good things to say about their experience
with the brand)

That last point is what we're going to focus on here. Reviews of your brand,
its products, or its services can make or break a business.

In your effort to establish authority from reviews, follow these review rules
of thumb:

 Never pay any individual or agency to create a fake positive review


for your business or a fake negative review of a competitor.
 Don't review your own business or the businesses of your
competitors. Don't have your staff do so, either.
 Never offer incentives of any kind in exchange for reviews.
 All reviews must be left directly by customers in their own accounts;
never post reviews on behalf of a customer or employ an agency to
do so.
 Don't set up a review station/kiosk in your place of business; many
reviews stemming from the same IP can be viewed as spam.
 Read the guidelines of each review platform where you're hoping to
earn reviews.

Be aware that review spam is a problem that's taken on global proportions,


and that violation of governmental truth-in-advertising guidelines has led to
legal prosecution and heavy fines. It's just too dangerous to be worth it.
Playing by the rules and offering exceptional customer experiences is the
winning combination for building both trust and authority over time.

Authority is built when brands are doing great things in the real-world,
making customers happy, creating and sharing great content, and earning
links from reputable sources.
Chapter 7
TRACKING SEO PERFORMANCE
Set yourself up for success.

They say if you can measure something, you can improve it.

In SEO, it’s no different. Professional SEOs track everything from rankings and conversions to lost links and more to help prove the value of SEO.
Measuring the impact of your work and ongoing refinement is critical to
your SEO success, client retention, and perceived value.

It also helps you pivot your priorities when something isn’t working.

Start with the end in mind


While it’s common to have multiple goals (both macro and micro),
establishing one specific primary end goal is essential.

The only way to know what a website’s primary end goal should be is to
have a strong understanding of the website’s goals and/or client needs.
Good client questions are not only helpful in strategically directing your
efforts, but they also show that you care.

Client question examples:

1. Can you give us a brief history of your company?
2. What is the monetary value of a newly qualified lead?
3. What are your most profitable services/products (in order)?

Keep the following tips in mind while establishing a website’s primary goal,
additional goals, and benchmarks:

Goal setting tips

 Measurable: If you can’t track it, you can’t improve it.


 Be specific: Don’t let vague industry marketing jargon water down
your goals.
 Share your goals: Studies have shown that writing down and
sharing your goals with others boosts your chances of achieving
them.
Know your client
Asking your client the right questions is key to understanding their website
goals. We've prepared a list of questions you can use to start getting to
know your clients below!

Download the list 


What's that word mean?
Speaking of industry marketing jargon, make sure you're on top of it with
the SEO glossary for this chapter!

See Chapter 7 definitions 

Measuring
Now that you’ve set your primary goal, evaluate which additional metrics
could help support your site in reaching its end goal. Measuring additional
(applicable) benchmarks can help you keep a better pulse on current site
health and progress.

Engagement metrics
How are people behaving once they reach your site? That’s the question
that engagement metrics seek to answer. Some of the most popular
metrics for measuring how people engage with your content include:

Conversion rate
The number of conversions (for a single desired action/goal) divided by the
number of unique visits. A conversion rate can be applied to anything, from
an email signup to a purchase to account creation. Knowing your
conversion rate can help you gauge the return on investment (ROI) your
website traffic might deliver.

Time on page
How long did people spend on your page? If you have a 2,000-word blog
post that visitors are only spending an average of 10 seconds on, the
chances are slim that this content is being consumed (unless they’re a
mega-speed reader). However, if a URL has a low time on page, that’s not
necessarily bad either. Consider the intent of the page. For example, it’s
normal for “Contact Us” pages to have a low average time on page.

Pages per visit


Was the goal of your page to keep readers engaged and take them to a
next step? If so, then pages per visit can be a valuable engagement metric.
If the goal of your page is independent of other pages on your site (ex:
visitor came, got what they needed, then left), then low pages per visit are
okay.
Bounce rate
"Bounced" sessions indicate that a searcher visited the page and left
without browsing your site any further. Many people try to lower this metric
because they believe it’s tied to website quality, but it actually tells us very
little about a user’s experience. We’ve seen cases of bounce rate spiking
for redesigned restaurant websites that are doing better than ever. Further
investigation discovered that people were simply coming to find business
hours, menus, or an address, then bouncing with the intention of visiting
the restaurant in person. A better metric to gauge page/site quality is scroll
depth.

Scroll depth
This measures how far visitors scroll down individual webpages. Are
visitors reaching your important content? If not, test different ways of
providing the most important content higher up on your page, such as
multimedia, contact forms, and so on. Also consider the quality of your
content. Are you omitting needless words? Is it enticing for the visitor to
continue down the page? Scroll depth tracking can be set up in your
Google Analytics.

In Google Analytics, you can set up goals to measure how well your site
accomplishes its objectives. If your objective for a page is a form fill, you
can set that up as a goal. When site visitors accomplish the task, you’ll be
able to see it in your reports.

Search traffic
Ranking is a valuable SEO metric, but measuring your site’s organic
performance can’t stop there. The goal of showing up in search is to be
chosen by searchers as the answer to their query. If you’re ranking but not
getting any traffic, you have a problem.

But how do you even determine how much traffic your site is getting from
search? One of the most precise ways to do this is with Google Analytics.

Using Google Analytics to uncover traffic insights


Google Analytics (GA) is bursting at the seams with data — so much so
that it can be overwhelming if you don’t know where to look. This is not an
exhaustive list, but rather a general guide to some of the traffic data you
can glean from this free tool.

Isolate organic traffic

GA allows you to view traffic to your site by channel. This will mitigate any
scares caused by changes to another channel (ex: total traffic dropped
because a paid campaign was halted, but organic traffic remained steady).
Traffic to your site over time

GA allows you to view total sessions/users/pageviews to your site over a specified date range, as well as compare two separate ranges.

How many visits a particular page has received

Site Content reports in GA are great for evaluating the performance of a particular page — for example, how many unique visitors it received within a given date range.

Traffic from a specified campaign

You can use UTM (Urchin Tracking Module) codes for better
attribution. Designate the source, medium, and campaign, then append the
codes to the end of your URLs. When people start clicking on your UTM-
code links, that data will start to populate in GA’s "campaigns" report.
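A tagged URL (the values are made up for illustration) looks something like this:

https://example.com/spring-sale?utm_source=newsletter&utm_medium=email&utm_campaign=spring_sale

Anyone arriving via that link is attributed to the newsletter email campaign rather than lumped in with untagged traffic.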

Click-through rate (CTR)

Your CTR from search results to a particular page (meaning the percent of
people that clicked your page from search results) can provide insights on
how well you’ve optimized your page title and meta description. You can
find this data in Google Search Console, a free Google tool.

In addition, Google Tag Manager is a free tool that allows you to manage
and deploy tracking pixels to your website without having to modify the
code. This makes it much easier to track specific triggers or activity on a
website.

Additional common SEO metrics


Domain Authority & Page Authority (DA/PA)
Moz’s proprietary authority metrics provide powerful insights at a glance
and are best used as benchmarks relative to your competitors’ Domain
Authority and Page Authority.

Keyword rankings
A website’s ranking position for desired keywords. This should also include
SERP feature data, like featured snippets and People Also Ask boxes that
you’re ranking for. Try to avoid vanity metrics, such as rankings for
competitive keywords that are desirable but often too vague and don’t
convert as well as longer-tail keywords.
Number of backlinks
Total number of links pointing to your website or the number of unique
linking root domains (meaning one per unique website, as websites often
link out to other websites multiple times). While these are both common link
metrics, we encourage you to look more closely at the quality of backlinks
and linking root domains your site has.

How to track these metrics


There are lots of different tools available for keeping track of your site’s
position in SERPs, site crawl health, SERP features, and link metrics, such
as Moz Pro and STAT.

The Moz and STAT APIs (among other tools) can also be pulled into
Google Sheets or other customizable dashboard platforms for clients and
quick at-a-glance SEO check-ins. This also allows you to provide more
refined views of only the metrics you care about.

Dashboard tools like Data Studio, Tableau, and PowerBI can also help to
create interactive data visualizations.

Evaluating a site’s health with an SEO website audit


By having an understanding of certain aspects of your website — its
current position in search, how searchers are interacting with it, how it’s
performing, the quality of its content, its overall structure, and so on —
you’ll be able to better uncover SEO opportunities. Leveraging the search
engines’ own tools can help surface those opportunities, as well as
potential issues:

 Google Search Console - If you haven’t already, sign up for a free Google Search Console (GSC) account and verify your website(s). GSC is full of actionable reports you can use to detect website errors, opportunities, and user engagement.
 Bing Webmaster Tools - Bing Webmaster Tools has similar
functionality to GSC. Among other things, it shows you how your site
is performing in Bing and opportunities for improvement.
 Lighthouse Audit - Google’s automated tool for measuring a
website’s performance, accessibility, progressive web apps, and
more. This data improves your understanding of how a website is
performing. Gain specific speed and accessibility insights for a
website here.
 PageSpeed Insights - Provides website performance insights using
Lighthouse and Chrome User Experience Report data from real user
measurement (RUM) when available.
 Structured Data Testing Tool - Validates that a website is
using schema markup (structured data) properly.
 Mobile-Friendly Test - Evaluates how easily a user can navigate
your website on a mobile device.
 Web.dev - Surfaces website improvement insights using Lighthouse
and provides the ability to track progress over time.
 Tools for web devs and SEOs - Google often provides new tools for
web developers and SEOs alike, so keep an eye on any new
releases here.

While we don’t have room to cover every SEO audit check you should
perform in this guide, we do offer an in-depth Technical SEO Site Audit
course for more info. When auditing your site, keep the following in mind:

Crawlability
Are your primary web pages crawlable by search engines, or are you
accidentally blocking Googlebot or Bingbot via your robots.txt file? Does
the website have an accurate sitemap.xml file in place to help direct
crawlers to your primary pages?
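
As a quick, hypothetical illustration, a very simple robots.txt file (placed at the root of the domain) might look like this, where the disallowed path and sitemap URL are only examples:

User-agent: *
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml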

Indexed pages
Can your primary pages be found using Google? Doing a site:yoursite.com
OR site:yoursite.com/specific-page check in Google can help answer this
question. If you notice some are missing, check to make sure a meta
robots=noindex tag isn’t excluding pages that should be indexed and found
in search results.
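
The tag itself sits in the <head> of the page and, when present, looks like this:

<meta name="robots" content="noindex">

If a page carrying this tag should actually appear in search results, removing the tag (and letting the page be recrawled) is the fix.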

Page titles & meta descriptions


Do your titles and meta descriptions do a good job of summarizing the
content of each page? How are their CTRs in search results, according to
Google Search Console? Are they written in a way that entices searchers
to click your result over the other ranking URLs? Which pages could be
improved? Site-wide crawls are essential for discovering on-page and
technical SEO opportunities.
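
For reference, both elements live in the <head> of the page; the copy below is purely illustrative:

<head>
  <title>Freshly Roasted Coffee Beans | Example Coffee Co.</title>
  <meta name="description" content="Small-batch beans roasted to order and shipped within 24 hours. Free shipping on orders over $30.">
</head>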

Page speed
How does your website perform on mobile devices and in Lighthouse?
Which images could be compressed to improve load time?

Content quality
How well does the current content of the website meet the target market’s
needs? Is the content 10X better than other ranking websites’ content? If
not, what could you do better? Think about things like richer content,
multimedia, PDFs, guides, audio content, and more.
Website pruning can improve overall quality
Removing thin, old, low-quality, or rarely visited pages from your site can
help improve your website’s perceived quality. Performing a content
audit will help you discover these pruning opportunities.

Learn more about website pruning 


Keyword research and competitive website analysis (performing audits on
your competitors’ websites) can also provide rich insights on opportunities
for your own website.

For example:

 Which keywords are competitors ranking on page 1 for, but your website isn’t?
 Which keywords is your website ranking on page 1 for that also have
a featured snippet? You might be able to provide better content and
take over that snippet.
 Which websites link to more than one of your competitors, but not to
your website?

Discovering website content and performance opportunities will help devise a more data-driven SEO plan of attack! Keep an ongoing list in order to prioritize your tasks effectively.

Prioritizing your SEO fixes


In order to prioritize SEO fixes effectively, it’s essential to first have specific,
agreed-upon goals established between you and your client.

While there are a million different ways you could prioritize SEO, we suggest you rank them in terms of importance and urgency. Which fixes could provide the most ROI for a website and help support your agreed-upon goals?

Stephen Covey, author of The 7 Habits of Highly Effective People, developed a handy time management grid that can ease the burden of prioritization:

                | Urgent                                | Not Urgent
Important       | Quadrant I: Urgent & Important        | Quadrant II: Not Urgent & Important
Not Important   | Quadrant III: Urgent & Not Important  | Quadrant IV: Not Urgent & Not Important

Putting out small, urgent SEO fires might feel most effective in the short
term, but this often leads to neglecting non-urgent important fixes. The Not
Urgent & Important items are ultimately what will move the needle for a
website’s SEO. Don’t put these off.

                | Urgent                                               | Not Urgent
Important       | Primary page issues, high-volume issues              | Non-primary page issues, medium-volume issues
Not Important   | Client reports (unrelated to goals), vanity keywords | Video sitemaps, meta keywords

SEO planning & execution


Much of your success depends on effectively mapping out and scheduling
your SEO tasks. Free tools like Google Sheets can help plan out your SEO
execution (we have a free template here), but you can use whatever
method works best for you. Some people prefer to schedule out their SEO
tasks in their Google Calendar, in a kanban or scrum board, or in a daily
planner.

Use what works for you and stick to it.

Measuring your progress along the way via the metrics mentioned above
will help you monitor your effectiveness and allow you to pivot your SEO
efforts when something isn’t working. Say, for example, you changed a
primary page’s title and meta description, only to notice that the CTR for
that page decreased. Perhaps you changed it to something too vague or
strayed too far from the on-page topic — it might be good to try a different
approach. Keeping an eye on drops in rankings, CTRs, organic traffic, and
conversions can help you manage hiccups like this early, before they
become a bigger problem.
Communication is essential for SEO client longevity
Many SEO fixes are implemented without being noticeable to a client (or
user). This is why it’s essential to employ good communication skills
around your SEO plan, the time frame in which you’re working, and your
benchmark metrics, as well as frequent check-ins and reports.

Congratulations on making it through the entire Beginner’s Guide to SEO!


Now it’s time for the fun part — applying it. As a next step, we recommend
taking the initiative to start an SEO project of your own. Read on for our
suggestions!

"Without strategy, execution is aimless. Without execution, strategy is


useless." - Morris Chang

Free SEO planning worksheet


Get started on your SEO planning with the worksheet template we've
provided below. Make a copy and edit it to match your needs!

Download the template 

Practice, practice, practice


The best thing you can do to build your confidence, skills, and abilities is to
dive in and get your hands dirty. If you’re serious about SEO and hope to
serve clients someday, there’s no better place to start than with your own
website, whether there’s a hobby you’d like to blog about or you need to set
up a personal freelancing page.

We've put together a quick to-do list you can use to guide your next steps
in the wide, wonderful world of SEO:

1. Figure out your site's information architecture, design, UX, and other
necessary considerations before starting a new project. We
recommend reading Strategic SEO Decisions to Make Before
Website Design and Build and watching our Whiteboard Friday on
the topic, Launching a New Website: Your SEO Checklist.
2. Follow the steps of the Beginner's Guide to SEO as you go along:
1. Understand your goals and the basic rules of SEO
2. Make sure your site is crawlable and indexable in search
3. Conduct thorough keyword research
4. Make sure your on-site optimizations are up to snuff
5. Perform necessary technical SEO optimizations or audits
6. Earn links and establish your site's authority
7. Prioritize effectively and measure the right metrics
3. Test, iterate, and test again! Many SEOs have test sites where they
challenge SEO norms or experiment with new types of optimization
tactics. Implement this today by setting up a website and making up a
gibberish word (one that likely has zero search volume and no
competition), then see how quickly you can get it to rank in search
results. From there, you can experiment with all sorts of other SEO
tests.
4. Take on harder tasks, test refurbishing content for other platforms,
set stretch goals, and compete with stronger competitors.
5. Consider going the extra mile and challenging yourself to learn
technical SEO.
6. Find a community where you can safely learn, discuss, share
experiences, and ask for help. Moz’s Q&A
Forum, TrafficThinkTank, Search Engine Journal's SEO Experts to
Follow, and finding SEO meetups near you are all great options to
start.
7. Take the time to evaluate what worked and what didn’t after an SEO
project. How might you do things a bit differently in the future to
improve your performance?

When it comes to tracking your SEO progress, data is your best friend. You
can use Moz Pro's suite of SEO analytics and research tools to keep a
close eye on rankings, link building, technical site health, and more. Put
your new SEO skills into action with a free 30-day trial of Moz Pro!

SEO TERMS & MEANINGS

We know learning all the ins and outs of SEO vocabulary and jargon can
feel like learning another language. To help you get a handle on all the new
terms we're throwing at you, we've compiled a chapter-by-chapter SEO
glossary with definitions and helpful links. You might want to bookmark this
page for future reference!

Chapter 1: SEO 101


10 blue links: The format search engines used to display search results;
ten organic results all appearing in the same format.

Black hat: Search engine optimization practices that violate Google’s quality guidelines.

Crawling: The process by which search engines discover your web pages.

De-indexed: Refers to a page or group of pages being removed from Google’s index.

Featured snippets: Organic answer boxes that appear at the top of SERPs for certain queries.

Google My Business listing: A free listing available to local businesses.

Image carousels: Image results in some SERPs that are scrollable from left to right.

Indexing: The storing and organizing of content found during crawling.

Intent: In the context of SEO, intent refers to what users really want from
the words they typed into the search bar.

KPI: A “key performance indicator” is a measurable value that indicates how well an activity is achieving a goal.

Local pack: A pack of typically three local business listings that appear for
local-intent searches such as “oil change near me.”

Organic: Earned placement in search results, as opposed to paid advertisements.

People Also Ask boxes: A box in some SERPs featuring a list of questions related to the query and their answers.

Query: Words typed into the search bar.

Ranking: Ordering search results by relevance to the query.

Search engine: An information retrieval program that searches for items in a database that match the request input by the user. Examples: Google, Bing, and Yahoo.

SERP features: Results displayed in a non-standard format.

SERP: Stands for “search engine results page” — the page you see after
conducting a search.

Traffic: Visits to a website.
URL: Uniform Resource Locators are the locations or addresses for
individual pieces of content on the web.

Webmaster guidelines: Guidelines published by search engines like Google and Bing for the purpose of helping site owners create content that will be found, indexed, and perform well in search results.

White hat: Search engine optimization practices that comply with Google’s quality guidelines.

Chapter 2: How Search Engines Work – Crawling, Indexing, and Ranking

2xx status codes: A class of status codes that indicate the request for a
page has succeeded.

4xx status codes: A class of status codes that indicate the request for a
page resulted in error.

5xx status codes: A class of status codes that indicate the server’s
inability to perform the request.

Advanced search operators: Special characters and commands you can type into the search bar to further specify your query.

Algorithms: A process or formula by which stored information is retrieved and ordered in meaningful ways.

Backlinks: Also known as "inbound links," these are links from other websites that point to your website.

Bots: Also known as “crawlers” or “spiders,” these are what scour the Internet to find content.

Caching: A saved version of your web page.

Caffeine: Google’s web indexing system. Caffeine is the index, or collection of web content, whereas Googlebot is the crawler that goes out and finds the content.

Citations: Also known as a “business listing,” a citation is a web-based reference to a local business' name, address, and phone number (NAP).

Cloaking: Showing different content to search engines than you show to human visitors.

Crawl budget: The average number of pages a search engine bot will crawl on your site.

Crawler directives: Instructions to the crawler regarding what you want it to crawl and index on your site.

Distance: In the context of the local pack, distance refers to proximity, or the location of the searcher and/or the location specified in the query.

Engagement: Data that represents how searchers interact with your site from search results.

Google Quality Guidelines: Published guidelines from Google detailing tactics that are forbidden because they are malicious and/or intended to manipulate search results.

Google Search Console: A free program provided by Google that allows site owners to monitor how their site is doing in search.

HTML: Hypertext markup language is the language used to create web pages.

Index Coverage report: A report in Google Search Console that shows you the indexation status of your site’s pages.

Index: A huge database of all the content search engine crawlers have
discovered and deem good enough to serve up to searchers.

Internal links: Links on your own site that point to your other pages on the
same site.

JavaScript: A programming language that adds dynamic elements to static web pages.

Login forms: Refers to pages that require login authentication before a visitor can access the content.

Manual penalty: Refers to a Google “Manual Action” where a human reviewer has determined certain pages on your site violate Google’s quality guidelines.

Meta robots tag: Pieces of code that provide crawlers instructions for how
to crawl or index web page content.

Navigation: A list of links that help visitors navigate to other pages on your
site. Often, these appear in a list at the top of your website (“top
navigation”), on the side column of your website (“side navigation”), or at
the bottom of your website (“footer navigation”).

NoIndex tag: A meta tag that instructs a search engine not to index the page it’s on.

PageRank: A component of Google's core algorithm. It is a link analysis program that estimates the importance of a web page by measuring the quality and quantity of links pointing to it.

Personalization: Refers to the way a search engine will modify a person’s results based on factors unique to them, such as their location and search history.

Prominence: In the context of the local pack, prominence refers to businesses that are well-known and well-liked in the real world.

RankBrain: The machine learning component of Google’s core algorithm that adjusts ranking by promoting the most relevant, helpful results.

Relevance: In the context of the local pack, relevance is how well a local business matches what the searcher is looking for.

Robots.txt: Files that suggest which parts of your site search engines should and shouldn't crawl.

Search forms: Refers to search functions or search bars on a website that help users find pages on that website.

Search Quality Rater Guidelines: Guidelines for the human raters who work for Google to determine the quality of real web pages.

Sitemap: A list of URLs on your site that crawlers can use to discover and
index your content.

Spammy tactics: Like “black hat,” spammy tactics are those that violate
search engine quality guidelines.

URL folders: Sections of a website occurring after the TLD (“.com”), separated by slashes (“/”). For example, in “moz.com/blog” we could say “/blog” is a folder.

URL parameters: Information following a question mark that is appended to a URL to change the page’s content (active parameter) or track information (passive parameter).
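
For example, in the hypothetical URL https://www.example.com/blog/best-coffee?sort=newest, "/blog" is a URL folder and "sort=newest" is an active parameter that changes how the page’s content is ordered.
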
X-robots-tag: Like meta robots tags, this tag provides crawlers instructions
for how to crawl or index web page content.

Chapter 3: Keyword Research


Ambiguous intent: Refers to a search phrase where the goal of the
searcher is unclear and requires further specification.

Commercial investigation queries: A query in which the searcher wants to compare products to find the one that best suits them.

Informational queries: A query in which the searcher is looking for information, such as the answer to a question.

Keyword Difficulty: At Moz, Keyword Difficulty is an estimate, in the form of a numerical score, of how difficult it is for a site to outrank its competitors.

Keyword Explorer: A Moz tool for in-depth keyword research and discovery.

Local queries: A query in which the searcher is looking for something in a specific location, such as “coffee shops near me” or “gyms in Brooklyn.”

Long-tail keywords: Longer queries, typically those containing more than three words. Indicative of their length, they are often more specific than short-tail queries.

Navigational queries: A query in which the searcher is trying to get to a certain location, such as the Moz blog (query = “Moz blog”).

Regional keywords: Refers to keywords unique to a specific locale. Use Google Trends, for example, to see whether “pop” or “soda” is the more popular term in Kansas.

Search volume: The number of times a keyword was searched. Many keyword research tools show an estimated monthly search volume.

Seasonal trends: Refers to the popularity of keywords over time, such as “Halloween costumes” being most popular the week before October 31.

Seed keywords: The term we use to describe the primary words that describe the product or service you provide.

Transactional queries: The searcher wants to take an action, such as buy
something. If keyword types sat in the marketing funnel, transactional
queries would be at the bottom.

Chapter 4: On-Site Optimization


Alt text: Alternative text is the text in HTML code that describes the images
on web pages.
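
For example (the file name and description are illustrative):

<img src="roasted-coffee-beans.jpg" alt="Freshly roasted coffee beans cooling on a tray">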

Anchor text: The text with which you link to pages.

Auto-generated content: Content that is created programmatically, not written by humans.

Duplicate content: Content that is shared between domains or between multiple pages of a single domain.

Geographic modifiers: Terms that describe a physical location or service area. For example, “pizza” is not geo-modified, but “pizza in Seattle” is.

Header tags: An HTML element used to designate headings on your page.

Image compression: The act of speeding up web pages by making image file sizes smaller without degrading the image’s quality.

Image sitemap: A sitemap containing only the image URLs on a website.

Keyword stuffing: A spammy tactic involving the overuse of important keywords and their variants in your content and links.

Link accessibility: The ease with which a link can be found by human visitors or crawlers.

Link equity: The value or authority a link can pass to its destination.

Link volume: The quantity of links on a page.

Local business schema: Structured data markup placed on a web page that helps search engines understand information about a business.

Meta descriptions: HTML elements that describe the contents of the page that they’re on. Google sometimes uses these as the description line in search result snippets.

Panda: A Google algorithm update that targeted low-quality content.


Protocol: The “http” or “https” preceding your domain name. This governs
how data is relayed between the server and browser.

Redirection: When a URL is moved from one location to another. Most often, redirection is permanent (301 redirect).

Rel=canonical: A tag that allows site owners to tell Google which version
of a web page is the original and which are the duplicates.
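
For example, a duplicate or parameter-laden version of a page can point back to the original with a tag like this in its <head> (the URL is illustrative):

<link rel="canonical" href="https://www.example.com/original-page/">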

Scraped content: Taking content from websites that you do not own and
republishing it without permission on your own site.

SSL certificate: A “Secure Sockets Layer” is used to encrypt data passed between the web server and browser of the searcher.

Thin content: Content that adds little-to-no value to the visitor.

Thumbnails: Image thumbnails are a smaller version of a larger image.

Title tag: An HTML element that specifies the title of a web page.

Chapter 5: Technical Optimization


AMP: Often described as “diet HTML,” accelerated mobile pages (AMP)
are designed to make the viewing experience lightning fast for mobile
visitors.

Async: Short for “asynchronous,” async means that the browser doesn’t have to wait for a task to finish before moving onto the next one while assembling your web page.

Browser: A web browser, like Chrome or Firefox, is software that allows you to access information on the web. When you make a request in your browser (ex: “google.com”), you’re instructing your browser to retrieve the resources necessary to render that page on your device.

Bundling: To combine multiple resources into a single resource.

ccTLD: Short for “country code top level domain,” ccTLD refers to domains
associated with countries. For example, .ru is the recognized ccTLD for
Russia.

Client-side & server-side rendering: Client-side and server-side rendering refer to where the code runs. Client-side means the file is executed in the browser. Server-side means the files are executed at the server and the server sends them to the browser in their fully rendered state.

Critical rendering path: The sequence of steps a browser goes through to convert HTML, CSS and JavaScript into a viewable web page.

CSS: A Cascading Style Sheet (CSS) is the code that makes a website
look a certain way (ex: fonts and colors).

DNS: A Domain Name Server (DNS) allows domain names (ex: “moz.com”) to be linked to IP addresses (ex: “127.0.0.1”). DNS essentially translates domain names into IP addresses so that browsers can load the page’s resources.

DOM: The Document Object Model (DOM) is the structure of an HTML document — it defines how that document can be accessed and changed by things like JavaScript.

Domain name registrar: A company that manages the reservation of internet domain names. Example: GoDaddy.

Faceted navigation: Often used on e-commerce websites, faceted navigations offer a number of sorting and filtering options to help visitors more easily locate the URL they’re looking for out of a stack of thousands or even millions of URLs. For example, you could sort a clothing page by price: low to high, or filter the page to view only size: small.

Fetch and Render tool: A tool available in Google Search Console that
allows you to see a web page how Google sees it.

File compression: The process of encoding information using fewer bits; reducing the size of the file. There are many different compression techniques.

Hreflang: A tag that indicates to Google which language the content is in.
This helps Google serve the appropriate language version of your page to
people searching in that language.
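
For example, a page available in English and Spanish might include tags like these (the URLs are illustrative):

<link rel="alternate" hreflang="en" href="https://www.example.com/en/">
<link rel="alternate" hreflang="es" href="https://www.example.com/es/">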

IP address: An internet protocol (IP) address is a string of numbers that’s unique to each specific website. We assign domain names to IP addresses because they’re easier for humans to remember (ex: “moz.com”), but the internet needs these numbers to find websites.

JSON-LD: JavaScript Object Notation for Linked Data (JSON-LD) is a format for structuring your data. For example, schema.org can be implemented in a number of different formats; JSON-LD is just one of them, but it is the format preferred by Google.
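
A minimal, illustrative example for a local business (all of the details are placeholders) looks like this:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Coffee Co.",
  "telephone": "+1-555-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Seattle",
    "addressRegion": "WA"
  }
}
</script>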

Lazy loading: A way of deferring the loading of an object until it’s needed.
This method is often used to improve page speed.

Minification: To minify something means to remove as many unnecessary characters from the source code as possible without altering functionality. Whereas compression makes something smaller, minification actually removes things.

Mobile-first indexing: Google began progressively moving websites over to mobile-first indexing in 2018. This change means that Google crawls and indexes your pages based on their mobile version rather than their desktop version.

Pagination: A website owner can opt to split a page into multiple parts in a
sequence, similar to pages in a book. This can be especially helpful on
very large pages. The hallmarks of a paginated page are the rel=”next” and
rel=”prev” tags, indicating where each page falls in the greater sequence.
These tags help Google understand that the pages should have
consolidated link properties and that searchers should be sent to the first
page in the sequence.
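
For example, page 2 of a paginated category might include tags like these in its <head> (the URLs are illustrative):

<link rel="prev" href="https://www.example.com/category?page=1">
<link rel="next" href="https://www.example.com/category?page=3">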

Programming language: Writing instructions in a way a computer can understand. For example, JavaScript is a programming language that adds dynamic (not-static) elements to a web page.

Rendering: The process of a browser turning a website’s code into a viewable page.

Render-blocking scripts: A script that forces your browser to wait to be fetched before the page can be rendered. Render-blocking scripts can add extra round trips before your browser can fully render a page.

Responsive design: Google’s preferred design pattern for mobile-friendly websites, responsive design allows the website to adapt to fit whatever device it’s being viewed on.

Rich snippet: A snippet is the title and description preview that Google
and other search engines show of URLs on their results pages. A “rich”
snippet, therefore, is an enhanced version of the standard snippet. Some
rich snippets can be encouraged by the use of structured data markup, like
review markup displaying as rating stars next to those URLs in the search
results.
Schema.org: Code that “wraps around” elements of your web page to
provide additional information about it to the search engine. Data using
schema.org is referred to as “structured” as opposed to “unstructured” — in
other words, organized rather than unorganized.

SRCSET: Like responsive design for images, SRCSET indicates which version of the image to show for different situations.
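
For example (the file names and widths are illustrative):

<img src="hero-small.jpg"
     srcset="hero-small.jpg 480w, hero-medium.jpg 800w, hero-large.jpg 1200w"
     sizes="(max-width: 600px) 480px, 800px"
     alt="Example hero image">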

Structured Data: Another way to say “organized” data (as opposed to unorganized). Schema.org is a way to structure your data, for example, by labeling it with additional information that helps the search engine understand it.

Chapter 6: Link Building & Establishing Authority


10x content: Coined by Rand Fishkin to describe content that is “10x
better” than anything else on the web for that same topic.

Amplification: Sharing or spreading the word about your brand; often used in the context of social media, paid advertisements, and influencer marketing.

DA: Domain Authority (DA) is a Moz metric used to predict a domain’s ranking ability; best used as a comparative metric (ex: comparing a website’s DA score to that of its direct competitors).

Deindexed: When a URL, section of URLs, or an entire domain has been removed from a search engine index. This can happen for a number of reasons, such as when a website receives a manual penalty for violating Google’s quality guidelines.

Directory links: “Directory” in the context of local SEO is an aggregate list of local businesses, usually including each business’s name, address, phone number (NAP) and other information like their website. “Directory” can also refer to a type of unnatural link that violates Google’s guidelines: “low-quality directory or bookmark site links.”

Editorial links: When links are earned naturally and given out of an author’s own volition (rather than paid for or coerced), they are considered editorial.

Fresh Web Explorer: A Moz tool that allows you to scan the web for
mentions of a specific word or phrase, such as your brand name.

Follow: The default state of a link, “follow” links pass PageRank.


Google Analytics: A free (with an option to pay for upgraded features) tool
that helps website owners get insight into how people are engaging with
their website. Some examples of reports you can see in Google Analytics
include acquisition reports that show what channels your visitors are
coming from, and conversion reports that show the rate at which people are
completing goals (ex: form fills) on your website.

Google search operators: Special text that can be appended to your query to further specify what types of results you’re looking for. For example, adding “site:” before a domain name can return a list of all (or many) indexed pages on said domain.

Guest blogging: Often used as a link building strategy, guest blogging involves pitching an article (or idea for an article) to a publication in the hopes that they will feature your content and allow you to include a link back to your website. Just be careful though. Large-scale guest posting campaigns with keyword-rich anchor text links are a violation of Google’s quality guidelines.

Link building: While “building” sounds like this activity involves creating links to your website yourself, link building actually describes the process of earning links to your site for the purpose of building your site’s authority in search engines.

Link exchange: Also known as reciprocal linking, link exchanges involve “you link to me and I’ll link to you” tactics. Excessive link exchanges are a violation of Google’s quality guidelines.

Link Explorer: Moz’s tool for link discovery and analysis.

Link profile: A term used to describe all the inbound links to a select
domain, subdomain, or URL.

Linked unstructured citations: References to a business’ complete or partial contact information on a non-directory platform (like online news, blogs, best-of lists, etc.).

MozBar: A plugin available for the Chrome browser that allows you to easily view metrics for the selected page, like DA, PA, title tag, and more.

NoFollow: Links marked up with rel=”nofollow” do not pass PageRank. Google encourages the use of these in some situations, like when a link has been paid for.
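
For example (the URL is illustrative):

<a href="https://www.example.com/" rel="nofollow">Example sponsor</a>
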
PA: Similar to DA, Page Authority (PA) predicts an individual page’s
ranking ability.

Purchased links: Exchanging money, or something else of value, for a link. If a link is purchased, it constitutes an advertisement and should be treated with a nofollow tag so that it does not pass PageRank.

Qualified traffic: When traffic is “qualified,” it usually means that the visit is relevant to the intended topic of the page, and therefore the visitor is more likely to find the content useful and convert.

Referral Traffic: Traffic sent to a website from another website. For example, if your website is receiving visits from people clicking on your site from a link on Facebook, Google Analytics will attribute that traffic as “facebook.com / referral” in the Source/Medium report.

Resource pages: Commonly used for the purpose of link building, resource pages typically contain a list of helpful links to other websites. If your business sells email marketing software, for example, you could look up marketing intitle:"resources" and reach out to the owners of said sites to see if they would include a link to your website on their page.

Sentiment: How people feel about your brand.

Spam Score: A Moz metric used to quantify a domain’s relative risk of penalization by using a series of flags that are highly correlated with penalized sites.

Unnatural links: Google describes unnatural links as “creating links that weren’t editorially placed or vouched for by the site’s owner on a page.” This is a violation of their guidelines and could warrant a penalty against the offending website.

Chapter 7: Measuring, Prioritizing, & Executing SEO


API: An application programming interface (API) allows for the creation of
applications by accessing the features or data of another service like an
operating system or application.

Bounce rate: The percentage of total visits that did not result in a secondary action on your site. For example, if someone visited your home page and then left before viewing any other pages, that would be a bounced session.

Channel: The different vehicles by which you can get attention and acquire traffic, such as organic search and social media.

Click-through rate: The ratio of clicks to impressions on your URLs (that is, the percentage of people who saw your result and clicked on it).

Conversion rate: The ratio of conversions to total visits. Conversion rate answers the question: how many of my website visitors are filling out my forms, calling, signing up for my newsletter, etc.?

Qualified lead: If you use your website to encourage potential customers to contact you via phone call or form, a “lead” is every contact you receive. Not all of those leads will become customers, but “qualified” leads are relevant prospects that have a high likelihood of becoming paying customers.

Google Analytics goals: What actions are you hoping people take on your
website? Whatever your answer, you can set those up as goals in Google
Analytics to track your conversion rate.

Google Tag Manager: A single hub for managing multiple website tracking
codes.

Googlebot / Bingbot: How major search engines like Google and Bing crawl the web; their “crawlers” or “spiders.”

Kanban: A scheduling system.

Pages per session: Also referred to as “page depth,” pages per session describes the average number of pages people view of your website in a single session.

Page speed: Page speed is made up of a number of equally important qualities, such as first contentful/meaningful paint and time to interactive.

Pruning: In an SEO context, pruning typically refers to removing low-quality pages in order to increase the quality of the site overall.

Scroll depth: A method of tracking how far visitors are scrolling down your
pages.

Scrum board: A method of keeping track of tasks that need to be completed to accomplish a larger goal.

Search traffic: Visits sent to your websites from search engines like Google.

Time on page: The amount of time someone spent on your page before
clicking to the next page. Because Google Analytics tracks time on page by
when someone clicks your next page, bounced sessions will clock a time
on page of 0.

UTM code: An urchin tracking module (UTM) is a simple code that you can
append to the end of your URL to track additional details about the click,
such as its source, medium, and campaign name.
