
Anatomy of Search Engine Optimization – Traffic, Content & Links

November 3, 2011 – Finn

The Definition of SEO

Seattle-based SEO firm SEOmoz defines SEO as:

the active practice of optimizing a web site by improving internal and external aspects in order to increase the traffic the site receives from search engines.

The Goal in SEO

The overall goal in SEO is to generate relevant traffic to your website via Google, Bing, and Yahoo!. Nowadays you can also include Facebook and, depending on your marketing strategy, Twitter. This also includes the local/maps, blog, image, and news searches on each of these sites.

The Good, the Bad & the Ugly: Types of SEO

Most modern technological disciplines break their practices down into three categories:

  • White Hat
  • Gray Hat
  • Black Hat

These labels are taken straight out of old Western movies. White Hat – accepted best practice. Gray Hat – toein’ the line. Black Hat – evil.

For the purposes of best practices, we’ll stick to White Hat SEO. But in the following sections we’ll point out related gray and black hat practices where relevant. And if you really want to know more about the dark arts, leave me a comment.

Anatomy of Search Engine Optimization

Google and Bing each use a complex, proprietary algorithm to determine which results to show when you search. Their algorithms are composed of approximately 200 factors, which are based upon three primary elements. Those elements are:

  • Content
  • Traffic
  • Backlinks


Imagine making a simple text copy of every page on the internet and storing it on your own system. Now imagine making a copy of the web every day of the year as the internet expands by billions of pages per month. Welcome to Google’s nightmare.

Google and Microsoft’s Bing (the two big search engines) each maintain a network of servers dedicated to storing copies of the World Wide Web. They gather this data with an army of automated programs called “bots.”

Search Engine Bots

(Image originally from 6smarketing.com.)

Bots are dumbed-down versions of web browsers like Internet Explorer, Mozilla Firefox, and Apple Safari. They are designed only to read “static content” – anything text based. Here’s a list of all the things Google can read and index.

Bots are not able to read, understand or make copies of any “dynamic content.” They’ve got enough pages to crawl.

Examples of dynamic content that search engine bots cannot read include:

  • video
  • mp3s
  • website animation (most notably Adobe Flash programs that are not paginated or tagged with text)
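To make the “bot’s-eye view” concrete, here is a minimal sketch in Python of how a crawler might collect only the static text and links from a page while ignoring embedded objects. The `MiniBot` class and the skip list are illustrative assumptions, not how any real search engine bot is implemented:

```python
from html.parser import HTMLParser

class MiniBot(HTMLParser):
    """Toy crawler view of a page: keep static text and links, skip embedded content."""
    # Tags whose contents this sketch treats as invisible "dynamic" content
    SKIP = {"script", "style", "object", "embed"}

    def __init__(self):
        super().__init__()
        self.text = []      # readable text fragments
        self.links = []     # hrefs the bot could follow
        self._skipping = 0  # depth inside skipped tags

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skipping += 1
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skipping:
            self._skipping -= 1

    def handle_data(self, data):
        if not self._skipping and data.strip():
            self.text.append(data.strip())

page = """<html><body>
<h1>Welcome</h1>
<object data="movie.swf">Text trapped inside an animation</object>
<a href="/about">About us</a>
</body></html>"""

bot = MiniBot()
bot.feed(page)
print(bot.text)   # ['Welcome', 'About us'] -- the animation text is invisible
print(bot.links)  # ['/about']
```

Notice that the text inside the `<object>` tag never makes it into the index in this sketch – which is exactly why text locked inside Flash or video is wasted on search engines.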

What the Search Engine Bot Is Searching for

Though search engine bots can’t “read” the content they are copying (at least, not yet), what they can do is recognize patterns.

Patterns search engine bots can determine include:

  • Keyword themes
  • Keyword phrases
  • Frequency a website updates content
  • Keywords on a link’s text
  • Keywords in a URL
  • Meta Data (see the Onsite SEO section)
  • Tags on dynamic content (see the OnSite SEO Section)
  • The number of links that point to a page
  • The amount of traffic the search engine sends to a page
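The simplest of these patterns – keyword frequency – is easy to sketch. The function below is a toy illustration (not a real ranking signal implementation) of counting how often candidate keyword phrases appear in a page’s text:

```python
import re
from collections import Counter  # handy if you extend this to all words

def keyword_frequencies(text, phrases):
    """Count whole-word occurrences of each candidate keyword phrase in page text."""
    lowered = text.lower()
    return {
        phrase: len(re.findall(r"\b" + re.escape(phrase.lower()) + r"\b", lowered))
        for phrase in phrases
    }

page_text = "SEO basics: search engine optimization improves traffic. Good SEO takes time."
print(keyword_frequencies(page_text, ["seo", "search engine optimization", "backlinks"]))
# {'seo': 2, 'search engine optimization': 1, 'backlinks': 0}
```

Real engines weigh where the keyword appears (title, heading, link text, URL) far more heavily than raw counts, but the pattern-matching idea is the same.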

Even pictures must be named and tagged in order to be indexed. Not only that, but search engine bots are not able to read text that is embedded within dynamic content. So if you have a site with Flash animation that has text scrolling through it, search engine bots will probably not be able to read it. At best, they can read the tags (if you set them up).

Once a bot crawls a page and makes a copy, it delivers its findings to the servers, where the search engine’s algorithm determines the quality of the page.

And the cycle repeats.

Names of Common Search Engine Bots:

  • Google – Googlebot
  • Bing – MSNBot
  • Yahoo! (RIP) – Slurp
  • W3C – W3C_Validator

Here’s an infographic on how Google works, in great detail: “How Google Works” (infographic by the Pay Per Click Blog).


The path a search engine bot uses to crawl pages is primarily dependent upon the pages the bot has already crawled. Bots follow “links” – a web page’s URL. The links a bot follows can be either to other pages on the same website or to other websites.

Bots don’t blindly follow every link. A bot uses the search engine’s data to determine the quality of the page and the quality of the link before deciding which page to crawl next. And as with content, the bot delivers its findings to the servers, where the search engine’s algorithm determines the quality of the links.
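One common way to model “crawl the best links first” is a priority queue: each discovered URL gets a quality score, and the crawler always pops the highest-scoring one next. This is a simplified sketch with made-up scores, not Google’s actual scheduling logic:

```python
import heapq

def crawl_order(links, quality):
    """Return links in the order a quality-first crawler would visit them.

    heapq is a min-heap, so scores are negated to pop highest-quality first.
    """
    frontier = [(-quality.get(url, 0), url) for url in links]
    heapq.heapify(frontier)
    order = []
    while frontier:
        _, url = heapq.heappop(frontier)
        order.append(url)
    return order

# Hypothetical quality scores the engine already holds for known pages
scores = {"example.com/home": 9, "example.com/old-page": 2, "example.com/blog": 5}
print(crawl_order(list(scores), scores))
# ['example.com/home', 'example.com/blog', 'example.com/old-page']
```

The practical takeaway for site owners: links from high-quality pages get your new content crawled (and re-crawled) sooner.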


The search engines not only deliver results but also keep track of every search performed on their platform. They record the keywords (the words searchers use to perform a query), the results displayed for the query, and which links were clicked.

SEO Ranking Factors: How Content, Traffic & Backlinks Work Together

In another infographic, designed by Search Engine Land and recently shared by my mentor Douglas Karr, you can see below how the balance of content, traffic & backlinks works:

In the end, it’s a blend. And if you only have one to work on, you work on Content.

But you should find time for all three.
