Seattle-based SEO firm SEOmoz defines SEO as:
“the active practice of optimizing a web site by improving internal and external aspects in order to increase the traffic the site receives from search engines.”
The overall goal of SEO is to generate relevant traffic to your website via Google, Bing, Yahoo! and Ask.com. Nowadays you can include Facebook and, depending on your marketing strategy, Twitter. This also includes the local/maps, blog, image, and news searches within each of these sites.
Almost all modern technical disciplines break their practices down into three categories:
These categories are taken straight out of old Western movies. White Hat – accepted best practice. Gray Hat – toeing the line. Black Hat – evil.
For the purposes of best practices, we’ll stick to White Hat SEO. But in the following sections we will point out where the other practices come into play. And if you really want to know more about the dark arts, leave me a comment.
Google and Bing each use a complex, proprietary algorithm to determine which results to give you when you search. Their algorithms are composed of approximately 200 factors, which are built upon three primary elements: content, traffic, and backlinks.
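No one outside the search engines knows the exact formula, but the core idea – many weighted signals rolled up into a single score – can be sketched in a few lines of Python. The weights, signal names, and values below are invented for illustration; they are not Google’s.

```python
# Toy illustration only: the real algorithms are proprietary and use
# roughly 200 signals. These three weights and the page scores are
# made up for demonstration.

def rank_score(page):
    """Combine three hypothetical signal groups into one score."""
    weights = {"content": 0.5, "traffic": 0.2, "backlinks": 0.3}
    return sum(weights[k] * page[k] for k in weights)

pages = [
    {"url": "a.example", "content": 0.9, "traffic": 0.4, "backlinks": 0.7},
    {"url": "b.example", "content": 0.6, "traffic": 0.8, "backlinks": 0.5},
]

# Rank results from highest to lowest combined score.
for page in sorted(pages, key=rank_score, reverse=True):
    print(page["url"], round(rank_score(page), 2))
```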
Imagine making a simple text copy of every page on the internet and storing it on your own system. Now imagine making a copy of the web every day of the year as the internet expands by billions of pages per month. Welcome to Google’s nightmare.
Google and Microsoft Bing (the two big search engines) each have a network of servers dedicated to storing copies of the World Wide Web. They gather this data with an army of automated programs called “bots.”
Bots are dumbed-down versions of web browsers like Internet Explorer, Mozilla Firefox, and Apple Safari. They are designed only to read “static content” – anything text-based. Here’s a list of all the things Google can read and index.
Bots are not able to read, understand or make copies of any “dynamic content.” They’ve got enough pages to crawl.
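To get a feel for what a bot actually “sees,” here’s a minimal sketch using only Python’s standard library. It fetches a page’s raw HTML and keeps just the visible text, throwing away script bodies – a stand-in for the dynamic content real bots skip. Real crawlers are far more sophisticated, and the URL here is just a placeholder.

```python
# Minimal sketch of what a crawler "sees": fetch raw HTML, keep only
# the text, and ignore <script> bodies. example.com is a stand-in URL.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextOnly(HTMLParser):
    """Collect visible text; skip <script> content entirely."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

html = urlopen("https://example.com").read().decode("utf-8", "replace")
parser = TextOnly()
parser.feed(html)
print(" ".join(parser.chunks))
```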
Though search engine bots can’t “read” the content they are copying (at least, not yet), what they can do is recognize patterns.
Even pictures require naming and tagging techniques in order to be indexed. Nor can search engine bots read text that is embedded within dynamic content. So if your site has a Flash animation with text scrolling through it, search engine bots will probably not be able to read it. At best, they can read the tags (if you set them up).
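That makes those tags worth auditing. Here’s a rough sketch along the same lines: it flags <img> tags with no alt text, since that attribute is often the only thing a bot can “read” about an image. The sample markup is made up for the demo.

```python
# Quick audit sketch: flag <img> tags that have no alt attribute.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                print("Missing alt text:", attrs.get("src", "?"))

sample = '<img src="logo.png"><img src="team.jpg" alt="Our team photo">'
AltChecker().feed(sample)  # prints: Missing alt text: logo.png
```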
Once a bot crawls a page and makes a copy, it delivers its findings to the servers, where the search engine algorithm determines the quality of the page.
And the cycle repeats.
Infographic by the Pay Per Click Blog
The path a search engine bot takes as it crawls is primarily dependent upon the pages it has already crawled. Bots follow “links” – references to a web page’s URL. Those links can lead either to other pages on the same website or to other websites.
Bots don’t blindly follow every link. They use the search engine’s data to gauge the quality of the page and the quality of the link before deciding which page to crawl next. And as with content, the bot delivers its findings to the servers, where the search engine algorithm determines the quality of the links.
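One simple way to picture that prioritization is a crawl frontier built on a priority queue: the bot always pops the most promising link next. The quality scores below are invented for the sketch; real engines derive them from their own ranking data.

```python
# Sketch of a quality-aware crawl frontier: instead of following links
# in the order they are found, the bot crawls the best-scoring one next.
import heapq

frontier = []  # (negative score, url) so heapq pops the best link first
seen = set()

def enqueue(url, link_quality):
    if url not in seen:
        seen.add(url)
        heapq.heappush(frontier, (-link_quality, url))

enqueue("https://site.example/home", 0.9)
enqueue("https://site.example/about", 0.4)
enqueue("https://other.example/article", 0.7)

while frontier:
    score, url = heapq.heappop(frontier)
    print(f"crawling {url} (quality {-score})")
```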
The search engines not only deliver results but also keep track of every search performed on their platforms. They record the keywords (the words searchers use to make a query), the results displayed for the query, as well as which links were clicked.
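The shape of that logged data might look something like this sketch – the field names are my own invention, not anything the engines have published.

```python
# Illustrative shape of the data a search engine might log per query.
from dataclasses import dataclass, field

@dataclass
class SearchLogEntry:
    keywords: str                                       # what the searcher typed
    results_shown: list = field(default_factory=list)   # URLs displayed
    clicked: list = field(default_factory=list)         # URLs clicked

entry = SearchLogEntry(
    keywords="seattle seo firm",
    results_shown=["seomoz.org", "example-agency.com"],
    clicked=["seomoz.org"],
)
print(entry)
```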
In another infographic, designed by Search Engine Land and recently shared by my mentor Douglas Karr at MarketingTechBlog.com, you can see below how the balance of content, traffic, and backlinks works together: