Understanding the concept of SEO

Articles, Concept, Guide, Tips | Sep. 23, 2011 | 0 Comments

If you really want to learn how search engine optimization (SEO) works, you need to understand that search engines are not human. That may sound obvious, but the ways humans and search engines look at web pages differ in ways that are worth spelling out.

Keep in mind that search engines are driven by text when they explore the Internet, unlike humans, who reason as they search. Computer technology advances at a rapid pace, but search engines still do not have the intelligence humans have. This means they will not reward an attractive design, elegant animation or user-friendly tabs and buttons; they simply scan, or crawl, pages in search of relevant text.

It is the text content within a website that allows these spiders to find keywords that match users’ requests. Finding that content is only part of the job, though: before delivering a page of search results, search engines crawl (or scan) the web, index what they find, process the query, calculate relevancy and finally retrieve the information. Keep in mind that these are mathematical processes, not human ones.

What is Crawling?

The first step search engines take is to crawl, or scan, the Internet in search of relevant material. They do this using specific software known as a spider or crawler, hence the term “to crawl”. Google’s crawler is called Googlebot, but the principle is the same everywhere: crawlers follow the links on web pages and index all the content they find along the way.
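To make that idea concrete, here is a minimal sketch of link-following in Python. It only illustrates the principle, not how Googlebot actually works; the page limit and timeout are arbitrary values chosen for the example.

```python
# A toy crawler: fetch a page, record its text, then follow its links.
# Real crawlers are vastly more sophisticated (robots.txt, politeness, scale).
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class PageParser(HTMLParser):
    """Collects the visible text and outgoing links of one HTML page."""
    def __init__(self):
        super().__init__()
        self.text = []
        self.links = []

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Visit pages breadth-first, returning {url: page text}."""
    seen, queue, pages = set(), [start_url], {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except Exception:
            continue  # unreachable pages are simply skipped
        parser = PageParser()
        parser.feed(html)
        pages[url] = " ".join(parser.text)
        # Follow the links found on this page, resolved against the current URL.
        queue.extend(urljoin(url, link) for link in parser.links)
    return pages
```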

Keeping in mind how many web pages there are, crawlers may take weeks, or even a couple of months, to revisit your website after you have added content or modified your pages. The one thing you can do in the meantime is check what crawlers see on your website by running a spider simulator.

It will not do your website much good if it’s teeming with Flash movies, images, JavaScript and password-protected pages, as these may not be viewable to crawlers and therefore will not be considered by search engines.
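A spider simulator does little more than strip a page down to the text a crawler can actually read. The sketch below is a bare-bones version of that idea, assuming a simple rule of “ignore scripts, styles and embedded objects”; real simulators and real crawlers apply many more rules.

```python
# A bare-bones "spider simulator": show only what a text-driven crawler can read.
# Scripts, styles and Flash objects carry no indexable text here, so they are
# skipped; anything behind a login could not be fetched at all.
from html.parser import HTMLParser

class SpiderView(HTMLParser):
    SKIP = {"script", "style", "object", "embed"}  # content crawlers ignore

    def __init__(self):
        super().__init__()
        self.skipping = 0
        self.visible = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skipping += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skipping:
            self.skipping -= 1

    def handle_data(self, data):
        if not self.skipping and data.strip():
            self.visible.append(data.strip())

def simulate(html):
    """Return the text a crawler would extract from raw HTML."""
    view = SpiderView()
    view.feed(html)
    return " ".join(view.visible)

print(simulate("<p>Welcome</p><script>fancyAnimation();</script><object>movie.swf</object>"))
# -> "Welcome"   (the script and the Flash movie contribute nothing)
```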

Once a web page has been crawled, the next step search engines take is to index the content they find on it. The page is then stored in an extensive database, where it can be used when needed. By indexing a page, the search engine picks out the words and phrases that best describe its content and assigns those keywords to the page. By optimizing pages for search engines you are in fact helping them classify the pages of your website in the best possible way, which will result in a higher ranking.
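In data-structure terms, such an index is essentially a map from keywords to the pages that contain them, often called an inverted index. The sketch below shows the bare idea, reusing the page dictionary produced by the toy crawler above; the stopword list is just an illustrative sample.

```python
# A toy inverted index: map each word to the pages that contain it.
# Real engines also store positions, weights and far richer signals.
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def tokenize(text):
    """Lowercase words, keeping only alphabetic terms that are not stopwords."""
    return [w for w in text.lower().split() if w.isalpha() and w not in STOPWORDS]

def build_index(pages):
    """pages: {url: text}, e.g. the output of the toy crawl() above."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in tokenize(text):
            index[word].add(url)
    return index
```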

Therefore, when a user sends a search request with particular keywords, the search engine processes the phrase and looks it up in the database where the indexed pages are stored. From there, the search engine checks the millions of indexed pages for relevancy and comes up with those that best fit the search terms.
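Continuing the toy index above, query processing can be sketched as a lookup followed by a naive relevancy score. A real engine weighs hundreds of signals, but the overall shape of the calculation is the same: score, sort, return the best matches. The starting URL in the usage example is purely hypothetical.

```python
def search(index, query, limit=10):
    """Rank indexed pages by how many of the query's keywords they contain."""
    scores = {}
    for word in tokenize(query):           # tokenize() from the index sketch above
        for url in index.get(word, ()):
            scores[url] = scores.get(url, 0) + 1
    # Pages matching more query terms come first; this is the crudest possible
    # relevancy measure, standing in for the engine's real calculations.
    return sorted(scores, key=scores.get, reverse=True)[:limit]

# Example: crawl a site, index it, then answer a query.
# pages = crawl("https://example.com")     # hypothetical starting URL
# index = build_index(pages)
# print(search(index, "seo crawling basics"))
```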

 
