To best employ SEO techniques so that your site can attain a higher page rank, you should first learn how search engines work. There are many search engines out there, with Google leading the competition; most of them are crawler-based, while some rely on human-compiled directories. Search engines work by finding the sites most relevant to a search query and displaying the best of these results first.
To accomplish this, they use a complex set of rules called algorithms. When a search query is submitted, these algorithms determine whether sites are relevant to the query and then rank them so that the best matches appear first.
Search engine algorithms are kept confidential and changed frequently, both to protect their databases from manipulation and to prevent webmasters from dominating search results. Algorithm changes are also intended to bring new sites to the top of the results on a regular basis, rather than having the same old sites show up month after month.
The differences between search engines and directories:
- Search engines use a spider to crawl the websites they find, as well as submitted sites. As they crawl the web, they gather the information their algorithms use to rank your site.
- Directories rely on submissions from webmasters, with live humans viewing your site to determine if it will be accepted. If accepted, directories often rank sites in alphanumeric order, with paid listings sometimes on top. Some search engines also place paid listings at the top, so it’s not always possible to get a ranking in the top three or more places unless you’re willing to pay for it.
Search engines are primarily composed of three parts: the spider, the index, and the search engine program.
The Spider
A search engine robot's action is called spidering, as it resembles the way a spider moves across a web. The spider's job is to visit a web page, read its contents, follow links to the other pages on that website, and bring the information back. From one page it travels to several others, and this proliferation follows several parallel and nested paths simultaneously. Spiders revisit a site at some interval, perhaps every month to every few months, and re-index its pages; this way, any changes to your pages can be reflected in the index. The spiders automatically visit your web pages and create their listings. An important aspect to study is what factors promote "deep crawl" – the depth to which the spider will go into your website from the page it first visited. Listing (submitting or registering) with a search engine is a step that can accelerate and improve the chances of that engine spidering your pages.
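The link-following behavior described above can be sketched as a breadth-first crawl with a depth limit. The pages, URLs, and the `max_depth` cutoff below are all invented for illustration; a real spider fetches live pages over HTTP and applies far more elaborate crawl policies.

```python
from collections import deque
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href targets from <a> tags, as a spider would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# A toy, in-memory "website" standing in for real fetched pages
# (page names and contents are made up for illustration).
PAGES = {
    "/index.html": '<a href="/about.html">About</a> <a href="/products.html">Products</a>',
    "/about.html": '<a href="/index.html">Home</a>',
    "/products.html": '<a href="/widget.html">Widget</a>',
    "/widget.html": "No outgoing links here.",
}

def crawl(start, max_depth=2):
    """Breadth-first crawl from `start`, following links up to
    `max_depth` hops deep -- the "deep crawl" limit mentioned above."""
    visited = set()
    queue = deque([(start, 0)])
    while queue:
        url, depth = queue.popleft()
        if url in visited or url not in PAGES or depth > max_depth:
            continue
        visited.add(url)
        parser = LinkParser()
        parser.feed(PAGES[url])
        for link in parser.links:
            queue.append((link, depth + 1))
    return visited

print(sorted(crawl("/index.html")))
```

Note that lowering `max_depth` prunes pages buried deeper in the site, which is why pages linked directly from your home page are the most reliably crawled.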
The Index
As the spider moves across web pages it stores them in memory, but the key action is indexing. The index is a huge database containing all the information brought back by the spider, and it is constantly updated as the spider collects more. Not every part of a page is indexed, and the searching and page-ranking algorithms are applied only to the index that has been created. Most search engines claim to index the full visible body text of a page. In a subsequent section, we explain the key considerations for ensuring that the indexing of your web pages improves relevance during search. A combined understanding of the indexing and page-ranking processes will lead to the right strategies. The Meta description and keywords tags have a vital role, as they are indexed in a specific way. Some of the top search engines do not index keywords they consider spam. They will also not index certain stop words (commonly used words such as 'a', 'the', or 'of') so as to save space or speed up processing. Images themselves are obviously not indexed, but image descriptions, Alt text, and text within comments are included in the index by some search engines.
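The index described above is essentially an inverted index: a mapping from each word to the pages that contain it, with stop words skipped. The stop-word list, page URLs, and text below are simplified stand-ins chosen for illustration, not any engine's actual behavior.

```python
# A small stop-word list, as an assumption; real engines keep their own.
STOP_WORDS = {"a", "the", "of", "and", "to", "in"}

def build_index(pages):
    """Build an inverted index: word -> set of page URLs containing it.
    Stop words are skipped, as the text describes, to save space."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            word = word.strip(".,!?")
            if word and word not in STOP_WORDS:
                index.setdefault(word, set()).add(url)
    return index

# Hypothetical pages for illustration.
pages = {
    "/widgets.html": "The best widgets in the world.",
    "/gadgets.html": "Gadgets and widgets of every kind.",
}
index = build_index(pages)
print(index["widgets"])  # both pages mention widgets
print("the" in index)    # False: stop words are never indexed
```

Because ranking later runs only against this index, any text the indexer discards (stop words, spammed keywords) can never help a page match a query.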
The Search Engine Program
The search engine software, or program, is the final part. When a person submits a search on a keyword or phrase, the software searches the index for relevant information and then reports back to the searcher with the most relevant web pages listed first.
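The lookup step can be sketched as follows: the program consults the index, never the pages themselves, and orders results by a relevance score. The toy index and the scoring rule here (count of matched query terms) are hypothetical simplifications; real ranking algorithms are, as noted earlier, secret and far more complex.

```python
# A toy inverted index: word -> pages containing it (made up for illustration).
index = {
    "widgets": {"/widgets.html", "/gadgets.html"},
    "cheap":   {"/widgets.html"},
    "gadgets": {"/gadgets.html"},
}

def search(query):
    """Return pages ranked by how many query terms they match,
    most relevant first, as the text describes."""
    scores = {}
    for term in query.lower().split():
        for url in index.get(term, set()):
            scores[url] = scores.get(url, 0) + 1
    return sorted(scores, key=lambda url: (-scores[url], url))

print(search("cheap widgets"))  # the page matching both terms ranks first
```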