Webmasters began optimizing websites for search engines in the 1990s; the phrase "search engine optimization" probably came into use around 1997, when search engines were becoming established on the web. Initially, optimization simply meant that webmasters submitted a page's URL (address) to the search engines. Each engine would send a spider (a program) to crawl the submitted page, extract the links it contained, and return the information found on it. In this process, the spider downloads a page and stores it on the search engine's own server; a second program, the indexer, then extracts data about the page, such as the words it contains, where they are located and the weight given to specific words, along with all the links the page contains, which are placed into a scheduler for crawling at a later date.
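To make the crawl step above concrete, here is a minimal sketch in Python, not how any particular search engine is implemented: it downloads a single page with the standard library and extracts the outgoing links that a scheduler would later queue for crawling. The example.com URL is only a placeholder.

# Minimal sketch of the crawl step described above: fetch one page and
# collect the links it points to. A real spider also needs politeness
# delays, deduplication and a scheduler.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags while parsing HTML."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page URL.
                    self.links.append(urljoin(self.base_url, value))


def crawl_once(url):
    """Download a single page and return (html, outgoing links)."""
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    parser = LinkExtractor(url)
    parser.feed(html)
    return html, parser.links


if __name__ == "__main__":
    page, links = crawl_once("https://example.com/")  # placeholder URL
    print(f"fetched {len(page)} characters, found {len(links)} links")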
Almost anything people build eventually gets abused, and SEO was no exception. As time passed, webmasters' knowledge of SEO grew, and some of them began manipulating the process to obtain high rankings. This happened because site owners started to recognize the value of having their sites ranked highly and visible in search engine results, which created opportunities for both white hat and black hat SEO practitioners, and competition then grew at an alarming rate as more practitioners entered the field.
As a preventive measure, the ways in which manipulation occurred were identified and addressed. Some were fixed and some were not, so the whole process was reworked in far more complicated ways to make manipulation harder, and sites found guilty of it were removed from the search engines' databases.
To provide better results to their users, search engines had to adapt so that their results pages showed the most relevant results rather than falling prey to manipulated sites. The success and reputation of a search engine depend on its ability to produce the most relevant results for any given search; if the results were irrelevant, users would simply move to a better search engine, since competition in this field never went away. To ensure this, search engines developed more complex ranking algorithms that took into account additional factors which were more difficult for anyone to manipulate.
By 2004, search engines had incorporated a wide range of undisclosed factors into their ranking algorithms to diminish the impact of manipulation.
More recently, search engines have started personalizing search results for each user. This was a milestone in the history of SEO, because a website's generic ranking means much less when results are personalized for each searcher. Personalized results draw mainly on the user's previous searches and several other factors; Google says it now uses more than 200 such factors to produce its search results.
Indexing
Leading search engines such as Google and Yahoo! use crawlers to find pages for their algorithmic search results. Pages that are linked from pages already indexed by a search engine are found automatically and do not need to be submitted separately. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or a cost per click; such programs do not guarantee any specific ranking, but they usually guarantee inclusion in the index. The two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review. Google, by contrast, offers Google Webmaster Tools, through which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, including pages that are not discoverable by automatically following links. Not every page is indexed by the search engines, and the distance of a page from the root directory of a site can be a factor in whether or not it gets crawled.
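As an illustration of the sitemap idea, the following sketch builds a minimal sitemap.xml with Python's standard library; the example.com URLs are placeholders, and the namespace is the one defined by the public Sitemap protocol. The resulting file would then be submitted through the search engine's webmaster tooling.

# Minimal sketch of building the kind of XML Sitemap mentioned above,
# using only the standard library. The URLs listed are placeholders.
import xml.etree.ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls, path="sitemap.xml"):
    """Write a sitemap.xml listing the given page URLs."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for page in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = page
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

build_sitemap([
    "https://example.com/",
    "https://example.com/about",      # pages not reachable by links
    "https://example.com/archive/1",  # can still be listed here
])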
Crawling Prevention
To keep undesirable content out of the search indexes, webmasters can instruct spiders not to crawl certain files or directories through the standard robots.txt file in the root directory of the domain. In addition, an individual page can be explicitly excluded from indexing with a robots-specific meta tag. As a rule, the robots.txt file in the root directory is the first file crawled when a search engine visits a site; it is then parsed, and its directives tell the robot which of the other pages it may crawl. Because a crawler may keep a cached copy of this file, it can occasionally crawl pages the webmaster did not want crawled. Pages typically prevented from being crawled include login-specific pages such as shopping carts and user-specific content such as results from internal searches; internal search results pages in particular are usually excluded from indexing because they are considered search spam.
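The following sketch shows how a well-behaved crawler consults robots.txt before fetching a page, using Python's standard urllib.robotparser; the domain, paths and the sample robots.txt rules in the comments are placeholders chosen for illustration. A page-level alternative, as mentioned above, is a robots meta tag such as <meta name="robots" content="noindex">.

# Minimal sketch of a crawler checking robots.txt before fetching pages.
# A real robots.txt might contain, for example:
#
#   User-agent: *
#   Disallow: /cart/
#   Disallow: /search
#
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")  # placeholder domain
robots.read()  # fetch and parse the file once, then keep it cached

for path in ("/products/widget", "/cart/checkout", "/search?q=widgets"):
    allowed = robots.can_fetch("MyCrawler", "https://example.com" + path)
    print(path, "-> crawl" if allowed else "-> skip")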
Escalating Importance
A webpage can be made to show up more prominently in the search results by employing a number of other methods. These include:
Making cross links between pages of the same website, and pointing more links at the main pages of the site, increases the PageRank used by the search engines. Acquiring links from other websites, including through comment spamming and link farming, can also prove effective, although these are manipulative techniques. One of the better methods is to write content around frequently searched keywords and phrases, which makes it more relevant to common search queries. URL normalization for web pages accessible via multiple URLs, together with the "canonical" link element, and keyword stuffing are other frequently used methods; a sketch of URL normalization follows below.
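The sketch below illustrates URL normalization: several addresses that serve the same page are collapsed into one canonical form, which is also the URL a "canonical" link element would declare. The normalization rules shown (lowercase host, dropped default port, dropped fragment, sorted query parameters, trimmed trailing slash) are illustrative choices rather than a universal standard, and the URLs are placeholders.

# Minimal sketch of URL normalization using the standard library.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url):
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    host = parts.hostname.lower() if parts.hostname else ""
    # Keep the port only if it is not the default for the scheme.
    if parts.port and not (
        (scheme == "http" and parts.port == 80)
        or (scheme == "https" and parts.port == 443)
    ):
        host = f"{host}:{parts.port}"
    path = parts.path.rstrip("/") or "/"
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit((scheme, host, path, query, ""))  # fragment dropped

# All three placeholder variants map to the same canonical URL.
for u in (
    "HTTP://Example.com:80/Widgets/?b=2&a=1",
    "http://example.com/Widgets?a=1&b=2#reviews",
    "http://example.com/Widgets/?a=1&b=2",
):
    print(normalize(u))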
Nowadays the practice of keeping our own stores of long lists of web addresses has almost withered away, and the credit largely goes to the advent of search engines. A decade ago, however, the reliability of these search engines was questioned, mainly because of their inability to prioritize sites according to what users actually wanted. A specific discipline emerged to meet this need, termed search engine optimization and commonly abbreviated SEO. Search engine optimization is the process of improving the quality of traffic that a site receives from search engines through organic, or unpaid, results. SEO can be applied to all kinds of searches, including image search, local search and others.
The term SEO was coined in the late 1990s, and the practice gained momentum with the rise of major players such as Google, Yahoo! and, later, Microsoft's Bing. The basic purpose of optimization is both to increase a site's relevance to specific keywords and to remove barriers to the indexing activities of search engines; regardless of language, the fundamental elements of search optimization are essentially the same. Search engines, for their part, keep developing more complex ranking algorithms that consider additional factors which are more difficult for webmasters to manipulate. The leading search engines use their own crawlers, and these crawlers take many factors into account when crawling a site; the distance of a page from the root directory of a site may also be a factor in deciding whether or not it gets crawled. Not every page is indexed by the search engines, but pages that are linked from other pages already indexed by a search engine are found automatically.
Typically, if a site appears at the top of a search results list, it will easily attract more visitors. Site owners therefore hire search engine optimizers, known as SEOs, who improve the site and its standing with search engines; in this way an SEO can ensure that a site is designed to be search-engine friendly. However, some unethical SEOs give the industry a black eye through aggressive marketing strategies and attempts to manipulate search engine results in unfair ways, and some inexperienced SEOs stuff pages with keywords simply to gain priority. Practices that violate the search engines' webmaster guidelines may result in a negative adjustment of the site's presence in search results, or even in the removal of its listings from their databases altogether. Penalties can be applied either automatically by the search engines' algorithms or through a manual site review.
Because of the high marketing value of targeted search results, there is potential for an adversarial relationship between search engines and those promoting popular sites, which can push irrelevant results onto users. Since the success and popularity of a search engine are determined by its ability to produce the most relevant results for any given search, tolerating such results would be fatal to its usage. Search engines must therefore adopt tactics that eliminate spamdexing as far as possible.
Many online forums and blogs are persistently working to close the loopholes that prevent proper prioritization of sites. The remarkable change that SEO has brought to the field of search therefore requires constant innovation and constant validation of search results to maintain its prominence.
Hello friends, this is the way to take a picture from an Orkut album and save it on your PC.
The procedure is:
Step 1: Open the photo you want to save.
Step 2: Bring the photo to the middle of your computer screen.
Step 3: Press the Print Screen (PrtScn/SysRq) key, which is above the Insert key.
Step 4: Open the paint software (MS Paint).
Step 5: Press Ctrl+V (i.e. Paste).
Step 6: The whole picture of your computer screen appears on the page; select just the photo and copy it.
Step 7: Clear the Paint document, paste the selection, and save it as a JPG.
In this way you can capture any picture from your computer screen.
Thank you!