Monday, July 25, 2011

How Search Engines Use Links

The search engines use links primarily to discover web pages and to count those links as votes for the pages they point to. But how do they use this information once they acquire it? Let's take a look:

Index inclusion

Search engines need to decide which pages to include in their index. Crawling the Web (following links) is one way they discover pages; the other is through the use of XML Sitemap files. In addition, the search engines do not include pages that they deem to be of low value, because cluttering their index with those pages would not lead to a good experience for their users. The cumulative link value, or link juice, of a page is a factor in making that decision.

Crawl rate/frequency

Search engine spiders go out and crawl a portion of the Web every day. This is no small task, and it starts with deciding where to begin and where to go. Google has publicly indicated that it starts its crawl in PageRank order. In other words, it crawls PageRank 10 sites first, PageRank 9 sites next, and so on. Higher PageRank sites also get crawled more deeply than other sites. It is likely that other search engines start their crawl with the most important sites first as well. This would make sense, because changes on the most important sites are the ones the search engines want to discover first. In addition, if a very important site links to a new resource for the first time, the search engines tend to place a lot of trust in that link and want to factor the new link (vote) into their algorithms quickly.

Ranking
 
Links play a critical role in ranking. For example, consider two sites where the on-page content is equally relevant to a given topic. Perhaps they are the shopping sites Amazon.com and (the less popular) JoesShoppingSite.com. The search engine needs a way to decide who comes out on top: Amazon or Joe. This is where links come in. Links cast the deciding vote: if more sites, and more important sites, link to Amazon than to Joe, Amazon must be more important, so Amazon wins.

Thursday, July 21, 2011

Keyword Targeting

The search engines face a tough task: based on a few words in a query, sometimes only one, they must return a list of relevant results, order them by measures of importance, and hope that the searcher finds what he or she is seeking. As a website creator or web content publisher, you can make this process massively simpler for the search engines, and in turn benefit from the enormous traffic they send, by employing the same terms users search for in prominent positions on your pages.

Keyword targeting has long been a critical part of search engine optimization, and although other metrics (such as links) have a great deal of value in the search rankings, keyword usage is still at the core of targeting search traffic.

The first step in the keyword targeting process is uncovering popular terms and phrases that searchers regularly use to find the content, products, or services your site offers. There's an art and a science to this process, but it consistently begins with a list of keywords to target. Once you have that list, you'll need to incorporate those keywords into your pages. In the early days of SEO, the process involved stuffing keywords repetitively into every HTML tag possible. Now, keyword relevance is much more closely aligned with the usability of a page from a human perspective.

Since links and other factors make up a significant portion of the search engines’ algorithms, they no longer rank pages with 61 instances of “free credit report” above pages that contain only 60. In fact, keyword stuffing, as it is known in the SEO world, can actually get your pages devalued via search engine penalties. The engines don’t like to be manipulated, and they recognize keyword stuffing as a disingenuous tactic.

Keyword usage includes creating titles, headlines, and content designed to appeal to searchers in the results (and entice clicks), as well as building relevance for search engines to improve your rankings. Building a search-friendly site requires that the keywords searchers use to find content are prominently employed.
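As a rough illustration of what prominent keyword placement can look like, here is a minimal HTML sketch for a hypothetical page targeting the phrase "free credit report" (the title, description, and copy are invented for this example):

<html>
<head>
<!-- The target phrase appears in the title tag, one of the most visible on-page elements in the search results -->
<title>Free Credit Report - How to Get Yours Online</title>
<!-- The meta description does not directly drive rankings, but a relevant one can improve click-through from the results page -->
<meta name="description" content="Learn how to request a free credit report and what to check when it arrives." />
</head>
<body>
<!-- The main headline reinforces the same phrase for both users and search engines -->
<h1>How to Get a Free Credit Report</h1>
<p>Requesting a free credit report online takes only a few minutes...</p>
</body>
</html>

The idea is simply that the phrase searchers actually type shows up naturally in the page's most prominent elements, not that it is repeated mechanically.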

XML Sitemap Guide

Google, Yahoo!, and Microsoft all support a protocol known as XML Sitemaps. Google first announced it in 2005, and then Yahoo! and Microsoft agreed to support the protocol in 2006. Using the Sitemaps protocol you can supply the search engines with a list of all the URLs you would like them to crawl and index.

Adding a URL to a Sitemap file does not guarantee that a URL will be crawled or indexed. However, it can result in pages that are not otherwise discovered or indexed by the search engine getting crawled and indexed. In addition, Sitemaps appear to help pages that have been relegated to Google’s supplemental index make their way into the main index.

This program is a complement to, not a replacement for, the search engines' normal, link-based crawl. The benefits of Sitemaps include the following:
  • For the pages the search engines already know about through their regular spidering, they use the metadata you supply, such as the last date the content was modified (lastmod date) and the frequency at which the page is changed (changefreq), to improve how they crawl your site.
  • For the pages they don’t know about, they use the additional URLs you supply to increase their crawl coverage.
  • For URLs that may have duplicates, the engines can use the XML Sitemaps data to help choose a canonical version.
  • Verification/registration of XML Sitemaps may indicate positive trust/authority signals.
  • The crawling/inclusion benefits of Sitemaps may have second-order positive effects, such as improved rankings or greater internal link popularity.

The Google engineer who in online forums goes by GoogleGuy (a.k.a. Matt Cutts, the head of Google's webspam team) has explained Google Sitemaps in the following way: "Imagine if you have pages A, B, and C on your site. We find pages A and B through our normal web crawl of your links. Then you build a Sitemap and list the pages B and C. Now there's a chance (but not a promise) that we'll crawl page C. We won't drop page A just because you didn't list it in your Sitemap. And just because you listed a page that we didn't know about doesn't guarantee that we'll crawl it. But if for some reason we didn't see any links to C, or maybe we knew about page C but the URL was rejected for having too many parameters or some other reason, now there's a chance that we'll crawl that page C."

Sitemaps use a simple XML format that you can learn about at http://www.sitemaps.org. XML Sitemaps are a useful and in some cases essential tool for your website. In particular, if you have reason to believe that the site is not fully indexed, an XML Sitemap can help you increase the number of indexed pages. As sites grow in size, the value of XML Sitemap files tends to increase dramatically, as additional traffic flows to the newly included URLs.
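For reference, here is a minimal Sitemap file for a hypothetical site (the URL and values are placeholders). It lists a single URL along with the optional lastmod, changefreq, and priority fields mentioned above:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- The page you want crawled and indexed -->
    <loc>http://www.example.com/products/blue-widgets</loc>
    <!-- Optional hints: when the page last changed and how often it typically changes -->
    <lastmod>2011-07-20</lastmod>
    <changefreq>weekly</changefreq>
    <!-- Relative importance within your own site, from 0.0 to 1.0 -->
    <priority>0.8</priority>
  </url>
</urlset>

The file is typically placed at the root of the site (for example, http://www.example.com/sitemap.xml) and can be submitted through each engine's webmaster tools or referenced with a Sitemap: line in your robots.txt file.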

Wednesday, July 13, 2011

Now What The Hell Is Keyword Research?!

OK, now you know a little about SEO, and you have set up your domain/blog and started adding content. But now you are wondering how you can choose the best keywords for your website/blog, keywords that are relevant to your content and will help you rank higher in Google search. Well, this is not as hard as you might think. There are a lot of keyword research tools available on the internet, and the great news is that most of them are free. Let's take a brief overview of what keyword research is and how you are going to choose your keywords out of an almost unlimited pool of words.

Keyword research is the core of any SEO campaign, since the keywords you pick during the research will be used in your website copy, your PPC campaigns, and any other website promotion campaigns. In a sense, keyword research is similar to customer research, because you are studying what words your potential clients use when searching for your service or product.

When starting your keyword research, you'll need to pick the main keywords to base your research on. These will be the keywords that you, your customers, and your competitors chiefly use when speaking of your service or product. Valid keyword candidates also include synonyms and variations that differ in one aspect or another; view them as the directions your website's SEO will take. Naturally, you'll need to use quite a variety of sources for your keywords. The most common tools used for keyword research are:

  • Google Keyword Tool
  • Wordtracker
  • Sitepoint
  • Keyword Discovery
And many others, so you can choose the one that suits you best. Remember that a keyword tool will give you an idea of how often a keyword is being searched for and what the alternatives are. You can take the basic ideas from there and build a keyword list that best describes your blog/website. It is also good practice to use synonyms.

Tuesday, July 5, 2011

Seo Basics Overview

Google is a name that almost everyone in the world knows, and they should: it is the world's most widely used search engine. Many people throw up a website for different reasons and believe that is the end of it. But the internet contains so much information that it would take a person several lifetimes to read even a fraction of it, so it needs the best organization possible. That task has been taken on by search engines, the largest of which are Google, Yahoo!, and Bing. Getting your site in front of viewers all comes down to how well it is optimized. SEO, or search engine optimization, is ideal for people who are trying to get visitors to their site and don't want to pay outrageous prices to get them. It's a sad truth, but when it comes to SEO advice you need to be very careful about who you listen to, because there is so much damaging information out there. This article will discuss some of the common mistakes that occur when you're trying to optimize your site.

SEO takes time to work, but once it does it is almost self-sustaining. Patience is a must for SEO. Of course, once you've done the initial work, the payoff is lots of traffic. Everybody wants to be on the first page for a commercial keyword that is in demand, but if you are not ready to build links, create unique content, and put in the time to get everything in place, then you can simply forget about getting ranked for any keyword.

The speed at which your site loads is also a factor, not only in your popularity with visitors, but also in how the search engines view your site. If your site is overloaded with messy, convoluted code, it will load more slowly in a visitor's browser, which is something Google doesn't like. Where possible, try to avoid inline styling on your pages and use an external CSS file instead. This small change can speed up your loading times and make it much easier to update your site later. Remember that search engines aim to offer their own users the best possible results, so your aim is to find ways to appeal to what the search engines want. In essence, this means your site needs to match what your visitors are looking for and also fit within what the search engines are looking for.
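To illustrate the point above about external CSS, here is a minimal sketch (the file name and styles are just examples):

<!-- Instead of repeating inline styles on every element like this: -->
<p style="font-size: 14px; color: #333333;">Welcome to the site...</p>

<!-- ...define the styles once in an external file (for example, /css/style.css) and link it from the <head> of every page: -->
<link rel="stylesheet" type="text/css" href="/css/style.css" />

The browser caches the external file, so repeat page views load faster, and a site-wide style change only needs to be made in one place.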

Getting incoming links too quickly is also a mistake, as it will send a red flag to the search engines and they'll penalize you for link spamming. A major myth going around right now is that your new site will catapult to the number one spot on the first page if your link is mass distributed across the web. But it doesn't work that way, as search engines like natural link building. Are you willing to get your brand new site sandboxed and de-indexed because you didn't have the patience to build your links naturally over time? The big secret is to build links slowly and steadily.

See, SEO isn't as complicated as it seems: just stick to the basics and you will be doing better than most. The more you practice optimizing your site, the better results you will get. So, now you have it: quality backlinks, internally linking related pages, on-site optimization, and patience.

Monday, July 4, 2011

Social Bookmarking

Social bookmarking web sites offer users convenient remote storage of their bookmarks for access from any location. Examples of these sites include del.icio.us, Digg, Reddit, and so on. These sites usually allow bookmarks to be kept private, but many users choose to leave them public. When a particular web page is publicly bookmarked by many users, that is a major positive contributing factor in the ranking algorithm of the search function on a social bookmarking site. Ranking well in these searches presents another great source of organic traffic. Furthermore, if a web page is bookmarked by a large number of people, it may earn a front-page placement on such a site, which usually results in a landslide of traffic.

Many blogs present links to streamline the process of bookmarking a page. As is typical when you make any desired action easier for a web site user, this may increase the number of bookmarks a page on your web site receives, and it is a very common SEO practice that produces effective results.

These little icons make it easy for people browsing a web site to do some free marketing for you, in case they like the content at that particular URL and want to bookmark it. There are many free services on the web, such as AddThis (a personal favorite), that let you add the icons of almost all the social bookmarking websites on the internet.
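As a minimal sketch of what a per-post bookmarking link can look like, here is a hypothetical example. The submit URL below is a placeholder; you would substitute each service's actual bookmark/submit endpoint (or simply drop in a widget such as AddThis) and URL-encode your page address and title:

<!-- A hand-rolled bookmarking link; the endpoint and page URL are placeholders -->
<a href="http://bookmarking-service.example/submit?url=http%3A%2F%2Fwww.example.com%2Fmy-post&title=My+Post">
Bookmark this post
</a>

A widget-based approach generates this kind of markup for dozens of services at once, which is why it is the more common choice.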

Using the Robots Meta Tag

Using the robots meta tag, you can exclude any HTML-based content from a web site on a page-by-page basis. It is frequently the easier method to use when eliminating duplicate content from a preexisting site for which the source code is available, or when a site contains many complex dynamic URLs.

To exclude a page with meta-exclusion, simply place the following code in the <head> section of the HTML document you want to exclude:
<meta name="robots" content="noindex, nofollow" />
This indicates that the page should not be indexed (noindex) and that none of the links on the page should be followed (nofollow). It is relatively easy to apply some simple programming logic to decide whether or not to include such a meta tag on the pages of your site. This approach is always applicable as long as you have access to the source code of the application, whereas robots.txt exclusion may be difficult or even impossible to apply in certain cases.
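For comparison, robots.txt exclusion works at the directory or path level rather than per page. A minimal sketch, with made-up paths, looks like this:

# Placed at the root of the site as /robots.txt; the paths below are examples only
User-agent: *
Disallow: /print-versions/
Disallow: /duplicate-archive/

Note that robots.txt blocks crawling rather than indexing: a blocked URL can still show up in the index as a bare listing if other sites link to it, which is one more reason the meta tag approach is often preferable for duplicate content.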

To exclude a specific spider, change "robots" to the name of the spider, for example googlebot, msnbot, or slurp. To exclude multiple spiders, you can use multiple meta tags. For example, to exclude googlebot and msnbot:

<meta name="googlebot" content="noindex, nofollow" />
<meta name="msnbot" content="noindex, nofollow" />

The only downside is that the search engine must fetch the page before it can determine that the page should not be indexed, which consumes crawl resources and is likely to slow down indexing.