Request a Quotation

How does Google determine which real-time results will appear? It has to filter through an unimaginable volume of information and make a decision in mere seconds. This is a billion-dollar question on the minds of search marketers around the globe. Indeed, it was so top of mind that it was probed, in less-than-subtle ways, throughout the conference yesterday.

As expected, the Google representatives at the Search Event 2009 press conference easily danced around related questions, but there were a few nuggets worth taking away and mentioning. Essentially, Google has applied a new kind of algorithmic indicator that Marissa Mayer subtly called an Update Rank.

Here is exactly what she said about real-time data:

“… authoritativeness exists there as well and there are signals there that indicate it. So for example, retweets and replies and the structure of how the people in that ecosystem relate to each other. You can actually use some of our learnings from PageRank in order to develop a, say, a Updates Rank, or an Updater Rank for the specific people who are posting. So this is something we are beginning to experiment with but it is interesting to see that same parallel where PageRank looks at links you can actually look at the very mechanisms inside of these update streams and sense the authoritativeness the same way.”

So, based on Marissa’s words and the other takeaways from the conference, here is my first draft of what real-time results are based on: Read more…
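Mayer’s analogy is easy to make concrete: treat a retweet the way PageRank treats a link, and run the same kind of iteration over the people doing the retweeting. The sketch below is purely my own illustration of that analogy; the graph, the damping factor, and the iteration count are hypothetical choices, not anything Google has disclosed.

```python
# Toy "Updater Rank": PageRank-style scoring over a retweet graph
# instead of a link graph. Illustrative only.

def updater_rank(retweets, damping=0.85, iterations=50):
    """retweets maps each user to the users whose posts they retweet.
    A retweet is treated like a link: it passes authority to its target."""
    users = set(retweets)
    for targets in retweets.values():
        users.update(targets)
    n = len(users)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iterations):
        # Every user starts each round with the base (teleport) share.
        new_rank = {u: (1.0 - damping) / n for u in users}
        for user, targets in retweets.items():
            if targets:
                # Split this user's authority evenly among retweet targets.
                share = damping * rank[user] / len(targets)
                for t in targets:
                    new_rank[t] += share
        # Users who retweet no one spread their rank evenly (dangling nodes).
        for user in users:
            if not retweets.get(user):
                for u in users:
                    new_rank[u] += damping * rank[user] / n
        rank = new_rank
    return rank
```

Run on a tiny graph where two users both retweet a third, the retweeted account ends up with the highest score, which is exactly the “authoritativeness” signal Mayer describes.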

Loren Baker announced that Matt Cutts had confirmed the PageRank update that happened last week was ‘primarily’ a response to link selling. No additional information was provided except that Google would be continuing to look “at additional sites that appear to be buying or selling PageRank.”

If your site’s PageRank was damaged by this update, Loren suggests taking a close look at your site and ironing out any wrinkles before asking for reconsideration via Google’s Webmaster Tools. Be CERTAIN that your website is wrinkle-free; otherwise you may land yourself in hotter water by asking Google to give your site special attention.

Remember that it is always wise to wait for the fallout to clear before making any considerable changes to your website in response to a search engine update. Often we find that when search results stabilize many falsely affected websites are automatically reinstated – at least partially. Here’s hoping that the same happens this time around.

Google has upgraded its PageRank algorithm and it has negatively affected a whole host of popular websites.

What does this mean for you? First, I should note that no one REALLY knows why these sites have been affected. It is possible these PageRanks were reduced for another reason, but with all of Google’s righteous posturing over the past year about purchased links, I think it is a fair assumption that paid link advertising is the culprit. With that in mind, if you have a high number of reciprocal or paid links on your website, it is quite possible you will also see a drop in PageRank soon. You should also ensure your site does not link out to websites with a vastly lower reputation – penalty by association. Read more…

Wednesday, January 24th, 2007

Wikipedia Links Useless for SEO

As reported in Search Engine Journal, in an attempt to eliminate spamming of Wikipedia, effective immediately all outbound links from the internet giant will carry the “nofollow” attribute (rel=”nofollow”). The attribute was introduced a while back to let webmasters tell the major search engines to ignore a specific link. When Google sees it, the outbound link is passed by as if it were regular text.

What does this mean for site owners? If you have links pointing in from Wikipedia, their value will be lost, at least in terms of helping your SEO campaigns. Links come and go all the time, but losing a Wikipedia link is a big deal: it is a highly regarded site in the eyes of the search engines, and its credibility with Google means such a link carries significant ranking value. For small sites with few links and good rankings, the loss of a Wikipedia link could have a significant impact on rankings.
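To make the mechanics concrete, here is a minimal sketch of how a crawler might honour the “nofollow” hint: links carrying rel=”nofollow” are still parsed, but excluded from the set that passes ranking credit. The class and variable names are my own; it uses only the Python standard library.

```python
# Sketch: separating "followed" links from "nofollowed" ones the way a
# search engine spider might. Illustrative, not any engine's real code.
from html.parser import HTMLParser

class NofollowAwareParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.followed = []    # links that may pass ranking value
        self.nofollowed = []  # links treated as if they were plain text

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        # rel is a space-separated list of link types; check for "nofollow".
        rel = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rel:
            self.nofollowed.append(href)
        else:
            self.followed.append(href)

parser = NofollowAwareParser()
parser.feed('<a href="https://example.org">counted</a>'
            '<a rel="nofollow" href="https://example.com">ignored</a>')
```

After Wikipedia’s change, every one of its outbound links lands in the second bucket, which is the whole point of the announcement.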

Internet marketing consultant and blogger Andy Beal is not taking this sitting down and has launched a campaign to reduce Wikipedia’s PageRank to zero. To dispute the decision, he suggests that all webmasters with links pointing at Wikipedia add the “nofollow” attribute themselves, giving Wikipedia a taste of its own medicine. Beal does go on to mention that his site has no incoming links from Wikipedia and that the campaign is based entirely on principle.

Wikipedia was made popular by the vast number of incoming links it has gained over the years, and if enough linking webmasters added the “nofollow” attribute, its popularity would ultimately drop. Currently Wikipedia’s English home page has more than 1.5 million incoming links noted by Google. It would take an incredible feat for that popularity to decline as a result of “nofollow” tags, but it is still within the realm of possibility.

A great new search option has also been released for webmasters which lets you see just how many links point to other websites from within your own. To use it, simply search using the following syntax without the quotes, replacing “” with your domain (do not use ‘www.’):

“” Read more…

A forum member on DigitalPoint discovered a piece of Google code when he tried to access a cached page from Google; it appeared as an error. The code is extremely interesting, if a bit obtuse: it mentions “Spam Scores” and “PageRank”, which, of course, perks up the ears of any SEO. Check it out and let me know what you think. At this time there is no telling how legitimate or useful it is, but I will keep you up to date if I hear anything. Read more…

Links are the primary arteries of the Internet, the underlying connectors between different places. Links are the transporters that take you everywhere on the web. You likely came to this space via a link and are as likely to follow one out again. Links keep you going online, hopefully to places you want or need to get to.

Google created the most successful information retrieval device of all time based on sending spiders to follow each and every link they can find on each and every web document they come across. Yahoo, MSN, Ask, and all the other search databases have acquired the vast amounts of information they contain in similar fashion. Links play important roles in the ranking formulas of all search engines, especially Google, by providing numerous pieces of data for their algorithms to chew through. Read more…
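That crawl pattern – start somewhere, follow every link you find, record every document you reach – is easy to sketch. The toy below walks an in-memory “web” (a dict mapping page names to the pages they link to) so it runs without a network; real spiders like GoogleBot obviously fetch and parse live pages, and the page names here are made up.

```python
# Toy spider: breadth-first traversal of a link graph, the basic pattern
# behind search engine crawling. Illustrative only.
from collections import deque

def crawl(web, start):
    """Follow every link reachable from `start`, visiting each page once.
    Returns pages in the order they were discovered."""
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        url = queue.popleft()
        order.append(url)  # a real spider would fetch and index here
        for link in web.get(url, []):
            if link not in seen:  # never re-crawl a page this pass
                seen.add(link)
                queue.append(link)
    return order

pages = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
}
```

Even this toy shows why links matter so much to the engines: a page no one links to simply never enters the queue, so it never gets indexed.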

Tuesday, February 28th, 2006

Florida's Effect on SEO Spam

A few years ago, in the days before the introduction of Google’s Florida Update, SEO was a quasi-Masonic vocation practiced by an expanding order of techno-monks who acted openly but held secret the minute details of their trade. As search engines and SEO techniques evolved, that old order was already dying, long before the discovery of Florida in November 2003.

I was reminded of the olden days yesterday when fielding a question over at one of the SEO forums I spend time in. One of the new members wrote in asking about the correct range of keyword densities for the various search engines. That got me thinking about many of the lesser-known tricks of the trade that have been used by search engine optimizers over the years, and how some of these “tricks” have been incorporated into our SEO practice while others have been roundly rejected by SEO practitioners. Read more…
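For anyone who never lived through that era: keyword density is just occurrences of a term divided by the total words on the page. A quick sketch of the arithmetic – the tokenizer here is my own simplification, and the tools of the day disagreed on exactly how to count:

```python
# Keyword density = occurrences of the keyword / total words.
# A simplified illustration of what the old density checkers computed.
import re

def keyword_density(text, keyword):
    # Crude tokenizer: lowercase runs of letters, digits, and apostrophes.
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = words.count(keyword.lower())
    return hits / len(words)
```

So a five-word snippet using the keyword twice scores 0.4, i.e. 40% – far beyond anything the old “correct range” folklore would have recommended, which is rather the point: the number was always trivial to compute and to game.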

Wednesday, February 15th, 2006

Are Google SERPs Entirely Organic?

Google has made another alteration to the Google Help Center, this time removing the assurance that Google’s results are completely automated. The change was first noted by Philipp Lenssen on his Google Blogoscoped blog.

As recently as February 2, the document outlining Google’s principles stated, “The order and contents of Google search results are completely automated. No one hand picks a particular result for a given search query, nor does Google ever insert jokes or send messages by changing the order of results.” Read more…

Spiders make great geek pets, at least virtual ones do. Here at StepForth, we keep a couple spiders on our system to test sites, pages and documents in the hopes of learning more about the behaviours of common search engine spiders such as GoogleBot, Yahoo’s Slurp and MSNBot. Read more…