A copyright lawsuit filed last year by Copiepresse over the publishing of articles, images, and links from Belgian newspaper websites will not only see the material removed, but will also cost Google in fines, M&C reported this morning.

Along with the removal order, Google also has a hefty fine to pay: $32,390 per day for every day the copyrighted material remained on Google. This retroactive total could be in excess of $4.7 million.

While Google argues it is simply sending traffic to the news sites, many of these Belgian sites charge for access to their articles and images after a certain date, yet that content still remains available in Google’s cache.

The date has not yet been set for Google’s appeal.

Friday, February 16th, 2007

MSN Launches Soapbox

If you are looking for an alternative to YouTube, MSN has officially launched the public beta of Soapbox. For several months it has been available on an invitation-only basis, but now the general public can go see what’s been happening.

The beta launch was announced yesterday in a very brief blog post by the “Soapbox team”.

With all the copyright controversy surrounding Google and YouTube, one has to wonder whether the same issues could arise with Soapbox, and what MSN has done, or will do, to help avoid them.

Friday, February 16th, 2007

Google Webmaster Tools out of Beta

Google’s Valentine’s gift to the more than one million webmasters who have already signed up is the removal of Webmaster Tools’ beta status. Vanessa Fox made the announcement early this morning on the Official Google Webmaster Central Blog.

“In addition to the many new features that we’ve provided, we’ve been making lots of improvements behind the scenes to ensure that webmaster tools are reliable, scalable, and secure.”

The official blog now also allows readers to post comments, a feature previously unavailable.

Tuesday, February 6th, 2007

A Few Questions and Answers

Online forums are always a great place to find bits and pieces of information. Below are variations of questions I have found posted in some popular SEO forums, along with my answers.

1 – Q. The last Google update saw a number of my site’s pages find their way into the supplemental index. Most of these pages have a near-duplicate “print this page” version. How do I fix this?

A. The answer is relatively straightforward, but it will most likely take some time to see any of these pages removed from the supplemental index. The best bet is to first block these printer-friendly versions from the search engines. You really don’t want these pages ranking well in the first place: they will likely have much, if not all, of your main site navigation removed, so having a visitor land on one of them would not be of much use anyway. There are a number of ways to block search engine spiders from viewing and indexing a page. Here are two of the most commonly used:


Robots Meta tag
Using the robots meta tag is very simple. To prevent search engine spiders from indexing a given page, add the following meta tag within your <head> section:

<meta name="robots" content="noindex, nofollow">

Robots.txt file

You can also use the robots.txt file to block spiders. If your printer-friendly pages are in a specific folder, you can use this code in your robots.txt file to block Googlebot:

User-Agent: Googlebot
Disallow: /printer-folder/
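
The example above applies only to Googlebot. If you want the same rule to apply to all compliant crawlers (Yahoo, MSN, and the rest), a wildcard user agent covers them in one shot, again assuming the printer-friendly pages live in a folder such as /printer-folder/:

User-agent: *
Disallow: /printer-folder/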

I also recommend adding the rel="nofollow" attribute to all links that point to the printer-friendly versions. This tells the spiders to ignore the link, which will not only help prevent the printer-friendly page from being indexed, it will also slightly reduce the PageRank leak. Even if you use this method, I still highly recommend using one of the two blocking methods above to ensure that these pages do not end up indexed.
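
For example, with a hypothetical printer-friendly URL, such a link would look like this:

<a href="/printer-folder/article-print.html" rel="nofollow">Print this page</a>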

Ultimately, assuming that the original HTML version of these pages has substantial original content, you should start to see the supplemental status stripped away. Blocking the spiders should also help prevent future printer-friendly pages from causing you more grief.

Taking these steps may help, but nothing is guaranteed. The reason pages become supplemental is essentially that you have other pages on your site that are better suited for the related rankings. If you have two pages about a specific topic, and page A is highly targeted while page B is only loosely targeted, then you stand the chance of page B becoming supplemental. Add original content and work on increasing links to the affected page to help with the supplemental issue.

2 – Q. Does Google use WHOIS data to help eliminate spam from webmasters who run dozens, or even hundreds, of sites?

A. Google certainly has the ability to read through WHOIS records and flag multiple sites with the same owner. While it has yet to be proven that Google uses WHOIS data to connect spam websites, it is certainly within the realm of possibility, and if they do not use it today, they will likely use it in the future.

It is also known that a site’s age can help in terms of rankings. Where does Google get this age? It could be either the day the site was first indexed or the registration date in the WHOIS data. The longer a site has been online, the better its chances of ranking well, assuming of course that a number of other factors such as links and relevancy hold true.

We have seen examples where registering a new domain for no less than two years can (sometimes) help reduce the time spent in the “sandbox”, as it signals to Google that the site is less likely to be spam. Keep in mind, of course, that a two-year-plus registration is not enough on its own.

3 – Q. How do I get my site indexed by Google, Yahoo, and MSN? Should I regularly submit my URL?

A. While the answer to this question is fairly simple, it is surprising how many do not have a clear answer. I see this and similar questions in the forums quite often and thought it was pertinent to mention it here.

First things first: do NOT regularly submit your site to the engines. When it comes down to it, this is something you will never need to do (nor should you ever pay anyone to do it for you). There is only one instance where a submission to the major engines is okay, and that is after the official launch of a brand new site on a brand new domain.

Before you submit your site, check to make sure it is not already indexed. You may be surprised how quickly the major engines can find you. If you are not indexed, then one free site submission is fine. After you have made this submission, forget the option even exists; you will never need to do it again.

A one-time submission will typically get the site indexed in time, but the best way to have your site not only indexed but also ranked is to work on your incoming links and to consider creating and submitting an XML sitemap. Google, Yahoo, and MSN are good at finding sites, and they will index you on their own even if you have only a few inbound links.
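
To give an idea of what that involves, here is a minimal XML sitemap using a hypothetical URL and dates; save it as sitemap.xml in your site root and add an extra <url> entry for each additional page you want listed:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2007-02-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>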

Wednesday, January 24th, 2007

Wikipedia Links Useless for SEO

As reported in Search Engine Journal, in an attempt to eliminate spamming of Wikipedia, effective immediately all outbound links from the internet giant will carry the “nofollow” attribute. The “nofollow” attribute was introduced a while back so webmasters could tell the major search engines to ignore a specific link. When Google sees it, the outbound link is passed over as if it were regular text.

What does this mean for site owners? If you have links pointing in from Wikipedia, they will be lost, at least in terms of helping your SEO campaigns. Links come and go all the time, but losing a Wikipedia link is a big deal: it is a highly regarded site in the eyes of the search engines, and its credibility with Google means such a link carried significant ranking value. For small sites with few links and good rankings, the loss of a Wikipedia link could have a significant impact.

Internet marketing consultant and blogger Andy Beal is not taking this sitting down and has launched a campaign in an attempt to reduce Wikipedia’s PageRank to zero. He suggests that, to dispute the decision, all webmasters who link to Wikipedia add the “nofollow” attribute to those links themselves, giving Wikipedia a taste of its own medicine. Beal does mention that his site has no incoming links from Wikipedia and that his campaign is based entirely on principle.

Wikipedia became popular thanks to the vast number of incoming links it has gained over the years, and if enough linking webmasters added the “nofollow” attribute, its rankings would ultimately drop. Currently Wikipedia’s English home page has more than 1.5 million incoming links reported by Google. It would take an incredible feat for its popularity to decline as a result of “nofollow” attributes, but it is still within the realm of possibility.

We’ve known it was coming, as Yahoo has been talking about it for a while now, but the new ranking model will be officially unleashed on February 5, 2007, according to Yahoo’s mass-mailed newsletter.

In the good old days things were very simple: if you wanted your ad to rank well, you simply bid more money than the next guy. That was it. This did cause the occasional bidding war between advertisers, resulting in skyrocketing per-click prices, but the concept was simple.

Following in Google’s footsteps, as of early next month rankings will be determined by more than just your bid. Rankings will continue to reflect your bid amount, but only to a degree. They will now also incorporate ad quality, determined by your historical click-through rate (CTR) combined with a number of other algorithmic factors that examine items such as your actual ad copy and your competitors’ ad copy. This combination results in a “quality index” that will be used to help sort the ads.

It is important to note how this will affect your cost-per-click charges. Historically you paid only one cent more than your next closest competitor. Under the new system you will pay one cent more than the amount required to hold your current position in the search results (based on the combination of your bid amount and your quality index score). Ultimately, increasing your quality index score can result in a lower cost per click while maintaining the same ranking position. There is now a greater chance that you could end up paying closer to your maximum bid, so be sure you are comfortable with whatever figures you enter. For more information on the effects of your maximum bid, visit Yahoo Search Marketing Help.
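
To illustrate with purely hypothetical numbers: say your maximum bid is $1.00 and the next advertiser bids $0.85. Under the old model you would pay $0.86, one cent above that competitor’s bid. Under the new model, if your stronger quality index means only $0.62 is required to hold your current position, you would pay $0.63 instead; if your quality index is weak, the amount required could climb much closer to your $1.00 maximum.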

Wednesday, January 24th, 2007

Google Germany Hijacked?

According to IBN live, German online news service Heise reported that an unfinished website belonging to a client of Goneo, a small hosting company in western Germany, “crashed quickly after an avalanche of Web surfers”.

It seems another Goneo client had used an automated ordering process to gain control of the Google.de domain on Monday evening, resulting in searchers being redirected. Google.de was redirected for around 12 hours before the issue was resolved.

Goneo’s chief executive, Marc Keilwerth, apologized and said all future applications would be checked.

As an SEO I am asked a number of questions covering a broad range of SEO-related topics, and one question in particular comes up quite often. Its answers, when ignored, could see a once well-ranked website spiral into the depths of the search engine rankings for good.

“I am in the process of redesigning my site, what should I look out for in order to maintain the SEO (and rankings)?” Read more…

Wednesday, December 20th, 2006

An End to Google Search API

Earlier in December Google cut off access to its SOAP API for new customers. This move has concerned many developers and webmasters.

The Google Search API was still in beta and was designed to allow developers to create programs that perform a Google search using SOAP (Simple Object Access Protocol).
At code.google.com the site notes:

“As of December 5, 2006, we are no longer issuing new API keys for the SOAP Search API. Developers with existing SOAP Search API keys will not be affected.” Read more…

Wednesday, December 20th, 2006

Google / NASA Partnership

A research and development partnership between NASA and Google was announced back in September of 2005, and now, more than a year later, they are just about ready to collaborate.

Through their formal agreement, the two organizations will work together to study a number of issues, from scientific-data search technology to expanding Google Earth to the moon and Mars.

According to CNET, “the first collaboration between Google and NASA Ames will concern the availability of NASA information over the Internet. For example, NASA Administrator Michael Griffin said in a statement that “soon” there will be Google Earth flyovers available for the surfaces of Mars and the moon. Additional data will include real-time weather forecasting and visualization, as well as tracking of the International Space Station and space shuttle activity.”

Not all projects will involve incorporating NASA information into Google; some will be research-based efforts in areas such as human-computer interaction and education-related collaborations.