Link Baiting - Article Debunked
by Jerry West
Updated May 18, 2006
Okay, those of you who have read my stuff for years know that my biggest issue is people who believe information posted for free on the web, in forums and other areas, that has not been tested ... the RIGHT way.
I have seen some bad information posted AND believed as truth in the past, but this one takes the cake. It has been getting A LOT of press and attention over the last week, so I am here to debunk garbage test results and ridiculous claims once again.
My main issue here is that some veteran SEOs have believed this info themselves and have been passing it along to their readers as truth - further spreading what I call "Diseased SEO".
I am not here to bash anyone. Just the information. It is vital that before you go off and make changes to your site, you can actually validate that the information is true. Hopefully, the following will help many not get "suckered" into making changes that will ruin their site(s).
Most people think they know how to test, but they just don't. They put together a test and it has zero chance of succeeding. I don't profess to be the best SEO tester out there, but I at least know what the hell I am doing. First of all, props to Aaron Wall, who first came up with the term "Link Bait", and also to Andy Hagans, who first dubbed the article I am referencing as Link Bait. And, of course, to Mona, the Keyword Mod who pinged me on this issue over the weekend. I'll shut up now ... read on...
My comments will be in BOLD and INDENTED throughout the article.
Valid HTML - Does Google Care?
by Mark Daoust
Valid HTML is important - right? We are told every day in the website owner community that valid HTML code is the 'right' way to build your website. Valid HTML allows for greater accessibility, cross-browser compatibility, and can even possibly help your search engine rankings.
But then again...
I decided to test whether valid HTML can actually help your rankings in Google. A lot of website owners talk about how their non-compliant websites do well in Google and how their compliant sites may not be doing as well. The implied suggestion here is that Google either simply does not care about errors in HTML, or, even more extreme, that Google prefers non-compliant websites - a charge that would certainly be puzzling if it were true.
A Sneak Peek at Results - Google Preferred Invalid HTML?
The results of my test surprised me. Not only did I find that Google apparently does not give any preference to sites with valid HTML, Google actually seemed to prefer the sites with blatant errors in their code.
Think about this - if Google does give preference to websites with errors in their HTML, then it would actually benefit you to program errors into your website (as far as the SEO is concerned).
Any SEO or webmaster worth their salt knows that this is one of the worst statements ever uttered. Actually programming errors into your code so Google will rank your site better? Puleeeze! It should be understood that there is a definite difference between being "compliant" per the W3C and being "compliant" to a search engine spider and a browser. In my opinion, the W3C has some quirky rules that actually make a page load slower in a browser. In our testing, as long as there aren't true errors (i.e. broken code) you are fine. Make the code lean and mean and, often, good things happen. I think there is an interview with Matt Cutts that falls along those lines somewhere.
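As a point of reference on what "compliant per the W3C" means in practice: validation can be checked programmatically rather than by eye. The following is a minimal sketch that submits a page to the W3C's Nu HTML checker and counts the errors it reports. The endpoint URL, the out=json parameter, and the shape of the JSON response are my assumptions about that service, not anything from this article, so verify them before relying on this.

import json
import urllib.request

def w3c_error_count(html_bytes):
    """Ask the (assumed) W3C Nu checker how many errors a page has."""
    req = urllib.request.Request(
        "https://validator.w3.org/nu/?out=json",  # assumed endpoint and parameter
        data=html_bytes,
        headers={
            "Content-Type": "text/html; charset=utf-8",
            "User-Agent": "compliance-check-sketch/0.1",
        },
    )
    with urllib.request.urlopen(req) as resp:
        # The checker is assumed to return a "messages" list with a "type" field.
        messages = json.load(resp).get("messages", [])
    return sum(1 for m in messages if m.get("type") == "error")

# Hypothetical usage: zero reported errors is "W3C compliant"; a browser or a
# search engine spider may still happily render a page that fails this check.
# print(w3c_error_count(open("index.html", "rb").read()))

The point of such a check is only to separate the two meanings of "compliant": a page can fail this validator and still be perfectly digestible to a spider and a browser.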
Now I am not ready to accept that as a valid conclusion, but the results are what they are. With these conclusions staring back at us, I thought it necessary to publish the methodology and results, and open up the topic for discussion.
Setting Up the Study
As we all know, Google does not rely on any one factor to rank a website. As a result, a website could be horribly optimized in one aspect, but still reach the top of the rankings because it is well optimized elsewhere. This makes testing individual aspects of SEO tricky.
In order to determine whether valid HTML was actually a factor that contributed to your ranking, even in the smallest of ways, it was necessary to isolate every other aspect that could possibly influence the ranking of a website. To do this, I set out to do the following:
- The keyword density had to be identical
- The page size should be identical (just in case this affected the crawlability)
- The competing websites should be newly registered domains, all registered on the same day (removing age of site from the equation)
- The competing websites should be hosted on the same server
- Inbound links should be identical and from the same site so as to avoid different link weightings
- The links should have identical anchor text
- Since all the links would come in from the same site, link order might affect rankings - this would need to be neutralized
- The content should be identical so as not to influence the rankings in any other way (possible poison pill - I'll explain later)
The only difference between the two sites would be that one would be made with valid HTML and the other would be made with obvious, and significant, errors in the HTML.
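Two of those controls - identical page size and identical keyword density - are mechanical enough that they can be verified with a script instead of by eye. Here is a minimal sketch of such a parity check; the file names, the single-word keyword, and the density formula (keyword occurrences divided by total visible words) are my own illustrative assumptions, not details taken from the study.

import re

def page_stats(path, keyword):
    """Return (byte size, keyword density) for a saved HTML page."""
    with open(path, "rb") as f:
        raw = f.read()
    # Crudely strip tags so only visible text is counted.
    text = re.sub(r"<[^>]*>", " ", raw.decode("utf-8", errors="replace"))
    words = re.findall(r"[a-z0-9]+", text.lower())
    hits = sum(1 for word in words if word == keyword.lower())
    density = hits / len(words) if words else 0.0
    return len(raw), density

# Hypothetical file names for the valid and the deliberately broken version.
valid_size, valid_density = page_stats("valid.html", "relpepiblus")
broken_size, broken_density = page_stats("invalid.html", "relpepiblus")

assert valid_size == broken_size, "page sizes differ"
assert abs(valid_density - broken_density) < 1e-9, "keyword densities differ"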
The average reader would look at this testing setup and think that it was very sound. But as a former software tester with 12 years of experience, I can tell you that it is not. Ask anyone with software testing experience and they will tell you the same. With an algorithm as complex as Google's, you cannot take the above and expect to see any consistent results. We have over 300 test domains just for this purpose and every test that we do uses at least 50 domains. Using anything less would get me fired.
My initial thought was to have just two competing websites both trying to rank for the same nonsense keyword (the keyword would be one that currently returns no results in Google). The idea was to get a snapshot of how Google initially ranked the websites. But there was a problem.
You have a bigger problem than that. Testing for a nonsense keyword phrase is one of the worst tests that you can perform, and it proves that this individual has never tested successfully against a search engine algorithm before. Test with real keywords, or don't test at all.
On Page Links are Not Necessarily Equal
In going over the requirements for the study, I could not guarantee that the links coming from my link partner were all going to be given the same weight by Google. They were all on the same page and they had to have the exact same anchor text. If Google saw two links with identical anchor text, it seemed reasonable to surmise that they may give more weight to the first one they discovered.
The phrase "it seemed reasonable" makes every veteran tester's skin crawl. Assumptions can never enter into the picture. Before doing this test, you would first need to run a separate test to determine whether Google assigns different weight to one link or another.
The answer to this small dilemma was to create two sets of competing sites. Each set would have two websites, both competing for the same keyword, with one website sporting valid HTML while the other would take on invalid HTML. When linking over to these four sites, I would alternate how I linked.
Using new domains is a major flaw. In testing, you need consistent, established domains first; then you make ONE change and look at the results. New domains are volatile at best. You need to do A/B (split) testing not just with a pair of domains but with at least 25 pairs. You are looking for matches and consistencies, and then you test again to verify. If I wanted to show you how Google will reward you with a #1 ranking for having exactly 12 words in your keyword meta tag, I could show you two domains where that is true. But is that accurate? No. Remember, there are lies, damn lies, and crap testing methodologies. Don't fall for it. Your gut tells you this can't be true. And it isn't.
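To put a number on why a pair or two of domains proves nothing: under the null hypothesis that validity makes no difference, each valid-versus-invalid pair is a coin flip. The sketch below shows the probability of the invalid page winning every pair purely by luck, for the two-pair setup in the article versus the 25 pairs suggested above. The coin-flip model is my own simplification for illustration, not a methodology taken from either author.

# Under the null hypothesis "validity makes no difference", each A/B pair is
# a 50/50 coin flip, so the chance that the invalid page outranks the valid
# page in ALL n pairs by luck alone is 0.5 ** n.
for n in (2, 25):
    p = 0.5 ** n
    print(f"{n} pairs, invalid wins every time: p = {p:.2e}")

# Approximate output:
#   2 pairs: p = 2.50e-01  (one chance in four - meaningless)
#  25 pairs: p = 2.98e-08  (essentially never happens by luck)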
Now, testing isn't just to see what will help you get a better ranking. You also want your site spidered deeper and faster. You want out of the Supplemental Index. You want the maximum number of pages crawled by GoogleBot. You have the best chance of the above with compliant (or near-compliant) code. Michael Lange, developer of Website Manager, has tested this on his domains and has seen it. Rich Creal, Michael Campbell, Jeff Greenfied, Michael Marshall, Ginette Degner and dozens of other top marketers in the field have all seen it. Better crawls, pages removed from the Supplemental Index, etc.
Make your site non-compliant? And the exact opposite happens. I run this test every few days on my WebMarketingNow.com site. And it is consistent. Right now, the code is NON-COMPLIANT and my rankings have stayed consistent, but Google doesn't crawl my site well, and pages are in the Supplemental Index. This is one of the most consistent tests that I have run since November. It works like clockwork. Make the code compliant and I get a deeper crawl. Funny, this report says nothing of crawlability, which is one of the biggest issues for SEOs right now.
1. Link to Site with Invalid HTML
2. Link to Site with Valid HTML
3. Link to Site with Valid HTML
4. Link to Site with Invalid HTML
By using this method, I could erase link weights within the page from the overall equation.
Um, you only think that you did.
The Domains and Keywords
I chose four domains for the project - each with a nonsense name: Iggelomy, Pucroins, Gohthone.com, Hontihes.com. These would be split up into two groups, with the first group focusing on the keyword "Relpepiblus lost" and the second focusing on "startnipol pin".
Next I needed to create some content for the sites and create errors on one site from each set to invalidate the code. For the content, I made my way over to Gordon Mac and downloaded one of the free CSS templates he offers. I then modified the template to fit my needs and to discuss the project - all the while making sure I used my targeted keyphrase.
Once the content had been created, I began to work on creating errors in the HTML. Rather than just randomly create errors, I had to be sure to keep the page sizes exactly the same and not change the keyword density at all. I go over some of the changes that I made in more detail on the test sites. When the sites were done, I had created legitimate errors in the HTML: invalid attributes, open-ended tags, an incorrect doctype declaration, and a few other errors - all without changing the page size or keyword density.
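For readers who want to picture what "errors without changing page size or keyword density" might look like, here is a small hypothetical illustration. The markup, the whitespace-padding trick, and the variable names are mine, not taken from the test sites; only the categories of errors (a wrong doctype, a bogus attribute, an unclosed tag) come from the description above.

# A valid fragment and a deliberately broken one, padded to the same byte length.
valid_page = (
    "<!DOCTYPE html>\n"
    "<html><head><title>Relpepiblus lost</title></head>\n"
    "<body><p>Relpepiblus lost is the topic here.</p></body></html>\n"
)

broken_page = (
    "<!DOCTYPE htm>\n"  # incorrect doctype declaration
    "<html><head><title>Relpepiblus lost</title></head>\n"
    "<body zz=\"1\"><p>Relpepiblus lost is the topic here.</body></html>\n"  # invalid attribute, unclosed <p>
)

# Trailing whitespace equalizes the byte size without touching the visible
# text, so the keyword density stays the same on both versions.
size = max(len(valid_page), len(broken_page))
valid_page = valid_page.ljust(size)
broken_page = broken_page.ljust(size)

assert len(valid_page) == len(broken_page)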
The Final Step - Linking Up the Site
I now needed a link partner. Fortunately I did not have to look far - I simply plundered my wife's blog (The Lazy Wife) for a couple of links. I don't think she even noticed - so don't tell her.
While he can't test, at least he has trust in his relationship. He has THAT going for him.
When linking up the sites, I was careful to link the sites in such a way that would not give preference to the sites with invalid markup, or vice-versa.
Finally - The Results
It took only a few days for Google to crawl the pages and include them in its index. Once there were results for the keywords "Relpepiblus lost" and "startnipol pin", I eagerly looked to see what Google had ranked first, and what it placed lower in its results.
I was a little bewildered. I was honestly expecting to find that Google would put the site with valid HTML above the site with invalid HTML in at least one of the examples. What I saw was different.
Not only did Google rank both sites with obviously wrong HTML higher, they even refused to include one of the sites with valid HTML altogether (pucroins.com)!
I figured there had to be something wrong - so I waited a little while...
There was definitely a change, but nothing too significant. My wife's blog, The Lazy Wife, was now usurping the rankings (being more established, this was expected). Google had still not given any preference to either site with valid HTML, and was still continuing to ignore pucroins altogether (a site with valid HTML).
To recap: Google actively ranked the websites with invalid HTML higher than the websites with valid HTML. Google even refused to rank one of the valid HTML websites altogether.
This seems to go against their Webmaster Guidelines in which they instruct webmasters to check for any HTML errors.
Wow, two whole domains have shown that Google's Webmaster Guidelines have been debunked. Could it have been dup content? Sandbox? Or a host of other issues that plague new sites?
Some Definite Conclusions
There are quite a few interesting conclusions we were able to draw from this study. The first, and most important, is that Google apparently does not give any weight to valid HTML. More importantly, Google apparently does not penalize invalid HTML at all. The study itself would almost lead us to believe that Google actually rewards invalid HTML with a higher ranking.
Secondary to the study, it seems that on-page optimization is no match for an established website. After just two days of being in the rankings, all of our test websites lost their top positions to The Lazy Wife. This happened in spite of the fact that our test websites had far more on-page optimization than The Lazy Wife for the keywords in question. The Lazy Wife, although itself new, was far more established, and thus won in the rankings.
Do I even have to comment on the conclusion above? Unbelievable.
Some Not So Definite Conclusions
I am not ready to admit that Google actually gives preference to invalid HTML, but the results seem to point us in this direction. The idea that Google actively rewards websites that put errors into their code simply does not make sense.
It is possible, however, that there is some other factor which we are not seeing here that occurs with a website that has invalid HTML. In other words, it may not be the improper HTML causing the sites to rank higher, but some other factor that we cannot see.
Another possibility could be that invalid HTML just happens to 'fit in' better with most reliable websites. The fact is, there are very few high-profile sites that can pass muster in a validation test - could it be that Google is discounting sites with valid HTML as being 'too good to be true'? Is valid HTML a form of over-optimization?
Another "over-optimization" believer. Shame. Shame. Shame.
I would lean towards disagreeing with this, but it is a possibility which should be discussed.
Back and forth we go. Have some guts and stick with your opinion, even if it is wrong.
A Parting Shot at Google - and Compliments to MSN Search
Although Google does not seem to reward site owners for putting together a site with valid HTML - a goal of many well-respected webmasters - MSN, it seems, handles this flawlessly. Out of curiosity I checked the results for relpepiblus lost and startnipol pin on MSN Search and found that not only did MSN rank the sites with valid HTML higher - they kicked out the sites with invalid HTML.
Okay, now I have had it. Props to MSN Search? Wait a minute, I forgot, he tests as well as the MSN Engineers did when they allowed all of that domain spam in there a few weeks back. I feel like I have been wounded in battle reading this article, "Help me Forrest, help me."
This would be consistent with the fact that MSN's search result pages validate, while Google's do not. MSN has a long way to go, but they seem to have gotten this part of their engine right.
Um, no. Google has it right; it is just that your testing methods are completely screwed and MSN accepts anything and everything.
Looking For Explanations
The results of the study say one thing, but common sense would say another. Is it possible that Google is somehow biased towards sites that have errors in their HTML? It does not have to be a philosophical bias - could there be a technical bias?
Or, was there a problem with the study itself? Were there too few examples to draw any conclusions at all?
I .... can't ..... type ..... I think I pulled a muscle from laughing too hard. Do you think there were too few samples? This is like ABC conducting a poll for the President of the U.S. and asking four people and publishing the forecasted winner. You wouldn't take that survey result very seriously and you shouldn't take this one seriously either.
Bottom Line: This person either doesn't have a clue, or they are reporting crap results in hopes of generating links to their site through Link Bait Spam. Regardless, what is happening here is wrong. Before you EVER make changes to your site, make sure that the information you are going by has actually been tested and can be confirmed. This is your business and your reputation. Going by "free" information can be very damaging. Join the SEO Revolution and get the RIGHT information at the RIGHT time.
Bottom Line: DO NOT MAKE ERRORS ON YOUR SITE TO GET A BETTER RANKING IN GOOGLE!!
Note: We received a note of complaint from the author on April 26, 2006.
While I do not mind you commenting, even if those comments are harsh, I would appreciate it if you would either give credit as the terms of reprint ask, or hold back and not publish the article at all. You have taken a middle ground, publishing the article and my name, then choosing not to include a source for the article.
I would also be happy to talk with you about the article. I recognize there are flaws, and these flaws were discussed. Of course, you did not give your readers access to this discussion, in which I agree with many of the flaws of the 'study'.
It is true that we did not give a reference to the article. Not giving the link back was unintentional, but note that the "nofollow" tag was included so this "link baited" article will not get further value. It is also interesting to note that while the author admits there were flaws in the study, there have been no changes to the article itself to reflect that. Thus, the Diseased SEO continues.
© 1999-2006, WebMarketingNow.com
Jerry West is the Director of Internet Marketing for Web Marketing Now. He has been consulting on the web since 1996 and has helped hundreds of companies gain an upper hand over their competition. Visit Web Marketing Now for the latest in marketing tips that are tested and proven. This article can be reproduced on your site or e-zine as long as the signature file remains.