[thelist] Validation errors == optimization?

Lee Kowalkowski lee.kowalkowski at googlemail.com
Fri Jun 3 03:30:53 CDT 2011


On 2 June 2011 23:57, Bill Moseley <moseley at hank.org> wrote:
> I like to validate because it makes me feel like "something not working" is
> less likely to happen in the first place.  And I also like to validate
> before hunting down Javascript errors.
>
> But, do those validation errors have anything to do with better SEO?

I doubt it very much.  I just Googled and Binged 'plagiarism detection
service' (just the words, not as a phrase), and I didn't find their
site.  For 'plagiarism checker', they were on page 3 of Google, and I
couldn't find them on Bing.

I don't see how having a keyword "author" with no content (this was
one of the validation errors) can possibly be beneficial.  Is he
suggesting that each of the hundreds of validation errors was
deliberate and considered?  That is the funniest defence yet,
hilarious!
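For illustration only, a rough sketch of how that kind of error could
be spotted mechanically.  The markup below is hypothetical (not taken
from the site in question), and this uses Python's standard
html.parser rather than a real validator:

```python
from html.parser import HTMLParser

# Hypothetical snippet containing an "author" meta tag with no
# content -- the sort of thing a validator flags as pointless.
SAMPLE = """
<head>
  <meta name="author" content="">
  <meta name="description" content="Plagiarism detection service">
</head>
"""

class EmptyMetaFinder(HTMLParser):
    """Collect the names of meta tags whose content is missing or empty."""
    def __init__(self):
        super().__init__()
        self.empty = []

    def handle_startendtag(self, tag, attrs):
        # Treat self-closing <meta ... /> the same as <meta ...>
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        d = dict(attrs)
        if not d.get("content"):
            self.empty.append(d.get("name", "?"))

finder = EmptyMetaFinder()
finder.feed(SAMPLE)
print(finder.empty)  # only the "author" meta carries no content
```

A search engine gains nothing from an empty attribute like that, which
is rather the point.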

I think your guy may have read
http://www.site-reference.com/articles/Valid-HTML-Does-Google-Care.
This was written shortly afterwards:
http://www.webmarketingnow.com/tips/link-baiting-article.html.

The (in)validity of the HTML is going to be irrelevant for SEO, as
long as the content is indexable.  Valid HTML is certainly not going
to be penalised.  Such fabricated tests are futile; if duplicate
content is discovered, the modification date is probably more of a
deciding factor.

-- 
Lee
www.webdeavour.co.uk

