[thelist] your research on search engines and the sites you work on

chris chris at mindsparkmedia.com
Wed Nov 6 15:07:01 CST 2002


Hi,
There is probably too much information out there, but you might want to
start with:
http://www.searchenginewatch.com/ for information about specific search engines
http://www.bruceclay.com/ for well-written articles
http://hotwired.lycos.com/webmonkey/01/23/index1a.html?tw=e-business for a
basic introduction

Keep in mind that each website calls for a different optimization strategy,
but it sounds like you're focusing on the technical end of things right now.
I would highly recommend that you visit http://w3c.org as well; web
standards can be an excellent justification.

Regarding dynamic content: yes, dynamic content often traps spiders.  PHP
offers some workarounds, and another strategy is to archive content and set
a ROBOTS nofollow directive on your dynamic pages.  I don't really have time
to go into depth on that, but you'll find plenty of resources if you search
for seo AND dynamic content, etc.  Good luck!
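To make the archive-and-nofollow idea concrete, here is a rough sketch of what that directive looks like (the page names and URLs are made up for illustration):

```html
<!-- in the <head> of a dynamic page, e.g. article.php?id=42 (hypothetical URL) -->
<!-- tells compliant spiders not to index this copy or follow its links -->
<meta name="robots" content="noindex,nofollow">

<!-- the static archive copy (e.g. /archive/article42.html) carries no such
     tag, so spiders index that version instead -->
```

The idea is that spiders skip the querystring version of each page and pick up the plain-HTML archive copy, so the content still gets indexed without the dynamic URLs in the way.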

chris hardy

-----Original Message-----
From: thelist-admin at lists.evolt.org
[mailto:thelist-admin at lists.evolt.org]On Behalf Of Chris W. Parker
Sent: Wednesday, November 06, 2002 12:48 PM
To: thelist at lists.evolt.org
Subject: [thelist] your research on search engines and the sites you
work on



hi.

i'm doing some research right now on why our site stinks and cannot be
found in any of the search engines (unless of course you search for our
url, which defeats the purpose of a search engine).

to be more specific, i'm looking for references and information to back
up my claim as to why our site sucks. there are many different reasons
i've come up with, most of which fall into two categories:
Site Content and Site Technology.

Site Content would consist of how often the content changes, and how the
content is presented within the html.

Site Technology would be things like frames and dynamic content. This
is the part I am concerning myself with the most, since it's what I would
be dealing with.

What I'm looking for is evidence on the intarweb (thanks jeff k.) that
makes a case against these two site technologies, or any other
technology I may not have considered that contributes to our poor
rankings. For example, although I couldn't find anything, I've heard that
one reason search engine bots don't index pages with querystring links is
that they can get caught in a loop. How is this possible? What would be an
example in which this could happen? And as far as frames go, I've read
that you can use the <noframes> tag to allow bots to index and crawl
your site. However, I think that unless your site is really small, fewer
than 50 pages (maybe even fewer than 20), using frames AND the
<noframes> tag in an attempt to get your site indexed would be a
maintenance mess: time-consuming and possibly not fruitful.
Thoughts on that?
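For anyone following along, here is roughly what the <noframes> approach looks like (the page names are invented for illustration); the maintenance problem is that the noframes block has to be kept in sync by hand with whatever the frames actually show:

```html
<html>
<head><title>example framed site</title></head>
<frameset cols="20%,80%">
  <frame src="nav.html" name="nav">
  <frame src="main.html" name="main">
  <noframes>
    <body>
      <!-- spiders (and browsers) that ignore frames see only this block,
           so it needs real content and crawlable links, duplicated by
           hand from the framed pages -->
      <p>Welcome. <a href="nav.html">Site navigation</a> |
         <a href="main.html">Main content</a></p>
    </body>
  </noframes>
</frameset>
</html>
```

Every time the framed content changes, this block has to be updated too, which is where the "maintenance mess" comes from on anything but a very small site.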

So I'm currently writing my own little outline as I search the web for
different resources on these issues.

I'm interested in hearing what sort of research you have done in the
past and/or what you are doing right now. Also experiences you've had
with this subject.

I'd also be interested to know, among you IIS administrators, what URL
rewriting solution you've found to be the best. The two techniques I
know of are using one of a number of ISAPI filters (some free, some
not) or setting up your 404 page to do the rewriting/redirecting. Please
give me your thoughts.


thanks a bundle!

chris.
--
For unsubscribe and other options, including
the Tip Harvester and archive of thelist go to:
http://lists.evolt.org Workers of the Web, evolt !





