[thelist] Search Engines & Content sites using Server Side Stuff

Jason Lustig lustig at acsu.buffalo.edu
Thu Dec 21 00:09:56 CST 2000

I have the same problem with my fiction site: since my stupid university
servers won't let us do server-side programming and I can't afford expensive
server space, I use JavaScript to code my navigation, and the search engines
and crawlers can't get through my site. What I would suggest is making a
"site-map" page or something like that, with a plain link to each of the
articles, so the crawlers have something to follow. It worked for me!
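The site-map idea is just a plain HTML page of ordinary anchor links, which crawlers can follow even when the rest of the navigation is script-driven. A minimal sketch (the filenames and titles here are hypothetical placeholders, not from the actual site):

```html
<!-- sitemap.html: a crawler-friendly index of every article.
     Each entry is a plain <a href> link, so no JavaScript is
     needed to reach the articles. -->
<html>
<head><title>Site Map</title></head>
<body>
<h1>Site Map</h1>
<ul>
  <li><a href="articles/story-one.html">Story One</a></li>
  <li><a href="articles/story-two.html">Story Two</a></li>
  <!-- ...one entry per article... -->
</ul>
</body>
</html>
```

Linking to sitemap.html from the front page with a regular anchor (not a script) gives robots an entry point into the whole site.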
Your signature determines your reality

>-----Original Message-----
>From: thelist-admin at lists.evolt.org
>[mailto:thelist-admin at lists.evolt.org]On Behalf Of Andrew Forsberg
>Sent: Wednesday, December 20, 2000 11:35 PM
>To: thelist at lists.evolt.org
>Subject: [thelist] Search Engines & Content sites using Server Side Stuff
>I have a potentially stupid question. Stupid because I think I know
>the answer is a 'sort-of-no' - but here goes:
>I have a site that is content driven. Content comes, and goes, on a
>monthly-ish basis. With the current setup I've made a php app that
>loads up an article and displays it as requested, all from the
>index.php page. I don't particularly care if these articles get
>indexed by search engines or not, as they're going to swap out fairly
>soon. So far, so good -- here comes the stupid question: suppose I
>decide to bung up the entire archive of articles (somewhere around
>the 200 articles @ 3000 words each mark), and *do* want these to be
>indexed by search engines -- how can I arrange this to help bots
>index include files? Do I have to redesign the site so that articles
>are independent static pages?
>Suggestions would be *great*!
