[Javascript] Best Practices
Paul Novitski
paul at juniperwebcraft.com
Mon Apr 17 15:09:26 CDT 2006
At 11:49 AM 4/17/2006, Triche Osborne wrote:
>Paul Novitski wrote:
>>Triche, my sense is that the Best Practices in this case are:
>>1) Set up your application to run perfectly in the absence of
>>DOM-aware JavaScript: when selections are made, post the page back
>>to the server which returns it with the correct number of input
>>fields built into the page.
>
>Thanks for the can opener, Paul. ;-) I wanted to think about my response
>a bit because this remark and a concurrent discussion of progressive
>enhancement on another list opened a can of worms that I've kept shelved
>for too long.
> I suppose I should be clear that I'm not reacting
> negatively to the
>advice above. On the contrary, it's something I would do anyway, and it
>doesn't even present the sort of conflict that I've been pondering. It
>did remind me of it, though, so maybe it's time to deal with the worms.
> Pertinent to this list are those issues that deal with areas of
>accessibility and modernity. The increasing emphasis on separating code
>from mark-up (which I like) does rely heavily on the DOM. This is fine
>in cut-and-dried cases like the one I mentioned, where one can easily
>compensate for non-DOM browsers with a little server-side work, but
>suppose the issue were validation instead? Do you . . .
> (1) attach onchange and/or onsubmit events through an external script
>using the older forms collection and use alert-style messages instead of
>pretty DOM errors;
> (2) or go DOM the whole way and let the server-side sticky validation
>catch the non-DOM users?
(3): server + DOM + alerts.
I always validate input server-side. For DOM-aware browsers, I also
provide front-line validation client-side to save the user a few
seconds of their time or to provide some fancy footwork, but my
server-side application (where the real work is done because it's
where I have complete control) never assumes that the client-side
validation is present and awake to block the garbage-in.
If a browser isn't DOM-aware, it falls back to server-side
processing. When validation errors are detected on the server, I
include the error messages in the HTML markup so they stay on the
screen; if desired, I also add JavaScript alerts that duplicate the
error messages, making them harder for the user to overlook.
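A minimal sketch of that belt-and-braces approach on the client side (the form id and field names here are hypothetical examples, not from the original post): the validation logic lives in a plain function, and the DOM hookup only runs where the DOM exists, so a non-DOM browser simply posts to the server, which re-validates everything anyway.

```javascript
// Pure validation logic: returns a list of error messages (empty = valid).
// Field names are hypothetical.
function validateOrder(values) {
  var errors = [];
  if (!values.email || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(values.email)) {
    errors.push("Please enter a valid email address.");
  }
  var qty = parseInt(values.quantity, 10);
  if (isNaN(qty) || qty < 1) {
    errors.push("Quantity must be a whole number of 1 or more.");
  }
  return errors;
}

// DOM hookup: only attach if the browser is DOM-aware.
// The server-side application never assumes this ran.
if (typeof document !== "undefined" && document.getElementById) {
  var form = document.getElementById("orderForm"); // hypothetical id
  if (form) {
    form.onsubmit = function () {
      var errors = validateOrder({
        email: form.elements.email.value,
        quantity: form.elements.quantity.value
      });
      if (errors.length) {
        alert(errors.join("\n")); // duplicate what the server would report
        return false;             // save the user a round trip
      }
      return true;                // server still validates on receipt
    };
  }
}
```

If the script is absent or broken, the form submits normally and the server's sticky validation catches everything.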
> Re: graphic button image swaps: Do you . . .
> (1) use the old style of wrapping the image in an anchor tag which
>contains onmouseover/onmouseout (omo/omo) events;
> (2) attach the omo/omo to the anchor via the DOM so that, although
>non-DOMs can't see the swap, the button still works;
> (3) or go DOM all the way, attaching the omo/omo/onclick behaviors
>directly to the image? Although this latter disables the buttons, it
>does not disable the functionality of the text links which you have (of
>course you have) included for text-only users.
(4): I use CSS a:hover for mouseover effects and plain HTML anchors
for click processing, which eliminates the need for JavaScript in
basic navigation. I generally attach background images
to anchor elements rather than wrapping the anchors around foreground
image elements. I change images on rollover by toggling the CSS
background-position, including both image states in the same image
file so there's no time delay the first time an image is moused over
(eliminating the need for JS image preloading for nav menus).
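That CSS-only rollover can be sketched like this (the image path and dimensions are hypothetical): both button states live in a single sprite file, stacked vertically, and :hover just shifts which half is visible, so there is nothing to preload.

```css
/* nav.png is a hypothetical 100x60 sprite: normal state in the
   top 30px, hover state in the bottom 30px. */
a.navButton {
    display: block;
    width: 100px;
    height: 30px;
    background: url(nav.png) no-repeat 0 0;
    text-indent: -9999px;           /* hide the link text visually */
}
a.navButton:hover {
    background-position: 0 -30px;   /* shift to the hover state */
}
```

Because the anchor itself carries the image, the link keeps working even where CSS or images are off.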
Please understand that I love using JavaScript; it's just that over
the past couple of years I've learned that I can produce page
scripting that's faster to create, easier to maintain, and more
elegantly degradable if I shift to CSS the cosmetic mouse events I
used to handle with JavaScript. These days I use JS primarily for
things like client-side validation, converting anchor hrefs to form
actions or hidden form fields, drag & drop, and other nice-to-haves
that enhance the user's experience but don't leave the page broken
when they're not present.
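One of those nice-to-haves, converting an anchor href into a form action, might be sketched like this (the id, URL, and helper name are hypothetical, not from the original post): if the script never runs, the plain href still takes the user to a server-side page that does the same job.

```javascript
// Progressive enhancement: upgrade a plain "delete" link into a
// confirmed POST. Without JavaScript, the href still leads to a
// server-side confirmation page, so nothing is broken.
function enhanceDeleteLink(link) {
  link.onclick = function () {
    if (!confirm("Really delete this item?")) return false;
    var form = document.createElement("form");
    form.method = "post";
    form.action = link.href;        // reuse the href as the form action
    document.body.appendChild(form);
    form.submit();
    return false;                   // suppress the default GET navigation
  };
}

// Attach only in DOM-aware browsers; hypothetical element id.
if (typeof document !== "undefined" && document.getElementById) {
  var del = document.getElementById("deleteLink");
  if (del) enhanceDeleteLink(del);
}
```

The enhancement adds convenience (an in-page confirm, a POST instead of a GET) without becoming a requirement.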
I believe this is the point Steve Champeon is trying to make in
pushing "progressive enhancement" over "graceful degradation": with
PE you start with a page that will work in any browser and then add
things to enhance it for more capable browsers, rather than the GD
approach of starting with a page that requires a modern browser and
then trying to plug the holes for older browsers. While I
maintain that both strategies if done well will produce the same
page, the pro-PE argument is that we just get lazy with GD and
inevitably leave some holes unplugged. I've seen too much of human
nature to protest that too much.
> Perhaps it's just me, but areas like these seem fuzzy and
> ill-defined.
>Maybe I just need a new pair of reading glasses, but most of what I
>see written about modern scripting is either philosophical and/or
>advocating a particular point of view rather than addressing
>practical aspects of implementation. Anyone have any thoughts,
>practices, etc. to offer?
Speaking as someone who supports myself and my household with web
work, these are absolutely nitty-gritty, every-day, practical
concerns and not simply philosophical musings. I like to natter on
about design beauty and code elegance as much as the next person, but
ultimately I have to give my clients (and myself) robust pages that
aren't going to break and that I can change quickly & easily when the
clients change their minds (and they always do).
Regards,
Paul