[thelist] Enterprise Red Tape was: Web Based Employee Directory- prt 2

Ken Schaefer ken.schaefer at gmail.com
Wed Sep 15 20:20:44 CDT 2004

On Wed, 15 Sep 2004 14:39:41 -0500, Luther, Ron <ron.luther at hp.com> wrote:
> Ken Chase quoted from Steve Lewis thusly:
> >>Bingo! This is the message that we always receive from our tech shop
> >>managerial types (***NOT THE CODERS***). The promise is that eventually,
> >>through code re-use, development costs will be drastically reduced.
> Excluding for a moment what might be the world's best example of
> 'code re-use', the use of CSS files to template and use across your
> entire site, and thinking instead more of 'general purpose' coding ...
> Have you folks *really* seen this happen?
> _Dramatic_ cost reductions directly attributable to code re-use?

Yes. IMHO it is practically impossible to write any sort of
large-scale application using anything other than OOP in this day and
age. It's not a matter of "dramatic cost reduction"; it's a matter of
"there is no other way the project can be completed".

> ['modular' doesn't mean 'well-written']
> We found a bug in a nice modular "C" program someone had written a few
> years back.  I volunteered to take a look at the logic to track this
> bug down, so I asked for a copy of the code to print out. I was sent a
> file containing roughly five lines.
> It took days (and dozens of emails) for me to track down the 60 or so
> files I needed in order to print out all 89 pages of code associated with
> this "program"! 

Have you heard of a debugger? Why not use one instead of manually
reading 89 pages of code? That sounds like a nightmare.

> Maybe that was an anomalous example of poorly written "C" code. I don't
> know. I try to avoid looking at "C" code whenever possible. But it's
> generally what I think of when folks start talking about how wonderful
> modular code is. Yuck!

Your "modules" (let's call them classes) should have a defined
interface that exposes a set of methods and properties. Your methods
should take defined inputs, and return defined outputs. That's
(relatively) easy to test. As long as the interface contract remains,
you can muck around with the internals of the class all you like.
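A minimal sketch of that idea (the class and method names here are my own illustration, not anything from the thread):

```python
# A hypothetical "modular" component: the public interface is just
# get_total(); callers never see or depend on the internals.
class Invoice:
    def __init__(self, line_items):
        # line_items: list of (description, price) tuples
        self._items = list(line_items)

    def get_total(self):
        # The internals can change (caching, a different summation
        # strategy, a database lookup) without affecting any caller,
        # as long as get_total() still takes no arguments and
        # returns a number -- that's the interface contract.
        return sum(price for _, price in self._items)

inv = Invoice([("widget", 10.0), ("gadget", 5.5)])
print(inv.get_total())  # -> 15.5
```

Testing the contract means testing get_total() against known inputs; you never need to re-test every caller just because the internals changed.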

> [artificial structure]
> I have some web-based reports where we have re-used some 'common' end-user
> controls. That's a 'good thing' and the users like that.
> However, it's meant that we also had to re-use loader scripts and re-use
> data aggregation strategies (whether needed or not for whatever new thing
> we were doing) because the control itself "required" them.
> Not a huge point ... just a recognition that code itself isn't 'plug and
> play' ... you have to meet all the pre-requisites for use.

You needed to scope the control properly in the first place. If the
control is too unwieldy, then it wasn't scoped properly.
Develop version 2, or get better business analysts :-)

No one is saying that OOP is simple, or easy, or is a bullet proof
solution to every problem. You do need to be aware of the
requirements, and do the necessary planning.

> It can take time and resources to get all that 'preliminary' junk set up
> so that you _can_ 'save time' by re-using the control code. I think there
> are situations where that 'preparation to save time' may take more time
> than coding something new.

Definitely. And then all your bits of custom coding - how are you
going to *maintain* them? That's where your custom coding is going to
come back and bite you in the backside.

If you centralise all your data access through a DAL (Data Access
Layer), then you *know* that's the only place you need to change some
code in order to access a new data store. As long as the DAL's
interface contract with the upper layers remains the same, everything
will keep working (and if it doesn't, you know exactly where to look -
in your new DAL code).
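As a rough sketch of what that centralisation buys you (the names here are hypothetical, for illustration only):

```python
# A minimal DAL sketch: upper layers call only get_user(); swapping
# the backing data store means changing this one module and nothing else.
class UserDAL:
    def __init__(self, store):
        # store: any dict-like backend. An in-memory dict here; later it
        # could be an adapter over a database exposing the same mapping
        # interface -- callers of get_user() never change.
        self._store = store

    def get_user(self, user_id):
        # Contract with the upper layers: return a dict with 'id' and
        # 'name', or None if the user does not exist.
        record = self._store.get(user_id)
        if record is None:
            return None
        return {"id": user_id, "name": record}

dal = UserDAL({1: "alice", 2: "bob"})
print(dal.get_user(1))  # -> {'id': 1, 'name': 'alice'}
```

Switch the data store and only the internals of UserDAL change; every page or report that calls get_user() keeps working untouched.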

If you have custom code, where you write a whole bunch of inline
stuff, your maintenance is going to be an absolute nightmare. Each and
every piece of code needs to be rechecked to ensure that it'll keep
working when you switch data stores (or network protocols, or
authentication providers, or whatever). And updating lots of
individual pieces of code results in a greatly increased chance of new
bugs creeping in.

> [testing costs]
> Ever had an enhancement, a feature add, some scope creep that required
> the modification of one of these "commonly used" code elements?
> Here's what I've usually seen in those cases ... the code monkey makes
> his/her change to facilitate the functionality in the new piece ... and
> then proceeds to test that change thoroughly ---- but *only in the new
> piece*.

Again, this sounds like poor planning or ignorance on the part of your
developer team. Take a look at Unit Testing, or Test Driven
Development.

You have a set of defined tests, and you use a testing framework to
fire these off at your code. Make some changes to your code - rerun
all the tests.
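For instance, a minimal version of that workflow using Python's standard unittest framework (the function under test and the test names are my own illustration):

```python
import unittest

def discounted_price(price, rate):
    """The 'commonly used' routine under test."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountTests(unittest.TestCase):
    def test_basic_discount(self):
        self.assertEqual(discounted_price(100.0, 0.25), 75.0)

    def test_zero_rate(self):
        self.assertEqual(discounted_price(50.0, 0.0), 50.0)

    def test_invalid_rate(self):
        with self.assertRaises(ValueError):
            discounted_price(10.0, 2.0)

# Change discounted_price(), rerun this suite, and every recorded
# expectation is rechecked automatically -- in the "new piece" AND
# everywhere else the routine is used.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is that the tests are written once and fired off mechanically; nobody has to remember which of the 100 call sites to re-test by hand.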

> If you change something used in 100 places, you (a) have to *know* that
> it's used in all those places, [that isn't something that is always easy
> to find out] and (b) have to test that change in all of them.

a) Don't change the contract.

b) If you do change the contract, you publish a *new* interface or you
publish a new version of the component. That way, all the older code
does not need to be changed, and is not affected - it keeps using the
older, previous interface. Your upper level code is gradually
converted over, as required, to using the new interface (or to v2 of
your component).
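In code terms, that versioning strategy might look something like this (function names are hypothetical, purely to illustrate the idea):

```python
# Publish v2 alongside v1 instead of breaking the existing contract.
def format_name(first, last):
    """v1 contract: never changes, so its existing call sites need no
    retesting when v2 ships."""
    return f"{first} {last}"

def format_name_v2(first, last, middle=None):
    """v2 contract: adds a capability, published *in addition to* v1.
    New or gradually-converted callers opt in; old callers are untouched."""
    if middle:
        return f"{first} {middle} {last}"
    return f"{first} {last}"

print(format_name("Ada", "Lovelace"))             # -> Ada Lovelace
print(format_name_v2("Ada", "Lovelace", "King"))  # -> Ada King Lovelace
```

The same pattern applies whether the "interface" is a function signature, a class's public methods, or a versioned component: old code binds to v1 forever, new code targets v2.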

> There _is_ a cost to that testing.

The alternative is even worse, cost wise.

> (I'm not saying it's 'a bad thing" ... maybe code re-use works better in a
> very specialized industry, like oil, where you might have a very complex
> equation that you could code once and call from multiple sources ... I just
> think the benefits of code re-use have been severely oversold.)

Then I'd say you're out-of-touch with modern programming paradigms :-)
