Monday, April 7, 2014

Be Systematic

Years ago when my children had cousins who were into such things, I heard a cheer (ad nauseam) which went like this:

Be, Aggressive!
B-E, Aggressive!
B-E AGG-R-E-SS-IVE Aggressive!

While I was impressed that a catchy cheer could effectively get a 7-year-old to spell 'Aggressive', a parallel occurred to me from my own experience.

Be, Systematic!
B-E, Systematic!
B-E Sys-t-e-matic Systematic!
Some background.  In my career, one of the most frustrating things I have encountered happens when trying to brainstorm a problem.  It usually goes something like this (in a group setting):

Facilitator:  So, the widget is failing every day.  Any ideas?
Suspicious Engineer #1: I suspect it is the hoozit of the whatzit.
Overconfident Engineer #1: Naw - it can't be that - the hoozit is finely-tuned.
Facilitator: Any other ideas?
Suspicious Engineer #2: What about the Ides of March?  It has been known to cause problems before.
Overconfident Engineer #2: Not anything I have seen.
Self-proclaimed SME and overall dominant personality:  I think it has to be muppets in the server room.
Facilitator:  I'll bet you are right!
All:  Let's go get those muppets!

The big problem here is the complete lack of discipline in testing conclusions and challenging assumptions.  The facilitator did the whole team a disservice by allowing dominant personalities (i.e., those who make the strongest assertions) to squelch good ideas just by declaring them unlikely to be true.  Good facilitation of this sort of meeting should collect both pieces of information: the identification of each candidate problem area, and the likelihood (priority, or rank) that it is the culprit.

Instead of just throwing ideas around like clay pigeons and letting people take shots at them, the facilitator should collect all the ideas, rank them in the order the team agrees on, and then test each idea in order.  This works not only for teams, but especially for individual work.  I can't count how many times an engineer has told me that the problem was identified, along with a fix, only to discover after the fix was applied that either (1) there was an additional problem that hadn't been identified, or (2) the analysis was wrong and the fix didn't work.  This is terribly frustrating for a team - thinking a problem is fixed only to have it pop up again.  It is even worse when the fix is reported and deployed to production, only to have another team (usually production support, or, worse, the business users) report the lack of a fix back to the team.
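The collect-rank-test loop above can be sketched in a few lines of Python.  The hypotheses, their team-assigned likelihoods, and the stand-in experiment here are all made up for illustration; in practice each hypothesis would get its own real test.

```python
def triage(hypotheses, test):
    """Sort hypotheses by agreed likelihood (highest first) and test each in order.

    Returns the first hypothesis whose test confirms it, or None if none pass.
    """
    for name, likelihood in sorted(hypotheses, key=lambda h: h[1], reverse=True):
        if test(name):
            return name
    return None

# Hypothetical ideas from the meeting, with the rank the team agreed on (0 to 1).
ideas = [
    ("hoozit of the whatzit", 0.6),
    ("Ides of March", 0.3),
    ("muppets in the server room", 0.1),
]

# Stand-in experiment; a real one would reproduce the failure under each hypothesis.
def run_experiment(name):
    return name == "hoozit of the whatzit"

print(triage(ideas, run_experiment))
```

The point of the sketch is the discipline, not the code: every idea stays on the list with a rank instead of being shouted down, and each is actually tested before the team commits to a fix.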

In conclusion: follow the scientific method.  Observe and analyze the situation, form a hypothesis, test that hypothesis, and then commit to the work.  Challenge yourself.  Don't believe your own conclusions (let alone others') until you can verify them as closely as possible - some problems in computer science, such as a rare race condition, just can't be directly observed.
