
Wednesday, February 24, 2010

Sensible Metrics -- Avoiding Numerosity

In general, Software Engineering Metrics are not without value.

Expecting magic from metrics is what devalues them, reducing metrics to dumb numerosity.

Once code is in production, plenty of metrics are readily available. For example, the trouble-ticket history tells you everything you need to know about code that's in production. You don't need anything more than this.

Also, an attempt to do more statistical analysis of production code is largely doomed because it appears (to most managers) as zero-value work.

Software Engineering Metrics (Cyclomatic Complexity, for example) are used for their predictive power. There's no point in using them for post-production analysis.

Metrics as Leading Indicators

Metrics are a handy filter as part of an overall QA process. The point is this: sometimes they're a leading indicator of code smells.

Metrics have to be one part of the overall QA process. For example:
  1. All code is inspected.
  2. Code with more suspicious metrics is inspected more closely; code with less suspicious metrics is not inspected as closely.
Now the questions become much more sensible. Can we quantify "suspicious" to support decision-making by thinking people?

Imagine this scenario. You establish a Cyclomatic Complexity threshold; you choose 5 as the upper limit on acceptable complexity.

Now what?

You start measuring code as it goes through development. And everything is between 5 and 15. What does that mean?

Until you inspect all that code, 5 doesn't mean anything.
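For concreteness, here's a minimal sketch of what that measurement might look like, approximating cyclomatic complexity by counting decision points with Python's standard ast module. It's an illustration, not a recommendation of any particular tool or threshold.

    # A rough approximation of cyclomatic complexity: 1 plus the number of
    # decision points in each function. Real tools count more constructs;
    # this is only a sketch for illustration.
    import ast
    import sys

    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)

    def approximate_complexity(source):
        """Map each function name to an approximate cyclomatic complexity."""
        results = {}
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                decisions = sum(1 for child in ast.walk(node)
                                if isinstance(child, DECISION_NODES))
                results[node.name] = 1 + decisions
        return results

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            with open(path) as source_file:
                for name, score in sorted(approximate_complexity(source_file.read()).items()):
                    print(path, name, score)

Run it over a directory of modules and you get numbers. The point above stands: until you've inspected the code, a 5 or a 15 doesn't tell you much.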

Inspect First, Measure Second

If, on the other hand, you start inspecting every piece of code, you'll learn a lot.
  1. Some inspections are boring. The code is good. End the meeting; move on quickly. (Few things are more awkward than a manager who feels the need to control people by using the entire half-hour.)
  2. Some inspections are hard. The code is confusing. Cut the meeting short; send the code back for rework and reschedule.
  3. Some inspections are contentious. Some folks like one approach; other folks can't reconcile themselves to it.
What's important is to use metrics to enhance the good stuff and expose the bad stuff. People still have to make the decisions. Metrics only help.

Find a metric that brackets the boring stuff to save you having to inspect every module that's "similar". Cyclomatic Complexity is popular for this. It's not the only thing, but it's popular. You can use feature count or lines of code, also. Short and sweet modules rarely suffer from code smell. But you still have to check them.

Find a metric that brackets the obviously bad stuff to alert you that something really bad is going on. Intervene and rework early and often. Large and complex modules are a leading indicator of a code smell. How large is too large? Inspect and decide.

Find a way to reduce the contention. Metrics -- because they're so simple -- are harder to fight over. A Cyclomatic Complexity of 20 is just too complex. Stop arguing and rework it. Often, bull-headed nerds can find a way to agree to a metrics program more easily than they can agree to detailed coding standards.
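Here's a sketch of that triage. The thresholds and module names are made up; the point is the three buckets, not the numbers.

    # Illustrative triage, assuming per-module metrics have already been
    # gathered. The thresholds (5, 50, 20) are placeholders; calibrate them
    # against your own inspection results.

    def triage(modules):
        """Bucket modules into quick-skim, normal inspection, and early rework."""
        buckets = {"skim": [], "inspect": [], "rework_now": []}
        for name, metrics in modules.items():
            if metrics["cyclomatic"] <= 5 and metrics["lines"] <= 50:
                buckets["skim"].append(name)        # boring: still checked, but briefly
            elif metrics["cyclomatic"] >= 20:
                buckets["rework_now"].append(name)  # obviously bad: intervene early
            else:
                buckets["inspect"].append(name)     # everything else: normal inspection
        return buckets

    modules = {
        "billing.py":   {"cyclomatic": 3,  "lines": 40},
        "routing.py":   {"cyclomatic": 12, "lines": 300},
        "legacy_io.py": {"cyclomatic": 27, "lines": 1200},
    }
    print(triage(modules))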

Which Metric?

That's the tough problem. In a vacuum, of course, it's an impossible question.

Given an inspection process, however, adding metrics to tune and enhance the existing inspection process can make sense.

There are many software engineering metrics to choose from. Wikipedia has a list: http://en.wikipedia.org/wiki/Software_metric.

Pick some at random and see if they correlate in any way with inspection results. If they do, you can trim down your inspection time. If they don't, pick other metrics until you find some that do.
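As a sketch of that correlation check: record a metric and an inspection outcome per module, then see whether they track each other. The numbers below are invented, and statistics.correlation needs Python 3.10 or later.

    # Does a candidate metric track inspection outcomes? The data here is
    # hypothetical: metric values per module, and issues raised in review.
    from statistics import correlation  # Python 3.10+

    metric_values    = [3, 5, 8, 12, 21, 34]   # e.g., cyclomatic complexity
    issues_in_review = [0, 1, 1, 3, 6, 9]      # e.g., findings per inspection

    r = correlation(metric_values, issues_in_review)
    print("Pearson r =", round(r, 2))
    # If r stays strong release after release, the metric can help you shorten
    # some inspections. If it doesn't, try a different metric.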

But only use metrics to support your code inspection process.

Tuesday, February 23, 2010

Numerosity -- More Metrics without Meaning

Common Complaint: "This was the nth time that someone was up in arms that [X] was broken ... PL/SQL that ... has one function that is over 1,500 lines of [code]."

Not a good solution: "Find some way to measure 'yucky code'."

Continuing down a path of relatively low value, the question included this reference: "Using Metrics to Find Out if Your Code Base Will Stand the Test of Time," Aaron Erickson, Feb 18, 2010. The article is quite nice, but the question abuses it terribly.

For example: "It mentions cyclomatic complexity, efferent and afferent coupling. The article mentions some tools." Mentions? I believe the article defines cyclomatic complexity and gives examples of it's use.

Red Alert. There's no easy way to "measure" code smell. Stop trying.

How is this a path of low value? How can I say that proven metrics like cyclomatic complexity are of low value? How dare I?

Excessive Measurement

Here's why the question devolves into numerosity.

The initial problem is that a piece of code is actually breaking. Code that breaks repeatedly is costly: disrupted production, time to repair, etc.

What further metric do you need? It breaks. It costs. That's all you need to know. You can judge the cost in dollars. Everything else is numerosity.

A good quote from the article: "By providing visibility into the maintainability of your code base—and being proactive about reducing these risks—companies can significantly reduce spend on maintenance". The article is trying to help identify possible future maintenance.

The code in question is already known to be bad. What more information is needed?

What level of Cyclomatic Complexity is too high? Clearly, that piece of code was already too complex. Do you need a Cyclomatic Complexity number to know it's broken? No, you have simple, direct labor cost that tells you it's broken. Everyone already agrees it's broken. What more is required?

First things first: It's already broken. Stop trying to measure. When the brakes have already failed, you don't need to measure hydraulic pressure in the brake lines. They've failed. Fix them.

The Magical Number

The best part is this. Here's a question that provides much insight into the practical use of Cyclomatic Complexity: http://stackoverflow.com/questions/20702/whats-your-a-good-limit-for-cyclomatic-complexity.

Some say 5, some say 10.

What does that mean? Clearly code with a cyclomatic complexity of 10 is twice as bad as a cyclomatic complexity of 5. Right? Or is the cost function relatively flat, and 10 is only 5% worse than 5? Or is the cost function exponential and 10 is 10 times worse than 5? Who knows? How do we interpret these numbers? What does each point of Cyclomatic complexity map to? (Other than if-statements.)

Somehow both 5 and 10 are "acceptable" thresholds.

When folks ask how to use this to measure code smell, it means they're trying to replace thinking with counting. Always a bad policy.

Second Principle: If you want to find code smells, you have to read the code. When the brakes are mushy and ineffective, you don't need to start measuring hydraulic pressure in every car in the parking lot. You need to fix the brakes on the car that's already obviously in need of maintenance.

Management Initiative

Imagine this scenario. Someone decides that the CC threshold is 10. That means they now have to run some metrics tool and gather the CC for every piece of code. Now what?

Seriously. What will happen?

Some code will have a CC score of 11. Clearly unacceptable. Some will have a CC score of 300. Also unacceptable. You can't just randomly start reworking everything with CC > 10.

What will happen?

You prioritize. The modules with CC scores of 300 will be reworked first.

Guess what? You already knew they stank. You don't need a CC score to find the truly egregious modules. You already know. Ask anyone which modules are the worst. Everyone who reads the code on a regular basis knows exactly where actual problems are.

Indeed, ask a manager. They know which modules are trouble. "Don't touch module [Y], it's a nightmare to get working again."

Third Principle: You already know everything you need to know. The hard part is taking action. Rework of existing code is something that managers are punished for. Rework is a failure mode. Ask any manager about fixing something that's rotten to the core but not actually failing in production. What do they say? Everyone -- absolutely everyone -- will say "if it ain't broke, don't fix it."

Failure to find and fix code smells is entirely a management problem. Metrics don't help.

Dream World

The numerosity dream is that there's some function that maps cyclomatic complexity to maintenance cost. In dollars. Does that mean this formula magically includes organization overheads, time lost in meetings, and process dumbness?

Okay. The sensible numerosity dream is that there's some function between cyclomatic complexity and effort to maintain in applied labor hours. That means the formula magically includes personal learning time, skill level of the developer, etc.

Okay. A more sensible numerosity dream is that there's some function between cyclomatic complexity and effort to maintain in standardized labor hours. Book hours. These have to be adjusted for the person and the organization. That means the formula magically includes factors for technology choices like language and IDE.

Why is it so hard to find any sensible prediction from specific cyclomatic complexity?

Look at previous attempts to measure software development. For example, COCOMO. Basic COCOMO has a nice rate × time = distance kind of formula. Actually it's E = a × K^b, but the idea is that you have a simple function with one independent variable (thousands of lines of code, K), one dependent variable (effort, E) and some constants (a, b). A nice Newtonian and Einsteinian model.
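For reference, here's a sketch of Basic COCOMO in its organic-mode form, using the commonly published coefficients. The 32 KLOC input is made up; the point is how little the constants explain on their own.

    # Basic COCOMO, organic mode, with the commonly published coefficients:
    # effort E = 2.4 * K**1.05 (person-months), duration D = 2.5 * E**0.38 (months),
    # where K is thousands of delivered source lines.

    def basic_cocomo_organic(kloc):
        effort_pm = 2.4 * kloc ** 1.05
        duration_months = 2.5 * effort_pm ** 0.38
        return effort_pm, duration_months

    effort, duration = basic_cocomo_organic(32)   # a hypothetical 32 KLOC system
    print("effort ~%.0f person-months, duration ~%.1f months" % (effort, duration))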

Move on to intermediate COCOMO and COCOMO II. At least 15 additional independent variables have shown up. And in COCOMO II, the number of independent variables is yet larger with yet more complex relationships.

Fourth Principle: Software development is a human endeavor. We're talking about human behavior. Measuring hydraulic pressure in the brake lines will never find the idiot mechanic who forgot to fill the reservoir.

Boehm called his book Software Engineering Economics. Note the parallel. Software engineering -- like economics -- is a dismal science. It has lots of things you can measure. Sadly, the human behavior factors create an unlimited number of independent variables.

Relative Values

Here's a sensible approach: "Code Review and Complexity". They used a relative jump in Cyclomatic Complexity to trigger an in-depth review.
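A minimal sketch of that idea, assuming you can score each function's complexity in two revisions of a module. The threshold and the names are made up.

    # Flag functions whose complexity jumped sharply between revisions.
    JUMP_THRESHOLD = 1.5   # illustrative: a 50% jump triggers an in-depth review

    def flag_jumps(old_scores, new_scores):
        """Return (name, old, new) for functions whose complexity grew sharply."""
        flagged = []
        for name, new_cc in new_scores.items():
            old_cc = old_scores.get(name, new_cc)
            if old_cc and new_cc / old_cc >= JUMP_THRESHOLD:
                flagged.append((name, old_cc, new_cc))
        return flagged

    old = {"post_invoice": 6, "apply_discount": 4}
    new = {"post_invoice": 6, "apply_discount": 11, "void_invoice": 3}
    print(flag_jumps(old, new))   # [('apply_discount', 4, 11)]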

Note that this happens at development time.

Once it's in production, no matter how smelly, it's unfixable. After all, if it got to production, "it ain't broke".

Bottom Lines
  1. You already know it's broken. The brakes failed. Stop measuring what you already know.
  2. You can only find smell by reading the code. Don't measure hydraulic pressure in every car: find cars with mushy brakes. Any measurement will be debated down to a subjective judgement. A CC threshold of 10 will have exceptions. Don't waste time creating a rule and then creating a lot of exceptions. Stop trying to use metrics as a way to avoid thinking about the code.
  3. You already know what else smells. The hard part is taking action. You don't need more metrics to tell you where the costs and risks already are. It's in production -- you have all the history you need. A review of trouble tickets is enough.
  4. It's a human enterprise. There are too many independent variables; stop trying to measure things you can't actually control. You need to find the idiot who didn't fill the brake fluid reservoir.

Friday, February 19, 2010

Information Technology -- It's all about Decision-Making

Check this SD Times article out: Future of data analysis lies in tools for humans, not automatic systems.

"Andreas Weigend... said that “data is only worth as much as the decisions made based on that data."

This is the entire point of IT: IT Systems Support Decision-Making. The job is not to "automate" decision-making with a bunch of business rules. The job is to create systems to support decision-making by people. Buying a tool that allows "end-users" to drag and drop icons to create workflows and business rules misses the point. Automating everything isn't helpful.

Information should be classified and categorized to facilitate decision-making.

People need to be in the loop.

Management by exception can only happen when people see the data, can analyze, categorize, summarize and -- by manipulating the data -- discover outliers and unusual special cases.

Too many systems attempt to leave people out of the loop.

Business Rules

A canonical example of business rule processing is credit checks or discounts in an order processing system. This requires integrating a lot of information, and making a decision based on ordering history, credit-worthiness, etc., etc.

In some cases, the decision may be routine. But even then, it is subject to some review to be sure that management goals are being met. Offering credit or discounts is a business strategy decision -- it has a real dollar-valued impact. A person owns this policy and needs to be sure that it makes business sense.

These decisions are not just "if-statements" in BPML or Java or something. They are larger than this.

One good design is to queue up the requests, sorted into groups by their relative complexity, so a person can view the queues and make (or confirm) the automated decisions. It's a boring job, but they're doing management by exception. They own the problems, the corner cases, the potential fraud cases, the suspicious cases. They should have an incentive payment for every real problem they solve.
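A minimal sketch of that design follows. The field names and the scoring rule are hypothetical; the point is that requests land in queues that a person works, rather than vanishing into fully automated if-statements.

    # Sort incoming orders into review queues by a crude complexity score.
    from collections import defaultdict

    def review_complexity(order):
        """Crude, illustrative classification of an order's review complexity."""
        if order["amount"] > 50000 or order["credit_flags"]:
            return "suspicious"       # owned by a person, always
        if order["new_customer"]:
            return "review"           # needs a quick human confirmation
        return "routine"              # automated decision, spot-checked

    def build_review_queues(orders):
        queues = defaultdict(list)
        for order in orders:
            queues[review_complexity(order)].append(order)
        return queues

    orders = [
        {"id": 1, "amount": 1200,  "credit_flags": [],       "new_customer": False},
        {"id": 2, "amount": 80000, "credit_flags": ["late"], "new_customer": False},
        {"id": 3, "amount": 4500,  "credit_flags": [],       "new_customer": True},
    ]
    for queue, items in build_review_queues(orders).items():
        print(queue, [o["id"] for o in items])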

You give up "real-time" because there's a person in the loop. For small value, high-volume consumer purchases, you may not want a person in the loop. Most of us, however, are not Amazon.com. Most of us have businesses that are higher value and smaller volume. People will look at the orders anyway.

All IT Systems must facilitate and simplify manual review. Even if they can automate, the record of the decisions made should be trivial to review. Screen shots or log scraping or special-purpose audit/extract programs mean the application doesn't correctly put people into the process.

"Automated" Data Mining

In most of the data warehousing projects I've worked on, folks have been interested in the idea of "automated data mining" discovering something novel in their data. For example, one of the banks I worked with was hoping for some kind of magical analysis of risk in their loan portfolio.

Data Mining is highly constrained by the implicit causal models that people already have. There's a philosophical issue with attempting to correlate random numbers in a database and then trying to reason out some theory or model for those correlations. The science of going from observation to theory requires actual thinking. There's a long analysis here: http://theoryandscience.icaap.org/content/vol9.2/Chong.html.

Indeed, the only possible point of data mining is not to discover something completely unexpected, but to confirm the details of something suspected but hidden by noise or complexity. People formulate models; they confirm (or reject) them with tools like a data warehouse with some data mining analytics.

Monday, February 15, 2010

Enterprise Applications (Revised)

Enterprise Applications really make people sweat. Look at this selection of StackOverflow questions. There are hundreds. People really get worked into a lather over this.

There's an important subtext to this. Your favorite tool (Python, PHP, LAMP) is not Enterprise Ready. My favorite tool is better because it's Enterprise Scale.

Some folks will reject the subtext and try to say that these are reasonable questions. Until push comes to shove and no one seems to be able to define "Enterprise Ready". Words like scalable and reliable crop up in vague hand-waving ways. But without a clear yardstick for Enterprise Scale, the term has no useful meaning.

It's important to separate useful considerations from deprecating something you don't like. In reading the Stack Overflow questions, I've figured out what the political consideration behind Enterprise Scale might be.

Mission Critical

In many cases, Enterprise-Scale is taken to mean that the software can be trusted to handle Mission-Critical or Business-Critical computing. Sadly, even this doesn't mean much. Numerous businesses do bad things and yet remain in business. For example, TJ Maxx suffered a huge theft of information, and they remain in business. In this case, the software that was compromised was -- somehow -- not actually business critical. The software failed; they're still in business.

[Information loss is not a zero sum game; information compromise is not like theft of tangible goods. However, everyone would say that credit card processing is mission critical. Everyone.]

We can use words like "critical", but actual destructive testing -- live business, live data, live bad-guys -- showed that it wasn't "critical". It was central, conspicuous and important. Based on the evidence, we need a new word, other than "critical".

Working Definition of Enterprise Scale

In talking with a sysadmin about installs, it occurred to me what the politically-motivated definition of Enterprise Scale is:
The install is not a "next-next-done" wizard.

Desktop and "departmental" applications have easy-to-use installers with few options and simple configurations. Therefore, people who don't like them can easily say their not Enteprise Scale.

Some folks aren't happy with Enterprise applications unless they have configurations so complex and terrifying that it takes numerous specialists (Sysadmins, DBA's, programmers, managers, business analysts and users) to install and configure the application.

That's how some folks know that a LAMP-based application stack involving Python can't be enterprise-ready. Python and MySQL install with "next-next-done" wizards. The application suite installs with a few dozen easy_install steps followed by a database build script. They will then spend hours talking around numerous tangential, ill-defined, hard-to-clarify issues to back up what they know.

Anything that's simple can't scale.

This is the subtext of many "your application or tool isn't enterprise scale" arguments.

Tuesday, February 9, 2010

Layers of Management == Layers of Veto

In an organization with more than one layer of management, the default answer must be "no". Folks get needlessly frustrated by this. But it's a logical consequence of multiple layers of management.

Consider that direction must come down from above. If you're suggesting something to your manager (or making a suggestion in your role as an outsider), the response must be "no". What you're suggesting is not the direction that came from above.

If any manager not at the top says "yes" to you -- an underling or outsider -- they've just committed an act of insubordination to their actual manager.

All managers in positions other than the apex must say "no" or be insubordinate. And the top manager has "outside pressure" to say "no". Approval is largely impossible to obtain except at the very top.

Variations on the Theme

A "no" can come in a variety of forms.

For organizations that are CMM level 1, it's simply "no", without much justification. All ideas that didn't come down from above are inappropriate or unfunded or simply "not on our radar".

For organizations in CMM level 2, there are more complex and ritualistic forms of "no". Often they are filtered by "if it makes business sense." However, making business sense is largely impossible. Marketing doesn't have to jump through elaborate hoops to justify spending money on advertising. They mostly just do it, using vague back-of-the-envelope justification.

For CMM level 3 or higher organizations, there's an approval process that will net a big old "no" after a lot of work following the defined process.

What Gets Done?

Generally, ideas that come down from the top have a pre-approved "yes". After all, what comes from above was already in this year's budget. It was scheduled for this year. The schedule is sacred.

In a CMM 1 organization, you just do what you're told. In a CMM 2 organization, there's some kind of "management" veneer wrapped around the word from on high. In CMM 3 organizations, there's a process to rubber-stamp the stuff that trickles down from the top.

Process doesn't always help. Projects that are ill-defined -- but a pet project of a development director or CIO -- will still be vetted and approved. Projects that come from outside IT are often "more valuable" and get more ready approval, even though there are no more details in those project descriptions.

Projects that bubble up from below, however, have to be rejected. They weren't in the budget.

Fixing The Approval Process

You pretty much can't fix the approval process. Taking things upstairs to your manager is -- generally -- insubordination. You weren't told to do it, so you're wasting your time, time that could be spent on things that were in the budget for this year.

You can be lucky enough to work for an organization that has little or no management. Such "entrepreneurial" organizations are characterized by more "yes" than "no"; more importantly, an entrepreneurial organization simply has very little management. (Nothing is funnier than training managers to be entrepreneurial. You want entrepreneurial? Make the managers actually write code.)

The only thing you can do in a large organization is to take hostages. If it's a good enough idea, you have to start doing it anyway. Once people catch you at it, then you have to explain what a good idea it is. It will be an uphill fight all the way. No one will ever "greenlight" your idea.

If you're doing the right thing, it may be a struggle, and you will be in trouble until it appears in next year's budget. Then someone else will be assigned to it.

Interestingly, it more-or-less must work this way. Sadly, smart technical people are often unhappy with this. They either don't want to fight for their ideas, or they fight too stridently for them. There's a happy medium: pushing a good idea without alienating the managers who are forced to reject what is obviously a good idea.

Monday, February 8, 2010

Controlling the Message

I finally figured out what is so bad about folks who need to "control the message."

Architecture is as much politics as technology. And some folks think that political spin and message control is required. I think it's a mistake because the urge to control the message points up a deeper problem with the message itself.

Here's one of many examples of architecture being as much politics as technology.

A really good platform for web development is Linux, Apache, MySQL, Python, Django and mod_wsgi. Really good. Inexpensive, scalable, simple, and reliable. Standards-compliant.

However, when a client asks what they should use, that's almost never an acceptable answer. If the client is a VB/ASP shop, you can't tell them to ditch their installed technology. You can suggest looking at C# and ASPX and .NET, but even that simple change is going to get you into arguments over the upgrade costs. Many times customers just want Pixie Dust that magically makes things "better" without being a disruptive change.

("What about the personnel cost of a change? You can't ignore the people" Yes, actually, I can. Those people where not considered when VB and ASP were introduced. If they were, they'd still be using COBOL and CICS. Why consider them now when it's time to make another change? The "personnel cost" question is an excuse to veto change. The people will be more productive using C#. Start changing, stop preventing.)

Right vs. Wrong

I've been told that politics comes first when making a technical decision. That's perhaps the dumbest, most defeatist attitude I've ever heard (and I told them that, more than once.) There are right answers. There are -- in some cases -- answers that are absolutely right, but a political problem.

Anyone who puts politics first rejects the opportunity to know what's right. It's essential to have facts and evidence for the technically right answer, even if you never use those facts, and simply accept the politically-determined sub-optimal answer.

And there's no reason to be a jerk about political decisions. They may be bad decisions made for political reasons. They may be good decisions made for the wrong reasons. Your hard-won technical input may get disregarded. Publish your findings and move on.

My number one example of this is the slowness of most stored procedure languages. Benchmarks reveal that they're slow when compared with Java or even Python. That shouldn't be a surprise, but the politically-motivated response is that Stored Procedures have a place and will be used in spite of the problems. Politics trumps technology.

Controlling the Message

In my trips to the top of the IT food chain -- the CIO's office -- it seems like everything is political. It may not be, but the technical content is approximately zero and is often diluted by the political considerations of influence, favors and power.

I'm sure this is sampling bias. I don't meet a lot of CIO's. And I don't meet them to talk about things on which they are already deeply knowledgeable. I'm there because folks are lost and struggling.

The thing I like least, however, is any "Controlling the Message" part of the discussion. It happens in governmental politics as much as it happens in technology. In government, it's particularly odious. Elected officials often have separate public and private messages. They have a public message that is nothing more than pandering to the "shouting class" at the party fringe.

Why Control It?

Here's my lesson learned. If you need to control the message, then the message itself is flawed.

If the communication "requires appropriate background", or if the remarks "could be misinterpreted if taken out of context", then the message is inappropriate or -- perhaps -- wrong.

Here's an example. We're talking about teaching C programming. The customer says "We're not planning on re-educating a lot of our existing staff, we're mostly going to train the new hires." This message needed to be "controlled".

Why can't we simply say that we're going to teach C to new hires?

Because that message is fundamentally flawed. So it needs to be controlled to hide the flaws.

If you think you need to control your message, take the hint and rethink your message.

Thursday, February 4, 2010

ALM Tools

There's a Special Report in the January 15 SDTimes with a headline that bothers me -- a lot. In the print edition, it's called "Can ALM tame the agile beast?". Online it's ALM Tools Evolve in the Face of Agile Processes.

The online title makes a lot more sense than the print title. The print title is very disturbing. "Agile Beast?" Is Agile a bad thing? Is it somehow out of control? It needs to be "tamed"?

The article makes the case -- correctly -- that ALM tools are biased toward waterfall projects with (a) long lead times, (b) a giant drop of deliverables, and (c) a sudden ending. Agile projects often lack these attributes.

The best part of the special report is the acknowledgement that "barriers between developers and QA are disappearing". TDD works to blur the distinction between test and development, which is a very good thing. Without unit tests, how do you know you're finished coding?

How Many Tools Do We Need?

The point in the article was that the ALM vendors have created a collection of tools, each of which seems like a good idea. However, it's too much of the wrong thing for practical, Agile project management.

The article claims that there were three tools for requirements, tests and defects. I've seen organizations with wishlists that are much bigger than these three. The Wikipedia ALM Article has an insane list of 16 tools (with some overlaps).

We can summarize these into the following eight categories, based on the kind of information kept. Since the boundaries are blurry, it isn't sensible to break them up by who uses them.
  • Requirements - in user terms; the "what"
  • Modeling and Design - in technical terms; an overview of "how"
  • Project Management (backlog, etc.) - requirements and dates
  • Configuration Management - technology components
  • Build Management - technology components
  • Testing - components, tests (and possibly requirements)
  • Release and Deployment - more components
  • Bug, Issue and Defect Tracking - user terms, requirements, etc.
Agile methods can remove the need for most (but not all) of these categories of tools. If the team is small, and really collaborating with the users, then there isn't the need to capture a mountain of details as well as complex management overviews for slice-and-dice reporting.

YAGNI

Here's a list of tools that shouldn't be necessary -- if you're collaborating.
  • Requirements have an overview in the backlog, on the scrumboard. Details can be captured in text documents written using simple markup like RST or Markdown. You don't need much because this is an ongoing conversation.
  • Modeling and Design is a mixture of UML pictures and narrative text. Again, simpler tools are better for this. Tool integration can be accomplished with a simple web site of entirely static content showing the current state of the architecture and any detailed designs needed to clarify the architecture. Write in RST, build it with Sphinx.
  • Project Management should be simply the backlog. This is digested into periodic presentations to various folks outside the scrum team. There isn't much that can be automated.
For UML pictures, ARGO UML is very nice. Here's a more complete list of Open Source UML Tools from Wikipedia.

Configuration Management

This is, perhaps, the single most important tool. However, there are two parts to this discipline: revision control and continuous integration.

For Revision Control, Subversion works very nicely.

Continuous Integration

The more interesting tools fall under the cover of "Continuous Integration". Mostly, however, this is just automation of some common tasks.
  • Build Management might be interesting for complex, statically compiled applications. Use of a dynamic language (e.g., Python) can prevent this. Build management should be little more than Ant, Maven or SCons.

    Additional tools include the Build Automation list of tools.

  • Testing is part of the daily build as well as each developer's responsibility. It should be part of the nightly build, and is simply a task in the build script.

    Overall integration or acceptance testing, however, might require some additional tools to exercise the application and confirm that some suite of requirements is met. It may be necessary to have a formal match-up between user stories and acceptance tests.

    There's a Wikipedia article with Testing Tools and Automated Testing. Much of this is architecture-specific, and it's difficult to locate a generic recommendation.

  • Release and Deployment can be complex for some architectures. The article on Software Deployment doesn't list any tools. Indeed, it says "Because every software system is unique, the precise processes or procedures within each activity can hardly be defined."

    Something that's important is a naming and packaging standard, similar to that used by RPM's or Python .egg files. It can be applied to Java .EAR/.WAR/.JAR files. Ideally, the installed software sits in a standard directory (under /opt) and a configuration file determines which version is used for production.

    Perhaps most important is the asset tracking, configuration management aspect of this. We need to plan and validate what components are in use in what locations. For this BCFG2 seems to embody a sensible approach.

For most build, test and release automation, SCons is sufficient. It's easily extended and customized to include testing.
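As an illustration, here's a minimal SConstruct that wires a unit-test run into the build. The test layout and the command are assumptions, not a prescription; run it with "scons test".

    # SConstruct -- a sketch of build-and-test automation with SCons.
    # Assumes unit tests live in tests/test_*.py and run with unittest discovery.
    test_sources = Glob('tests/test_*.py')

    test_results = Command(
        'test-results.txt',        # target: captured test output
        test_sources,              # rerun when any test module changes
        'python -m unittest discover -s tests > $TARGET 2>&1'
    )
    Alias('test', test_results)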

More elaborate tools are listed in the Continuous Integration article.

Customer Relationship Management

The final interesting category isn't really technical. It includes tools for Bug, Issue and Defect Tracking. This is about being responsive to customer requests for bug fixes and enhancements.

The Comparison of Issue Tracking Systems article lists a number of products. Bugzilla is typical, and probably does everything one would actually require.

Old and Busted

I've seen organizations actively reject requirements management tools and use unstructured documents because the tool (Requisite Pro) imposed too many constraints on how requirements could be formalized and analyzed.

This was not a problem with the tool at all. Rather, the use of a requirements management tool exposed serious requirements analysis and backlog management issues. The tool had to be dropped. The excuse was that it was "cumbersome" and didn't add value.

[This same customer couldn't use Microsoft Project, either, because it "didn't level the resources properly." They consistently overbooked resources and didn't like the fact that this made the schedule slip.]

When asked about requirements tools, I suggest people look at blog entries like this one on Create a Collaborative Workspace or these pictures of a well-used scrumboard.

Too much software can become an impediment. The point of Agile is to collaborate, not use different tools. Software tools can (and do) enforce a style of work that may not be very collaborative.

Bottom Line

Starting from the ALM overview, there are potentially a lot of tools.

Apply Agile methods and prune away some of the tools. You'll still want some design tools to help visualize really complex architectures. Use Argo UML and plain text.

Developers need source code revision control. Use Subversion.

Most everything else will devolve to "Continuous Integration", which is really about Build and Test, possibly Release. SCons covers a lot of bases.

You have some asset management issues (what is running where?) There's a planning side of this as well as an inventory side of confirming the configuration. Use BCFG2.

And you have customer relationship management issues (what would you like to see changed?) Use Bugzilla.

Monday, February 1, 2010

Project Narrative Arc -- Is there a "middle"?

See Paul Glen's opinion piece on Projects and the Ungrounded Middle.

There's a subtle issue here that bothers me. It's the phony narrative arc imposed on a project.

Glen says that "managers talk about beginnings and endings". This is a -- potentially -- phony narrative arc we're told to wrap around a project. Some projects are small, clear and have more-or-less definite beginnings.

Glen also says that "The beginnings are abstract and ambiguous". This is a more useful reality. The project's official "beginning" is unrelated to the conversations and decisions in which it actually began.

Details Details

The biggest issue with the imposed narrative arc is that it is a distortion of the real beginning of the project. See "Aristotle's Poetics and Project Management". The official beginning, the kick-off meeting, and the official project charter may not match the real situation as understood by accounting (in paying for the project) or the users (in having expectations) or the product owner (in prioritizing the sprints).

It's risky trying to impose a definite start and charter on a project. It's likely to be wrong; at the very least it can be misleading.

The details matter, and imposing a phony narrative arc can elide or obscure the details.

Better Practice

Background and context leading up to a project are very, very important and shouldn't be elided. While every little detail in the entire lead-up to a project isn't helpful, what is interesting is changing the concept of inception of a project.

From the first conversations, we should stop looking at all projects as discrete, unified things with a Beginning, Middle and End. Some projects may be discrete, but many software development projects will have a large backlog, numerous deliverables, constant upgrades, improvements and "maintenance" and will be a career move more than a project.

We have to realize that the very first, preliminary conversations about software involve a prioritized backlog of feature requests.

Later, when we try to impose our standard narrative framework, and have a kick-off meeting, we already understand the project as a series of sprints to build some things. The "deliverables" are ranked from most valuable to least valuable.

Don't Create a Middle

Honesty about the lack of a clear start allows us -- from the very beginning -- more latitude to rethink and re-prioritize. There is no separate moment when clarity sets in. It was always clear that the end-state was "enough" software, not "all of these features and nothing but these features". We avoid the ungrounded middle, and its emotional low point.

We avoid the ungrounded middle because we don't impose a phony narrative arc on the enterprise. We can be much more clear that we're involved in a sequence of sprints. We can pace ourselves for the long haul.