Moved

Moved. See https://slott56.github.io. All new content goes to the new site. This is a legacy site, and it will likely be dropped five years after the last post (January 2023).

Monday, December 28, 2009

The Secret Architect's Cabal

Recently, I had two very weird "meta" questions on the subject of OO design.

They bother me because they imply that some Brother or Sister Architect has let slip the presence of the Secret Technologies that we Architects are hiding from the Hoi Polloi developers.

These are the real questions. Lightly edited to fix spelling and spacing.
  • "What are the ways to implement a many to many association in an OO model?"
  • "Besides the relational model, what other persistence mechanisms are available to store a many to many association?"
These are "meta" questions because they're not asking anything specific about a particular data model or set of requirements. I always like unfocused questions because all answers are good answers. Focus allows people to see through the smoke and mirrors of Architecture.

The best part about these questions (and some similar questions that I didn't paste here) is that they are of the form "Is there a secret technique you're not telling us about?"

It's time to come clean. There is a Secret Cabal of Architects. There are things we're not telling you.

Many-to-Many

The many-to-many question shows just how successful the Society of Secrets (as we call ourselves) has been at creating a SQL bias. When folks draw higher-level data model diagrams that imply (but don't show) the required many-to-many association table, the Architects have failed. In other organizations the association table is So Very Important that it is carefully diagrammed in detail. This is a victory for forcing people to think only in implementation details.

In the best cases, the DBA's treat the association table as part of the "dark art" of being a DBA. It's something they have to dwell on and wring their hands over. This leads to developers getting wrapped around the axle because the table isn't a first-class part of the data model, but is absolutely required as part of SQL joins.

It's a kind of intellectual overhead that shows how successful the Secret Architecture Society is.

The presence of a dark secret technique for implementing association leads to smart developers asking about other such intellectual overhead. If there's one secret technique, there must be many, many others.

It is to laugh.

The Secret Techniques for Associations

The problem arises when someone asks about the OO implementation of many-to-many associations. It's really difficult to misdirect developers when the OO implementation is mostly trivial and not very interesting. There's no easy way to add complexity.

In Python there are a number of standard collections. Several are built into the language. Plus, in Python 2.6, the collections module has Abstract Base Classes that clearly identify all of the collections.
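For instance, here's a minimal sketch -- hypothetical Author and Book classes, not from any of the questions I was asked -- of a many-to-many association built from nothing but built-in collections:

    class Author(object):
        """One end of the many-to-many association."""
        def __init__(self, name):
            self.name = name
            self.books = set()      # Books this Author wrote

    class Book(object):
        """The other end of the association."""
        def __init__(self, title):
            self.title = title
            self.authors = set()    # Authors of this Book

    def associate(author, book):
        """Keep both ends of the association consistent."""
        author.books.add(book)
        book.authors.add(author)

    knuth = Author("Knuth")
    taocp = Book("The Art of Computer Programming")
    associate(knuth, taocp)

Each object simply holds a collection of references to the other side. That's the whole secret.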

There isn't too much more to say on the subject of many-to-many associations. That makes it really hard to add secret layers and create value as an architect.

The best I can do with questions like this is say "I was sworn to secrecy by the secret Cabal of Architects, so I can't reveal the many-to-many association techniques in a general way. Please get the broomstick of the Wicked Witch of the West if you want more answers."

Persistence

The persistence question, however, was a gift. When someone equates "relational model" with a "persistence mechanism", we have lots of room to maneuver. I know that we're talking about a "relational database" as a "persistence mechanism". However, it's clear they don't know that, and that's my opportunity to sow murkiness and confusion.

Sadly, the OS offers us exactly one "persistence mechanism". Yet, the question implies that the Secret Cabal of Architects know about some secret "alternative persistence mechanisms" that mere programmers can't be told about.

Every device with a changeable state appears as a file. All databases (relational, hierarchical, object, whatever) are just files tarted up with fancy API's that allow for performance and security. Things like indexing, locking, buffering, access controls, and the like are just "features" layered on top of good-old files. But those features are So Very Important that they appear to be part of persistence.

Excellent.

Logical vs. Physical

What's really helpful here is the confusion folks have with "Logical" vs. "Physical" design layers.

Most DBA's claim (and this is entirely because of ERwin's use of the terms) that physical design is when you add the database details to a logical design. This is wrong, and it really helps the Architect Secret Society when a vendor misuses common terms like that.

The Physical layer is the file-system implementation. Table spaces and blocks and all that what-not that is the underlying persistence.

The Logical layer is what you see from your API's: tables and views.

The relational database cleanly separates logical from physical. Your applications do not (indeed, can not) see the implementation details. This distinction breaks down in the eyes of DBA's, however, and that lets us insert the idea that a database is somehow more than tarted-up files.

Anyone asking about the "relational model" and "persistence mechanism" has -- somehow -- lost focus on what's happening inside the relational database. This allows us to create Architectural Value by insisting that we add a "Persistence Layer" underneath (or on top of or perhaps even beside) the "Database Layer". This helps confuse the developers by implying that we must "isolate" the database from the persistence mechanism.

Many-to-many and ORM

Sadly, these two questions may turn out to be ORM questions. The problem with ORM layers is that the application objects are trivially made persistent. It's really hard to add complexity when there's an ORM layer.
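To see how little room there is, here's a minimal sketch using SQLAlchemy's declarative mapping (one ORM among many; the Author and Book classes are hypothetical, and the import locations assume SQLAlchemy 1.4 or later). The dreaded many-to-many association is just a table with two foreign keys, declared once:

    from sqlalchemy import Table, Column, Integer, String, ForeignKey
    from sqlalchemy.orm import declarative_base, relationship

    Base = declarative_base()

    # The association table: two foreign keys, nothing more.
    author_book = Table(
        "author_book", Base.metadata,
        Column("author_id", ForeignKey("author.id"), primary_key=True),
        Column("book_id", ForeignKey("book.id"), primary_key=True),
    )

    class Author(Base):
        __tablename__ = "author"
        id = Column(Integer, primary_key=True)
        name = Column(String)
        books = relationship("Book", secondary=author_book,
                             back_populates="authors")

    class Book(Base):
        __tablename__ = "book"
        id = Column(Integer, primary_key=True)
        title = Column(String)
        authors = relationship("Author", secondary=author_book,
                               back_populates="books")

Once the mapping exists, the ORM session handles the persistence. There's very little left to mystify.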

However, a Good Architect can sometimes find room to maneuver.

A programmer with SQL experience will often think in SQL. They will often try to provide a specific query and ask how that SQL query can be implemented in the ORM layer. This needs to be encouraged. It's important to make programmers think that the SQL queries are First Class Features. The idea that class definitions might map directly to the underlying data must be discouraged.

A good DBA should insist on defining database tables first, and then applying the ORM layer to those tables. Doing things the other way around (defining the classes first) can't be encouraged. Table-first design works out really well for imposing a SQL-centered mind-set on everyone. It means that simple application objects can be split across multiple tables (for "performance reasons") leading to hellish mapping issues and performance problems.

No transaction should make use of SQL set-oriented processing features. Bulk inserts are a special case that should be done with the database-supplied load application. Bulk updates indicate a design problem. Bulk deletes may be necessary, but they're not end-user oriented transactions. Bulk reporting is not transactional and should be done in a data warehouse.

Subverting the ORM layer by "hand-designing" the relational database can create a glorious mess. Given the performance problems, some DBA's will try to add more SQL. Views and Dynamic Result Sets created by Stored Procedures are good ways to make the Architecture really complex. The Covert Coven of Architects likes this.

Sometimes a good developer can subvert things by creating a "hybrid" design where some of the tables have a trivial ORM mapping and work simply. But. A few extra tables are kept aside that don't have clean ORM mappings. These can be used with manually-written SQL. The best part is populating these extra tables via triggers and stored procedures. This assures us that the architecture is so complex that no one can understand it.

The idea of separating the database into Logical and Physical layers hurts the Architectural Cabal. Wrapping the Logical layer with a simple ORM is hurtful, too. But putting application functionality into the database -- that really helps make Architecture appear to be magical.

The Persistence Mechanisms

The bottom line is that the Secret Conference of Architects doesn't have a pat answer on Persistence Mechanisms. We have, however, a short list of misdirections.
  • API and API Design. This is a rat-hole of lost time. Chasing API design issues will assure that persistence is never really found.
  • Cloud Computing. This is great. The cloud can be a great mystifier. Adding something like the Python Datastore API can sow confusion until developers start to think about it.
  • Multi-Core Computing. Even though the OS handles this seamlessly, silently and automatically, it's possible to really dig into multi-core and claim that we need to rethink software architecture from the very foundations to rewrite our core algorithms to exploit multiple cores. Simply using Unix pipelines cannot be mentioned because it strips the mystery away from the problem.
  • XML. Always good for a few hours of misdirection. XML as a hierarchical data model mapped to a relational database can really slow down the developers. Eventually someone figures it out, and the Architect has nothing left to do.
  • EJB's. This is digging. It's Java specific and -- sadly -- trumped by simple ORM. But it can sometimes slow the conversation down for a few hours.

Sunday, December 20, 2009

The Data Cartel and "Users"

I work with a CIO who calls the DBA's "The Data Cartel". They control the data. Working with some DBA's always seems to turn into hostage negotiation sessions.

The worst problems seem to arise when we get out of the DBA comfort zone and start to talk about how the data is actually going to be used by actual human beings.

The Users Won't Mind

I had one customer where the DBA demanded we use some Oracle-supplied job -- running in crontab -- for the LDAP-to-database synchronization. I was writing a J2EE application; we had direct access to the database and the LDAP server. But to the data cartel, their SQL script had some magical properties that seemed essential to them.

Sadly, a crontab job introduces a mandatory delay while the user waits for the job to run and finish. This creates either a long transaction or a multi-step transaction where the user gets emails or checks back or something.

The DBA claimed that the delays and the complex workflow were perfectly acceptable to the users. The users wouldn't mind the delay. Further, spawning a background process (which could lead to multiple concurrent jobs) was unacceptable.

This kind of DBA decision-making occurs in a weird vacuum. The DBA simply made a claim about the users' needs: they wouldn't mind the delay. Since the DBA controls the data, we're forced to agree. And if we don't agree, what then? A file "accidentally" gets deleted?

The good news is that the crontab-based script could not be made to work in their environment in time to meet the schedule, so I had to fall back to the simpler solution of reading the LDAP entries directly and providing (1) immediate feedback to the user and (2) a 1-step workflow.

We wasted time because the data cartel insisted (without any factual evidence) that the users wouldn't mind the delays and complexity.

[The same DBA turned all the conversations on security into a nightmare by repeating the catch-phrase "we don't know what we don't know." That was another hostage negotiation situation: they wouldn't agree to anything until we paid for a security audit that illustrated all the shabby security practices. The OWASP list wasn't good enough.]

The Users Shouldn't Learn

Recent conversations occurred in a similarly vacuous environment.

It's not clear what's going on -- the story from the data cartel is often sketchy and missing details. But the gaps in the story indicate how uncomfortable DBA's are with people using their precious data.

It appears that a reporting data model has a number of many-to-many associations. Periodically, a new association arrives on the scene, and the DBA's create a many-to-many association table. (The DBA makes it sound like a daily occurrence.)

Someone -- it's not clear who -- claimed this was silly. The DBA claims the product owner said that incremental requirements causing incremental database changes was silly. I think the DBA is simply too lazy to create the required many-to-many association tables. It's a table with two FK references. A real nightmare of labor. But there were 3 or maybe 4 instances of this. And no end in sight.

It appears that the worst part was that the data model requirements didn't arrive all at once. Instead, these requirements had the temerity to trickle in through incremental evolution. This incremental design became a "problem" that needed a "solution".

Two Layers of Hated User Interaction

First, users are a problem because they're always touching the data. Some DBA's do not want to know why users are always touching the data. Users exist on the other side of some bulkhead. What the users are doing on their side is none of our concern as DBA.

Second, users are a problem because they're fickle. Learning -- and the evolution of requirements that is a consequence of learning -- is a problem that we need to solve. Someone should monitor this bulkhead, collect all of the requirements and pass them through the bulkhead just once. No more. What the users are learning on their side is none of our concern as DBA.

What's Missing?

What's missing from the above story? Use Cases.

According to the DBA, the product owner is an endless sequence of demands for data model features. Apparently, adding features incrementally is silly. Further, there's no rhyme or reason behind these requests. To the DBA they appear random.

The DBA wanted some magical OO design feature that would make it possible to avoid all the work involved in adding each new many-to-many association table.

I asked for use cases. After some back and forth, I got something that made no sense.

It turns out that the data model involves "customers", so the DBA started out by describing the customer-centric features of the data model. After all, the "actor" in a use case is a person and the database contains information on people. That's as far as the DBA was willing to go: repeat the data model elements that involved people.

If It Weren't For the Users

The DBA could not name a user of the application, or provide a use case for the application. They actually refused to articulate one reason why people put data in or took data out. They sent an angry email saying they could not find a reason why anyone would need these many-to-many association tables.

I responded that if there's no user putting data in or getting data out then there's no system. Nothing to build. Stop asking me for help with your design if no person will ever use it.

To the DBA, this was an exercise in pure data: there was no purpose behind it. Seriously. Why else would they tell me that there were no use cases for the application?

Just Write Down What's Supposed to Happen

So I demanded that the DBA write down some sequence of interactions between actual real-world end-user and system that created something of value to the organization. (My idea was to slide past the "use case" buzzword and get past that objection.)

The DBA wrote down a sequence of 34 steps. 34 steps! While it's a dreadful use case, it's a start: far better than what we had before, which was nothing. We had a grudging acknowledgement that actual people actually used the database for something.

We're moving on to do simplistic noun analysis of the use case to try to determine what's really going on with the many-to-many associations. My approach is to try to step outside of "pure data" and focus on what the users are doing with all those many-to-many associations.

That didn't go well. The data cartel, it appears, doesn't like end-users.

The Final Response

Here's what the DBA said. "The ideal case is to find a person that is actually trying to do something and solve a real end user problem. Unfortunately, I don't have this situation. Instead, my situation is to describe how a system responds to inputs and the desired end state of the system."

Bottom line. No requirements for the data model. No actors. No use case. No reality. Just pure abstract data modeling.

Absent requirements, this approach will turn into endless hypothetical "what if" scenarios. New, fanciful "features" will inevitably spring out of the woodwork randomly when there are no actual requirements grounded in reality. Design exists to solve problems. But the DBA has twice refused to discuss the problem that they're trying to solve by designing additional tables.

Tuesday, December 15, 2009

The Problem with Software Development is...

Folks like to say that there's a "Software Crisis". We can't build software quickly enough, cheaply enough or well enough.

I agree with EWD -- software is really very, very complex. See EWD 316 for more justification of this position.

Is my conclusion that the cost of software stems from complexity? That would hardly be news.

No, my conclusion is that the high cost of software comes from doing the wrong things to manage the high cost of software.

The Illusion of Control

Nothing gives a better illusion of control than a project plan. I think that software development project management tools -- MS Project specifically -- are the biggest mistake we can make.

As evidence, I look at Agile methods. One key element of Agile methods is to reduce (or eliminate) the project management nonsense that accumulates around software development.

I think that software development projects are generally pretty complex and a big MPP file doesn't reduce the complexity or help anyone's understanding. I think that we should not make an effort to capture the complexity -- that's simply silly.

If you find that you need a really complex document to capture a lot of really complex complexity, you're doing something wrong.

Hands in the Pocket Explanations

I think that user stories are great because they reduce the complexity down to something that we can articulate and remember. This gives us a fighting chance at understanding.

If the use case requires a big, complicated document, we're missing something essential. It should have a pithy, easy-to-remember, easy-to-write-on-a-sticky-note summary. It can have a detailed technical appendix. But it has to have a pithy, easy-to-articulate summary.

If you can't explain the use case with your hands in your pockets, it's too complex.

Architecture

An architecture diagram is helpful. Architecture -- as a foundation -- has to be subject to considerable analysis to be sure it's right. You need to be absolutely confident that it works. And like any piece of mathematical analysis, you need diagrams and formulas, and you need to show your work.

A miraculous pronouncement that some architecture will work is a terrible thing. A few pithy formulas (that we can remember) and some justification are a whole lot better.

The WBS Is The Problem

I find that projects with complicated WBS's have added a layer of complexity and management that isn't helpful. The cost of software is high, so let's add management to try to reduce our costs. On the surface, adding labor to reduce labor doesn't make sense.

Rather than waste time adding work, it would be better to introduce someone who can facilitate decision-making (i.e., a Scrum Master) and keep progress on track.

Incremental releases of partial solutions have more value than weekly status reports.

Meetings with product owners have more value than a carefully-written schedule for doing the poorly-understood process of detailed design.

Justifications

We can justify project management by saying that it somehow makes the software development process more efficient by eliminating "roadblocks" or "inefficiencies".

I used to believe. I no longer buy this.

Let's look at some candidate roadblocks that a project manager might smooth out.
  • User Involvement. Or rather, the lack of user involvement. I don't see how a PM does anything except nag the users. If the users aren't motivated to help with software development by answering questions or reviewing the results of a sprint, then the software isn't creating any value. Stop work now and find something the users really want.
  • Technical Resources. Coordinating technical resources (DBA's, sysadmins, independent testers, etc.) doesn't require a complex plan, status meetings or reports. It only requires some phone calls among the relevant folks. Directly.
  • Decision-Making. The PM isn't the product owner, nor are they a user, nor are they technically proficient enough to understand what's really at stake. Essentially, they only act as a facilitator in a conversation they don't fully understand. That's fine, as long as they stick to facilitating and don't take on responsibilities that aren't assigned to them.
At this point, I can find a use for a facilitator ("Scrum Master"). But I can't see why we have such an emphasis on IT project management. The Agile folks seem to have it right. Reduce cost and complexity by actually reducing the cost and complexity. Not by adding management.

Wednesday, December 9, 2009

Hypothetical Designs and Numerosity

I love hypothetical questions. Got a bunch recently. I guess it's the season for hypotheticals.

These all seem to come from the "Averted Glance" school of management. The best part about the "I don't want to know the details" school of management is that we need to substitute metrics for understanding. One could also call this the "Numerosity" school of management. It's one step above numerology.

There is no substitute for hands-on work. Quantity leads directly to Quality. Bottom Line: touch the technology early and often.

Easier

I described the Sphinx production pipeline as "easier" than DocBook.

Someone asked for a definition of "easier". I had to research the definition of "easier" and found the Standard Information Management Process and Logical Effort index (SIMPLE). This index has a number of objective scoring factors for platform, language, toolset, performance, steps, problems encountered, rework and workaround, as well as the price of tea in China.

I find the SIMPLE index to be very useful for answering the random questions that arise when someone does not want to actually besmirch their fingers by touching the technology.

Considering that Sphinx and the DocBook processing components are both largely free, it seemed easier to me to actually rig them up and run them a few times to see how they work. But that relies on the undefined term "easier". To cut the Gordian Knot while keeping the eyes averted, one resorts to numerosity.

Cleaner and More Uniform

I described XML as cleaner and more uniform than LaTeX. (Even though I've switched to LaTeX because it produces better results.)

Someone asked for a definition of Cleaner and More Uniform. I tried using the Flesch-Kincaid Readability Index, but it was limited to the content and didn't work well for the markup. I tried using this calculator, but it barfed. So I invented my own index based on the Halstead complexity metrics.

I find the Halstead complexity to be very useful for answering random questions that arise when someone does not want to actually burden themselves with looking at the technology. I suppose actual examples of XML vs. LaTeX vs. RST would burn holes in the brain, running the risk of killing one of the few working brain cells.
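For what it's worth, the home-grown index was nothing exotic. A toy version -- my own hack, which simply treats markup tokens as Halstead "operators" and the remaining words as "operands" -- fits in a dozen lines:

    import math
    import re

    # Markup tokens (XML tags, LaTeX commands, assorted markup punctuation)
    # count as Halstead "operators"; whatever is left counts as "operands".
    MARKUP = re.compile(r"</?\w+[^>]*>|\\\w+|[{}\[\]*_~]")

    def markup_volume(text):
        """Crude Halstead volume, V = N * log2(n), for marked-up text."""
        operators = MARKUP.findall(text)
        operands = re.findall(r"\w+", MARKUP.sub(" ", text))
        n = len(set(operators)) + len(set(operands))    # vocabulary
        N = len(operators) + len(operands)              # length
        return N * math.log(n, 2) if n else 0.0

    print(markup_volume("<para>Some <emphasis>simple</emphasis> text.</para>"))
    print(markup_volume(r"Some \emph{simple} text."))

Running a handful of sample documents through something like this is still no substitute for actually marking up a real document in each toolchain.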

Inheritance vs. Delegation

My favorite, however, is the question on "criteria for when to use / not use inheritance". Asking for criteria is the leading indicator of the Numerosity School of Design. How do I know this?

No Example.

Hypothetical questions never have practical class definitions. They either have no classes at all, or an overly simplified design based on Foo, Bar and Baz. Rather than actually write some code, we'll talk about what might go into writing some code.

The most important part of learning OO design is to actually do a lot of OO design. Code Kata works. Quantity translates directly to Quality.

Don't talk about writing code. Write code. Write lots of code. Write as much code as possible.

I'm revising my Building Skills in OO Design book to make it a better answer to the Inheritance vs. Delegation question. Do the exercises. At the end of the book, you'll know the answers.

Sadly

Sadly, the bulk of IT management does not believe in skill-building. Training is limited to one or two weeks out of 52 (2% to 4% of your working life) and that's as often cancelled as granted. Any time spent trying something to see if it will work is aggressively punished ("Quit gold-plating that solution and put some garbage into production right now! Learn on your own time. King Cnut Demands It.")

Monday, December 7, 2009

Mutability Analysis

First, there are several tiers of mutability in requirements. These tiers define typical levels of change in the context of the problem, the problem itself, and the forces that select a solution to the problem.
  1. Natural Laws (e.g., Gravity, Natural Selection). As well as metaphysical "laws" (i.e., reality). These don't change much. Sometimes we encapsulate this information with static final constants so we can use names to identify the constants: PI, E, seconds_per_minute, etc.
  2. Legal Context (both statutory law and case law), as well as standards and procedures with the effect of law (e.g., GAAP). Most software products are implicitly constrained, and the constraints are so fundamental as to be immutable. They aren't design constraints, per se; they are constraints on the context space for the basic description of the problem. Like air, these are hard to see, and their effects are usually noted indirectly.
  3. Industry. That is to say, industry practices and procedures which are prevalent, and required before we can be called a business in a particular industry. Practices and procedures that cannot be ignored without severe, business-limiting consequences. These are more flexible than laws, but as pervasive and almost as implicit. Some software will call out industry-specific features. Health-care packages, banking packages, etc., are explicitly tailored to an industry context.
  4. Company. Constraints imposed by the organization of the company itself. The structure of ownership, subsidiaries, stock-holders, directors, trustees, etc. Often, this is reflected in the accounting, HR and Finance systems. The chart of accounts is the backbone of these constraints. These constraints are often canonized in customized software to create unique value based on the company's organization, or in spite of it.
  5. Line of Business. Line of business changes stem from special considerations for subsets of vendors, customers, or products. Sometimes it is a combination of company organization and line of business considerations, making the relationship even more obscure. Often, these are identified as special cases in software. In many cases, the fact that these are special, abnormal cases is lost, and the "normal" case is hard to isolate from all the special cases. Since these are things that change, they often become opaque mysteries.
  6. Operational Bugs and Workarounds. Some procedures or software are actually fixes for problems introduced in other software. These are the most ephemeral of constraints. The root cause is obscure, the need for the fix is hidden, the problem is enigmatic.
Of these, tiers 1 to 3 are modeled in the very nature of the problem, context and solution. They aren't modeled explicitly as constraints on problem X, or business rules that apply to problem X, they are modeled as X itself. These things are so hard to change that they are embodied in packaged applications from third parties that don't create unique business value, but permit engaging in business to begin with.

Tiers 4 to 6, however, might involve software constraints, explicitly packaged to make them clear. Mostly, these are procedural steps required to either expose or conceal special cases. Once in a while these become actual limitations on the domain of allowed data values.

Considerations.

After considering changes to the problem in each of these tiers, we can then consider changes to the solution. The mutation of the implementation can be decomposed into procedural mutation and data model mutation. The Zachman Framework gives us the hint that communication, people and motivation may also change. Often these changes are manifested through procedural or data changes.

Procedural mutation means programming changes. This implies that flexible software is required to respond to business changes, customer/vendor/product changes, and evolving workarounds for other IT bugs. Packaged solutions aren't appropriate ways to implement unique features of these lower tiers: the maintenance costs of changing a packaged solution are astronomical. Internally developed solutions that require extensive development, installation and configuration aren't appropriate either.

As we move to the lower and less constrained tiers, scripted solutions using tools like Python are most appropriate. These support flexible adaptation of business processes.

Data Model.

Data lasts forever; therefore, the data model mutations fall into two deeper categories: structural and non-structural.

When data values are keys (natural, primary, surrogate or foreign) they generally must satisfy integrity constraints (they must exist, or must not exist, or are mandatory or occur 0..m times). These are structural. The data is uninterpretable, incomplete and broken without them. When these change, it is either a profound change to the business or a long-standing bug in the data model. Either way the fix is expensive. These have to be considered carefully and understood fully.

When data values are non-key values, the constraints must be free to evolve. The semantics of non-key data fields are rarely fixed by any formalism. Changes to the semantics are rampant, and sometimes imposed by the users without resorting to software change. In the face of such change, the constraints must be chosen wisely.
"Yes, it says its the number of days overdue, but it's really the deposit amount in pennies. They're both numbers, after all."
Mutability Analysis, then, seeks to characterize likely changes to requirements (the problem) as well as the data and processing aspects of the solution. With some care, this will direct the selection of solutions.

Focus.

It's important to keep mutability analysis in focus. Some folks are members of the Hand-Wringers School of Design, and consider every mutation scenario as equally likely. This is usually silly and unproductive, since their design work proceeds at a glacial pace while they overconsider the effects of fundamental changes to the company, the industry, the legal context and the very nature of reality itself.

Here's my favorite quote from a member of the HWSoD: "We don't know what we don't know."

This was used to derail a conversation on security in a small web application. Managers who don't know the technology very well are panicked by statements like this. My response was that we actually do know the relevant threat scenarios: just read the OWASP vulnerabilities. Yes, some new threat may show up. No, we don't need to derail work to counter threats that do not yet exist.

Progress.

The trick with mutability analysis is to do the following.

1. Time-box the work. Get something done. Make progress. A working design that is less than absolute perfection is better than no design at all. Hand-wringing over vanishingly unlikely futures is time wasted. Wasted. Create value as quickly as possible.

2. Work up from the bottom. Consider the tiers most likely to change first. Workarounds are likely to change. Features of the line of business might change. Company changes only matter if you've been specifically told the company is buying another company or is itself for sale. Beyond that, it's irrelevant for the most part. ("But my software will change the industry landscape." No it won't. But if it is really novel, then delivering it soon matters more than flexibility. If the landscape changes, you'll have to fundamentally rewrite it anyway.)

3. Name Names. Vague hand-waving mutation scenarios are useless. You must identify specific changes, and who will cause that change. Name the manager, customer, owner, stakeholder, executive, standards committee member, legislator or deity who will impose the change. If you can't name names, you don't really have a change scenario; you have hand-wringing. Stop worrying. Get something to work.

But What If I Do Something Wrong?

What? Is it correct? Is it designed to make optimal use of resources? Can you prove it's correct, or do you have unit tests to demonstrate that it's likely to be correct? Can you prove it's optimal? Move on. Maintainability and Adaptability are nice-to-have, not central.

Getting something to work comes first. When confronted with alternative competing, correct, optimal designs, adaptability and maintainability are a way to choose between them.

Thursday, December 3, 2009

The King Cnut School of Management

See this story of King Cnut ruling the waves.

The King Cnut School of Management is management by fiat. Declaring it so.

PM: "When will this transition to production?"

Me: "After the firewall and VM configuration."

PM: "So, can we say Thursday?"

Me: "You can say that, if you want, but you have no basis for that. The firewall hardware is sitting on the loading dock, and the RHEL VM's won't run Python 2.6 with the current SELinux settings. I have no basis for expecting this to be fixed in a week."

PM: "We can just pick a date, and then revise it."

Me: "Good plan. Pick a random date and complain when it's not met. While you're at it, hold the tide back for a few hours, too."