Monday, March 2, 2015

Assumptions behind No Estimates

No Estimates does not currently have a single, clear definition. I hope the upcoming No Estimates book will change that. While we wait for the book, I think we need to outline some of the underlying assumptions behind No Estimates:

  1. Software is complex. Fred Brooks (of Mythical Man-Month fame) described the complexity very well in his seminal article No Silver Bullet, and concluded that nothing in sight could make software significantly easier to work with within the next decade — in the years since, we still haven't seen a "silver bullet", and there is none on the horizon. In theory, software is deterministic and thereby "merely" complicated, but the advent of multi-processing has weakened this argument. There are also ways of using formal logic to identify and categorize each and every possible state of a program, but this is not feasible for anything beyond a few thousand SLOC.
  2. People are complex. People are definitely not machines. Given the same input on two different days, they can react very differently, for no apparent reason. For an in-depth discussion, you can dip a toe into the field of social complexity.
  3. Coding is design, not manufacturing. The blueprint of software is not the design spec, but source code. From the source code, the manufacturing process — compiling, linking, smoke testing, packaging, deploying etc. — is very deterministic and can be carried out quickly and cheaply by computers. Translating the design spec into source code is detailed design, and requires creative thinking, an eye for detail and, in any non-trivial project, collaboration skills.
  4. Software development is not a repeatable activity. If you give the same requirement to the same coder twice, he will either copy-paste the previous code and be done in no time, or he will rewrite the code as it should have been written the first time. If you give the same requirement to two different coders, the difference in time can be 3x (some people say 10x). 
  5. Software estimates are uncertain. This should be clear from points 1-4. Also check out Barry Boehm's work on the "cone of uncertainty" and Steve McConnell's books on estimation. To be sure, some technological domains are more predictable than others, but few domains can be predicted more than say 4 months ahead, and the general trend is towards lower and shorter predictability, i.e. higher uncertainty. (There are ways to cope with this, including working with estimate ranges, or using the wisdom of crowds.)
  6. Having targets lowers your performance. When estimates have inherent, non-trivial uncertainty, you are unlikely to ever be on schedule: in other words, you are either early or late compared to target. Running late, some people take risks and cut corners in order to meet the target, while others wear themselves down by working overtime. Both strategies result in mistakes and defects that will have to be fixed later, making people even more late. (People are seldom early: this is because if you deliver ahead of time, you will be punished with tighter schedules in the next project. Smart people don't repeat that mistake.)
  7. Software can be split into smaller, still valuable pieces. You are not forced to implement a software system bottom-up. In fact you can take a piece of end-user functionality and write only the code necessary to make that piece work. The smaller the piece you work on, the higher the overhead, though, as you may have to implement large pieces of functionality below the surface. However, there are methods for identifying user needs and then splitting the functionality so that the most important needs are covered first (e.g. story mapping, feature injection and impact mapping). This means that the early end-user functionality pieces, even though they are costly, can also have high value. This is supported by the Pareto principle: 20% of the work gives 80% of the value.
  8. Estimates are not goals. (We are for the moment disregarding the question of whether estimates are accurate or not.) Estimates are a means for managing and synchronizing assumptions and expectations. By giving and taking estimates, we might get closer to a joint understanding of the system to build and of its complexities. Our goal should however be to get as much value (features) as we can in the shortest possible time (lowest cost), and in this we should be limited only by the laws of physics, not by the agreements of man.
  9. Managing scope is easier than managing time or cost. This is because of how the iron triangle (cost, scope, time) fits together. In a typical software development project, costs are mainly wages and other personnel-related costs, and they are inextricably linked with time. Adding or removing people (changing the cost) has an impact on time, as the throughput of the team will drop (see The Mythical Man-Month by Fred Brooks). Similarly, changing the time constraint has an impact on the cost. Thus the only remaining variable is scope. Luckily, as per point 7, managing the scope is easier than one might think.
  10. Flow is your friend. The law of large numbers says that when you get enough samples, the variations tend to cancel out. If you can split your work down to the level of < 1 day, you are in the green.
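Point 10 can be illustrated with a small simulation. This is a sketch under invented numbers (100 days of "true" work, each task estimate off by up to ±50% with no systematic bias): the finer you split the work, the more the per-task errors cancel in the total.

```python
import random

random.seed(42)

TOTAL_DAYS = 100.0  # the "true" amount of work (hypothetical)

def relative_error(n_tasks):
    """Split the same work into n_tasks equal tasks, add independent
    +/-50% noise to each task's estimate, and return the relative
    error of the summed estimate."""
    actual = TOTAL_DAYS / n_tasks
    total_estimate = sum(actual * random.uniform(0.5, 1.5)
                         for _ in range(n_tasks))
    return abs(total_estimate - TOTAL_DAYS) / TOTAL_DAYS

def mean_error(n_tasks, trials=2000):
    """Average relative error over many simulated projects."""
    return sum(relative_error(n_tasks) for _ in range(trials)) / trials

coarse = mean_error(4)    # a handful of big chunks
fine = mean_error(100)    # roughly one-day tasks
# The fine-grained split produces a markedly more accurate total.
```

Note the caveat: the law of large numbers cancels random variation, not systematic bias. If every estimate is optimistic by 30%, splitting finer won't help the total at all.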
This is a preliminary list of assumptions, based on a couple of hours of work. I reserve the right to update them in the future. Maybe some of them will be proven wrong, who knows.

Getting to the "practices of No Estimates" is not something I will attempt today.

However, it can be noted that these assumptions are in direct contrast with some of the traditional assumptions, to wit:
  • Developing software is to translate design specifications into source code
  • Software estimates are essential
  • Setting targets is the only way to get stuff done
  • Software is valuable only when it's (almost) complete - there is no value in getting halfway
  • Estimates help define the price tag, which is then written into a contract
  • Buying software is much like buying a car or a house - "this is what I want, what's the price tag?"
  • Software development is engineering, and it is repeatable
  • People with similar skill sets (roles) are interchangeable
  • A competent engineer can pick up a new project in a matter of days
I think these differences in assumptions are what makes it so difficult to hold a conversation about No Estimates. No Estimates is only a few years old, it's still emerging and doesn't have a Body of Knowledge to rely on. We may know something more when the book is done.

Thursday, August 28, 2014

Would the real agile governance please stand up...?

I just saw that the highly esteemed Bob Marshall has thrown out a new blog post on agile governance and wanted to call straw man on it. Bob has a lot of good points but I think the article takes several shortcuts when it comes to presenting what governance in general is about, and what agile governance is about.

Governance covers:
  1. what decisions are important,
  2. who makes those decisions, and
  3. how account is rendered
This definition of governance comes from the Canadian Institute of Governance, and they have been thinking about it for a while.

For each and every decision a governance structure either pre-exists (implicitly or explicitly) or has to be created (ad hoc). If you don't have any explicit governance structures, then you are bound to invent one for each decision. If you ignore a decision and it defaults, you have just used the "default" governance model.

Governance has a bad name because it is often associated with centralised, top-down organisational models and tends to be centralised in itself. Devolved and distributed organisations don't talk much about governance, although they also have governance structures in place. Centralised governance can help with achieving economies of scale, but on the other hand such governance structures tend to become outdated and inadequate for rapid decision making. This causes a lot of pain for people who need to get things done before it's too late — in the modern world this means pretty much everyone — and they blame the governance people and by implication, governance itself.

Agile governance does not mean "assuring the business's investment in IT generates business value", because that is called agile portfolio management. Neither does it mean "mitigating the risks that are commonly associated with Agile projects", which is better filed under project management and control.

Agile governance means, quite simply, that the organisation has decision making structures in place that support and are supported by (off the top of my head):

  • working from (and on) a common vision
  • transparency and awareness
  • clear and simple policies
  • distributed consensus-based decision making
  • professional judgement (within agreed limits)
  • regular, rapid cadences
  • learning from feedback
  • etc.

Applying Scrum or some other agile method without changing the governance structures will simply result in organisational schizophrenia. People work at cross purposes, confounding each other and wasting time and energy. For example, management thinks that each project is a separate entity, and requires oversight through weekly reports and a monthly PMO meeting. The Scrum teams believe that projects are unnecessary, and work should instead flow on a company-wide product portfolio board. Put these two together, and there will be confusion and wasted work. If people know what governance is, they have a common vocabulary and can have meaningful conversations about the conflict. If they don't know what governance is, well, better enjoy the show from the outside.

As an aside, this is what Joseph Pelrine means when he talks about "organisational friction". A team starts using Scrum and increases their decision-making frequency while lowering the batch size to match. Other parts of the organisation do not understand the need for high frequency and small batches, and this conflict causes friction and waste heat. People get frustrated and one or more of the following happen: 1) people set up padding to protect themselves, 2) the Scrum team slows down to a manageable frequency or stops altogether, 3) the organisation speeds up to match the Scrum team.

Some hints for the agile governance road:

  1. Don't make the mistake that governance equals central governance equals slow and inadequate decision making. Governance is ubiquitous — you can ignore it if you want, but governance will not ignore you. 
  2. Avoid blindly copying institutionalised models such as the matrix organisation or the project organisation. How do you know those are the best for you? Best practice is always past practice, and by adopting an existing structure you are simply limiting yourself. Read Porter's seminal article "What is strategy?"
  3. Make organisational experiments, but make them safe-to-fail. Don't be afraid to try new things, but make sure they can be rolled back without further repercussions. Use the relevant theories and thinking models to design potential solutions.
Good luck!

Thursday, February 28, 2013

Video of my TGA'12 talk

A video of my speech on Agile Governance that I gave at Tampere Goes Agile 2012 is now available online! It may have been up for a while already; I only just noticed.

I'm not happy with the speech, because I think it lacks focus and has too much ad-libbing. This was the first time I gave it, and I still got very positive feedback. However, I clearly need to tighten up the speech next time.

http://vimeo.com/60042038

Friday, August 17, 2012

My good friend and coffee-drinking companion Vasco Duarte posted another blog post on the virtues of not estimating in Agile projects some weeks ago — with data! — which I think is both cool and necessary for this discussion. (Actually, it's only anecdotal evidence at this stage, but it's a very good start. I hope people will start contributing data to the GDocs spreadsheet; I will if I can!)

In summary, Vasco says that counting stories always leads to better predictions than summing estimates. This is not true without modifications. Off the top of my head, here are some of Vasco's assumptions:

  1. There is a significant overhead spent on estimating and maintaining estimates, and the overhead grows exponentially with the number of items (finding one specific item from a list of ten is MUCH faster than finding one from a list of 1000)
  2. The estimation activity does not include working on the acceptance criteria, APIs, architecture etc. 
  3. There are lots of stories (1000s per release)
  4. The stories are pretty small (on the order of hours)
  5. The team's estimates are worse than random — meaning that the team doesn't really know how to work with stories

Assumption #5 is in itself sufficient to settle the "story counts are better than story points" controversy by default: if a team's estimates are worse than random, then of course counting beats estimating. Further, assumptions #1 and #2 may be mutually exclusive. And further, in his blog post Vasco uses data from a team where assumptions #3 and #4 happen to be true, which suggests selection bias in the data.
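To make the disagreement concrete, here is a simulation with entirely invented numbers, for a backlog matching assumptions #3 and #4 (lots of small stories). It shows both sides: when estimates are unbiased, summing them forecasts the total at least as well as counting; counting only wins when the estimates are systematically optimistic (in the spirit of assumption #5).

```python
import random

random.seed(1)

# Hypothetical backlog: 1000 stories of a few hours each.
true_sizes = [random.uniform(2, 10) for _ in range(1000)]  # hours
actual_total = sum(true_sizes)

# (a) Sum of per-story estimates with unbiased +/-50% noise.
unbiased = sum(s * random.uniform(0.5, 1.5) for s in true_sizes)

# (b) Sum of systematically optimistic estimates (roughly half size).
biased = sum(s * random.uniform(0.3, 0.7) for s in true_sizes)

# (c) Story counting: count times the average size observed on a
#     sample of 50 completed stories.
observed_avg = sum(true_sizes[:50]) / 50
by_count = len(true_sizes) * observed_avg

def rel_err(forecast):
    """Relative forecasting error against the true total."""
    return abs(forecast - actual_total) / actual_total

# Unbiased estimates forecast well; counting is decent;
# systematically biased estimates are far off.
```

The parameters (story count, size range, noise bands, sample of 50) are all assumptions for illustration, not Vasco's data.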

Through Vasco there's some interesting data available now, and I'll try to make use of it and contribute to the information and knowledge we have. People seem to have so many opinions, but it's time to slam some data down on the table!

Thursday, April 26, 2012

On the importance of focus and feedback

Without focus and rapid feedback, IT projects are severely crippled. Through this post I'll prove (using Excel) that focus and rapid feedback can not only improve your project but shorten it dramatically as well.


For the sake of argument, I've assumed a project team that works over ten units of effort (the horizontal axis: you can think of this either as time units or as money units spent), consumes a cost of 10 monetary units and produces 20 monetary units worth of value. These numbers are pulled out of my hat and don't matter much: only the numbers on the scales will change but the graphs themselves will be identical. The ROI is of course (value - cost) / cost: with these numbers it is (20 - 10) / 10 = 100%, which is fairly low as ROIs go but serves our illustrative purpose.

I will use this same team and project to draw up and project the ROI for four different approaches:

  1. A traditional plan-driven project delivering near the end
  2. An unfocused project with continuous delivery
  3. A focused project with an 80/20 Pareto distribution of value
  4. A focused project with an 80/50 Pareto distribution of value


First, let's consider the traditional plan-driven IT project. The team implements requirements in any old order (easiest first? most interesting first? software stack from bottom up?) and makes a 1.0 delivery quite late in the project followed by 1.0.1 and 1.1 deliveries. In this case, the return on investment looks like this from the customer's viewpoint:


Doesn't it look realistic? :-) Please note that this is a successful plan-driven IT project that actually delivers on budget! The costs are accumulating all the time, but the customer receives their first dose of value quite late. The last deliveries add some random functions and fix a number of annoying bugs, but the ROI (the dotted line) doesn't increase much anymore; it hovers around 100%.

From the start up until the first delivery, the customer doesn't really know what's happening. A lot of reports have come in, but no working code. The customer would not know if the project was in trouble.

Now consider the same team doing continuous delivery. They're still implementing requirements — errr, backlog items! — in any order that seems reasonable, but deliver to the customer on a weekly basis. From the customer's perspective this is a game-changer: after the initial product has been delivered, the value just racks up!


This does require some basic enablers such as a system for continuous integration and automated testing. Plus a strongly disciplined team that doesn't tolerate bugs.

However, while all requirements are of equal value, some just might be more equal than others. In fact, some researchers [1, 2] think that the value distribution should be a Pareto curve, also known as the "80/20 rule". In plain language this means that 20% of the work brings 80% of the value. If the team could somehow (hint hint: ask the customer!) determine which requirements are the most important, the situation would instead look like this:


What's happening here? Instead of going up linearly, the return on investment rises sharply before turning into a slow decline! Indeed, there seems to be a point of maximum ROI somewhere around 3 effort units, where the return on investment is almost 150%: way more than the projected 100%.

Now if you were the business owner and were looking at the ROI only, when would you terminate the project? Most likely at some point between 2 and 4 effort units, because the ROI curve is quite flat at the top and there's a quite large span of effort that would bring you almost the maximum ROI. So let it run to 4 E.U., then terminate. It really doesn't make economic sense to continue after that.

Is it possible to terminate the project at 4 E.U.? Yes of course. Since the team is delivering regularly on a weekly or perhaps even daily basis, the customer always has a working system and there are no technical objections to terminating the development project. Taking a leaf from Jeff Sutherland's "Money for Nothing and (Your) Change for Free", the contract could specify that the customer can terminate the project with a one-sprint notice period, by paying a certain percentage of the remaining work.
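The economics of such a termination clause are simple arithmetic. In this sketch the 20% cancellation fee is purely illustrative; the actual percentage is whatever the contract says.

```python
def cost_to_terminate(total_contract, spent, cancel_fee=0.20):
    """Customer's total outlay if they stop the project now:
    everything paid so far plus a cancellation fee on the
    remaining contract value. The 20% default is illustrative,
    not a figure from any specific contract."""
    remaining = total_contract - spent
    return spent + cancel_fee * remaining

# Stopping a 10-unit contract after 4 units of effort costs
# 4 + 0.20 * 6 = 5.2 units, instead of paying the full 10
# for the low-value tail of the backlog.
```

With a front-loaded value curve, the customer pockets most of the value for roughly half the money, and the supplier is compensated for freeing up the team early.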

This approach also requires discipline, but this time it's from the customer. The customer must work with the team early and often, maintaining and prioritizing the backlogs. The customer must also be prepared to "cut out" a large swath of the initial "requirements". (It helps if you're the kind of person who sees a half-empty glass as half full.)

And here's a more moderate 80/50 Pareto curve, where it takes 50% of the effort to reach 80% of the value. In this case the maximum ROI is reached at around 6 effort units, but anything between 3 and 9 E.U. will bring in more than the projected original ROI. Or to put it another way, the supplier could double or triple their hourly fees and still meet the cost expectations of the customer.
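The Excel exercise can be reproduced in a few lines. The value curve below is a power law fitted so that 20% of the effort yields 80% of the value; that shape is my assumption, not necessarily the one behind the post's charts, so the exact location and height of the peak will differ. The qualitative result is the same, though: front-loading pushes the ROI far above the flat 100% of unordered continuous delivery.

```python
import math

TOTAL_COST, TOTAL_VALUE, TOTAL_EFFORT = 10.0, 20.0, 10.0

# Power-law value curve fitted so 20% of effort => 80% of value.
# (An assumed shape; the charts in the post may use another curve.)
K = math.log(0.8) / math.log(0.2)   # ~0.139

def roi(effort_units, front_loaded=True):
    """Return on investment after spending effort_units of effort."""
    x = effort_units / TOTAL_EFFORT        # fraction of effort spent
    value_frac = x ** K if front_loaded else x
    cost = TOTAL_COST * x
    value = TOTAL_VALUE * value_frac
    return (value - cost) / cost

# Unordered continuous delivery: ROI is flat at 100% all the way.
# Front-loaded delivery: ROI starts very high and declines
# towards the projected 100% at full effort.
```

A flatter curve (such as the 80/50 case) moves the sweet spot later and lowers the peak, but the conclusion stands: prioritized, continuously delivered work dominates the unordered version at every point of the project.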


So what's the point of this Excel exercise? As I stated at the very beginning, relentless focus and rapid feedback (in the form of continuous delivery) can be a game-changer.

These models are of course severely simplified, e.g. they don't account for the fact that it takes a while to whip up the first usable version of the product. But the point is still valid, I think. What do you think?


References:

[1] B. Boehm. Value-based software engineering: reinventing. SIGSOFT Softw. Eng. Notes, 28(2):3–, 2003. ISSN 0163-5948. doi: 10.1145/638750.638775.

[2] J. Bullock. Calculating the value of testing. Software Testing and Quality Engineering, pages 56–62, May/June 2000.

Wednesday, April 11, 2012

Systems theory

Kenneth Boulding (1956) generated a hierarchy of systems to support the General Systems Theory of Ludwig von Bertalanffy (1968). (Bertalanffy developed the GST from 1937 onwards.) Each level in the nine-level hierarchy includes the functionalities and attributes of all the lower levels.

The lowest level in the hierarchy is static, containing only labels and lists. The second level is comparable to clockwork, simple motions and machines, balances and counter-balances. The third level is cybernetic, self-controlling with feedback and information transmission. The fourth level is open, living, self-maintaining and self-reproducing. The fifth level is genetic, where labor is divided between differentiated, mutually dependent components that grow according to blueprints (e.g. DNA). The sixth level is animal, featuring self-awareness, mobility, specialized receptors and nervous systems. The seventh level is human, with self-consciousness and a sense of passing time. The eighth level is social organization, with meanings and value systems. The ninth level is transcendental, metaphysical.

It's important to note that current natural science has not gone much beyond level four. Organizations are level eight. This means that there is a four-level gap between on one hand the organizations we wish to study, and on the other the scientific tools we have at our disposal.

Ludwig von Bertalanffy. General System theory: Foundations, Development, Applications. George Braziller, New York, 1968.
Kenneth Boulding. General Systems Theory: The Skeleton of Science. Management Science, 2(3), pp. 197-208, April 1956. http://www.panarchy.org/boulding/systems.1956.html

Software and industrialism

Software development is post-industrial. It is so by definition: according to Alvin Toffler (1970) the computer and telecom industry have ignited a social revolution. Daniel Bell (1973) concurs: post-industrial society is organized around knowledge creation and the uses of information, activities that have been revolutionized by the computer. The post-industrial era is also called "the information era" by Bell and others.

Is it possible to use industrial methods to manage post-industrial activities like software development?

Alvin Toffler. Future Shock. Random House, London, 1970.
Daniel Bell. The Coming of Post-Industrial Society. Basic Books, New York, 1973.