Sunday, April 28, 2013

Agile Clinic: Dear Allan, we have a little problem with Agile...

Consider this blog an Agile Clinic. On Friday an e-mail dropped into my mailbox asking if I could help. The sender has graciously agreed to let me share the mail and my advice with you, all anonymously of course…



The sender is new to the team, new to the company, they are developing a custom web app for a client, i.e. they are an ESP or consultancy.



“the Developers work in sprints, estimating tasks in JIRA as they go. Sprints last three weeks, including planning, development and testing. I have been tasked to produce burndowns to keep track of how the Dev cells are doing.”



OK, sprints and estimates are good. I’m no fan of Jira, or any other electronic tool, but most teams use one so nothing odd so far. But then:



“three week sprints”: these are usually a sign of a problem themselves.



I’ll rant about 3-week Sprints some other time but right now the main points are:


3 weeks is not a natural rhythm; there is no natural cycle I know of which takes 3 weeks. 1 week, yes; 2 weeks, yes; 4 weeks (a month), yes; but 3? No.



In my experience teams do 3 weeks because they don’t feel they are up to shorter iterations. But the point of short iterations is to make you good at working in short cycles, so extending the period means you’ve ducked the first challenge.



“planning, development and testing”: Good, but I immediately have two lines of enquiry to pursue. First: planning should take an afternoon; if it needs to be called out as an activity I wonder whether it is occupying a lot of time.



Second: testing should come before development - well, at least the “test scripts” should - and it should be automated, so any testing that comes after development is trivial.



Still, many teams are in this position so it might be nothing. Or again, it could be a sign the team are not challenging themselves.



Also, you are doing the burndowns but by the sounds of it you are not in the dev teams? And you have Jira? I would expect that either Jira produces them automatically or each dev “cell” produces their own. Again, more investigation needed.



Continuing the story:



“The problem I’m encountering is this: we work to a fixed timetable, so it isn’t really Agile.”



No. Agile works best in fixed-deadline environments. See Myth number 9 in my recent Agile Connection article “12 Myths of Agile” - itself based on an earlier blog, “11 Agile Myths and 2 Truths”.



“We have three weeks to deliver xyz and if it gets to the end of the sprint and it isn’t done, people work late or over the weekend to get it done.”



(Flashback: I worked on Railtrack privatisation in 1996/7, then too we worked weekends, death march.)



Right now the problem is becoming clear, or rather two problems.



Problem 1: It isn’t done until it is done and accepted by the next stage (testers, users, proxy users, etc.). If it isn’t done then carry it over. Don’t close it and raise bugs, just don’t close it.



Problem 2: The wrong solution is being applied when the problem is encountered, namely: Overtime.



As a one-off, Overtime might fix the problem but it isn’t a long-term fix. Only the symptoms are being fixed, not the underlying problem, which explains why it is recurring. (At least it sounds like the problem is recurring.)



Overtime, and specifically weekend working, is a particularly nasty medicine to administer because it detracts from the team’s ability to deliver next time. If you keep taking this medicine you might stave off the disease but the medicine will kill you in the end.



The old “40 hour work week” or “sustainable pace” seems to be ignored here - but then to be fair, an awful lot of Scrum writing ignores these XP ideas.



Lines of enquiry here:


  • What is the definition of done?
  • Testing again: do the devs know the end conditions? Is it not done because it hasn’t finished dev or test?
  • What is the estimation process like? Sounds like too much is being taken into the sprint
  • Are the devs practicing automated test first unit testing? aka TDD
  • Who’s paying for the Overtime? What other side effects is it having?
“This means burndowns don’t make any sense, right? Because there’s no point tracking ‘time remaining’ when that is immaterial to the completion of the task.”

Absolutely Right.



In fact it is worse than that: either you are including Overtime in your burn-downs, in which case your sprints should be longer, or you are not, in which case you are ignoring evidence you have in hand.



The fact that burndowns are pointless is itself a sign of a problem.



Now, we don’t know here: what type of burn-downs are these?



There are (at least) two types of burn downs:


  • Intra-sprint burn-downs, which track progress through a sprint and are often done in hours
  • Extra-sprint burn-downs, which track progress against a goal over multiple sprints; you have a total amount of work to do and you burn a bit down each sprint.
I’ve never found much use for intra-sprint burn-downs; some people do. I simply look at the board and see how many cards are in done and how many in to-do.

And measuring progress by hours worked is simply flawed. (Some of my logic on this is in last year’s “Story points series” but I should blog more on it.)



Extra-sprint burn-downs on the other hand I find very useful because they show the overall state of work.



From what is said here it sounds like hours based intra-sprint burn-downs are in use. Either the data in them is bad or the message they are telling is being ignored. Perhaps both.
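To make the distinction concrete, here is a minimal sketch (in Python, with invented story counts) of the data behind an extra-sprint burn-down: count whole delivery units completed each sprint, never hours:

```python
# Hypothetical sketch of an extra-sprint burn-down: it counts completed
# delivery units (e.g. User Stories) per sprint, not hours worked.
# All numbers below are invented for illustration.

def extra_sprint_burndown(total_stories, completed_per_sprint):
    """Return the stories still outstanding at the end of each sprint."""
    remaining = total_stories
    burndown = []
    for done in completed_per_sprint:
        remaining -= done
        burndown.append(remaining)
    return burndown

# 40 stories in total; the team closes a handful each sprint.
print(extra_sprint_burndown(40, [6, 5, 7, 4]))  # [34, 29, 22, 18]
```

If the numbers in that list are not falling sprint on sprint - or scope is being added faster than it is burned - the chart says so immediately, which hours-based intra-sprint charts never will.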



“I was hoping you might be able to suggest a better way to do it? I feel like we should be tracking project completion, instead, i.e. we have xyz to do, and we have only done x&y. My main question is: Is there a useful way to use estimates when working to a fixed deadline by which everything needs to be completed by?”



Well Yes and Yes.


But the solution is more than just changing the burn-down charts, and it takes a lot of time - or words - to go into. I suspect your estimating process has problems, so without fixing that you don’t have good data.



Fortunately I’ve just been writing about a big part of this: Planning meetings.



And I’ve just posted a Guide to Planning Meetings on the Software Strategy website. It is designed to accompany a new dialogue sheet style exercise. More details soon. I should say both the guide and sheet fall under my “Xanpan” approach but I expect they are close enough to XP and Scrum to work for most teams.



This quote also mentions deadlines again. I have another suspicion I should really delve into, another line of enquiry.



Could it be that the Product Owners are not sufficiently flexible in what they are asking for and are therefore setting the team up to fail each sprint? By fail I mean asking them to take on too much - which, if the burn-downs and velocity measurements aren’t useful, could well be the case.



We’re back to the Project Manager’s old friend: “The Iron Triangle.”



Now as it happens I’ve written about this before. A while ago in my ACCU Overload piece “Triangle of Constraints” and again more recently (I’ve been busy of late) in Principles of Software Development (which is a work in progress but available for download).



This is where the first mail ended, but I asked the sender a question or two and I got more information:



“let's say the Scrum planners plan x hours work for the sprint. Those x hours have to be complete by the end - there's no room for anything moving into later sprints.”



Yikes!


Scrum Planners? - I need to know more about that


Plan for hours - there is a big part of your problem.



No room to move work to later sprints - erh… I need to find out more about this but my immediate interpretation is that someone has planned out future sprints rather rigidly. If this is the case you aren’t doing Agile, you aren’t doing Scrum, and we really need to talk.



I’m all for thinking about future work - I call them quarterly plans these days - but they need to be flexible. See Three Plans for Agile from a couple of years back (the long version is better; the short version was in RQNG).



“Inevitably (with critical bugs and change requests that [deemed] necessary to complete in this sprint (often)) the work increases during the sprint, too.”



Work will increase, new work will appear, and that’s why you should keep the sprints FLEXIBLE. You’ve shot yourself in the foot by the sounds of it. I could be wrong, I might be missing something here.



Right now:


  • Bugs: I’m worried about your technical practices. What is your test coverage? How are the developers at TDD? You shouldn’t be getting enough bugs to worry about
  • Change requests are fine if you are not working to a fixed amount of work and if you haven’t locked your sprints down in advance.
You can have flexibility (space for bugs and change requests) or predictability (forward scheduling) but you can’t have both. And I can prove that mathematically.

You can approach predictability with flexibility if you work statistically - something I expound in Xanpan - but you can only do this with good data. And I think we established before that your data is shot through.
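As an illustration of what “working statistically” might look like - this is my sketch, not the Xanpan method itself, and the backlog size and velocity figures are invented - you can sample from a team’s own past velocities to forecast a range of completion dates rather than a single one:

```python
# Hypothetical sketch of statistical forecasting: instead of one velocity
# figure, draw repeatedly from the team's own historical velocities and
# see how the number of sprints needed spreads out. Data is invented.
import random

def forecast_sprints(backlog_points, past_velocities, trials=10000, seed=42):
    """Monte Carlo: how many sprints until the backlog is done?"""
    rng = random.Random(seed)  # fixed seed so the sketch is repeatable
    results = []
    for _ in range(trials):
        remaining, sprints = backlog_points, 0
        while remaining > 0:
            # Each simulated sprint completes a randomly sampled past velocity.
            remaining -= rng.choice(past_velocities)
            sprints += 1
        results.append(sprints)
    results.sort()
    # Report the median and an 85th-percentile "safer" answer.
    return results[trials // 2], results[int(trials * 0.85)]

median, p85 = forecast_sprints(100, [18, 22, 15, 20, 25])
print(f"median {median} sprints, 85th percentile {p85} sprints")
```

The point is the spread: a fixed forward schedule pretends the answer is a single number, while the team’s own data says it is a range - and with bad data even the range is meaningless.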



“This leads to people 'crunching' or working late/weekends at the end of the sprint to get it all done. It is my understanding that this isn't how Agile is supposed to work.”



Yes. You have a problem.



So how should you fix this?



Well obviously the first thing to do is to hire me as your consultant, I have very reasonable rates! So go up your management chain until you find someone who sees you have a problem and would like it fixed; if they don’t have the money then carry on up the chain.



Then I will say, first at an individual level:


  • The intra-sprint hours-based burn-downs are meaningless. Replace them with extra-sprint charts that count your delivery units, e.g. User Stories, Use Cases, Cukes, Functional spec items - whatever the unit of work is you give to developers and get paid to deliver; count them and burn down the completed units each sprint
  • Track bugs which escape the sprint; this should be zero but in most cases is higher, and if it’s in double figures you have serious problems. The more bugs you have the longer your schedule will be and the higher your costs will be.
  • See if you can switch to cumulative flow diagram charting to show: work delivered (bottom), work done (developed) but not delivered, work to do (and how it is increasing with change requests), and bugs to do
  • Alternatively produce a layered burn-down chart: total work to do on the bottom, new work (change requests) and outstanding bugs on top
  • Track the overtime; find out who is paying for it - they have the pain - and find out what problem they see
None of these charts is going to fix your problems but they should give you something more meaningful to track than what you have now.
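A hedged sketch of the data such a cumulative flow diagram plots - the state names and all the numbers here are invented for illustration:

```python
# Hypothetical sketch: the data behind a cumulative flow diagram.
# Each sprint we snapshot counts per state; a "to do" band that is not
# shrinking, or a growing "bugs" band, is exactly the signal the chart
# is meant to expose. Numbers are invented.

STATES = ["delivered", "done", "to_do", "bugs"]

def cumulative_flow(snapshots):
    """Turn per-sprint state counts into one series per state."""
    return {state: [snap[state] for snap in snapshots] for state in STATES}

history = [
    {"delivered": 5,  "done": 3,  "to_do": 40, "bugs": 2},
    {"delivered": 9,  "done": 6,  "to_do": 38, "bugs": 5},
    {"delivered": 12, "done": 10, "to_do": 41, "bugs": 9},  # scope and bugs rising
]
series = cumulative_flow(history)
print(series["to_do"])  # [40, 38, 41] - work to do is not burning down
print(series["bugs"])   # [2, 5, 9]    - escaped bugs accumulating
```

Plot each series as a stacked band per sprint and the trouble described above - work arriving faster than it is delivered - is visible at a glance.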

Really you need to fix the project. For this I suspect you need:


  • Overhaul the planning process, my guess is your estimation system is not fit for purpose and using dice would be more accurate right now
  • Reduce sprints to 1 week, with a 4 weekly release
  • Push Jira to one side and start working with a physical board (none was mentioned so I assume there is none)
  • Ban overtime
We should also look at your technical practices and testing regime.

These are educated guesses based on the information I have; I’d like to have more but I’d really need to see it.



OK, that was fun - that’s why I’ve done it at the weekend!



Anyone else got a question?


Monday, April 22, 2013

To estimate or not to estimate, that is the question

Disparaging those who provide software estimates seems to be a growing sport. At conferences, in blogs and the twitter-verse it seems open season for anyone who dares to suggest a software team should estimate. And heaven help anyone who says that an estimate might be accurate!



Denigrating estimation seems to be the new testosterone-charged must-have badge for any Agile trainer or coach. (I’ve given up on the term Agile Coach and call myself an Agile Consultant these days!)



Some of those who believe in estimation are hitting back. But perhaps more surprisingly I’ve heard people who I normally associate with the Scrum-Planning Poker-Burndown school of estimation decry estimation and join the race to no-estimation land.



This is all very very sad and misses the real questions:


  • When is it useful to estimate? And when is it a waste of time and effort?
  • In what circumstances are estimates accurate? And how can we bring those circumstances about?
These are the questions we should be asking. This is what we should be debating. Rather than lobbing pot-shots at one another the community should be asking: “How can we produce meaningful estimates?”

In the early days of my programming career I was a paid-up member of the “it will be ready when it’s ready” school of development. I still strongly believe that, but I also now believe there are ways of controlling “it” (to make it smaller/shorter) and there are times when you can accurately estimate how long it will take.



David Anderson and Kanban may have fired the opening shots in the Estimation Wars but it was Vasco Duarte who went nuclear with his “Story Points considered harmful” post. I responded at the time to that post with five posts of my own (Story points considered harmful? - Journey's start, Duarte's arguments, Story points An example, Breakdown and Conclusions and Hypothesis) and there is more in my Notes on Estimation and Retrospective Estimation essay and Human’s Can’t Estimate blog post - so I’ve not been exactly silent on this subject myself.



Today I believe there are circumstances where it is possible to produce accurate estimates which will not cost ridiculous amounts of time and money to make. One of my clients in the Cornish Software Mines commented “These aren’t estimates, that is Mystic Meg stuff, I can bring a project in to the day.”



I also believe that for many, perhaps most, organisations, these conditions don’t hold and estimation is little more than a placebo used to placate some manager somewhere.



So what are these circumstances? What follows is a list of conditions I think help teams make good estimates. This is not an exhaustive list, I’ve probably missed some, and it may be possible to obtain accuracy with some conditions absent. Still, here goes….


  • The team contains at least two dedicated people
  • Stable team: teams which add and lose members regularly will not be able to produce repeatable results and will not be able to estimate accurately. (And there is an absence of Corporate Psychopathy.)
  • Stable technology and code base: even when the team is stable, if you ask them to tackle different technologies and code bases on a regular basis their estimates will lose accuracy
  • Track record of working together and measuring progress, i.e. velocity: accuracy can only be obtained over the medium to long run by benchmarking the team against their own results
  • Track the estimates, work the numbers and learn lessons. Both high level “Ball Park” and detailed estimates need to be tracked and analysed for lessons. Then, and only then, can delivery dates be forecast
  • All work is tracked: if the team have to undertake work on another project it is estimated (possibly retrospectively) in much the same manner as the main stream of work and fed into the forecasts
  • Own currency: each team is different and needs to be scored in its own currency which is valued by what they have done before. i.e. measure teams in Abstract Points, Story Points, Nebulous Units of Time, or some other currency unit; this unit measures effort, the value of the unit is determined by past performance. In Economists’ lingo this is a Fiat Currency
  • Own estimates: the team own the estimates and can change them if need be, others outside the team cannot
  • Team estimates: the team who will do the work collectively estimate the work. Beware influencers: in making estimates the team needs to avoid anchoring; take a “Wisdom of crowds” approach - take multiple independent estimates and treat experts and anyone in authority like anyone else.
  • (Planning Poker is a pretty good way of addressing some of these points, I still teach planning poker although there may be better ways out there)
  • Beware The Planning Fallacy - some of the points above are intended to help offset this
  • Beware Goodhart’s Law, avoid targeting: if the estimates (points) in any way become targets you will devalue your own currency; when this happens you will see inflation and accuracy will be lost
  • Don’t sign contracts based on points, this violates Goodhart’s Law
  • Overtime is not practiced; if it is then it is stable and paid for
  • Traditional time tracking systems are ignored for forecasting and estimating purposes
  • Quality: teams pay attention to quality and strive to improve it. (Quality here equates to rework.)
  • The team aim for overall accuracy in estimates not individual estimates; for any given single piece of work “approximately right is better than precisely wrong”
  • Dependencies & vertical teams: teams are not significantly dependent on other groups or teams; they possess the skills and authority to do the majority of the work they need to
  • The team are able to flex “the what” - the thing they are building - through negotiations and discussions. (Deadlines can be fixed, team members should be fixed, “the what” should be flexible.)
  • The team work to a series of intermediate deadlines
  • It helps if the team are co-located and use a common visual tracking system, e.g. a white board with cards
  • Caveat: even if all the above hold I wouldn’t guarantee any forecasts beyond the next 3 months; too much of the above relies on stability, and beyond 3 months, certainly beyond 6, that can’t be guaranteed
My guess - and it is only a guess - is that when these conditions don’t hold you will get the random results that Duarte described. Sure you might be able to get predictable results with a subset of these factors but I’m not sure which subset.

The more of these factors are absent, the more likely your velocity figures will be random and your estimates and forecasts a waste. When that happens you are almost certainly better off dumping estimation - at best it is a placebo.



Looking at this list now I can see how some would say: “There are too many conditions here to be realistic, we can’t do it.” For some teams I’d have to agree with you. Still I think many of these forces can be addressed, I know at least one team that can do this. For others the prognosis is poor, for these companies estimation is worse than waste because the forecasts it produces mislead. You need to look for other solutions - either to other estimation techniques or to managing without.



I’d like to think we can draw the estimation war to an end and focus on the real question: How do we produce meaningful estimates and when is it worth doing so?

Monday, April 15, 2013

Requirements and Specifications

As I was saying in my last blog, I’m preparing for a talk at Skills Matter entitled “Business Analyst, Product Manager, Product Owner, Spy!” - which I should just have entitled “Requirements: Whose job are they anyway?” - and so I’ve been giving a lot of thought to requirements.



I finished the last blog entry noting that I was concerned about the way I saw Behaviour Driven Development (BDD) going, worried that it was becoming a land-grab by developers on the “need side” of development. (Bear with me, I’ll come back to this point at the end.)



Something I didn’t mention in the last blog was that I thought: if I’m doing a talk about “need” I’d better clearly distinguish Requirements from Specifications. So I turned to my bookshelves….



The first book I picked up was Mike Cohn’s User Stories Applied, the nearest thing the Agile-set has to a definitive text on requirements. I turned to the index and…. nothing. There is no mention of Specifications or of Requirements. The nearest he comes is a reference to “Requirements Engineering” efforts. Arh.



Next up, Alistair Cockburn’s Writing Effective Use Cases, the shortest and best reference I know to Use Cases. No mention of Specifications here either, and although there are some mentions of Requirements there isn’t a definition of what Requirements are.



So now I turned to a standard textbook on requirements: Discovering Requirements: How to Specify Products and Services by Alexander and Beus-Dukic. A good start, the words Requirements and Specify are in the title. Specifications gets a mention on page 393, and that’s it - even there there isn’t much to say. True, Requirements runs throughout the book, but it doesn’t help me compare and contrast.



Now I have a lot of respect for Gojko Adzic so I picked up his Specification by Example with great hope. This has Specifications running through it like the words in seaside-rock, and there are half a dozen mentions of requirements in the index. But….



When Gojko does talk about Requirements he doesn’t clearly differentiate between Requirements and Specifications. This seems sloppy to me, unusual for Gojko, but actually I think there is an important point here.



In everyday, colloquial usage the words Requirements and Specifications are pretty interchangeable. In general teams, and Developers in particular, don’t differentiate. There is usually one or the other, or neither, and they are both about “what the software should do.” On the occasions where there are both, they are overkill and form voluminous documentation (and neither gets read).



The fact that so many prominent books duck the question of requirements and specification makes me think this is a fairly common issue. (It also makes me feel less guilty about any fuzziness in my own mind.)



To solve the issue I turned to Tom Gilb’s Competitive Engineering and true to form Tom helpfully provided definitions of both:


  • “A requirement is a stakeholder-desired, or needed, target or constraint” (page 418)
  • “A ‘specification’ communicates one or more system ideas and/or descriptions to an intended audience. A specification is usually a formal, written means for communicating information.” (page 400)
This is getting somewhere - thanks Tom. Requirements come from stakeholders, Specifications go to some audience. And the Specification is more formal.

Still it’s not quite what I’m after, and in the back of my mind I knew Michael Jackson had a take on this so I went in search of his writing.



Deriving Specifications from Requirements: An Example (Jackson & Zave, ACM press 1995) opens with exactly what I was looking for:



  • “A requirement is a desired relationship among phenomena of the environment of a system, to be brought about by the hardware/software machine that will be constructed and installed in the environment.
  • A specification describes machine behaviour sufficient to achieve the requirement. A specification is a restricted kind of requirement: all the environment phenomena mentioned in a specification are shared with the machine; the phenomena constrained by the specification are controlled by the machine; and the specified constraints can be determined without reference to the future. Specifications are derived from requirements by reasoning about the environment, using properties that hold independently of the behaviour of the machine.”
There we have it, and it fits with Tom’s description. Let me summarise:
  • A requirement is a thing the business wants the system to bring about
  • A specification is a restricted, more exact, statement derived from the requirement. I think it’s safe to assume there can be multiple specifications flowing from one requirement.
From this I think we can make a number of statements in the Agile context:
  • Agile work efforts should see requirements as goals
  • A User Story, or plain Story, may be a Requirement itself, or it might be a Requirement or Specification which follows from a previous Requirement.
  • At the start of a development iteration the requirement should be clear but the specification may be worked out during the iteration by developers, testers, analysts or others.
  • Over-analysis and refinement of specifications will restrict the team’s ability to make trade-offs and will also prove expensive as requirements change during the development effort.
  • Therefore, while requirements should be known at least before the start of the iteration, specifications should only be finalised during the iteration.
In discussing this on Twitter David Edwards suggested the example of a business requirement to provide a login screen. Presumably the business requirement would be something like “All users should be validated (by means of a login system).” From this would flow the need to be able to create a user, delete a user, administer a user, etc. etc. These could be thought of as requirements themselves or as specifications. Certainly what would be a specification would be something like “Ensure all passwords contain at least 8 characters and 1 numeric.”
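That password rule is exactly the kind of thing an executable specification captures well. A minimal sketch in the Specification by Example spirit - the function name and the example passwords are mine, not from any particular BDD tool:

```python
# Hypothetical sketch: the password specification above written as an
# executable check, with examples doubling as acceptance criteria.
# Spec: "Ensure all passwords contain at least 8 characters and 1 numeric."
import re

def password_meets_spec(password):
    """True if the password is at least 8 characters with >= 1 digit."""
    return len(password) >= 8 and bool(re.search(r"\d", password))

# The examples ARE the specification's acceptance checks.
assert password_meets_spec("s3cretpass")        # long enough, has a digit
assert not password_meets_spec("short1")        # too short
assert not password_meets_spec("nodigitshere")  # no numeric character
```

Note what the code does not capture: the requirement itself (“all users should be validated”) stays in prose; only the derived, machine-checkable constraint becomes an executable specification.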

Which brings us back to BDD.



Having worked through this I conclude that BDD is an excellent specification tool. After all BDD is an implementation of Specification by Example.



And while fleshing out specifications may lead to the discovery of new requirements, or the reconsideration of existing requirements, BDD is not primarily a requirements mechanism and probably shouldn’t be used as one.



Requirements need to be established by some other mechanism, deriving specifications from those requirements may well be done using BDD or another SbE technique.



Now, while BDD and SbE may well give Developers first class specification tools these tools should not be mistaken for requirements tools and shouldn’t be used as such.



Pheww, does that all make sense?


I need to ponder on this; I suspect there is a rich seam of insight in being clear about specifications and requirements.

Requirements whose job are they anyway?

Later this week I’m giving a talk at Skills Matter entitled “Business Analyst, Product Manager, Product Owner, Spy!” The talk title is a reference to the John le Carré book “Tinker Tailor Soldier Spy”; it’s probably too clever by half and I should just have entitled it “Requirements: Whose job are they anyway?”



The talk idea was born out of what I see as confusion and land-grabbing in the requirements space, or as I prefer to think of it “the need side” i.e. the side of development which tries to understand what is needed.



I think there are a number of problems on this side of the business...



First, all too often this side is neglected; companies believe that Developers will somehow comprehend what is needed from a simple statement. In the worst cases this is a condition I refer to as “Requirements by Project Title”. Just because Developers understand the technology doesn’t mean they understand what is needed.



Unfortunately Agile tends to make this problem worse because a) developers think they get to decide what is needed, and b) the business sees Agile as a cure-all.



The second problem is the exact opposite of the first: Developer exclusion from requirements. In this case a Requirements Engineering type is the one who is tasked with understanding need, probably producing excessive documentation, and probably giving it to developers who are then expected to create something. In the extreme this means developers never get to meet, talk to or understand the people and businesses that will be using the product.



However, it was another problem that was on my mind more when I thought up the talk: the confusion of roles between Business Analysts and Product Managers, made worse by the appearance of the Scrum Product Owner title.



In the UK it seems to me that too many companies think requirements are done by Business Analysts. This is often the case when development groups are developing software for the company’s own use or for a specific client. But when the product is being developed for multiple external customers - when it will be sold as a product - then requirements are the job of a Product Manager. I’ve written about this role before, several times, so I won’t repeat myself right now - see Inbound & Outbound marketing or Product Management in the UK.



Part of the problem is that in the UK - I can’t really talk for other countries but I think most of Europe is the same - software companies appoint Business Analysts to do what is essentially a Product Manager role. I base this statement partly on the fact that when I deliver my Agile for Business Analysts course (at Skills Matter later this week, and another version later this month at Developer Focus) I find people on the course who I would regard as Product Managers, but they - and their employers - often have never heard of the Product Manager role.



Finally, I’ve also become concerned in recent months that Behaviour Driven Development is being used by Developers in an attempt to occupy the requirements space - a land grab!



On the one hand this needn’t be a problem, if BDD allows Developers to better understand the problem they are trying to solve then I would expect development to go smoother.



On the other hand there are three reasons why I’m concerned about this trend:



  • The “need side” is a fuzzy, messy, ambiguous area and I sometimes wonder if the rational engineering mindset is the right tool here.
  • I wonder if Developers really have the skills to understand the need side. Undoubtedly some do but I’m far from convinced they all do. Indeed, those who do might be better off moving entirely from development into the BA or Product Manager role.
  • Time: perhaps this is the main concern.
I have long believed that really understanding the need side, and getting to the bottom of what is needed, requires time. If Developers are trying to tackle the need side while still coding then I question whether they have the time to do both. If they do not then I believe that opinion and assumptions will substitute for fact finding and requirements validation.

(I should say that on a small work effort, with a suitable Developer(s), having one person do the needs assessment and coding can be ideal.)



I’ve only really stated the problem here not the solution. I’m still working all this out in my own head, Wednesday’s talk should move the discussion forward and I’ve already started to sketch another blog entry on this subject - coming up soon “Requirements & Specifications.”