Sprint execution

Baking In Quality with Agile

One of the things that I love about Agile, and especially the related techniques of BDD and test automation, is that, done correctly, your teams are essentially baking quality into your products AS they develop, not after. The traditional SDLC (read: waterfall) took a very linear approach to project delivery, which in turn translated into a similar approach to how we developed and delivered our software products.

In our traditional delivery methods, quality was not 'baked' in but tested out, and we never got to the point where all of the 'defects' that were found had actually been tested out. Instead we created a bug database where 'bugs' go to die.  Nearly every organization has some form of bug database containing bugs that were 'found', sometimes years ago, and never fixed (which begs the question: were they ever bugs in the first place?).  In Agile we really don't want to see a bug database at all, because we should be instilling a zero-defect policy for each sprint, meaning that we should never be introducing new tech debt into our product.

As I progressed on my Agile journey I came to realize that there are actually two types of 'bugs' that we encounter when we develop software:

  1. Bugs as Missed or Undefined Requirements
  2. Bugs as True Bugs

-- Bugs as Missed or Undefined Requirements - I will argue (and I am starting to write a book about this very topic) that the majority of the bugs we find and document aren't really bugs at all, but rather functionality that has been misinterpreted based upon requirements definitions that come out of segregated development processes, i.e. Write Requirements, Develop Software, Test Software, Deploy Software.

Language is such an imprecise way of communicating that it virtually guarantees that if you have more than one person developing the software for your product, you will get different interpretations of how to implement the requirement functionally.

Remember the old 'The System Shall' statement that is commonly used when writing Business Requirements documents?  I always thought that this missed part of the scope of the requirement: what about what the System Shall Not do?  We focus so much on the happy path in our functional development that we miss large segments of functionality tied to what an application shouldn't be allowed to do.  'The system shall accept a username and password at login', for example, says nothing about what should happen after the fifth wrong password in a row.

Let's take an example of an English word to convey what I mean:

What do you think of when you see the word BASS?

Do you think of this:

[image: a bass guitar]

OR this?

[image: a bass fish]

These have the same spelling in the English language yet two entirely different meanings. Our life experiences become a prism for how we interpret what we read and how we react to it. (PS: I'm a musician, so I think of the instrument before the fish.)

Teams that rely on written requirements documents that are reviewed and worked on independently (meaning developers develop the code/functionality and then pass it on to testers) will inherently have issues/bugs related to how something was interpreted and then developed.  In the example above, the developers may have thought they were delivering a bass guitar while the testers were expecting a bass fish (yeah, extreme, I know), but I believe this is the root of many of our issues when trying to deliver what the business expected in the first place.

-- Bugs as True Bugs - I believe that true bugs are more technical than functional (or should be).  Bugs related to how integration happens are very common because, although we can describe the behavior of our feature, we can't always anticipate issues related to how independent systems will work together.  Many 'true' bugs in Agile are caught in the moment and fixed before they ever make their way to production.  For the Finance people out there, this is a tremendous cost saving that has been proven time and again; ROI is greatly enhanced when you deal with tech debt up front rather than over time.  And please don't ever think that it is more important to get 'something' to production than to make sure the product is operationally sound.

So what to do?

To address the inherent limitations of our communication, we need to shift our thinking toward concrete descriptions of behavior rather than broad-based statements such as 'The System Shall'.

User Stories and the corresponding BDD acceptance criteria are a great way to do this, as a BDD example table clearly defines the behavior of our product functionality as outcomes based upon inputs.  The 'language' of BDD gives everyone a way to build a shared understanding of what the story, and the behavior of the product (aka the system), will actually be.  BDD grounds our communication in concrete examples and removes individual interpretation.
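
To make that concrete, here is how a team might capture one small slice of login behavior as a BDD scenario with an example table (the feature, inputs and outcomes below are purely illustrative, not a prescribed format):

  Scenario Outline: Handling failed login attempts
    Given a registered user with a valid account
    When the user enters an incorrect password <attempts> times in a row
    Then the account should be <account state>
    And the user should see "<message>"

    Examples:
      | attempts | account state | message                              |
      | 2        | still active  | Incorrect password, please try again |
      | 5        | locked        | Account locked, contact support      |

Everyone on the team reads that table the same way, so there is no bass guitar vs. bass fish moment, and the 'shall not' behavior (the lockout) sits right alongside the happy path.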

In Agile we start by ensuring that the teams understand that they OWN the quality of their delivery. One of the things that I absolutely love about high-performing Agile teams is that there is no finger-pointing: if a Sprint fails to deliver what the team committed to, then everyone shares the blame, not just an individual or functional group.

We bake quality into our products by understanding how to write User Stories that provide context while leaving no room to misinterpret the meaning of the requirements.

To do this we must first ensure that we have a well-written user story. What does that look like?

A good user story needs to identify the actor, the What, and then the Value statement.  Many teams that I have worked with write stories that only identify the actor and the What but leave out the value statement, which is really the proof that what we are working on is worth devoting our resources to.  A good user story should never convey any creative or technical design. I know that many people like to show their technical knowledge by writing stories that spell out what they think the design or system will need to use, but all that does is start the team down a path before all of the options have been explored.
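
To make that concrete, a story with all three parts might read something like this (a made-up example for illustration):

  As an online banking customer,
  I want to be alerted when my balance drops below a threshold I set,
  so that I can avoid overdraft fees.

It names the actor, the What and the Value, and it says nothing about push notifications, database tables or any other design choice. That conversation belongs to the whole team.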

Behavior over language interpretation is what you are striving for when writing contextually rich user stories.

BDD, with its accompanying Example statements, takes an otherwise basic user story and brings it to life, much like we take basic ingredients for cookies and bring them together in the right amounts to deliver awesomeness every time.

Software quality is much like baking; you need:

  • The right ingredients - Good individual team members, honest communication, commitment to quality
  • The right process - Write good user stories, add quality BDD acceptance criteria, code and test in parallel and then deliver, involve the entire team in writing BDD, involve the entire team in the estimation process.

By taking the time to build contextually rich user stories and define them with BDD acceptance criteria, you move towards a shared understanding of the behavior of your product rather than one driven by functional requirements.  Requirements tend to convey too little about behavior and focus more on the big win being reported to Sr. Management about what is being delivered.

Agile is a very disciplined delivery process, and in order to bake quality into your product you need to develop efficient processes that keep the User Story/BDD train running smoothly.  If you are entering a sprint and only then writing your BDD, you are already behind; you need a process by which teams are working on current-sprint development AND building context for the next one.  It can be done, and when it is, you get what I call progressive regression: the automated tests that come out of your BDD work accumulate sprint over sprint into an ever-growing regression suite.

Agile is very disciplined, and if you think that going Agile will make your current life easier, well, guess again.  What Agile will do is highlight EVERY weakness in your current product delivery process and then focus your attention on finding ways to improve on them.

For those of you looking for workshops regarding User Story/BDD techniques, please reach out to me at soundagile@gmail.com.

Agile Metrics

One of the hardest things that we struggle with in Agile is how to report how we are doing with respect to our projects.  Agile or otherwise, we still primarily think of our software development in terms of projects, and all of our historical metrics from our waterfall days talk about headcount, resourcing, man-hours (or days), project milestones, and so on.

In Agile we don’t produce the type of metrics that management has historically used as progress tools.

You hear quite often that in Agile velocity is king (kind of like cash), and in a real sense that is true for me.

Velocity represents the amount of value that a single Scrum team can deliver in a sprint.  We know that each Scrum team has a fixed cost, so the value that they consistently deliver is a key metric for management to key on.
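
To put some purely illustrative numbers on that: a team that completes 21, 24 and 23 points over three consecutive sprints has a velocity of roughly 22-23 points.  Because that team's cost per sprint is fixed, management now has a simple, repeatable measure of what that spend buys each iteration, and a baseline against which the patterns below stand out.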

  • Velocity – What can you glean from a single number?
    • Effective Estimation – A Scrum team that doesn't deliver accurate or confident estimates will see its velocity roller-coaster from sprint to sprint.
      • What to look for? –
        • If a team consistently has to move a story to the next sprint because it couldn't be completed, that is an indicator that estimation needs to improve.  It also reduces the points for that sprint, since an incomplete story can't be accepted.
        • If a team consistently has large stories entering their sprint, i.e. larger than 8 points, this is an indication that they haven't decomposed their stories enough.  Lack of story breakdown results in more unknowns, leading to inaccurate sprint commitments.
  • Planned vs Actual – Another flavor of identifying estimation issues.
    • What to track? – Once a team develops a consistent velocity (it usually takes 3-5 sprints), that is what we should expect each iteration, provided people aren't on vacation, etc.  A team that commits to widely varying levels of story points each sprint typically hasn't performed enough story development or lacks a well-groomed backlog.
    • Burndown – I specifically like this as it reflects the above issues very nicely.
      • What to look for? – Pretty simple, really.  A team that has performed effective story decomposition, planning and estimation should expect to see their work being delivered and completed throughout the entire sprint.
      • What to look for? – Teams who have a cliff-dive burndown, where all of the story points in development hit QA at the end of the sprint.  (Solution? QA sets up a working agreement that they will only test 'x' number of points in any given day.  If the team delivers 12 points on Friday and the agreement is 8 points for testing, then 4 story points will not get accepted, regardless of whether 'development is done'. Cold, huh?)
    • Sprint changes – This one is harder to track with most of the tools out there (Rally has an app, but I'm not sure it makes it easy to see which stories actually left the sprint and which ones came in).  Regardless of how you track it, this one metric will tell you a lot about an organization and its overall planning effectiveness.  Remember – Agile accepts that change is inevitable, but in the real world you can't have unmanaged change; teams can't work effectively or productively that way.
    • Sprint Plan vs Actual – As teams improve their planning and estimation skills, they should also get better at providing accurate estimates of the number of Sprints it will take to complete a feature (aka project).  A VP recently told me that he wasn't sure Agile was the way, because at the end of the day he still had to make commitments to Sr. Mgt.  The notion that Agile can't provide accurate plans is wrong, but you do need to focus on Discovery, Planning and Estimation efforts, and we need to understand that there are places and times where we need to take the time to do this work.  Thinking that we should always just be coding is a sure way to build something, but probably not the right something.
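
To put some made-up numbers on that last point: if a feature's decomposed backlog comes out at roughly 120 points and the team's established velocity is about 23 points per sprint, you can tell Sr. Mgt with reasonable confidence that the feature will take on the order of five to six sprints, and then track Plan vs Actual against that forecast every iteration.  That level of predictability only comes after the Discovery, Planning and Estimation work has been done.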