By Alexander Robson

Improving Software Delivery With Single-Piece Flow

A car drives down a road free of traffic
Going fast is easy when you're the only car on the road

There are many reasons engineering organizations should look for ways to improve their rate of delivery. Learning as early as possible from product experiments means less time spent going in the wrong direction. Making valuable releases available to customers earlier means closing deals sooner. In subscription-based SaaS models, delivering value even one month earlier can have a significant impact on ARR.

What Is Single-Piece Flow?

In manufacturing, single-piece flow (a.k.a. "one piece flow") was developed as part of the Toyota Production System and was later incorporated into the Theory of Constraints. Its advantages for producing physical goods are lower overhead costs and finished goods that become available significantly earlier.

This 47-second video provides a nice visual illustration of how units moving through a system with a work-in-process (WIP) limit of 1 result in finished pieces becoming available sooner despite the same initial rate of arrival.
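The same effect can be sketched with a toy model. All the numbers here are hypothetical (three process steps, four pieces, one time unit per operation); the point is only the shape of the result:

```python
STATIONS, ITEMS, STEP = 3, 4, 1  # hypothetical: 3 steps, 4 pieces, 1 unit each

def batch_finish(i):
    # A station only starts once the previous station has finished the
    # ENTIRE batch, so (STATIONS - 1) full batches are processed before
    # the last station starts; finished pieces then come off one per step.
    return ((STATIONS - 1) * ITEMS + (i + 1)) * STEP

def single_piece_finish(i):
    # With a WIP limit of 1, piece i enters one step behind piece i - 1
    # and flows through every station without waiting.
    return (STATIONS + i) * STEP

print("batch:       ", [batch_finish(i) for i in range(ITEMS)])         # [9, 10, 11, 12]
print("single-piece:", [single_piece_finish(i) for i in range(ITEMS)])  # [3, 4, 5, 6]
```

With batch transfer, the first finished piece appears at t=9; with single-piece flow it appears at t=3, and even the last piece is done at t=6 instead of t=12.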

Does It Have Applications In Software?

The visualization helps us understand why limiting WIP results in finished products becoming available sooner in a manufacturing setting. Can we do the same in the context of delivering software, with different team structures and different ways of organizing the work?

Approach and Key Assumptions

I made some diagrams in an effort to illustrate how different strategies for organizing and assigning work to teams and individuals have played out in my experience. I've had the opportunity to observe and work with teams that tried one or more of these approaches. I am operating on a number of assumptions:

  1. Stakeholders and customers prefer business outcomes to occur sooner

  2. Quality has to be built into the process and is limited by time available

  3. Cohesive solutions require non-trivial collaboration between engineers to achieve

  4. Planning ahead can break interdependencies and support parallelized work

  5. Unrelated work streams in the same code base cause "log jams" and overhead

  6. Context switching between projects or types of work introduces significant overhead

  7. Engineers are not "fungible" - you can't just shuffle them between tasks or projects without investing in them

  8. More activity means more cognitive load for everyone involved

I have diagrammed out four different approaches for comparison:

  1. A cross-functional team that only works on a single feature/project at a time

  2. A cross-functional team that spends time planning to break dependencies by focusing on the contracts between parts of the system

  3. A cross-functional team that interleaves multiple projects and releases when one is finished

  4. A "full-stack" team that expects each engineer to own delivery for their feature/project

Let's examine each one in turn. Take everything with a grain of salt: these are fictitious projects, and their scale and dependencies were chosen for ease of representation to demonstrate how outcomes change in response to team structure and the approach to assigning work.

Each example shows the same 4 features, a skill mix, and how the stories are worked based on the team's chosen operating model. A story represents a "day of work" because our teams are all very good at right-sizing their stories to roughly the same effort. Even if this sounds too far-fetched, stick around until the end: since the same assumptions apply to every example, the comparisons between approaches still hold.

Cross-functional Team with Single-Piece Flow

A diagram showing the duration of interrelated tasks across a team of cross-functional engineers

In this example, the team works each task in the order the work needs to happen. No work is done on other features; the current feature has the team's full focus and top priority. The low utilization allows engineers to pair with one another, so there is knowledge sharing and less reliance on any one individual for a part of the system. The ability to pair and collaborate catches defects early, improves designs, results in higher cohesion, and reduces rework. In this scenario, the risk to the schedule is low since the team has plenty of slack to absorb interruptions.

Cross-functional Team with Single-Piece Flow and Dependency Planning

In this example, everything is the same as for our other single-piece flow team, but this team has decided to spend 3 days at the beginning of each project establishing enough initial direction that individuals can work more independently than before. Even with some remaining dependencies, the team saw an 18% average improvement in delivery time without dramatically increasing its capacity commitment, introducing quality or cohesion problems, or taking on significant schedule risk.

I included this example primarily because I've run into teams and managers who were very skeptical about the value of this kind of planning. The concern was that the benefit of planning wouldn't offset the time spent. Granted, this is a totally contrived example, so while healthy skepticism is still warranted, I hope teams will conduct their own experiments and measure the outcomes.

Cross-functional Team with Interleaved, Parallel Work Streams

While this looks like the kind of work stream most engineers would find frustrating, I've been surprised by how frequently I've come across this exact strategy for organizing work. I'm always assured that it is the best way to take advantage of the time available.

As you can see, I've introduced penalties to the duration of stories that force an individual to jump between projects. The end result is worse lead times, longer time in flight for each project, and significantly higher utilization. More interdependencies between tasks that take longer mean less time for collaboration while adding greater scheduling risk. In this scenario, one person being out during the first 20 days is likely to push every feature's delivery out, as there isn't usually readily available slack from an engineer with good overlap. There's also less time for knowledge sharing, meaning that anyone taking on a team member's work is going to need at least a day to get up to speed.
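The switching penalty can be sketched in a few lines of Python. The story size matches the post's "day of work" sizing; the half-day switch cost is a hypothetical illustration, not a measurement:

```python
STORY = 1.0    # each story is one "day of work", matching the post's sizing
SWITCH = 0.5   # hypothetical penalty paid each time an engineer changes project

def lead_times(sequence):
    """Completion day of each project, given the order stories are worked."""
    t, last, done = 0.0, None, {}
    for project in sequence:
        if last is not None and project != last:
            t += SWITCH          # pay the context-switch overhead
        t += STORY
        done[project] = t        # overwritten until the project's final story
        last = project
    return done

print(lead_times(["A"] * 5 + ["B"] * 5))  # {'A': 5.0, 'B': 10.5}
print(lead_times(["A", "B"] * 5))         # {'A': 13.0, 'B': 14.5}
```

Focused work ships project A on day 5; interleaving pushes A's delivery to day 13 and delays B as well, even though the total story count is identical.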

Teams that are taking this approach should be aware of the trade-offs they are making regarding how early they can deliver value and how much risk they're accepting across schedule risk, quality, and technical cohesion.

Full-Stack Team with Parallel Work Streams

I've recently talked with a few leaders in software about why they prefer to run their teams at 100% capacity, with engineers expected to be capable of working on any part of the stack with equal effectiveness. I've never met anyone who was equally effective across the entire stack, from infrastructure to the client. If those folks exist, you'd only be able to employ them at a premium. And if you're going to go to the expense of finding, hiring, and retaining the best talent, why implement a system that pits their work product against their team members'?

This diagram adds a penalty for the overhead that is typically introduced when multiple work streams are all happening at the same time within the same code base. Just as we saw in the YouTube video at the beginning, multiple pieces of work in flight at the same time necessarily slow the movement of any given piece through the queue. Because software isn't as uniform as widgets moving through predefined, well-known processes, the likelihood of conflicting change sets, regressions identified late in the process, and incompatible directions grows significantly.

In this contrived example, the most concerning aspect is that all team members are 100% committed for the duration of their projects. This creates a disincentive to collaborate on design, quality, and shared approaches. At a 100% commitment level, you can guarantee that the team will not be capable of delivering to the schedule, meaning that the worst business outcomes will be even worse than predicted.

The Relationship Between Utilization, Risk, and Quality

a bogus graph intended to show the relationships between utilization, quality, cohesion, and schedule risk
As Utilization Increases, Bad Things Happen

Please pardon the bogus graph (no units, no y-axis, and no data). I find it helpful to have illustrations like this that visualize the relationships between concepts as some contributing factor changes. If you go back through each of the contrived flow diagrams, you'll recognize that schedule risk increases with utilization. As for how cohesion and quality change, notice how there are fewer and fewer coinciding periods of slack that would allow team members to collaborate.
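One way to put numbers behind a graph like this is the standard M/M/1 queueing result, where expected time in the system is the service time divided by (1 - utilization). This isn't the model behind the diagrams in this post, but it produces the same shape: delay explodes as utilization approaches 100%.

```python
# M/M/1 queue: expected time in the system = service time / (1 - utilization).
SERVICE_TIME = 1.0  # one "day" per story, matching the post's sizing

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    multiplier = SERVICE_TIME / (1 - utilization)
    print(f"{utilization:4.0%} busy -> work spends {multiplier:5.1f}x the service time in the system")
```

At 50% utilization a story spends 2x its service time in the system; at 99% it spends 100x. The slack that looks "wasted" at low utilization is exactly what keeps lead times predictable.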

Run Your Own Experiments

If you've never put WIP limits in place, I hope this post has made you curious about ways you could experiment with single-piece flow. I have been fortunate to see how adoption can really improve the business outcomes teams are able to deliver. Are you still feeling skeptical? I'd like to hear more about your experiences. Send me an email to alex AT robsonconsulting DOT services or share your perspective with other readers through comments.
