Team Flow
Velocity vs throughput, WIP limits, focus factor, and capacity. Lead time and cycle time. Why a board with eight in-progress tickets per person is not productive; it is stuck.
Team flow metric
Track velocity each sprint and trend it over time. The number going up means the team is improving.
Team flow metric
Track throughput (stories shipped per sprint) and lead time (idea to deploy). Velocity is a planning aid; throughput is the outcome.
Velocity going up can mean the team got faster, or it can mean the team learned to point bigger. Without throughput as a counter-check, the trend is unreadable.
Throughput counts what users got. Lead time names how long they waited. Velocity is a useful capacity number for the next sprint, but it is not the goal; treating it as one creates pressure to inflate point estimates rather than ship.
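The distinction can be made concrete in a few lines of Python. The ticket records below are hypothetical, standing in for whatever the team's tracker exports:

```python
from datetime import date

# Hypothetical ticket records for one sprint: (created, deployed).
tickets = [
    (date(2024, 3, 4), date(2024, 3, 8)),
    (date(2024, 3, 4), date(2024, 3, 15)),
    (date(2024, 3, 6), date(2024, 3, 14)),
]

# Throughput: what users actually got this sprint.
throughput = len(tickets)

# Lead time per ticket: how long users waited, in days.
lead_times = [(deployed - created).days for created, deployed in tickets]

print(throughput, lead_times)  # 3 [4, 11, 8]
```

Note that no story points appear anywhere: both numbers come straight from dates and counts, which is why they are harder to inflate than velocity.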
In-progress policy
Engineers pull whatever they can while waiting on review. Average four cards in progress per person.
In-progress policy
Each engineer has at most two cards in progress. If a card is blocked, swarm to unblock it before pulling new work.
Four cards in progress means three are waiting for something. The team is busy and slow at the same time. Throughput drops, lead time stretches, and the board fills up with 'in progress' that is not progressing.
WIP limits force the team to finish before starting. Two cards per person caps the context-switching cost; the swarm-to-unblock rule turns 'waiting on review' from a personal problem into a team one.
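A minimal sketch of the pull rule, assuming a hypothetical card shape (a dict with a `blocked` flag):

```python
WIP_LIMIT = 2  # cards in progress per engineer, per the policy above

def may_pull(in_progress_cards):
    """Return True only if the engineer is under the WIP limit
    and none of their current cards is blocked."""
    if any(card["blocked"] for card in in_progress_cards):
        # Swarm to unblock before starting anything new.
        return False
    return len(in_progress_cards) < WIP_LIMIT

print(may_pull([{"blocked": False}]))                      # True
print(may_pull([{"blocked": False}, {"blocked": False}]))  # False: at the limit
print(may_pull([{"blocked": True}]))                       # False: unblock first
```

The blocked check deliberately comes first: a blocked card vetoes new work even when the engineer is under the limit.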
Sprint planning
Pull stories totalling the team's average velocity. Commit to finishing all of them.
Sprint planning
Pull stories totalling about 70% of the team's median velocity. The remaining capacity absorbs incident response, support, and the work the team will discover mid-sprint.
Pulling to 100% of velocity means every interruption forces a story to be cut, and the team drifts into a culture of over-commitment. Reviews become rationalization sessions instead of learning.
70% leaves room for the work the team will discover mid-sprint. Incidents, support escalations, and 'oh, this story was bigger than we thought' all need somewhere to land. A pre-allocated buffer means they land cleanly.
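The arithmetic is simple enough to show directly; the velocity history here is hypothetical:

```python
import statistics

recent_velocities = [34, 41, 29, 38, 36]  # hypothetical last five sprints
median_velocity = statistics.median(recent_velocities)  # 36

# Commit to ~70%; the remainder is buffer for incidents and discovered work.
commitment = round(0.7 * median_velocity)
buffer = median_velocity - commitment

print(commitment, buffer)  # 25 11
```

Median rather than mean, so one outlier sprint (a holiday week, a production fire) does not drag the planning number around.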
Time-on-the-board metrics
Lead time = time from story creation to deploy. Cycle time = time from In Progress to Done. Both reported as a single average.
Time-on-the-board metrics
Lead time = idea (or customer request) to live in production. Cycle time = pulled into In Progress to Done. Both reported as p50 and p85, not as averages.
An average flattens the long-tailed distribution that matters most. The 5% of stories that take three sprints are exactly the stories the team needs to talk about, and the average buries them.
Lead time tells the customer how long they wait. Cycle time tells the team how long the work itself took. Reporting percentiles (not averages) catches the long tail that averages hide.
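One way to report the numbers, using a nearest-rank percentile instead of an average; the cycle times below are hypothetical:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest value at or above p% of the data."""
    ordered = sorted(values)
    k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[k]

cycle_times = [2, 3, 3, 4, 4, 5, 5, 6, 8, 21]  # days; note the 21-day outlier

p50 = percentile(cycle_times, 50)           # 4
p85 = percentile(cycle_times, 85)           # 8
mean = sum(cycle_times) / len(cycle_times)  # 6.1
```

The mean (6.1) sits quietly between p50 and p85 and says nothing about the 21-day story, which is exactly the one the team needs to discuss.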
When a story does not finish
Roll the story into the next sprint. Subtract its remaining points from next sprint's capacity.
When a story does not finish
Move the story back to the backlog and re-refine. If it carried because of missing scope, split it. If it carried because of estimation error, raise that in retro.
Auto-carryover is a way of pretending the sprint succeeded. The same story will roll again, the velocity number will be unreadable, and the retro will not see the pattern because the carry happens silently.
Stories that did not finish are signal. Sending them back through refinement forces the team to name what went wrong (under-scoped, over-estimated, blocked). Auto-rolling buries the signal and the same problem recurs.
Capacity for next sprint
Five engineers × 10 working days = 50 person-days available. Pull stories worth 50 person-days.
Capacity for next sprint
Five engineers × 10 working days × ~0.6 focus factor (meetings, reviews, on-call rotation, support) = ~30 person-days. Subtract holidays and known interrupts. Pull to that.
Treating every working day as a delivery day means 40% of the plan is phantom capacity before the sprint starts. The team finishes 60% of the plan and is told they are slow.
Engineers do not spend 100% of their day shipping. Meetings, reviews, on-call, and support are real. A ~0.6 focus factor is honest; pretending the number is 1.0 produces over-commitment every sprint.
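The capacity arithmetic in full. The focus factor and absence figures are hypothetical, and whether absences come out before or after the focus factor is a judgment call; here they come out after:

```python
engineers = 5
sprint_days = 10
focus_factor = 0.6   # meetings, reviews, on-call rotation, support
known_absences = 2   # hypothetical: holidays and planned time off, in person-days

raw_capacity = engineers * sprint_days           # 50 person-days
delivery_capacity = raw_capacity * focus_factor  # 30 person-days of actual shipping
pull_target = delivery_capacity - known_absences # 28 person-days

print(pull_target)  # 28.0
```

The point of writing it out is that 50 never appears in planning: the first honest number is 30, and the pull target only shrinks from there.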