The bottom part of the diagram is a representation of a single step in the software production cycle.
It consists of two Amplifiers (A1, A2) with gains G1 and G2.
[The top part is an outline of the possible succession of steps in the process. This too has interesting characteristics. Not addressed here.]
Part of the output of A1 leaks back, via A2, into the input of A1.
A classic schematic for a wide-band amplifier or an oscillator.
How much is fed back, and in what time-relationship to the input?
The value "R". (Really it should be "Z" for impedance, and have lead/lag/in-phase components)
The crucial aspect of the diagram is that all inputs are "positive" - the output directly follows the input, not its inverse (as it would in a classic "negative feedback", or correcting, system).
The model says there are only 3 internal metrics to be measured, plus one external inflow and two outflows.
The 3 internal metrics:
- G1 - Raw Lines of Code per Day
- R - errors per hundred Lines of Code
- G2 - Time to fix one error. In a normal Project, this can't be less than 1/2 day because of overheads, analysis and diagnosis, testing and change commit.
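To make the three metrics concrete, here is a one-day sketch of the loop's arithmetic. The figures are illustrative assumptions, not measurements:

```python
# One day's flow through a single step, using the three internal metrics.
# All parameter values are illustrative assumptions.

G1 = 100    # raw lines of code per day
R  = 2.0    # errors per hundred lines of code
G2 = 0.5    # days to fix one error (the half-day floor noted above)

errors_per_day = G1 * (R / 100)        # errors injected by a day of coding
fix_days       = errors_per_day * G2   # days of rework fed back into the loop

print(f"{errors_per_day:.1f} errors/day, {fix_days:.1f} days of rework per coding day")
```

With these figures, a day of coding generates exactly one day of rework - the feedback is eating the entire forward effort.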
I've forgotten to draw its inverse, an outflow to previous steps.
The most obvious outflow is "Delivered Lines of Code".
You can see from the nature of "R" that there may be significant 'lag' (time delay), so at notional Project Delivery the Errors may not yet be fully realised.
There are exactly 3 dynamic states of this system, given that G1, G2 and R are constant.
The "gain around the loop", G = G1 * ( G2 * R ). (Since R is quoted per hundred lines, use R/100 in the arithmetic so that G comes out dimensionless.)
[if r = 1/R; then G = G1 * (G2 / r ). Possibly more intuitive.]
The Loop Gain, G, can be greater than, equal to, or less than 1. [mistake in diagram, I wrote '0' (zero)]
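The two algebraic forms give the same number. A quick check with illustrative values (assumed, not measured; r is taken here as lines of code per error):

```python
G1, R, G2 = 100, 2.0, 0.5      # illustrative: LOC/day, errors/100 LOC, days/error

G_via_R = G1 * (G2 * R / 100)  # R is quoted per hundred lines, hence the /100
r = 100 / R                    # lines of code per error
G_via_r = G1 * (G2 / r)

print(G_via_R, G_via_r)        # both equal 1: the holding pattern
```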
The system dynamics are:
- G < 1. The system "converges": the remaining work heads towards the X-axis asymptotically. The smaller G, the faster the convergence.
- G = 1. System is in a "holding pattern". It never moves towards completion, but doesn't move away.
- G > 1. The System "diverges". Every day of effort produces more than one day's extra effort. The classic "Death March".
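The three regimes fall out of a toy backlog simulation - a sketch under the simplifying assumptions of one day's effort applied per day and a constant loop gain:

```python
def backlog_after(days: int, G: float, initial: float = 100.0) -> float:
    """Outstanding effort (in days) after `days` days of work.

    Each day, one day's effort is completed and G days of
    rework flow back into the backlog.
    """
    remaining = initial
    for _ in range(days):
        if remaining <= 0:
            return 0.0
        remaining += G - 1.0
    return max(remaining, 0.0)

for G in (0.5, 1.0, 1.5):
    print(f"G={G}: backlog after 200 days = {backlog_after(200, G):.0f}")
```

At G = 0.5 the backlog empties; at G = 1.0 it sits at its starting value forever; at G = 1.5 it has doubled after 200 days.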
With 3 variables, the system can be manipulated to push it in a desired direction. If you were paying for the project and wanted a result, you'd want the smallest possible values for R and G2.
- Reduce G1. Slow the coding process. Counter-intuitive, and it would probably cause the Project Managers' heads to explode. But it's there in the figures: spend more time fixing errors and less creating new code and its concomitant errors.
- Reduce G2. Better people, better systems, better tools, better training to find faults faster and produce better solutions. Subject-area experts find faults faster too. Lesson: don't move people away from what they know.
- Reduce R. Better Design, Design and Code Reviews and better people/tools/processes for coding. Create less rework, adopt practices that reduce errors (like pair-programming). And most crucially, don't over-work people. Even a small increase in Error rate can push a project into the Red Zone...
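The levers above can be sanity-checked against the loop gain directly. A sketch with assumed figures, showing how a modest rise in the error rate pushes an otherwise-converging project past G = 1:

```python
def loop_gain(G1: float, R: float, G2: float) -> float:
    """Loop gain G = G1 * (G2 * R), with R in errors per hundred lines."""
    return G1 * (G2 * R / 100)

base = dict(G1=100, R=1.8, G2=0.5)    # a converging project: G = 0.9
print(f"baseline G = {loop_gain(**base):.2f}")

# Overwork nudges the error rate up by roughly 20%...
tired = dict(base, R=2.2)
print(f"after a small rise in R: G = {loop_gain(**tired):.2f}")  # past 1: the Red Zone
```

The same function shows the other two levers: halving either G1 or G2 at R = 2.2 brings G back down to 0.55.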
Projects do not "fail/get later one day at a time". They fail before they start.
Left unchecked, they often gallop away from a solution.
No amount of effort can retrieve them - the more effort expended, the more that's needed to complete them. This was the genesis of Microsoft's 2004 "Longhorn Reset", when 25,000 person-years of work were discarded.