When a software development organisation is driven by processes and metrics, we sometimes overlook a very important aspect: the craftsmanship of our team of human beings. Listening to the wrong metrics results in the wrong decisions being taken, and leaves professionals unhappy through a lack of trust and empowerment.
OGF stands for “Overall Gut Feeling”
To break this cycle of more processes and less trust, we started using a simple metric for the status of our release: ‘OGF’. OGF stands for “Overall Gut Feeling” and is expressed as a number between 1 and 10: 1 means ‘all hell breaks loose if we release’, 10 means ‘ship it!’. Without any complicated process, you can adopt OGF just by asking someone “on a scale of 1-10, what is your overall gut feeling on X?”. If the answer doesn’t meet your expectations, ask “why”, listen, and decide on the minimal next step together. This gives professionals the opportunity to express any concern they might have that isn’t on your radar yet.
Of course, we still looked at other things, like defect dashboards and our automated pipeline. OGF is not an excuse not to be well-informed. But sometimes all the measurements just did not add up to the same Overall Gut Feeling. It could be something as small as a single defect that really bugged our tester: for example, an obvious visual regression (which easily slipped past our high-severity Jira filters). Our OGF could also be low simply because we didn’t have enough time to test something, or because we saw many unverified changes still coming in.
Quantifying OGF started somewhat as a joke, when we realized the type of metrics our upper management was asking for was part of the underlying problem. In our case, teams were being benchmarked on test cases in a traditional manual test case system, while the goal was to get quality earlier and deliver more often. These measurements emphasised a situation our team had (successfully) moved away from, and so, in our opinion, shifted focus away from decoupling and adopting a more effective automated test strategy.
OGF stayed around for many years. Even when our processes matured and we were able to release multiple times a day to our integrators, we never fully dropped it. Our OGF became centred more on business functionality than on bugs and testing-capacity limits (which tells me we were on the right track).
For example, we still expressed a low OGF when we felt the claimed MVP (Minimum Viable Product) was maybe a bit too minimal.
Trying out OGF is amazingly simple. It requires no tooling or formalized process. Yet it can be an incredibly useful way to get the insights your Jira dashboards or test case management solutions do not give you. After all, people are continuously doing risk assessments, often without realizing it. Just as important, it empowers people to voice concerns before decisions are made.