Continuing the alternate week pattern of posting chapter extracts from the forthcoming third edition of my book “Guide to Organisation Design,” this week’s extract is the opening section of Chapter 6, “Measurement”. Next week will be a discussion related to this chapter.
Measurement is important in organisation design. It is a crucial part of assessing how to improve the design and, through it, performance. Ideally, measurement prompts reflection on what is being done, how it is being done, the effects in the organisation and the wider world, and how things could be different. The purpose of measurement is learning and reflection, in order to improve the design and how it is delivered.
That can include measurement for control: to check, for example, whether accountabilities are being discharged effectively, or whether targets are being met, or where progress or performance is not as expected. When measures are used for learning or control, it is important that they are considered in the context of the design, the organisational system and the wider ecosystem.
Organisation design measures usually include resources used (who, where); business processes (doing what); outputs (how many things produced, how quickly); and real-world outcomes (revenues, client satisfaction, employee wellbeing and so on). Some of these will be leading (real-time) measures; some will be lagging (only measured after the fact).
Starting out on organisation design work, whether project-based design or continuous design, people generally want to answer some key questions, and most of these benefit from metrics of one kind or another (marked below with [M]):
- Where are we now? Can we measure our baseline in headcount, costs, locations, activities and skills? [M]
- What outcome measures indicate that a new design is needed? [M]
- Are we sufficiently aligned on our mission and values? [M – or by qualitative observation]
- What should our new design be? To what extent can we quantify our to-be state in terms including headcount, costs, locations, activities, skills? [M] To what extent must we leave our to-be state under-determined? [qualitative assessment]
- What measurable gaps do we need to close? [M]
- What can we measure during the organisation design change process to make sure we are on track? [M]
- What can we measure as an outcome of the organisation design change to confirm whether or not it has delivered? [M]
Most organisation design work has to start with at least a sense of ‘where we are now.’ And that is certainly achievable: measurements are available inside most organisations on financial performance, customer data, workforce profile and so on, though with varying degrees of consistency, reliability and cleanliness. This means that many organisation design initiatives start with a house-cleaning exercise on the organisational data.
The challenge of getting the baseline right and managing data of mixed quality from multiple measures and sources often overwhelms the good intentions of the organisation design team. Yet people have always had to make decisions and move forward, sometimes knowing the data to be incomplete or imperfect, and sometimes knowing that the future is still to be shaped, so that data cannot yet exist. A key skill for an organisation designer, then, is knowing when to declare the data ‘complete enough’ to go forward.
As Carlo Rovelli, a physicist, says, ‘In this uncertain world, it is foolish to ask for absolute certainty. Whoever boasts of being certain is usually the least reliable. But this doesn’t mean either that we are in the dark. Between certainty and complete uncertainty there is a precious intermediate space – and it is in this intermediate space that our lives and our decisions unfold.’
Quantitative data derived from measurement can support decision making, if their limitations are accepted and factored in. Organisations are complex systems in a constant state of flux, of creative evolution and not in laboratory-controlled conditions or market equilibrium. Thus, many quantitative organisational measures are only indicators at a point in time and must be interpreted in their own context.
This is especially true of data coming from surveys. For example, Gallup, an American analytics and advisory company that tracks employee engagement, found that in early May 2020 employee engagement in the U.S. rose to a new high. One month later came the most significant drop in engagement that Gallup had recorded in its history of tracking, dating back to 2000. Gallup attributed the drop to various combined stressors, including the ongoing pandemic and related restrictions, mounting political tension as the election neared, and the killing of George Floyd in late May, with the subsequent protests and societal unrest surrounding racial tensions.
By the time of the next measurement of employee engagement, the context, or attitudes, may well have changed, and the sets of measures are not directly comparable. The interesting point about a change in score is not the score itself but why it has changed – what are the possible (multiple) reasons for it, and what does the change mean? The score is only relevant as a prompt for questions and discussion.
The blog image shows a single score for which countries have the most engaged workers. However, the relationship between engagement and, for example, country productivity is not made clear, raising the question of whether countries lower on the ‘average engagement’ chart – such as Singapore, Germany and Japan – have grown less strongly in the last 50 years than countries higher on the list, such as the USA, France and Canada. This leads to the further question of what engagement means:
- Is it about feeling happy at work?
- Is it about being absorbed in what the work is?
- Is it about being energized by the work?
- Is it about having work that is meaningful?
- Is it about improving productivity?
- Or is it all of the above? 
As this example shows, a focus on the number doesn’t tell the full story. Additionally, organisations often present a single ‘score’ on, say, engagement, with any outliers removed from the measures contributing to it. But there is always the possibility that one of the outliers is the ‘black swan’ – the rare event that brings large consequences and cannot be ignored.
For greater impact, an organisation should look into the detail behind the overall metric – in this example, ‘engagement’ – in order to understand its driving factors: leadership, strong mission, alignment to company values, fairness of pay, diversity, line manager impact and so on. These can help to shape organisation design decisions in a way that a single overall score cannot.
In any event, no particular score should determine the next action – the numbers cannot ‘drive’ an organisation’s design. Using quantitative measures as general indicators and sources of feedback to spur reflection is sensible, but analysing and interpreting the data depends on individual perspectives. Statistician Nate Silver reinforces the point, saying, ‘The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.’
Qualitative data is useful to add depth and richness to the quantitative data – to give the human experience and surrounding story, to understand alignment around the organisation’s vision and goals, and the perceived effectiveness of the existing system. Focus groups, interviews, listening circles and similar are worth including in the portfolio of measurement tools.
To measure design effectiveness, select a small number of measures (employee engagement might be one of them, or not) that together help tell a whole-organisation story. Use data sources and approaches that reflect the interdependencies of the organisational elements and their impact on human performance and well-being.
Reflective question: What are the limitations and strengths of quantitative measures in assessing organisation design effectiveness?