To Kiran: Thanks for your email about how to evaluate the effectiveness of organisation design and development work. I take it that by 'evaluate' you mean passing a judgement on whether the design/development activity has delivered whatever it was supposed to deliver? Evaluation is something I've never really cracked because it is virtually impossible to show any cause/effect connection between organisation design work and what then happens in the organisation.
Even if you start a project with measures of success, critical success factors, and clear objectives that the work is supposed to deliver, the time lapse, context changes, and the very fact of intervening all mean that what you judge at the 'end' may bear little resemblance to what you thought, at the beginning, you would be judging.
We have just had a research project done for us on evaluating our work. The researcher made some excellent and thought-provoking points. As she said, 'It is critical to understand that OD & D is not just about org charts in terms of hierarchy and reporting lines but also about the relationships and interactions of work and people throughout the organisation and across any partner organisations'.
This implies that what you choose to evaluate is 'a political process' which depends on 'who is looking' at the evaluation: a Head of Finance might judge effective organisation design in a very different way from a Head of Research and Development, or the Head of Customer Experience.
You may be able to get over this 'who are you evaluating for' issue to some extent by agreeing evaluation measures at the start of the intervention and beginning the evaluation right then. But this raises a different risk: in focusing on a specific area for evaluation – for example, streamlining decision-making processes – you may miss opportunities for design/development work that could foster other benefits.
Because OD & D is, consciously or unconsciously, 'a set of values in practice', any evaluation method should recognise that. Unfortunately, however, 'the Holy Grail of measures of OD & D practice and intervention is based on the world of scientific materialism'. The researcher's view is that although 'Everyone looks for hard measures to prove effectiveness … logic and quantity are inappropriate devices for describing people and their interactions'.
As an example, take streamlining decision-making processes. You may be able to show that you have speeded up the process, but it doesn't follow that the decisions made are any better (indeed, they may be worse) – and you wouldn't know that if you were judging OD & D effectiveness only on speed of decision making.
She suggested that OD & D evaluations 'need to be directed at the total system'. This says to me that rather than only 'evaluating' organisation design and development work, in academic terms, as a summative assessment, it would also be sensible to seek feedback on the work as a formative assessment; to improve organisation effectiveness we need both summative and formative evaluation. (Evaluation against pre-determined criteria is essentially summative assessment – in an academic setting it would be an end-test. Formative assessment, on the other hand, is about gaining information in order to provide guidance on performance improvement.)
Formative assessment is less about judgement and more about information that will encourage improvement. Often we confuse the two, as Robert Poynton discusses in a useful blog.
So in organisation design and development work, seeking ongoing feedback on system performance during the course of the work is just as important as doing end-point evaluation when it 'closes'. Be aware, however, that neither feedback nor evaluation is objective – both are open to multiple interpretations.
You may be able to temper this by approaching organisation design and development via an adaptation of agile's 'test and learn' principles. (See how this has been adapted to marketing here.)
What's your view on measuring the success of organisation design and development work? Let me know.