Evaluating organization design work

(Each of my blogs in August is an edited extract from my book Organisation Design: The Practitioner’s Guide. This is the third – from Chapter 8.)

In organization design work there is often little appetite to evaluate whether the work has led to performance improvements. But there are many benefits to doing an evaluation, for example:

  • It focuses the OD work on the context of the business strategy, because it forces people to answer questions like ‘Why did we do this?’, ‘How will we achieve a return on the investment in doing it?’ and so on.
  • It defines improvement in both qualitative and quantitative terms, and ties it closely to achieving business objectives using measures that feed into the overall organization performance measures.
  • It puts the design work in a timeframe and helps the client see what outcomes might be expected as ‘quick wins’ and what results will take longer to achieve and measure.
  • It places accountability for improvement in the hands of the line manager – which usually means a close eye is kept on progress and quick decisions are made if called for.
  • It fosters sharing of learning on successes and failures in OD work.
  • It enables issues to be identified and action taken as needed.
  • It identifies where there are opportunities to take things further and deliver greater benefit than originally thought.
  • It suggests routes to building organizational resilience: that is, ‘the ability of an organization to anticipate, prepare for, and respond and adapt to incremental change and sudden disruptions in order to survive and prosper’ (Denyer, 2017).
  • It assesses whether and how the design is solving organizational problems and adding value as it does so.

Assuming agreement to conduct an evaluation, follow these seven steps:

Step 1 – Agree the evaluation need

  • Work with the business unit to help define what success would look like, not just at this point but into the next year or so. Because the context is changing all the time, there is a need to judge whether the design is on the right course to meet the goals set at the start, and if it will continue to do so as new goals emerge – or, if this is looking doubtful, what action to take.
  • Make sure that the sponsoring manager considers how the new design contributes to the overall organizational strategy and goals – and what other interrelated factors need to be considered when making OD changes in their area. This is important because it helps people remember that their piece of work is one element in a whole system. Clients often forget that what they do in their part of the organization is interdependent with other parts.

Step 2 – Agree who the evaluation is for and why

  • Discuss the reasons for doing the evaluation: as well as determining whether the new design is delivering the intended outcomes, there may be a need to decide something, seize an opportunity, learn something new or assess the return on OD investment.
  • Agree who the audiences are for the information from the evaluation (e.g., bankers, funders, board, management, staff, customers, clients, etc.). This will help decide what evaluation tools to use and how to present the information from the evaluation process.
  • Consider the kinds of information appropriate to the intended audiences. For example, is it information to help them make a decision, learn something new or assess return on investment?
  • Encourage people to question the metrics they are currently tracking: are they inputs, outputs or outcomes? Quite often effort goes into measuring the wrong things, or into measures that encourage perverse behaviours – for example, counting training courses delivered (an output) rather than whether capability has actually improved (an outcome).

Step 3 – Choose the evaluation methods

Determine which of the three types of evaluation data to collect: quantitative (numbers), qualitative (words and observation), or mixed (numbers, words and observation). The choice depends on the context, as each type of data has advantages and disadvantages, and none is perfect. Any data captured should be valid, current, relevant and reliable.

  • Assess the tools available in the market for design evaluation. Some tools will be better than others for particular jobs.
  • Bear in mind when making evaluation tool choices:
      • From what sources should the information be collected? For example, employees, customers, clients, groups of customers or clients and employees together, or programme documentation?
      • How can that information be collected in a reasonable fashion? Through questionnaires, interviews, examining documentation, observing customers or employees, or conducting focus groups?
      • When is the information needed? By when must it be collected? What resources are available to collect it?
  • Agree, at an organizational level, a ‘basket’ of measures that leaders can pick from so that you will be able to compare one organization design with another. This ensures some consistency across the organization.
  • Ensure you pick measures which can be tracked on an ongoing basis, preferably from before the design work began, through its progress into the new design and beyond. This means thinking carefully about measures that will be appropriate throughout the life cycle of the design.
  • Avoid measuring the same thing in two different ways: review any measures that are already in organizational use (e.g. on leadership, innovation, collaboration, etc.) to check that they are the right ones, and develop measures to fill any gaps.

Step 4 – Agree how the tool or tools will be applied

Remember, almost any tool, quantitative or qualitative, can be applied in a number of ways. For example, the choice of a quantitative survey raises a number of questions: should it be paper-based or web-based? Should it be administered to a sample of the population (what type/size of sample?) or to the whole population? Should it be at one time point or several time points, or should it be a continuous real-time data collection?
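To make the sample-size question concrete, here is a rough illustration (my own sketch, not from the book), using Cochran’s commonly used formula with a finite-population correction. It estimates the minimum sample n from a confidence-level score z, an expected response proportion p, a margin of error e and a population size N:

    n0 = z² × p(1 - p) / e²           (initial estimate)
    n  = n0 / (1 + (n0 - 1) / N)      (finite-population correction)

For example, surveying a business unit of 500 employees at 95 per cent confidence (z = 1.96), assuming p = 0.5 and a 5 per cent margin of error, gives n0 ≈ 384 and a corrected sample of about 218 people.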

Step 5 – Prepare the ground for success

Be aware that there can be unexpected consequences of applying an evaluation tool, as the context is usually complex. For example, deciding to do a skills-level analysis could result in trade union intervention if it were felt that the results of the analysis would be used to select individuals for redundancy. Identify and manage the risks of things going wrong.

Step 6 – Decide who will do the evaluation

Selecting the right people to evaluate the outcomes of the design work involves finding those who are some or all of the following:

  • members of the department/consultancy conducting the review;
  • people with working knowledge of the business area under review and its processes;
  • people with relevant technical knowledge;
  • strategy planners with knowledge of the organization’s business strategy and the OD’s contribution to it;
  • people involved in meeting the objectives of the new design but not directly involved in its design and planning.

Step 7 – Agree how the evaluation findings will be communicated and to whom

Evaluation yields different types of information and knowledge to share with other project teams and with stakeholders. Many large organizations describe themselves as ‘siloed’ and have difficulty learning from their own members. Communicating evaluation findings to the different stakeholder groups, using a variety of communication channels, helps spread good practice and develop common values and consistent approaches.

There are some common problems that may be encountered in evaluating, but these can be minimized by:

  • harmonizing the measurements across business units (preferably in the assessment phase);
  • establishing protocols for capturing and retrieving design work documentation;
  • making formal agreements with departments/BUs to participate in the review process (as part of the business case).

Done systematically, the evaluation will yield actionable information on things that must be addressed to optimize the new organization design.

Do you evaluate your organization design work outcomes?  If so, how?  Let me know.

Image: https://patternedpetals.com/home/2017/12/5/art-through-time