Algorithmic organization design: let’s not be greedy

This week I started reading Aurora, a sci-fi novel by Kim Stanley Robinson. I hadn't read anything he'd written before, but I was intrigued by an interview in which he talked about positive futures. His view is that 'the stories we tell have the power to shape our future'.

That struck me as relevant, because we (more or less) confidently tell ourselves the story that if we (re)design the organization we will get positive outcomes – otherwise why design or redesign?

In between reading Aurora, I am writing a chapter on evaluating organization designs for my forthcoming book. So I'm asking myself how we know what our design work is bringing, has brought, or will bring in terms of positive futures: efficiency gains, quality improvements, problems solved, or opportunities seized. Can we actually ascribe changes in any metrics we are tracking to something we've done in our design work?

This is the story told in Aurora. People design a spaceship to transport citizens from the now-unliveable-on Earth to the likely-to-be-liveable-on Tau Ceti, aiming to get there 170 years after launch.

As the years progress, the metrics being tracked start to tell the astronauts that something is wrong – but the astronauts don't know how to identify the root cause of the problem. Is it a flaw in the physical design of the spaceship, something in the systems and processes, something in the way the astronauts themselves are evolving, or normal wear and degradation? And so they don't know how to adapt the design. A reviewer remarked:

It [Aurora] is, for one thing, superbly insightful on the way entropy actually works in complex systems; how things break down or degrade, the stubbornness of the cosmos, the sod's-lawishness of machines. … The fixes are desperately costly of labour and resources. The ship is increasingly subjected to bodge jobs, and so liable to further breakdowns.

In Aurora, the ship's leading scientist urges the AI system to give her the information needed to help her solve this problem. She asks for it as a narrative. But the AI system gets stuck, questioning itself: 'How to decide how to sequence information in a narrative account? Many elements in a complex situation are simultaneously relevant. An unsolvable problem: sentences linear, reality synchronous. Both however are temporal.'

The scientist urges the AI to 'get to the point' and gets the response: 'There are many points. How to decide what's important? Need prioritising algorithm'. I almost burst out laughing; this is so akin to my world. If only I had the prioritising algorithm to hand.

Unfortunately, there are no prioritising algorithms, and the AI muses that 'in the absence of a good or even adequate algorithm, one is forced to operate using a greedy algorithm, bad though it may be.' In Aurora's case the AI has been programmed with enough intelligence not to want to go down the greedy algorithm route. Organization designers take note: we, too, should not want to select greedy algorithm approaches:

Greedy algorithms are simple and straightforward. They are short-sighted in their approach in the sense that they take decisions on the basis of information at hand without worrying about the effect these decisions may have in the future. They are easy to invent, easy to implement and most of the time quite efficient. Many problems cannot be solved correctly by the greedy approach.

The AI warns of the danger of using greedy algorithms as they are 'known to be capable of choosing, or even be especially prone to choosing, the unique worst possible plan when faced with certain kinds of problems.'
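To make that concrete, here is a minimal sketch of my own (an illustration, not anything from the novel) using the classic coin-change example: a greedy algorithm that always grabs the largest coin available, deciding only on the information at hand, ends up with a worse plan than an approach that considers the consequences of each choice.

```python
# Minimal sketch (my own illustration): greedy choice vs. an optimal plan
# when making change with coin denominations 1, 3 and 4.

def greedy_change(amount, coins=(4, 3, 1)):
    """Always take the largest coin that fits: decisions based only on information at hand."""
    used = []
    for coin in coins:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def optimal_change(amount, coins=(4, 3, 1)):
    """Dynamic programming: considers the effect of each choice on what comes later."""
    best = {0: []}
    for a in range(1, amount + 1):
        candidates = [best[a - c] + [c] for c in coins if a - c in best]
        if candidates:
            best[a] = min(candidates, key=len)
    return best.get(amount)

print(greedy_change(6))   # [4, 1, 1] -- three coins
print(optimal_change(6))  # [3, 3]    -- two coins; the greedy plan was worse
```

The greedy version is quicker to write and often good enough, but for this target amount it locks in an early choice (take the 4) that it can never revisit, which is exactly the short-sightedness the AI is warning about.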

Exactly. If only we could inject more complexity recognition into organization design work. It's too easy to focus on one simple and not necessarily relevant element – e.g. the organization chart or a customer satisfaction score – without acknowledging or understanding the complex and simultaneously relevant elements that the metric doesn't come even close to representing.

We look at various organizational metrics – see, for instance, the list The 75 KPIs Every Manager Needs To Know – and, as these change over time, we don't know whether our design interventions are causing the change, whether our designs are solving current problems, or whether they are creating future problems, because we can't identify which elements are 'simultaneously relevant' to look at or address, and because the metrics can be interpreted in multiple ways.

Not only that, the metrics are not the whole story. John W. Gardner explains:
'What does the information processing system filter out? It filters out all sensory impressions not readily expressed in words and numbers. It filters out emotions, feeling, sentiment, mood and almost all of the irrational nuances of human situations. It filters out those intuitive judgements that are just below the level of consciousness. So the picture of reality that sifts to the top of our … organizations … is sometimes a dangerous mismatch with the real world. We suffer the consequences when we run head on into situations that cannot be understood except in terms of those elements that have been filtered out.'

This thinking didn't get me very far in writing the chapter on evaluating organization designs – but I am enjoying the sci-fi novel, so I take comfort in the value of this displacement activity (though what metric would I measure that by?).

How do you avoid the dangers of greedy algorithm thinking in your design work – doing it or evaluating it? Let me know.