Designing for emergence

I got a request last week 'to explore developing and delivering a Business Change function: how it might be structured, and its milestones and deliverables'. It came at the same point as the question, from Peter Murchland, 'can we or how can we design for emergence'?

At first glance, they seemed to be opposing notions. In the Donald Rumsfeld spectrum of 'There are known knowns, there are known unknowns, and there are unknown unknowns', business change typology seems more comfortable at the known knowns end, a kind of reductionist view of the world, and emergence typology at the unknown unknowns end, a kind of holistic view of the world.

As I was musing on the two I remembered a picture I once had on my wall. It was a drawing by French illustrator Jean Olivier Heron, called 'Comment naissent les bateaux' (How boats are born). It showed a yawl gradually emerging through a sequence of drawings that started with a butterfly-winged mermaid hatching. (See it here).

This sparked the thought that maybe you could develop a business change function that designs, if not emergence, then the conditions for emergence. The originating business change function would incubate the principles and conditions that enable emergence. Peggy Holman describes two types of emergence.

  • Weak emergence describes new properties arising in a system. … In weak emergence, rules or principles act as the authority, providing context for the system to function. In effect, they eliminate the need for someone in charge. Road systems are a simple example.
  • Strong emergence occurs when a novel form arises that was completely unpredictable. We could not have guessed its properties by understanding what came before…. Nor can we trace its roots from its components or their interactions.

The emergence of the yawl from the butterfly-mermaid would be completely unpredictable if we were just looking at the start point and trying to predict an outcome from what we know of butterflies and mermaids. It would be a strong emergence. Except that in the picture you can trace the emergence of the yawl back to the butterfly/mermaid. So maybe it's a weak emergence?

I'm interested in designing the conditions for emergence, because I'd like to see a business change function that is focused on helping organizational members manage uncertainty, be curious about what could happen, be willing to participate in experimentation, be happy to continuously adapt and learn new things, be skilled in meeting things that come in from unexpected directions, and be capable of continuously renewing the organization in a variety of ways. That roughly equates to being able to handle emergence rather than comply with processes.

This does not mean abandoning all programme management protocols but it may mean expressing them in a different form e.g. the concepts of 'deliverables' and 'milestones' could change. There might not be a detailed plan. There might, instead, be a trust that 'business change' will emerge and continue to emerge from an agreed direction and some principles.

The story of Miles Davis's 'Kind of Blue' illustrates this. No doubt there was a contract and a 'programme' intent on getting an album of a certain quality produced by a specific point in time, with various risks managed by processes like insurance. But Davis approached the task within the programme from an emergent perspective:

'As was Davis's penchant, he called for almost no rehearsal and the musicians had little idea what they were to record. As described in the original liner notes by pianist Bill Evans, Davis had only given the band sketches of scales and melody lines on which to improvise. Once the musicians were assembled, Davis gave brief instructions for each piece and then set to taping the sextet in studio.'

The album is among the most successful jazz albums ever recorded.

A similar interplay of programme protocols and emergent thinking is hinted at in Peter Corning's article (referenced in Holman's work), where he closes with the point that both reductionism and holism are essential to a full understanding of living systems. So it may be in attempting business change – it has to be done with both reductionist and systems thinking.

Leandro Herrero offers 12 simple rules of social change that I think are useful discussion starters for those developing a business change function that would enable continuously changing the business (the weak emergence) and working with the emergence of the unexpected (the strong emergence). One that I think would cause debate is 'Readiness is a red herring'; another is 'Recalibrate all the time. Stay in beta.'

Coincidentally as I was writing this, I read an example about finding a treatment for multiple sclerosis that opens: 'Experiments that go according to plan can be useful. But the biggest scientific advances often emerge from those that do not.' It's a good story of designing the conditions that enable emergence.

Do you think you can develop a business change function that either designs for emergence, or designs the conditions for emergence? Let me know.

NOTE: This blog also appears on LinkedIn

Digital ecosystems – any thoughts?

Jim sent me an email last week saying: 'I am doing a webinar on ecosystems and with all the hoopla on digital ecosystems in HBR recently I think there is a possible org design perspective on this.'

He went on to mention alliance management functions, ecosystems of the future, and centralized/decentralized models, finishing with the challenge 'Any thoughts?' So, here goes:

Beginning with the 'eco'. 'Ecosystem' has recently entered the common language of business – to such an extent that it's in the top three management buzzwords of 2016.

Before it was a business word it was an ecologist's word, and in that literature there are many definitions of what ecosystems 'are'. A simple definition, from National Geographic, is: 'An ecosystem is a geographic area where plants, animals, and other organisms, as well as weather and landscape, work together to form a bubble of life. Ecosystems contain biotic, or living, parts, as well as abiotic factors, or non-living parts. Every factor in an ecosystem depends on every other factor, either directly or indirectly.'

More detailed ecosystem definitions include concepts of pattern formation, self-organization, coevolution and co-existence between organisms and their environments, interaction across multiple scales of space, time, and complexity, and feedback loops.

Similarly, there are various definitions of digital ecosystem. Gartner's is: 'A digital ecosystem is an interdependent group of enterprises, people and/or things that share standardized digital platforms for a mutually beneficial purpose (such as commercial gain, innovation or common interest). Digital ecosystems enable you to interact with customers, partners, adjacent industries – even your competition.' This definition is closest to the business ecosystem discussed in an article on three types of economic ecosystems (business, innovation, and knowledge).

Having got this far I paused. Earlier in the week I'd watched an Adam Grant TED talk in which he asserts that procrastination is an aid to thinking. That helped me feel ok watching a programme on Scottish and Icelandic seabirds around the Shiant Isles. It turned out to be about ecosystems. And all is not well. 'Overfishing, global warming disturbing the ocean food webs, pollution and the introduction of rodents and other animals to breeding places are ushering in an apocalypse. Some scientists estimate that by the end of this century most of the world's seabirds will have disappeared'. The Shiant Isles seabirds are part of this story.

The players (in the jargon, 'agents') and stakeholders in this seabird ecosystem are many, and interconnected. They have differing power positions and differing interests in it. Most of the players are voiceless and many are powerless to protect themselves. It's an unsettling narrative that we should bear in mind when designing our digital ecosystems.

Having said that, there's a lot about designing digital ecosystems that makes it sound do-able and relatively easy, because basically you're designing a digital platform that you own and others use. There are some commonly cited leaders in the field: Danske Bank, Amazon, Philips Healthcare, and Fiat are among them. They have reportedly 'designed' digital ecosystems, each based on a common platform.

But let's not get too excited or jump into designing them without pausing for thought – here are my five:

  1. Digital ecosystems are complex interacting and interlocking networks. In designing one, how do we answer questions like: what are its boundaries? Whose perspective are we designing it from, and for whom? Where will the power lie in it? Is it controllable? If you apply these questions to the seabird analogy you start to see the complexity of the interactions.

  2. The notion that digital ecosystems are 'beneficial' or add 'value' – as the definition says – is an assumption worth challenging. (Think seabirds again). Is it possible to design a digital ecosystem that is beneficial to all participants? Amazon, Google, Uber, and Airbnb are examples of companies known for their digital ecosystems, yet some of the participants/agents in them – governments, regulators, and in some cases their workforce – are suggesting they are not beneficial. In designing digital ecosystems, what discussions are we having about the value and benefit they bring?

  3. Having 'designed' the digital platform, it is not possible to control its ecosystem. Once functioning, it is continually adapting at a cellular/local level. In the seabird example, small colonies of gannets adapted their behaviours. In organizational terms, call centre agents, for example, develop work-arounds and adapt their behaviour in response to something happening in their environment (IT outages, new policy, etc.). In digital ecosystems people hack in, developers tweak bits, or interfaces fail … (See The Digital Ecosystem Paradox – Learning to Move to Better Digital Design Outcomes for more on this).

  4. Designing beyond the digital platform towards an ecosystem involves maintaining the ecosystem's capability to thrive over time. It involves long-term pattern watching using AI, big data, and extremely good interpretive analytics. If we see failures, or the equivalent of 'poor health', then it means trying out thoughtful adjustments. (Remembering that any adjustment will have consequences elsewhere in the ecosystem.) Organizational leaders tend to be poor at watching patterns over time. They are more interested in 'snapshots', or events with causes, so pattern watching may need different leadership skills.

  5. The idea of 'pattern watching' supposes that we know the boundaries within which we are watching. In seabird terms, are we looking just at the puffin ecosystem and its agents, or the seabird population of the Shiants, or the wider seabird population, or …? Does the pattern within the boundary matter more, from a design perspective, than the interactions and overlaps of patterns across ecosystem boundaries? Perhaps there are numerous citizens who are simultaneously a participant/agent/customer in Danske Bank, Amazon, Philips Healthcare, and Fiat – what useful patterns would be revealed looking across these individual ecosystems that aren't revealed by looking within them?
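Point 4's contrast between 'snapshots' and long-term pattern watching can be sketched minimally in code. A hedged illustration only: the metric, its values, and the window size below are invented for the purpose, not taken from any real ecosystem data.

```python
# A rolling mean over a noisy 'health' metric reveals a slow decline
# that any single snapshot, or month-to-month comparison, could miss.

def rolling_mean(series, window):
    """Average each consecutive run of `window` readings."""
    return [round(sum(series[i:i + window]) / window, 2)
            for i in range(len(series) - window + 1)]

# Twelve monthly 'ecosystem health' readings: bouncy, but slowly degrading.
health = [82, 85, 80, 83, 78, 81, 76, 79, 74, 77, 72, 75]

trend = rolling_mean(health, 4)
print(trend)  # each value lower than the last, despite the noise

# A pattern-watcher acts on the trend's direction, not on one reading.
declining = all(later < earlier for earlier, later in zip(trend, trend[1:]))
print(declining)
```

Note that consecutive raw readings move both up and down; only the smoothed series makes the decline visible, which is the point about needing to watch patterns over time rather than events.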

How would you respond to Jim's email? Let me know.

NOTE: this blog is also on LinkedIn

Algorithmic organization design: let’s not be greedy

This week I started reading Aurora, a sci-fi novel by Kim Stanley Robinson. I hadn't read anything he'd written before but I got intrigued when I read an interview with him in which he was talking about positive futures. His view is that 'the stories we tell have the power to shape our future'.

That struck me as relevant as we (more or less) confidently tell ourselves the story that if we (re)design the organization then we will get positive outcomes – otherwise why design or redesign?

In between reading Aurora, I am writing a chapter on evaluating organisation designs for my forthcoming book. So, I'm asking myself how do we know what our design work is bringing, has brought, or will bring, in terms of positive futures such as efficiency gains, quality improvements, problems solved, or opportunities seized. Can we actually ascribe changes in any metrics we are tracking to something we've done in our design work?

It's the story told in Aurora. People design a spaceship to transport citizens from the now-unliveable-on Earth to likely-to-be-liveable-on Tau Ceti, aiming to get there 170 years after launch.

As the years progress, the metrics being tracked start to tell the astronauts that something is wrong – but the astronauts don't know how to find out what is the root cause of the problem – a flaw in the physical design of the space ship, something in the systems and processes, something in the way the astronauts are evolving themselves, normal wear and degradation? And thus, they don't know how to adapt the design. A reviewer remarked:

It [Aurora] is, for one thing, superbly insightful on the way entropy actually works in complex systems; how things break down or degrade, the stubbornness of the cosmos, the sod's-lawishness of machines. … The fixes are desperately costly of labour and resources. The ship is increasingly subjected to bodge jobs, and so liable to further breakdowns.

In Aurora, the ship's leading scientist is urging the AI system to give her the information needed to help her solve this problem. She asks for it as a narrative. But the AI system is stuck responding, questioning itself: 'How to decide how to sequence information in a narrative account? Many elements in a complex situation are simultaneously relevant. An unsolvable problem: sentences linear, reality synchronous. Both however are temporal.'

The scientist urges the AI to 'get to the point' and gets the response: 'There are many points. How to decide what's important? Need prioritising algorithm'. I almost burst out laughing; this is so akin to my world. If only I had the prioritising algorithm to hand.

Unfortunately, there are no prioritising algorithms, and the AI muses that 'in the absence of a good or even adequate algorithm, one is forced to operate using a greedy algorithm, bad though it may be'. In the Aurora case, the AI has been programmed to have enough intelligence not to want to go down the greedy algorithm route. Organization designers take note: we also don't want to, and should not want to, select greedy algorithm approaches:

Greedy algorithms are simple and straightforward. They are short-sighted in their approach in the sense that they take decisions on the basis of information at hand without worrying about the effect these decisions may have in the future. They are easy to invent, easy to implement and most of the time quite efficient. Many problems cannot be solved correctly by the greedy approach.
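That last caveat is easy to demonstrate. Here is a minimal sketch (my own toy example, not anything from the novel): making change for 6 with coin denominations of 1, 3 and 4, where the greedy 'take the largest coin that fits' rule loses to a method that weighs the consequences of each choice.

```python
def greedy_change(amount, coins):
    """Repeatedly take the largest coin that fits: decisions made on the
    information at hand, with no regard for their effect later on."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

def optimal_change(amount, coins):
    """Dynamic programming: build up the shortest coin list for every
    sub-amount, i.e. consider the futures before committing."""
    best = {0: []}
    for a in range(1, amount + 1):
        candidates = [best[a - c] + [c] for c in coins if a - c in best]
        if candidates:
            best[a] = min(candidates, key=len)
    return best.get(amount)

coins = [1, 3, 4]
print(greedy_change(6, coins))   # [4, 1, 1] -- three coins
print(optimal_change(6, coins))  # [3, 3]    -- two coins
```

The greedy rule grabs the 4 because it looks best now, and is then forced into two clean-up 1s; the look-ahead method finds the shorter plan. With ordinary currency denominations the greedy rule happens to work, which is exactly why its failure modes are easy to overlook.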

The AI warns of the danger of using greedy algorithms as they are 'known to be capable of choosing, or even be especially prone to choosing, the unique worst possible plan when faced with certain kinds of problems.'

Exactly. If only we could inject more complexity recognition into organization design work. It's too easy to focus on one simple and not necessarily relevant element, e.g. the organization chart or a customer satisfaction score, without acknowledging or understanding the complex and simultaneously relevant elements that the metric doesn't come even close to representing.

We look at various organizational metrics – see a list of The 75 KPIs Every Manager Needs To Know – and as these change over time, don't know whether our design interventions are causing change, whether our designs are solving current problems and/or if they are creating future problems because we can't identify what elements are 'simultaneously relevant' to look at/address, and the metrics can be interpreted in multiple ways.

Not only that, the metrics are not the whole story, as John W Gardner explains:
'What does the information processing system filter out? It filters out all sensory impressions not readily expressed in words and numbers. It filters out emotions, feeling, sentiment, mood and almost all of the irrational nuances of human situations. It filters out those intuitive judgements that are just below the level of consciousness. So the picture of reality that sifts to the top of our … organizations … is sometimes a dangerous mismatch with the real world. We suffer the consequences when we run head on into situations that cannot be understood except in terms of those elements that have been filtered out.'

This thinking didn't get me very far in writing the chapter on evaluating organization designs – but I am enjoying the sci-fi novel, so take comfort in the value of this displacement activity. (ED NOTE: What is this measured by ???)

How do you avoid the dangers of greedy algorithm thinking in your design work – doing it or evaluating it? Let me know.

Empowering: is it a control device?

Someone sent me a note asking 'I wonder if you are interested in writing a blog for the resource pack for the Culture tool? It is missing a few stories about how others 'do things'. I thought of you for the bit on empowering.'

The Culture Tool is a discussion diagnostic where teams talk about various questions and then rate themselves. When all the questions have been rated a radar chart is generated and the group then decides what, if anything, they want to do to change the picture. The question on empowerment reads: 'To what extent do you feel your team/ group are confident/able to empower people?'
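The flow described above, ratings per question aggregated into a radar chart that the group then discusses, can be sketched in a few lines. To be clear, the question names, the team size, and the 1-to-5 scale below are my assumptions for illustration, not the Culture Tool's actual specification.

```python
from statistics import mean

# One list of ratings per discussion question, one rating per team member.
ratings = {
    "empowering":    [3, 4, 2, 3],
    "collaboration": [4, 4, 5, 4],
    "learning":      [2, 3, 2, 2],
}

# Each question's average becomes one axis of the radar chart.
radar_axes = {question: round(mean(scores), 1)
              for question, scores in ratings.items()}

# The group might choose to discuss the lowest-scoring axes first.
discussion_order = sorted(radar_axes, key=radar_axes.get)

print(radar_axes)
print(discussion_order)
```

The interesting part of the tool is of course the conversation, not the arithmetic, but the sketch shows how little machinery sits between 'teams rate themselves' and 'a radar chart is generated'.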

Thinking about this, it seems to me that the question behind that question is about the relationship between power/control/autonomy and empowering. The nature of the statement 'able to empower people' suggests that empowerment is a gift given by those with power to those without power, and if that is the case then the gift could be withdrawn as part of a control system.

I found several discussions on this as I started to explore my line of thinking. For example, an HBR article by Robert Simons, Control in an Age of Empowerment. He wrote it in 1995, and although I'm not keen on the mechanistic language, I found he has interesting and topical ideas around types of control 'levers': diagnostic, beliefs, boundary and interactive.

He says it is the 'Beliefs systems [that] empower individuals and encourage them to search for new opportunities. They communicate core values and inspire all participants to commit to the organization's purpose', but in Simons' view participants (employees) are only empowered within 'boundaries [that are] in modern organizations, embedded in standards of ethical behavior and codes of conduct, and are invariably written in terms of activities that are off-limits. … Telling employees what not to do allows innovation, but within clearly defined limits.'

Another (1998) discussion talks about two types of empowerment: structural and personal. Structural empowerment deals with authority, expressed as 'pushing the decision making down to the lowest level'; personal empowerment is the ability of the individual to develop and apply autonomous* decisions and behaviours.

The relationship between the two suggests that 'the only power of hierarchy is to limit it (autonomy) not to give it… Management can prevent but cannot grant power … organizations and societies often create environments which takes away the potential choices an individual can have.' It is Simons' boundary discussion from a different angle.

So 'empowerment' is the extent to which organizational boundaries enable the ability of groups and individuals to make autonomous decisions and choices. In other words, autonomy is organizationally bounded. (Dan Pink, author of Drive: the surprising truth about what motivates us, thinks 'empowerment' is a control mechanism somewhere on the spectrum 'between meaningless and insidious – fundamentally flawed').

So, I am back where I started with the thought that where notions of empowerment in organizations seem to get tangled is in the tensions of power/control/autonomy. We speak about 'empowering' but it's always within the context of the organisational structures, processes, and risk appetite.

However, if we really do want people to assume more control, ask forgiveness not permission, and believe that managers would like staff to make sensible decisions and choices off their own bat, which many organizations say they do want, then what needs to happen? A prescription, or copying from how others do things, won't work because different contexts will respond differently to it. But maybe some questions could stimulate a discussion.

  • Why is 'empowerment' being seen as desirable for the organization? (What does it look and feel like in practice?)
  • What is the outcome leaders are hoping for when they say they want to 'empower' employees?
  • Who does the 'empowering' to whom?
  • What are the boundaries of 'empowerment' and how does it relate to 'autonomy'?
  • What are the control tools and devices that reward or punish too much or too little evidence of 'empowerment'? And are they consistently applied?
  • How will people know when they are 'empowered'?

If we do want to know how others do things, Spotify, a digital music service, is lauded as an example of how to 'empower' and how 'autonomy' works in their organization. They talk about 'autonomous teams – fully empowered to fulfill their mission'. In one of the many, many articles about Spotify we learn how Spotify 'balances employee autonomy and accountability, balances freedom to innovate versus following proven routines, and balances alignment with control.'

This HBR piece is worth reading because it offers insight into how one organisation grapples with the relationships between empowering/power/control/autonomy. It confirmed my view that empowering is a control device, and that it is how it is used that makes the difference between enabling degrees of autonomy and being, in Dan Pink's words, fundamentally flawed.

Do you think empowering is a control device? Let me know.

*Autonomy is "the ability to make informed choices about what should be done and how to go about doing it. This entails being able to formulate aims and beliefs about how to achieve them, along with the ability to evaluate the success of those beliefs in the light of empirical evidence"

Morals and ethics in design

My daughter is expecting a baby. I remember when I was expecting her, I was very taken with a Louis MacNeice poem Prayer Before Birth. I read it again last week. The stanza

I am not yet born; O fill me
With strength against those who would freeze my
humanity, would dragoon me into a lethal automaton,
would make me a cog in a machine ..

made me shiver. We're dangerously close to designing a future where if we are not exactly cogs in machines, machines may increasingly be cogs in us: for example, we already have brain implants used to help manage Parkinson's disease, and an artificial retina to help people with retinitis pigmentosa.

In this rapidly developing field of designing human performance enhancement (HPE), the 'convergence of nano-technology, biotechnology, information technology and cognitive science is creating a set of powerful tools that have the potential to significantly enhance human performance as well as transform society, science, economics and human evolution.' (James Canton)

Advanced technologies like these change the relationship between humans and the way we interact with the world, (described well in this short video). Read Never Let Me Go or some of the many other dystopian sci-fi novels that describe the various worlds our yet-to-be-borns will live in. All chillingly touch on moral and ethical issues that we are still far from getting to grips with. (If dystopia is not for you there is a list of utopian sci-fi novels too).

It's not just HPE that has moral and ethical implications for society. Anything and everything designed does. Each time we design something – product, service, organisation, etc. – it comes, as Sebastian Deterding says, 'with certain values embedded in it. And we can question these values. We can question: Is it a good thing that all of us continuously self-optimize ourselves to fit better into that society?'

I had this in mind when I got an email asking me to 'share your insights on the topic "Designing for the future: trends we need to consider now"'.

In my view, the design trend that is most pressing for us to consider now is that which explores, debates, and confronts the moral and ethical dimensions inherent in both our designs, and in our methods and approaches to designing.

Some say that 'ethics and mores are being established on the fly' rather than through considered societal discussion. But I see a distinctly emerging trend suggesting that morals and ethics are rising up the design agenda.

Look, for example, at the Reilly Centre's annual top 10 list of ethical dilemmas and policy issues in science and technology. They span many design fields; this year's list includes brain hacking, automated politics, and the self-healing body. The Centre invites people to participate in the moral and ethical debates on these and offers tools and resources to generate discussion.

Or look at the relatively recently established (2003) Oxford University Uehiro Centre for Practical Ethics with its mission to help guide people to make good choices about 'novel problems and challenges, for which our traditional institutions and norms were not developed'.

Designers are beginning to acknowledge that designs are 'moral mediators' and to ask what we can do with that knowledge as designers. Educator Peter-Paul Verbeek suggests three possibilities:

  • You could simply anticipate the mediations that are involved when you design … , just to make sure that nothing might happen that you would not want to happen.
  • You could systematically assess the mediations by going through all the potential mediating effects, and do an ethical assessment of them.
  • You could actually really try to design (moral) mediations into the design.

These three points are relevant to organization designers. We could use them to help us determine how far we wanted to go in designing moral and ethical organizations.

Suppose we wanted to consciously design organizations for 'good work' i.e. work that is "fair and decent, with scope for development and fulfilment". What moral and ethical dimensions would we have to consider? They would have to include the discussions of the automation of work, the organizational structures that inhibit or foster ethical and moral behaviours, the use of HPE in the workforce, the use of big data, surveillance and monitoring of employees, and so on. How far would we go in designing good work amidst the tensions and competing voices of stakeholder value, efficiency, and customer expectations?

We haven't come anywhere near to addressing the moral and ethical dimensions that the accelerating technologies, combined with new designs of societal and organizational interactions that the technologies facilitate, could have on our future. But we can, and must, participate in the trending groundswell of discussion on this. Without doing so, we will not get to designs that yield a future where we can provide those not yet born:

With water to dandle me, grass to grow for me, trees to talk
to me, sky to sing to me, birds and a white light
in the back of my mind to guide me.

What's your view on the moral and ethical dimensions of organization design? Let me know.

Note: "This blog post is a part of Design Blogger Competition organized by CGTrader"