You want to boost productivity by leveraging AI and AI agents, Data Mesh, or advanced analytics (BI-DS-ML). Everybody does.

But that requires a shift in how you manage your operations. It’s never about the tech. It’s always about sociotechnical practices. Obsessing over the ‘technical’ parts while ignoring the ‘socio’ parts and the sociotechnical dynamics is a recipe for disaster in three dangerously easy steps.

  1. Blindly Chase Technology Without Strategy: Adoption of data-driven and AI-boosted ways of working requires managers to design and operate sociotechnical systems.
  2. Underestimate the Sociotechnical Quagmire: The risk of embedding fateful blind spots within these systems borders on certainty.
  3. Stick to Outdated Management Models: Management blind spots, such as short-termism or tech solutionism, can ruin a company.

Management blind spots litter history with cautionary tales

Technology does not make or break a business. It’s the blind spots, the inability to see the errors of our ways, that kill companies. Consider Kodak and Facit: they collapsed despite being innovators and early adopters of the very tech that killed their business. Blockbuster, Nokia, and Sears? They suffered from blind spots that left diminishing returns and vicious circles in their wake.

The idiom “the floggings will continue until morale improves” is meant to be satire. In Sears’ case, the slogan for their decline was “cost-cutting will continue until sales improve”. A ‘solution’ that worsens the problem it is supposed to solve, also known as the cobra effect, is an amusing kind of poetic justice when it happens to others. It’s less funny when you inadvertently release the cobras into your own company.

And then we have scandals like Wells Fargo (fake accounts), Enron (financial misconduct), and Volkswagen (emissions cheating). Or actual disasters, like the Union Carbide catastrophe in Bhopal, India. Blind spots that led to crime-inducing perverse incentives and organisational pathologies.

Blind spots hide our mistakes

Blind spots make us ignore the errors of our ways by hiding our mistakes from our perception. Management blind spots are system by-products that mask or obscure undesirable consequences, thereby encouraging or reinforcing counter-productive practices in operations. Management blind spots are depressingly common.

Which is a polite way of saying that organisations are functionally stupid. In fact, Alvesson & Spicer’s concept of functional stupidity [*] in organisations highlights the problem that the language and semantics of management are ill-equipped to deal with the blind spots it produces. An outside view, ideally a transdisciplinary one, is probably necessary to pre-empt the dangers of groupthink and exformation.

[*] See Alvesson & Spicer (2012), “A Stupidity-Based Theory of Organizations”, Journal of Management Studies 49:7

Three heuristics for combatting management blind spots

In the Lord of the Rings, Galadriel gives Frodo a crystal phial in which the Light of Eärendil is captured and says “may it be a light to you in dark places when all other lights go out”. In the here-and-now, business environments are turning increasingly dark: more volatile, uncertain, complex, and ambiguous (VUCA). Managers are in dire need of Eärendil’s Light to illuminate their path to success.

Below, we describe three vital heuristics for reducing the risk of blind spots turning your Data or AI initiatives into a hot mess and for increasing your chances of achieving Data-AI-Ready Management. We believe they will still shine when all other lights go out in VUCA places.

  • Companies, organisations, and operations are people
  • They are also sociotechnical systems
  • Kahneman’s principle of What You See Is All There Is

We resort to heuristics to cope with complexity

Why ‘heuristics’? Organising and operating a business is an optimisation problem that only becomes more complex and non-linear the closer you look. There are no optimal solutions, so we must rely on heuristics (simplifications, approximations) as guardrails for coping with the problem and for setting our priorities and our approach. While we named our own species homo sapiens, that name is aspirational at best. We are boundedly rational creatures, satisficers by nature. So, when it comes to management, heuristics are as good as it ever gets.
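
To make ‘satisficing’ concrete, here is a toy sketch in code. The scenario, names, and numbers are our own invention, not anything from the literature: an optimiser must inspect every option to find the best one, while a satisficer stops at the first option that clears an aspiration level.

```python
# A toy contrast between optimising and satisficing, in Herbert Simon's
# sense. Options of unknown quality arrive one at a time, and each
# inspection has a cost. All names and numbers are illustrative.
import random

options = [random.random() for _ in range(1000)]  # option "quality" in [0, 1)

# The optimiser inspects every option and picks the best: 1000 inspections.
best = max(options)

# The satisficer sets an aspiration level and takes the first option that
# clears it. (If nothing clears the bar, it ends up with the last option.)
ASPIRATION = 0.9
for inspected, value in enumerate(options, start=1):
    if value >= ASPIRATION:
        break

print(f"optimiser:  quality {best:.3f} after {len(options)} inspections")
print(f"satisficer: quality {value:.3f} after {inspected} inspections")
```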

Organisations are people

Mary Parker Follett, the mother of modern management, said that management is the art of getting things done through people. This perspective is essential. Whatever else a company might be, it is embodied by people who are busy doing their jobs. This includes the managers, who are busy making sure that others are able to do their jobs, and actually do them. Patrick Hoverstadt provides a simple summary of what managers do.

At its core, the purpose of management is very simple, it is to do two things: firstly to decide what needs to happen, and secondly to ensure that what should happen does actually happen. [from The Fractal Organization]

All systems nominal — managers maintain business as usual

If we rephrase Hoverstadt, a manager’s job is making sure that their part of the organisation and the people in it are working and behaving as intended. In short, managers maintain business as usual (BAU). And they need to decide what that means in practice, in terms of organising and operating the work and the people doing the work. The very ability of the operations they manage to keep on working and behaving as intended hangs in the balance.

Our main message here is that the art of getting things done through people should explicitly involve encouraging and nudging people towards productive behaviours and away from the other kind. This includes the managers. Management constitutes reinforcement learning loops for everyone involved, whether or not managers are aware of the fact. People will form ways of working and ways of thinking in response to both the explicit and the implicit messages that managers send, regardless of whether managers intend, or even understand, the messages they send.

Learning by doing

The usual meaning of learning by doing is learning to do something by trying to do it; learning on purpose. But we also learn things incidentally, unintentionally, and automatically. By doing, we learn things. The results and effects of doing something are data we crunch for future decision-making and action-taking. This happens whether or not we are aware of it. Kahneman talks about System 1, the parts of the brain’s operations that are largely unconscious. System 1 is responsible for maintaining and updating our causal models of how the world works, in part or in full, informing us how to behave and respond in any given situation, depending on its “fast-thinking” perception of the situation. Stimulus and response. Cause and effect. Whether we like it or not, we become part of lots of different feedback loops [*] simply by doing our jobs or living our lives.

[*] The feedback loop is the core concept of cybernetics: an iterative process where actions lead to outcomes, which are then evaluated and used to inform future actions.
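
The loop is simple enough to sketch in a few lines of code. The snippet below is a minimal illustration of the cybernetic idea, with invented names and numbers: act, observe the outcome, compare it to the intent, adjust, repeat.

```python
# A minimal sketch of a feedback loop in the cybernetic sense.
# Names and numbers are illustrative inventions, nothing more.

target = 100.0  # intended outcome: "what should happen"
actual = 60.0   # observed outcome: "what does happen"
gain = 0.3      # how strongly each evaluation corrects the next action

for step in range(10):
    error = target - actual  # evaluate: the gap between intent and outcome
    actual += gain * error   # act: a corrective action informed by the gap
    print(f"step {step}: actual={actual:.1f}, gap was {error:.1f}")

# The loop steadily closes the gap -- unless a blind spot distorts the
# error signal, in which case it steadily 'corrects' towards the wrong thing.
```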

A light in dark places: double the feedback loops

Tradition decrees that managers send and the managed receive. Managers should break with this one-way tradition by doubling up on the feedback loops they create. The art of getting things done through people places the managers and the managed in reciprocal relations where each party is both sender and receiver. The “people” can only get their things done through their managers. This implies servant leadership.

Ultimately, managers lead by example. The only behaviours you can hope to reinforce are your own; managers set the standard. If you don’t care about blind spots, no-one will.

Organisations are sociotechnical systems

Reinforcement learning, in this setting, is just manager-designed feedback loops that ‘teach’ organisational behaviour. Fundamentally, managers design and operate multiple sociotechnical systems [*] of incentives and regulations, carrots and sticks, dos and don’ts. The important part of these systems is the sociotechnical practices they institute and reinforce, for operations and for themselves. The least ‘technical’ and most ‘socio’ of management systems are direct supervision, leading by example, and coaching. This makes managers participant observers in agent-based models of their own making.

[*] We must note that system semantics are dangerous. Thinking of something as a system conjures up the image of something that we can neatly define and cleanly separate from other systems. In reality, systems are fractal things; any system is part of other systems. Systems Thinking is necessarily recursive. According to Peter Senge, it’s “a framework for seeing interrelationships rather than things, for seeing patterns rather than static snapshots. It is a set of general principles spanning fields as diverse as physical and social sciences, engineering and management” [from The Fifth Discipline].

Sociotechnical systems are made up of agent-to-agent interfaces

Generally speaking, the full complexity of a sociotechnical system cannot be adequately captured as a single system. Instead, we must direct our attention to the agents, the people the company employs to do things for the company. This includes the managers. The agents and their sociotechnical practices give rise to a plurality of systems. Lots of interfaces and institutions emerge [*] from the agent-agent interactions and relations that occur as part of agents simply doing their jobs, minding their own business. Ultimately, any company, organisation, or unit (and its operations) is embodied by a mesh network of agents orchestrated by the managers. Organisations should be seen as agent-based models.

[*] See for instance Schelling’s book Micromotives and Macrobehavior. His model of unintended segregation illuminates how small decisions can compound into large-scale effects.
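
Schelling’s model is simple enough to sketch. The toy below is our own minimal rendition, with arbitrary parameters: agents with mild preferences about their neighbours, and a grid that nonetheless ends up starkly segregated.

```python
# A toy rendition of Schelling's segregation model; grid size, threshold,
# and layout are our own illustrative choices. Two types of agent sit on
# a grid. An agent is content if at least THRESHOLD of its non-empty
# neighbours share its type; discontented agents move to an empty cell.
import random

SIZE, THRESHOLD = 20, 0.4
grid = [[random.choice(["A", "B", None]) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(x, y):
    """Yield the eight neighbouring cells (the grid wraps at the edges)."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                yield grid[(x + dx) % SIZE][(y + dy) % SIZE]

def content(x, y):
    """An agent is content if enough of its neighbours are its own type."""
    others = [n for n in neighbours(x, y) if n is not None]
    return not others or sum(n == grid[x][y] for n in others) / len(others) >= THRESHOLD

for _ in range(50):  # a few dozen sweeps are enough to see clustering
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is not None and not content(x, y):
                empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
                           if grid[i][j] is None]
                if empties:
                    i, j = random.choice(empties)
                    grid[i][j], grid[x][y] = grid[x][y], None

for row in grid:  # '.' marks empty cells; note the homogeneous blocks
    print("".join(cell or "." for cell in row))
```

Mild micromotives (a 40% comfort threshold) produce stark macrobehaviour, which is precisely the kind of emergent, unintended system effect that individual agents never decided on.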

Preoccupation with the ‘technics’ of management produces blind spots!

As soon as managers add processes and tech for KPIs or OKRs, for reward mechanisms and performance reviews, and for monitoring in general, the systems get technical quite rapidly. This does not change the fact that managers orchestrate their agent-based models. But manager preoccupation with the technical parts of their systems can very well hide or mask the agent-based nature of operations (the ‘socio’ part). Hence, this preoccupation becomes a blind spot. Never forget that organisations are people, that your organisation is a mesh network of agents!

A light in dark places: management is a technology

Tech exists to serve its users. Sociotechnical systems are never about the tech but about the use of it. It’s always about the users, their usages, and their value-in-use. Tech has a purpose: to boost the users’ sociotechnical practices and their productivity with regard to some utility they enjoy.

In seeing organisations as sociotechnical systems, we must see management as the technology that operations use to boost their productivity. To root out management blind spots, we must see the management ‘system’ as the tech being used.

“What You See Is All There Is”

Managing the sociotechnical practices of an operation is itself a sociotechnical practice. Ideally, management as a practice boosts operational effectiveness. At the very least, management should ensure that operations stay on course. To make a nautical analogy, management fails when system blind spots shift the heading of the operations-as-a-ship, making it veer off course. In other words, management fails when it nudges or pushes agents in operations or management, or both, into unintended ways of working.

No sane person would ever willingly design or adopt self-sabotaging practices. We can use Kahneman’s principle of What You See Is All There Is (WYSIATI) to understand why we do it anyway. The immediate and direct effects of an incentive mechanism or reward system can (and should) be beneficial; this expectation is why we implement them in the first place. However, the loop can also have indirect effects that are not beneficial, nudging people into unintended ways of working and encouraging undesirable behaviours. The “imperious immediacy of interest” [*] in the desired effects can burn so brightly that it blinds us to the undesirable ‘by-products’ of the loop. Bright lights create blind spots, because they make it difficult to see anything else.

[*] Merton’s third source of unintended consequences, after ignorance and error.
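
To see WYSIATI and the cobra effect in one place, consider a deliberately crude toy model, entirely our own invention: a reward loop whose visible, intended effect soars while its hidden by-product quietly decays.

```python
# A deliberately crude toy of a perverse incentive (the cobra effect).
# The model and all its numbers are our own invention. A reward loop
# lifts the visible metric directly while quietly eroding a hidden one;
# watching only the visible metric is WYSIATI in action.

visible_output = 100.0  # what the dashboard shows (say, tickets closed)
hidden_quality = 100.0  # what the dashboard omits (say, rework and trust)

for quarter in range(1, 9):
    pressure = 1.0 + 0.1 * quarter  # the incentive tightens every quarter
    visible_output *= pressure      # direct, intended, highly visible effect
    hidden_quality *= 0.9           # indirect, unintended, unseen by-product
    print(f"Q{quarter}: output={visible_output:8.0f}  quality={hidden_quality:5.1f}")

# The visible metric soars while the hidden one decays. By the time the
# by-product surfaces in the numbers, the counter-productive practice is
# entrenched -- the bright light has created the blind spot.
```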

All models are wrong (but some are useful)

Statistician George Box observed that all models are wrong, but that some are useful. To get some utility out of a model, we must be “alert to what is importantly wrong” with it, because “it is inappropriate to be concerned about safety from mice when there are tigers abroad”. This applies to any model, quantitative or qualitative, including organisational heuristics: our sociotechnical practices, our ways of working, and our ways of thinking about our ways of working (business ontologies). Blind spots make us stop looking for tigers.

Box Thinking applies to Box’s own statement and to this article. Box Thinking certainly omits important epistemological complications, but it is supremely useful. This article is wrong in reducing organisations to mesh networks of agents, but we are pretty sure it’s the most productive perspective when it comes to turning tech into growth opportunities. Or at least when it comes to reducing the risk of crippling blind spots.

A light in dark places: ‘You Are Wrong But You Can Be Useful’

Blind spots are insidious. Cognitive biases are part and parcel of our ways of thinking, which makes it difficult to use our thinking to combat them. In precisely the same way, organisational blind spots make it difficult to organise them away. Good scientists learn Box Thinking, or methodic doubt, as a practice. While we did posit managers as participant observers of the ‘models’ they create, adding Box Thinking to the already hefty burden on managers would be unreasonable. But the light version of it (entertaining the notion that you are wrong, seeking outside views on things, and mulling them over) is reasonable. Which can be summarised as the principle of You Are Wrong But You Can Be Useful (URRBYCBU).

Illuminating the Sociotechnical Path

Steering through (or at least coping with) the sociotechnical complexity of melding data, AI, and human systems is not easy. The trilogy of “A Light in Dark Places” tries to distill open-ended heuristics for making decisions with less regrettable consequences. Recognising and addressing the interplay between ‘socio’ and ‘technical’ aspects is not optional; it’s the keystone of Data-AI-Ready Management in VUCA environments, letting the arches of your sociotechnical systems bear weight.

Doubt is your searchlight

Blind spots are the Achilles’ heel of any system. The bounds on human rationality make blind spots inevitable and numerous. The only thing we can do is to try to find them. Every blind spot dealt with clears the path to the future. So, embrace doubt! Entertain the notions

  • That you are wrong but can learn to be useful and
  • That your management model is a work in progress.

You are a service provider

Tech should amplify human capabilities. The sociotechnics of Data & AI reframe management as one technology among others used by operations. The people you manage depend on you getting things done so they can get their things done. They are the ones doing the heavy lifting.

About the Authors

At DAIRDUX, our business is to help you understand the sociotechnics of Data & AI. We cannot in good conscience use Box’s metaphor and say that we help you hunt critically endangered felines. But we do help you

  • Root out blind spots
  • Boost the productivity of your operations
  • Turn Data & AI into growth opportunities