AI, people, and the future of work

Redesigning work and accountability for an AI era.

Buki Obayiuwana
14/04/2026

When organisations talk about AI, the conversation usually starts with familiar themes: use cases, pilots, efficiency gains, job displacement, and the risk of being left behind. Those concerns are real, but they are not the full story.

What is often overlooked is that AI is not just changing what work is done. It is reshaping who or what performs the work, how decisions are made, and where accountability sits. Once AI becomes embedded into everyday operations, the challenge shifts from adoption to design, control, and accountability.

The shift organisations are not fully accounting for

Most organisations still treat AI as a tool that supports people. In reality, work is now executed through interactions between humans, AI systems, and non-human identities – agents, bots, and automated workflows. Decisions and actions flow across these actors in ways that are increasingly opaque. Yet accountability structures continue to assume that work is done by a single human individual. The result is a system that operates but is not fully understood. This is the gap leaders must close.

The ten questions leaders need to be asking

To understand and manage this shift, organisations must move beyond capability and culture and ask more fundamental questions.

  1. Who or what is actually performing the work?
  2. Who is accountable, and legally responsible, for the outcomes?
  3. What happens when AI fails or degrades?
  4. How will human expertise evolve?
  5. How are employees experiencing this shift?
  6. Who will perform the work in the future?
  7. Who owns and governs the non-human workforce?
  8. Where does responsibility sit across external providers?
  9. Who is really making decisions?
  10. What work should remain human?

Individually, none of these questions is new. What is new is how tightly they now interact, and how quickly gaps in one area create exposure in another. This is where our thinking differs: we treat these issues as a connected system, not as isolated risks.

Where this shows up in practice

Who is doing the work?

Operational outcomes are often produced by chains of humans and AI. Yet accountability is still assigned as if a single person completed the task, creating ambiguity and, at times, no true ownership.

Who owns the decisions?

AI is now shaping hiring, claims, underwriting and fraud detection. Bias, explainability and regulatory exposure remain with the organisation, even when decisions are AI-influenced. Employees may be accountable for outcomes they cannot fully explain.

What happens when AI fails?

AI tends to fail gradually, not dramatically. Degraded performance can affect prioritisation, processing or risk detection — leading to breaches of service expectations. Resilience depends on human capability, governance, data quality and fallback design.

How is human capability changing?

As roles shift from doing to reviewing, skills can erode. This reduces challenge capacity and increases dependency — a structural resilience risk.

How are people experiencing AI?

AI changes trust, fairness perceptions and behaviour. Opaque or unchallengeable systems drive workarounds and weaken control.

Who owns the non-human workforce?

Bots and agents act independently but are often not governed as part of the workforce, creating security and accountability gaps.

Together, these questions highlight a critical point: AI transformation is no longer about skills or technology alone. It is about redesigning the system of work – how decisions are made, how accountability flows, and how humans and non-human actors operate together. Organisations that address these questions in isolation will miss the deeper structural shift. Those that treat them as an interconnected system will be better positioned to build organisations that are compliant, resilient, and genuinely fit for an AI-enabled future.

Why a system view is needed

These issues are interdependent: weak accountability increases legal exposure; skill erosion reduces resilience; poor governance drives control failures. Yet organisations still address them in silos.

But AI does not operate in silos.

It creates a connected system where people, technology, and processes are intertwined; decisions are distributed; and accountability blurs. Managing this requires shifting from tackling individual risks to designing and governing the system as a whole.

Most organisations are already operating within this model. The real question is: are you intentionally designing it, or discovering its weaknesses after the fact?

Humans, AI and non-human identities

A holistic framework for managing human and non-human identities

Our holistic framework enables you to assess how well-equipped you are and to identify the actions you need to take. AI is already reshaping the system of work, whether organisations design for it or not. The organisations that will lead are those that confront these questions head-on, map their human and non-human workforce, and build clarity of accountability before gaps turn into failures. 

To find out more, please explore our website at AI, Change and Transformation.

Contact us


Buki Obayiuwana
Managing Director, AI, Change and Transformation Consulting, London
