When AI Gets It Wrong, Who Gets It Right?

AI is shaping more decisions than we realise, quietly influencing outcomes in ways that affect people’s lives, often without explanation or recourse.
There’s a leadership challenge – and an opportunity – that comes with that shift. One that’s about more than managing technology; it’s about doing accountability right.
If you’re leading teams, shaping policy, or building systems, this is a call to lead AI with clarity, care, and true accountability.
AI is no longer on the horizon. It’s here. Woven into the everyday decisions that shape people’s lives.
It’s in the systems that determine who gets a loan, which CVs make it through, how patients are triaged, what students are shown, and which neighbourhoods are policed.
It’s not visible. It’s not neutral. And it’s not always right.
The question isn’t if or when AI will affect people – it already does.
The question is how we lead in its presence: with clarity, care, and accountability. Because these outcomes aren’t just technical errors. They’re signals of where leadership needs to show up more strongly.
A recent Wharton article, “Who’s Accountable When AI Fails?”, puts it clearly: responsibility must scale with innovation.
This is where Accountability Done Right matters, not just as a mindset, but as a mechanism. As infrastructure. Because when AI goes wrong, it’s rarely about the code alone. It’s about what was prioritised. Who was consulted. How risk was defined. And whether the right people were in the room, asking the right questions.
Leading accountability in the age of AI means:
- Clarifying who holds the line. Many people may contribute to building or deploying a system. But leadership means identifying who is answerable for its impact.
- Expecting explainability. Transparency isn’t optional. It’s foundational to trust and safety. If we can’t explain how a system makes decisions, it’s not ready for deployment.
- Making values visible. AI can process data, but it can’t reflect deeply. It can’t weigh social context or moral consequence. That’s the work of leadership.
And perhaps most importantly, we need to build toward an aspirational culture of accountability: one fuelled by clarity rather than fear, and shaped by leaders bold enough to hold the line on what matters.
This is how accountability shifts from being a backstop to being part of the culture. The goal isn’t to assign blame after the fact. It’s to embed clarity and care into how we innovate from the beginning.
AI is a tool. Powerful, evolving, and increasingly present. But it’s still a tool. What determines its impact isn’t what it knows; it’s what we choose to do with it. And how we lead the choices that surround it.
So, let’s lead it.
Boldly. Wisely. Humanly. With the same clarity and care we’d offer a team of people, because in many ways, that’s exactly what it is.
And that’s the work in front of us.