Implementing AI without losing yourself (3/5)
We show that AI redistributes power, roles, and accountability. We argue for clear mandates and normalizing dissent. We ask whether AI is building control or professional trust.
AI changes power, roles, and the undercurrent
AI redesigns not only systems, but relationships too
AI redistributes power and responsibility. Here you define who is allowed to overrule, who explains, and who corrects.
AI is often sold as a smart assistant. In reality, it redistributes power, responsibility, and trust. Whoever designs the technology is also designing relationships within the organization.
Three shifts
1) Who prepares decisions?
Where a professional used to open the file, a model now delivers a first assessment. Do you trust the model or the professional? Does deviation need to be justified? Is dissent “difficult behavior” or professional craftsmanship?
2) Who is accountable?
If someone is unfairly disadvantaged by an AI recommendation: who explains, who apologizes, who can correct it? Without clarity, a no-man’s-land emerges: “that’s just what the system does.”
3) Who defines the norm?
Models reproduce the history of your decisions (and biases). What used to be implicit becomes explicit and scalable. Do you want to keep that norm — or deliberately adjust it?
The psychological layer
Leaders may feel a loss of status or control when patterns become visible.
Professionals sometimes experience AI as surveillance or as devaluing their expertise.
Boards may be tempted to steer by dashboards — and in doing so impoverish the dialogue.
What management needs to do
1 Make the division of roles explicit
Who remains ultimately responsible for which decisions? Where is human overruling mandatory? Capture this in mandates, processes, and the story you tell internally and externally.
2 Normalize dissent
Say it out loud: “It is professional to question AI outcomes and to overrule them when necessary.” Celebrate examples where this happened.
3 Discuss the undercurrent
Name the fear and tension. Combine honesty with a serious development path.
AI as a mirror
Use AI to expose patterns you no longer want: systematic disadvantaging of certain groups, or steering for speed at the expense of care. That way AI becomes an instrument for organizational maturity, not just efficiency.
What now, what later, what not
Now: create an “overrule log”: where did professionals deviate from the model — and with what result?
Later: design a dissent ritual (for example, a quarterly red team review).
Not: treat deviation from AI automatically as an error.
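The "overrule log" above can be as simple as a structured record per deviation. As a minimal sketch, the field names below (`case_id`, `rationale`, `outcome`, and so on) are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OverruleEntry:
    """One case where a professional deviated from the model's recommendation.
    All field names are illustrative, not a prescribed schema."""
    case_id: str
    model_recommendation: str
    human_decision: str
    rationale: str            # why the professional deviated
    outcome: str = "pending"  # filled in later, once the result is known
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

log: list[OverruleEntry] = []
log.append(OverruleEntry(
    case_id="2024-0173",
    model_recommendation="reject application",
    human_decision="approve application",
    rationale="model missed a recent change in circumstances",
))

# Periodic review: which overrules still need an outcome assessment?
pending = [e for e in log if e.outcome == "pending"]
print(f"{len(log)} overrules logged, {len(pending)} awaiting outcome review")
```

The point of the log is the quarterly conversation it enables, not the tooling: recording the rationale and later the outcome is what makes deviation visible as professional judgment rather than error.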
Reflection question
Does your use of AI mainly reinforce control and hierarchy — or trust and professional dialogue?
