The human in the algorithm
This blog explores how we, as a society, can ensure that algorithms support human values, rather than subtly determining our behavior.
Algorithms are deeply interwoven into our daily lives—from search results and news feeds to medical diagnoses and credit assessments. They offer convenience and efficiency, but they also carry risks. Biases in data can lead to discriminatory outcomes, and the logic of the algorithm is often opaque. It is tempting to see algorithms as neutral technology, but they are always a product of human choices, assumptions, and interests.
Many decisions that were once made by people are now in the hands of systems we do not fully understand. A job vacancy may never reach you because an algorithm does not match your profile. A loan can be rejected based on patterns that are not explainable to the applicant. And which news articles you read is largely determined by what the algorithm thinks appeals to you—and therefore also by what you do not get to see.
The effect of this is subtle yet far-reaching. Slowly, our range of choices shifts, often without us noticing. Our autonomy is shaped by decision logic we did not choose ourselves and that may sometimes run counter to our values.
That is why we must continue to test technology against human values. This begins with making those values explicit: justice, transparency, inclusion, human dignity. They function as a compass in designing, testing, and deploying algorithms.
A fair algorithm does not discriminate based on irrelevant characteristics such as ethnicity, gender, or postcode. Transparency means that the functioning and decision-making of the system are explainable to users. Inclusion implies that diverse perspectives are included in the design process, so that the outcome is not biased by a limited group of designers.
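To make fairness testable in practice, teams often start with simple statistical checks. The sketch below compares approval rates between groups, a crude form of the demographic-parity test; the data, the group labels, and the four-fifths threshold are invented here purely for illustration.

```python
# A minimal sketch of a fairness check: compare approval rates across groups
# (demographic parity). The data, group labels, and the 0.8 threshold
# (the "four-fifths" rule of thumb) are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical outcomes of a loan-scoring model, grouped by postcode area.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", False), ("B", False), ("B", True), ("B", False)]

rates = approval_rates(decisions)
worst, best = min(rates.values()), max(rates.values())
print(rates)
if best > 0 and worst / best < 0.8:  # four-fifths rule of thumb
    print("Warning: approval rates diverge strongly between groups.")
```

A check like this does not prove an algorithm is fair, but it turns an abstract value into a concrete question the team must answer.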
Explainability—often referred to as explainable AI—is crucial for trust in algorithms. People must be able to understand why a decision has been made, especially when that decision has major consequences for their lives. An algorithm that provides a medical diagnosis, for example, must not only give the result, but also the underlying reasoning. Without that explanation, a technological black box emerges that makes any conversation about values and justice impossible.
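What such an explanation can look like is easiest to see in a deliberately simple model. The sketch below pairs every prediction with the contribution of each input, so the reasoning travels with the result; the features, weights, and threshold are invented for illustration and do not represent a real diagnostic model.

```python
# A minimal sketch of "explanation alongside the result": a linear risk score
# that reports each feature's contribution, not just the final verdict.
# All weights, features, and the threshold are invented for illustration.

WEIGHTS = {"age": -0.02, "bmi": 0.05, "smoker": 1.2}
THRESHOLD = 1.0

def predict_with_explanation(patient):
    # Each contribution is weight * value, so the score is fully decomposable.
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    score = sum(contributions.values())
    verdict = "elevated risk" if score >= THRESHOLD else "normal risk"
    return verdict, contributions

verdict, why = predict_with_explanation({"age": 54, "bmi": 31, "smoker": 1})
print(verdict)
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```

Real systems are rarely this transparent, which is precisely the point: the more complex the model, the more deliberate the effort to keep its decisions explainable must be.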
A common misconception is that the algorithm “makes the decision” and that responsibility therefore lies with the technology. But behind every algorithm there are people: developers, managers, policymakers. They determine which data are used, which rules are applied, and which goals the system pursues.
Responsibility therefore also means taking ownership of the consequences of algorithmic decisions. That requires governance structures in which ethical evaluation is as self-evident as technical quality control.
Social media are a clear example of how algorithms shape our reality. These systems are optimized for attention: the longer you keep watching, the better for the business model. This means that algorithms often favor content that provokes emotion—anger, outrage, sensation—because it holds your attention longer. The result is that public debate hardens and polarization increases.
Here lies a direct link to human values. If we want to preserve a healthy public space, we must critically examine how these algorithms are designed and which goals they serve. Do we want the attention economy to set the course, or do we steer by social and democratic values?
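The difference between these two goals can be made tangible with a toy ranking function. In the sketch below, the same three posts are ordered once purely on predicted engagement and once with a penalty for outrage-driven content; all scores and the penalty weight are invented for illustration.

```python
# A toy sketch of how an optimization goal shapes a feed. The scores and the
# "outrage" signal are invented; real ranking systems are far more complex.

posts = [
    {"title": "Nuanced policy analysis", "engagement": 0.4, "outrage": 0.1},
    {"title": "Outraged hot take",       "engagement": 0.9, "outrage": 0.9},
    {"title": "Local community news",    "engagement": 0.5, "outrage": 0.2},
]

# Goal 1: maximize attention -> sort purely on predicted engagement.
by_attention = sorted(posts, key=lambda p: -p["engagement"])

# Goal 2: weigh engagement against a social cost for outrage-driven content.
def value_aware(p, outrage_penalty=0.7):
    return p["engagement"] - outrage_penalty * p["outrage"]

by_values = sorted(posts, key=lambda p: -value_aware(p))

print([p["title"] for p in by_attention])
print([p["title"] for p in by_values])
```

The point is not the specific numbers, but that the ordering—and thus what people get to see—follows directly from the goal the designers choose.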
Leadership in a data-driven world means having the courage to ask these questions—not only from a technical perspective, but above all from a human one. It requires embedding ethics in policy, diversity in design teams, and continuous evaluation and adjustment. Technology is never finished; its social impact changes and requires ongoing attention.
Ultimately, the question is not whether we use algorithms, but how. Algorithms can add enormous value—from finding medical treatments more quickly to making supply chains more sustainable. But only if we preserve the human scale will these systems continue to support us rather than steer us.
Bringing the human into the algorithm means seeing technology as an extension of our values, not a replacement for them. It means accepting that technology is never neutral, and that it is our collective responsibility to use it for good.
In a world increasingly driven by data, it may be our greatest challenge to remain human—and to ensure that our technology does the same.
Rene de Baaij
