Prof. Dr. Daniel Keller, 29.4.2026

People, Machines, Management

Why the use of AI is shifting the focus of management work

 


 

Dear readers,

The use of artificial intelligence is becoming increasingly accepted and is sparking growing enthusiasm across society. In everyday life, AI is used for everything from writing a love letter to helping with homework or crafting the perfect reply to a hard-to-reach landlord. Programs like ChatGPT take over research and much of the mechanical thinking for us. And what works, or seems to work, well on a small scale can be transferred to the business world. AI has now arrived everywhere.

Companies are placing high expectations on artificial intelligence. In conversations with leaders, I often encounter the following assumption:

If algorithms can analyze data faster and recognize patterns more accurately than humans, shouldn’t they also make better decisions? At first glance, that sounds plausible. But it overlooks a fundamental difference: the difference between analysis and judgment. 


Near-perfect analysis does not equal perfect judgment

Modern AI systems can process large amounts of data, identify correlations and generate forecasts. In areas such as planning, strategy or controlling, this can significantly improve the quality of the basis on which decisions are made. But decisions are more than sound analysis. They always involve judgment, and with judgment comes personal moral responsibility. And that is where the real distinction lies.

Every decision has two components: analysis and judgment.

Analysis evaluates data and models, while judgment determines which option makes the most sense under the circumstances, given the risks involved. Algorithms can recognize patterns and calculate probabilities, but they have no independent sense of a problem and bear no responsibility for the consequences of a decision. Although they can generate options, they cannot judge which of them is responsible. That, and this is crucial, remains a human task.

We can already see this in organizations using automated decisions, for example in credit assessments or pre-screening job candidates. Even these systems are ultimately based on criteria defined by people beforehand.


The machine does not decide the rules, it executes them.
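This division of labor can be made concrete with a small, purely hypothetical sketch of an automated pre-screen. All names and thresholds below are invented for illustration: the thresholds stand for criteria that people defined beforehand, and the code does nothing but apply them.

```python
# Hypothetical sketch of an automated credit pre-screen.
# The thresholds are human-defined criteria; the machine only executes them.

from dataclasses import dataclass

@dataclass
class Applicant:
    income: float          # annual income
    debt_ratio: float      # debt payments as a share of income
    years_employed: int

# Criteria set by people beforehand -- the machine never chooses these.
MIN_INCOME = 30_000
MAX_DEBT_RATIO = 0.4
MIN_YEARS_EMPLOYED = 2

def pre_screen(a: Applicant) -> bool:
    """Applies the rules; it neither defines nor justifies them."""
    return (a.income >= MIN_INCOME
            and a.debt_ratio <= MAX_DEBT_RATIO
            and a.years_employed >= MIN_YEARS_EMPLOYED)

print(pre_screen(Applicant(income=45_000, debt_ratio=0.3, years_employed=5)))  # True
print(pre_screen(Applicant(income=45_000, debt_ratio=0.6, years_employed=5)))  # False
```

Whether these thresholds are fair, and who answers for their consequences, is exactly the judgment question the code cannot settle.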

This is a profound difference that becomes even clearer when we look at how management work has changed. For a long time, a core part of leadership work involved gathering information, interpreting it and deriving recommendations for action. This is exactly where AI is becoming especially powerful.

As analysis becomes increasingly automated, the focus of management shifts. The emphasis moves away from interpreting data and toward deeper questions such as:

  • What are the assumptions behind this analysis?

  • What risks arise from the proposed option?

  • Which decision is responsible in the context of the organization?

AI therefore does not reduce the need for decisions, but changes their nature. The focus shifts from analysis to judgment. This brings into view a dimension of management that is often overlooked in the discussion about AI: the normative orientation of organizations.


In the St. Gallen Management Model, this level is described as the normative dimension alongside operational and strategic management. It deals with fundamental questions of orientation: values, responsibility and goals. AI can provide enormous support at the operational and strategic levels, while the normative level remains deeply human. Higher-order goals and questions such as these play a central role in decision-making:

  • Does the decision align with the goals we pursue as an organization?
  • What criteria guide our decisions?
  • What responsibility do we carry toward employees, customers and society?

Such questions cannot be answered algorithmically.


The more analysis is automated, the more important personal judgment becomes.

In our work with leadership teams, this is exactly where we see the central challenge for many organizations.

It is not just about using and understanding the technology effectively, but also about strengthening decision-making skills under new conditions.


The more organizations rely on AI, the more important one non-automatable capability becomes: judgment in leadership. Because in the end, it is the decisions of one or several real human beings that make the difference.


Yours,
Daniel Keller

 

Image source: Photo by syda_productions on Freepik


Prof. Dr. Daniel Keller

I am the founder and CEO of KellerPartner, a specialist in strategy and transformation, with extensive management, consulting, and training experience across a variety of industries and countries. As a professor of General Management at Steinbeis University, I combine application-oriented research with broad, interdisciplinary expertise.
