‘Critical thinking in a world smoothed over by AI’
03-02-2026 | Interviewers: Hilde van der Baan, Gijs Linse | Author: Emely Nobis | Image: Gregor Servais
It is a cold day on the campus of Tilburg University, with the last of the snow slow to thaw. The disappearing, increasingly gray snow reminds Roos Slegers of her years in Boston. From the Dante Building on campus, home to the Tilburg School of Humanities and Digital Sciences, Slegers conducts her research at the intersection of philosophy, literature, and business ethics. Often with a remarkable perspective. Slegers has delved into the origins of capitalist thinking, moral discourse, self-help literature, dragon phobias, and science fiction. But always with a link to the present: to the consequences of AI and digitization for human relationships. A conversation about the urgent need for courage and friction, about our addiction to pleasure machines, and about human contact as a glimmer of hope.
We are here for an interview about the digital age, and I notice that you are the only one here with a paper notebook. Is that a statement?
‘Yes, it is striking, is it not? But it is not a statement of principle. Of course, I have all the digital tools at my disposal, I love gadgets and technology, and I have a subscription to several AI services, but at the same time, I know that I can concentrate better with pen and paper.’
Let us talk about ethics in the digital age, and in particular the dilemmas and questions that the boardroom is facing and will still face. How do you think the business community is dealing with the digital age?
‘I am just beginning a large project on ethical guidelines for AI use in small and medium-sized businesses. I see roughly two trends there. One group of companies is doing nothing at all. They are waiting to see which way the wind blows, waiting for legislation and regulations, for example. The second group, which is somewhat larger, is simply going all-in. We have to keep up, is what they say. If you do not keep up, there is a good chance you will end up among the losers. The fact that there is little to no regulation also offers opportunities, in their view. It is currently impossible to say where developments are heading. We do not have a clear picture of who the winners, losers, and victims will be. We will only be able to conclude that later.’
How do we deal with all the changes around us from a moral perspective?
‘With every new technological development, you see a kind of moral panic. When the train was introduced, some people thought it was unwise to let women travel on it; their wombs might fall out. There is moral panic now too. The only question is: is this just normal moral panic surrounding new technology, or is it fundamentally different? Unfortunately, we cannot assess that yet either. Those are conclusions for later. We can only say something about what we feel now if, for example, we have GenAI write a text. That text looks perfect, but it is not your text. And that feels a bit strange. I recently visited an advertising agency, and they are really doing their best these days to make their materials look less AI-like. Because of AI, many materials look too polished. Smooth and polished used to be the ultimate goal, especially in advertising, but now it has become somewhat repulsive, and we want that bit of imperfection. Apparently, that appeals to us humans more. Companies want to convince clients that something is not purely AI-generated. The same thing is happening here at the university. I deal with a lot of students who have their papers written by AI. They then insert a few errors to make it more credible. I get it: if I see a perfectly written thesis, I am immediately suspicious.’
But it is still strange. The ambition should be to write the best possible piece. When I ask my employees for a memo, I want a good one. Does it matter how the result is achieved? Why should you roughen it?
‘I think the environment in which it happens is an essential difference. Of course, you would rather have a good memo than a bad one. But we are at an educational institution here. I have to teach people something, and if you did not write the text yourself, I cannot assess whether you have mastered the material. There are many lecturers who want to ban AI use altogether. That might be a sympathetic and understandable idea. But it is unenforceable.
I also see opportunities now. I have students write a piece, and if we can have a meaningful conversation about that text—whether it comes from ChatGPT or from them—I am satisfied. In fact, I have noticed that the adjustments we have to make because of AI—such as oral exams—can add something important. In recent years, I have had students graduating who have never had a serious conversation with an adult. I have students who cannot or dare not make eye contact with adults. I see revolutionary potential: having a meaningful conversation about a scientific topic.’
AI tends to be flawless. Is it not also important that you learn to make mistakes? Taking risks, accepting that things can go wrong, and being able to learn from those mistakes, for example, is inextricably linked to successful entrepreneurship.
‘Yes, I think you are right about that. I foresee a problem in the field of innovation anyway. GenAI's innate nature is to project patterns from the past into the future. It repeats what has been most successful. The chance that GenAI will come up with something surprising or niche is therefore not very high. If that is what we strive for, we actually need some old-fashioned human virtues again. Such as courage. You need someone who dares to question things, who dares to be critical, who dares to speak their mind. We need to learn to think critically in a world that is being smoothed over by AI. Innovation often comes from conflict.
Innovation also comes from daring to fail, from failure and from criticizing that failure. It is very difficult to argue with ChatGPT right now. ‘Chat’ is very flattering, thinks everything is great, and says that every worthless piece has potential. If innovation is high on your company’s agenda, I am not sure that ‘Chat’ is your friend. Where is the friction?’
You research science fiction from the past. What strikes you about those visions of the future?
‘I find it fascinating to examine the stories that were once told about the world of the future, for example, Aldous Huxley's Brave New World. What was Huxley warning us about? He is not necessarily warning about oppression by robots; no, it is much more subtle. He warns that one day we will be so entertained that we will stop paying attention. More or less voluntarily. Books do not need to be banned if no one reads them anyway, if everyone just wants to lie on the pleasure machine. Huxley's scenario has now largely become reality.’
What can or should the answer be?
‘There is a movement of techno-optimists who say that all questions and all problems can be answered or solved with more technology. We no longer know how to make friends, so Mark Zuckerberg invents an AI friend. I think that is a worrying development. If that is the solution, we are doomed. But technology will always have the advantage. After all, philosophy thrives on slowness. That is why philosophy is perceived as so irritating in the business world. There is always an ethicist urging us to sleep on it. Former Google CEO Eric Schmidt once said that we humans talk too much. That technology will solve everything for us. Talking is a waste of time. I am of the opposite school of thought. I think we should take a moment to consider what new technology does to us. But these are really two conflicting movements.’
Back to the question of how disruptive this new era is. We can indeed refer back to the train, but are current digital developments not much more unique? Can you put that in historical perspective?
‘The big difference is that the train was not in your pocket back then. GenAI is. The scale is much bigger now; it is available to everyone. GenAI also engages in conversation with you. And on top of that, the economic dependence is enormous. The entire American economy is driven by data centers. The scale and economic dependence make this unprecedented in history. That could be disruptive.
This also applies to personal relationships. I do a lot of research on people who have romantic relationships with AI. Why do they want that? Because it is a frictionless relationship. It is someone who is always available, never difficult, who you can simply mold into your ideal image. My biggest concern is not so much that we humans will soon be the pets of a robot, but rather that people apparently think a perfect relationship is one without friction.’
Many rules are being devised in the field of data to ensure that consumers or companies are better protected. To what extent are rules the solution?
‘From an ethical perspective, compliance is never sufficient. Rules quickly become outdated. We have a tendency to come up with rules for everything. Look at our diversity and inclusion policy here at the university.
It is getting increasingly comprehensive. We create a new rule for every situation. Meanwhile, the policy has grown into something that feels to some like a kind of police state. You hear some people say: you cannot say anything here anymore. There is certainly some truth to that. Because where is the conversation if all you have are rules? Writing rules can set its own crushing machinery in motion, in which human judgment ultimately threatens to disappear. People have always had a tendency to write rules. Think of the Ten Commandments. It is nice to have that guidance. But the moment you write down the rule, someone always comes up with an exception. So, you can never quite resolve it.’
Besides, I might be able to create the rules easily. But what if no one follows them? Do we not need a more cultural or ethical shift?
‘That is the big question. That is also the frustration of ethics. Often you do not get beyond the exclamation that people just have to get better. That sounds rather naive. Here too, the medium is the message. When I teach, I can get my students on board. We agree on most ethical issues. But as soon as we are in a different context, for example, all sitting behind a screen, you suddenly see decent, respectable people lose their decorum. The medium does not force you, but it does invite you.
You see that with Elon Musk's AI tool, Grok. It now offers a nudifier tool, with which you can undress someone in a photo. This tool is being used extensively. And it all falls under the umbrella of ‘free speech’ or ‘good fun.’ But the victims of these jokes are usually vulnerable groups who have no voice and no power.
We should protect these vulnerable groups. But that protection probably will not come from technology (Social media platform X announced shortly after the interview that the rules for AI chatbot Grok will be tightened. This should make it impossible to digitally ‘undress’ photos of people, ed.). We need critical people for that. How do we ensure that people continue to think critically, continue to show courage in this frictionless world?’
Suppose I am a board member of a company active in social media. And directors come up with a plan that is within the rules, but I feel: ‘Should we really do this?’ How should I deal with that? Would someone on Grok's board not have said: ‘Yeah, but guys, we really need to talk about that nudifier tool...’?
‘I fear this discussion has not been very active in the US. Obama once assembled his Team of Rivals, under the motto ‘surround yourself with people who disagree with you.’ But who dares to do that anymore? It is no wonder we all now have tools and media focused on entertainment and frictionlessness. Frictionlessness is also a high priority for all those tech companies. But what does it mean when you do not experience any resistance? Dissent has now become something radical, something that requires courage.’
Should we, for example, move to a policy where it is always someone's job to disagree?
‘Appointing people to be critical does not seem like the solution to me. Then I would be saying: I do not take you seriously, because you always have to be difficult. What we need to do is take our own discomfort and the discomfort of others very seriously. You have to create a climate in which you can express doubts and express your gut feelings. That is where morality begins—with that feeling. It starts with the courage to raise your hand and say: this does not feel right. That is ethics—the stick in the mud. Every individual has a limit somewhere.’
What should be top of mind in the boardroom?
‘I think the lesson should be that you have to pay attention to the micro level. By that I mean, for example, our ability to sense whether our opinion is welcome. Back to Grok, Elon Musk's chatbot: there will undoubtedly have been people around Musk who had reservations about that nudifier tool. But they know from experience that it is better not to go against Musk's will. In the boardroom, you naturally want there to always be the opportunity for conversation. At the micro level, that space can arise from something small: for example, when someone dares to express doubt or dares to be a little vulnerable. We have all experienced this: the atmosphere suddenly becomes freer and more open thanks to such a small display of courage. Courage lies precisely in those little things. Once you realize that, it does not feel like such a huge challenge.’
Are we all heading in the wrong direction? Or do we ultimately have a sufficient moral compass? Does hope or despair prevail?
‘We are going in both directions. There is despair and there is hope. You cannot help but despair when you are doom-scrolling through your news app in the morning. But there are also plenty of hopeful moments. For example, when I see students who are truly thinking for themselves. Who are struggling and sharing that with each other. The remarkable thing is that hope always plays out at the micro level, and despair usually at the macro level. We have to realize that the macro level almost always comes to us mediated, depersonalized through a screen and almost never in its pure form. Fortunately, we will always maintain personal contacts. That is where the hope lies.’
This interview was published in Management Scope 02 2026.
This article was last changed on 03-02-2026