Masterclass AI: 'Clean Data Are the New Gold'
Now that artificial intelligence (AI) offers unprecedented opportunities in many areas, organizations face the challenge of anticipating these developments consciously, cognizant of the risks and limitations. ‘Workflows everywhere will change,’ was the urgent message during a masterclass for board members organized by Management Scope in collaboration with The Board Practice. What should leaders focus on during this transition?

The British comedy group Monty Python once created the sketch ‘100 Yards for People with No Direction.’ In it, a group of sprinters lines up for a race in a full stadium. As soon as the starting gun fires, the participants dash off to loud cheers, but they all run in different directions. The hilarious video, still available on YouTube, illustrates how companies are currently dealing with AI. So observed Jan Veldsink, a lecturer in artificial intelligence and cyber at Nyenrode Business University and strategic AI advisor at Rabobank. Veldsink notes that many companies are running assorted pilots, have employees experimenting independently with ChatGPT, and have sometimes appointed a chief AI officer. ‘Yet a clear direction is often lacking.’ According to the AI expert, direction is crucial. ‘Because developments are moving rapidly and international legislation is on its way, it is essential for companies to develop a strategic vision on AI. The assumption should be that AI will affect the entire organization. AI will not only impact the business model and competitive position but will also require attention to compliance.’ During a breakfast masterclass organized by Management Scope in collaboration with knowledge partner The Board Practice, a select group of board members was briefed on the latest developments and the challenges posed by AI.

ChatGPT as a Swiss Army Knife
First, a brief history. AI has existed since 1950, when the English mathematician Alan Turing tried to make computers think and communicate like humans. He gave his name to the Turing test, which is still used to determine the extent to which a computer exhibits human intelligence. Around 1980, machine learning came to the fore: humans no longer instructed computers through explicit code; computers learned by analyzing data. Around 2010, deep learning followed. This form of artificial intelligence uses neural networks with many deep layers to analyze large amounts of complex data and recognize patterns, inspired by the functioning of the human brain.
Although these technologies were groundbreaking at the time, AI only became ‘really exciting’ when the American company OpenAI launched ChatGPT in 2022. The language model finds patterns in text, images, and audio, creating something entirely new—generative AI. Moreover, unlike machine and deep learning, ChatGPT can communicate in a human-like way. This made AI accessible to everyone, accelerating its development.
Generative AI opens a new world of possibilities. Veldsink: ‘ChatGPT is trained on enormous amounts of text data and can provide realistic answers to questions, write articles, create summaries, generate poetry, and recognize images.’ The AI expert describes ChatGPT as a Swiss Army knife. ‘We suddenly have a tool that can be used in every industry and in many different processes within an organization. With this new form of AI, customer conversations can be analyzed, complex reports can be interpreted at high speed, and fraudulent or money-laundering activities can be detected.’ AI can make work more efficient, faster, better organized, safer, and more sustainable. ‘For example, the Dutch Rijkswaterstaat could use AI to inspect roads, bridges, locks, buildings, and the road network. Infrastructure can be filmed with drones and the footage assessed by AI. That way, Rijkswaterstaat knows where maintenance is needed, without sending inspectors up cranes and without disrupting traffic.’

Decision-making under the microscope
But AI also brings risks. Users must realize that the language model is built on a database of human-created data, and that newly generated data becomes part of ChatGPT's learning process. ‘ChatGPT draws connections within this database, which means the language robot cannot think of new things.’ This becomes tricky, according to Veldsink, especially if ChatGPT is used in decision-making processes. ‘Suppose nine out of ten decision-makers consult ChatGPT to delve into a particular issue. Only one person thinks independently and believes the organization should move in a different direction. In practice, the decision-makers who used ChatGPT will quickly agree. They might have received slightly different stories, but they will ultimately come to the same—ChatGPT-provided—conclusion.’ Veldsink urges directors and board members to be alert to this confirmation bias. ‘It is crucial to scrutinize decision-making processes. Do minority voices get enough space? Is there sufficient transparency about how decisions were made?’
Additionally, ChatGPT's outputs should always be validated by a human. ‘The language robot lacks common sense and deeper logic. It has learned from texts, images, and videos but cannot think for itself.’ For example, ChatGPT assumes that objects can fall because it has been taught this. ‘But it does not understand the logical concept of gravity as we humans do.’ Veldsink is convinced this is a temporary limitation; an enormous amount of work is under way to give ChatGPT a more human understanding. ‘Algorithms are being coded, but these are incredibly complex. We are in a transition phase where generative AI can do very useful things, but we must also be very wary of the pitfalls, risks, and limitations.’

Data as the most important asset
On top of this, the success of AI stands or falls with the quality of a company's data. ‘Forget the algorithm; let a university develop that,’ Veldsink argues. ‘Invest in reliable data instead; in the AI era, clean data is a company's most important asset. Clean data is the new gold.’ Veldsink explains why AI is only as good as the data it is based on: ‘Without the correct underlying data to address a specific question, you cannot reach a correct solution.’ It is also essential to examine beforehand whether historical data contains bias. ‘A bank might have been cautious about lending in the Amsterdam-Zuidoost region years ago. If the organization does not take account of recent upgrading plans in the area, that could lead to wrong decisions.’ Veldsink also emphasizes the importance of standardizing data. ‘Only when data is standardized can an AI tool be applied.’

A different reward system
Companies face the challenge of ‘shaking up’ their organization now that AI offers opportunities in so many areas. They need to rethink how cognitive tasks can be reorganized with language models like ChatGPT. ‘Workflows everywhere will change,’ says Veldsink. He suggests companies focus on two key things in this transformation. One: ensure good checks and balances around the processes where AI is used. What exactly is AI used for, and how? What data is used? And most importantly, who validates the results? Two: revise the reward system. ‘In most organizations, this system is tied to responsibility. The more people a manager has under him or her, the higher the reward. Thanks to AI, this will no longer apply. Many processes will need fewer people. The reward system of the future will therefore need to be based to a far greater extent on the added value a person provides.’
It is also clear that employees will need different competencies as AI takes over many tasks. AI already has a significant impact on software development. Thanks to tools like GitHub Copilot, it is no longer necessary to be able to code; AI takes over the simple programming work. ‘That does not mean that software developers will become redundant, but it does mean that their role will change.’ In the new AI era, there will be a need for many new skills, according to Veldsink. ‘An employee will need to be able to think conceptually and systematically but also be able to formulate prompts—commands for ChatGPT. It will also be crucially important for employees to be able to validate ChatGPT's outputs. Do the results make sense?’

AI transformation will be a tough journey
Board members and supervisors, too, will need to further develop their skills, says Victor Prozesky, managing partner of The Board Practice. ‘Boards of directors and supervisory boards do not necessarily need deep technical knowledge, but conceptual and critical thinking is more crucial than ever before. Moreover, board members and supervisors need to think beyond their own domain.’ As AI affects the entire organization, knowledge of the topic is indispensable. ‘There is still a considerable way to go,’ notes Prozesky: according to research by The Board Practice, ‘across all industries, board members in the top 40 AEX-listed companies lack adequate knowledge of AI.’
A frequently asked question is whether board members and supervisors will have an AI colleague at the table in the future. In 2016, a venture capital investment firm in Hong Kong appointed a robot as a board member. Although this was partly a marketing stunt, Prozesky believes that AI will become commonplace in the boardroom within a few years. ‘It will assist with fact-checking and scenario planning.’
Prozesky predicts that the transformation to an AI-driven organization will be a tough journey. ‘Similar to the digital transformation, the technical part will be the least challenging. The cultural shift, the need for employees to adapt as we move through the transition, will be complex. How do you bring everyone along on the journey? Like with the digital transformation, companies will need to invest most of their energy in change management.’

Towards a fluid organization
Veldsink also acknowledges that the AI transformation will not be easy and advises companies to formulate a clear AI strategy without delay. ‘Preferably, companies should opt for a more “fluid organization,” a structure able to adapt easily to continuous change.’ His tip is to create sandboxes. ‘Let employees play with AI in a safe environment. People get energized by it.’ Recent research by Microsoft and LinkedIn shows that as many as 78% of AI users bring their own AI tools from home to work, which carries significant risks. ‘It's better to facilitate the use of AI and set clear rules about what is and isn't allowed.’
Additionally, Veldsink advises companies to choose a different type of leadership. ‘AI is now often managed top-down. It is better to give employees more space. Leaders must ensure that all employees are taken along on this journey. All colleagues should understand what AI is, what it can mean, and how they can do their work differently, better, and faster. Encourage people to challenge each other.’ According to Veldsink, this requires managers who are not only caretakers but active leaders.

Ensure an AI register
Finally, Veldsink warns directors and supervisors to be ready for the European AI Act. ‘Make certain you are not caught off guard. When the legislation comes into effect, expected in 2026, companies will have six months to comply with the first requirements. It will be important to be able to demonstrate with a register that you do not use prohibited AI, such as biometric identification based on sensitive characteristics like political or religious beliefs, sexual orientation, or race.’
What impact does AI have on people and society? ‘If citizens' personal freedom is compromised as a result of a company’s AI, that company will fall into a higher risk category, and stricter rules will apply.’

A major task for leaders
In conclusion, the message to the supervisors during this masterclass is not an easy one. Shaking up the organization and making it AI-proof is a tall order. ‘Which processes should we start with, and which can wait?’ asks one attendee. Another is concerned about the need for new skills. ‘Some competencies, like systematic thinking, are not easily taught. At the same time, recruiting talent remains challenging.’ And then there is the impact that AI will have on productivity. ‘What do you do with people who will no longer contribute to the organization?’ is another question. Although the promises of AI are great, the AI transformation currently raises challenging questions that directors and supervisors will need to grapple with in the coming period.

This article was published in Management Scope 07 2024.