Masterclass Artificial Intelligence: ‘ChatGPT Is Not That Clever’
The hunger for knowledge on the still magical topic of artificial intelligence is great, as evidenced early Tuesday morning during a breakfast session at Van der Valk Hotel Amsterdam Zuidas. A number of Company Secretaries hung on every word of Jan Veldsink, lecturer in artificial intelligence and digital security at Nyenrode Business University and Strategic Advisor on AI at Rabobank. Veldsink has been involved with AI since the nineties, and he has an infectious way of talking about it. Victor Prozesky, Managing Partner of The Board Practice, also sees benefits. The international consulting firm has valuable best practices in the areas of board effectiveness and evaluation, CEO succession and renewal of top echelons. ‘It would be super powerful if we could convert that data – anonymized, of course – into an AI tool which we can use to generate new knowledge,’ says Prozesky.
Contrary to popular belief, AI is not new. Artificial intelligence made its appearance in 1950, but back then the technology was reserved for academia. ‘What is special about recent developments is that AI is now accessible to everyone. There are no more barriers,’ Veldsink says. ‘With well-formulated commands – or in AI parlance: prompts – a program like ChatGPT can make summaries, write letters, scan chunks of content or produce an easy-to-read text on a particular topic.’
Thanks to AI, software development is also becoming more efficient. With tools like GitHub Copilot, you no longer need to be able to code to write software. Moreover, the computer is many times faster. Veldsink put it to the test himself. He once spent two years developing a useful functionality for the Rabobank app. When he recently submitted his idea and the model used to GitHub Copilot, usable software code rolled out within half an hour. His conclusion: ‘The computer is excellent at coding and makes programmers largely redundant. Still, humans will continue to be needed as software engineers to provide good ideas and the necessary architecture.’
Data juggler ChatGPT
AI applications offer endless opportunities for various industries and professions. In the banking world, AI has been used for some time now to combat fraud and money laundering. The technology recognizes anomalous patterns in payment transactions. But AI can also be used to detect risks in the supply chain or write commercial texts for marketing purposes.
At the same time, humans must be mindful of AI’s shortcomings. ChatGPT, for example, is ‘very dumb,’ according to Veldsink. ‘The chatbot is a language program that uses language as a basis to create models and draw connections through artificial intelligence. It ‘juggles’ statistical connections in language and creates something from them. But the result of this process need not be true,’ warns Veldsink. ‘A human being will therefore always be needed to validate the outcome. In doing so, it is good to know that ChatGPT always wants to please the user. Ask the program what 3 plus 7 is and it will answer 10 the first time. So it will the second and third time. But if you keep insisting that 3 plus 7 is really 14, at some point the program will change its mind and agree. The underlying reason is that ChatGPT is made by a commercial party, on the premise that the product will be valued most if its answers fit the user’s profile.’ This can be dangerous: ChatGPT can be led to believe, for example, that a historical event like the Holocaust never happened.
Another point of interest: the information ChatGPT reproduces is biased. The model draws on data predominantly produced in the Western world, and thus carries its preconceptions. ‘That is not necessarily a bad thing, but we need to be aware of it.’ In the coming period, models and applications will develop swiftly, and their quality will improve by leaps and bounds.
Proprietary AI tool
Users also need to understand how ChatGPT works, according to Veldsink. The language robot gets its data from a large number of ‘buckets’ filled with information on a particular topic, input collected by thousands of poorly paid workers in, say, Africa. The computer understands the context of a task, as long as the information comes from one bucket. ‘If you ask ChatGPT to write a TV script about the 1953 flood disaster following the format of the Dutch television series ‘De Wereld Draait Door’, it will not succeed. They are two separate buckets, so it cannot currently make the connection. You can help it get started, however, by first requesting information about the 1953 flood disaster and then about ‘De Wereld Draait Door’. That way you train the chatbot and it will be able to make the link.’
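The priming technique Veldsink describes can be pictured as building up a conversation turn by turn: first ask about each topic separately, then pose the combined request. A minimal sketch, using the widely used chat-message convention of role/content pairs; the function name and the omission of the actual model call are assumptions for illustration, not a real API.

```python
# Sketch of "priming" a chatbot before a combined task: introduce each
# topic ("bucket") in its own turn, so the final request can draw on both.
# Messages follow the common chat format of {"role": ..., "content": ...}
# dictionaries; sending them to a model is deliberately left out.

def build_primed_conversation(topic_a, topic_b, combined_task):
    """Return a chat history that introduces each topic before the task."""
    return [
        {"role": "user", "content": f"Give me a short overview of {topic_a}."},
        {"role": "user", "content": f"Now describe the format of {topic_b}."},
        {"role": "user", "content": combined_task},
    ]

conversation = build_primed_conversation(
    "the 1953 flood disaster in the Netherlands",
    "the Dutch TV programme 'De Wereld Draait Door'",
    "Write a TV script about the flood disaster in that programme's format.",
)
for message in conversation:
    print(message["content"])
```

In a real session each priming turn would be followed by the model’s reply before the next question is sent, which is what gives the chatbot the context to make the link.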
AI expert Veldsink expects that many organizations will build their own ChatGPT in the future. Think of the knowledge available to news agencies, law firms, consulting firms or universities. There are participants already seriously thinking about this.
None of the Company Secretaries present are using ChatGPT in their daily work. In preparation for this breakfast session, Victor Prozesky asked ChatGPT what tasks the chatbot could take over, and he received a long list of answers. For example, ChatGPT can distil key insights from a report, list trends or notice anomalous patterns in documents. It can also summarize a 1,000-page contract into 30 pages. ‘But how do you know it will include the most important things?’ one participant commented. According to Veldsink, you do not have that certainty. ‘As a human being, you will always have to keep checking it yourself. But with a good summary, you can make additions more easily. That is a different task than having to read through the whole document and summarize it yourself.’
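A 1,000-page contract will not fit into a single prompt, so long-document summarization in practice happens in stages: split the text into chunks, summarize each chunk, then summarize the summaries. A minimal sketch of the splitting step only; the chunk sizes are illustrative assumptions, and the model call that would summarize each chunk is deliberately left out.

```python
# Stage one of long-document summarization: split the text into
# overlapping chunks, each small enough for one model prompt. The
# overlap ensures a sentence cut at a chunk boundary still appears
# whole in the next chunk. Summarizing the chunks (and then the
# summaries) would be done with a model call, omitted here.

def chunk_text(text, chunk_size=2000, overlap=200):
    """Split text into overlapping chunks of at most chunk_size characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

contract = "Clause. " * 5000  # stand-in for a very long document
chunks = chunk_text(contract)
print(len(chunks))  # a few dozen chunks, each fitting in one prompt
```

The participant’s objection still applies at every stage: a human has to check that nothing important was dropped between the chunk summaries and the final one.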
At the same time, it is important to be careful, warns Veldsink. Cybercriminals are also taking advantage of ChatGPT technology and will become more creative in slipping into corporate networks. It will become even more difficult for employees to distinguish phishing emails from real ones, especially since hackers can use AI to manipulate even voices and videos. Employees should also never share company-sensitive information with big tech companies’ AI programs, because it simply is not secure. That seems logical, yet there are caveats when it comes to sharing data securely. For example, Veldsink notes the increasing prevalence of AI in Microsoft Office applications. In the Teams meeting tool, it is now possible to transcribe a conversation almost live, making a notetaker unnecessary. Teams can also use input from a meeting to create an action list. ‘That is uncanny,’ says Veldsink, ‘because how secure is the information if it is transcribed or turned into an action list online?’ ‘Companies may until now have believed that the content of a conversation via Teams always stayed on the local company network, or in the ‘proprietary’ cloud. But put the question to the suppliers, the developers of the software, and guess what? Somewhere in the fine print of the contract there are exceptions; even the suppliers seem caught off guard by the sudden possibilities. And where does the processing of the data take place? Is it forwarded to America to be processed there? That gives food for thought.’
In short, the rapid advance of these forms of AI leads to numerous uncertainties and risks. Many organizations are currently experimenting with this new availability of AI, but some companies are already imposing temporary halts on AI use and allowing experiments to take place only in safe environments.
Urgent boardroom advice
The big question is what AI will bring us in the future. The applications are unprecedented, and AI can lead to solutions to major social issues. At the same time, AI leads to just as many dilemmas. ‘We are at the beginning of a great search for responsible handling of AI,’ says Veldsink. His urgent advice to Company Secretaries, confirmed by Prozesky: ‘Make sure the boardroom’s knowledge of AI is in order. Although the topic is IT-related, it is broader than that. Executives and Supervisory Directors should be aware of the opportunities and risks of AI for their organization and society.’
This article was published in Management Scope 07 2023.