Masterclass AI: ‘A Huge Impact on the Corporate World’
Artificial intelligence (AI) is expanding in the corporate world. Currently, online customer services often use chatbots to answer simple, repetitive customer queries; when the questions become more complex, the caller is redirected to a human employee. AI enables the chatbot to handle more complicated questions as well. A second example: by applying AI in marketing, companies can offer customers an even better and more personalized approach. Since ChatGPT entered the market, new opportunities have emerged, and companies are exploring how to leverage them. But what are the pros and cons of using AI? Should employees be allowed to use AI in performing their tasks? Where have things already gone wrong? And which rules will companies soon need to take into account once the European AI Act takes effect? These questions were discussed during an AI masterclass for company secretaries, organized by A&O Shearman and Management Scope.
AI can be of use in the daily work of company secretaries. A company secretary, for instance, can save time if the drafting of minutes and summaries of board meetings is automated. One possible solution is ‘Boards’, an AI tool recently introduced by A&O Shearman. A demonstration by Stephen Beattie, head of corporate client solutions at A&O Shearman, caught the audience’s attention, as minute-taking is still largely done manually. ‘But AI tools can also be of value at board level,’ Beattie said. ‘AI can even act as an additional executive committee member or supervisory director, as the robot can generate insights that assist in decision-making.’
A comprehensive regulatory framework
While AI is not new, the introduction of ChatGPT in November 2022 has given many companies food for thought. ChatGPT demonstrated that large amounts of information can be made accessible in a user-friendly manner. A&O Shearman developed its own AI tool, ContractMatrix, which is linked to Harvey, the legal variant of ChatGPT. ContractMatrix streamlines the drafting and reviewing of contracts, saving a significant amount of time. Harvey can also assist in evaluating or rewriting contracts.
Regardless of which AI applications companies use, they will soon need to comply with regulations stemming from the European AI Act. In December 2023, Europe reached a political agreement on the text of the AI Act. Remarkably, an earlier draft did not regulate AI applications like ChatGPT. With the rise of ChatGPT, European policymakers revisited the text, and the final version of the AI Act includes a separate chapter dedicated to general-purpose AI models. The AI Act will be the world's first comprehensive regulatory framework applicable to AI.
The AI Act can have a substantial impact on businesses, says Nicole Wolters Ruckert, data, privacy, and AI expert at A&O Shearman. ‘The law takes a risk-based approach and categorizes AI systems by risk level, ranging from unacceptable to high, limited, and minimal. The higher the risks associated with an AI system, the more restrictive the requirements. For instance, the law prohibits the use of an AI system that manipulates an individual’s free decision-making to such an extent that the individual would not have made that decision without the manipulation. The rules apply to parties that market AI systems in the European Union (EU), to EU parties using AI systems, and to non-EU parties whose AI systems produce output that has effect within the EU.’
More transparency
Notably, the AI Act does not use the term generative AI but opted for general-purpose AI. The AI Act distinguishes between a general-purpose AI model and a general-purpose AI system. To clarify: with ChatGPT, GPT is the large language model and thus the general-purpose AI model, while Chat – the user-facing interface – is the general-purpose AI system.
‘Most of the rules in the AI Act relate to the model, rather than the system,’ says Wolters Ruckert. The AI Act defines two risk categories for general-purpose AI. First, there are high-risk models, which have enormous computing power and use vast amounts of data, posing a danger of systemic risks. The second category covers models with limited risk: everything outside the first category. For these, suppliers must respect the intellectual property (IP) rights of authors, artists, and publishers. Additionally, suppliers must disclose what data they have used to train the model. ‘That is significant,’ Wolters Ruckert notes. ‘Not so long ago, no one knew what information ChatGPT was trained on, but that is about to change.’
In addition to the specific chapter on general-purpose AI, there is also a separate chapter outlining the rules applicable to AI systems with limited risk. Suppliers of AI systems that generate sound, images, video, or text will need to make clear that the output was generated using AI.
With the help of ChatGPT
It is not only the big tech companies that will need to comply with the requirements of the AI Act. ‘Organizations that have built their own ChatGPT-like tool can also be classified as providers under the AI Act and will need to comply with the rules.’
Another far-reaching consequence: companies that incorporate AI into their products will soon need to be transparent about this to their customers. Within A&O Shearman, this is already prompting discussion, Wolters Ruckert shares. ‘Colleagues who write advice today do not mention that it was partly generated with the help of our AI tool. In the future, such advice may need a disclaimer.’
Wolters Ruckert has three recommendations for companies subject to the AI Act:
- Ensure control
Many companies currently have little oversight of which AI tools are being acquired by HR, customer service, and marketing departments. Employees may be excited about the potential of AI while losing sight of the risks. A minimal sketch of what such an AI tool register could look like follows after these recommendations.
- Define clear rules
Define clear rules about the use of AI. Which tools are allowed within the company, or are being considered for use? Which employees are permitted to use them, and how can they use them responsibly? Formulate rules, but do not make them too complex: it is better to have ten concise rules than a huge document that no one reads.
- Prepare for the EU AI Act
Each company must determine whether, in terms of the AI Act, it will only be a user or whether it might also become a developer/provider. Many companies seem to have plans to develop their own ChatGPT-like AI tool – with proprietary content isolated from the internet. It is crucial to analyze what your role will be under the AI Act and what requirements apply.
Second, consider the contracts with AI tool suppliers. The AI Act will require suppliers to provide far more information about their products to users than they currently do. But even now, you will want to know whether a supplier uses AI, what kind of AI, and what data is used to train the AI model.
The forthcoming AI Act should also be considered in mergers, acquisitions, and partnerships. The questions then become how companies plan to handle the AI Act, what role they will play, what AI systems they use, what data they use, where that data originates and how it is used, how they ensure (cyber)security, and what their internal policies look like.
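To make the first recommendation – ensure control – concrete, below is a minimal sketch, in Python, of what an internal AI tool register could look like. All names here (RiskLevel, AITool, the example tools and suppliers) are illustrative assumptions, not references to any existing product or to the text of the AI Act itself; only the four risk levels follow the categorization Wolters Ruckert describes above.

```python
from dataclasses import dataclass
from enum import Enum

# The four risk levels of the AI Act, as described above.
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical record for one AI tool in use within the organization.
@dataclass
class AITool:
    name: str
    department: str                # e.g. HR, customer service, marketing
    supplier: str
    risk_level: RiskLevel
    role: str                      # "user" or "provider" in terms of the AI Act
    training_data_disclosed: bool  # does the supplier disclose its training data?

# A simple register provides the oversight the first recommendation asks for.
register = [
    AITool("CV screening assistant", "HR", "Vendor X", RiskLevel.HIGH, "user", False),
    AITool("FAQ chatbot", "customer service", "Vendor Y", RiskLevel.LIMITED, "user", True),
]

# Flag the tools that need attention first under the AI Act.
for tool in register:
    if tool.risk_level in (RiskLevel.UNACCEPTABLE, RiskLevel.HIGH):
        print(f"Review required: {tool.name} ({tool.department}, supplier: {tool.supplier})")
```

Even a register this simple answers the questions raised above: which tools are in use, in which departments, at what risk level, in what role, and whether the supplier discloses its training data.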
Hefty fines
The timeframe within which companies must comply with the rules depends on the category of the AI system. Companies offering or using prohibited AI systems must comply within six months and therefore cease using them. For the other risk categories, there is generally a period of one to two years before the rules apply. It is important to note that the rules are likely to take effect in late July or early August.
Wolters Ruckert expects that monitoring compliance with the AI Act will be complicated. The European Commission and, specifically, the AI Office will be responsible for overseeing general-purpose AI models. Supervision of high-risk AI systems, however, will be managed nationally. ‘There does not appear to be a single AI regulator: supervision will require collaboration between various regulators. In the Netherlands alone, 36 regulators and inspectorates have been identified to deal with AI Act supervision.’ Despite this complexity, Wolters Ruckert advises every organization to start preparing for the upcoming EU AI Act. Not least because the fines for non-compliance are substantial: up to a maximum of 35 million euros per violation or 7 percent of global turnover, whichever is higher.
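As a back-of-the-envelope illustration of that ceiling, here is a minimal sketch, assuming (as the AI Act provides for its highest fine category) that the higher of the two amounts applies; the turnover figure is purely hypothetical.

```python
# Maximum fine for the most serious violations under the AI Act:
# EUR 35 million or 7 percent of global annual turnover, whichever is higher.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a hypothetical company with EUR 10 billion in annual turnover,
# the ceiling is EUR 700 million, far above the fixed EUR 35 million.
print(f"{max_fine_eur(10_000_000_000):,.0f}")  # prints: 700,000,000
```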
A mature AI policy
During this well-attended masterclass, it became clear that there is plenty of work to be done. The difficult question is where this responsibility is best placed within the organization. According to Wolters Ruckert, many companies assign it to the Data Privacy Officer or Data Protection Officer. ‘But that is too simplistic,’ she argues. ‘AI requires a completely different set of expertise. It is more sensible to identify where in the organization the most comprehensive expertise on AI lies and to assign the responsibility to one of those experts.’ Given the complexity of AI, it would be wise for companies to address this issue first. It would be an important first step towards a mature AI policy.
This article was published in Management Scope 06 2024.