
How the EU AI Act will impact on business

Businesses need to start preparing now: once the AI Act comes into force, it will apply to all systems in operation

'AI is going to affect all businesses and individuals, and we need to put in place the right measures so we can control that impact and ensure it is positive,' says David O'Sullivan of Mazars.

The EU AI Act will have profound implications for Irish businesses of all sizes across every sector. “Every business is likely to be using AI in some form whether they realise it or not,” explains David O’Sullivan, a senior manager with the Mazars privacy and data protection team.

“The Act will apply to every organisation that uses AI systems. It doesn’t matter where they are located or where the AI system has been developed, if it is put into operation in the EU, on the market in the EU, or its outcomes impact EU citizens, it is within the scope of the Act.”

David O’Sullivan, a senior manager with the Mazars privacy and data protection team

He also points out that, unlike GDPR, which applied to actions taken by organisations after a certain date, as was the case with data protection impact assessments (DPIAs), the AI Act will effectively be retroactive: it will apply to all systems in operation, regardless of when they were put into use.

“Businesses need to start preparing now,” O’Sullivan adds. “The Spanish presidency has put the Act at the top of the agenda and wants to have a final text available in December. We will likely see the Act come into force in early 2026. That’s less than three years away.”


The Act sets out six principles for providers of AI systems to follow. They begin with human agency and oversight: systems should assist humans in decision-making, and humans should be able to override decisions made by the system (a pattern sketched in code below).

In addition, systems should be designed to work well, be safe to use and behave predictably. They should be built with data and privacy protection in mind, with proper governance of the datasets used to train them. Systems should be transparent, with providers making clear information available on a system’s capabilities and limitations, as well as on the data sources used to train it. They should be designed to avoid discrimination and bias and to promote diversity. Finally, they should contribute to sustainable and inclusive growth, social progress and environmental wellbeing, and providers should consider the potential impact of AI systems on society and the environment.
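By way of illustration, the first of those principles, human override, is straightforward to picture in code. What follows is a minimal sketch assuming a hypothetical in-house decision service; the Decision and HumanReviewQueue names and the confidence threshold are illustrative choices, not anything the Act prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    outcome: str        # what the model recommended
    confidence: float   # the model's own confidence estimate
    overridden: bool = False

@dataclass
class HumanReviewQueue:
    """Low-confidence decisions are held for a person, and a reviewer
    can always replace the automated outcome."""
    threshold: float = 0.9
    pending: list = field(default_factory=list)

    def submit(self, decision: Decision) -> Decision:
        if decision.confidence < self.threshold:
            self.pending.append(decision)  # hold for human sign-off
        return decision

    def override(self, decision: Decision, new_outcome: str) -> Decision:
        decision.outcome = new_outcome     # the human decision wins
        decision.overridden = True
        return decision

queue = HumanReviewQueue()
loan = queue.submit(Decision("applicant-42", "decline", confidence=0.62))
queue.override(loan, "refer to a human underwriter")
```

The key property is that the automated outcome is never final: every decision carries enough state for a person to inspect it and replace it.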


On that last point, O’Sullivan notes the energy usage associated with AI systems. “A few studies have been done on large language models, and the amount of electricity they are using is monumental. The amount of pollution they create is huge.”

Overlaying those principles is a risk-based approach. The Act categorises AI systems into four risk levels: minimal or no risk, limited risk, high risk, and unacceptable risk. Systems in the minimal or no risk category will only have to make ‘best efforts’ to conform with the principles and, as such, will not be required to comply with the Act, but the organisations using them will still have to establish their risk level first.
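Establishing that risk level is the first practical step, and the core of such a triage can be very small. The sketch below is illustrative only, not a legal classification: the four tier names come from the Act as described above, but the use-case mapping and the classify helper are assumptions for the purpose of the example, and a real assessment would follow the Act’s own text:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited, narrow exceptions
    HIGH = "high"                  # allowed, with strict obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # 'best efforts' against the principles

# Illustrative mapping built from the use cases mentioned in this
# article; a real assessment would follow the Act itself.
USE_CASE_RISK = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "credit_scoring": RiskLevel.HIGH,
    "recruitment_screening": RiskLevel.HIGH,
    "grade_prediction": RiskLevel.HIGH,
    "customer_chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    # Unknown uses default to HIGH so they get reviewed, not ignored.
    return USE_CASE_RISK.get(use_case, RiskLevel.HIGH)

print(classify("credit_scoring"))  # RiskLevel.HIGH
```

Defaulting unknown uses to the high-risk tier is a deliberately conservative choice: it forces a review rather than quietly assuming a system is out of scope.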

For limited risk systems, which include generative AI and chatbots, the focus is on transparency. “Humans interacting with the systems need to know they are dealing with AI,” says O’Sullivan.
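That transparency duty can be as simple as a disclosure shown before any model output. A minimal sketch, assuming a hypothetical chat service; the AI_DISCLOSURE wording and the start_chat_session function are illustrative, not text from the Act:

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant, not a person. "
    "You can ask for a human agent at any time."
)

def start_chat_session(user_id: str) -> list:
    # The disclosure is the first entry in every transcript, so the
    # user knows they are dealing with AI before any model output.
    return [{"role": "system-notice", "user": user_id, "text": AI_DISCLOSURE}]

transcript = start_chat_session("user-7")
print(transcript[0]["text"])
```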

High-risk AI is defined by its use cases, which include medical devices, energy supply networks, employment recruitment systems, credit scoring applications, and educational applications such as grade prediction technology.

“There will be a number of requirements for organisations using high risk AI,” O’Sullivan explains. “These include having human oversight of the systems, a robust risk management system, a high degree of transparency for users, and keeping to rigorous record keeping standards.”
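The record-keeping requirement, in particular, translates naturally into an append-only decision log. The following is a minimal sketch of one possible shape, assuming a hypothetical credit-scoring system; the field names and the JSONL file format are illustrative choices rather than anything mandated by the Act:

```python
import json
import time
from pathlib import Path
from typing import Optional

LOG_PATH = Path("ai_decision_log.jsonl")  # illustrative location

def log_decision(system: str, inputs: dict, output: str,
                 model_version: str, reviewer: Optional[str]) -> None:
    """Append a timestamped record of an automated decision to an
    append-only log: one possible shape for rigorous record keeping."""
    record = {
        "ts": time.time(),
        "system": system,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # evidence of human oversight
    }
    with LOG_PATH.open("a") as fh:
        fh.write(json.dumps(record) + "\n")

log_decision(
    system="credit-scoring-v2",
    inputs={"income": 48000, "history_months": 36},
    output="approve",
    model_version="2024.03",
    reviewer="analyst-19",
)
```

Recording the model version alongside each decision matters: it is what lets an auditor reconstruct which system, in which state, produced a given outcome.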


Unacceptable risk means what it says: organisations simply aren’t allowed to use systems in this category, with certain exceptions such as for state security. “This category can include the use of chatbots to manipulate or coerce people into dangerous behaviour, or systems for social scoring,” O’Sullivan says. “Any AI system could fall into it. If it hasn’t been designed properly, the system could learn, change its parameters, and become a biased or coercive system. There are massive fines for getting it wrong – up to 7 per cent of turnover or €40 million.”

While compliance may appear burdensome, it will also have benefits. “It will enable the innovative use of AI in a safe space,” O’Sullivan points out. “It will also help to build trust among consumers who are wary of organisations that use AI.”

He advises organisations to take steps immediately to prepare for the Act. “Organisations need to understand their AI strategy, identify the areas they are going to use it in, and how and where they are going to procure it,” he says.

“After that, they need to assess which risk category these uses fall into and create a compliance framework that includes governance, risk management, data governance, and quality management. AI is going to affect all businesses and individuals, and we need to put in place the right measures so we can control that impact and ensure it is positive.”
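One concrete way to start on that advice is a simple register of the AI systems an organisation uses, tied to the risk tiers above. This is a minimal sketch assuming a hypothetical in-house inventory; every field name here is an illustrative choice rather than anything the Act prescribes:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    supplier: str     # built in-house or procured
    risk_level: str   # one of the four tiers above
    owner: str        # person accountable for governance
    dpia_done: bool   # ties the AI review to existing GDPR work

inventory = [
    AISystemRecord("cv-screener", "recruitment_screening",
                   "vendor-x", "high", "head-of-hr", dpia_done=True),
    AISystemRecord("faq-bot", "customer_chatbot",
                   "in-house", "limited", "cx-lead", dpia_done=False),
]

# Surface the systems that need the heaviest compliance work first.
for record in sorted(inventory, key=lambda r: r.risk_level != "high"):
    print(record.name, record.risk_level)
```

Even a register this small answers the questions O’Sullivan raises: where AI is used, where it came from, which risk category it falls into, and who is accountable for it.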