Navigating the EU AI Act: Implications for Cybersecurity

This piece is by Si West, Director of Customer Engagement at Resilience. It looks at how AI is being regulated by the latest EU legislation.

The EU AI Act came into effect on 1 August 2024, with the goal of establishing trust in artificial intelligence. The Act is the first extensive AI regulation by any major authority, and the first attempt at defining AI in legislation, meaning it has the potential to become a global benchmark for balancing AI safety and transparency with innovation. This will have significant consequences for how companies across the world manage cybersecurity, allocate resources and balance the role of AI with that of humans, and it will prompt businesses to focus even more on their cyber resilience.

AI as a double-edged sword for cybersecurity

AI and machine learning have the capacity to transform cybersecurity, both positively and negatively. A National Cyber Security Centre report from April 2024 found that AI lowers the barrier for new cyber criminals to engage in unlawful activities. Threat actors can use AI to increase the efficiency and effectiveness of malicious operations, including reconnaissance, phishing and coding. AI can also assist with malware development, helping malware evade detection by current security filters.

On the other hand, AI also plays a crucial role in combatting these threats. Security controls, such as anomaly detection, fraud detection, and behavioural analysis, all utilise AI to identify such activities and assess an organisation’s risk exposure. This helps companies monitor, analyse and respond to cyber threats in real time. By processing vast amounts of data, businesses can more proactively manage cyber risks.
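To make the anomaly-detection idea concrete, here is a deliberately minimal sketch: it flags data points that deviate sharply from a learned statistical baseline. Production controls use far richer machine-learning models and many more signals; the login figures below are hypothetical.

```python
# Minimal statistical anomaly detection on hourly login volumes.
# Real AI-based controls use far richer models; this shows the core idea:
# flag observations that deviate sharply from the learned baseline.
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Return indices whose z-score exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical hourly login counts; the spike at index 5 might indicate
# an automated attack such as credential stuffing.
logins = [102, 98, 110, 95, 104, 950, 101, 99]
print(find_anomalies(logins))  # -> [5]
```

In practice the baseline would be learned per user or per system, and the alert would feed a human analyst rather than trigger an automatic response, which is exactly the human-in-the-loop point the Act reinforces.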

Importance of human intervention in AI tools

To address issues of transparency and accountability, the Act requires users to be informed when interacting with AI systems, monitor AI decision-making processes, and intervene when necessary.

A human-in-the-loop model is essential as AI plays an increasing role in security measures. Security controls, such as cyber risk modelling and simulations, are already dependent on AI, but human involvement is crucial to actively manage cyber threats. Such controls need continuous monitoring and updating to keep pace with evolving threats, and humans are key to maintaining that feedback loop.

The Act will therefore empower individuals, encouraging employee training to improve their understanding and management of AI systems, and give companies the capability to intervene quickly to prevent harm.

A risk-based approach

The EU AI Act classifies AI applications based on their risk levels, from minimal to unacceptable, with higher-risk AI systems subject to stricter requirements. This helps minimise harmful consequences from AI, particularly in sectors like healthcare, where mistakes can have severe consequences. High-risk AI systems also include those used in financial services, critical infrastructure and employment.
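The tiered structure can be summarised roughly as a mapping from risk level to headline obligation. The sketch below is illustrative only, not legal guidance; the examples in each tier are common readings of the Act, and the exact obligations depend on the system in question.

```python
# Rough, illustrative summary of the EU AI Act's risk tiers and their
# headline obligations. Not legal guidance -- consult the Act itself.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring by public authorities)",
    "high":         "conformity assessment, risk management, human oversight",
    "limited":      "transparency duties (e.g. disclosing AI interaction)",
    "minimal":      "no new obligations (e.g. spam filters, AI in games)",
}

def obligations(tier):
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS.get(tier.lower(), "unknown tier")

print(obligations("high"))
```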

However, qualitative, tiered categories alone offer limited nuance about the impact on businesses. Translating risk into quantitative, financial terms gives C-suite leaders more tangible actions for managing risk, and a clearer understanding of how their businesses are affected.

Quantifying cyber risk is essential for companies to adopt a comprehensive incident response strategy. The Resilience Solution, for example, uses integrated Breach and Attack simulations and modelling to translate cyber risk into business value, enabling financial leaders to make better investment decisions in security controls and insurance coverage. By doing so, organisations can manage risk more effectively and build cyber resilience.
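The Resilience Solution's models are proprietary, but a simple textbook way to see what "translating cyber risk into business value" means is the annualised loss expectancy (ALE): expected frequency multiplied by expected cost per incident. All figures below are hypothetical.

```python
# Textbook annualised loss expectancy (ALE) -- a simple way to express
# cyber risk in financial terms. Real risk-quantification platforms use
# far more sophisticated simulation; the numbers here are hypothetical.

def ale(annual_rate_of_occurrence, single_loss_expectancy):
    """Expected annual loss = incident frequency x cost per incident."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Ransomware assumed to hit once every 4 years, costing 2M per incident.
baseline = ale(0.25, 2_000_000)        # 500,000 expected loss per year
# A 150k/year control assumed to halve both frequency and impact.
with_control = ale(0.125, 1_000_000)   # 125,000 expected loss per year
net_benefit = baseline - with_control - 150_000
print(f"Net annual benefit of the control: {net_benefit:,.0f}")  # 225,000
```

Framing a security investment this way gives financial leaders a direct comparison between the cost of a control and the expected loss it avoids, which is the decision the article says risk quantification should support.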

Challenges companies face in remaining compliant

Companies need to ensure that their AI systems comply with the regulatory standards set by the EU AI Act, including transparency, safety, fairness, and accountability. This will likely incur additional costs for companies, including investments in technology, documentation capabilities, and potentially higher insurance premiums.

SMEs with limited finances are particularly vulnerable as their business focus is usually on growth rather than establishing robust cyber resilience. They may perceive cyber risk management as an additional burden and struggle with allocating resources effectively.

Additionally, keeping up with evolving AI technology makes investments costly in terms of both time and resources. Tailored solutions like the Resilience Solution can offer a practical approach to cyber risk quantification and help determine which investments to make. Furthermore, companies that work closely with insurers can better develop an understanding of AI risks and best practices for mitigating them in line with the EU AI Act.

The Act must strike a balance between regulation and innovation, maintaining a supportive environment for businesses while delivering crucial ethical and safety measures. This will sustain the EU’s global competitiveness around AI, while developing its role in cyber resilience. With the Act likely to set a global framework for AI governance, businesses across the world must enhance their risk management to meet the new regulatory standards and embrace cyber resilience.
