Compliance
New EU Legislation For Artificial Intelligence – What It Means For Wealth Management

The arrival of new EU rules governing AI looks imminent. This article considers their main features and reflects on what this could mean for the wealth management sector.
As the European Union moves towards adopting regulations on artificial intelligence, one of the world's most important blocs of developed countries will soon have a rulebook in place. Other jurisdictions, such as the US, are moving in a similar direction. We carry the following commentary and analysis. The authors of this article are Sean Donald John Musch, chief executive and founder of AI & Partners, Michael Borrelli, chief operating officer and co-CEO at that firm, and Charles Kerrigan, partner at CMS UK. (We carried analysis of this legislation from CMS and AI & Partners in August, here, here and here.)
The editors of this news service are pleased to share these views; the usual editorial caveats apply to comments from outside contributors. Email tom.burroughes@wealthbriefing.com if you wish to respond.
The European Union's (EU) recently negotiated Artificial Intelligence Act stands as a significant milestone, positioning the EU as a global leader in regulating AI technologies (1). The comprehensive rules, agreed by Members of the European Parliament and the Council, aim to ensure the responsible development of AI while upholding fundamental rights, democracy, and environmental sustainability.
Key provisions and safeguards
The Act includes a set of prohibitions on specific applications
of AI that pose potential threats to citizens' rights and
democratic values. Notably, it prohibits biometric categorisation
systems based on sensitive characteristics, untargeted scraping
of facial images for recognition databases, emotion recognition
in workplaces and educational institutions, social scoring based
on personal characteristics, and AI systems manipulating human
behaviour.
Law enforcement exceptions are outlined, allowing the use of biometric identification systems in publicly accessible spaces under strict conditions, subject to judicial authorisation and limited to specific crime categories. The Act emphasises targeted searches for serious crimes, prevention of terrorist threats, and the identification of individuals suspected of specific crimes.
Obligations for high-risk AI systems
High-risk AI systems, identified for their potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, face clear obligations. These include mandatory fundamental rights impact assessments, applicable to sectors such as insurance and banking. AI systems used to influence elections and voter behaviour are also classified as high-risk. Citizens will have the right to launch complaints about AI systems and to receive explanations for decisions that affect their rights.
Guardrails for general AI systems
To accommodate the diverse capabilities of general-purpose AI (GPAI) systems, transparency requirements have been established. These involve technical documentation, compliance with EU copyright law, and detailed summaries of the content used for training. For high-impact GPAI models with systemic risk, additional obligations are introduced, including model evaluations, mitigation of systemic risks, adversarial testing, incident reporting, cybersecurity measures, and energy efficiency reporting.
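Purely as an illustration, the sketch below gathers the GPAI items listed above into a single record that a provider might keep for each model. The field names are informal labels of our own, not terminology from the Act, and a real compliance file would of course be much richer.

```python
from dataclasses import dataclass, field

@dataclass
class GPAIComplianceRecord:
    """Illustrative record of the transparency items listed above.
    Field names are our own informal labels, not terms from the Act."""
    model_name: str
    technical_documentation: str           # where the technical documentation lives
    eu_copyright_policy: str               # how compliance with EU copyright law is handled
    training_content_summary: str          # detailed summary of content used for training
    systemic_risk: bool = False            # is this a high-impact GPAI model with systemic risk?
    # Items below apply only to high-impact models with systemic risk
    model_evaluations: list[str] = field(default_factory=list)
    adversarial_testing_reports: list[str] = field(default_factory=list)
    incident_reports: list[str] = field(default_factory=list)
    cybersecurity_measures: list[str] = field(default_factory=list)
    energy_efficiency_report: str | None = None

# Example entry (entirely hypothetical names and paths)
record = GPAIComplianceRecord(
    model_name="example-gpai-v1",
    technical_documentation="docs/technical_documentation.pdf",
    eu_copyright_policy="docs/copyright_policy.md",
    training_content_summary="docs/training_content_summary.md",
)
```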
Support for innovation and SMEs
Recognising the importance of fostering innovation and preventing undue pressure on smaller businesses, the Act promotes regulatory sandboxes and real-world testing. National authorities can establish these mechanisms to facilitate the development and training of innovative AI solutions before they are placed on the market.
Wealth management and extra-territorial application
The Act's extra-territorial application is noteworthy, extending its regulatory reach to any business, irrespective of location, that deals with the EU. This underscores the EU's commitment to global standards for AI regulation. In its political position, the EU took a “pro-consumer” stance, prioritising consumer rights. That stance drew mixed reactions from tech firms and innovation advocates, who might have preferred a different balance between regulation and freedom.
When it comes to wealth management, firms that use, develop, and/or deploy AI must comply with the EU AI Act. This encompasses applications such as lead generation, client retention, compliance management, investment advice, market trend analysis, efficiency gains, and data centralisation, all of which must align with the Act's overarching goals of responsible and ethical AI development.
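By way of illustration only, a firm might begin by keeping a simple register of the AI use cases named above, with the classification under the Act left to be determined case by case. The sketch below is our own construction, not a template drawn from the legislation.

```python
# Illustrative only: a minimal register of the wealth management AI use cases
# named in the article. How each one is classified under the EU AI Act is a
# legal determination and is deliberately left as "to be assessed" here.
USE_CASES = [
    "Lead generation",
    "Client retention",
    "Compliance management",
    "Investment advice",
    "Market trend analysis",
    "Data centralisation",
]

register = {use_case: {"owner": None, "risk_tier": "to be assessed"} for use_case in USE_CASES}

for name, entry in register.items():
    print(f"{name}: {entry['risk_tier']}")
```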
Sanctions and entry into force
The Act introduces substantial fines for non-compliance, ranging from €7.5 million or 1.5 per cent of global turnover up to €35 million or 7 per cent of global turnover, depending on the severity of the infringement and the size of the company.
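For a rough sense of scale, here is a minimal sketch of how those headline caps translate into euro figures for a given turnover. It assumes the cap is the higher of the fixed amount and the turnover percentage, a convention borrowed from comparable EU legislation rather than a statement of the Act's final text, and it ignores any lighter treatment for SMEs.

```python
# Minimal sketch: translate the headline fine caps quoted above into euro
# figures for a given global annual turnover. Assumes the cap is the higher
# of the fixed amount and the turnover percentage (an assumption, not a
# statement of the Act's final text).

def max_fine(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Upper bound on the fine under the assumed 'whichever is higher' rule."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

# Example: a firm with €2 billion in global annual turnover.
turnover = 2_000_000_000
print(max_fine(turnover, 35_000_000, 0.07))   # €140 million: 7% exceeds €35 million
print(max_fine(turnover, 7_500_000, 0.015))   # €30 million: 1.5% exceeds €7.5 million
```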
What rapporteurs say
Co-rapporteur Brando Benifei highlighted the legislation's
significance, emphasising the Parliament's commitment to ensuring
rights and freedoms are central to AI development. Co-rapporteur
Dragos Tudorache underscored the EU's pioneering role in setting
robust regulations, protecting citizens, SMEs, and guiding AI
development in a human-centric direction.
During a joint press conference, the lead MEPs, Spain's Secretary of State for Digitalisation and AI, Carme Artigas, and Commissioner Thierry Breton stressed the importance of the Act in shaping the EU's digital future. They emphasised the need for correct implementation, ongoing scrutiny, and support for new business ideas through sandboxes.
Next steps
The agreed text awaits formal adoption by both Parliament
and Council to become EU law. Committees within Parliament will
vote on the agreement in an upcoming meeting.
Significance and global impact of the Act
This legislation stands as a monumental achievement, positioning the EU as a trailblazer in responsible AI governance. Without it, the absence of unified rules could have left AI deployment unchecked, putting citizens' rights at risk; with it, there is a framework for ethical AI development that sets a global standard. Geopolitically, the EU asserts influence by extending its regulations extra-territorially, contributing to international norms. Economically, the Act inspires confidence, driving a sustainable AI economy. Its approval guards against potential abuses, fosters global collaboration, and charts a course towards a technologically advanced, ethically grounded future.
Conclusion
The Artificial Intelligence Act represents a groundbreaking
effort by the EU to balance innovation with safeguards, ensuring
the responsible and ethical development of AI technologies. By
addressing potential risks, protecting fundamental rights, and
supporting innovation, the EU aims to lead the world in shaping
the future of artificial intelligence.
The Act's successful implementation will be crucial in realising this vision, and ongoing scrutiny will ensure continued alignment with the EU's commitment to rights, democracy, and technological progress.
Footnote
1. https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai