AI Regulation

By Elliott Hoffman

As AI is increasingly being adopted, offering benefits such as automation and improved decision-making, there are also risks, including biases and potential malfunctions. The European Union has introduced the AI Act to regulate AI in fintech, aiming to ensure data quality, transparency, and human oversight. The legislation could impact AI companies globally, but some are concerned about its potential impact on innovation and the complexity of enforcement. Similar AI regulations are being considered in Brazil and the UK, making it crucial for AI businesses to stay informed about evolving regulations.

There’s almost no area of our lives that has remained untouched by artificial intelligence (AI). As its adoption becomes increasingly widespread, so does its level of sophistication and its potential use across multiple sectors.

AI has been successfully deployed in applications ranging from production line robots in factories to diagnosing serious illnesses in patients. In finance, it’s being used to detect fraud, assess lending and investment risks, and provide credit scores.

The key benefits of AI are that it speeds up and automates processes, while also having the potential to deliver more accurate results than a human performing the same task. That leads to improved decision-making and greater efficiency, while freeing up workers to focus on more critical jobs that add customer value.

Despite all the clear advantages AI tools have to offer, however, they also present some serious risks for users. Chief among them is the potential for their decisions or findings to be biased towards, or discriminate against, certain groups of people.

When something goes wrong, it has the potential to go spectacularly wrong. One of the most high-profile examples of this was when Microsoft’s AI chatbot, Tay, ended up posting more than 95,000 tweets on Twitter, which rapidly turned racist, misogynist and anti-Semitic.

AI regulation

The implications of AI malfunctioning in banking and finance are just as serious, if not more so, particularly where businesses’ and people’s money and, ultimately, livelihoods are at stake. For example, a wrong trading call made on the back of an incorrect AI prediction could shave millions off the value of a company’s stock.

As a result, the calls have grown louder for AI to be regulated in the fintech industry. While it may be a good idea in principle, it’s much harder to enforce given the fragmented global nature of the market, with each country having its own particular set of regulatory rules.

The most significant move so far to regulate AI has come from the European Union (EU). The aim of the AI Act, which was recently approved by the European Parliament, is to tighten up the rules governing data quality, accountability, transparency and human oversight.

The new regulation applies to organisations inside the EU, but also those outside that supply AI systems to organisations within it or whose output is used within the Union. That could impact AI companies almost anywhere in the world if they provide a service to the EU.

The new rules use a simple classification system to assess the threat an AI system poses to a person’s health, safety or fundamental rights. Each system is ranked as unacceptable, high, limited or minimal risk, enabling the company to take the appropriate action.
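As an illustration only: the four tier names come from the Act itself, but the example fintech use cases and the mapping below are hypothetical, not taken from the legislation. A company triaging its systems might sketch the lookup like this:

```python
# Illustrative sketch only: the four risk tiers come from the AI Act,
# but the example use cases and their mapping are hypothetical.
USE_CASE_TIERS = {
    "social_scoring": "unacceptable",   # practices banned outright
    "credit_assessment": "high",        # strict data and oversight duties
    "customer_chatbot": "limited",      # transparency obligations
    "spam_filter": "minimal",           # largely unregulated
}

def classify_risk(use_case: str) -> str:
    """Return the assumed risk tier for a use case, defaulting to 'minimal'."""
    return USE_CASE_TIERS.get(use_case, "minimal")

print(classify_risk("credit_assessment"))  # high
```

In practice, of course, the classification turns on legal analysis of the system’s purpose and context, not a simple table lookup.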

What it means for fintech

Essentially, the new legislation will determine how businesses provide, use, distribute or import software for credit assessment of individuals, biometric identification or human capital management. It’s also designed to force providers to be more transparent while protecting vulnerable people from being exploited by software that uses subliminal techniques.

The fines for breaking the new rules are eye-watering. They can range from €10 million to €30 million, or from two to six percent of a firm’s global annual turnover, whichever figure is higher.
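To show how the “whichever is higher” mechanics work, here is a minimal sketch using the top-tier figures above; the company turnover in the example is hypothetical:

```python
def max_penalty(annual_turnover_eur: float,
                fixed_cap_eur: float = 30_000_000,
                turnover_pct: float = 0.06) -> float:
    """Top-tier penalty: the greater of the fixed cap or a percentage
    of global annual turnover (top-tier AI Act figures assumed)."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# For a hypothetical firm with €1bn global turnover,
# 6% of turnover (€60m) outweighs the €30m fixed cap.
print(max_penalty(1_000_000_000))
```

For smaller firms the fixed cap dominates instead, which is exactly why the regulation states the fine both ways.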

While it may be well-intentioned in its goals, however, the AI Act may have the unintended consequence of forcing many technology firms out of the EU. It’s already a big concern for many, with 50% of AI startups claiming the legislation will slow down innovation in Europe and 16% considering either stopping development of the technology altogether or moving outside the EU.

Even beyond this, the AI Act creates a minefield of wider problems. The biggest is figuring out what software counts as an AI system. Then there’s the issue of determining which companies are subject to the new law, which becomes yet more complex if they have operations in many different countries or regions – on top of the specific country regulations they must already comply with.

It’s not just Europe where AI regulation is being implemented either. Brazil’s Congress has passed a bill creating a legal framework for the technology, while a consultation for establishing a regulatory regime has been launched by the UK government.

The AI Act is set to be adopted in June, opening the floodgates for other global regulators to follow suit. It’s therefore paramount that AI businesses keep up to date with all these changes, so they don’t have any nasty surprises further down the line.

This article was originally published on 16 July 2023

About the Author:

Elliott Hoffman is a college dropout turned serial entrepreneur focusing on business development and emerging technology. He co-founded AI Tool Tracker, the largest AI tools directory on the internet. Having played key roles in leading companies within the FinTech space like Yield App, S21, and one of the world’s largest P2P exchanges, his strategic mindset continues to drive business growth and position AI Tool Tracker as a transformative force in the world of AI technology.