Cyberspace Security and Data Protection and Privacy Verification Technology

By Christian Jacob

Everyone in the world of finance understands that fraud is a constant and ever-changing challenge. Fraudsters in the 19th and early 20th centuries used simple, hands-on techniques to perpetrate their crimes: forging cheques, fabricating fake identification documents, and stealing physical credit cards. During the early days of credit cards in the 1950s and 1960s, for example, criminals would physically steal cards or card information to carry out unauthorised transactions. Printouts of compromised credit card details, commonly known as “hot lists,” were the only way for businesses to identify and stop illegal purchases at the time. The arrival of magnetic stripe cards in the 1970s presented a fresh set of challenges as criminals began using skimming devices to steal card information. Fraud prevention depended heavily on human monitoring and manual verification procedures, which were both time-consuming and prone to mistakes. Only with the introduction of automated systems, and later more sophisticated machine learning algorithms, did the identification and prevention of identity and payment fraud begin to develop into the advanced processes we have today. However, as technology advanced, so did the strategies employed by fraudsters, resulting in the current landscape where artificial intelligence plays a crucial role in both perpetrating and preventing fraud.

The recent rise of artificial intelligence has transformed various industries by optimising processes and fostering innovation, and the financial world is no exception. However, a more sinister trend is emerging alongside these advancements: the evolution of payment fraud. This paradoxical outcome highlights the ambivalent nature of AI. While AI offers numerous advantages, it also hands criminals novel tools and strategies. An illustrative example is the persistent problem of fraudsters obtaining counterfeit IDs and self-portraits to evade KYC (Know Your Customer) verifications. What has changed is that fabricating new identities, or even generating realistic deepfakes, has grown progressively more sophisticated, convincing, and easy.

Earlier this year, tweets surfaced on X (formerly Twitter) showing how Stable Diffusion, a free and open-source image generator, can create synthetic images of a person against any background, such as a living room. Why does this matter? If you’ve ever used a fintech app, you’ve likely gone through verification stages. In one of these stages, you may be required to take a picture of yourself holding a valid government-issued identification document. This ensures that the person opening the account or making transactions is who they say they are, that they possess their ID, and that the document is valid at the time of verification. Usually, someone—or an algorithm—reviews and cross-references the image to prevent identity theft or fraud attempts.
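
As a rough illustration of that automated cross-referencing step, the sketch below compares the face on a selfie with the face on an ID photo using the open-source face_recognition library. This is not how any particular fintech implements its checks: the file paths, tolerance value, and escalation behaviour are assumptions, and production systems layer in liveness detection, document forensics, and human review on top of a simple face match.

```python
# Minimal sketch: compare the selfie against the portrait on the ID document.
# Uses the open-source face_recognition library (dlib-based); thresholds and
# escalation behaviour here are illustrative assumptions only.
import face_recognition


def faces_match(selfie_path: str, id_photo_path: str, tolerance: float = 0.5) -> bool:
    """Return True if the face on the selfie matches the face on the ID photo."""
    selfie = face_recognition.load_image_file(selfie_path)
    id_photo = face_recognition.load_image_file(id_photo_path)

    selfie_encodings = face_recognition.face_encodings(selfie)
    id_encodings = face_recognition.face_encodings(id_photo)
    if not selfie_encodings or not id_encodings:
        # No face found in one of the images: escalate to manual review.
        return False

    # Lower distance means more similar faces; the library's default cut-off is 0.6.
    distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
    return distance <= tolerance
```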

Fraud has never been more accessible than it is today. In the past, producing fake identification images with realistic lighting, shadows, and backgrounds required advanced photo-editing skills. Now, that’s not necessarily the case. With a little trial and error, an attacker can even tweak renderings to insert a fake (or sometimes real but stolen) identification document into a deepfaked person’s hands. Feeding these deepfaked KYC images to an app has become easier too. For example, Android apps running on a desktop emulator like BlueStacks can be tricked into accepting deepfaked images instead of a live camera feed. Similarly, web apps can be fooled by software that turns any image or video source into a virtual webcam. Recent tests carried out by Payment Village using the “Deepfake Offensive Toolkit” confirmed that real-time deepfakes injected into virtual cameras can bypass security verifications: during the tests, the deepfakes successfully passed checks at banks. This same technology is increasingly being used to impersonate company executives or financial officers, convincing employees to authorise large payments or reveal sensitive information. In a recent case, a finance worker authorised a $25 million payment after a video call with a deepfake posing as the chief financial officer.

The battle between financial institutions and fraudsters has escalated into a high-stakes fight, with billions at risk each year. As fraudsters develop AI-driven methods to exploit vulnerabilities, financial institutions must respond by deploying their own AI and machine learning systems, much like skilled fencers who anticipate and parry each strike. Just as a fencer’s success depends on agility and precision, financial institutions must continuously adapt and refine their AI tools to detect and prevent new forms of fraud that are as fast and unpredictable as the fraudsters behind them.

Yet, this is far from a one-time battle; it’s an ongoing, ever-evolving arms race. As soon as a new fraud prevention AI is deployed, fraudsters are already devising ways to bypass it. This constant cycle of attack and defence highlights the importance of staying ahead in the fight against fraud. Understanding the emerging AI threats and techniques is crucial for institutions striving to protect themselves on this relentless digital battlefield.

Phishing and social engineering have reached new levels of sophistication. Machine learning algorithms are now used to analyse individuals’ social media profiles, online behaviour, preferences, and communication patterns to craft tailored messages that are more likely to deceive targets. This personalised approach increases the chances of successful fraud attempts, such as spear-phishing or fraudulent wire transfers. These fraudulent emails are harder to detect because they mimic the language, tone, and context the victim is accustomed to. Automated phishing attacks can now also be launched at scale, targeting thousands of individuals simultaneously, making them even more dangerous and widespread.
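
On the defensive side, many institutions answer this with their own text models. The sketch below is a deliberately tiny illustration of that idea: a TF-IDF and logistic-regression classifier built with scikit-learn. The sample messages, labels, and pipeline are invented for illustration and stand in for the far richer feature sets (headers, sender reputation, URL analysis) that real anti-phishing systems rely on.

```python
# Toy sketch of a phishing classifier: TF-IDF features plus logistic regression.
# The corpus below is invented; a production model is trained on millions of
# labelled messages and many more signals than the message text alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account has been locked, verify your password here immediately",
    "Urgent: wire transfer needed before 5pm, reply with account details",
    "Lunch on Thursday? The usual place works for me",
    "Attached is the Q3 report we discussed in the meeting",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

incoming = "Please confirm the wire transfer details and your account password"
phishing_probability = model.predict_proba([incoming])[0][1]
print(f"probability of phishing: {phishing_probability:.2f}")
```

With only four training messages the probability is not meaningful; the point is the shape of the pipeline, not the numbers.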

Adversarial AI is another new and serious threat, in which cybercriminals manipulate data inputs to deceive machine learning models and allow fraudulent transactions to slip past AI-based security systems. They achieve this by creating adversarial examples—small, often imperceptible changes to input data—that exploit weaknesses in the model’s pattern recognition. These attacks can take place during the training phase (poisoning attacks), where malicious data corrupts the model, or during the inference phase (evasion attacks), where the goal is to make the model misclassify or overlook fraudulent activity. The implications are significant: adversarial AI can adapt and outpace traditional defences, posing a major threat to the security of financial institutions, online platforms, and any system that relies on AI. This means that even as we advance our artificial intelligence systems to combat fraud, we must maintain a relentless focus on proactive innovation.
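
To make the idea of an evasion attack concrete, the sketch below applies a fast-gradient-sign-style perturbation to the input of a toy logistic-regression fraud scorer. The weights, features, and step size are invented purely for illustration; real fraud models and real attacks against them are far more complex.

```python
# Minimal numpy illustration of an evasion attack in the spirit of the fast
# gradient sign method (FGSM): a small, bounded change to the input features
# flips a toy fraud scorer's decision. All numbers here are invented.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


# Toy logistic-regression fraud model: score = sigmoid(w . x + b)
w = np.array([1.8, -0.6, 2.3, 0.9])   # hypothetical learned weights
b = -5.0
x = np.array([1.2, 0.3, 1.5, 0.8])    # hypothetical transaction features
y_true = 1.0                          # ground truth: this transaction is fraud

p = sigmoid(w @ x + b)
print(f"original fraud score:  {p:.3f}")   # about 0.76, flagged as fraud (> 0.5)

# Gradient of the cross-entropy loss with respect to the input is (p - y) * w.
# Stepping by eps in the sign of that gradient maximises the loss for the true
# label, pushing the score toward "legitimate".
eps = 0.3
x_adv = x + eps * np.sign((p - y_true) * w)

p_adv = sigmoid(w @ x_adv + b)
print(f"perturbed fraud score: {p_adv:.3f}")  # about 0.37, now under the 0.5 threshold
```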

As we continue to embrace new technology in the financial world to ensure our defences always remain one step ahead of AI-driven fraud, it’s crucial to remember that technology is far more potent when combined with strategy. Financial institutions can no longer rely on one-size-fits-all onboarding systems implemented only to meet regulatory requirements. Understanding that KYC (Know Your Customer) and CDD (Customer Due Diligence) are more than just checkbox processes is essential, especially now. These processes have always involved many parts for a reason, and it’s important to pull in as much valuable data and real-time intelligence as possible from multiple points such as email, device, IP, and geolocation along the way. This way, financial institutions can build confidence in a user even before that user submits personally identifiable information for identity verification and biometric checks. A user’s behaviour, such as how they swipe, type, and even tap their phone, will always be unique to them – this is why behaviour is quickly becoming one of the most important fraud signals for KYC, and it’s even more valuable when combined with thousands of other data points. By strategically layering real-time fraud signals into KYC decision-making systems, organisations can significantly fortify their defences against all types of fraud even as the world around us continues to change. In every battle, victory hinges on having the right weapons and strategy, and this battle is no different – it’s the only way we can win.
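
As a simplified picture of what such layering can look like, the sketch below folds a few hypothetical pre-KYC signals into a single risk score that routes an applicant to automatic approval, step-up checks, or manual review. The signal names, weights, and thresholds are invented; real decision engines are trained on historical fraud outcomes rather than hand-tuned like this.

```python
# Hypothetical sketch of layering pre-KYC signals into a single risk score.
# Signal names, weights, and thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class OnboardingSignals:
    email_domain_age_days: int      # newly registered email domains are riskier
    device_is_emulator: bool        # e.g. desktop emulators such as BlueStacks
    ip_is_proxy_or_vpn: bool
    geo_matches_id_country: bool
    typing_cadence_anomaly: float   # behavioural signal: 0.0 typical .. 1.0 highly unusual


def risk_score(s: OnboardingSignals) -> float:
    """Return a score in [0, 1]; higher means a riskier onboarding attempt."""
    score = 0.0
    if s.email_domain_age_days < 30:
        score += 0.20
    if s.device_is_emulator:
        score += 0.30
    if s.ip_is_proxy_or_vpn:
        score += 0.15
    if not s.geo_matches_id_country:
        score += 0.15
    score += 0.20 * s.typing_cadence_anomaly
    return min(score, 1.0)


signals = OnboardingSignals(5, True, True, False, 0.8)
score = risk_score(signals)

# Route the applicant: low risk passes automatically, medium risk triggers
# step-up verification, high risk goes to manual review.
decision = "review" if score >= 0.6 else ("step_up" if score >= 0.3 else "pass")
print(f"score={score:.2f} decision={decision}")
```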

About the Author 

Christian Jacob is a Payments and FinTech Compliance professional with years of experience developing and managing secure, compliant fintech products and systems at Paystack and, currently, at the global payroll leader Deel.