Voice of the Industry

GenAI developments in 2025: impact, regulations, and the rise of impostor scams

Monday 14 April 2025 10:27 CET | Editor: Irina Ionescu

Irina Ionescu, Senior Editor at The Paypers, tackles the latest developments in AI, discussing the correlation between GenAI, deepfakes, and fraud, as well as the rise of Agentic AI.


In 2025, the world finds itself at the intersection of technological advancement and ethical challenges, with generative artificial intelligence (GenAI) leading the charge. The global GenAI market is forecast to reach a little under USD 67 billion in 2025, with an annual growth rate (CAGR 2025-2031) of 37%, revealing the rising interest in new technologies and their real-life applications.


Understanding GenAI and Agentic AI

GenAI refers to artificial intelligence (AI) systems capable of generating new, realistic content (text, images, videos, and even audio) by learning from large sets of data. The technology has rapidly evolved over the past few years, bringing immense benefits to industries ranging from entertainment and healthcare to finance and marketing. At the same time, it has raised concerns and implementation difficulties, as some of the data these models learn from is either inaccurate or fabricated.

While, at its core, AI is great for automating tasks and reducing human involvement, various bots and models run by large companies have produced flops, from X’s chatbot, Grok, to New York City’s Microsoft-powered chatbot that encouraged business owners to break the law. Other companies like Air Canada and Sports Illustrated found themselves in hot water for either providing incorrect advice or publishing journalistic content under AI-generated author bylines.

The progress made in the utilisation of GenAI also prompted fraudsters to boost their social engineering techniques, with the technology being used in various types of fraud, including deepfakes and impostor scams. Additionally, as GenAI becomes more widespread, the global regulatory landscape is racing to catch up with its capabilities, and the emergence of new AI models is also leading to a new conversation about GenAI, Agentic AI, large language models (LLMs), and the quality of data fed to these algorithms. 

Generative AI encompasses several subfields, including natural language processing (NLP), computer vision, and generative adversarial networks (GANs), which can create everything from text and speech to images and realistic videos. In its simplest form, GenAI uses machine learning algorithms to ‘learn’ patterns from large datasets and apply that knowledge to generate new content. By 2025, these systems have achieved remarkable sophistication, and they are now able to mimic the nuances of human creativity and even offer near-perfect imitations of real-world entities.

For example, in content creation, GenAI can automatically write articles, generate realistic images for digital art, or produce deepfake videos of public figures. The constant tweaks and ‘self-improvement’ of the technology already harm digitally unprepared individuals, as fraudsters use a combination of social engineering methods and deepfake-generated content to impersonate celebrities, authorities, or governmental institutions to unlawfully access funds or personal data.

On the other hand, Agentic AI combines the capabilities of earlier AI assistants, which followed predefined sets of rules, with proactiveness, in the sense that it can perform tasks on behalf of someone else. Thus, the role of Agentic AI is to autonomously perform tasks and achieve goals without the need for constant human guidance and supervision.

The pressing deepfake problem and its link to impostor scams

While deepfakes were initially seen as a tool for entertainment and digital artistry, they have since been weaponised by malicious actors, particularly for fraud and deception purposes. By 2025, the quality of deepfakes has reached a sufficiently advanced level that distinguishing between real and fake content has become difficult without specialised software or human intervention. This technological leap has paved the way for more sophisticated and believable scams.

Impostor scams are one of the most prevalent and damaging uses of GenAI, especially in the context of financial fraud. These scams often involve fraudsters posing as celebrities, influencers, or even romantic partners to establish a relationship with their victims. The scammers use GenAI-generated content, such as personalised messages, fake social media posts, and realistic video calls, to gain the trust of their targets. Once the scammer creates an emotional bond, they begin to manipulate their victim into sending money or providing other forms of financial support.

The allure of GenAI-generated deepfakes lies in their ability to create hyper-realistic media that feels authentic and personal. For example, a scammer might use a deepfake to simulate a video call with a well-known celebrity or a person the victim believes to be a romantic interest. According to specialists, AI now needs just a few seconds of a person’s voice to generate audio content that mimics it one-to-one, making these scams extremely difficult to identify.

By using advanced voice synthesis tools and realistic facial animations, the deepfake video can make it appear as if the victim is truly engaging with a celebrity or their love interest, building a false sense of intimacy and trust. These types of fraud are particularly effective because they prey on human emotions, with victims more likely to fall for scams that include an emotional or personal factor. One case that gained global media coverage involves a French woman who was conned out of EUR 830,000 after being led to believe, for more than a year, that she was in a romantic relationship with Hollywood actor Brad Pitt.

The crucial turning point of deepfakes comes when the scammer introduces a financial request. For instance, they may claim to be in a difficult situation and require money for a medical emergency, legal fees, or even an investment opportunity. The victim becomes less reluctant to comply with these requests if they are emotionally invested in the relationship, often encouraging the fraudster to make several (successful) requests for funds.

In some cases, the fraudster may create a sense of urgency by sending a deepfake video pleading for help, thus escalating the emotional manipulation. The combination of the realistic deepfake content and the emotional appeals makes it much harder for victims to detect that they are being scammed.

Unfortunately, AI tools available to the general public have made deepfakes harder to spot, investigate, and eliminate. Previously, fraud prevention specialists shared tips on how to identify a scammer, such as the scammer’s inability to join video calls (pretending their phone camera doesn’t work) or their reliance on low-quality pictures stolen online, which could be easily identified through mirroring techniques. However, as GenAI and deepfakes have continued to evolve, these tips have become obsolete without proper, specialised training.


Global regulatory updates on generative AI

As GenAI technology has become more accessible and powerful, governments around the world have started to address the growing concerns about its misuse, particularly regarding deepfakes and fraud. While regulation is still evolving, several countries plan on implementing regulations and guidelines aimed at curbing the negative impacts of GenAI.

In the European Union, the Artificial Intelligence Act (AI Act) is one of the first comprehensive attempts to regulate AI across various industries. The act includes specific provisions for the use of AI in content generation, especially in areas like deepfakes, where the technology can be used maliciously to deceive the public or harm individuals, and can even threaten national security when used to manipulate content during political elections. The act places strict limitations on the use of AI-generated deepfakes in media, mandating that any content created using AI must be clearly labelled as such, ensuring transparency for consumers.

In the US, there have been growing calls for stronger regulation around the use of AI-generated content. While the federal government has yet to pass comprehensive AI legislation, individual states like California have drafted laws that specifically target deepfake technology. For example, California’s Deepfake Accountability Act, introduced in 2023, holds individuals accountable for using deepfakes to commit fraud, including identity theft or online impersonation. The legislation aims to protect individuals from scams, while simultaneously allowing for the continued development of AI in other sectors.

Additionally, international organisations such as the United Nations and the Organization for Economic Cooperation and Development (OECD) have recognised the need for global cooperation in addressing the challenges posed by AI technologies, and they are currently working towards developing international frameworks for the ethical use of AI, including guidelines for how AI-generated content should be regulated, disclosed, and used for commercial purposes.

GenAI vs. Agentic AI

While GenAI focuses on creating content, another branch of artificial intelligence is emerging – Agentic AI. The distinction between these two types of artificial intelligence is important, especially as we consider their dual potential for both social good and harm.

GenAI primarily focuses on generating new content based on input data, mimicking human creativity, whether through writing, visual art, or speech synthesis. It is a tool that creates outputs designed to simulate human-like interaction or creativity.

On the other hand, as discussed briefly in the first section of the editorial, Agentic AI refers to autonomous systems designed to take actions or make decisions on behalf of humans or organisations. These systems can be used in a variety of domains, including autonomous vehicles, healthcare diagnostics, military applications, and ecommerce or online shopping. While previous AI assistants were limited by rules and couldn’t act independently, Agentic AI is empowered to run tasks on behalf of someone else, streamlining processes. Thus, Agentic AI focuses more on decision-making, taking actions, and achieving objectives rather than simply generating content, which is mainly the case with GenAI.

However, the key difference between the two lies in the degree of autonomy. In its current form, GenAI still requires human input and oversight, whereas Agentic AI systems are designed to operate autonomously and, in some cases, make decisions without direct human intervention. While both have their risks, Agentic AI’s potential for real-world impact, such as in the case of autonomous weapons or decision-making in critical sectors, raises concerns about regulation and accountability, hence the pressing need for strong international regulation.

Conclusion

By 2025, generative AI has made great strides in transforming various industries and enabling new forms of content creation. However, as the technology has evolved, so too have the risks associated with its misuse. The rise of deepfake technology has opened the door for new types of fraud, particularly in the form of impostor scams, contributing to the global rise of fraud rings and so-called ‘fraud labs’ in certain Asian countries. 

As the technology continues to evolve, global regulators are stepping up their efforts to address the ethical challenges posed by GenAI. Additionally, the distinction between GenAI and Agentic AI is becoming increasingly important as the debate on the ethical use of artificial intelligence continues to unfold. Moving forward, balancing innovation with responsibility will be essential to ensuring that GenAI is used for the greater good, without compromising individual safety or societal trust.


About Irina Ionescu

Irina is a Senior Editor at The Paypers, specialising in fraud and online payments. Leveraging her Ph.D. in Economics and a strong academic background in economics, she constantly observes new developments in tech, innovation, and regulation, educating the audience about trends in fraud prevention, chargebacks, scams, social engineering, digital identity, GenAI, and ecommerce. You can reach out to her via LinkedIn or email at irina@thepaypers.com.



Keywords: generative AI, GenAI, deep fake, online fraud, fraud management, fraud detection, fraud prevention, identity fraud, Agentic AI, fraud rings, regulation
Categories: Fraud & Financial Crime