AI fraud and accountants

Many predictions about AI taking our jobs have not yet materialised, but the threat of AI-driven fraud is a much more worrying development.

Few technologies have attracted as much hype as AI, particularly the most recent versions of generative AI such as ChatGPT, with its uncanny ability to create text or images and to hold disturbingly human-like conversations. These unsettling technologies have inspired some frightening predictions.

So far, in terms of their impact on accountancy, AI tools are proving useful for completing time-consuming tasks accurately and efficiently, just as other time-saving, efficiency-improving technologies did before them. They are not yet destroying thousands of jobs. Nor do they seem likely to wipe out humanity. But some AI tools will be used by criminals to commit fraud, against victims including accountancy firms and their clients.

It is difficult to be certain about the scale of AI-enabled fraud, in part because fraud tends to be under-reported or even undetected. A spokesperson for Action Fraud, which collects and analyses fraud reports, says it cannot supply data on AI-enabled fraud, because victims would need to know whether AI was involved when reporting an incident, and they often cannot tell.

But figures from the government's most recent Cyber Security Breaches Survey, published in April 2023, suggest that almost one in three businesses (32%) were affected by cyber security incidents during 2022. These are among the types of fraud where AI technologies are most likely to be used, often to help create convincing fake emails, documents or images which could be used in phishing emails.

Fake materials created with AI might also facilitate payment diversion or invoice fraud (also known as mandate or push payment fraud), in which a recipient is suckered into making payments to fraudsters. The same techniques might be used to persuade a recipient that an email they receive has come from the firm's bank, or from HMRC.

AI tools can also be used by fraudsters to gather useful information from company websites, social media platforms and other online sources, which they can then use to make emails and/or supporting fake documents more convincing. While such social engineering methods are not new, they are resource-intensive, so using AI enables fraudsters to create tailored materials and messages more quickly and efficiently, and on a wider scale, possibly even in multiple languages.

Michelle Sloane, a partner at legal firm RPC, which specialises in resolution of white-collar crime, has seen examples of fake documents and images that appear to have been created using AI. She warns that accountants who cannot detect these clever forgeries may be unwittingly involved in money laundering or tax fraud.

Sloane thinks this type of AI-enabled activity is becoming more widespread: "It's definitely growing and will continue to grow as the technology gets better." Her colleague Alice Kemp, a barrister and senior associate at RPC, says several criminal cases involving use of these technologies are due to come to trial in the near future.

ICAEW Head of Tech Policy Esther Mallowah highlights another way fraudsters can use AI: to create fake voices on phone calls. This technique was in use even before the newest forms of generative AI appeared. In 2019 and 2020 there were cases in which the CEOs of subsidiary companies (a UK-based energy company in 2019 and a Hong Kong-based subsidiary of a Japanese firm in 2020) thought they were being contacted by the CEO of their overseas parent company, who then asked them to transfer large amounts of money.

The 2019 fraud led to the theft of $243,000. In the 2020 case, supported by fake emails supposedly sent from the parent company CEO and from a law firm about a fictitious corporate acquisition, the fraudsters asked the firm targeted to transfer $35m. The amount they obtained is not in the public domain, but certainly exceeded $400,000. "I think that's a technique that could move into the accountancy space," says Mallowah.

ICAEW economic crime manager Mike Miller also highlights the growing use of AI technologies to perpetrate synthetic identity theft, in which AI-generated false data or documents are combined with stolen genuine personal data, for purposes including making fraudulent credit applications.

Fraud knows no boundaries and anyone working for a smaller practice should not assume they are less likely than a larger business to be targeted. Smaller firms or their clients may be seen by a fraudster as profitable soft targets, less able than larger firms to invest time and money in fraud countermeasures.

Mallowah suggests that one big problem for the accountancy sector is that accountants often don't see themselves as being attractive targets, especially those in smaller firms. But, she warns: "From what we're seeing, that's not necessarily true." Sloane thinks the threat posed by AI-enabled crime may actually be greater for smaller accountancy firms.

As with other cyber security threats, the strongest form of defence may be training staff to identify risks and take action to mitigate them. Kemp also advises providing guidance to employees about the dangers of revealing personal details including hobbies and interests on social media, thus limiting the material a fraudster might feed into an AI tool when creating phishing emails.

Training must be complemented with good practice. For example, the risk of falling for payment diversion fraud is significantly reduced if staff have to use an independently checked phone number to verify the identity of someone who is emailing the business asking for funds to be transferred.
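To make that control concrete, here is a minimal sketch, in Python, of how a firm's payment workflow might enforce the call-back rule. All names and numbers are hypothetical; the point is simply that the verification number comes from an independently maintained directory, never from the email requesting the payment.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester_email: str
    payee_account: str
    amount: float
    number_in_email: str  # phone number supplied in the email; never trusted

# Independently verified contact numbers, e.g. taken from the original
# supplier contract, not from incoming correspondence. (Hypothetical data.)
VERIFIED_NUMBERS = {
    "supplier@example.com": "+44 20 7946 0000",
}

def release_payment(request: PaymentRequest, confirmed_by_callback: bool) -> bool:
    """Release funds only after an out-of-band call-back check."""
    number_on_file = VERIFIED_NUMBERS.get(request.requester_email)
    if number_on_file is None:
        print("No verified number on file: escalate, do not pay.")
        return False
    if request.number_in_email != number_on_file:
        print("Email supplies a different number: classic diversion red flag.")
    if not confirmed_by_callback:
        print(f"Call {number_on_file} to confirm the request before paying.")
        return False
    print(f"Releasing {request.amount} to {request.payee_account}.")
    return True
```

The design choice that matters is the separation of channels: the email can supply whatever details it likes, but payment is gated on a number obtained some other way.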

These measures can be supplemented by use of company and/or identity verification services, alongside a Companies House check or, in the case of a new supplier, viewing a copy of a company's certificate of incorporation (although RPC says there have already been cases of these certificates being faked).
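The Companies House part of that check can be automated. Companies House publishes a free REST API; as a sketch, assuming you have registered for an API key (passed as the basic-auth username), a basic company lookup might look like this:

```python
import requests

API_KEY = "your-companies-house-api-key"  # hypothetical; obtain one by registering with Companies House

def check_company(company_number: str) -> dict:
    """Fetch a company's public record from the Companies House API."""
    resp = requests.get(
        f"https://api.company-information.service.gov.uk/company/{company_number}",
        auth=(API_KEY, ""),  # API key as username, blank password
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# "01234567" is a placeholder company number for illustration.
record = check_company("01234567")
print(record.get("company_name"), record.get("company_status"), record.get("date_of_creation"))
```

A forged PDF certificate is far easier to produce than an alteration to the live register, which is why checking the source record is worth the extra step.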

HMRC did not respond in detail to questions about its knowledge of the scale and nature of AI-based attempts to commit tax-related fraud, but it did provide a short statement: "Tax crime doesn't stand still and neither do we. The adoption of new technologies is a constant and evolving threat that we are alive to, including the potential risks posed by AI."

In future we may also see more use of AI to fight AI-enabled fraud. There are already AI-based tools for identity verification, for ensuring compliance with anti-money laundering rules, and for preventing other forms of fraud. Examples include solutions provided by Capium and DataRobot.

The largest accountancy and audit firms have also developed AI-based tools to detect anomalies in general ledgers and improve the accuracy of audit processes. Such tools may use machine learning and natural language processing to sift through huge quantities of data and text, looking for patterns or behaviours that suggest fraudulent activity.
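The article does not describe any particular firm's tooling, but as a hedged illustration of the underlying idea, an unsupervised model such as scikit-learn's IsolationForest can score ledger entries by how unusual their amount and timing are, flagging outliers for human review:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy general-ledger features: [amount, hour_posted, day_of_month].
# In practice these would be engineered from real journal entries.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(500, 150, 1000),   # routine amounts
    rng.integers(9, 18, 1000),    # posted during office hours
    rng.integers(1, 29, 1000),    # spread across the month
])
suspicious = np.array([[49_999, 2, 31],   # large, after-hours, month-end
                       [48_500, 3, 31]])
entries = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.005, random_state=0).fit(entries)
flags = model.predict(entries)  # -1 marks outliers for human review
print(f"{(flags == -1).sum()} entries flagged for review")
```

The model is deliberately simple: it does not prove fraud, it only ranks entries for a human to inspect, which matches how such tools are used in audit workflows.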

Mallowah says ICAEW is working hard to spread awareness of all these issues and best practice within the accountancy sector, at events and via content including the monthly Insights cyber round-up. She also thinks businesses of all kinds will need to invest in AI expertise.

But she again emphasises that the most important change that could help accountancy firms resist AI-enabled fraud might be overturning the misplaced belief that they are unlikely to be targeted: "Shifting that mindset is really important."

Some of the hype about AI may be overblown, but don't let that blind you to the real dangers these tools could pose. Accountants will need to exploit both artificial and human intelligence over the years ahead to keep their own businesses, employees and clients safe.
