ACFE Insights Blog

The Dichotomy of Modern Government Fraud Schemes and Government Oversight’s Contrasting Approach

By Guest Blogger | January 2024 | 11-minute read

By: Erik Halvorson

While reading a recent Bloomberg article about deepfake imposter scams and new fraud trends, I was immediately struck by its graphic. It appeared to show an overwhelmingly substantial number of “high-loss” fraud schemes perpetrated in 2022 (as reported by the Federal Trade Commission). This was shocking because it seemed to show more high-loss cases than low-loss cases, which would signal a marked departure from decades of historical statistics, the very statistics we in program oversight rely upon to strategize our anti-fraud efforts.

At first glance, I felt a sense of righteous vindication; I have been shouting about the increased risk of sophisticated, high-dollar-loss schemes for years. The risk falls on the types of government programs we see far too often splashed across news headlines: programs funded by massive budgets, guided by program managers who merely review self-certifications, with funds spread across multiple federal or state agencies and overseen by contract or grant administrators trying to review hundreds of awards at a time. In fact, my fear over the last few years has changed. It is no longer the company that wins a few grants or contracts and underperforms on them, or fibs on an hourly timecard. My fear, and what keeps me up at night, has turned to stare down a massive tidal wave of domestic organized crime groups, international state-sponsored groups, coordinated Fraud-as-a-Service (FaaS) groups and creative hackers across the globe with technical prowess and seemingly unlimited energy. All these groups share one common goal: the constant targeting of slow-moving and poorly defended government programs. These are the same programs run by bureaucrats who fear that if they slow the release of funds, even a little, to put controls in place, they will be lambasted by their department or in the media.

Then I looked deeper into the chart and was offended as both a researcher and an investigator. I realized, first, that it is a terribly misleading graphic and, second, that even though I dislike the graphic, the underlying premise of the article holds true. So, let us talk (briefly) about the graphic first, then explore the underlying message. You will notice a small footnote on the graphic stating that there were approximately 360,000 fraud reports of $1,000 or less. Combining that footnote with the strange Y-axis groupings, we see objectively that there are significantly more small cases than large ones, bringing order back to our expectations about fraud case loss sizes and the volume of fraud we experience in government oversight.

Setting the graphic off to the side, let us talk about the underlying premise of the article. There is absolutely a new wave of fraud being perpetrated with new technology. Deepfakes are part of those new fraud schemes, as the article describes. Another fraud area on the rise is the use of generative AI large language models (LLMs) and of “AI technology boosts” by bad actors. LLMs are programs like ChatGPT. Technology boosts are technological enhancers like AI-generated photos, voice simulators, phone number spoofers or AI coding systems that will write malicious code for you. Both serve to bridge the gap for less technologically savvy bad actors who want to increase the complexity and sophistication of their fraud across their area of attack. One of the more interesting cases to emerge recently is dubbed FraudGPT. This program reportedly automates different phishing schemes and coaches would-be fraudsters with suggestions on where to place malicious links. It also contains information on the most frequently exploited online resources and can even generate harmful code for its customers.

The rise of a Generative Pretrained Transformer (GPT) program without ethical constraints, used to support fraud, is fairly predictable. But this GPT program has a surprising difference: the way the service (purchasable through the dark web or via Telegram’s secure instant messaging groups) is marketed. FraudGPT has a monthly subscription fee. That is right, not even criminals can escape the modern era of micro-billing services. Reportedly, the cost to use FraudGPT is $200 per month or $1,700 per year, which admittedly provides a healthy 29% discount for an annual membership. Discount aside, a subscription-fee-based model is a business practice I never thought I would see in fraud, one that seems to demonstrate the organization and quasi-legitimacy of modern fraudsters. In the past, could any of us imagine someone linking their payment information to a service designed to break the law? That seems like one good cyber-based law enforcement operation away from unmasking a ton of bad actors.
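For anyone curious where that 29% figure comes from, here is a minimal back-of-the-envelope check using only the reported $200 monthly and $1,700 annual prices:

```python
# Back-of-the-envelope check of the reported FraudGPT pricing discount.
monthly_price = 200          # USD per month, as reported
annual_price = 1_700         # USD per year, as reported

monthly_cost_over_year = monthly_price * 12          # $2,400 if paid month to month
savings = monthly_cost_over_year - annual_price      # $700 saved by paying annually
discount = savings / monthly_cost_over_year          # ~0.29

print(f"Paying monthly for a year: ${monthly_cost_over_year:,}")
print(f"Annual plan: ${annual_price:,}")
print(f"Effective discount: {discount:.0%}")         # prints 29%
```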

Thinking about this situation objectively brings me back to a panel I sat on at Denver’s annual fraud retreat, where I was asked how we should think about AI as it revolutionizes the world around us, specifically in the area of fraud and anti-fraud mitigation. In my opinion, as oversight professionals, we need to expect two things. The first is that, with AI boosting technological proficiency, we will see more fraud generally, both in sophistication and in volume of cases. The truth is that technology has made fraud easier, and we need to do everything we can to prepare for this. The second is that while technological fraud grows (and government slowly incorporates technological countermeasures and proactive analytics units), we will also see a rise in unsophisticated, traditional fraud schemes. If you follow any of the financial crimes groups, you will see that check fraud, for instance, almost doubled between 2021 and 2022 (up to roughly 680,000 instances in 2022) and will cost consumers a staggering $24 billion in 2023.

Technology has created a very strange dichotomy in our everyday world, and fraud is a microcosm of it. On the one hand, we have some of the most sophisticated technological fraud schemes ever, now being attempted more often and by a broader swath of bad actors. On the other hand, as our focus, manpower and resources move to combat technology-enhanced schemes, we leave a blind spot in which unsophisticated schemes thrive. This idea of voluminous unsophisticated schemes thriving is supported by the footnote on the initial chart: at a ratio of 2.2:1, schemes of $1,000 or less outpace all other fraud scheme loss amounts. If we look at scams that stole $10,000 or less on that chart, we see more than 523,000 cases, a ratio of more than 6:1 in favor of smaller schemes. At the same time, year over year, we see baffling, continuous growth in larger, more sophisticated fraud schemes as well, forcing us (in many cases) either to ignore one of the two types of fraud schemes or, at best, to split our focus, funding and manpower. In large government programs mired in years of “more with less” and “get the money out ASAP; we will chase the fraud later,” this is a recipe for disaster.
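To make that ratio arithmetic concrete, here is a minimal sketch of how such comparisons fall out of bucketed report counts. Only the 360,000 (reports of $1,000 or less) and 523,000 (reports of $10,000 or less) figures come from the article; the counts for the larger buckets are illustrative placeholders, not numbers from the FTC chart:

```python
# Illustrative ratio arithmetic from bucketed fraud-report counts.
# The 360,000 and 523,000 figures are cited in the text; the "over" counts
# below are hypothetical placeholders used only to show the calculation.
reports_1k_or_less = 360_000       # from the chart's footnote
reports_10k_or_less = 523_000      # cited in the text
reports_over_1k = 163_000          # placeholder: all reports above $1,000
reports_over_10k = 85_000          # placeholder: all reports above $10,000

print(f"<= $1,000 vs. everything larger:  "
      f"{reports_1k_or_less / reports_over_1k:.1f} : 1")    # ~2.2 : 1
print(f"<= $10,000 vs. everything larger: "
      f"{reports_10k_or_less / reports_over_10k:.1f} : 1")  # ~6.2 : 1
```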

Ultimately, as fraud fighters trying to combat a rising tide of fraud at all levels of complexity, we need to be more flexible and dynamic. To do this, we should leverage new technology and fraud analytics techniques by implementing tools being developed by private and commercial software companies. We should recognize that research in psychology, criminology, sociology and machine learning/AI has progressed in ways that support our effort to identify fraud schemes earlier, as in the small sketch below. This will give our controls a competitive advantage in identifying fraud and keeping losses low.
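As one tiny illustration of what proactive fraud analytics can look like in practice, here is a minimal sketch (not any agency’s actual tooling) that flags invoiced amounts sitting far outside a program’s typical range using a simple robust z-score. The invoice data and the flagging threshold are made up for the example:

```python
# Minimal proactive-analytics sketch: flag invoice amounts that sit far
# outside a program's typical range using a robust (median-based) z-score.
# The invoice data and the 3.5 threshold are illustrative, not real values.
from statistics import median

def robust_z_scores(amounts):
    """Return a modified z-score for each amount, based on the median
    and the median absolute deviation (MAD) instead of the mean/std."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts) or 1.0  # avoid divide-by-zero
    return [0.6745 * (a - med) / mad for a in amounts]

def flag_outliers(invoices, threshold=3.5):
    """Yield (invoice_id, amount) pairs whose amounts look anomalous."""
    scores = robust_z_scores([amount for _, amount in invoices])
    for (invoice_id, amount), score in zip(invoices, scores):
        if abs(score) > threshold:
            yield invoice_id, amount

# Hypothetical invoices for a single grant program.
invoices = [("INV-001", 9_800), ("INV-002", 10_200), ("INV-003", 9_950),
            ("INV-004", 10_050), ("INV-005", 88_500)]  # the last one stands out

for invoice_id, amount in flag_outliers(invoices):
    print(f"Review {invoice_id}: ${amount:,} is well outside the typical range")
```

The point is not the specific statistic; it is that even very simple, automated checks like this can surface awards or claims worth a human reviewer’s attention before losses pile up.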

We also need to invest time and money into growing our people skills. As a special agent of 13 years, I have witnessed a steep decline in our ability to manage human assets, especially post-pandemic. We need to hone interview skills. We need to understand how to build (and maintain) long-term strategic relationships with individuals outside our oversight field. And even with the growth in technology, we should remember that our best assets are still our people, especially those who still possess their creative problem solving and optimism in the face of an increasingly difficult bureaucratic system to work and thrive in.

Fusing the human element with technological and research-based advancements might just give us the boost we need to combat these schemes. And while I cannot say for sure that it will work, I can say for sure that if we fail to innovate and implement agile oversight, we will fail. Failing means we continue to see headlines about billions of dollars in federal funds lost to fraud, headlines that will continue to erode public trust in our ability and in the necessity of our work.
