
AI and machine learning helped Visa combat $40 billion in fraud activity

[Image: Blocks forming a robot on a white background. Yuichiro Chino | Moment | Getty Images]

Payments giant Visa is using artificial intelligence and machine learning to counter fraud, James Mirfin, global head of risk and identity solutions at Visa, told CNBC.

The company prevented $40 billion in fraudulent activity between October 2022 and September 2023, nearly double the amount from the year before.

Fraudulent tactics that scammers employ include using AI to generate primary account numbers (PANs) and test them repeatedly, Mirfin said. The PAN is the card identifier found on payment cards, usually 16 digits long but up to 19 digits in some instances.

Using AI bots, criminals repeatedly attempt to submit online transactions using combinations of primary account numbers, card verification values (CVVs) and expiration dates until they get an approval response.

This method, known as an enumeration attack, leads to $1.1 billion in fraud losses annually, comprising a significant share of overall global losses due to fraud, according to Visa.
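The brute-force pattern described above has a telltale signature: many declined attempts against the same card number in a short span. As a purely illustrative sketch (not Visa's actual detection logic; the class name, window size, and threshold here are assumptions for demonstration), a minimal velocity check might look like this:

```python
from collections import defaultdict, deque

# Hypothetical sliding-window detector: flags a card number (PAN) when too
# many authorization attempts arrive within a short time window -- the
# signature of an enumeration attack. Thresholds are illustrative only.
WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5

class EnumerationDetector:
    def __init__(self, window=WINDOW_SECONDS, max_attempts=MAX_ATTEMPTS):
        self.window = window
        self.max_attempts = max_attempts
        self.attempts = defaultdict(deque)  # PAN -> timestamps of declines

    def record_decline(self, pan: str, ts: float) -> bool:
        """Record a declined attempt; return True if the PAN looks enumerated."""
        q = self.attempts[pan]
        q.append(ts)
        while q and ts - q[0] > self.window:  # drop attempts outside the window
            q.popleft()
        return len(q) > self.max_attempts

detector = EnumerationDetector()
# Eight rapid declines one second apart: the sixth attempt crosses the threshold.
flags = [detector.record_decline("4111111111111111", float(t)) for t in range(8)]
print(flags)  # first five are False, the rest True
```

A production system would of course weigh many more signals than attempt velocity, which is the point of the attribute-based scoring Mirfin describes next.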

“We look at over 500 different attributes around [each] transaction, we score that and we create a score – that’s an AI model that will actually do that. We do about 300 billion transactions a year,” Mirfin told CNBC.

Each transaction is assigned a real-time risk score that helps detect and prevent enumeration attacks in card-not-present transactions, where a purchase is processed remotely without a physical card passing through a reader or terminal.

“Every single one of those [transactions] has been processed by AI. It’s looking at a range of different attributes and we’re evaluating every single transaction,” Mirfin said.

“So if you see a new type of fraud happening, our model will see that, it will catch it, it will score those transactions as high risk and then our customers can decide not to approve those transactions.”
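To make the scoring idea concrete: the sketch below combines a handful of transaction attributes into a single risk score and maps it to an approve/decline decision. This is a toy illustration, not Visa's model; the feature names, weights, and threshold are all assumptions (the real system reportedly weighs 500+ attributes with a learned model, not a fixed weight table).

```python
# Toy risk-scoring sketch (not Visa's model): sum illustrative weights for
# each risk attribute that is present, then compare against a threshold.
WEIGHTS = {
    "is_card_not_present": 0.2,   # remote purchase, no physical card
    "new_merchant_for_card": 0.15,
    "rapid_retry": 0.4,           # many recent attempts on the same PAN
    "geo_mismatch": 0.25,         # IP country differs from billing country
}
DECLINE_THRESHOLD = 0.6

def risk_score(features: dict) -> float:
    """Sum the weights of every attribute that is present/true."""
    return sum(w for name, w in WEIGHTS.items() if features.get(name))

def decision(features: dict) -> str:
    return "decline" if risk_score(features) >= DECLINE_THRESHOLD else "approve"

tx = {"is_card_not_present": True, "rapid_retry": True, "geo_mismatch": True}
print(decision(tx))  # this combination scores 0.85, above the threshold
```

Note that the score is advisory: as Mirfin says, the final call sits with Visa's customers, who decide whether to approve a transaction flagged as high risk.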

Using AI, Visa also rates the likelihood of fraud for token provisioning requests, taking on fraudsters who use social engineering and other scams to illegally provision tokens and perform fraudulent transactions.

In the last five years, the firm has invested $10 billion in technology that helps reduce fraud and increase network security.

Generative AI-enabled fraud

Cybercriminals are turning to generative AI and other emerging technologies including voice cloning and deepfakes to scam people, Mirfin warned.

“Romance scams, investment scams, pig butchering – they are all using AI,” he said.

Pig butchering refers to a scam tactic in which criminals build relationships with victims before convincing them to put their money into fake cryptocurrency trading or investment platforms.

“If you think about what they’re doing, it’s not a criminal sitting in a market picking up a phone and calling someone. They’re using some level of artificial intelligence, whether it’s a voice cloning, whether it’s a deepfake, whether it’s social engineering. They’re using artificial intelligence to enact different types of that,” Mirfin said.

Generative…


