Three major financial industry groups are demanding federal action against AI-powered fraud after deepfake incidents targeting financial institutions jumped 700% in 2023. The American Bankers Association, Better Identity Coalition, and Financial Services Sector Coordinating Council released a joint paper outlining ten attack categories now hitting banks, from real-time deepfake calls to AI-generated phishing campaigns. Deloitte projects AI-enabled fraud losses could hit $40 billion by 2027, up from $12.3 billion in 2023.
The real story here isn't the fraud numbers; it's that LLMs have made phishing attacks 95% cheaper while maintaining the same success rates as manual campaigns. When 60% of people fall victim to AI-automated phishing, we're looking at a fundamental shift in the economics of cybercrime. Legacy authentication methods like SMS codes and push notifications were already phishable; AI just made exploiting those weaknesses profitable at scale.
The groups' solution centers on cryptographic authentication, specifically mobile driver's licenses built on asymmetric public-key cryptography. Their reasoning is sound: a deepfake can imitate a face or a voice, but it can't fake possession of a private key. They're also pushing to expand the Social Security Administration's Electronic Consent Based Social Security Number Verification (eCBSV) service beyond its current credit-focused limitations to broader identity-validation use cases.
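To make that reasoning concrete, here's a minimal challenge-response sketch in Python using the `cryptography` package. This illustrates the general principle, not the actual mDL protocol or anything specified in the groups' paper: the verifier issues a fresh random nonce, and only a device holding the private key can produce a valid signature over it.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the holder's device generates a key pair; the private key
# never leaves the device, so there is nothing for a deepfake to imitate.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Authentication: the verifier issues a fresh random challenge...
challenge = os.urandom(32)

# ...and the device proves possession of the private key by signing it.
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# The verifier checks the signature against the enrolled public key;
# verify() raises cryptography.exceptions.InvalidSignature on failure.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("possession of the private key proven")
```

Because the challenge is random per session, a recorded response is worthless for replay, and because the private key is never transmitted, there's no secret for a phishing page to harvest.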
For developers building authentication systems, this is a wake-up call. If you're still relying on SMS or even push notifications for identity verification, you're building on quicksand. The window for getting ahead of this with cryptographic identity proofing isn't closing; it's already closed. The question now is whether policymakers will move fast enough to standardize these systems before the fraud losses get worse.
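Possession alone doesn't stop a real-time relay attack, which is why phishing-resistant standards like WebAuthn also bind the signature to the web origin. Here's a hedged sketch of that idea, extending the example above; the payload format and function names are my own illustration, not WebAuthn's actual data structures.

```python
import json
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def make_assertion(private_key, challenge: bytes, origin: str):
    # The client signs the challenge together with the origin it believes it
    # is talking to; this binding is what makes the scheme phishing-resistant.
    payload = json.dumps({"challenge": challenge.hex(), "origin": origin}).encode()
    return payload, private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

def verify_assertion(public_key, payload: bytes, signature: bytes,
                     challenge: bytes, expected_origin: str) -> bool:
    data = json.loads(payload)
    # Reject assertions minted for a look-alike domain or a stale challenge.
    if data["origin"] != expected_origin or data["challenge"] != challenge.hex():
        return False
    try:
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = ec.generate_private_key(ec.SECP256R1())
    challenge = os.urandom(32)
    # Legitimate login: origins match, signature verifies.
    payload, sig = make_assertion(key, challenge, "https://bank.example")
    print(verify_assertion(key.public_key(), payload, sig, challenge, "https://bank.example"))   # True
    # Phishing relay: the client signed the attacker's origin, so the real
    # verifier rejects it even though the key and challenge are genuine.
    payload, sig = make_assertion(key, challenge, "https://phish.example")
    print(verify_assertion(key.public_key(), payload, sig, challenge, "https://bank.example"))   # False
```

Contrast that with an SMS code: whatever the user types into a phishing page relays verbatim to the real site, because nothing in the code binds it to an origin. That structural difference, not any single vendor's product, is what the industry groups are asking regulators to standardize around.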
