How Ghost Students Are Exploiting College Enrollment Systems to Steal Federal Aid
Criminal fraud rings are targeting college aid systems with fake student identities. These scams use automation, identity theft, and AI to steal financial aid, lock out real students, and overwhelm public institutions. Here’s how it works and what security leaders in higher ed need to know.


Heather Brady was asleep on a Sunday afternoon in San Francisco when a police officer knocked on her door with an unexpected question: Had she applied to Arizona Western College?
Brady hadn’t applied to any college. But someone else had, using her name, Social Security number, and other stolen data. When she checked her federal student aid records, she saw that more than $9,000 had already been disbursed. One of the loans had gone to a California school she didn’t even recognize.
“I just can’t imagine how many people this is happening to that have no idea,” Brady later told the Associated Press (Fast Company).
What happened to Brady is now a nationwide crisis. Across the United States, public colleges and financial aid systems are being flooded by ghost students: fraudulent identities created by criminal rings to steal government aid at scale.
These attacks are highly coordinated. Fraudsters use automation and generative AI to mimic real applicants, enroll in online classes, and collect aid before vanishing. They exploit gaps in verification, overwhelm faculty with AI-written coursework, and displace legitimate students who need those seats.
In 2024, California’s community colleges alone reported:
- 1.2 million applications flagged as suspected fraud
- Over 223,000 enrollments tied to fake identities (Fast Company)
- Between 20 and 60 percent of applicants at some institutions flagged by ID verification tools (Fortune)
This is not just a financial scam. It is an identity crisis across higher education that exposes how vulnerable our enrollment systems have become to modern forms of attack.

How the Scam Works
Ghost student scams are not isolated incidents. They are structured, repeatable operations built to scale across public aid systems. The goal is simple: create fake students that look real, enroll them in online programs, and extract as much aid as possible before disappearing.
This type of financial aid fraud is enabled by automation, powered by AI, and sustained by weaknesses in identity verification systems. Below are the key stages that make this type of fraud so effective.
Step 1: Stealing Personal Data
Fraud rings begin by harvesting personal data such as names, Social Security numbers, and birthdates, often obtained through phishing attacks, healthcare breaches, or criminal marketplaces. These details are used to build synthetic identities that can bypass basic verification steps in FAFSA applications and school enrollment portals.
Once accepted, these identities are assigned real student accounts, financial aid packages, and course access credentials.
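To make the defensive side concrete: one low-cost signal an enrollment or aid office could check for is the same Social Security number surfacing under more than one name or birthdate. The Python sketch below is illustrative only; the record layout, field names, and hashed SSN values are assumptions, not any school’s actual schema.

```python
from collections import defaultdict

# Hypothetical application records; field names and hashed SSN
# values are illustrative, not a real enrollment schema.
applications = [
    {"app_id": "A-1001", "name": "Jane Doe",   "ssn_hash": "f3a9", "dob": "1994-02-11"},
    {"app_id": "A-1002", "name": "John Smith", "ssn_hash": "f3a9", "dob": "1988-07-30"},
    {"app_id": "A-1003", "name": "Ana Lopez",  "ssn_hash": "8c41", "dob": "2001-05-19"},
]

def flag_identity_collisions(apps):
    """Flag any SSN that appears under more than one distinct name or
    birthdate, a common marker of synthetic-identity reuse."""
    by_ssn = defaultdict(list)
    for app in apps:
        by_ssn[app["ssn_hash"]].append(app)
    flagged = []
    for group in by_ssn.values():
        if len({a["name"] for a in group}) > 1 or len({a["dob"] for a in group}) > 1:
            flagged.extend(a["app_id"] for a in group)
    return flagged

print(flag_identity_collisions(applications))  # ['A-1001', 'A-1002']
```

A check like this catches only the crudest reuse, but it runs before acceptance, which is exactly where most current workflows have no gate at all.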
Step 2: Automating the Application Process
From there, automation does the heavy lifting. Bots are used to:
- Submit FAFSA applications at scale
- Register for asynchronous or remote courses
- Generate email replies that sound human
- Create login activity to mimic real student behavior
This creates the illusion of a legitimate learner. On the surface, it looks like real people applying, registering, and checking into class. But behind the activity is a coordinated system of synthetic identities.
Online programs are especially vulnerable to this kind of fraud. These environments often rely on asynchronous learning with limited face-to-face interaction or identity checks. That makes them ideal targets for fraudsters using bots and automation to operate at volume.
Community colleges are particularly exposed due to their large online enrollment numbers and limited administrative capacity. At one institution, over 400 financial aid applications were traced back to just a few recycled phone numbers. And according to a recent Fortune investigation, between 20% and 60% of student applicants at some schools have been flagged as fraudulent by identity verification vendors.
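The recycled-phone-number pattern suggests an equally simple counter: count how many applications share each contact channel and flag the outliers. A minimal sketch, assuming a flat export of application records (the field names and threshold are hypothetical):

```python
from collections import Counter

# Illustrative application export; real data would have hundreds of rows.
applications = [
    {"app_id": "A-2001", "phone": "555-0100"},
    {"app_id": "A-2002", "phone": "555-0100"},
    {"app_id": "A-2003", "phone": "555-0100"},
    {"app_id": "A-2004", "phone": "555-0199"},
]

# How many applications one phone number can plausibly share before
# it stops looking like a household; the value is an assumption.
SHARED_PHONE_THRESHOLD = 3

phone_counts = Counter(app["phone"] for app in applications)
suspect_phones = {p for p, n in phone_counts.items() if n >= SHARED_PHONE_THRESHOLD}
flagged = [a["app_id"] for a in applications if a["phone"] in suspect_phones]
print(flagged)  # ['A-2001', 'A-2002', 'A-2003']
```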
Step 3: Deploying Generative AI in the Classroom
Once enrolled, these fake students must appear active just long enough to receive aid. Many use generative AI to submit essays, quizzes, and discussion posts that appear convincingly human.
This activity is designed to pass early scrutiny. Instructors see assignment submissions and logins, while aid offices see forms completed and accounts verified. But the activity is fully orchestrated by fraud bots.
Wayne Chaw, one victim of this scam, discovered that a grant had been awarded in his name for a course at De Anza College that he had never signed up for. Someone else was logging in and turning in homework on his behalf.
“This person is typing as me, saying my first and last name,” Chaw said. “It’s very freaky when I saw that” (Fast Company).
What looks like student activity is often just automated behavior designed to pass system checks without raising alarms.
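One behavioral tell that can separate this orchestrated activity from real students is cadence: humans log in and submit at irregular intervals, while scripted accounts often act on a near-fixed timer. A rough heuristic, with the threshold chosen arbitrarily for illustration:

```python
from statistics import mean, stdev

def looks_scripted(event_times, cv_threshold=0.1):
    """Heuristic: flag an account whose gaps between events are
    suspiciously uniform. event_times is a sorted list of Unix
    timestamps for one account; the threshold is an assumption."""
    if len(event_times) < 5:
        return False  # not enough activity to judge
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if mean(gaps) == 0:
        return True
    # Coefficient of variation: near zero means machine-regular timing.
    return stdev(gaps) / mean(gaps) < cv_threshold

# A bot logging in almost exactly every 3600 seconds:
bot_logins = [0, 3600, 7201, 10799, 14400, 18001]
print(looks_scripted(bot_logins))  # True
```

No single heuristic like this is decisive on its own, but combined with contact-channel and identity-collision signals, it raises the cost of mimicking a real learner.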
Step 4: Extracting the Aid and Vanishing
For these attackers, the goal is not graduation; it’s payout.
Once federal loans or grants are disbursed, these accounts often go dark. Logins stop. Coursework ceases. Bots vanish from the system, and real victims are left to clean up the wreckage.
By the time schools notice a pattern, the funds are already gone. And because many systems don’t score user risk in real time, these attackers can repeat the process across multiple schools before detection.
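What real-time scoring might look like is easy to sketch, even if production models are far richer. The toy example below combines a few of the signals described above into a pre-disbursement risk score; the signals, weights, and hold threshold are all assumptions, not a vendor’s actual model.

```python
# Toy pre-disbursement risk score. Signals, weights, and the hold
# threshold are illustrative assumptions only.
WEIGHTS = {
    "shared_contact_info": 0.35,  # phone/email reused across applications
    "scripted_activity": 0.30,    # machine-regular login/submission cadence
    "new_device_and_ip": 0.20,    # no prior history with this device or network
    "aid_only_enrollment": 0.15,  # enrolled only in aid-eligible, asynchronous courses
}
HOLD_THRESHOLD = 0.5

def disbursement_risk(signals: dict) -> float:
    """Sum the weights of whichever fraud signals fired for this account."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

signals = {"shared_contact_info": True, "scripted_activity": True}
score = disbursement_risk(signals)
action = "hold for manual identity review" if score >= HOLD_THRESHOLD else "release funds"
print(f"risk={score:.2f} -> {action}")  # risk=0.65 -> hold for manual identity review
```

The point of scoring before disbursement, rather than after, is that it turns fraud response from cleanup into prevention.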

The Human Fallout of Synthetic Fraud
Behind every fake enrollment is a real person paying the price. For some, it’s thousands of dollars in stolen aid tied to their name. For others, it’s the loss of access to required classes. While fraudsters collect federal funding and vanish, victims are left with damaged credit, missed opportunities, and months of institutional cleanup.
Identity Theft Victims Carry the Debt
Brittnee Nelson, a small business owner in Shreveport, Louisiana, first realized something was wrong when she received an alert that her credit score had dropped by 27 points. Someone had used her name to take out over $5,000 in federal loans for a college she had never applied to. Another loan, linked to a second institution, was already in progress.
“It’s like if someone came into your house and robbed you,” Nelson told Fortune. “The difference is, they used my name and left the mess behind.”
Nelson had enrolled in identity theft protection and monitored her credit closely. Still, the fraud nearly pushed her into collections. It took two years to remove the loans from her record. In the meantime, she was treated as a delinquent borrower for an education she never received.
She is far from alone. According to the Department of Education, more than $90 million in aid has been distributed to ineligible students, including $30 million linked to deceased individuals. Yet most institutions still rely on outdated verification systems that confirm identities only after funds are released.
Real Students, Shut Out
While some victims are left managing fraudulent debt, others are denied access altogether.
At Santiago Canyon College in California, thousands of bots flooded the enrollment system during the fall term. Fake students took up spots in high-demand courses, while legitimate students were pushed to waitlists or blocked from enrolling entirely.
“They were in our classrooms as if they were real humans,” said Jeannie Kim, the college’s president. “Our real students were saying they couldn’t get into the classes they needed” (Fortune).
The burden didn’t stop at enrollment. Instructors started the term grading homework, only to discover that many of the submissions were generated by AI and linked to fraudulent accounts. Across financial aid and admissions offices, staff were forced to verify class rosters by hand, work through mounting backlogs, and contact impacted students using systems that were never designed to handle fraud at this scale.
This has become a system-wide strain. Faculty are spending critical time investigating fake coursework instead of teaching. Administrative teams are being pulled away from academic support to manage fraud recovery. Meanwhile, the students who most need timely access to courses are locked out, delayed, or quietly pushed aside.

Higher Ed at a Breaking Point
Ghost student fraud has placed public colleges in an impossible position: protect access or protect public aid. Most are trying to do both, and many are struggling.
Faculty and staff are responding with manual triage. Admissions and aid teams are verifying identities by phone. Instructors are monitoring first-day coursework for signs of AI misuse. But when attacks surge over weekends or hit during enrollment deadlines, response capacity breaks down. Systems that were never built for fraud detection are now at the center of a high-volume, adversarial campaign.
Nowhere is this tension more visible than in California. With 116 community colleges serving millions of students, the state has become a primary target. In 2024, more than 31 percent of applications were flagged as fraudulent, and officials estimate that bots stole over $13 million in financial aid in a single year.
“Students who are already on a degree or certificate path are sometimes finding barriers,” said James Todd, assistant vice chancellor of the California Community Colleges system. “Because colleges have found that it’s all enrolled with fraudulent students” (Inside Higher Ed, May 2025).
Colleges are trying to modernize, but progress has stalled. In May 2025, California administrators proposed a small student fee to help fund fraud prevention, including identity verification systems and AI detection tools. The Board of Governors rejected the proposal after students raised concerns that any added cost could block access for low-income or undocumented applicants. The idea was tabled, but the risk remains.
This is no longer an administrative nuisance. It is a systemic security failure. Fraud rings are operating with the sophistication of modern cybercriminals. They move quickly, automate their activity, and exploit scale. Most institutions, by contrast, still rely on compliance-era workflows that cannot detect threats until the damage is done.
The worst part is that what’s happening in higher education is not an isolated crisis. Rather, it’s a glimpse into the future of fraud across every industry. Just like ghost students mimic real learners to extract aid, attackers in enterprise environments impersonate employees, partners, and customers to steal credentials, manipulate payments, and move laterally through systems. These threats no longer resemble traditional phishing campaigns. They are adaptive, AI-powered, behaviorally convincing, and they exploit one thing above all: human trust.
The same social engineering that floods college classrooms is now targeting corporate inboxes, expense systems, and collaboration platforms. And in both environments, it’s not just the infrastructure that’s being tested; it’s the human layer.
That’s why the solution isn’t more forms or manual checks. The path forward is User Adaptive Risk Management: a real-time, behavior-driven approach to identity security that responds to how users behave, not just who they claim to be.
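Stripped to its essence, that approach maps a continuously updated behavior score to graduated responses instead of a one-time identity check. A schematic sketch, where the tiers, thresholds, and actions are illustrative assumptions rather than a product specification:

```python
# Illustrative adaptive-response policy: the same principle applies
# whether the "user" is a student portal account or a corporate login.
POLICY = [
    (0.8, "block session and alert the fraud team"),
    (0.5, "step up: require live ID or MFA re-verification"),
    (0.2, "monitor: log and re-score on the next action"),
    (0.0, "allow"),
]

def respond(behavior_risk: float) -> str:
    """Pick the strongest action whose threshold the score meets,
    so scrutiny scales with how anomalous the behavior looks."""
    for threshold, action in POLICY:
        if behavior_risk >= threshold:
            return action
    return "allow"

for score in (0.1, 0.6, 0.9):
    print(score, "->", respond(score))
```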
Higher education was not built for today’s social engineering threats. But it can be rebuilt with smarter visibility, faster detection, and adaptive security that evolves as fast as the adversary. The same is true for the enterprise.