Artificial intelligence is transforming education technology and expanding cybersecurity risks. In a recent Education Week article, experts examine how AI adoption is creating new challenges for schools.
Read the article to learn how AI tools are expanding the cybersecurity attack surface in education, why automated threats and phishing attacks are becoming more sophisticated, and what security considerations schools should evaluate as AI adoption grows.
Why are AI-powered cyberattacks such a concern for K-12 schools?
AI-powered cyberattacks are a growing concern for K-12 schools because they amplify risks that were already present in the sector.
Schools have long been attractive targets because:
- They hold large volumes of sensitive student data (including Social Security numbers) that can be sold at a premium on the dark web, since children typically have clean credit histories and limited monitoring.
- They manage substantial financial transactions and store staff personal data.
- They often have fewer resources and weaker defenses than sectors like banking or healthcare.
AI is reshaping this threat landscape in several ways:
- Generative AI tools can create highly polished phishing emails with fluent American English, removing the spelling and grammar red flags staff used to rely on.
- Attackers can mimic the writing style of trusted leaders (like a superintendent) to trick staff into clicking malicious links or sharing credentials.
- Deepfake tools can clone a leader’s voice or appearance, making fraudulent payment requests over phone or video calls more convincing.
- AI can quickly mine public information—such as budgets, vendor lists, and staff roles—to tailor attacks to each district.
- Agentic AI tools can now automate complex attack steps, allowing a single, relatively unskilled person to execute campaigns that previously required an organized ransomware group.
All of this lowers the barrier to entry for cybercriminals and increases the volume and sophistication of attacks, at a time when many districts are already stretched thin on cybersecurity funding and expertise.
How have funding cuts affected school cybersecurity efforts?
Recent federal funding cuts and policy shifts have made it harder for schools to keep pace with AI-driven cyber threats.
Key changes include:
- Reduced federal support for MS-ISAC (Multi-State Information Sharing and Analysis Center), which had been a major source of free cybersecurity services for schools. The cooperative agreement with the federal government ended, and districts now generally pay membership fees unless their state covers the cost.
- Suspension of the K-12 Cybersecurity Government Coordinating Council, which previously brought together federal agencies, states, districts, and ed-tech vendors to share threat intelligence and coordinate responses.
- Closure of the U.S. Department of Education’s Office of Educational Technology, which had helped states and districts navigate emerging technology issues, including AI and cybersecurity.
While some initiatives continue in modified forms, the net effect is:
- Less centralized, government-backed threat visibility and guidance for K-12.
- More financial pressure on districts to fund their own cybersecurity tools and memberships.
- Greater fragmentation in how information about new threats—especially AI-enabled ones—is shared.
There is one notable bright spot: a three-year FCC pilot program that initially set aside up to $200 million in competitive grants to help schools and libraries purchase cybersecurity products and services through E-rate. However, experts note that it’s unclear what will happen after the first round of grants and whether the program will become permanent.
In this environment, many technology leaders see a widening gap between the sophistication of AI-powered attacks and the resources available to defend against them.
What practical steps can schools take to counter AI-enabled attacks?
Even with constrained budgets, districts can take concrete steps to better protect themselves against AI-enabled cyberattacks.
1. Strengthen the basics
- Enforce multi-factor authentication (MFA) for staff and administrators.
- Require strong, unique passwords, and update them on a regular schedule.
- Keep operating systems, browsers, and critical applications patched and up to date.
2. Invest in staff awareness and training
- Run ongoing phishing simulations using software that sends realistic fake phishing emails.
- Automatically route staff who click on simulated phishing links to short, targeted training videos.
- Emphasize that urgency in emails or calls—even if it appears to come from the superintendent—should never override verification steps.
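Commercial phishing-simulation platforms handle this click-to-training routing automatically, but the underlying logic is simple. A hypothetical sketch (the campaign data, lure names, and training-module names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class ClickEvent:
    email: str   # staff member who clicked the simulated link
    lure: str    # which phishing template they fell for

@dataclass
class Campaign:
    recipients: list[str]
    clicks: list[ClickEvent] = field(default_factory=list)

    # Hypothetical mapping from lure type to a short follow-up training module
    TRAINING = {
        "fake-superintendent": "verifying-sender-identity",
        "password-reset": "spotting-credential-phishing",
    }

    def training_assignments(self) -> dict[str, str]:
        """Route everyone who clicked to the module matching the lure they clicked."""
        return {c.email: self.TRAINING.get(c.lure, "phishing-basics") for c in self.clicks}

    def click_rate(self) -> float:
        """Fraction of unique recipients who clicked at least once."""
        return len({c.email for c in self.clicks}) / len(self.recipients)

camp = Campaign(recipients=["a@district.k12", "b@district.k12",
                            "c@district.k12", "d@district.k12"])
camp.clicks.append(ClickEvent("b@district.k12", "fake-superintendent"))
print(camp.training_assignments())          # b → "verifying-sender-identity"
print(f"click rate: {camp.click_rate():.0%}")  # → 25%
```

Matching the training to the specific lure a person fell for keeps the follow-up short and relevant, which is the feature the article's experts highlight in simulation software.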
3. Formalize verification processes
- Establish clear protocols for financial transactions: no immediate payments based solely on email, text, or phone instructions.
- Introduce code words or secondary verification methods for sensitive requests made over phone or video.
- Document incident response procedures so staff know exactly whom to contact and what to do if they suspect an attack.
4. Leverage collaboration and shared services
- Join information-sharing networks where possible. Some states (including Alaska, Connecticut, Kansas, Maine, Mississippi, New Jersey, Oregon, Texas, and Vermont) provide MS-ISAC services to districts at no additional cost.
- Participate in state or regional CoSN chapters or similar groups to share best practices, policies, and vendor evaluations.
5. Practice response through tabletop exercises
- Run low-cost tabletop exercises with district leadership to walk through realistic attack scenarios (e.g., ransomware, business email compromise, deepfake payment request).
- Use these sessions to clarify roles, refine communication plans, and identify gaps in current policies.
Survey data from CoSN shows that 60% of district technology leaders believe AI will lead to new forms of cyberattacks, and a further 34% are moderately concerned about that possibility. In that context, consistently executing these fundamentals—what some leaders call the “blocking and tackling” of cybersecurity—can meaningfully reduce risk, even as AI continues to reshape the threat landscape.