Can AI Stop School Shootings?
Obviously not, but some narrowly written legislation means startups can make millions of dollars trying!
For decades, lawmakers and education leaders have sought quick interventions that would prevent school shootings. Metal detectors haven’t proven effective, and can contribute to students’ sense of unease in school. Ditto clear backpacks and an active police presence on campus.
Research suggests that U.S. school shootings are the complicated products of a trigger-happy culture that valorizes aggression, and that an epidemic of gun violence cannot be solved with simple school security measures. But what if we shrugged at those thorny systemic causes and instead tried to stop school shootings with artificial intelligence surveillance programs?
Multiple states are considering—or have already passed—legislation that would award public funds for AI gun-detection programs in schools. Groups like the ACLU have warned that such programs can usher in heavy-handed new surveillance, while some officials have voiced a more practical concern: some of the bills use language so specific that it allows schools to buy AI from only one company.
In Kansas, a Republican lawmaker’s AI bill initially stipulated that schools could apply for state grants to buy gun-detecting services from the security company Zeroeyes. Although the bill was updated to remove the company’s name, its new language included such hyper-specific requirements (the software must be from a company with an anti-terror designation, and already in use in at least 30 states) that only Zeroeyes met all the criteria.
Some watchdogs said the bill created an anti-competitive carveout for the company, with one school safety director telling the Associated Press that the case was “probably the most egregious thing that I have ever read” in legislation.
Kansas’s governor vetoed the bill in May, on the grounds that it essentially created a $5 million no-bid contract for Zeroeyes. But other states have passed similar laws. Last year, Michigan and Utah created firearm-detection programs under which Zeroeyes is the only apparently qualified vendor, the AP reported. In May, Iowa passed its own version of the law over the objections of a dissenting Republican lawmaker, who argued that some of the bill’s specific requirements made it impossible for Iowa-based security companies to qualify.
Accidentally or not (Zeroeyes told the AP it wasn’t paying lawmakers to write their bills that way), multiple states currently run multimillion-dollar programs through which schools can buy Zeroeyes products—and only Zeroeyes products.
So, does AI make schools any safer? As with so much emerging AI technology, the picture gets murky on close examination.
Zeroeyes, which launched after the 2018 Marjory Stoneman Douglas High School shooting, says it uses machine learning to recognize more than 300 kinds of guns. When the system believes it’s spotted a gun on school security cameras, it flags the image to Zeroeyes employees who attempt to discern whether the object is actually a gun.
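Zeroeyes hasn’t published its pipeline, but the shape it describes is a common one: a model proposes detections, and a human confirms or rejects each one before an alert goes out. Here’s a rough, hypothetical sketch of that two-stage structure—the threshold, labels, and function names are all assumptions for illustration, not Zeroeyes’ actual design:

```python
from dataclasses import dataclass

# Hypothetical sketch of a two-stage gun-detection pipeline: an ML model
# proposes detections, and a human reviewer confirms or rejects each one
# before any alert goes out. Nothing here reflects Zeroeyes' real code,
# labels, or thresholds, which are not public.

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for escalating to a human

@dataclass
class Detection:
    camera_id: str
    label: str         # e.g. "handgun", "rifle"
    confidence: float  # model's score in [0, 1]

def run_detector(frame_id: str) -> list[Detection]:
    """Stand-in for the ML model; a real system would run an
    object-detection network over each camera frame."""
    # Hardcoded example output, for illustration only.
    return [Detection(camera_id="lot-3", label="handgun", confidence=0.72)]

def human_review(det: Detection) -> bool:
    """Stand-in for the human verification step. In production this
    would present the flagged image to a trained reviewer."""
    print(f"Review request: {det.label} on camera {det.camera_id} "
          f"(confidence {det.confidence:.0%})")
    return input("Confirm threat? [y/N] ").strip().lower() == "y"

def process_frame(frame_id: str) -> None:
    for det in run_detector(frame_id):
        if det.confidence < CONFIDENCE_THRESHOLD:
            continue  # too uncertain to escalate to a reviewer
        if human_review(det):
            print(f"ALERT: confirmed {det.label} on camera {det.camera_id}")
        else:
            print("Dismissed as a false positive.")

if __name__ == "__main__":
    process_frame("frame-0001")
```

Even at this level of abstraction, the failure modes are visible in the structure: the model can flag a shadow, and the human check only helps if the reviewer actually disagrees with the machine.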
As with many AI products, sometimes the security systems are outright wrong. A rival AI weapons-detection company, Evolv Technology, advertised next-generation metal detectors that would allow students to walk straight into school without stopping at airport-like security checkpoints. But the supposedly intelligent devices routinely sounded the alarm over items like laptops, notebooks, and metal water bottles. A school district that spent $5 million on Evolv detectors soon had to release a video tutorial, which “recommended that students remove those objects from their bags and carry them,” the New York Times reported in 2022. In the video, a school official “showed students how to avoid triggering the system — by walking through an Evolv scanner in the school lobby holding a laptop with his arms stretched above his head.”
Some schools have gone into lockdown over false Zeroeyes alarms. Months after Texas’s Brazoswood High School began using Zeroeyes on its cameras, the school went into lockdown during dropoff hours. The AI had detected some shadows from a ditch and a shrub and flagged them as a potential gun. A human reviewer agreed that the shape suggested a potential threat.
A parent of a student at the school told KPRC that an overcautious approach could mean dismissing real threats in the future.
“I think it’s one of the things that say, ‘Hey, safe and sorry,’ but at the same time, it’s kind of like the boy who cried wolf,” the parent said. “So if you say, ‘Hey, oh, it’s another incident, oh, it’s another false alarm,’ you know, and I pray to God that this never happens here.”
In February, a Washington school district that installed Zeroeyes addressed concerns that the software had resulted in three false-alarm lockdowns in three months. All three incidents stemmed from different misfires and miscommunications: in one case, Zeroeyes sent out an email with a picture of a person holding a gun, which a staffer misinterpreted as a real-time threat; in another, water damage triggered an alert; and in a third, a panic button was accidentally set off while being repaired during school hours.
“Three times in three months is more than we’ve ever had before,” one school board member noted during the February meeting.
It’s hard to determine how many of these errors are the growing pains of a new technology and how many are here to stay, because AI gun detection hasn’t seen many large-scale deployments. Pennsylvania’s SEPTA transit agency launched a $5 million pilot program with Zeroeyes last year but called it off this spring, with both Zeroeyes and SEPTA calling it a bad match.
Zeroeyes’ co-founder told WPVI that the SEPTA camera surveillance system was “archaic, it’s old.”
The transit agency’s spokesperson denied that the cameras were outdated, and said SEPTA would instead spend the funds on police overtime and equipment.
But for civil liberties experts, even an extensive camera system can be a concern, especially when coupled with AI software that can misfire or be misappropriated.
Writing about companies like Evolv and Zeroeyes in 2022, ACLU senior policy analyst Jay Stanley cautioned that Americans cannot AI their way out of gun violence.
“At the end of the day, ubiquitous surveillance is not the solution to gun violence,” he wrote. “Americans should say no to intrusive technologies that threaten privacy, bring dubious benefits, and have negative side effects such as racial profiling and disrupting education.”