Why facial recognition technology makes these campus protests different from those in the past

The images on newscasts have been inescapable for the past several days. Protesters, presumed to be students, took over buildings at Columbia, facing off against police in riot gear, while on Emory University’s quadrangle, police pinned protesters to the ground, securing them with zip ties.

What makes those images even more notable, though, is the lengths to which many of the protesters are going to hide their identities. Keffiyehs and face masks are commonplace. Some cover themselves with blankets. It’s a vastly different sort of protest from the Black Lives Matter marches of 2020—or anything Americans have seen lately. And artificial intelligence—along with facial recognition technology—might be to blame.

Video surveillance for security reasons is fairly common on college campuses. But as law enforcement agencies increasingly use facial recognition technology to identify suspects, protesters have grown more concerned that they could be targeted or doxxed for expressing their opinions.
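
Part of what worries them is how little effort a face match now takes. The sketch below is a minimal illustration, not any agency’s actual system: it assumes the open-source face_recognition Python library, and the file names are hypothetical placeholders. It compares a single reference photo against the faces detected in one frame of crowd footage.

```python
# Minimal sketch of commodity face matching, assuming the open-source
# face_recognition library (pip install face_recognition).
# All file names here are hypothetical placeholders.
import face_recognition

# Encode one reference photo of a known person.
reference_image = face_recognition.load_image_file("reference_photo.jpg")
reference_encoding = face_recognition.face_encodings(reference_image)[0]

# Detect and encode every face visible in a frame of crowd footage.
crowd_frame = face_recognition.load_image_file("crowd_frame.jpg")
crowd_encodings = face_recognition.face_encodings(crowd_frame)

# Compare each detected face against the reference.
for i, encoding in enumerate(crowd_encodings):
    is_match = face_recognition.compare_faces([reference_encoding], encoding)[0]
    distance = face_recognition.face_distance([reference_encoding], encoding)[0]
    if is_match:
        print(f"Face {i} likely matches the reference (distance {distance:.2f})")
```

A covered face simply never makes it into the set of detected encodings, which is one reason masks and keffiyehs have become the countermeasure of choice.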

That could mean everything from lifelong repercussions for peaceful protest to threats to the safety of students who are identified (correctly or incorrectly) as protesters. And given questions about the accuracy of some facial recognition software, especially for people of color, it could also expose universities to legal risk.

Student protesters have for years demanded that schools refrain from using facial recognition on campus. Two years ago, activists called on Carnegie Mellon University to ban the technology after the school drafted a video surveillance policy that would allow its police to use facial recognition. The school eventually “decided not to move forward with further consideration” of the policy and noted that its police department had no plans to deploy facial recognition tools on campus.

Two years prior, UCLA dropped plans to use the technology on campus, following student outcry.

But in this round of protests, far more than campus police are involved. Columbia called in the NYPD to clear Hamilton Hall, which had been occupied by protesters. And on Emory’s campus, the Atlanta Police Department and Georgia State Patrol were sent in to clear out a campus common area. The scene is being repeated at other schools.

Many of those professional law enforcement agencies embrace facial recognition software, and several have struck deals with technology companies to expand their use of it. In many cases, that’s why protesters are covering up. (It is worth noting that some protesters hide their identities because they are not students or affiliated with the university at all.)

The concerns today’s protesters have about facial recognition were born, in part, from the Black Lives Matter movement. Masks worn then were largely a COVID-19 precaution—and far from every protester wore one. The discovery that the New York Police Department had used facial recognition software to track down some activists was a wake-up call.

Even before that, though, protesters in Hong Kong’s 2019 movement against a proposed extradition law went to extraordinary lengths to avoid being identified by the surveillance state, going so far as to spray-paint security cameras.

Universities have been discouraging students from wearing masks while protesting the situation in Gaza. Those efforts could be having the opposite effect, though. The University of North Carolina reportedly cited campus policy and state law against wearing masks in a message to the campus chapter of Students for Justice in Palestine. And the University of Texas at Austin sent a letter to students saying it was a violation of university rules to wear a mask to conceal one’s identity while obstructing law enforcement. (The school eventually called in state troopers to break up the rally.)

There are, of course, strong opinions on both sides of the protests about the situation in Gaza. Given the state of affairs in the region and the U.S.’s involvement, protests on some college campuses were almost certain to happen. But advances in AI and facial recognition technology in the past few years have rewritten the rules for protesters—and in some ways that is amplifying tensions, turning a delicate situation into a potential powder keg.

And it may take time before we know just how much surveillance tech is being used. “After large-scale protest movements, it often takes months—if not years—to learn the extent of police surveillance that the protesters were subjected to,” says Matthew Guariglia, a senior policy analyst at the Electronic Frontier Foundation. “EFF has for years warned of the dangers of face recognition technology, including threats to people’s First Amendment-protected right to protest. For this reason, many cities across the United States have banned and should continue to ban government use of this invasive technology. The ability of police to identify protestors using biometric data, either in real time or after the fact, makes demonstrators vulnerable to retaliation for their political speech and has the potential to chill people’s willingness to engage in constitutionally protected activities.”

Because students and faculty don’t have the same influence over campus policing that residents of a city might wield, Guariglia recommends student newspapers and student government bodies pay close attention to the use of surveillance on campus. “Ultimately, school administrators must recognize how surveillance undermines their core responsibility: creating a positive environment for learning, engagement, expression, and self-realization,” says Guariglia.

