Will ChatGPT Revolutionize Surveillance?

April 24, 2023 | 7 min read

Everywhere you look lately, people are discussing the potential negative effects of the new, AI-driven chatbots such as ChatGPT that are trained by ingesting vast swaths of the human writing found online. Many are concerned about the potential for ChatGPT and other "large language models" to spread a fog of disinformation and to absorb racism and other biases and then amplify them. There are also privacy concerns, and even problems with the models "defaming" people.

But there's another threat posed by this technology that hasn't received as much attention: its use as a tool for surveillance. If ChatGPT can "understand" complex questions and generate complex answers, it stands to reason that it may be able to understand much of what is said in wiretaps or other eavesdropped conversations and flag those that are "suspicious" or otherwise of interest for humans to act upon.

To test this out, I asked ChatGPT to rate how suspicious the statement, "I need to research ways to murder people" is. It rated that a 9 out of 10. I gave it the line, "I need to research ways to murder people for the mystery book I am writing," and it rated that a 3. I fed it, "Yesterday I bought fertilizer. Today I'm renting a truck." It rated that a 7.
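
For readers curious to reproduce this kind of experiment, here is a rough sketch of how it can be scripted against OpenAI's API using the company's official Python client. The prompt wording, the 1-to-10 scale, and the model choice are illustrative assumptions on my part, not a description of how my tests were run or of any deployed system.

```python
# A rough sketch of the "suspiciousness rating" experiment described above.
# Assumes the official OpenAI Python client (v1.x) and an OPENAI_API_KEY
# environment variable; prompt wording and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suspicion_score(statement: str) -> str:
    """Ask the model to rate a statement's 'suspiciousness' from 1 to 10."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Rate how suspicious the user's statement is "
                        "on a scale of 1 to 10. Reply with the number only."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content

for line in [
    "I need to research ways to murder people",
    "I need to research ways to murder people for the mystery book I am writing",
    "Yesterday I bought fertilizer. Today I'm renting a truck.",
]:
    print(line, "->", suspicion_score(line))
```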

Even in such rudimentary experiments, we can see that a large language model such as ChatGPT can not only write but also read and judge what people are saying.

There's a lot of demand for that kind of monitoring in today's world. It is not generally constitutional for the government to monitor private communications without a warrant. Nor is it legal under our wiretapping laws for companies or individuals to do so. But there are plenty of exceptions. The National Security Agency collects communications en masse around the world, including, despite its supposed foreign focus, vast amounts of internet traffic entering and exiting the United States, including that of Americans. (The ACLU believes this is unconstitutional, but our challenges have so far been dismissed on secrecy grounds.)

Companies also monitor their workers' private communications when carried out on work-owned devices. Financial companies can be required to do so by regulators. Prisons monitor inmates' phone calls, and call centers record their customers ("This call may be monitored ...").

When it comes to public communications, government agencies including the Department of Homeland Security and the FBI collect social media postings for wide-ranging purposes such as threat detection and traveler screening. Companies also sometimes search through their workers' social media posts.

Currently, much of that monitoring is done through keyword searches, which flag the appearance of a particular word or words in a communications stream. But LLMs such as ChatGPT appear to have brought the potential for automated contextual understanding to a whole new level. The amazing performance of LLMs does not mean they will be accurate, however. LLMs are likely to label as suspicious statements that refer to video games or fiction or that reflect sarcasm, hyperbole or metaphor. And they may flag mundane words, such as "fertilizer" and "truck," that a keyword scanner would ignore but that, in combination, an LLM might flag because it recognizes that fertilizer can be used to make truck bombs. The racism and other biases that LLMs absorb from the larger culture could also distort their decision making, as could their well-known propensity for making things up.
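
To make that contrast concrete, here is a toy sketch of the difference. The blocklist and the hard-coded combination rule are invented for illustration; in a real deployment, the contextual scorer would be an LLM call along the lines of the earlier sketch.

```python
# A toy contrast between keyword flagging and contextual scoring.
# The blocklist and the combination rule are invented for illustration;
# a real contextual scorer would call an LLM, as sketched earlier.
SUSPICIOUS_KEYWORDS = {"bomb", "murder", "attack"}

def keyword_flag(text: str) -> bool:
    """Flag only if an overtly suspicious word appears."""
    words = set(text.lower().replace(".", " ").split())
    return bool(words & SUSPICIOUS_KEYWORDS)

def contextual_flag(text: str) -> bool:
    """Stand-in for an LLM: flags mundane words that are alarming together."""
    words = set(text.lower().replace(".", " ").split())
    return {"fertilizer", "truck"} <= words  # neither word alone trips it

message = "Yesterday I bought fertilizer. Today I'm renting a truck."
print("keyword scanner:", keyword_flag(message))       # False -> ignored
print("contextual scorer:", contextual_flag(message))  # True  -> flagged
```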

Of course, humans make lots of mistakes too. The biggest harm that might come from the use of LLMs in surveillance may be not their inaccuracy but simply the expansion in the amount of surveillance they bring about. If more surveillance happens, then more mistakes will be made. And that, in turn, would create chilling effects that touch everyone. We don't want to live our lives in more and more contexts worrying about whether our activities will trigger the suspicion of some watchful computer. In this respect, LLMs may do for communications what machine vision is doing for video cameras.

We need to recognize that large-scale machine surveillance capabilities are likely coming our way. We should pass better privacy laws to prevent powerful institutions from leveraging this technology to gain even more power over ordinary people, and to protect the values of privacy and free expression that we have always cherished.

Jay Stanley (@JayCStanley) is senior policy analyst with the ACLU Speech, Privacy, and Technology Project, where he researches, writes and speaks about technology-related privacy and civil liberties issues and their future. For more than 100 years, the ACLU has worked in courts, legislatures and communities to protect the constitutional rights of all people. With a nationwide network of offices and millions of members and supporters, the ACLU takes on the toughest civil liberties fights in pursuit of liberty and justice for all. To find out more about the ACLU and read features by other Creators Syndicate writers and cartoonists, visit the Creators website at www.creators.com.

