Texas needs legislation to combat bots — yesterday


Texas legislators have filed hundreds of bills in anticipation of the 2019 legislative session. None of them addresses the increasing flow of misleading and false information that is being spread across the state by armies of bots each day.

‘Bots’ are computer programs that can replicate limited aspects of human behavior, and they are becoming increasingly numerous and sophisticated on social media and other online forums. They are widely believed to have been used to spread misinformation and interfere with voters in the 2016 and 2018 elections.

Failing to address a significant emerging problem just won’t cut it for a body that meets only every other year. Texas must take the lead in regulating intentionally false messages that are authored or spread by these artificially intelligent computer programs.

Limiting the influence of bots is not as simple as penalizing the creator. The global nature of the Internet makes identifying who created a bot difficult. And while a creator sets a bot’s parameters, it is often the bot itself that composes a message, or links to, shares or comments on another posting — which makes responsibility hard to assign.

Access to truthful information is crucial to democracy, and Texans have received more than their fair share of misinformation campaigns this year. The Austin bombings in March and the Santa Fe High School shooting in May, for example, were both followed by widespread misinformation campaigns and conspiracy theories that were liked, shared, retweeted and otherwise spread by artificial intelligence (AI) actors within the communities people form online.

So what kind of bill should legislators file?

Here are four things lawmakers could focus on when seeking to limit intentionally false and misleading information that is published by AI communicators:

1. The First Amendment. First and foremost, the bill must avoid limiting freedom of expression. This can be tricky, and it certainly narrows the range of options. The courts have not determined whether AI should receive the same First Amendment protections as humans. We do know the Supreme Court has found that corporations, also non-human actors, have free speech rights. These guidelines assume that bots have limited First Amendment protections, which can be curtailed when their messages do not contribute to public discourse.

2. Transparency. One of the main tools lawmakers can use is requiring transparency: AI actors must be labeled. This approach does not limit any expression; it merely lets us know when we are interacting with AI.

California’s new bot law, which goes into effect next summer, uses this approach. While it can be argued that First Amendment rights to anonymity could be violated by such an approach, the Electronic Frontier Foundation, a non-profit that advocates for protecting civil liberties in digital communication, supported the final version of the law. Similarly, the European Union released a set of agreed-upon practices regarding online disinformation in September that includes identifying AI accounts.

3. Encourage fact-checking. Texans don’t want the government telling them what is and is not true, and the First Amendment does not allow the government to censor a person or a corporation. At the same time, since the Facebook/Cambridge Analytica scandal last spring, we have become increasingly aware of the power these tech giants have in deciding what messages we do and do not see, and of how much misinformation flows through their forums.

These companies have outsized power, and Section 230 of the Communications Decency Act shields them from almost any liability for what happens in their forums. This means the government cannot compel them to remove false and misleading accounts or the information those accounts produce. Creating grants to support organizations that monitor bot activity and watch for misinformation campaigns would therefore provide a First Amendment-neutral approach to protecting online discussions. Such organizations could alert social media outlets, as well as the news media, to misinformation campaigns. When social media companies fail to take action, the news media can let people know.

4. Media literacy in schools. Another approach that does not limit free expression is for Texas to take the lead in arming students with the ability to spot false and misleading information. Mandating media literacy training in schools could give our students an advantage in navigating the increasingly complicated information environments they encounter online.

This is likely an imperfect list, but it takes great care to keep the government from intruding on our right to communicate or deciding what is and is not true. Writing a law that protects the public against false and misleading information created, shared and liked by AI communicators is not an easy task. That does not mean our elected leaders should shirk their responsibility to address this emerging threat to democratic society.

Disclosure: Southern Methodist University has been a financial supporter of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune's journalism. Find a complete list of them here.

Jared Schroeder

Assistant professor of journalism, Southern Methodist University