The rise of AI tools like ChatGPT over the last year has made artificial intelligence part of our daily lives, and it’s only a matter of time before it makes its way into political advertising.
Normally, politics runs years, if not decades, behind technological advancements. At best, lawmakers usually aim to appease or contain technology that vastly outpaces their ability to write laws to control or stop it. But shockingly, the Texas Legislature saw deep fakes coming and amended an existing law (Section 255.004, Texas Election Code) in 2019 to add a ban on the use of deep fakes in elections.
Now, though, the fate of the law is in question. In May 2023, the Fifth Court of Appeals in Dallas handed down Ex Parte Stafford, No. 05-22-00396-CR (Tex. App. May 1, 2023), which declared at least one part of the law unconstitutionally broad under the First Amendment: political speech is protected, so any attempt to limit it must be narrowly tailored and further a compelling state interest. The state satisfied the compelling-state-interest prong but failed the narrow-tailoring prong.
BUT WAIT, THERE’S MORE: The problem is that the court didn’t actually rule on the deep fake part of the statute. It ruled on subsection (b), which states:
(b) A person commits an offense if, with intent to injure a candidate or influence the result of an election, the person knowingly represents in a campaign communication that the communication emanates from a source other than its true source.
The deep fake portion of the statute is subsections (d) and (e):
(d) A person commits an offense if the person, with intent to injure a candidate or influence the result of an election:
- Creates a deep fake video; and
- Causes the deep fake video to be published or distributed within 30 days of an election.
(e) In this section, “deep fake video” means a video, created with the intent to deceive, that appears to depict a real person performing an action that did not occur in reality.
SO WHAT?: Well, while the deep fake portion of the statute hasn’t yet been tested, subsection (b) has failed judicial review. This means that, potentially, someone could produce a deep fake depicting actions that really did occur (and so stay within the bounds of (d) and (e) above) and put it online without any acknowledgment that it wasn’t produced by the person being deep faked, since the provision banning misrepresentation of a communication’s source is the one that was struck down.
Scary, huh? There are some checks and balances on this idea, but without clearer guidance from the legislature or the courts, the possibility is very real, and we should all take political ads with a grain of salt until this is sorted out.
BOTTOM LINE: Do your own research. Don’t trust a political ad just because it says something you like (or something you hate), because there’s no guarantee at this point that it’s real.
DIG DEEPER: https://casetext.com/case/ex-parte-stafford-11