ChatGPT makes an exception to its rule on political messages: What this means for AI and its users


Regulation Challenges and Enforcement Gaps 

The challenges OpenAI faces in enforcing its rules on political messages generated by ChatGPT highlight a larger issue in the AI space: the enforcement gap. Despite clearly stated regulations, implementation and adherence often fall short, a pattern observed repeatedly across the tech industry. The issue is further complicated by the increasing sophistication of AI tools and the diversity of their applications, which raises concerns about potential misuse.

“If it’s an ad that’s shown to a thousand people in the country and nobody else, we don’t have any visibility into it.” - Bruce Schneier, a cybersecurity expert and lecturer at the Harvard Kennedy School

Current regulations are clearly lagging, as illustrated by the Federal Election Commission's ongoing review of a petition filed by the advocacy group Public Citizen. The petition seeks to bar politicians from deliberately misrepresenting their opponents in AI-generated ads. However, commissioners from both parties have expressed doubt about the agency's authority to intervene without explicit direction from Congress, signaling potential political hurdles.

Political Firms and AI Usage 

Despite these challenges, political firms are keen to harness AI technology. Higher Ground Labs, for instance, has been publicizing how its start-ups are employing AI: Swayable uses AI to test and optimize political messaging, while Synthesia creates multi-language videos from text. These approaches underline a paradigm shift in how politicians engage with voters and the growing influence of technology on politics. Nevertheless, these advancements also bring the risk of abuse and the potential to spread disinformation more rapidly and at lower cost.

Addressing the Risks 

OpenAI CEO Sam Altman has expressed concerns about the impact of AI on future elections, particularly the potential for "one-on-one interactive disinformation." This concern is shared by other tech executives and politicians alike. In response, OpenAI has been recruiting former social media company workers to develop policies addressing the unique risks posed by generative AI.

“The company’s thinking on it previously had been, ‘Look, we know that politics is an area of heightened risk. We as a company simply don’t want to wade into those waters.’” - Kim Malfacini, who works on product policy at OpenAI

OpenAI appears aware of the need to strike a balance between allowing political use of ChatGPT and preventing its misuse. The recent rules update banning "scaled uses" in political campaigns or lobbying is a step in that direction. The challenge, however, remains in enforcing these rules effectively, with Malfacini acknowledging that their nuanced nature complicates enforcement.

“We want to ensure we are developing appropriate technical mitigations that aren’t unintentionally blocking helpful or useful (non-violating) content, such as campaign materials for disease prevention or product marketing materials for small businesses,” she said.

A host of smaller companies involved in generative AI do not have policies on the books and are likely to fly under the radar of D.C. lawmakers and the media.

Nathan Sanders, a data scientist and affiliate of the Berkman Klein Center at Harvard University, warned that no one company could be responsible for developing policies to govern AI in elections, especially as the number of large language models proliferates.

“They’re no longer governed by any one company’s policies,” he said.

