Effects of Using AI in Political Campaigns

Artificial intelligence is a hot topic. I’m not researching the implications of AI for politics, but I have questions I’d like answered.

What are the effects of using AI in political campaigns? Under what conditions are AI-generated political messages effective? Are there conditions where they backfire against the person or organization that uses them? By use in political campaigns, I’m thinking about more than electoral campaigns by candidates, parties, and political action groups; I’m also thinking about ongoing issue-driven political campaigns, like public campaigns for or against abortion access, gun control, and other salient issues.

Microtargeted Political Messages

Microtargeting refers to the practice of tailoring campaign messages to specific demographic groups based on detailed data analysis. This involves using AI algorithms to process vast amounts of information, including voter preferences, behaviors, and even individual interests.

Microtargeting is often executed through online platforms and social media. AI algorithms analyze user data to identify patterns, allowing political campaigns to understand the nuanced preferences of different voter segments. For instance, individuals who engage with certain types of content, respond to specific issues, or follow particular pages may be categorized into distinct microtargeted groups.
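To make that mechanism concrete, here’s a minimal sketch of how this kind of segmentation might work, using invented engagement features and k-means clustering from scikit-learn. It’s illustrative only, not a reconstruction of any real campaign’s pipeline.

```python
# A minimal sketch of deriving microtargeted segments from engagement data.
# All feature names and values here are hypothetical illustrations.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-user engagement counts:
# [shares_on_gun_policy, clicks_on_local_news, follows_party_pages]
engagement = np.array([
    [12, 1, 0],
    [10, 2, 1],
    [0, 9, 3],
    [1, 8, 2],
    [3, 3, 7],
    [2, 4, 8],
])

# Standardize so no single feature dominates the distance metric.
features = StandardScaler().fit_transform(engagement)

# Cluster users into segments; each segment could then receive a tailored message.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for user, segment in enumerate(kmeans.labels_):
    print(f"user {user} -> segment {segment}")
```

A real pipeline would feed far richer data into far more sophisticated models, but the core step is the same: group people by behavioral patterns, then match message variants to groups.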

The effectiveness of microtargeting in political campaigns is a subject of ongoing debate. By tailoring messages to address the unique interests of subgroups, campaigns aim to create a more personalized and persuasive communication strategy. However, if microtargeting strategies are perceived as manipulative or intrusive, they may have unintended consequences for a campaign’s overall reputation.

Surveys and experiments can be conducted to measure the impact of microtargeting on voter behavior, assessing whether targeted messages lead to increased support or changed opinions. I think recent experience has shown that targeting is effective, but I wonder about the mechanisms and conditions. Does it work better on some political issues than others? Are certain types of messages more effective than others; for example, local facts versus a personalized candidate greeting?
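To make the measurement question concrete, here’s a minimal sketch of how one such experiment might be analyzed: a two-proportion z-test comparing stated support between a group shown a microtargeted message and a group shown a generic one. The counts are invented for illustration, and the sketch uses only Python’s standard library.

```python
# A minimal sketch of analyzing a randomized message experiment.
# Hypothetical counts: respondents expressing support after each message.
from math import sqrt
from statistics import NormalDist

targeted_support, targeted_n = 264, 500   # microtargeted-message group
generic_support, generic_n = 231, 500     # generic-message group

p1 = targeted_support / targeted_n
p2 = generic_support / generic_n
pooled = (targeted_support + generic_support) / (targeted_n + generic_n)

# Two-proportion z-test for the difference in support rates.
se = sqrt(pooled * (1 - pooled) * (1 / targeted_n + 1 / generic_n))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"support: targeted {p1:.1%}, generic {p2:.1%}")
print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

The interesting designs would vary the message type (local facts vs. personalized greeting) and the issue, not just targeted vs. generic.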

Maybe the best question is: Are microtargeted political messages effective when the recipient knows he or she is being microtargeted? Pursuing this further, is there a difference between a campaign explicitly disclosing its microtargeting and the ads being exposed as microtargeted by someone else? Does the basis for microtargeting affect how the recipient receives the ad? For example, are people likely to feel differently about ads targeted based on their zip code, public-directory information, social media history, search history, and so on? Some styles of microtargeting feel more invasive than others do, but I don’t know whether people share that sensibility.

Deepfakes and Political Attacks

Deepfakes refer to manipulated audio or video content created using artificial intelligence, often employing deep learning techniques to superimpose a person’s likeness onto another’s. This technology enables the fabrication of realistic-looking videos or audio recordings that can deceive their audience into believing false narratives.

The effectiveness of deepfakes in spreading disinformation lies in their ability to exploit the visual and auditory trust that people place in media. When a deepfake convincingly portrays a public or political figure saying or doing something they never did, it has the potential to sway public opinion, influence elections, or damage reputations.

Surveys and controlled experiments can assess the impact of exposure to manipulated media on people’s beliefs and attitudes. Researchers may explore the cognitive processes involved in distinguishing real from fake, as well as the factors that contribute to susceptibility to misinformation.

The questions I have about deepfakes used for political attacks are similar to those I have about microtargeting. Are deepfakes effective in political campaigns? Does it matter if the audience knows it is receiving a deepfake message; is there a difference between explicit disclosure of deepfaking and the deception being exposed by someone else? When answering questions like these, I think it’s important to distinguish between whether people like a deepfake message, whether they think it’s normatively acceptable expression, and whether it is effective. I imagine that some people would say deepfakes are bad and wrong, but still be affected by them in the manner the creator intended.

More questions: Can audiences identify real vs. deepfake messages? My guess is that audiences can identify extreme examples – a candidate saying something they would “never say” – but have a difficult time identifying deepfakes of something the candidate “might say.” That would potentially make subtle deepfakes more damaging than extreme ones.
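One way researchers could quantify this is with signal detection theory: compute a d′ score from how often viewers correctly flag fakes (hits) versus how often they wrongly flag real clips (false alarms). Here’s a minimal sketch with invented counts; a d′ near zero would mean viewers are guessing, which is roughly what I’d expect for “might say” deepfakes.

```python
# A minimal sketch of scoring real-vs-deepfake discrimination with d'.
# All counts here are hypothetical.
from statistics import NormalDist

real_clips, fake_clips = 40, 40
hits = 29          # fake clips correctly flagged as fake
false_alarms = 14  # real clips wrongly flagged as fake

# Log-linear correction so rates of exactly 0 or 1 don't break the z-transform.
hit_rate = (hits + 0.5) / (fake_clips + 1)
fa_rate = (false_alarms + 0.5) / (real_clips + 1)

# d' measures discrimination ability; 0 means viewers are at chance.
z = NormalDist().inv_cdf
d_prime = z(hit_rate) - z(fa_rate)
print(f"hit rate {hit_rate:.2f}, false-alarm rate {fa_rate:.2f}, d' = {d_prime:.2f}")
```

Comparing d′ across “never say” and “might say” stimuli would directly test my guess above.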

I’d also like to know how deepfakes play to different audiences. If, for example, a deepfake video meant to ridicule Joe Biden is shown to Republican audiences, will it be effective even if they know it is fake? How would a Democratic audience feel about that same Biden deepfake?

Many more questions than answers in this post. Hopefully, I’ll get the chance to learn the answers soon!
