The New Threat Affecting Brand Reputation: Misinformation
In 2023, images of a Satanic-themed children’s collection on sale at Target spread like wildfire across social media and a new boycott was born. The only problem? The collection wasn’t real. A designer admitted to creating the images with Midjourney, an AI image generation platform.
A new study reveals that Target is not alone. In a survey of leaders at top U.S. companies, more than 6 in 10 reported that misinformation had affected their corporate reputation, with 1 in 10 reporting a substantial impact. When asked what steps their firms are taking to prevent potential misinformation, 60 percent of respondents said they had plans in place, and they named unhappy consumers, competitors and internal employees as the biggest sources of concern.
AI: Friend or Foe?
Yes, AI does have a role in perpetuating misinformation, from creating misleading AI overviews in search results (looking at you, SGE) to making it all too easy for an unhappy consumer to create an AI-generated image.
But what many brands don’t realize is that AI can also detect misinformation before it spreads. These conspiracies start on social media and in consumer reviews. By the time most brands see the post, it’s already gone viral.
But today, crisis response is all about speed. Getting ahead of a narrative that your stores are selling Satanic clothing or using horse meat, for example, requires that you know what’s coming and prepare your response before it’s too late.
Here’s where AI comes in: flagging those reviews and social media posts and alerting the brand so the team can take immediate action – all in real time. Many brands today use AI in reputation management to automate tasks like creating social media posts or responding to reviews. What they’re missing is the ability to use AI to spot the needle in the haystack: the one review or social media post that alleges a serious threat to the business.
This is why our team created Risk Monitoring. When you’re a brand that gets thousands of reviews and social media posts, it’s not possible to read every single one and flag the ones with alarming claims. Even keyword searches will miss misinformation; would anyone at Target have thought to search for “Satan” in their reviews? Probably not.
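To see why keyword searches fall short, here’s a minimal sketch in plain Python. The review text and keyword list are invented for illustration; the point is that an exact-match filter only catches the wording you anticipated, not a paraphrased version of the same claim.

```python
# Hypothetical reviews, invented for illustration only.
reviews = [
    "Great prices, friendly staff.",
    "This store is selling satanic clothing for kids!",
    "Saw demonic imagery on the children's shirts. Disturbing.",
]

# A keyword filter only catches the exact terms you thought to watch for.
keywords = {"satan", "satanic"}

def keyword_flag(text):
    """Return True if any watched keyword appears as a substring."""
    lowered = text.lower()
    return any(k in lowered for k in keywords)

flagged = [r for r in reviews if keyword_flag(r)]
print(flagged)
# Only the second review is flagged. The "demonic imagery" review makes the
# same alarming claim in different words and slips through -- the kind of
# paraphrase an AI-based classifier is meant to catch.
```

The gap between what a keyword filter catches and what a human would flag is exactly the space an AI model has to cover.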
The Rise of AI-Generated Images
Since OpenAI launched DALL-E 2 in April 2022, AI has created more than 15 billion images across DALL-E 2, Adobe Firefly, Stable Diffusion, Midjourney and more.
These aren’t just artists creating art; anyone can create and publish these images. It takes three seconds to type in “mouse in pasta” and about five for an AI image generator to come up with an image.
The result may be a very cute image of a mouse, but it would only take a few more tweaks to turn it into a terrifying and very unappetizing photo of a mouse swimming around in your linguine.
And with the right following, these can quickly go viral. It takes, on average, 14 days for a video to reach peak viewership on TikTok. It takes one hour for an Instagram reel to get the engagement it needs to go viral. And even worse: tweets that are highly critical or negative are more likely to get a significant number of views.
This is also why Chatmeter analyzes images along with text feedback in reviews, social media posts and more. Photos are featured prominently on your brand’s page on almost every major review site and listing platform. Even a hundred five-star reviews can’t counteract one terrible image depicting a safety, discrimination or harassment issue. And now that such images take just a few clicks to create, this is a real risk.
The bottom line: misinformation isn’t going away. It’s going to get worse, and brands today need not only a plan to respond after a misinformation crisis happens, but an AI-powered platform to see it coming in the first place. Get ahead of it by requesting your demo of Chatmeter or trying out our interactive product tour.