We all know there is a lot of shady stuff happening online, especially when it comes to advertising and AI. Both mess with people’s trust, and most people don’t realize it until it’s too late. Case in point: let’s talk about my husband and my father.
Advertising
One of the biggest issues is when companies sneak in advertising by making it look like regular content. Influencers do this constantly, posting as if they are sharing something personal when they are actually getting paid. There’s no label, no transparency. This breaks FTC rules and crosses the line on trust (Tate, 2019). When people figure out they were misled, they feel played, and it makes them question the brand and the whole platform.
It is not just about who is delivering the message, but how it is designed. Native advertising is meant to blend in so well with real content that people don’t even realize they’re being sold something. Case study: my husband Joe, one of the smartest people I know. He has a doctorate, for Pete’s sake. But ask him to do anything online and it’s like asking a 5-year-old for the square root of 30. A man he follows for cooking, yes cooking, uses the same spices in all his posts. Why? Because he makes his own spices. But he will also tell you where he gets everything else, from gadgets to decor, with links to all of it under his post. Underneath, it will say #ad.
One study showed that when brand placement and content quality are high, people are more likely to trust the advertisement, even if they do not know it is an advertisement (Kim et al., 2020). That’s the problem: people are influenced without knowing it, and when they find out, trust can vanish fast. My husband swears by this man, and I have a ton of gadgets and spices to prove it. He has finally realized that he didn’t need everything this man was selling just to make a great steak.
AI and Fake Content
Then there’s AI. First, a lot of people don’t even realize when they’re talking to a bot. No one tells them. So they respond as if a real person were listening when it’s just a chatbot.
It gets even messier when AI is used to generate fake articles or deepfakes that look totally real. Tate (2019) points out that even trained researchers can struggle to tell when something is fake (p. 118). If the pros can’t tell, regular users definitely can’t. Let’s say a brand wants to promote a new supplement. Instead of hiring real customers or experts, it generates dozens of glowing AI video reviews that sound like real people with specific experiences. The post might even include a fake profile picture made with AI.
None of it is true, but it looks legit. If someone buys that product based on those reviews and finds out later that it was all manufactured by AI? That’s not just misleading; it’s unethical.
References
Kim, K., Pasadeos, Y., & Barban, A. (2020). Editorial content in native advertising: How do brand placement and content quality affect native advertising effectiveness? Shapiro Library. https://libguides.snhu.edu
Tate, M. A. (2019). Web wisdom: How to evaluate and create information quality on the web (3rd ed.). CRC Press.