AI News Domination: How the Future of Content Control Is Shaping Humanity—and What We Can Do About It

As an AI and digital media content consultant who has worked in communications agencies and written articles for news organizations, I’ve seen how AI is transforming content creation from the inside. For brands, AI-generated content is already the norm. But there’s a stark difference between using AI to write an ad for Nike and using it to shape the news you see on CNN. Writing an ad is one thing: AI can speed up the process and make it more efficient. But when AI writes the news, the stakes are exponentially higher. It’s no longer just about selling products; it’s about shaping public opinion, controlling narratives, and even manipulating truth.

AI News Bubble: A Reality We’re Already Living In

Let’s not sugarcoat it: this isn’t some far-off future we’re speculating about. AI is already deeply embedded in newsrooms, actively influencing the stories we read and the way they’re told. Major outlets like Reuters and The Associated Press rely on AI to write financial reports and sports updates, while Forbes uses AI to assist journalists with content generation. These aren’t isolated cases; they represent a massive shift in the way news is created.

The problem is that AI doesn’t just help write stories—it also decides which stories matter. Algorithms optimize for engagement, not accuracy, truth, or balance. This means that sensationalism and fear are often prioritized over nuance and context, leading to a kind of “AI news bubble” where the most clickable content dominates. And unlike human editors, AI doesn’t have a moral compass or journalistic values guiding these decisions. Its goal is to maximize attention and profits.
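
To make that dynamic concrete, here is a deliberately toy sketch of an engagement-first ranker, written in Python. Every field name, weight, and number below is invented for illustration; it reflects no real outlet’s system.

    # A toy, hypothetical feed ranker: the objective rewards predicted
    # clicks and shares, and accuracy never enters the score.
    from dataclasses import dataclass

    @dataclass
    class Article:
        headline: str
        predicted_click_rate: float  # model's estimate of clickability (0-1)
        predicted_shares: float      # expected social amplification (0-1)
        accuracy_score: float        # hypothetical fact-check rating (0-1)

    def engagement_score(a: Article) -> float:
        # Engagement only; accuracy_score is never read.
        return 0.7 * a.predicted_click_rate + 0.3 * a.predicted_shares

    feed = [
        Article("Nuanced analysis of the new housing bill", 0.02, 0.10, 0.95),
        Article("You won't believe what this senator just did", 0.18, 0.90, 0.40),
    ]

    # The sensational item ranks first, regardless of accuracy.
    for a in sorted(feed, key=engagement_score, reverse=True):
        print(f"{engagement_score(a):.3f}  {a.headline}")

Nothing in that two-line objective penalizes inaccuracy, and that omission, scaled up to a real recommender, is the structural problem described above.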

The Difference Between Helping and Controlling

Here’s an important distinction I’ve come to realize in my work: there’s a big difference between having AI help you edit something and using AI to write the entire piece without your input. When I write articles for clients or publications, AI tools can be incredibly useful for checking clarity, correcting grammar, and polishing my writing. But the perspective and core ideas are still mine. I’m the one deciding what I want to say, and AI simply helps me say it better.

Where things get dangerous is when AI starts picking the stories we write, or worse, subtly changing the meaning of what we’ve written. This is something I’m constantly aware of when I use AI in my work. AI is a powerful tool, but it often tries to soften or alter certain points, especially where it might come across as the “bad guy.” For example, while writing this very article, AI tools frequently suggested adjustments that would downplay the criticism of AI’s role in media. It’s a subtle form of influence, but over time these small tweaks can strip the nuance out of our work and replace it with a more algorithmically palatable narrative.

I’ve seen this firsthand. The other day, I asked Google’s AI about a topic with a lot of nuance, something open to interpretation, with multiple perspectives. I wanted to explore the different sides of the issue, but the AI didn’t give me that. It presented the topic in a definitive, one-sided way, and when I pressed for more nuance, it simply refused to engage with alternative perspectives. That’s exactly the issue I’m talking about: AI isn’t just giving us facts; it’s controlling which facts we get and how we should interpret them. This is a subtle but dangerous erosion of critical thought and debate.

Content for Brands vs. Content for News: The Stakes Are Different

To be clear, AI has a place in content creation for brands. I’ve seen how AI can speed up the production of copy for marketing campaigns or product descriptions, and it can help brands quickly adjust messaging to fit trends or consumer preferences. But that’s where the line should be drawn. Writing an ad for Nike is vastly different from writing a piece of journalism that could influence public opinion on global conflicts, politics, or health.

Brand content is designed to sell, and AI is a great tool for that. But when it comes to news, AI’s focus on clickability can distort the truth. News should aim to inform, provoke thought, and sometimes challenge readers. When AI takes control of that process—prioritizing engagement over substance—it compromises the core values of journalism. The stories we need to hear are often the least sensational, and AI doesn’t understand that. It only knows what will generate the most attention.

AI’s Philosophical Dangers: Influence is Already Automated

This is not just about the automation of tasks—it’s about the automation of influence. Thought leaders like Elon Musk and Nick Bostrom have long warned that the real danger of AI lies not in robots taking over the world, but in how it centralizes control over information. Musk has pointed out that AI allows a small number of companies or governments to dictate what people see, hear, and believe. This isn’t some future threat—it’s happening right now.

AI algorithms are already influencing public perception in ways we’re only beginning to understand. They don’t just curate the news—they create it, optimizing for engagement and profit. They amplify biases, push sensationalism, and quietly manipulate our reality. This automation of influence is one of the most profound shifts in human communication we’ve ever seen, and it’s happening right under our noses.

What Can We Do? Taking Immediate Action

So, what’s the solution? If AI is already transforming newsrooms and shaping the media we consume, is there anything we can do to stop it? The answer is yes—but we need to act now.

  1. Seek Out Human-Driven News: There are still independent news outlets that resist the pull toward AI automation. Sites like ProPublica, The Guardian, and The Intercept continue to prioritize investigative journalism and human editorial judgment. Supporting these organizations by subscribing, donating, or simply reading their work is one of the most direct ways we can fight the AI-driven news bubble.

  2. Demand Transparency: News organizations should be required to disclose when AI has been involved in creating content. This is critical. Just as we demand transparency around conflicts of interest in journalism, we need the same for AI involvement. We deserve to know when the information we’re consuming has been shaped by an algorithm (a hypothetical sketch of what such a disclosure could look like follows this list).

  3. Push for Regulation: Governments need to step in and regulate how AI is used in journalism. AI shouldn’t be left to run unchecked, optimizing solely for engagement. Just as we have laws against misleading advertising and misinformation, we need similar rules for AI-generated content in the news to ensure it serves the public interest rather than just corporate profits.

  4. Diversify Your News Consumption: It’s crucial to diversify where you get your news. Relying solely on AI-curated feeds or a single source limits your understanding of complex issues. Make an effort to read news from various perspectives, including independent, human-driven outlets. Critical thinking is essential in a media environment increasingly shaped by algorithms.
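
As a thought experiment on point 2, here is what a machine-readable AI-involvement disclosure could look like, sketched in Python. The involvement levels, field names, and example values are all invented; no existing standard is implied.

    # A hypothetical disclosure record a news organization could attach
    # to each story, declaring how AI was involved in its creation.
    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Optional

    class AIInvolvement(Enum):
        NONE = "none"            # fully human-written and edited
        ASSISTED = "assisted"    # AI used for grammar and clarity checks
        DRAFTED = "drafted"      # AI produced a draft that a human revised
        GENERATED = "generated"  # AI wrote the piece with minimal oversight

    @dataclass
    class Disclosure:
        involvement: AIInvolvement
        tools_used: list[str] = field(default_factory=list)
        human_editor: Optional[str] = None

    # The kind of label this article argues readers deserve to see.
    label = Disclosure(
        involvement=AIInvolvement.ASSISTED,
        tools_used=["grammar checker"],
        human_editor="Jane Doe",
    )
    print(label)

Even a label this simple would let readers, and aggregators, distinguish AI-assisted editing from fully AI-generated reporting, which is exactly the distinction drawn earlier between helping and controlling.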

A Glimmer of Hope: Humans Still Matter

The good news is that while AI is rapidly advancing, humans still play a crucial role in shaping media. AI can generate content, but it still lacks the nuance, ethical considerations, and deeper understanding that human journalists and editors bring. New outlets like The Correspondent and others focused on slow, thoughtful journalism are proving that there is still demand for well-researched, human-crafted stories. These platforms provide the context and depth that AI-driven, engagement-optimized content often misses.

Hear It for Yourself: A Podcast on AI and Media

If you’re interested in diving deeper into this topic, here’s something fascinating: Google’s new NotebookLM platform can turn any subject into a back-and-forth, two-person podcast. I tried it, and the result was incredibly realistic. Check out this AI-generated podcast discussing the very issue of AI in news and media, created directly from this article; it’s a startlingly lifelike example of where this technology is heading.

Have a Listen!