Generative AI models excel at producing nonsense.

The Perils of Generative AI: When Models Master the Art of Bullshit

In the age of generative AI, we find ourselves in a fascinating—and somewhat unsettling—new landscape. Technology evolves at a breathtaking pace, yet it brings with it a troubling capacity for misinformation. In this context, the philosopher Harry Frankfurt’s insights on the nature of bullshit resonate more than ever.

Understanding Bullshit vs. Lies

The Philosophical Divide

Lies and truth-telling, as Frankfurt elucidated in his seminal essay, On Bullshit, are opposite sides of the same coin. A liar acknowledges the truth but chooses to misrepresent it. However, a bullshitter? They operate on an entirely different plane. "He pays no attention to it at all," Frankfurt noted, underscoring how bullshit is a far greater enemy of the truth than lies.

A Worrying Trend in AI

Frankfurt passed away in 2023, mere months after the launch of ChatGPT, and his essay has since gained a haunting relevance. In many ways, the output of AI’s large language models echoes his concerns. Under the hood, these models rely on statistical correlation, predicting which words are likely to follow which, rather than on any true understanding of facts or reality.
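That distinction can be made concrete with a toy example. The sketch below is a minimal bigram "language model," invented purely for illustration and nothing like a production system: it picks the next word from co-occurrence counts in its tiny training text, so a frequently repeated falsehood beats an infrequently stated truth.

```python
# Toy bigram "language model": the next-word choice is driven purely by
# co-occurrence counts in the training text, never by whether the
# resulting sentence is true.
corpus = ("the moon is made of rock . "
          "the moon is made of cheese . "
          "the moon is made of cheese .").split()

# Count how often each word follows each preceding word.
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation."""
    followers = counts[word]
    return max(followers, key=followers.get)

# The model confidently continues "made of" with the *frequent* word,
# not the *true* one, because the corpus mentioned cheese more often.
print(most_likely_next("of"))  # -> "cheese"
```

Real models are incomparably more sophisticated, but the underlying objective is the same: plausibility given the training data, not fidelity to the world.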

The Strength and Dangers of Generative AI

Superhuman Bullshit Capabilities

One of the notable advantages—and risks—of generative AI lies in its ability to "sound authoritative." Carl Bergstrom and Jevin West, professors at the University of Washington, aptly describe this phenomenon: the models can project a veneer of expertise on nearly any topic, regardless of factual accuracy. This characteristic has led them to label these systems "bullshit machines" and to build an online course around the question: Modern-Day Oracles or Bullshit Machines?

The Hallucination Hazard

Another alarming trait of these AI systems is their propensity to “hallucinate” facts—essentially fabricating information. Some researchers posit that this facet is an inherent characteristic of probabilistic models. While AI companies are striving to enhance the models through data quality improvements and fact-checking systems, the journey to perfection is fraught with obstacles.

As Reuters reported, a lawyer for Anthropic recently admitted in a California court that a hallucination by the company’s system led to the filing of an incorrect citation. Even Google’s chatbot warns users that it can make mistakes, underscoring the need for caution when engaging with these tools.

The Quest for Truth in AI

The Role of Reinforcement Learning

AI developers often turn to reinforcement learning from human feedback to improve their models. However, this method can inadvertently introduce bias and distortions. As pointed out by the Financial Times, various AI models have shown significant discrepancies when describing their creators and their peers.
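How feedback can distort rather than correct is easy to caricature in a few lines. The sketch below is a deliberately simplified, hypothetical illustration (the rater function and candidate answers are invented, not drawn from any real training pipeline): if human raters systematically prefer confident phrasing, preference-weighted updates push the model toward confident answers whether or not they are true.

```python
# Toy illustration of preference-based feedback rewarding style over truth.
# Our simulated raters prefer confident-sounding answers, so the "policy"
# drifts toward them regardless of accuracy.
answers = [
    {"text": "The answer is definitely X.", "truthful": False, "weight": 1.0},
    {"text": "I am not sure; it may be X.", "truthful": True,  "weight": 1.0},
]

def rater_score(answer):
    """Simulated human preference: rewards confident phrasing."""
    return 2.0 if "definitely" in answer["text"] else 1.0

# One round of feedback: upweight each answer in proportion to its score.
for a in answers:
    a["weight"] *= rater_score(a)

best = max(answers, key=lambda a: a["weight"])
print(best["text"])  # the confident but untruthful answer wins
```

The point is not that reinforcement learning from human feedback always fails, but that it optimizes for what raters reward, and raters are not infallible judges of truth.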

The Concept of Careless Speech

In a thought-provoking paper from the Oxford Internet Institute, researchers Sandra Wachter, Brent Mittelstadt, and Chris Russell introduce the idea of "careless speech": a mode of discourse that inflicts invisible harm on society over time, eroding our collective intelligence and leaving us a more ignorant populace.

Intentionality in AI

Unlike humans, chatbots lack intentionality. They prioritize plausibility and engagement rather than truthfulness, creating a reality where they might invent facts for no discernible reason. This raises concerns about how they may infiltrate our knowledge base and shape societal narratives.

The Future: Higher Standards or Persistent Bullshit?

As we navigate this brave new world, a critical question emerges: Can generative AI be designed to uphold higher standards of truthfulness? Will there be a market demand for "truthful" AI, akin to standards set for advertisers or medical professionals? Wachter suggests that genuinely improving these models requires substantial time and resources, which run counter to the efficiency-driven ethos of current models. She compares this aspiration to wanting a car to become a plane—the laws of gravity simply don’t allow it.

The Silver Lining

Despite these challenges, it’s essential to recognize the potential benefits of generative AI. In the right context, these models can drive valuable business and political innovation. Indeed, many careers are built on the careful art of bullshit. However, equating these models with truth machines is both delusional and dangerous.


For a deeper dive into the ethical implications and technologies surrounding generative AI, check out related articles on platforms like Harvard Business Review or Wired.


In this uncharted territory, it remains crucial that developers, users, and regulators work together to navigate the complexities of generative AI responsibly and thoughtfully. Knowledge is power, and in an era where bullshit can be generated at the speed of a keystroke, discerning the truth has never been more important.
