In recent years, LinkedIn has become a central hub for professionals seeking to establish themselves as thought leaders in their respective industries. However, a recent study by Originality.AI revealed a striking shift in online content: an estimated 54% of long-form (100+ words) LinkedIn posts are now AI-generated. Of the 8,795 posts analyzed, the majority were not authored entirely by humans.
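As a quick sanity check on that headline figure, here is a minimal sketch of the implied counts, using only the two numbers reported above (the rounding is ours, not the study's):

```python
# Implied post counts behind the study's headline figure.
# Assumed inputs, taken from the text: 8,795 posts analyzed, 54% flagged as AI-generated.
total_posts = 8795
ai_share = 0.54

ai_posts = round(total_posts * ai_share)      # ~4,749 posts flagged as AI-generated
human_posts = total_posts - ai_posts          # ~4,046 posts presumed human-written

print(f"Estimated AI-generated posts: {ai_posts} of {total_posts}")
print(f"Estimated human-written posts: {human_posts}")
```

In other words, roughly 4,700 of the sampled posts were flagged, which is what makes "the majority" claim hold.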
The trend is not just about automation; it reflects a cultural and practical shift in how professionals share insights, build personal brands, and attempt to maintain visibility on platforms like LinkedIn.
Timeline of AI Influence on LinkedIn Content
To understand this transformation, we must look at how the timeline aligns with key developments in AI tools:
- Late 2022: Public release of ChatGPT by OpenAI
- January 2023 to February 2023: 189% increase in suspected AI usage in posts
- 2024: LinkedIn rolls out new AI tools for Premium users
Between January 2018 and October 2024, Originality.AI observed a dramatic increase in both the use of AI-generated content and the average word count of posts. Since ChatGPT’s launch, the average word count of long-form posts jumped by 107%, suggesting not only that people are using AI, but that AI enables them to write much longer, more polished posts than they could on their own.
Why Professionals Are Turning to AI
The motivations behind this trend are both practical and psychological:
- Pressure to stay visible: On platforms like LinkedIn, consistent posting is tied to reach and reputation.
- Time constraints: AI helps busy professionals generate content quickly.
- Fear of missing out (FOMO): Professionals worry about falling behind peers who leverage AI tools to maintain thought leadership status.
- Tool convenience: LinkedIn’s own AI tool requires only 20 input words to generate a full-length post draft. This dramatically lowers the barrier to entry.
Yet these motivations also create a risk of devaluing originality, authenticity, and true thought leadership.
From Ghostwriters to Ghostthinkers
Ghostwriting is not a new concept on LinkedIn. Executives have long worked with assistants or writers to articulate ideas more clearly. However, there’s an important distinction:
A ghostwriter is a collaborator helping refine an idea that originated from the professional.
In contrast, AI reverses this model. When a professional feeds ChatGPT a generic prompt and edits the result, they act more like a ghostwriter themselves—polishing the machine’s idea. This undermines the essence of thought leadership, which should come from original human insight.
LinkedIn claims that its AI features are meant to assist, not replace. But the platform’s design—where users can create content from a mere 20-word idea—facilitates over-reliance on AI. The consequence is a flood of posts that look polished but lack genuine human perspective.
Is the Internet Going ‘Dead’?
This trend contributes to a broader phenomenon known as the Dead Internet Theory—a growing belief that much of what we see online is no longer produced by real people. According to this theory, a significant portion of today’s web content is generated by bots, AI models, or recycled text, giving the illusion of activity, engagement, and originality.
First discussed in fringe forums around 2021, the theory claims that:
- Real human users are increasingly drowned out by fake profiles, automated posts, and AI-generated interactions.
- Online discussions are often seeded or manipulated by algorithms to guide public opinion or drive engagement.
- Search engine results, comments, and social media feeds are filled with synthetic content designed for clicks, not connection.
While some parts of the theory veer into conspiracy territory, its core idea is becoming harder to ignore—especially as generative AI tools become more powerful and widely used.
Here’s how this plays out across various content types:
| Content Type | AI Application | Consequences |
| --- | --- | --- |
| Product reviews | Automated generation | Loss of trust in reviews |
| News articles | AI-assisted reporting | Homogenized content, less nuance |
| Social media posts | AI scheduling & writing | Reduced authenticity |
| Website copy | Fully AI-driven | Bland, templated tone |
AI content is often grammatically flawless and factually adequate, but it lacks the emotional nuance, original phrasing, and context-aware thinking that human writers bring. Over time, this leads to a web that feels more like a mirror of algorithms than a space for genuine human connection.
Where Is the Social in Social Media?
The most worrying implication of all this is the potential loss of authentic human connection. When both posts and comments are generated by machines, the very idea of “social” media becomes hollow. Tools like EngageAI, which allow users to auto-generate replies to posts, threaten to turn LinkedIn into a space where people interact with bots more than with each other.
As writer Wanda Thibodeaux put it: “Am I connecting with you, or am I connecting only with the image you used AI to make?”
How to Preserve Human Creativity
To maintain trust and quality in digital spaces, especially professional platforms like LinkedIn, it’s essential to establish guidelines and cultural norms around AI use:
Recommendations
- Transparency: Clearly label AI-generated content.
- Human Review: Require human oversight for published material.
- Originality Checks: Use detection tools to ensure a human idea is at the core.
- Support for Human Creators: Introduce financial incentives for original content.
- Reform Engagement Metrics: Prioritize meaningful interaction over post frequency.
AI can assist with structure and clarity, but the soul of thought leadership is human. It’s our personal struggles, insights, and unique perspectives that truly inspire others. Without them, we’re not building connections—we’re broadcasting noise.
Conclusion
The rise of AI-generated content on LinkedIn and elsewhere online is undeniable. While AI offers convenience, scale, and polish, it also poses serious risks to authenticity, creativity, and trust. If left unchecked, it could lead to a digital landscape where human voices are drowned out by optimized but soulless automation.
If we want to keep the internet alive—as a place for real connection, discovery, and dialogue—we must strike a better balance. That means using AI as a support tool, not a replacement. Human input must remain at the center of creation, especially in areas that depend on trust, like thought leadership, education, journalism, and public discourse.
This is not a rejection of AI, but a call for intentional use. When used wisely, AI can help refine ideas, remove friction in the creative process, and scale communication. But it must never replace the origin of thought—that spark of originality that comes from human experience and emotion.
To preserve this spark, we need better education on responsible AI use, clearer guidelines on content attribution, and cultural incentives that value depth over quantity. Employers, platforms, and communities should reward insight, not just output.
Ultimately, authenticity isn’t just a marketing buzzword—it’s the currency of trust. In a world increasingly filled with AI-generated noise, truly human ideas will become more valuable than ever. Let’s make sure we protect and amplify them.
And in that vision of the future, thought leadership remains exactly what it should be: the product of human thought.