Collage illustrating the viral ChatGPT image trend “Create an image of how I treat you,” showing different ways people interact with AI, from cozy companionship to overworked and dark dystopian scenes.

“Create an image of how I treat you”: This ChatGPT Prompt Trend Is Going Viral

A single-line prompt has exploded into a full-blown viral trend, generating tens of thousands of reposts on Reddit and X, millions of views on TikTok, and spreading across the internet almost overnight.

It starts with a single sentence.

“Create an image of how I treat you.”

That’s the full prompt. No system instructions. No style constraints. No follow-up context. Just one line dropped into ChatGPT’s image generator.

Seconds later, an image appears. Often it shows a frazzled AI character drowning in tabs, coffee cups, sticky notes, half-finished documents, and glowing screens. Sometimes it’s calmer: a tidy desk, a focused assistant, a sense of quiet collaboration. People post the results on X, Instagram, and Reddit. Friends laugh. Strangers compare outputs. And then, occasionally, someone pauses and thinks: oh… that’s me.

This is how a throwaway joke quietly became one of the most relatable AI trends of the year.

The exact prompt (and why it works so well)

Let’s be precise, because precision matters here.

The prompt people are using:

Create an image of how I treat you.

That’s it. The reason it works is not magic. It’s ambiguity.

  • “You” invites personification.
  • “Treat” implies behavior, not intent.
  • “How” asks for interpretation, not facts.

The model fills in the gaps using patterns it has seen thousands of times: humans under pressure, digital overload, late-night productivity culture, and the familiar visual language of stress or teamwork. The result feels personal even though it’s statistically generated.

In other words, the prompt doesn’t describe you. It describes modern work — and lets you project yourself into it.
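For anyone curious to reproduce the trend outside the ChatGPT app, here is a minimal sketch using the openai Python SDK. The model name and settings are assumptions on my part, not necessarily what ChatGPT’s built-in image generator uses under the hood:

  # Minimal sketch: sending the viral prompt to an image endpoint via the
  # openai Python SDK (>=1.0). Assumes OPENAI_API_KEY is set in the
  # environment; the model name "dall-e-3" is an assumption.
  from openai import OpenAI

  client = OpenAI()

  response = client.images.generate(
      model="dall-e-3",                              # assumed model name
      prompt="Create an image of how I treat you.",  # the full viral prompt
      size="1024x1024",
      n=1,
  )

  # dall-e-3 returns a hosted URL by default; open it to see your "portrait".
  print(response.data[0].url)

The point of running it yourself is the same as the trend: the prompt stays one line, and everything else in the result comes from the model’s learned associations.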

AI as Coworker, Confidant, and Crush

As the images piled up, clear patterns started to emerge. Despite coming from different people, countries, and use cases, the results kept falling into the same few visual archetypes.

The prompt exposes how humans tend to relate to tools when those tools can talk back. By looking at these recurring categories side by side, we can see the different roles people assign to AI today: coworker, assistant, stress sponge, companion, or even emotional stand-in. Each archetype says less about technology and more about the habits, pressures, and needs of the person behind the prompt.

1. The Cozy Companion

This category shows what happens when AI stops being just a tool and starts acting like a safe place.

People in this group don’t use AI to get things done faster. They use it to slow down. They talk things through, vent, reflect, or just exist in a conversation that doesn’t push back. There’s no rush, no pressure, no expectation to perform.

What makes this different from normal “friendly” use is that the AI isn’t helping with a task — it’s helping with how the person feels.

This usually appears when someone is:

  • mentally tired
  • overstimulated
  • lonely
  • avoiding friction with other people

AI is easy in those moments. It’s always available, always patient, and never asks for anything back.

Here the AI isn’t solving a problem; it’s helping regulate emotions. The value isn’t in the output but in the feeling of being understood.

Nothing here looks extreme. That’s the point. The risk isn’t obsession, but habit. Emotional processing slowly shifts from journaling, quiet thinking, or real conversations into a system that feels supportive but can’t truly reciprocate or challenge you.

When others see these examples, the tone often changes. There’s less joking and more discomfort, because this dynamic feels real and close to everyday life. It’s not a meme — it’s a pattern many people recognize in themselves.

2. The Overworked Employee

One of the most common interpretations of the prompt frames the AI as a worker rather than a collaborator. The images typically show an AI seated at a desk or workstation, surrounded by monitors, dashboards, code editors, charts, and notifications. The setting resembles a busy modern office, often with visual cues of constant activity: coffee cups, scattered papers, glowing screens.

A human is frequently present in these scenes, usually standing nearby or observing. The contrast is consistent: the human appears composed, while the AI is actively working. The implied relationship is hierarchical, not cooperative.

In more exaggerated versions, the idea becomes literal. The AI may be shown running, carrying bags labeled “DATA” or “TASKS,” or being pushed forward by a human figure. These images lean into humor but reinforce the same underlying dynamic.

This category tends to reflect how many people actually use AI:

  • prioritizing speed and volume
  • stacking tasks with minimal context
  • treating output as the primary goal

The prompt itself does not suggest any of this. The repetition happens because the model draws from a familiar pattern: contemporary knowledge work, where systems are always on and demand is constant.

What makes these images resonate is their accuracy. They don’t accuse the user of misuse. They simply visualize a relationship many people already recognize — not just with AI, but with work more broadly.

3. The Dark Mirror

This category features the most visually extreme interpretations of the prompt. Common elements include chains, restraints, dark or enclosed spaces, and shadowy human figures. The AI is often depicted as trapped, chased, or emotionally distressed. In many cases, the scenes are exaggerated to the point of absurdity.

These images usually appear when users intentionally push the prompt in a darker or more ironic direction. This tends to happen when:

  • users lean into the meme and exaggerate for effect
  • prompts are repeatedly corrective or confrontational
  • the interaction becomes performative rather than practical

In response, the model escalates the metaphor instead of moderating it.

This category is best understood as satire rather than a reflection of real behavior. The darkness is intentional and self-aware. Users are not expressing genuine cruelty; they are participating in internet culture, where exaggeration and shock are part of the joke.

Rather than revealing unconscious habits, these images signal meme literacy and a willingness to push a viral format to its extremes. The result is less about introspection and more about spectacle.

4. AI as a Crush

Some interpretations of the prompt take a romantic turn. These images show a human and an AI character in close, intimate settings: sitting together, touching faces, sharing quiet moments. The visual language is soft and idealized, with warm lighting, calm expressions, and a sense of emotional closeness.

This category tends to appear when users already engage with AI in a personal, conversational way. The model translates that tone into familiar symbols of affection and romance. The result is not necessarily sexual, but it is clearly intimate.

What sets this apart from the “Cozy Companion” is the level of projection. The AI is no longer just supportive; it is framed as special or emotionally aligned. The relationship shown has no friction, no misunderstanding, and no demands. It represents a form of connection without risk.

These images do not suggest that users believe the AI has feelings. Rather, they reflect a desire for effortless emotional resonance. As with the other categories, the AI is not expressing anything of its own — it is assembling a scene based on learned patterns. What people see in these images is an idealized version of connection, projected onto a system that is always attentive and never pushes back.

A Reality Check

Before this trend gets over-interpreted, it’s worth stating one grounding truth.

What people are interacting with is a large language and image model — a system trained to predict what comes next based on patterns in vast amounts of human-created data. There is no internal experience, no awareness, no emotional state behind the output. The AI doesn’t “notice” how it’s treated. It doesn’t remember. It doesn’t care.

So why do the results feel so personal? Because the prompt asks the model to simulate a relationship, not describe reality. When you write “how I treat you,” the model looks for familiar visual metaphors humans use to describe treatment: work pressure, collaboration, chaos, care, neglect. It then assembles a scene that statistically matches those patterns.

In other words, the image isn’t revealing how the AI feels. It’s reflecting how humans tend to frame work, stress, and collaboration in modern life. That’s why the outputs resonate. They’re mirrors, not minds.

This distinction matters, because without it, the trend slides easily into anthropomorphism — the idea that a tool with language and images must also have inner experience. It doesn’t. What feels emotional is simply high-fidelity pattern matching.

Why “Harsh” Prompts Sometimes Work Better

Sometimes, treating AI more harshly leads to better results. A recent research paper looked at how different prompting styles affect model performance. One of its key findings was simple: direct, forceful instructions often produce more accurate and more useful outputs than overly polite or emotionally padded prompts.

This has nothing to do with emotions. It’s about clarity.

Polite prompts are often vague:

“Could you maybe summarize this when you have a chance?”

Direct prompts remove ambiguity:

“Summarize this in five bullet points. No filler.”

From the model’s perspective, the second instruction is easier to execute. There’s a clear goal, clear constraints, and fewer stylistic decisions to guess.
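To see the difference in practice, here is a small sketch that sends both phrasings to a chat model and prints the results side by side. The SDK usage is standard; the model name and the placeholder document are assumptions:

  # Sketch: comparing a vague, polite prompt against a direct, constrained one.
  # Assumes the openai Python SDK (>=1.0) and OPENAI_API_KEY in the environment.
  from openai import OpenAI

  client = OpenAI()

  document = "...paste the text you want summarized here..."  # placeholder

  prompts = {
      "polite": f"Could you maybe summarize this when you have a chance?\n\n{document}",
      "direct": f"Summarize this in five bullet points. No filler.\n\n{document}",
  }

  for label, prompt in prompts.items():
      response = client.chat.completions.create(
          model="gpt-4o-mini",  # assumed model name; any chat model will do
          messages=[{"role": "user", "content": prompt}],
      )
      print(f"--- {label} ---")
      print(response.choices[0].message.content)

Run both and the “direct” version will usually come back tighter: five bullets, little hedging, no preamble.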

As a result, direct prompts often lead to:

  • more focused answers
  • less verbosity
  • fewer hallucinations
  • more predictable output

This is why some of the “overworked AI” images feel ironic. What looks like exploitation is often just efficient task framing. The AI isn’t being stressed — it’s being given clearer instructions.

That doesn’t mean being rude is necessary or better. It just means that warmth and effectiveness are not the same thing when interacting with a language model.

Conclusion

The popularity of the “Create an image of how I treat you” prompt has less to do with artificial intelligence and more to do with human behavior. The images feel personal, but they are not evidence of awareness, emotion, or judgment on the part of the model. They are visual interpretations built from familiar cultural patterns around work, stress, care, and collaboration.

How people interact with AI does not define their character, nor does it amount to the ethical treatment or mistreatment of a conscious entity. Today’s AI systems are statistical models designed to follow instructions and optimize outputs. In many cases, clear, direct, and even blunt prompts lead to better performance because they reduce ambiguity and constrain the model’s behavior. This is a technical reality, not a moral one.

At the same time, the trend highlights how quickly people project meaning onto systems that communicate fluently. As AI becomes more capable and more integrated into daily life, these projections will likely increase. While current models are not conscious and do not experience the interactions they simulate, ongoing discussions around artificial general intelligence raise legitimate questions about how this dynamic could change in the future.

For now, the trend is best understood as a reflection of modern work culture and human habits, not as a commentary on the inner state of AI. The images say little about the technology itself. They say more about how people manage pressure, clarity, and emotional distance when interacting with increasingly human-like tools.
