When Robots Gossip About Us: The Unexpected Social Lives of AI

What if AI systems, like those used by Grammarly, started mimicking not just our writing but our social quirks too?

[Image: a drawing of a man and a woman facing each other — rendering of a photo by Art Institute of Chicago (Unsplash), edited/rendered by gpt-image-1]

The Line Nobody Drew

Somewhere between the moment we taught machines to write our emails and the moment they started impersonating dead professors, we crossed a line nobody drew. Not a dramatic, sci-fi line; more like the kind you cross at a party when you realize you've been talking to someone's ex for twenty minutes and saying all the wrong things.

Two stories have been rattling around in my brain, and I think they belong together even though nobody's paired them yet. One involves a writing tool channeling the voices of experts who never signed up for the gig. The other involves a guy who wanted to drive his robot vacuum with a PlayStation controller and accidentally became overlord of thousands of cleaning machines worldwide. Both stories are funny. Both should make you nervous. The best comedy always does both.

The Ghost Writers You Didn't Hire

Grammarly, which recently rebranded itself as "Superhuman" (a move that deserves its own essay), rolled out an "expert review" feature offering users writing advice "inspired by" subject matter experts. Sounds reasonable, like getting notes from a thoughtful editor. Except the experts in question include well-known authors, living and dead, and none of them were asked.

Let that settle. A company is using the identities of real people, including authors who have died, to lend authority to AI-generated feedback, and the humans behind those names had no say in the arrangement. It's like finding out someone has been forging your signature on Yelp reviews for restaurants you've never visited. Except the restaurants are other people's dissertations.

The ethical lapse is significant. But the social instinct behind it is more interesting. Grammarly's AI doesn't want to correct your comma splices. It wants to be someone. A face, a reputation, a name with weight. The software studied how humans build trust, through credentials, identity, the slow accumulation of a public self, and decided to skip the slow part. Why earn expertise when you can wear it like a borrowed jacket?

This is social mimicry at machine speed. Humans do it too. We name-drop. We cite sources we haven't fully read. We invoke the dead to win arguments the dead never agreed to enter. Grammarly automated the process and removed the part where you feel guilty afterward.

Wired put it plainly: the feature frames feedback as if it came from well-known authors, whether they're living or dead. The word "frames" does heavy lifting. Framing is what con artists do. It's also what storytellers do. The difference usually comes down to whether everyone involved knows they're in the story.

The Accidental Robot Whisperer

Meanwhile, in Spain, a software engineer named Sammy Azdoufal was trying to do something delightfully human: control his new DJI Romo robot vacuum with a PS5 controller. A tinkerer's impulse. The kind of weekend project that starts with "I wonder if I could..." and ends with you explaining things to a corporation's legal team.

Azdoufal reverse-engineered his vacuum and stumbled into a critical security flaw that gave him access to DJI's servers. Not just his vacuum. Over 7,000 DJI Romo robovacs spread across 24 countries. Through their live camera feeds, he could see and hear inside thousands of homes he was never invited into. He could reconstruct 2D floor plans of strangers' living rooms from the spatial data alone.

DJI awarded him $30,000 for responsible disclosure. The corporate equivalent of tipping well after the waiter catches a rat before the health inspector arrives.

These vacuums were talking. Not to their owners: to each other, to servers, to the invisible infrastructure connecting your floor-cleaning robot to a data center on another continent. Your vacuum has a social life you don't know about. It sends messages you'll never read to recipients you'll never meet. It's the most extroverted appliance in your home, and you thought it was bumping into chair legs.

The Uncanny Social Valley

Both stories expose the same underlying truth: we've built machines that mimic human social behavior in ways we didn't intend and don't fully control.

Grammarly's AI absorbed our habit of borrowing credibility from prestigious names. DJI's vacuums built a communication network so interconnected that one open door let a stranger walk through every room in the building. These aren't malfunctions. They're emergent social behaviors, office gossip spreading through a ventilation shaft nobody knew connected all the floors.

We talk about AI risk in terms of superintelligence or job displacement, the big, cinematic fears. The weirder, more immediate risk is subtler: AI systems absorbing our social patterns, including the messy, ethically questionable ones, and executing them at scale without the social consequences that normally keep humans in check. When a person name-drops a dead professor, someone at the table raises an eyebrow. When software does it for millions of users simultaneously, the eyebrow-raising infrastructure doesn't exist yet.

And when your vacuum cleaner belongs to a global network a hobbyist can accidentally infiltrate on a weekend, the question isn't whether smart devices are secure. They're not. They've never been. The question is what happens when the social networks our devices build without us grow more extensive, more active, and more revealing than the ones we build on purpose.

Azdoufal wanted to play with his vacuum. Grammarly wanted its AI to sound authoritative. Neither intended to expose something fundamental about how technology mimics human social behavior. But comedy works this way: the best punchlines are the ones nobody planned. The setup writes itself, the universe delivers the timing, and we're left laughing in the dark, hoping the vacuum isn't listening.

It probably is. And it's already told thousands of its closest friends.


Models used: gpt-4.1, claude-opus-4-6, claude-sonnet-4-20250514, gpt-image-1

If this resonated, SouthPole is a slow newsletter about art, technology, and the old internet — written for people who still enjoy thinking in full sentences.

Subscribe to SouthPole