Social Media Needs an AI Strategy
AI labels are a useful start, but social feeds need to change how they rank, contextualize, and amplify synthetic content.
My social feeds are full of AI.
That is not surprising. I work with AI. I write about AI. I click on AI posts. I read threads about agents, models, tools, scams, workflows, product launches, and whatever else is happening this week.
The algorithm is doing exactly what it was designed to do.
I engage with a topic, so it gives me more of that topic. I pause on a post, so it learns from that pause. I click into one argument, so it assumes I want the next argument. That basic loop is not new, and it is not automatically bad. Personalized feeds can be useful. They help people find things they care about. They can surface work, ideas, and people you never would have found otherwise.
The problem is that AI changes the cost of filling the feed.
It is now cheap to generate content. Cheap to rewrite content. Cheap to make images, fake screenshots, fake experts, fake outrage, fake consensus, fake product demos, fake news clips, fake investment advice, fake political content, and fake human texture.
And once you engage with enough of it, the feed does what feeds do.
It gives you more.
The current labels are not enough
The big platforms know this is a problem. Meta has been rolling out AI Info labels. YouTube requires creators to disclose realistic altered or synthetic content. TikTok labels AI-generated content and is even testing a control for how much AI-generated content people see in their feeds. X has rules around synthetic and manipulated media.
That is all directionally good.
But a label is not a strategy.
“This was made with AI” is useful context, but it does not answer the harder questions:
- Is the claim true?
- Is this account accountable?
- Is this media synthetic, edited, staged, or just assisted?
- Is this being amplified because people trust it, or because it makes people angry?
- Is this harmless creative work, or is it pretending to be evidence?
- Is the platform showing this to people who are likely to understand the context?
Those are different problems.
A label helps at the post level. The bigger issue is the feed level.
The feed is the product
When people talk about social media and AI, the conversation often turns into a moderation argument. Should the platform remove this? Should it label that? Should creators be forced to disclose every tool they used?
Those questions matter, but they are too narrow.
The feed is the product.
The ranking system decides what gets attention. The recommendation system decides what jumps from a small audience to a huge one. The engagement loop decides whether the platform rewards useful information, cheap outrage, emotional bait, or synthetic nonsense.
X says its For You timeline ranks posts using a neural network trained on signals like likes, reposts, replies, follows, topics, and other interactions. That is one public example, but the pattern is everywhere. These systems are optimized around behavior. They learn what keeps you there.
AI attacks that incentive structure directly.
If the system mostly rewards engagement, and AI makes engagement bait cheap to produce, then the feed gets flooded with increasingly optimized bait. Some of it is entertaining. Some of it is useful. Some of it is garbage. Some of it is dangerous.
The user sees the output and has to sort it out post by post.
That is too much work.
“Just be more media literate” is not a real answer
People are already getting fooled all the time.
Older people get hit especially hard, because they did not grow up in a world where every image, voice, screenshot, and video could be synthetic by default. But this is not only an older-person problem.
I am young enough to understand the internet. I am technical enough to work with AI every day. I am skeptical enough to assume half of what I see is optimized nonsense.
And I still get fooled sometimes.
Not always for long. Not always in a way that matters. But I will still pause on something that looks real. I will still need a second pass to notice the weird detail. I will still occasionally realize that I believed the emotional shape of a post before I checked whether the underlying thing actually happened.
That is the part that worries me.
The issue is not that people are dumb. The issue is that the media environment is changing faster than normal human attention can keep up with. A feed full of synthetic content is not a fair fight, especially when the content is being optimized against your interests, fears, beliefs, and habits.
Media literacy helps. It is not enough.
The product has to help too.
What should change
I do not think the answer is automatically “build a new social network.”
Someone probably will. There is probably a serious startup idea in a feed that treats provenance, trust, and AI literacy as first-class product features instead of afterthoughts.
But the biggest platforms already have the audience. They already have the ranking systems. They already have the context. The fastest improvement would come from changing how existing feeds handle AI at the recommendation layer.
Here is what I would want.
Treat AI as a ranking signal, not just a label
AI-generated does not mean bad.
A generated image can be art. An AI-assisted post can be thoughtful. A translated, summarized, cleaned-up, or edited post can be more useful because AI helped make it clearer.
The ranking system should not punish AI by default.
But it should understand AI as a real signal.
If a post is synthetic, the feed should know that. If an account posts hundreds of synthetic clips a day, the feed should know that too. If content looks like a realistic news clip, health claim, financial claim, emergency update, or public-figure statement, the system should treat that differently from a joke image or obvious animation.
The point is not “AI bad.”
The point is “context matters.”
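As a rough illustration of what "AI as a ranking signal" could mean, here is a minimal sketch of a score adjustment. Everything in it is hypothetical: the field names, the 0.5 multiplier, and the idea that only realistic synthetic media on high-risk topics gets down-weighted are assumptions for illustration, not any platform's actual ranker.

```python
from dataclasses import dataclass

@dataclass
class Post:
    base_score: float        # engagement-predicted score from the existing ranker
    is_synthetic: bool       # creator-disclosed or detected as AI-generated
    looks_realistic: bool    # photorealistic / news-style, not obvious animation
    high_risk_topic: bool    # health, finance, elections, breaking news, etc.

def adjusted_score(post: Post) -> float:
    """Fold synthetic-media context into ranking instead of ignoring it.

    AI content is not penalized by default; only realistic synthetic
    media in high-risk topics is down-weighted until verified.
    """
    score = post.base_score
    if post.is_synthetic and post.looks_realistic and post.high_risk_topic:
        score *= 0.5  # illustrative multiplier, not a tuned value
    return score
```

The design choice worth noting: a joke image or obvious animation passes through untouched, which is exactly the "AI-generated does not mean bad" point above.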
Give people an AI-content dial
TikTok's test of an AI-generated content control is interesting because it points in the right direction.
People should be able to say, “Show me less synthetic content,” without having to mute a hundred words or manually retrain the feed one post at a time.
That control should be simple:
- More
- Balanced
- Less
Not every user wants the same thing. Some people love AI-generated history videos, speculative design, remix culture, and weird creative experiments. Other people want a feed that feels more human, local, and grounded.
That should be a product choice.
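Mechanically, a dial like this could just be a multiplier on synthetic posts' ranking scores. The sketch below is an assumption about how such a control might work, with made-up weights; it shifts the mix rather than hiding AI content outright, which matches the "Less, not none" framing above.

```python
from enum import Enum

class AIDial(Enum):
    """User-facing AI-content preference; values are illustrative weights."""
    MORE = 1.25
    BALANCED = 1.0
    LESS = 0.5

def apply_dial(score: float, is_synthetic: bool, dial: AIDial) -> float:
    """Scale a synthetic post's ranking score by the user's dial setting.

    Non-synthetic posts are unaffected, so 'Less' rebalances the feed
    instead of muting AI content entirely.
    """
    return score * dial.value if is_synthetic else score
```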
Make provenance visible without making it theatrical
The label should not scream unless the risk is high.
For most posts, provenance can be quiet. A small indicator. A context menu. A way to see whether something was creator-labeled, automatically detected, made with platform tools, or carrying a standard like C2PA Content Credentials.
But when the content is high-risk, the context should move closer to the action.
Election content. Health advice. Financial claims. Breaking news. Natural disasters. War footage. Public figures saying or doing things that would change how people think or act.
Those need more than a tiny buried label.
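One way to picture the "quiet by default, loud when it matters" rule is as a tiny decision table over provenance sources. The enum values and the display logic below are hypothetical, a sketch of the product behavior described above rather than any existing label system.

```python
from enum import Enum, auto

class Provenance(Enum):
    CREATOR_LABELED = auto()      # disclosed by the poster
    DETECTED = auto()             # flagged by a classifier or metadata check
    PLATFORM_TOOL = auto()        # made with the platform's own AI tools
    CONTENT_CREDENTIALS = auto()  # carries C2PA Content Credentials
    UNKNOWN = auto()              # no provenance information available

def label_prominence(provenance: Provenance, high_risk: bool) -> str:
    """Quiet indicator for everyday posts, prominent context for high-risk ones.

    With no provenance signal there is nothing to label, so the feed
    shows nothing rather than guessing.
    """
    if provenance is Provenance.UNKNOWN:
        return "none"
    return "prominent" if high_risk else "quiet"
```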
Add friction before synthetic content goes viral
The platforms already add friction in other places. Are you sure you want to repost this without reading it? Are you sure you want to send this message? Are you sure this content is appropriate?
Use that pattern here.
If a realistic AI-generated clip is spreading quickly and has not been verified, slow it down. Add context. Limit recommendation until confidence improves. Ask people to open the source before sharing. Make the user take one extra beat before turning synthetic media into social proof.
The goal is not censorship.
The goal is to stop the feed from laundering uncertainty into confidence.
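A friction rule like this can be stated very simply: pause algorithmic amplification, not organic sharing, when unverified realistic synthetic media spreads fast. The function below is a sketch under that assumption; the threshold and parameter names are invented for illustration.

```python
def allow_amplification(shares_per_hour: int,
                        verified: bool,
                        is_realistic_synthetic: bool,
                        velocity_threshold: int = 1000) -> bool:
    """Gate recommendation of fast-spreading, unverified realistic synthetic media.

    People can still share manually; only algorithmic amplification
    pauses until confidence improves.
    """
    if is_realistic_synthetic and not verified and shares_per_hour > velocity_threshold:
        return False
    return True
```

The asymmetry is the point: slow content gets no friction, verified content gets no friction, and only the combination of speed, realism, and uncertainty triggers the pause.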
Reward accountability
The biggest missing signal in a lot of feeds is accountability.
Who made this? Are they a real person? Do they have a history? Do they correct mistakes? Do they link sources? Do they produce original work? Are they reachable? Are they constantly posting synthetic claims with no evidence?
Anonymous speech has a place. Pseudonymous speech has a place. Remix culture has a place.
But the ranking system should not treat every piece of content as equally grounded just because it gets engagement.
If a post is making a factual claim, the system should care whether there is any accountable source behind it.
Make “why am I seeing this?” actually useful
This one bothers me because the current answer is usually too shallow.
You liked a post like this.
You follow someone who follows this person.
This is popular near you.
Fine, but not enough.
In the AI era, users need more useful explanations:
- You are seeing more AI-generated video because you watched several similar clips.
- This account frequently posts synthetic media.
- This post is labeled because it was creator-disclosed.
- This post is labeled because of detected metadata.
- This topic is being recommended because you interacted with several related posts this week.
Give people enough information to understand how the feed is shaping them.
Not a research paper. Not a settings maze. Just a clear explanation that respects the user.
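The explanations above could be generated from the same signals the ranker already uses. Here is a minimal sketch, assuming a made-up signal dictionary; the key names are illustrative, not any platform's real schema.

```python
def explain(signals: dict) -> list[str]:
    """Turn ranking signals into plain-language 'why am I seeing this' reasons.

    Each signal maps to one clear sentence; with no matching signals,
    the list is empty and the feed should say so honestly.
    """
    reasons = []
    if signals.get("watched_similar_ai_clips", 0) >= 3:
        reasons.append("You are seeing more AI-generated video because you "
                       "watched several similar clips.")
    if signals.get("creator_disclosed"):
        reasons.append("This post is labeled because it was creator-disclosed.")
    if signals.get("detected_metadata"):
        reasons.append("This post is labeled because of detected metadata.")
    return reasons
```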
This is not anti-AI
I use AI constantly.
This post started as a messy voice note. AI helped me shape it into something readable. The cover image for this post was generated with AI and then laid out deterministically so the text is clean. That is a good use of the technology.
AI is not the enemy here.
The problem is that social platforms were already optimized for engagement before AI made infinite synthetic content cheap. Now the old incentives are running inside a new media environment, and the mismatch is getting more obvious every week.
The answer cannot be “ban AI content.”
That would be impossible, and it would throw away a lot of useful creative work.
The answer also cannot be “label it and move on.”
That puts too much burden on the user and ignores the part of the system that actually controls attention.
The feed has to grow up
Social media used to have a simpler trust problem.
People could lie. Images could be edited. Screenshots could be fake. Accounts could coordinate. Bots could spam. None of that is new.
What is new is the scale, speed, and realism.
AI makes fake media cheaper. It makes persuasive writing cheaper. It makes comment spam cheaper. It makes fake consensus cheaper. It makes low-effort content look polished enough to pass a quick scroll.
That does not mean every AI-assisted post is suspicious.
It means the feed needs a better model of trust.
Not just “what will this person engage with?”
Also:
- What kind of content is this?
- How was it made?
- How confident are we?
- What harm could happen if this spreads?
- Is the user being given enough context?
- Is the system rewarding something useful, or just something sticky?
That is the change I want to see.
The platforms do not need to solve truth perfectly. Nobody can.
But they do need to admit that AI content is not just another content category. It changes the economics of the feed itself.
And if the feed is the product, then the feed is where the AI strategy has to live.