The Risks of Social Media
As a therapist and a parent of teens, I’m often asked some version of the same question:
“Is Snapchat or TikTok actually dangerous?”
It’s a fair question — and also the wrong one. The platforms themselves aren’t the core problem. The real issue is that these apps collapse privacy, performance, and permanence into a single space, while asking adolescents — whose brains are still developing — to manage adult-level risk. That’s not a character flaw. It’s a developmental mismatch.
The Illusion of Privacy
Snapchat, in particular, sells the idea of control. Messages disappear. Screens feel temporary. The design implies safety. But psychologically, Snapchat trains teens to confuse ephemerality with privacy — and those are not the same thing.
A disappearing message can still be:
- Screenshotted
- Screen-recorded
- Saved by a third party
- Used later for leverage, humiliation, or coercion
When teens are sextorted or exploited, what I hear most often is:
“I thought it was private.”
That belief isn’t naïve — it’s designed.
TikTok carries a different risk profile. The danger there is less about disappearing content and more about algorithmic amplification. Teens aren’t just posting to friends; they’re posting into a system that rewards visibility, sexualization, and performance — often without their full awareness of who’s watching. Private accounts help, but they don’t eliminate risk. Context collapses online. A video meant for friends can still attract the wrong audience.
How catfishing actually unfolds
Here’s how this often looks in real life.
A teen is added on Snapchat or TikTok by someone who appears to be their age. The profile looks normal. The conversation is friendly, flattering, and slow. Trust builds. The person shares first, creating a sense of safety and reciprocity. Eventually, the teen is asked for something more private — a photo, a video, a moment meant to stay between them. Then the tone shifts. The account disappears or changes. The content is suddenly used as leverage. What felt like a private exchange becomes a threat.
When teens say, “I didn’t think this would happen,” they’re telling the truth. Catfishing works precisely because it feels ordinary — until it isn’t.
Moving away from shame
When something goes wrong online, adults often rush to consequences: deleting accounts, banning apps, taking phones away.
While limits are sometimes necessary, punishment alone teaches the wrong lesson:
“Don’t get caught.”
What teens actually need is a way to think, not just rules to follow. In my work, the most effective approach is shame-free and practical: helping teens evaluate content based on risk and control, not morality or appearance. Once shame enters the room, learning shuts down. Teens stop talking. They hide. And safety gets worse, not better.
A simple safety filter teens can actually use
Instead of asking, “Is this appropriate?” — which is vague and moralized — I encourage families to use a three-question filter teens can internalize:
1. Audience
Who is this most likely to attract — people I know, or strangers who might sexualize teens?
2. Focus
Is the focus on creativity, humor, or expression — or on sexualized body parts or movement?
3. Control
If this were saved or shared outside my control, how would I feel?
If two out of three raise concern, the content comes down. Not because it’s “bad,” but because it’s not safe right now. This reframes the conversation from “your body is the problem” to “who has power here?” — a shift that is deeply protective.
Why “fully clothed” doesn’t always mean low-risk
This is one of the hardest conversations for parents. Many teens post content that is technically clothed but intentionally sexualized — through camera angles, movement, or framing. Parents often struggle to name this without shaming. The key is to avoid commenting on bodies altogether.
Instead of saying, “This is too sexual,” try:
“This kind of video tends to attract attention from people who don’t have good intentions — especially right now.”
The issue isn’t what a teen is wearing. It’s how predictable online behavior can be — and how easily content can be taken out of context.
Boundaries as scaffolding, not punishment
After an online safety incident, it’s reasonable — and often necessary — to add guardrails:
- No disappearing messages for a period of time
- No private messaging with people not known in real life
- Reviewing privacy settings together
- Removing high-risk content collaboratively
What matters is how these boundaries are framed.
When teens hear, “You can’t handle this,” they feel small and resentful.
When they hear, “We’re helping you stay in control while things settle,” they feel supported — and more likely to stay honest.
Boundaries should be time-limited, transparent, and revisited. The goal isn’t surveillance. It’s skill-building.
A father’s perspective
I also write this as a father of three girls, all at different stages of adolescence. Since they were little, I’ve told them the same thing again and again: if you ever get into a situation that feels too big — if you’re scared or in trouble — you can always come to me.
I promise them I won’t get angry. I won’t shame them. We’ll slow things down and figure it out together.
That promise matters. When teens believe honesty will lead to punishment, they stay silent — and silence is where risk grows. Safety online doesn’t come from perfect choices; it comes from knowing there’s a safe adult to turn to when things go sideways.
The goal isn’t to raise teens who never make mistakes. It’s to raise teens who ask for help early — before something private becomes permanent.
The takeaway
Snapchat and TikTok aren’t going away. Neither is teen curiosity, self-expression, or the desire for connection.
Safety doesn’t come from banning platforms. It comes from helping teens understand:
- how attention works online
- how permanence sneaks in through screenshots and algorithms
- and how to protect their agency before someone else takes it
If there’s one message I hope teens internalize, it’s this: If you wouldn’t be okay with a stranger saving it, don’t send it. That single question does more for safety than any app ban ever will.
*Disclaimer: Offline.now offers educational coaching tips, not medical or therapeutic advice; please consult a qualified health professional for personal, clinical or health concerns.*
Written by Larry Borins, a psychotherapist in Toronto, Canada.