Canada online harms bill: should Ottawa ban social media for kids?

Proposal from group Concorder Civic Lab
1 Moderator

Proposal text

Here's the matter we want to address together: click on each paragraph to add your votable contribution

Context: a renewed Canadian debate, with a global backdrop

In early March 2026, Canada’s discussion about online harms and child exploitation re-ignited after Prime Minister Mark Carney said Canada should have an open debate about a possible social media ban for children as part of upcoming online harms legislation. Carney did not commit to a ban, saying there are arguments on both sides, but argued Canada is “lagging” on legislation related to online harms and the exploitation of children. Reporting also noted that earlier online harms legislation introduced by the previous Liberal government failed when an election was called, and that advocates for women and children want those proposals brought back.

This Canadian debate is happening while multiple countries consider (or implement) age limits for social media. Commentary and policy tracking highlight the central trade-off: supporters frame age limits as child-safety protections, while critics warn that bans can be technically porous, may push teens toward less-regulated corners of the internet, and often rely on intrusive age-assurance systems that create privacy and surveillance risks.

What this proposal asks you to decide

  • Direction: Should Canada pursue a child social media ban, an “age of majority” threshold, or a non-ban approach within an Online Harms Bill?
  • Package: If Ottawa brings back online harms legislation, which child-safety obligations on platforms should be emphasized publicly?
  • Privacy line: How much age verification should be acceptable, given concerns that strict age-gating can drive data collection and surveillance?

This proposal is designed to “ride the news” responsibly: the vote options and the pros and cons are limited to what is described in the sources below. Use comments to add further reporting, explain enforceability concerns, and propose safeguards, always citing sources.

Voting options

Vote on the different proposed options to find the best solution together.


Introduce a child social media ban (age cut-off to be set in law)

What this means

Use upcoming online harms legislation to set a legal minimum age for social media access (a child social media ban), reflecting the idea Carney said should be debated.

Pro: Carney said a social media ban for children merits open debate and could be part of online harms legislation (The Canadian Press via CityNews, 6 Mar 2026).
Con: Policy tracking notes that critics argue outright bans can be technically porous and may drive teens to less-regulated corners of the internet (Tech Policy Press, 23 Feb 2026).
Con: Commentary warns that age verification can require collecting sensitive or biometric data and can expand surveillance infrastructure (The Guardian, 2 Mar 2026).

Set an “age of majority” threshold for social media within online harms legislation

What this means

Make an age threshold a central design choice inside the Online Harms Bill, without committing to the broadest possible ban model—matching Carney’s framing that an “age of majority” would be part of the discussion.

Pro: Carney linked online harms legislation with considering an “age of majority” for social media as part of catching up on child protection and exploitation concerns (The Canadian Press via CityNews, 6 Mar 2026).

Pass an Online Harms Bill focused on platform duties to protect children, without a ban

What this means

Bring back and update the earlier online harms approach described in reporting—requiring platforms to explain how they will reduce risks and imposing a duty to protect children—without making age access limits the headline policy.

Pro: The previous (non-enacted) online harms bill described in reporting included requirements for platforms to explain risk-reduction plans and duties related to protecting children (The Canadian Press via CityNews, 6 Mar 2026).

Require platforms to explain how they will reduce user risks

Prioritize the obligation (described in reporting on the earlier bill) that social media companies explain how they plan to reduce risks their platforms pose to users.


Impose a clear duty to protect children

Prioritize the child-protection duty referenced in reporting on the earlier bill, treating child safety and exploitation risks as the central justification for action.


Accept stronger age verification even if it expands data collection

Support stricter age-gating/verification as the practical route to enforce age limits, acknowledging warnings in public debate that this may increase collection of sensitive data and create surveillance risks.

Con: Commentary argues that age-verification systems can require collecting sensitive or biometric data and may turn the internet into a more surveilled environment (The Guardian, 2 Mar 2026).

Reject intrusive age verification; prioritize child safety measures that don’t require broad identity checks

Oppose approaches that depend on intrusive age verification, reflecting critiques that enforcing age limits can require invasive identity checks and raise privacy and free-expression concerns.

Pro: Policy tracking highlights that critics of bans raise concerns about intrusive age verification and free-expression issues (Tech Policy Press, 23 Feb 2026).
Pro: Commentary emphasizes the privacy and surveillance risks of age verification tied to sensitive or biometric data collection (The Guardian, 2 Mar 2026).

Sources
