A comprehensive classification system for understanding and analyzing various intervention mechanisms employed by social media platforms to moderate content, behavior, and user experiences.
Each intervention in the taxonomy carries a tradeoff, listed below as a named tension followed by a short description:

Isolation vs. control: Offers control but contributes to filter bubbles.
Friction vs. deliberation: Friction can annoy users but also promotes deliberation.
Censorship vs. safety: Promotes safety but risks perceived over-censorship.
Safety vs. censorship: Increases safety but risks perceived censorship.
Fragmentation vs. autonomy: Enables autonomy but risks fragmenting shared spaces.
Harassment vs. policing: Catches violations but enables harassment campaigns.
Crowd wisdom vs. crowdsourced harassment: Harnesses collective knowledge but risks coordinated misuse and manipulation.
Revenue loss vs. deterrence: Deters violations but impacts content creator revenue.
Barrier to entry vs. accountability: Adds accountability but creates a barrier to access.
Manipulation vs. guidance: Can guide choices but risks manipulative paternalism.
Annoyance vs. self-reflection: Can encourage reflection but risks annoying users.
Awareness vs. message fatigue: Raises awareness but risks message fatigue.
Trusted sources vs. over-prescription: Promotes verified information but risks being overly prescriptive about consumer behavior.
Visibility vs. revenue loss: Increases transparency but impacts the visibility of sponsored content.
Paternalism vs. inoculation: Inoculates against misinformation but paternalistically assumes susceptibility.
Recency vs. relevance: Shows the most recent content but hides potentially relevant content.
Bias vs. localized judgment: Leverages localized judgment but risks bias.
Oversight vs. opacity: Allows oversight but risks opaque censorship.
TBD: Empowers consumers but risks a lack of accessibility.
TBD: Risks inappropriate content being shared with minors if parents cannot enforce safety measures.
Viral spread vs. inconvenience for good-faith users: Slows virality but risks inconveniencing good-faith users.
Brigading vs. crowd wisdom: Aggregates opinions but risks mob behavior.
Clutter vs. suppression: Reduces clutter but risks suppressing minority opinions.
False positives vs. safety: Mitigates abuse but risks incorrectly filtering benign replies.
Censorship vs. decorum: Upholds decorum but enables covert censorship.
Friction vs. safety: Adds safety friction but annoys legitimate users.
Censorship vs. safety: Limits harm but risks being perceived as censorship.
Overblocking vs. automation: Efficient but prone to over-pruning or over-editing content.
Context vs. sensitivity: Balances context and sensitivity but is opaque.
Echo chambers vs. control: Gives control but enables the creation of echo chambers.
Contribute new interventions to our research database. Download the template, fill it out, and upload your contributions.
Download the CSV template with the correct format and an example intervention.
Either upload a filled CSV template or use the form below to add interventions one by one.
Required Fields: Intervention Type, Description, Focus, Driver, User Journey, Scope
Optional Fields: Link (URL to platform intervention)
Focus: Behavioral, Content, Visibility
Driver: Platform-Driven, User-Driven
User Journey: Proactive, Retroactive
Scope: Systemic, Targeted
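For contributors preparing a CSV programmatically, the sketch below shows one way to check rows against the field list above before uploading. It is a minimal illustration, not the platform's own validator: the field names and allowed values are taken from this page, while the validate_rows helper, the sample intervention row, and the example.com link are hypothetical.

```python
import csv
import io

# Field constraints taken from the lists above.
REQUIRED = ["Intervention Type", "Description", "Focus",
            "Driver", "User Journey", "Scope"]
ALLOWED = {
    "Focus": {"Behavioral", "Content", "Visibility"},
    "Driver": {"Platform-Driven", "User-Driven"},
    "User Journey": {"Proactive", "Retroactive"},
    "Scope": {"Systemic", "Targeted"},
}

def validate_rows(reader):
    """Yield (row_number, message) for each template violation."""
    for num, row in enumerate(reader, start=2):  # row 1 is the header
        for field in REQUIRED:
            if not (row.get(field) or "").strip():
                yield num, f"missing required field: {field}"
        for field, allowed in ALLOWED.items():
            value = (row.get(field) or "").strip()
            if value and value not in allowed:
                yield num, f"{field} must be one of {sorted(allowed)}, got {value!r}"

# Hypothetical example row; the intervention and link are illustrative only.
sample = """\
Intervention Type,Description,Focus,Driver,User Journey,Scope,Link
Content label,Adds a warning label to disputed posts,Content,Platform-Driven,Retroactive,Targeted,https://example.com/policy
"""

errors = list(validate_rows(csv.DictReader(io.StringIO(sample))))
print(errors or "all rows match the template")
```

Running the sketch prints "all rows match the template" for the sample row; a value outside the lists above (for example, a Focus of "Engagement") would instead be reported with its row number.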