Account Takeover
Definition
An unauthorized user gaining control of a user account through means such as hacking, phishing, or purchasing leaked credentials.
Related Terms
Compromised Account, Hacked Account, Unauthorized Access, Credential Stuffing, Phishing (as a common precursor), Identity Theft.
Background
An account takeover occurs when an unauthorized individual gains control of another person’s legitimate account. This can happen through various methods, including stolen passwords from data […]
CSEA
Definition
Child Sexual Exploitation and Abuse – A broad category that encompasses the sharing of material depicting child sexual abuse, other sexualised content depicting children, and grooming.
For guidance and requirements regarding Child Sexual Abuse Material, see CSAM.
The top three online CSEA harms are: producing, sharing and/or viewing CSAM, online sexual solicitation, and online grooming.
Related Terms
Online Child Grooming, Child Enticement, Predatory […]
Online Harassment
Definition
Unsolicited repeated behavior against another person, usually with the intent to intimidate or cause emotional distress. Online harassment may take the form of one abuser targeting a person or group with sustained negative contact, or it may take the form of many distinct individuals targeting an individual or group.
Related Terms
Cyberbullying, Cyberstalking, Dogpiling, Brigading, Trolling, Abuse.
Background
Harassment is repeated behaviour, and may include several other […]
Service Abuse
Definition
Use of a network, product or service in a way that violates the provider’s terms of service, community guidelines, or other rules, generally because it creates or increases the risk of harm to a person or group or tends to undermine the purpose, function or quality of the service.
Related Terms
Terms of Service Violation, Platform Abuse, Technical Abuse, Network Abuse, Malicious Bot Activity, Spamming, Data Scraping, Denial of Service (DoS/DDoS).
Background
This category […]
Counterfeit
Definition
The unauthorized manufacture or sale of merchandise or services bearing an inauthentic trademark, which may deceive consumers into believing the goods or services are authentic.
Background
Counterfeiting involves the creation and distribution of products that are made to look like genuine items, often mimicking trusted brands to mislead buyers. Online, this can manifest through the sale of goods under false trademarks, or offering services that falsely claim to be associated […]
Defamation
Definition
A legal claim based on asserting something about a person that is shared with others and which causes harm to the reputation of the statement’s subject (the legal elements and applicable defenses vary by jurisdiction).
Background
Defamation involves the act of damaging someone’s reputation through false statements or communications. Online, defamatory content can spread rapidly across social media platforms, blogs, and web sites, causing significant harm to individuals or […]
Glorification of Violence
Definition
Statements or images that celebrate past or hypothetical future acts of violence.
Background
The glorification of violence refers to content that praises, promotes, or idolises violent acts, individuals who commit such acts, or ideologies that endorse violence. This can range from explicit support for terrorist activities to the romanticising of historical violence. In online spaces, such content not only violates the terms of service of most platforms but also poses significant […]
Information for Software Developers and Designers
Table of Contents
- User Consent
- User Safety
- Policy Design Considerations
- Accountability and Transparency
- Account and Content Reporting Workflow
If you are creating an app or web service that enables interpersonal communication, the following resources can help you consider safeguards and responsible design principles.
User Consent
- Privacy and Consent for Fediverse Developers: A Guide
- Eight tips about consent for fediverse developers
User Safety
- Prosocial Design: The Prosocial Design Network curates and researches evidence-based design solutions to bring out the best in human nature online.
- Safety by Design: From Australia’s eSafety Commissioner, this proactive and preventative approach focuses on embedding safety into the culture and leadership of an organisation. It emphasises accountability and aims to foster more positive, civil and rewarding online experiences for everyone.
Policy Design Considerations
- Authentication Cheat Sheet (OWASP): Authentication is the process of verifying that an individual, entity, or website is who or what it claims to be by determining the validity of one or more authenticators (like passwords, fingerprints, or security tokens) that are used to back up this claim.
- The Google Play Child Safety Policy requires apps in the Play Store to have a CSAE policy. Pachli has shared the policy that was accepted by Google: Pachli CSAE Policy
- The Real Name Fallacy (J. Nathan Matias): People often say that online behavior would improve if every comment system forced people to use their real names. It sounds like it should be true – surely nobody would say mean things if they faced consequences for their actions?
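The OWASP cheat sheet above covers authentication in depth. Below is a minimal sketch of one authenticator check (password verification with a slow, salted hash), assuming a Node.js service and the bcryptjs package; the function names are illustrative, and storing only the hash limits the damage of leaked credentials (see Account Takeover above).

```typescript
// Minimal password-authentication sketch, assuming a Node.js service
// using the bcryptjs package. Function names are illustrative.
import * as bcrypt from "bcryptjs";

// At registration: never store the plaintext password, only the hash.
async function hashPassword(password: string): Promise<string> {
  return bcrypt.hash(password, 12); // 12 salt rounds; tune for your hardware
}

// At login: compare the supplied password against the stored hash.
async function verifyPassword(password: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(password, storedHash);
}
```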
Accountability and Transparency
- Santa Clara Principles 1.0: Basic requirements for apps to consider regarding moderation data collection, notices to end users, and appeals processes.
- DSA Transparency Database API Documentation: Attributes that may be required for DSA transparency reporting
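The two resources above describe what to collect and disclose about moderation decisions. Below is a sketch of the kind of per-decision record an operator might keep, assuming a TypeScript codebase; every field name is an illustrative assumption, not the DSA Transparency Database schema, so map your records to the attributes in the linked API documentation.

```typescript
// Illustrative moderation-decision record to support end-user notices,
// appeals, and transparency reporting. Field names are assumptions, not
// the DSA Transparency Database schema.
interface ModerationDecision {
  decisionId: string;
  reportId?: string;          // the report that triggered the decision, if any
  category: string;           // e.g. one of the classification slugs used for reports
  actionTaken: "none" | "label" | "restrict" | "remove" | "suspend";
  groundsForDecision: string; // the rule or law the decision relies on
  automatedDetection: boolean;
  automatedDecision: boolean;
  decidedAt: string;          // ISO 8601 timestamp
  userNotifiedAt?: string;    // when the affected user received a notice
  appeal?: {
    openedAt: string;
    outcome?: "upheld" | "overturned";
  };
}
```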
Account and Content Reporting Workflow
- Content moderators commonly experience trauma similar to that suffered by first responders. Even if you have never reviewed traumatic content yourself, your app or service may deliver it to the people using your moderation workflow. A minimal browser-side sketch of the media-handling points below appears after the classification lists. When presenting reported content to a service provider or moderator, always:
- Show the classification clearly, so the moderator is aware of the type of content they are about to review
- Blur all media until the moderator hovers to view a greyscale version (re-blur on mouseleave or whenever hover is no longer detected)
- Greyscale all media until the moderator clicks to toggle full colour (allow toggling back to greyscale)
- Mute all audio until the moderator requests audio
- Allow the moderator to reclassify the report
- Allow the service operator to choose from a list of harms or rules they want to receive reports about
- Offer the end user a path to report an actor, behaviour, or content, e.g. “report this account” or “report this post”
- Condense the labels by type and classification, and label each report. Use standard metadata to classify and present reported content. Use standard language to describe the reporting context. Consider a multi-step report submission process that allows fine-grained reporting, or use a first-level classification system that individual moderators can later refine if needed (one way to encode these classifications as data is sketched after the lists), e.g.
- 1. Report an Account
- Bullying (online-harassment)
- Brigading (brigading)
- Doxxing / PII (doxxing)
- Harassment (online-harassment)
- Imposter (impersonation)
- Account Takeover (account-takeover)
- Impersonation (impersonation)
- Sock Puppet / False Identity (sock-puppet)
- Inauthentic Engagement (cib)
- Astroturfing (astroturfing)
- Brigading (brigading)
- Catfishing (catfishing)
- Content Farming (farming)
- Service Abuse (service-abuse)
- Troll (troll)
- Dangerous Person or Organisation (content-and-conduct-related-risk)
- Bullying (online-harassment)
- 2. Report a Post
- Spam (spam)
- Deception (content-and-conduct-related-risk)
- Phishing (phishing)
- Scam / Fraud (content-and-conduct-related-risk)
- Sock Puppet / False Identity (sock-puppet)
- Sextortion (sextortion)
- Intellectual Property (copyright-infringement)
- Copyright (copyright-infringement)
- Counterfeit Goods or Services (counterfeit)
- Nudity / Sexual Activity (explicit-content)
- Explicit Content (explicit-content)
- Child Sexual Abuse (csam)
- False Information (disinformation)
- Defamation (defamation)
- Misinformation (misinformation)
- Manipulated Media / Deepfake (synthetic-media)
- Hateful Content (hate-speech)
- Hate Speech or Symbols (hate-speech)
- Dehumanisation (dehumanisation)
- Suicide or Self-harm (content-and-conduct-related-risk)
- Sale of illegal or regulated goods or services (content-and-conduct-related-risk)
- Violent Content (content-and-conduct-related-risk)
- Glorification of Violence (glorification-of-violence)
- Inciting Violence (incitement)
- Violent Threat (violent-threat)
- Terms of Service Violation / Community Guidelines Violation (service-abuse)
- Something Else / Not Listed (unclassified)
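One way to encode the first-level classifications above as data, assuming a TypeScript codebase. The labels and slugs are taken from the lists; the ReportTarget, ReportCategory, and Report names are illustrative assumptions.

```typescript
// Sketch of the report classification lists above as data.
type ReportTarget = "account" | "post";

interface ReportCategory {
  label: string; // shown to the person filing the report
  slug: string;  // standard metadata attached to the report for moderators
}

const reportCategories: Record<ReportTarget, ReportCategory[]> = {
  account: [
    { label: "Bullying", slug: "online-harassment" },
    { label: "Doxxing / PII", slug: "doxxing" },
    { label: "Impersonation", slug: "impersonation" },
    { label: "Account Takeover", slug: "account-takeover" },
    { label: "Inauthentic Engagement", slug: "cib" },
    // ...remaining account categories from the list above
  ],
  post: [
    { label: "Spam", slug: "spam" },
    { label: "Phishing", slug: "phishing" },
    { label: "Child Sexual Abuse", slug: "csam" },
    { label: "Hate Speech or Symbols", slug: "hate-speech" },
    { label: "Glorification of Violence", slug: "glorification-of-violence" },
    { label: "Something Else / Not Listed", slug: "unclassified" },
    // ...remaining post categories from the list above
  ],
};

// A submitted report carries the target, the chosen classification, and a
// reference to the reported object, so moderators can later refine it.
interface Report {
  target: ReportTarget;
  classification: ReportCategory;
  objectId: string;
  comment?: string;
}
```

Storing the slug with each report keeps the moderator-facing label free to change without breaking older reports.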
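A minimal browser-side sketch of the media-handling guidance above (blur until hover, greyscale until toggled, muted by default), assuming a plain DOM environment; the presentReportedMedia helper and the wrapper markup are illustrative, not part of any particular framework.

```typescript
// Trauma-informed presentation of reported media in a moderation queue:
// show the classification, blur until hover, stay greyscale until toggled,
// and keep audio muted by default.
function presentReportedMedia(
  media: HTMLImageElement | HTMLVideoElement,
  classification: string
): HTMLElement {
  const wrapper = document.createElement("figure");
  const label = document.createElement("figcaption");
  label.textContent = `Reported as: ${classification}`; // classification shown before the media
  wrapper.appendChild(label);
  wrapper.appendChild(media);

  let colour = false; // start in greyscale; the moderator opts in to full colour

  const applyFilters = (blurred: boolean) => {
    media.style.filter =
      `${blurred ? "blur(16px) " : ""}${colour ? "" : "grayscale(100%)"}`.trim();
  };

  applyFilters(true); // blurred and greyscale until the moderator interacts

  // Reveal a greyscale version on hover; re-blur when the pointer leaves.
  media.addEventListener("mouseenter", () => applyFilters(false));
  media.addEventListener("mouseleave", () => applyFilters(true));

  // Click toggles between greyscale and full colour (and back again).
  media.addEventListener("click", () => {
    colour = !colour;
    applyFilters(false);
  });

  // Audio stays muted until the moderator explicitly requests it in the UI.
  if (media instanceof HTMLVideoElement) {
    media.muted = true;
  }

  return wrapper;
}
```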
CSAE Policy