This study investigated youth perceptions of AI-driven cyberbullying interventions on social media. Researchers at Dublin City University (DCU) tested AI-based proactive content moderation strategies designed to detect and limit harmful interactions online. The consultation involved young people aged 12 to 17, who evaluated these interventions through focus groups and online discussions.
The study found that while AI content moderation can reduce harmful interactions, youth participants raised several concerns:
- Privacy concerns – Uncertainty about who controls AI interventions and their impact on digital rights.
- False positives and censorship – AI incorrectly flagging harmless content as harmful.
- Lack of human context – AI struggling to understand nuances in humour and informal language (illustrated in the sketch after this list).
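
The false-positive and missing-context concerns can be made concrete with a minimal, hypothetical sketch (not drawn from the DCU study or any specific platform): a naive keyword-based moderation filter that scores messages without any awareness of tone or relationship between users. The word list, function name, and threshold below are illustrative assumptions only.

```python
# Hypothetical illustration: a context-blind keyword filter of the kind
# youth participants worried about. It flags friendly banter as harmful
# (a false positive) because it sees only words, not humour or intent.

HARMFUL_KEYWORDS = {"idiot", "loser", "shut up"}  # assumed toy word list


def flag_message(text: str, threshold: int = 1) -> bool:
    """Flag a message if it contains at least `threshold` listed keywords.

    The filter has no notion of sarcasm, in-group humour, or the
    relationship between the people talking.
    """
    lowered = text.lower()
    hits = sum(1 for word in HARMFUL_KEYWORDS if word in lowered)
    return hits >= threshold


messages = [
    "you absolute idiot, that was the funniest thing all week",  # friendly banter
    "shut up loser, nobody wants you here",                      # genuinely harmful
]

for msg in messages:
    print(flag_message(msg), "->", msg)
# Both messages are flagged: the harmful one correctly, the banter
# incorrectly, because the filter cannot read context or tone.
```

Real moderation systems use far more sophisticated models than this, but the same basic tension remains: decisions made from text alone can misread informal language, which is the gap participants pointed to.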