This project has been funded with support from the European Commission. The author is solely responsible for this publication (communication), and the Commission accepts no responsibility for any use which may be made of the information contained therein. In compliance with the GDPR framework, please note that the Partnership will only process your personal data in the interest and for the purposes of the project and without any prejudice to your rights.

Case Study 5 - AI-Based Cyberbullying Interventions – Evaluating Youth Perspectives

This study investigated youth perceptions of AI-driven cyberbullying interventions on social media. Researchers at Dublin City University (DCU) tested AI-based proactive content moderation strategies designed to detect and limit harmful interactions online. The consultation involved young people aged 12 to 17, who evaluated these interventions through focus groups and online discussions.

The study found that while AI content moderation can reduce harmful interactions, youth participants raised several concerns:

  • Privacy concerns – Uncertainty about who controls AI interventions and their impact on digital rights.
  • False positives and censorship – AI incorrectly flagging harmless content as harmful.
  • Lack of human context – AI struggling to understand nuances in humour and informal language (a brief illustration of this limitation follows the list).
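
As a toy illustration (not drawn from the DCU study), the sketch below shows how a purely keyword-based flagger with a hypothetical word list produces exactly the kind of false positive participants worried about: it cannot distinguish friendly banter from abuse because it ignores tone and context.

```python
# Toy example only: a naive keyword-based flagger with a hypothetical word list.
# Real moderation systems are far more sophisticated, but the failure mode is similar.

HARMFUL_KEYWORDS = {"idiot", "loser", "hate"}  # hypothetical list for illustration

def flag_message(text: str) -> bool:
    """Flag a message if it contains any listed keyword, ignoring context entirely."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & HARMFUL_KEYWORDS)

# A joking message between friends is flagged (a false positive),
# because the rule cannot read humour or informal language.
print(flag_message("haha you absolute loser, see you at training tomorrow"))  # True
print(flag_message("Have a great weekend!"))                                  # False
```
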

