By Tomiwa Akinbamire
The United Nations Children’s Fund (UNICEF) has condemned the growing use of artificial intelligence (AI) to generate sexualised images, videos, and audio involving children, describing the practice as a form of child abuse.
In a statement posted on its official X handle, UNICEF referred to the trend as “deepfake abuse,” stressing that “deepfake abuse is abuse.” The organisation warned that even when victims are not identifiable, such content normalises the sexual exploitation of children and causes serious harm.
UNICEF cited findings from a joint study conducted with ECPAT and INTERPOL across 11 countries, which revealed the scale of the emerging threat. According to the report, at least 1.2 million children disclosed that their images had been manipulated into sexually explicit deepfakes within the past year. In some countries, this represents one in every 25 children, roughly the equivalent of one child in a typical classroom.
The report also found that children are increasingly aware of the dangers posed by AI misuse. In several of the countries surveyed, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos of them. UNICEF noted that levels of concern vary widely between countries, pointing to gaps in awareness, prevention, and child-protection measures.
UNICEF stated clearly that sexualised images of children generated or manipulated using AI tools constitute child sexual abuse material (CSAM), emphasising that there is nothing fake about the harm caused by deepfake abuse.
While acknowledging the efforts of some AI developers who have introduced safety-by-design approaches and guidelines to prevent misuse, UNICEF warned that safeguards across the AI industry remain uneven. The agency noted that many AI models are still being developed without adequate protections, and that the risks are compounded when generative AI tools are integrated directly into social media platforms, allowing manipulated images to spread rapidly.
In response to the escalating threat, UNICEF called on governments, AI developers, and digital companies to take urgent action. The organisation urged governments to expand legal definitions of child sexual abuse material to include AI-generated content and to criminalise its creation, possession, procurement, and distribution. It also called on AI developers to implement robust guardrails to prevent misuse of their systems.
UNICEF further urged digital companies to prevent the circulation of AI-generated child sexual abuse material in the first place, rather than merely removing it after harm has occurred, and to strengthen content moderation by investing in detection technologies that enable the immediate removal of such material.
UNICEF concluded with a warning that delays in legal and technological responses would deepen the harm to children, stating that the threat is real and urgent and that children cannot wait for the law to catch up.