AI-created child sexual abuse and sexual exploitation material

This past year has seen a flurry of development and discussion around artificial intelligence (AI) and machine learning (ML). For those of us outside the industry, the steady stream of related news can be confusing, and many wonder whether AI and machine learning represent humanity’s savior or its executioner. While the truth likely lies somewhere in between, the technology can clearly cause harm.

This is particularly true regarding computer-generated child sexual abuse material (CSAM), or child pornography. This article discusses the growing phenomenon of AI-created CSAM and the actions society, industry, and law enforcement must take to deter the problem before it flourishes. Throughout, I’ll use the more popular term child pornography to describe photographs depicting children sexually. However, readers should know that terms like child sexual abuse and sexual exploitation material (CSEM) are increasingly used instead, because they more accurately describe the dynamics and nature of the content. Much of the information in this article was drawn from the work of the Stanford Internet Observatory, an excellent resource on emerging internet technologies and their consequences.

The production and distribution of child pornography has long been an online problem. During the 1990s, when the broader public was first being introduced to the internet and email, one could find child pornography on the surface web simply by typing terms into a search engine like WebCrawler. As the millennium closed, however, such content became increasingly difficult to find through public search engines, and child pornography proliferated instead on peer-to-peer networks. While some bad actors continue to use such networks to traffic child pornography, a new threat is emerging: computer-generated child pornography (CGCP). Historically, online child pornography depicted actual youth being harmed, but developments in AI and machine learning now permit even a basic user to create such content without ever physically interacting with children. To better understand computer-generated child pornography, it is necessary to briefly discuss AI and machine learning.

Artificial intelligence is a broad term describing technology designed to replicate human intelligence and reasoning.

The pursuit of AI takes many forms, and one of the more prevalent approaches is machine learning. In machine learning, the program is trained through a process akin to “shaping,” in which the software’s responses are marked as positive or negative hits. Positive feedback reinforces the program, increasing the likelihood of similar responses in the future, while negative feedback indicates a miss, diminishing further incorrect responses. For laypeople, the process is comparable to the shaping a parent uses to encourage or discourage a child’s behavior. When using AI image generators, one types different prompts (i.e., requests) describing the desired content. Users can request a picture of a multi-colored dog, for instance, and then type different inputs to create varied backgrounds (e.g., a circus, an airplane, the moon). Now imagine near-infinite prompts, each one adjusting a sexual image of a child. With AI-generated child pornography, a user can employ specialized software to make requests about the subject’s age, physique, and position. Through repeated prompting, users can fashion AI-generated child pornography to meet their desires.

While computer-generated child pornography currently represents only about one percent of all online child sexual exploitation material, this will likely change as the software becomes more accessible and prevalent (Observatory). What is presently concerning is that a majority of AI-generated child pornography has been deemed realistic looking (Observatory). In other words, such content is increasingly free of telltale signs of fakery, such as unusual shadowing, odd positioning of the subject, and facial asymmetry.

One popular technology used to manufacture computer-generated child pornography is OpenPose. OpenPose is neither inherently bad nor purposely designed to create child pornography; it is a pose-estimation tool that maps the position of a human subject in a photograph. However, bad actors have used such tools to manipulate or reposition the subject of an image, taking an original photo of a child innocuously posing and placing the subject in a sexual position.

Bad actors have used software to learn human poses and then manipulate or reposition images of children to generate pornography.

[Figure: keypoint detection in human poses; image from viso.ai]

While some may see computer-generated child pornography as a “less harmful” alternative to older material depicting the sexual abuse of actual children, AI-made content presents significant issues. First, child pornography can be manufactured by AI at lightning speed, triggering a deluge of referrals to law enforcement and child protection agencies. As the technology advances and spreads, stakeholders will be confronted with ever more complaints about suspected child pornography, exhausting limited investigative resources. Police must spend considerable time determining whether a photographed subject is fake or an actual sexual abuse victim. Law enforcement, child safety organizations, and other prevention specialists are already overwhelmed by the volume of child pornography online, even without widespread AI-generated material.

Another issue is that bad actors may use content involving real victims of child sexual abuse as the basis for computer-generated exploitation material. For instance, a user may feed actual child pornography into the software as an initial input, which is then morphed and altered according to prompts; in one scenario, the photo’s subject can be made to look younger or older. This leads to the child being repeatedly victimized, as their likeness is used to create computer-generated child pornography.

[Figure: age transformation synthesis; image from the paper]

Subjects can be made to look younger or older using techniques such as age transformation synthesis by Adobe Research.

A third risk of AI-generated child pornography is its potential to facilitate sextortion, an online crime in which a perpetrator adopts a fake persona to solicit intimate images from youth. Criminals can use AI-generated content to place youth in compromising situations. For example, a perpetrator can take an authentic, harmless picture of a teenager and manipulate it to make it appear that the youth engaged in sexual activity. The perpetrator can then threaten to disclose the image and damage the youth’s reputation. Faced with such a threat, many youths may feel compelled to comply with the attacker’s demands rather than risk public humiliation.

Although AI-created child pornography is a mounting threat, professionals are already working to deter the problem. One possible intervention involves prohibiting new AI models from producing child content (Observatory). This would involve implanting prohibitions in the AI before launch, stopping users from later generating AI-created child pornography. Additionally, creators could block specific prompts or inputs related to child pornography; in this scenario, programmers, working with other key parties, would identify popular prompts associated with AI-generated child pornography and bar them from use. A third strategy involves assigning each AI program a specific watermark, giving investigators more to work with should they encounter manufactured material. When confronted with AI-generated child pornography, child safety specialists could look to the watermark or specialized designation to narrow their investigation and identify which program was used in the creation process.

Lastly, youth must be educated about the harms and penalties associated with AI-generated child pornography. It is reasonable to suspect that some youth will mistakenly believe such content is acceptable because it is created with publicly available technology or does not involve the “physical” harming of a child. Youths need to be told plainly that such behavior is illegal.

AI, like many ground-breaking technologies, offers considerable promise and can facilitate great societal advancement. However, there is also the risk of bad actors hijacking the technology for nefarious ends. Consequently, it is imperative that society fervently pursue measures to minimize the threat and prevent further sexual victimization.

Stanford Internet Observatory (no date) New report finds Generative Machine Learning Exacerbates Online Sexual, FSI. Available at: https://cyber.fsi.stanford.edu/io/news/ml-csam-report (Accessed: 02 August 2023).
