Fighting child sexual abuse online

Google is committed to fighting child sexual abuse material (CSAM) online and preventing our platforms from being used to spread this kind of content.

We invest heavily in fighting child exploitation online and use our proprietary technology to deter, detect, and remove offending content from our platforms.

We also use our technical expertise to develop and share technologies free of charge to help organizations detect and remove CSAM. We partner with NGOs, industry, and others, and lead programs to share our technical expertise with them.


Partnerships and programs

The Googler in Residence program

The Googler in Residence program, launched in 2014, sends Google engineers to expert child safety organizations, such as the UK’s Internet Watch Foundation (IWF) and the National Center for Missing and Exploited Children (NCMEC) in the US, to help these organizations increase their technical capacity. We also fund technical fellowships at organizations dedicated to fighting child sexual abuse like NCMEC and Thorn. In addition, Google provides training to law enforcement officials investigating online crimes against children through forums such as the Dallas Annual Crimes Against Children National Conference and Interpol’s specialist Crimes Against Children group.

Ad Grants

To make it easier for victims to seek help, and for others to report a child safety concern, we offer $120,000 annually in free advertising through our Ad Grants program to NGOs and charities that operate child sexual abuse reporting hotlines and helplines for victims. The program provides support and training to help these not-for-profit organizations reach people most effectively and enables them to run ads for their services on Google Search free of charge.

Deterrence ads and Google Search

In many countries, users who type queries associated with child sexual abuse into Google Search are shown deterrence ads or an in-depth search result at the top of their results that makes clear that child sexual abuse, and any material that depicts or promotes it, is illegal. These messages also include links to trusted partners for reporting abusive behavior or imagery and offer advice on where to get help.

Partnerships

For more than a decade we have also been a member of several public-private coalitions, including the Technology Coalition, the ICT Coalition, the WeProtect Global Alliance, and Europol’s Financial Coalition against Commercial Sexual Exploitation of Children Online. These partnerships bring companies together to develop tech solutions that disrupt the exchange of CSAM online and prevent the sexual exploitation of children.

Providing tools to fight child sexual abuse

Content Safety API

The Content Safety API is a tool developed by Google that uses artificial intelligence to help organizations prioritize CSAM for review. We offer this service free of charge to NGOs and private companies to support their work protecting children. The API strengthens the fight against CSAM by prioritizing, for human review, potentially illegal content that has not been seen before. This in turn can help reviewers find and report content seven times faster. Quicker identification of new images increases the likelihood that children being abused can be identified and protected from further abuse. Making review queues more efficient and less noisy also reduces the toll on the human reviewers who must view images to confirm instances of CSAM.
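
To illustrate the general idea of classifier-based prioritization, here is a minimal sketch of sorting a review queue by a priority score. It is not the Content Safety API's actual interface: the names used (SuspectImage, classify_priority) are hypothetical stand-ins for a call to a classification service that returns a score indicating how likely an image is to need urgent review.

```python
# Illustrative sketch only: the names below (SuspectImage, classify_priority)
# are hypothetical and do not reflect the real Content Safety API interface.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SuspectImage:
    image_id: str
    data: bytes


def prioritize_review_queue(
    images: List[SuspectImage],
    classify_priority: Callable[[bytes], float],
) -> List[SuspectImage]:
    """Order a review queue so the likeliest CSAM is reviewed first.

    classify_priority stands in for a call to a classification service that
    returns a score in [0, 1]; higher means the image is more likely to need
    urgent human review.
    """
    scored = [(classify_priority(img.data), img) for img in images]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [img for _, img in scored]
```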

CSAI Match

CSAI Match is our proprietary technology, developed by the YouTube team, for combating child sexual abuse imagery (CSAI) in video content online. It was the first technology to use hash matching to identify known violative videos, and it allows us to pick out this type of content amid a high volume of non-violative video. When a match is found, it is flagged to partners so they can responsibly report it in accordance with local laws and regulations. Through YouTube, we make CSAI Match available free of charge to NGOs and industry partners like Adobe, Reddit, and Tumblr, who use it to counter the spread of child exploitation videos on their own platforms.
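
CSAI Match's fingerprinting is proprietary, so the sketch below is only a conceptual illustration of hash matching: an ordinary SHA-256 digest stands in for the fingerprint, and known_hashes is a hypothetical set of fingerprints of previously confirmed violative content. A plain cryptographic hash only matches byte-identical copies; production fingerprinting is designed to be far more robust than this.

```python
# Conceptual illustration only: CSAI Match's actual fingerprinting is
# proprietary. A SHA-256 digest, used here as a stand-in, only matches
# byte-identical copies of a file.
import hashlib
from typing import Iterable, List, Set


def fingerprint(content: bytes) -> str:
    """Stand-in 'fingerprint': a SHA-256 digest of the raw bytes."""
    return hashlib.sha256(content).hexdigest()


def find_known_matches(uploads: Iterable[bytes], known_hashes: Set[str]) -> List[str]:
    """Return digests of uploads that match the known-content hash list,
    so the matching items can be flagged for review and reporting."""
    return [h for h in (fingerprint(u) for u in uploads) if h in known_hashes]
```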

Fighting abuse on our own platforms and services

Google has been committed to fighting child sexual abuse on our services since our earliest days.

We devote significant resources—technology, people, and time—to detecting, deterring, removing, and reporting child sexual exploitation content and behavior. Since 2008, we’ve used “hashing” technology, which creates a unique digital ID for each known child sexual abuse image, to identify copies of those images wherever they appear on our services.

We’ve also created a shared industry repository of video hashes that allows known child abuse videos to be identified and blocked, enabling other companies to remove the same content from their platforms.

In 2013, we introduced algorithmic changes to Google Search to more aggressively prevent images, videos, and links to CSAM from appearing in search results. We’ve since rolled this change out globally, automatically applying these checks to millions of queries.

At Google and YouTube, both CSAI Match and the Content Safety API tools are used along with a number of other internal tools to prioritize reviews. We also use the latest advances in deep neural networks and machine learning to help support this work.

Identifying and fighting the spread of child sexual abuse material is an ongoing challenge. Governments, law enforcement, NGOs, and industry all have an important role to play. We constantly review and adapt our systems and policies to respond to new trends and threats, and to incorporate the best new technology available to make sure we are as effective as possible in keeping children safe online. For more on our child safety policies, see YouTube’s Community Guidelines and the Google Safety Center.

FAQs

  • Does the Content Safety API work for video?

    The Content Safety API is designed for images, but through YouTube’s CSAI Match, organizations can access fingerprinting software and an API to identify matches against our database of known abusive content.

  • What makes the Content Safety API so efficient?

    Imagine an organization fighting the spread of CSAM has a single reviewer who must find 7,500 pieces of CSAM within a body of 100,000 suspected images. At 10 seconds per image, the reviewer can complete only about 14,400 reviews in a 40-hour week; working through the queue in arbitrary order, with 7.5% of the images being CSAM, they would expect to find roughly 1,080 of them. Using our Content Safety API to prioritize the queue so that the likeliest CSAM is reviewed first, the same number of reviews could surface over 7,000 pieces of CSAM in the same time frame, roughly a sevenfold improvement. (A short worked version of this arithmetic appears after these FAQs.)

  • Who can sign up to access the technology and Content Safety API?

    NGOs specializing in combating child sexual abuse and private companies seeking to protect their platform against CSAM can sign up to access the Content Safety API. Applications are subject to approval.

  • Does CSAI Match work for images?

    CSAI Match is designed for video, but through Google’s Content Safety API, a collection of tools is available to industry and NGO partners, offering machine learning-powered classification for images.

  • Why do you make these tools so widely available?

    We have developed technology to detect CSAM in ways that are targeted and effective. We believe the best approach to tackling CSAM is to collaborate with other tech companies and NGOs. We have long worked across industry and with NGOs to support the development of new data-driven tools, boost technical capacity, and raise awareness. The next step in this fight is using AI to help us better review this content at scale.
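
The figures in the efficiency question above follow from simple arithmetic, reproduced in the short sketch below. The 40-hour workweek and the assumption that an unprioritized queue is reviewed in arbitrary order are our reading of the example rather than stated parameters.

```python
# Worked arithmetic for the efficiency example above. The 40-hour workweek and
# the arbitrary-order assumption for the unprioritized queue are our own
# reading of the example, not figures stated on this page.
SUSPECTED_IMAGES = 100_000
ACTUAL_CSAM = 7_500
SECONDS_PER_REVIEW = 10
WORKWEEK_SECONDS = 40 * 3600  # one 40-hour week

reviews_per_week = WORKWEEK_SECONDS // SECONDS_PER_REVIEW        # 14,400 reviews
hit_rate_unprioritized = ACTUAL_CSAM / SUSPECTED_IMAGES          # 7.5% of the queue is CSAM
found_unprioritized = reviews_per_week * hit_rate_unprioritized  # ~1,080 found

# With an accurate priority queue, nearly every early review is a hit, so the
# same 14,400 reviews can surface most of the 7,500 items.
found_prioritized = min(reviews_per_week, ACTUAL_CSAM)           # 7,500 found

print(f"Unprioritized: ~{found_unprioritized:.0f} images found per week")
print(f"Prioritized:   up to {found_prioritized} images found per week")
print(f"Improvement:   ~{found_prioritized / found_unprioritized:.1f}x")
```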