Better prioritisation

The APIs aid in the fight against online child exploitation by prioritising abusive content for human review.

Quicker identification

Identifying content more quickly increases the likelihood that victims can be identified and protected from further abuse.

Safer operations

Making review queues more efficient and less noisy also reduces the toll on human content moderators.

Learn about our tools

Our tools have complementary capabilities. They can be used jointly, and with other solutions, to meet different needs.

Content Safety API

Classifying previously unseen images and videos

CSAI Match

Matching known abusive video segments

Content Safety API

Used for: Classifying previously unseen images and videos


The Content Safety API classifier uses programmatic access and artificial intelligence to help our partners classify and prioritise billions of images and videos for review. The higher the priority assigned by the classifier, the more likely it is that the media file contains abusive material, which helps partners order their human review and make their own determination about the content. The Content Safety API issues only a prioritisation recommendation on content sent to it: partners must conduct their own review to determine whether to take action.

Operationally, we recommend that organisations use the Content Safety API immediately before the manual review process, to classify and prioritise content and to help organise their queues. The Content Safety API can be used in parallel with other solutions, such as YouTube’s CSAI Match video hashing tool or Microsoft’s PhotoDNA, each of which addresses different needs.
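To make that placement concrete, the sketch below shows the step just before manual review: each media file is sent for classification, and the returned priority drives the order of the review queue. The `classify_priority` function is a stand-in for a real Content Safety API client call; its name, the 0.0–1.0 priority scale and the hard-coded scores are assumptions for illustration, not the actual partner interface.

```python
# Sketch: classify incoming media, then order the manual-review queue by priority.
# classify_priority() stands in for a real Content Safety API client call;
# the 0.0-1.0 priority scale and the scores below are illustrative only.

def classify_priority(media_id: str) -> float:
    """Placeholder for the API call that returns a review priority."""
    fake_scores = {"img_001": 0.93, "img_002": 0.12, "vid_003": 0.71}
    return fake_scores.get(media_id, 0.0)

def build_review_queue(media_ids: list[str]) -> list[tuple[str, float]]:
    """Return (media_id, priority) pairs, highest priority first."""
    scored = [(m, classify_priority(m)) for m in media_ids]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

queue = build_review_queue(["img_002", "vid_003", "img_001"])
print(queue)  # highest-priority item first
```

Reviewers then work from the top of `queue`, so the files most likely to contain abusive material are seen first.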

How does it work?

1. File retrieval

Files reach the partner in multiple ways: for example, reported by a user, or identified by crawlers or pre-filters that the partner has created to moderate content on its platform.

Diagram: files reach the partner through user-reported images or videos, crawlers and pre-filters (porn/other classifiers).
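The multiple intake paths in step 1 can be modelled as a single stream of review candidates. The source names and media IDs below are illustrative only, not part of any real interface.

```python
# Merge review candidates from several intake sources into one deduplicated list.
# Source names and IDs are illustrative, not part of any real interface.
user_reports = ["vid_10", "img_4"]
crawler_hits = ["img_4", "img_7"]
prefilter_hits = ["vid_10", "vid_12"]  # e.g. flagged by a porn/other classifier

def merge_candidates(*sources: list[str]) -> list[str]:
    """Combine sources, preserving first-seen order and dropping duplicates."""
    seen: set[str] = set()
    merged: list[str] = []
    for source in sources:
        for media_id in source:
            if media_id not in seen:
                seen.add(media_id)
                merged.append(media_id)
    return merged

print(merge_candidates(user_reports, crawler_hits, prefilter_hits))
# → ['vid_10', 'img_4', 'img_7', 'vid_12']
```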

2. API review

The media files are then sent to the Content Safety API via a simple API call. They are run through classifiers to determine a review priority, and the priority value for each piece of content is sent back to the partner.

Diagram: Google’s Content Safety API classifier technology processes the files.
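Step 2 amounts to one request per media file and one priority per response. The FAQ below notes that the API accepts raw content bytes, so a request body might carry them base64-encoded, as in this sketch. The field names and JSON layout are guesses for illustration; the actual request format is shared with approved partners.

```python
import base64
import json

def build_classification_request(media_id: str, content: bytes) -> str:
    """Assemble an illustrative JSON request body carrying raw content bytes.

    The field names here are hypothetical, not the real API schema.
    """
    payload = {
        "media_id": media_id,
        "content": base64.b64encode(content).decode("ascii"),
    }
    return json.dumps(payload)

body = build_classification_request("img_001", b"\x89PNG...")
print(body)
```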

3. Manual review

Partners use the priority value to surface the files that need attention first for manual review.

4. Take action

Once image and video files have been manually reviewed, the partner can then take action on the content in accordance with local laws and regulations.

CSAI Match

Used for: Matching known abusive video segments


CSAI Match is YouTube’s proprietary technology for combating child sexual abuse imagery (CSAI) in videos online. It was the first technology to use hash matching to identify known violative content, and it allows us to find this content amid a high volume of non-violative video. When a match is found, it is flagged to partners, who review it, confirm it and report it responsibly in accordance with local laws and regulations. YouTube makes CSAI Match available to industry partners and NGOs, providing access to fingerprinting software and an API for identifying matches against our database of known abusive content.

Online platforms can prevent violative content from being displayed and shared on their sites by using CSAI Match to compare their content against one of the largest indices of known CSAI content. CSAI Match is simple for partners to integrate into their system, allowing them to better scale challenging content management.

How does it work?

1. Video fingerprinting

A video is uploaded to the partner’s platform. The CSAI Match fingerprinter, which is run on the partner’s platform, creates a fingerprint file of the video, a digital ID that uniquely represents the content of the video file.

Diagram: on the partner’s platform, the fingerprinter converts the video file into a fingerprint file.
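As a toy model of step 1 (not YouTube’s actual fingerprinting algorithm, which is proprietary), a video can be summarised as a sequence of per-segment digests, so that two videos sharing a segment also share part of their fingerprint. Real fingerprints are perceptual, surviving re-encoding; the byte-exact hashing below is only for illustration.

```python
import hashlib

def toy_fingerprint(video_bytes: bytes, segment_size: int = 4) -> list[str]:
    """Toy stand-in for a video fingerprint: one short digest per fixed-size
    segment of the byte stream. Real fingerprints are perceptual, not byte-exact.
    """
    return [
        hashlib.sha256(video_bytes[i:i + segment_size]).hexdigest()[:8]
        for i in range(0, len(video_bytes), segment_size)
    ]

fp = toy_fingerprint(b"frameAframeBframeC", segment_size=6)
print(fp)  # three 8-hex-char digests, one per 6-byte segment
```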

2. API review

The partner sends the fingerprint file via the CSAI Match API to be compared with the other files in YouTube’s fingerprint repository. The repository contains fingerprints of known abusive content detected by YouTube and Google.

Diagram: YouTube’s CSAI Match API checks the fingerprint against the shared CSAI fingerprint repository.
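Step 2 can be pictured as a membership check of fingerprint segments against a repository index: a positive match means at least one segment digest appears in the index. This is an illustrative sketch with made-up digests, not the CSAI Match service’s real matching logic.

```python
# Illustrative sketch of matching a fingerprint against a repository index.
# The digests and the matching rule are toy stand-ins for the real service.

repository_index = {"a1b2c3d4", "9f8e7d6c"}  # digests of known-CSAI segments

def match_fingerprint(segments: list[str]) -> list[int]:
    """Return the indices of fingerprint segments found in the repository."""
    return [i for i, digest in enumerate(segments) if digest in repository_index]

hits = match_fingerprint(["00000000", "a1b2c3d4", "ffffffff"])
print("positive match" if hits else "negative match", hits)
```

Returning the matching indices rather than a bare boolean mirrors the FAQ note below that a match identifies which portion of the video matched.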

3. Manual review

A positive or negative match is returned to the partner once the API call completes. Based on the match information, the partner manually reviews the video to verify that it is CSAI.

4. Take action

Once the content has been reviewed, the partner can take action on it in accordance with local laws and regulations.



Child safety toolkit interest form

Interested in using our toolkit?

Share a few details about your organisation to register your interest.

View the interest form

FAQs

Content Safety API

What is the format of the data sent to the Content Safety API?

We have options to support both raw content bytes and embeddings derived from media files. Get in touch for more details.

Who can sign up to access the technology and Content Safety API?

Industry and civil society third parties seeking to protect their platform against abuse can sign up to access the Content Safety API. Applications are subject to approval.

Why do you make these tools so widely available?

We believe that the best approach to tackling online child exploitation is to collaborate with other companies and NGOs. We have long worked across industry and with NGOs to support the development of new data-driven tools, boost technical capacity and raise awareness. We believe that making these tools widely available, so that our partners can use AI to better review content at scale, is an important part of this fight.

CSAI Match

Does CSAI Match work for images?

CSAI Match is designed for video. For images, Google’s Content Safety API offers machine learning-powered classification to industry and NGO partners.

What information is returned with an identified match?

The match will identify which portion of the video matches known CSAI, as well as a standardised categorisation of the type of content that was matched.

What makes the CSAI Match technology so efficient?

CSAI Match detects near-duplicate segments of known CSAI content. This includes exact duplicates, which MD5 hash matching would also catch, as well as near-duplicates such as re-encodings, obfuscations, truncations or rescalings of CSAI videos, even if a video contains only a small portion of CSAI mixed with non-CSAI content. Partners run a fingerprinting binary to produce a 'fingerprint' of the video, a byte sequence similar in spirit to an MD5 hash. This is then sent to Google’s CSAI Match service, which is specifically designed for efficiency when scanning a video against YouTube’s corpus of known CSAI references.
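The contrast with MD5 can be demonstrated directly: a single-byte change, as any re-encoding would cause, yields a completely different whole-file MD5, while a segment-level digest scheme (again a toy stand-in for CSAI Match fingerprints, which are perceptual rather than byte-exact) still matches on the untouched segments.

```python
import hashlib

original = b"segment-one.segment-two.segment-three."
reencoded = b"segment-one.segment-TWO.segment-three."  # one segment altered

# Whole-file MD5: any byte change breaks the match entirely.
print(hashlib.md5(original).hexdigest() == hashlib.md5(reencoded).hexdigest())  # False

# Segment-level digests (toy scheme): unchanged segments still match.
def segment_digests(data: bytes, size: int = 13) -> list[str]:
    return [hashlib.md5(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

shared = set(segment_digests(original)) & set(segment_digests(reencoded))
print(len(shared))  # two of the three segments still match
```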