Google Joins OpenAI and Meta in Pledging Child Safety in AI Development
The fight against child sexual abuse material (CSAM) is gaining momentum as high-powered technology players pledge to prioritize child safety in all phases of AI development. The coalition features leading generative AI developers, among them Microsoft-backed OpenAI, Google, and Meta, each vowing to enforce guardrails around its products.
The group is steered by two non-profit organizations united by a collective pledge to end AI-facilitated child sexual exploitation. New York-headquartered All Tech is Human, alongside Thorn (formerly the DNA Foundation), advocates Safety by Design principles in all phases of generative AI development.
The pledge to prioritize child safety has been a core objective for Thorn since its founding in 2012 by actors Ashton Kutcher and Demi Moore. The collective pledge coincided with the unveiling on Tuesday, April 23, of a Thorn report championing Safety by Design in generative AI development.
Thorn emphasized the need to establish guardrails that bar the generation of CSAM across the AI model lifecycle, urging the companies developing, deploying, and using generative AI and related products to embrace Safety by Design principles.
Thorn Warns of Deepfake Threats Amid Prevalence of CSAM
Thorn added that the collective pledge requires each signatory to demonstrate its dedication to preventing the creation and spread of CSAM via AI. It also urged the companies to guard against facilitating any form of child abuse and exploitation.
Thorn indicated that AI-generated CSAM has become relatively easy for criminals to obtain. It affirmed its commitment to developing tools and resources that help defend children from exploitation and sexual abuse.
Thorn pointed to its 2022 publication, which reported the discovery of over 824,000 files of child abuse material; the previous year's impact report cited over 104 million files of suspected CSAM.
Thorn's efforts to combat CSAM are timely, given the prevalence of deepfake child pornography since generative AI models emerged. Stand-alone AI models circulated on dark web channels aggravate the menace.
Thorn illustrated that generative AI yields volumes of content more easily than ever before, allowing a single predator to create CSAM en masse and to adapt existing images and videos into new material.
Thorn observed that the influx of AI-generated CSAM (AIG-CSAM) threatens the already burdened child safety ecosystem by scaling victimization to a wider population, exacerbating the challenges law enforcement confronts in identifying and rescuing abuse victims.
Thorn offered a series of principles that tech companies and AI developers can embrace to prevent their products from facilitating child pornography, urging responsible sourcing of training data alongside feedback loops and stress-testing strategies.
Companies Behind Generative AI Models Pledge to Guard Against Adversarial Misuse of Products
Thorn suggested that companies employ content provenance measures while guarding against adversarial misuse, adding that responsible hosting is key to AI model safety. The call drew Microsoft, Amazon, Metaphysic, Anthropic, Mistral AI, Civitai, and Stability AI to the pledge.
Metaphysic's marketing chief, Alejandro, affirmed the importance of integrating responsibility into AI development, acknowledging the need to safeguard society's most vulnerable as the technology advances.
OpenAI weighed in through a statement from its child safety lead, Chelsea Carlson, who explained the importance of safety in the company's tools.
Carlson confirmed that the guardrails built into ChatGPT and DALL-E align with the call by Thorn and All Tech is Human, affirming support for the Safety by Design principles as a reference for mitigating potential harm to children.
Meta hailed its experience in keeping people safe through numerous tools that combat potential harm orchestrated by child predators, pledging to keep adapting since predators, too, adjust their tactics to evade protections.
Google's head of trust and safety solutions, Susan Jasper, explained that the team uses hash-matching technology and AI classifiers alongside human reviews to proactively remove child sexual abuse and exploitation (CSAE) content. The proactive safety approach extends to detecting AI-generated CSAM and reporting it to the National Center for Missing & Exploited Children (NCMEC).
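For readers curious about the hash-matching Jasper mentions, the following is a minimal, purely illustrative Python sketch of the idea: comparing uploaded files against a list of known-bad hashes and escalating matches for review. Every name in it is hypothetical, and this is not Google's implementation; production systems such as Microsoft's PhotoDNA or Google's CSAI Match use perceptual hashes that survive resizing and re-encoding, whereas the exact SHA-256 matching below only catches byte-identical copies.

```python
import hashlib
from pathlib import Path

# Placeholder hash list. In practice this would be populated from a vetted
# database of hashes of known abusive material (e.g., one supplied by NCMEC).
KNOWN_HASHES: set[str] = set()

def sha256_of_file(path: Path) -> str:
    """Stream a file through SHA-256 so large uploads don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_upload(path: Path) -> str:
    """Return a moderation decision: block known material, else pass along."""
    if sha256_of_file(path) in KNOWN_HASHES:
        return "block_and_report"  # escalate to human review and reporting
    return "send_to_classifier"    # unknown files go to AI classifiers next
```

The design mirrors the layered approach Jasper describes: cheap hash lookups catch known material first, and only unmatched content proceeds to costlier AI classification and human review.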
The creation of the collective pledge is timely, given that several watchdog groups, led by the UK's Internet Watch Foundation, have warned that AIG-CSAM risks overwhelming the internet with exploitative material.