
Sam Altman Joins AI Industry Leaders Enlisted for Homeland Security’s Safety Board 

OpenAI chief executive Sam Altman is among the AI industry leaders enlisted by the US Department of Homeland Security (DHS) to help establish the Artificial Intelligence Safety and Security Board (AISSB). The leaders, drawn from leading AI developers, were named in a DHS announcement on Friday, April 26, with a mandate to safeguard critical infrastructure.

OpenAI’s Altman Named in AI Safety Board

OpenAI’s Altman joins Microsoft’s Satya Nadella and other appointees, including NVIDIA and Google executives. DHS said the newly constituted body will provide critical input on harnessing AI to protect critical infrastructure and mitigate AI-powered threats.

Homeland Security Secretary Alejandro Mayorkas said in a statement that AI is a transformative technology with unprecedented potential to advance national interests. At the same time, it presents real risks that Homeland Security can mitigate by deploying best practices and taking studied, concrete actions.

Mayorkas will steer the board, which features high-profile executives including Alphabet chief Sundar Pichai, NVIDIA’s Jensen Huang, and Anthropic’s Dario Amodei, among others. The Homeland Security Secretary thanked the accomplished leaders for dedicating their expertise and time to protecting critical infrastructure.

DHS’ AI Safety Board to Formulate Guardrails 

Mayorkas said the board seeks to tap AI to formulate guardrails safeguarding the vital services that Americans use every day. That responsibility extends to effectively mitigating the risks AI poses despite its enormous potential as a transformative technology.

Anthropic chief Amodei said AI technology has the potential to offer immense benefits to society when deployed responsibly. The executive, whose company is behind Claude 3, supports efforts to examine the safety of frontier AI systems and address their potential risks.


In a concurrent statement, Amodei expressed optimism about Anthropic joining the team tasked with studying AI’s implications for protecting critical infrastructure.

Amodei noted that safe AI development is essential for the US as it aims to secure the infrastructure powering American society. He added that the board’s formation is a critical step towards strengthening US national security.

Microsoft chief executive Satya Nadella described AI as the most transformative technology of the modern era and stressed the need for its safe and responsible deployment. He added that Microsoft welcomes its inclusion on the high-powered board tasked with this essential effort and is optimistic about sharing what it has learned.

Integrating Safety and Human-Centered AI Development

The board also features Amazon Web Services chief Adam Selipsky, IBM’s Arvind Krishna, and Leadership Conference on Civil and Human Rights president Maya Wiley.

Fei-Fei Li, co-director of the Stanford Human-Centered AI Institute, hailed the inclusion of interdisciplinary leaders with a mandate to steward the world-changing technology through a human-centered approach. Li identified AI as a potent tool whose development will ultimately affect individuals, communities, and society.

The board will hold its first quarterly meeting next month to offer suggestions on safely adopting the technology within essential services. Its objective is to create a forum for DHS to safeguard the critical infrastructure community while allowing AI leaders to share information on AI-related security risks.

Enlisting AI industry leaders into the safety board is a critical milestone given AI’s rapid advance in 2023, which left governments worldwide scrambling to regulate the new technology. In January, the World Economic Forum (WEF) classified AI, alongside quantum computing, as an urgent global risk.


The Biden Administration acknowledged AI’s rapid spread with an executive order issued in October of last year. The order directed the formation of the AI Safety Institute Consortium (AISIC), which includes most of those enlisted in the DHS initiative.

Beyond national security concerns, AI developers including Google, Meta, and OpenAI have joined efforts to enforce guardrails averting AI-generated child sexual abuse material (CSAM).

The AI developers embraced a call by the nonprofits Thorn and All Tech Is Human, pledging support to end the abuse. CSAM was also the subject of a letter from Massachusetts Senator Elizabeth Warren urging action from the Justice Department and DHS.



Stephen Causby

Stephen Causby is an experienced crypto journalist who writes for Tokenhell. He is passionate about covering crypto news, blockchain, DeFi, and NFTs.
