
US Standards and Technology Group Urges Public Input on AI Safety and Development Guidelines 

The US National Institute of Standards and Technology (NIST) has urged the public and AI companies to submit input on the Executive Order on artificial intelligence (AI) development, in a statement issued on Tuesday, December 19. The federal standards body is seeking public participation and information on risk management for generative AI and on addressing AI-generated misinformation.

The Department of Commerce agency issued the request via an official blog post seeking public input to support its duties under the recently issued presidential executive order, which aims to ensure secure and responsible AI development.

NIST Sets February Deadline for Public Input

The statement published by NIST asks the public to submit input by February 2, 2024. The feedback gathered is intended to inform the design of tests ensuring that AI systems are safe.

US Commerce Secretary Gina Raimondo said NIST's invitation to the public stems from the executive order issued by President Joe Biden. In particular, the October order directed NIST to prioritize the creation of guidelines fostering consensus-based standards, red-teaming, and evaluation.

Beyond creating a testing framework for AI systems, NIST aims to support the AI community's efforts toward reliable, responsible, and safe AI development. Accomplishing this requires the input NIST is requesting from AI developers and the public on risk management and on reducing vulnerability to AI-generated misinformation.

NIST's move coincides with both widespread criticism of and enthusiasm for generative AI's capability to turn open-ended prompts into text, videos, and photos.


Tuesday's statement by NIST urges input on several issues deemed vulnerable to manipulation via generative AI. It seeks feedback on job displacement, systems surpassing human capability, and electoral disruption with potentially catastrophic consequences.

NIST Request Weighs Viable Red-Teaming for AI Risks and Best Practices

A closer look at the NIST request shows it seeks to identify the areas where red-teaming is viable for assessing AI risks and formulating best practices. Red-teaming is a practice originating in Cold War simulations, in which an assembled group plays likely adversaries to uncover the vulnerabilities and weaknesses of a process or system. Its use in cybersecurity has proven effective at uncovering new risks.

Raimondo reiterated that the Executive Order directs NIST to solicit feedback from diverse stakeholders across civil society, academia, and industry.

The Commerce Secretary hailed the approach as a sure pathway to developing standards oriented toward the trust, security, and safety of AI. Such an effort, she argued, would position the US as a leader in the responsible development and use of the rapidly evolving technology.

NIST director Laurie Locascio welcomed the pathway to reducing the risks of synthetic content and advancing responsible global technical standards for AI development. She added that NIST aims for deeper engagement with the AI community to better understand AI assessment relative to the goals outlined in the October Executive Order.

The NIST director described the invitation to the broader AI community as an opportunity to engage with NIST's talented and committed team. Locascio expressed optimism that the request for stakeholder input will advance AI safety and trust practices.


NIST Eyes Human-Centered Approach to Ensure AI Safety and Governance

Locascio encouraged active community participation to help gather diverse perspectives toward an unbiased scientific understanding of AI. Aware of AI's potential impact on humanity, NIST restated its commitment to developing guidance through a transparent and open process featuring input from industry, civil society, government, and academic stakeholders.

NIST's invitation for public input follows its inaugural public evaluation and red-teaming event held in August. The event took place during a cybersecurity conference and was coordinated by SeedAI, AI Village, and Humane Intelligence.

Tuesday's request for information builds on NIST's November announcement of a new AI consortium. That announcement included an official notice inviting applicants with relevant credentials to join.

The consortium aims to create and implement AI-specific policies and assessments that ensure US legislators embrace a human-centered approach to AI safety and governance.

Stephen Causby

Stephen Causby is an experienced crypto journalist who writes for Tokenhell. He is passionate about covering crypto news, blockchain, DeFi, and NFTs.
