
In a statement on Tuesday, December 19, the US National Institute of Standards and Technology (NIST) requested input from the public and AI companies on the Executive Order on artificial intelligence (AI) development. The federal technology and standards agency is seeking information on managing the risks of generative AI and on countering AI-generated misinformation.

The US Department of Commerce agency issued the request for public information via its official blog, in support of the duties assigned to it under the recently issued presidential executive order, which aims to ensure secure and responsible AI development.

NIST Sets February Deadline for Public Input

The NIST statement asks the public to submit input by February 2, 2024. The feedback gathered is intended to support the development of tests that ensure AI systems are safe.

US Commerce Secretary Gina Raimondo said NIST's invitation to the public follows from the executive order issued by President Joe Biden. In particular, the October order directed NIST to prioritize the creation of guidelines supporting consensus-based standards, red-teaming, and evaluation.

Besides establishing a testing framework for AI systems, NIST aims to support the AI community's efforts toward reliable, responsible, and safe AI development. To that end, NIST is asking AI developers and the public for input on risk management and on reducing vulnerability to AI-generated misinformation.

NIST's move coincides with both widespread criticism of and enthusiasm for generative AI's ability to produce text, videos, and images from open-ended prompts.


Tuesday's statement urges input on several areas considered vulnerable to manipulation via generative AI. It seeks feedback on concerns such as job displacement, AI surpassing human capability, and electoral disruption with potentially catastrophic consequences.

NIST Seeks Input on Red-Teaming for AI Risk Assessment and Best Practices

Closer scrutiny of the NIST request shows it seeks to identify where red-teaming can usefully be applied in assessing AI risks and formulating best practices. Red-teaming, a practice that originated in Cold War simulations, involves an assembled group acting out likely adversarial scenarios to expose a system's or process's vulnerabilities and weaknesses. Its use in cybersecurity has proved effective at uncovering new risks.

Raimondo noted that the Executive Order directs NIST to solicit feedback from diverse stakeholders across civil society, academia, and industry.

The Commerce Secretary hailed the approach as a reliable pathway toward developing standards oriented around the trust, security, and safety of AI. Such an effort, she said, would position the US as a leader in the responsible development and use of the rapidly evolving technology.

NIST director Laurie Locascio welcomed the pathway to reduce the risks of synthetic content and advance responsible global technical standards for AI development. She added that NIST aims for deeper engagement with the AI community to build a better understanding of AI assessment relative to the goals outlined in the October Executive Order.

The NIST director described the invitation to the broader AI community as an opportunity to engage with a talented and committed team. Locascio expressed optimism that the request for stakeholder input will advance AI safety and trust practices.


NIST Eyes Human-Centered Approach to Ensure AI Safety and Governance

Locascio encouraged active community participation to help gather diverse perspectives toward an unbiased scientific understanding of AI. Mindful of AI's potential impact on humanity, NIST restated its commitment to developing guidance through a transparent and open process featuring input from industry, civil society, government, and academic stakeholders.

NIST's invitation for public input follows the inaugural public evaluation and red-teaming event held in August. The event took place at a cybersecurity conference and was coordinated by SeedAI, AI Village, and Humane Intelligence.

Tuesday's request for information builds on NIST's November announcement of a new AI consortium, which included an official notice inviting applicants with relevant credentials to join.

The consortium aims to create and implement AI-specific policies and assessments to ensure US legislators adopt a human-centered approach to AI safety and governance.





By Stephen Causby

Stephen Causby is an experienced crypto journalist who writes for Tokenhell. He covers crypto news, blockchain, DeFi, and NFTs.
