
OpenAI CEO Urges US Lawmakers to Create New Regulatory Agency for AI

On May 4, US Vice President Kamala Harris invited artificial intelligence (AI) experts to deliberate on ethical considerations for the new technology. Afterwards, US policymakers requested that OpenAI chief executive Sam Altman appear before the Senate for further deliberation on AI.

Louisiana Senator John Kennedy asked the witnesses present to expound on how AI technology should be regulated. Responding to Senator Kennedy's request, Altman suggested that the US government create a new public office for AI. He added that the US should consider updating its existing standards for the technology.

Will the New Office Address AI Risks?

Altman proposed that the new agency be placed in charge of licensing AI under specific conditions designed to uphold compliance with the regulations. He said the new office should be given the power to ensure that AI firms meet the required safety standards.

In his address, Altman explained that the proposed office should conduct independent audits of AI-related projects. He also reaffirmed his commitment to his current role when asked about overseeing the new agency.

At the event, questions about how, and by whom, the proposed AI agency should be overseen sparked heated discussion among the participants. Lawmakers indicated they would retain oversight of the new office under certain conditions.

Meanwhile, Professor Gary Marcus of New York University proposed that AI be regulated the way the Food and Drug Administration (FDA) regulates the medical field. Professor Marcus suggested that regulators conduct a safety review of new AI technology following a procedure similar to the FDA's, which would require review and approval before launch.


Review of Sam Altman's Proposal

Professor Marcus argued that any project intended to serve more than 100 million individuals should be subject to regulatory oversight before launch. He also outlined the roles and responsibilities he envisioned for the new office.

He stated that regulators in the new office should closely observe recent trends in the AI sector. The office should review projects both before launch and afterwards to identify necessary changes.

Elsewhere, IBM's Chief Privacy and Trust Officer, Christina Montgomery, argued that AI requires transparency and explainability. Montgomery stated that regulators must examine the risks associated with AI.

She also asked regulators to assess the effects of AI and examine how companies could be more transparent in executing their duties. Montgomery urged regulators to support companies in training AI technology before launch.

In her statement, Montgomery argued that creating an independent agency would slow down the regulation needed to address AI risks. She noted that regulatory bodies with authority over AI already exist and operate under established mandates.

Afterwards, Montgomery lauded the efforts of regulatory agencies to monitor AI developments. However, she mentioned the challenges facing AI regulators, including a lack of resources and of full control over the advanced technology.

Beyond the efforts made by US authorities to address the risks associated with AI developments, regulators around the globe are seeking to develop a clear understanding of AI technology. In 2022, the European Union advanced an AI bill to ensure that AI-related projects are tested before launch.


Global AI Regulation

The EU adopted new AI regulations to ensure that the technology meets ethical requirements and supports the well-being of the European community. In Italy, regulators imposed restrictive measures on OpenAI's ChatGPT due to security concerns. Additionally, other countries, including North Korea, Iran, Syria, and China, imposed restrictive measures banning ChatGPT within their borders.

Reportedly, the Italian regulatory move forced the OpenAI team to make several changes to ChatGPT's features. On the privacy side, OpenAI's technical team integrated additional settings enabling users to turn off chat history on the platform.

Initially, OpenAI used existing customers' data to train the chatbot. The changes allowed users to deny the chatbot access to sensitive data.



Kimberly Crain

Kimberly Crain is a seasoned crypto trader and writer, offering valuable insights into the digital asset market. With expertise in trading strategies and a passion for blockchain technology, her concise and informative articles empower readers to navigate the evolving world of cryptocurrencies.
