
Stanford Researchers Caution That Artificial Intelligence Model Transparency Is Worsening

Although foundation models promise a new age in artificial intelligence, researchers at Stanford University say that transparency around them is declining.

A team of Stanford University researchers claims that big artificial intelligence models such as Llama 2, Claude, ChatGPT, and Bard are becoming less transparent. The new study comes from the institution’s Center for Research on Foundation Models (CRFM).

The center is part of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and studies and builds foundation models.

Researchers Warn About AI Risks

An official press release from Rishi Bommasani, Society Lead at CRFM, warned that transparency is declining among firms in the foundation model space. That opacity creates risks for legislators, companies, and consumers.

Although the firms behind the most widely used large language models (LLMs) say they intend to do good, they hold differing views on transparency and openness. OpenAI, for instance, has embraced a lack of transparency as a precaution. According to an official blog post, the firm believes its original thinking about openness was mistaken.

It has shifted from believing it should release everything to working out how to safely share access to its systems and their benefits. Research from MIT published in 2020 suggests this view has held sway at the company for some time.

The MIT researchers wrote that OpenAI is preoccupied with maintaining secrecy, safeguarding its image, and retaining its workers’ loyalty. Anthropic’s stated core views on artificial intelligence (AI) safety, meanwhile, point to the startup’s commitment to developing ‘interpretable and transparent systems.’


It also highlights its focus on developing ‘procedural and transparency interventions to promote verifiable adherence to commitments.’ Google, for its part, introduced a Transparency Center in August this year to improve the disclosure of its guidelines and address the problem.

This raises the question of why users should care about transparency in artificial intelligence.

AI Lacks Transparency

Stanford’s paper argues that reduced transparency makes it harder for other organizations to know whether they can safely build applications that depend on commercial foundation models.

It also makes it harder for academics to rely on commercial foundation models for research. For consumers, it becomes difficult to understand model limitations or seek compensation for any harm caused. And for policymakers, it becomes hard to craft meaningful policies to rein in this powerful technology.

To address the issue, Bommasani and a team from Stanford, MIT, and Princeton created the Foundation Model Transparency Index (FMTI). The index assesses a broad set of topics that together give a complete picture of how transparent firms are about their artificial intelligence models.

The ranking examines elements such as how a firm builds a foundation model, the data at its disposal, how the model works, and how it is used downstream. The findings were less than impressive.

On a scale of 0 to 100, the highest scores ranged from 47 to 54, with Meta’s Llama 2 leading the pack. OpenAI scored 47%, Anthropic 39%, and Google 41%. The divide between open- and closed-source models had a clear impact on the ratings.
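For readers curious how a 0-to-100 score like these can be produced, here is a minimal illustrative sketch, not drawn from the FMTI paper itself: it assumes each developer is graded against a checklist of yes/no transparency indicators (the indicator names below are hypothetical) and that the aggregate score is simply the share of indicators satisfied.

# Illustrative sketch only: this checklist-based scoring is an assumption
# for explanation, not the index's published methodology or data.

def transparency_score(indicators: dict[str, bool]) -> float:
    """Return a 0-100 score: the percentage of indicators satisfied."""
    if not indicators:
        return 0.0
    return 100.0 * sum(indicators.values()) / len(indicators)

# Hypothetical checklist for one developer.
example = {
    "training_data_sources_disclosed": False,
    "model_architecture_documented": True,
    "compute_usage_reported": False,
    "downstream_use_policy_published": True,
}

print(f"Aggregate transparency score: {transparency_score(example):.0f}/100")

Under that assumption, a developer satisfying roughly half of the checklist would land near the 47-54 range reported for the top scorers.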

Comparing Open and Closed-Source AI Models

In the full research paper, the Stanford University team noted that every open developer is at least almost as transparent, on the aggregate score, as the highest-scoring closed developer.


In plainer terms, they said, even the worst-scoring open-source artificial intelligence model is more transparent than the best closed-source models.

Transparency in artificial intelligence models is not merely a matter of academic interest. Politicians across the globe have also spoken about the need for appropriately transparent AI development.

According to Bommasani, legislators in the European Union, as well as in China, the G7, the United States, Canada, the United Kingdom, and several other governments, see transparency as a crucial priority.

The broader implications of the findings are clear. As artificial intelligence models are incorporated into more sectors, transparency becomes crucial for real-world applications, ethical considerations, and honesty.



Stephen Causby

Stephen Causby is an experienced crypto journalist who writes for Tokenhell. He is passionate about covering crypto news, blockchain, DeFi, and NFTs.
