
OpenAI’s New Customized AI Offerings Get Mixed Reactions from Developers

OpenAI recently announced an update that adds a notable capability to GPT-3.5 Turbo: fine-tuning. The feature lets AI developers improve the model's performance on specific tasks by training it on specialized data.

The announcement has sparked widespread reactions from developers, ranging from anticipation to constructive criticism.

Addressing The GPT-3.5 Turbo Fine-Tuning Update

According to OpenAI, fine-tuning will allow developers to tailor GPT-3.5 Turbo to specific tasks. The platform cited examples where a developer could fine-tune GPT-3.5 Turbo to generate personalized code or summarize legal documents using insights from a dataset drawn from the client's business activities.
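As a rough illustration of what such tailoring involves, fine-tuning of this kind is typically driven by a small JSONL file of example conversations in the chat format. The sketch below is hypothetical (the example content and file name are invented for illustration) and only assembles the dataset; the commented lines show how a job might then be started with the OpenAI Python SDK:

```python
import json

# Hypothetical training examples in the chat format used for
# GPT-3.5 Turbo fine-tuning: each record is one example conversation.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise legal summarizer."},
            {"role": "user", "content": "Summarize: The lessee shall remit payment no later than the fifth day of each month."},
            {"role": "assistant", "content": "The tenant must pay rent by the 5th of every month."},
        ]
    },
]

# Serialize to JSONL (one JSON object per line), the upload format
# for fine-tuning training files.
jsonl = "\n".join(json.dumps(e) for e in examples)

# In a real workflow the JSONL would be written to a file, uploaded,
# and a fine-tuning job started, e.g. (not run here):
#   openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   openai.FineTuningJob.create(training_file=file_id, model="gpt-3.5-turbo")
```

The quality and relevance of these examples is what determines how well the fine-tuned model matches the client's tasks.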


Nevertheless, Joshua Segeren, posting on X (formerly Twitter), offered a more measured view. Segeren suggested that adding fine-tuning to GPT-3.5 Turbo was a sensible move while highlighting its limitations: it is not a complete solution.

He underscored the effectiveness of alternative approaches such as improving prompts, integrating vector databases for semantic search, or switching to the more capable GPT-4 model, which frequently outperforms custom fine-tuning.

In addition, Segeren suggested weighing factors such as initial setup complexity and the ongoing maintenance costs of fine-tuned models.


What To Expect

Usage of base GPT-3.5 Turbo models starts at $0.0004 per 1,000 tokens, the units that serve as the building blocks of language processing. For fine-tuned versions, however, usage is priced at $0.012 per 1,000 input tokens and $0.016 per 1,000 output tokens.
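Using the per-token rates quoted above, the usage cost of a fine-tuned request can be estimated with simple arithmetic. This is a minimal sketch; the rates are taken from this article rather than fetched from OpenAI, and exclude the one-time training fee:

```python
# Per-1,000-token usage rates for fine-tuned GPT-3.5 Turbo, as quoted above.
INPUT_RATE = 0.012   # USD per 1,000 input tokens
OUTPUT_RATE = 0.016  # USD per 1,000 output tokens

def fine_tuned_usage_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD usage cost of one fine-tuned request/response pair."""
    return (input_tokens / 1000) * INPUT_RATE + (output_tokens / 1000) * OUTPUT_RATE

# Example: a request with 10,000 input tokens and 5,000 output tokens
# costs 10 * $0.012 + 5 * $0.016 = $0.12 + $0.08 = $0.20.
cost = fine_tuned_usage_cost(10_000, 5_000)
```

At these rates, fine-tuned usage is substantially more expensive per token than the base model, which is part of the trade-off Segeren raised.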

It is important to note that there is also a one-time training fee proportional to the volume of data involved. Over time, the value of this functionality becomes clear to businesses and developers seeking to create customized user experiences.

For example, enterprises could fine-tune the model to align with their brand's distinct voice, ensuring that a chatbot built on it remains consistent with the brand's core identity.

Hence, careful evaluation of the training data is critical to ensuring the responsible use of the fine-tuning feature. This screening relies on the moderation API and a GPT-4-powered moderation system.
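A hedged sketch of what such screening might look like on the developer side: filtering out training examples that a moderation check flags before upload. The `is_flagged` function here is a hypothetical stand-in; a real system would call a moderation endpoint (such as OpenAI's moderation API) rather than match a word list:

```python
def is_flagged(text: str) -> bool:
    """Hypothetical stand-in for a moderation check. A real implementation
    would query a moderation endpoint instead of matching a local list."""
    banned_phrases = {"credit card dump", "exploit instructions"}  # illustrative only
    lowered = text.lower()
    return any(phrase in lowered for phrase in banned_phrases)

def screen_examples(examples: list[str]) -> list[str]:
    """Keep only the training examples that pass the moderation check."""
    return [ex for ex in examples if not is_flagged(ex)]

clean = screen_examples([
    "Summarize this contract clause in plain English.",
    "Here is a credit card dump for sale.",
])
# Only the first, benign example survives screening.
```

OpenAI's own screening runs server-side, but pre-filtering locally in this spirit reduces the chance of a training file being rejected.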

OpenAI maintains that this dual-layer approach preserves the default model's built-in safety characteristics as fine-tuning unfolds. The system's main goal is to identify and eliminate potentially risky training data, ensuring that fine-tuned outputs adhere to OpenAI's established safety standards.


It is equally important to recognize that this approach gives OpenAI a degree of oversight over the data users feed into the models for fine-tuning.





Bradley Nelson

Bradley Nelson is a US-based cryptocurrency news writer for Tokenhell. He helps readers stay up to date with the latest trends and news from the blockchain and crypto world. Bradley has been a crypto enthusiast since 2018.
