
Former OpenAI Chief Scientist Ilya Sutskever Launches New AI Firm 

Former OpenAI executive Ilya Sutskever has launched a new artificial intelligence firm. The company will address the challenges facing AI technology, such as safety and security. 

Despite the growing popularity of AI, regulators have raised red flags over the technology's privacy and security implications.

Former OpenAI Executive Launches New Firm

Sutskever founded Safe Superintelligence Inc. (SSI) in collaboration with former Apple AI executive Daniel Gross and his former colleague at OpenAI, Daniel Levy.

The trio agreed to form a company focused on enhancing AI safety and boosting efficiency. SSI announced the new firm's official launch in a post on X.

The SSI team vowed to bring revolutionary changes to the AI sector, taking a distinctive approach built around one goal and one product. SSI explained that the company's business model centers on safety, security, and progress.

Reflecting on the regulatory concerns facing AI, SSI aims to advance the artificial intelligence sector responsibly and will pursue research-driven initiatives to close the gaps in the field.

Unlike its top rival, OpenAI, SSI will not be profit-oriented; instead, the new firm will focus on research. In an interview with Bloomberg, Sutskever explained that SSI models its safety standards on nuclear safety, with the aim of avoiding any negative impact in the future.

Safe Superintelligence (SSI) Objectives

The executive explained that SSI's mission is to achieve superalignment of AI models. Sutskever first pitched the idea at OpenAI, aiming to improve the safety of AI tools.


At his new company, Sutskever will leverage human talent to advance the development of AI tools. The SSI team plans to invest in building a general-purpose AI system.

The new company also intends to push AI capabilities beyond large language models (LLMs), building safe superintelligence tools that benefit humanity.

Guided by core values such as liberty and democracy, Sutskever and his team seek to enhance AI safety. Sutskever told Bloomberg that the new company will ensure its products do not harm humanity.

The executive stated that attaining SSI's goal will require a forward-thinking team, and the company is already recruiting qualified candidates for its new openings. According to Sutskever, SSI will have headquarters in the United States and Israel.

He confirmed that SSI AI models will seek to improve people’s quality of life by providing innovative solutions to everyday activities. 

Ilya Sutskever Seeks to Enhance AI Safety

Additionally, Sutskever believes that SSI will play a significant role in addressing the sector's technical challenges. The new AI firm was launched shortly after Sutskever stepped down from his role at OpenAI.

The decision to leave the tech giant stemmed from what Sutskever saw as OpenAI drifting from its humanitarian roots toward becoming a profit-driven company. Widely regarded as a linchpin of OpenAI's success, Sutskever understands the highs and lows of the AI sector.

He plans to devise ways to address shortcomings common to AI firms, such as management problems and product-cycle pressures. In his previous role, Sutskever led OpenAI's superalignment team, which worked to strengthen AI safety tools.


He was also a member of the OpenAI board, which made critical business decisions. Since his departure, the company has reached significant milestones in securing its AI models.

A few months ago, OpenAI appointed a new committee to oversee the safety of its AI tools, tasking it with researching effective approaches to strengthening them.

The committee is expected to deliver recommendations for improving OpenAI's safety practices within three months. It comprises professionals with extensive experience in AI, machine learning, and deep learning, among other fields.

The committee will support OpenAI in returning to its core mission of helping humanity. It was formed after Tesla CEO Elon Musk accused OpenAI of abandoning its humanitarian calling to become a profit-oriented organization.


Kimberly Crain

Kimberly Crain is a seasoned crypto trader and writer, offering valuable insights into the digital asset market. With expertise in trading strategies and a passion for blockchain technology, her concise and informative articles empower readers to navigate the evolving world of cryptocurrencies.
