
Former OpenAI Chief Scientist Ilya Sutskever Launches New AI Firm 

Former OpenAI executive Ilya Sutskever has launched a new artificial intelligence firm. The company will address the challenges facing AI technology, such as safety and security. 

Despite the growing popularity of AI, regulators have raised red flags regarding its privacy and security.

Former OpenAI Executive Launches New Firm

Sutskever founded Safe Superintelligence Inc. (SSI) in collaboration with former Apple AI executive Daniel Gross and his former colleague at OpenAI, Daniel Levy.

The trio agreed to form a company focused on enhancing AI safety and boosting efficiency. SSI announced the new firm's official launch in a post to the X community.

The SSI team vowed to bring revolutionary changes to the AI sector through a singular approach: one goal and one product. SSI explained that its business model will center on security, safety, and progress.

Reflecting on the regulatory concerns facing AI, SSI seeks to advance the artificial intelligence sector responsibly, pursuing research-based initiatives to address gaps in the field.

Unlike its top rival, OpenAI, SSI will not be profit-oriented; instead, the new firm will focus on research. In an interview with Bloomberg, Sutskever explained that SSI's standard for safety is modeled on nuclear safety, with the aim of ensuring its technology causes no harm in the future.

Objectives of Safe Superintelligence (SSI)

The executive explained that SSI's mission is to foster superalignment of AI models, a goal Sutskever first pitched at OpenAI with the aim of improving the safety of AI tools.


At his new company, Sutskever will recruit talent to advance the development of AI tools. The SSI team plans to invest in building a general-purpose AI system.

The new company will also push AI capabilities beyond large language models (LLMs), with the intention of building safe superintelligence tools that benefit humanity.

Guided by core values such as liberty and democracy, Sutskever and the team seek to enhance AI safety. Sutskever told Bloomberg that the new company will ensure its products do not harm humanity.

The executive stated that to attain SSI's goal, the company will hire a forward-thinking team and is currently recruiting qualified individuals to fill new openings. According to Sutskever, SSI will have offices in the United States and Israel.

He confirmed that SSI's AI models will seek to improve people's quality of life by providing innovative solutions to everyday tasks.

Ilya Sutskever Seeks to Enhance AI Safety

Additionally, Sutskever believes SSI will play a significant role in addressing the sector's technical challenges. The new AI firm launched months after Sutskever stepped down from his role at OpenAI.

The decision to leave reportedly stemmed from OpenAI's drift away from its humanitarian roots toward becoming a profit-chasing company. Sutskever, widely profiled as a linchpin of OpenAI's success, understands the highs and lows of the AI sector.

He plans to devise ways to address shortcomings common among AI firms, such as management issues and product cycles. In his previous role, Sutskever led OpenAI's superalignment team, which worked to strengthen AI safety tools.


He was also a member of the OpenAI board, which made critical business decisions. Since his departure, the tech company has reached significant milestones in boosting the security of its AI models.

A few months ago, OpenAI appointed a new committee to oversee the safety of its AI tools, tasking it with researching effective approaches to strengthening them.

The committee is expected to deliver recommendations for improving OpenAI's safety practices within three months. It comprises professionals with extensive experience in AI, machine learning, and deep learning, among other fields.

The committee will support OpenAI in returning to its core mission of helping humanity. It was formed after Tesla boss Elon Musk accused OpenAI of abandoning its humanitarian calling to become a profit-oriented organization.




Kimberly Crain

