December 23, 2024

As AI Development Advances Slowly, OpenAI Looks for New Approaches

Key Takeaways
  • OpenAI's Orion model reportedly offers only modest gains over earlier iterations.
  • A shortage of high-quality training data is reportedly slowing GPT development.
  • OpenAI has reportedly formed a foundation team and is using synthetic data to improve its models.

According to recent reports, OpenAI's upcoming Orion model has run into development hurdles and delivers only slight advances over earlier iterations.

The Information reports that the slowdown stems in part from the company's difficulty finding high-quality data to train on.

According to testing staff, Orion reportedly surpasses existing versions, but the improvement is small compared with the jump from GPT-3 to GPT-4, and some researchers speculate it may be more expensive to run. In some domains, such as coding, Orion may be no better than earlier models, suggesting the rate of improvement is slowing.

Delays in OpenAI Strategy and GPT Progress

One factor contributing to the slowdown in GPT development is the shrinking supply of high-quality text and data available for pretraining, which helps LLMs understand the connections between ideas so they can complete tasks like writing content and debugging code. According to OpenAI researchers and staff, developers have largely exhausted the publicly accessible text from books, websites, and other pretraining sources.

According to reports, OpenAI has established a foundation team to address this issue, employing techniques such as training on synthetic data and post-training improvements to strengthen models with less additional input.

Sam Altman dismissed rumors of a December release for ChatGPT-5 as "fake news," but he did confirm that OpenAI's AI models would continue to advance in the near term.
