OpenAI Launches o1-pro - Its Priciest AI Model Yet

Read Time: 1 minute

OpenAI has launched a new AI model, o1-pro, a more powerful and advanced version of its o1 reasoning model. To date, o1-pro is the most expensive model on the AI market, and OpenAI believes the price matches its improved performance.

o1-pro, OpenAI's Newly Launched AI Reasoning Model

OpenAI has always been active in bringing new AI models to market. This time, it has released o1-pro, a more powerful version of its previously launched o1 reasoning model, in its developer API.

The newly launched model is expected to meet the demand for more reliable responses and better answers to the hardest questions. According to OpenAI, o1-pro uses more computing power to provide "consistently better responses."

Availability and Pricing

With the launch of the new model, all eyes are on its availability and pricing. OpenAI's o1-pro is its most expensive AI model yet, and it is currently available only to select developers who have spent at least $5 on OpenAI API services. The model's charges are listed below:

$150 per million tokens (~750,000 words) fed into the model

$600 per million tokens generated by the model

These charges are twice the input price of OpenAI's GPT-4.5 and ten times the price of the regular o1. OpenAI maintains that o1-pro's premium pricing is in line with its improved performance, and expects developers will be willing to pay a handsome amount for it.
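As a quick illustration of the listed rates, the back-of-the-envelope sketch below estimates the dollar cost of a single request. The function name and example token counts are hypothetical; only the per-million-token rates come from the article.

```python
# Listed o1-pro API rates (USD per 1 million tokens)
INPUT_RATE_PER_M = 150.00   # tokens fed into the model
OUTPUT_RATE_PER_M = 600.00  # tokens generated by the model

def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at o1-pro's listed rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 10,000-token prompt with a 2,000-token response
print(round(o1_pro_cost(10_000, 2_000), 2))  # → 2.7
```

At these rates, even a modest request costs a few dollars, which shows why the pricing has drawn so much attention.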

Early Impressions

As soon as o1-pro launched, users were eager to try the new model. A version of o1-pro has been available to ChatGPT subscribers in OpenAI's AI-powered chatbot platform, ChatGPT, since December, and early users have reported mixed results. It is fair to say that early impressions have not been especially positive.

Areas of Improvement for o1-pro

o1-pro is OpenAI's most powerful new AI reasoning model, but it has struggled in some areas. The model did not perform well on Sudoku puzzles and was tripped up by simple optical-illusion jokes.

Benchmark Performance

OpenAI's internal benchmarks from late last year showed that o1-pro performed only slightly better than the standard o1. On coding and math problems, its results were a bit better and more reliable.