Why Bing Chatbot’s Mistakes May Not Be OpenAI’s Fault

If you’ve tried Microsoft’s Bing chatbot, you may have encountered some questionable responses. While OpenAI’s powerful GPT-4 model is available in the chatbot, it is not always the one answering. For cost reasons, Microsoft has also integrated its own Turing language models into the system, which may explain why some responses are unreliable.

- Microsoft’s Bing chatbot uses its own Turing language models alongside OpenAI’s GPT-4 model, leading to some questionable responses.
- Microsoft appears to be saving on computational costs by routing queries to its Turing models, which may compromise the quality of the chatbot’s responses. OpenAI’s ChatGPT, which includes GPT-4, is estimated to cost $700,000 per day, or 36 cents per request.
- Bing’s GPT-4 integration provides a glimpse into a possible future of search that surpasses the limitations of native chatbots.
Jordi Ribas, head of search and AI at Microsoft, revealed that the Bing chatbot runs on an AI system called Prometheus. Prometheus routes incoming queries: when a question is classified as “simple,” Microsoft’s Turing language models handle it.
Turing models are also responsible for Bing’s search results, ranking, autocomplete, and the evaluation of ad relevance, among other things.
According to Peter Salin, founder of AI startup Silo AI, the chatbot’s initial responses are generally given by a Turing model.
However, follow-up queries would yield better answers because they are processed by GPT-4. Microsoft appears to be trying to save on computational costs, which makes sense given how expensive GPT-4 is to run.
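Salin’s description amounts to a simple cost-based routing policy: cheap model first, expensive model on escalation. The sketch below illustrates that idea only; the model names, the word-count threshold, and the `classify_complexity` helper are invented for illustration and are not Microsoft’s actual implementation.

```python
# Hypothetical sketch of cost-based model routing as described in the
# article: "simple" first-turn queries go to a cheaper in-house model,
# while follow-ups escalate to the more capable (and expensive) model.

def classify_complexity(query: str, turn: int) -> str:
    """Toy classifier: treat short first-turn queries as 'simple'.
    A production system would use a learned classifier instead."""
    if turn == 0 and len(query.split()) <= 6:
        return "simple"
    return "complex"

def route(query: str, turn: int) -> str:
    """Return which model should answer the query."""
    if classify_complexity(query, turn) == "simple":
        return "turing"  # cheaper in-house model
    return "gpt-4"       # more capable, more expensive

# A short first query stays on the cheap model; a follow-up escalates.
print(route("weather in Berlin", turn=0))  # turing
print(route("weather in Berlin", turn=1))  # gpt-4
```

This matches the behavior Salin describes: the first answer comes from a Turing model, and continued conversation is handed to GPT-4.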
The Cost of Running GPT-4
OpenAI has not yet released revenue and expense figures for ChatGPT. However, Dylan Patel, an analyst at research firm SemiAnalysis, estimates that ChatGPT currently costs OpenAI about $700,000 per day, or 36 cents per request, which works out to roughly $21 million per month.
OpenAI earns revenue through its paid API (GPT-4 API access is particularly expensive) and through ChatGPT Plus, a $20-per-month subscription that is required to use GPT-4 in ChatGPT.
Whether OpenAI makes or loses money from this venture is currently unknown.
Based on projections by SimilarWeb, ChatGPT had about 100 million active users in January 2023.
If about ten percent of these users had a paid account, subscription revenue of around $200 million per month would more than cover the estimated computing costs. However, this remains a shaky back-of-envelope calculation.
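The figures quoted above can be sanity-checked with quick arithmetic. The daily cost and user count are the SemiAnalysis and SimilarWeb estimates; the ten-percent paying share is the article’s own assumption, not a reported number.

```python
# Back-of-envelope check of the article's figures.
daily_cost = 700_000             # SemiAnalysis estimate, USD per day
monthly_cost = daily_cost * 30   # ~21 million USD per month

cost_per_request = 0.36          # USD per request (same estimate)
requests_per_day = daily_cost / cost_per_request  # implied request volume

users = 100_000_000              # SimilarWeb, January 2023
paying_share = 0.10              # the article's assumption
monthly_revenue = users * paying_share * 20  # $20/month ChatGPT Plus

print(f"monthly cost:    ${monthly_cost:,.0f}")     # $21,000,000
print(f"requests/day:    {requests_per_day:,.0f}")  # 1,944,444
print(f"monthly revenue: ${monthly_revenue:,.0f}")
```

At these assumed figures, subscription revenue would comfortably exceed the estimated compute bill, which is why the calculation is so sensitive to the guessed share of paying users.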
The High Cost of Computing
The high computing costs are mainly due to the use of power-hungry Nvidia graphics cards, which Microsoft plans to replace with its own AI chip, according to The Information.
Meanwhile, the company is trying to minimize the use of the expensive GPT-4.
Ribas dismisses the idea that the Bing chatbot’s initial responses are generally of lower quality. He argues that inaccurate responses are more likely due to a lack of context in users’ initial queries.
Bing Chatbot Provides a Glimpse of the Future of Search
Microsoft’s AI chatbot has been integrated into Bing since early February 2023, providing a glimpse into a possible future of search.
Instead of generating a list of links, the AI answers questions in a compact summary. In the best-case scenario, the AI provides links to relevant sources to validate the chatbot’s statements externally.
This is where Bing’s GPT-4 integration surpasses the native ChatGPT, which does not cite sources for its answers.
In conclusion, while it’s easy to blame OpenAI for the Bing chatbot’s mistakes, Microsoft’s decision to route queries to its own Turing models appears to be the root of the problem.
Understandable as a cost-saving measure, it may nonetheless be compromising the quality of the chatbot’s responses.
However, the integration of GPT-4 into Bing provides a glimpse into a possible future of search that surpasses the limitations of native chatbots.