
OpenAI Creates CriticGPT to Catch Errors in ChatGPT’s Outputs

by
Krishi Chowdhary
from Techreport (#6NW39)
  • OpenAI has created a new tool called CriticGPT, which is based on GPT-4 and will be used to catch errors in the code generated by ChatGPT.
  • Trainers already like the tool because, unlike other AI models, it doesn't hallucinate as often and its suggestions are mostly helpful.
  • The main limitation is that it cannot yet handle complex tasks and is not 100% fail-proof.


OpenAI has created a new tool called CriticGPT, which is based on GPT-4 and helps catch mistakes in ChatGPT's code output. It is designed specifically for AI trainers who use RLHF (Reinforcement Learning from Human Feedback) to update AI models.

Both ChatGPT and CriticGPT were trained with RLHF, but what makes the latter so much better at spotting mistakes is that it was trained on a larger number of inputs containing errors, which it had to critique.

Basically, AI trainers at OpenAI manually inserted mistakes into code written by ChatGPT and then fed it to CriticGPT, asking for a critique.

Multiple critiques of the same bug were then compared to see whether the tool could reliably detect the error, and in most cases the results were satisfactory.
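For illustration only, here is a rough sketch of what that "insert a bug, ask for critiques" workflow could look like in Python using the public OpenAI chat API. CriticGPT is an internal research tool and is not exposed through the API, so the model name "criticgpt-preview" and the sample snippets below are hypothetical.

```python
# Hypothetical sketch of the workflow described above: a trainer tampers with
# ChatGPT-written code, then collects several critiques of the same bug to see
# whether the critic model catches it. "criticgpt-preview" is not a real model
# name; it stands in for OpenAI's internal CriticGPT.
from openai import OpenAI

client = OpenAI()

# Code originally written by ChatGPT (correct version).
original_code = """
def average(values):
    return sum(values) / len(values)
"""

# The trainer deliberately inserts a subtle bug (hard-coded divisor).
tampered_code = """
def average(values):
    return sum(values) // 2
"""

def request_critique(code: str) -> str:
    """Ask the (hypothetical) critic model to point out bugs in the code."""
    response = client.chat.completions.create(
        model="criticgpt-preview",  # hypothetical model name, illustration only
        messages=[
            {"role": "system", "content": "You are a code reviewer. Point out any bugs."},
            {"role": "user", "content": f"Critique this code:\n{code}"},
        ],
    )
    return response.choices[0].message.content

# Collect several critiques of the same tampered snippet so they can be
# compared, as the article describes.
critiques = [request_critique(tampered_code) for _ in range(3)]
for i, critique in enumerate(critiques, start=1):
    print(f"Critique {i}:\n{critique}\n")
```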

The Need for CriticGPT

With time, AI models are becoming more and more advanced, which means it is getting harder to spot their mistakes. Plus, in some cases, these models are becoming more capable than the people training them, making it all the more difficult to make improvements.


CriticGPT addresses this issue. It compensates for the limitations of human trainers, making the improvement process much more refined. A trainer working with CriticGPT does a much better job than a trainer working alone.

Now, a lot of people might wonder why a whole new tool was needed when you can use ChatGPT itself to find errors in code. The answer is accuracy.

Sure, ChatGPT can do a similar job, but trainers preferred CriticGPT's critiques over ChatGPT's more than 63% of the time because they are less likely to contain hallucinations or unhelpful suggestions.

Limitations of CriticGPT

CriticGPT is a welcome addition to the AI training toolbox. However, there are a few limitations worth noting.

  • For starters, the tool itself is quite new and has only been trained on short answers. How it handles long and complex answers is yet to be known.
  • If the source material behind an answer contains errors, they will naturally seep into ChatGPT's response. CriticGPT has been trained to deal with a mistake in a single source, but if errors on a certain topic are widely spread across the internet, even CriticGPT will fail.
  • Also, not all suggestions made by the tool are correct. However, it has been noted that using CriticGPT has helped trainers catch more mistakes in model-written answers than they did without the help of any tool.
  • Lastly, CriticGPT is not 100% fail-proof. AI models can still make mistakes, whether through their own hallucinations or mistakes made by the trainers.

That being said, it's still a positive step. It's good to see companies like OpenAI taking responsibility for the quality and accuracy of the content their models churn out.

OpenAI has also promised to keep working on CriticGPT so that it can handle more complex problems and be scaled up.

