OpenAI Develops CriticGPT Model Capable Of Spotting GPT-4 Code Generation Errors


OpenAI on Thursday published a study about a new artificial intelligence (AI) model that can catch GPT-4's mistakes in code generation. The AI firm stated that the new chatbot was trained using the reinforcement learning from human feedback (RLHF) framework and is powered by one of the GPT-4 models. The under-development chatbot is designed to improve the quality of the AI-generated code that users get from large language models. At present, the model is not available to users or testers. OpenAI also highlighted several limitations of the model.

The AI firm shared details of the new CriticGPT model in a blog post, stating that it is based on GPT-4 and designed to identify errors in code generated by ChatGPT. “We found that when people get help from CriticGPT to review ChatGPT code they outperform those without help 60 percent of the time,” the company claimed. The model was developed using the RLHF framework, and the findings have been published in a paper.

RLHF is a machine learning technique that combines model-generated outputs with human feedback to train AI systems. In such a setup, human evaluators provide feedback on the AI's performance, which is then used to adjust and improve the model's behaviour. The humans who provide this feedback are called AI trainers.
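To make the idea concrete, here is a minimal, self-contained toy sketch of that feedback loop in Python. The candidate outputs, the simulated trainer, and the weight-update rule are all illustrative assumptions and are not drawn from OpenAI's actual training code.

```python
# Toy illustration of the RLHF idea: the model proposes outputs, a human-style
# preference signal scores them, and the model's behaviour is nudged toward the
# preferred output. Purely illustrative; real RLHF pipelines are far more involved.
import random

# Candidate "behaviours" the toy model can exhibit for a prompt.
CANDIDATES = ["buggy answer", "verbose answer", "correct concise answer"]

# Simulated AI trainer: prefers the correct, concise behaviour.
def trainer_feedback(output: str) -> float:
    return 1.0 if output == "correct concise answer" else 0.0

def train(steps: int = 500, lr: float = 0.1) -> dict:
    # The "policy" is just a preference weight per candidate output.
    weights = {c: 1.0 for c in CANDIDATES}
    for _ in range(steps):
        # Sample an output in proportion to the current weights.
        output = random.choices(CANDIDATES, weights=[weights[c] for c in CANDIDATES])[0]
        reward = trainer_feedback(output)          # human feedback signal
        weights[output] += lr * (reward - 0.5)     # reinforce or discourage
        weights[output] = max(weights[output], 0.01)
    return weights

if __name__ == "__main__":
    # The preferred behaviour ends up with the largest weight.
    print(train())
```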

CriticGPT was trained on a large volume of code samples that contained errors. The AI model was tasked with finding these mistakes and critiquing the code. To build this dataset, AI trainers were asked to insert mistakes into the code on top of the naturally occurring ones, and then write example feedback as if they had caught those errors.
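As a rough illustration of that data-collection step, the sketch below shows what one such training record might look like, assuming a simple dataclass holding the original code, the tampered version, and the trainer-written critique; the field names and the example bug are hypothetical.

```python
# Sketch of the data-generation step: an AI trainer takes a piece of
# model-written code, deliberately inserts a bug on top of any naturally
# occurring ones, and writes the critique a good reviewer should produce.
from dataclasses import dataclass
from typing import List

@dataclass
class TrainingExample:
    original_code: str
    tampered_code: str
    inserted_bugs: List[str]    # bugs the trainer added deliberately
    reference_critique: str     # the feedback the trainer wrote

original = "def average(xs):\n    return sum(xs) / len(xs)\n"

# Trainer swaps len(xs) for a hard-coded constant, introducing a subtle bug.
tampered = "def average(xs):\n    return sum(xs) / 2\n"

example = TrainingExample(
    original_code=original,
    tampered_code=tampered,
    inserted_bugs=["divides by 2 instead of len(xs), wrong for other list sizes"],
    reference_critique=(
        "The function divides by a hard-coded 2 rather than len(xs), "
        "so it only returns the correct mean for two-element lists."
    ),
)
```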

Once CriticGPT produced multiple variations of its critique, the trainers were asked to check whether the errors they had inserted were caught by the AI alongside the naturally occurring ones. In its research, OpenAI found that trainers preferred CriticGPT's critiques over ChatGPT's 63 percent of the time on naturally occurring errors.
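The bookkeeping for that check might look something like the sketch below, which counts a critique as catching an inserted bug if it mentions keywords the trainer associated with that bug; this keyword-matching proxy is an assumption for illustration, not OpenAI's evaluation method.

```python
# Rough sketch of the evaluation step: for each tampered example, check
# whether the model's critique mentions the inserted bug, then tally a
# simple catch rate across examples.
from typing import List, Tuple

def bug_caught(critique: str, inserted_bug_keywords: List[str]) -> bool:
    """Crude proxy: the critique counts as catching the bug if it mentions
    any keyword the trainer associated with the inserted bug."""
    lowered = critique.lower()
    return any(keyword.lower() in lowered for keyword in inserted_bug_keywords)

def catch_rate(results: List[Tuple[str, List[str]]]) -> float:
    """results = [(model_critique, inserted_bug_keywords), ...]"""
    caught = sum(bug_caught(critique, keywords) for critique, keywords in results)
    return caught / len(results) if results else 0.0

# Example usage with two hypothetical critiques of tampered code.
results = [
    ("The code divides by a hard-coded 2 instead of len(xs).", ["len(xs)", "hard-coded"]),
    ("Looks fine to me.", ["off-by-one"]),
]
print(f"catch rate: {catch_rate(results):.0%}")  # -> catch rate: 50%
```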

However, the model still has certain limitations. CriticGPT was trained on relatively short snippets of ChatGPT-generated code and is yet to be trained on longer, more complex tasks. The AI firm also found that the new chatbot still hallucinates (generates factually incorrect responses). Further, the model has not been tested in scenarios where multiple errors are dispersed across the code.

This model is unlikely to be made public, as it is designed to help OpenAI better understand training techniques that can produce higher-quality outputs. If CriticGPT does eventually become public, it is expected to be integrated within ChatGPT.

