What is GPT-4 and how is it different from ChatGPT?
- March 20, 2023
- Posted by: OptimizeIAS Team
- Category: DPN Topics
Subject: Science and Technology
Section: AWARENESS OF IT AND COMPUTERS
Context:
- AI powerhouse OpenAI announced GPT-4, the next big update to the technology that powers ChatGPT and Microsoft Bing, the search engine using the tech.
Details:
- GPT-4 is reportedly bigger, faster, and more accurate than the model behind ChatGPT, so much so that it even clears several top examinations with flying colours, such as the Uniform Bar Exam taken by those wanting to practise as lawyers in the US.
- Where GPT-3.5-powered ChatGPT only accepted text inputs, GPT-4 can also use images to generate captions and analyses. But that’s only the tip of the iceberg.
About GPT-4:
- Generative Pre-trained Transformer 4, or GPT-4, is a large multimodal model created by OpenAI.
- Multimodal models can encompass more than just text – GPT-4 also accepts images as input.
- GPT-3 and GPT-3.5 only operated in one modality, text, meaning users could only ask questions by typing them out.
- OpenAI says that GPT-4 also “exhibits human-level performance on various professional and academic benchmarks.”
- The language model can pass a simulated bar exam with a score around the top 10 per cent of test takers and can solve difficult problems with greater accuracy.
- For example, it can “answer tax-related questions, schedule a meeting among three busy people, or learn a user’s creative writing style.”
- GPT-4 is also capable of handling over 25,000 words of text, opening up a greater number of use cases that now also include long-form content creation, document search and analysis, and extended conversations.
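The multimodal input described above can be illustrated by the shape of a chat request that pairs text with an image. This is only a minimal sketch of the message payload in the content-parts style used by multimodal chat APIs; the prompt and image URL are placeholders, and whether a given account or model accepts image input is an assumption:

```python
def build_multimodal_message(prompt: str, image_url: str) -> dict:
    """Build one chat message combining a text prompt and an image,
    in the content-parts format used by multimodal chat APIs."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# Example payload: ask the model to interpret a picture (placeholder URL).
msg = build_multimodal_message(
    "What is unusual about this picture?",
    "https://example.com/photo.png",
)
```

A text-only GPT-3.5 request, by contrast, would carry a plain string as `content` rather than a list of typed parts.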
How is it different from GPT-3?
- The most noticeable change to GPT-4 is that it’s multimodal, allowing it to understand more than one modality of information.
- GPT-3 and ChatGPT’s GPT-3.5 were limited to textual input and output, meaning they could only read and write.
- However, GPT-4 can be fed images and asked to output information accordingly.
- It can analyse the contents of an image, unlike Google Lens, which can only retrieve information related to an image.
- One of the biggest drawbacks of generative models is that they mix up facts and provide misinformation. OpenAI claims that GPT-4 has been trained to reduce those mistakes.
- GPT-4 can process a lot more information at a time:
- ChatGPT’s GPT-3.5 model could handle 4,096 tokens, or around 3,000 words, while GPT-4 raises those limits to 32,768 tokens, or around 25,000 words.
- GPT-4 has improved factual accuracy: OpenAI says it is up to 40% more likely than GPT-3.5 to produce factual responses.
- GPT-4 is better at understanding languages that are not English.
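The token limits above translate to word counts only approximately. A common rule of thumb for English text is roughly 0.75 words per token; the sketch below uses that heuristic (an assumption, not an OpenAI specification) to compare the two context windows:

```python
# Rough token-to-word conversion for English text.
# The 0.75 ratio is a common heuristic, not an exact figure.
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Estimate the word capacity of a model's context window."""
    return int(tokens * WORDS_PER_TOKEN)

# GPT-3.5's 4,096-token window vs GPT-4's 32,768-token window:
for model, tokens in [("GPT-3.5", 4_096), ("GPT-4", 32_768)]:
    print(f"{model}: {tokens:,} tokens ≈ {approx_words(tokens):,} words")
```

Under this heuristic the 32,768-token window works out to roughly 25,000 words, which matches the long-form use cases mentioned earlier.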
Applications of GPT-4:
- GPT-4 has already been integrated into products like Duolingo, Stripe, and Khan Academy for varying purposes.
- Microsoft has confirmed that the new Bing search experience now runs on GPT-4.