What Is GPT-4, OpenAI's Multimodal AI, and How Is It Different from ChatGPT?


Microsoft-backed start-up OpenAI shook the world when it launched the now widely popular ChatGPT in November last year. The company has now started rolling out a more powerful version of its artificial intelligence model, GPT-4, which succeeds GPT-3.5, the model that powered ChatGPT at launch.

In simple terms, GPT-4 is multimodal, which means it can generate content from both image and text prompts.

“GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities,” a statement on the OpenAI website read.

Main Difference Between GPT-4 and ChatGPT
GPT-4 can see and understand images, whereas ChatGPT is limited to text.

GPT-4 can take in images and process them to find relevant and accurate information. This allows it to describe patterns on a dress, explain how to use a piece of equipment at the gym, or translate a label into your preferred language, just from an image you upload.

While ChatGPT had several issues upon its arrival, OpenAI says GPT-4 has been trained to refuse many malicious prompts.

“GPT-4 outperforms ChatGPT by scoring in higher approximate percentiles among test-takers,” the OpenAI statement said. “We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations,” the tech company added.

GPT-4 also has a much larger context window than ChatGPT: a maximum of 32,768 tokens, which translates to roughly 25,000 words, or about 50 pages of text.
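The token-to-word conversion above can be sketched with a quick back-of-envelope calculation. Note that the 0.75 words-per-token ratio and the 500 words-per-page figure used here are common rules of thumb for English text, not exact values:

```python
# Rough conversion from GPT-4's 32k-token context window to words and pages.
# WORDS_PER_TOKEN and WORDS_PER_PAGE are approximations, not exact figures.
MAX_TOKENS = 32_768
WORDS_PER_TOKEN = 0.75   # rough average for English text
WORDS_PER_PAGE = 500     # typical single-spaced page

approx_words = int(MAX_TOKENS * WORDS_PER_TOKEN)  # about 24,600 words
approx_pages = approx_words / WORDS_PER_PAGE      # about 49 pages

print(f"~{approx_words:,} words, ~{approx_pages:.0f} pages")
```

Actual token counts vary with the text: code, non-English languages, and unusual words tend to use more tokens per word than plain English prose.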

To get GPT-4 as close to flawless as possible, OpenAI conducted extensive testing and training to refine the model and improve the user experience.

OpenAI incorporated more human feedback, including feedback submitted by ChatGPT users themselves, to improve and sharpen GPT-4's behaviour. The company also collaborated with over 50 experts for early feedback in domains including AI safety and security.

As it did with ChatGPT, OpenAI will keep updating and improving GPT-4 at a regular cadence as more people start to use it.

How Capable Is GPT-4?
GPT-4 can help individuals calculate their taxes, a demonstration by Greg Brockman, OpenAI's president, showed. Be My Eyes, an app that caters to visually impaired people, will offer a GPT-4-powered virtual volunteer tool.

But There Are Also Limitations …
According to OpenAI, GPT-4 shares several limitations with its prior versions and is "less capable than humans in many real-world scenarios". GPT-4 still struggles with social biases, hallucinations, and adversarial prompts.

Confidently stated but inaccurate responses are known as "hallucinations", and they have been a persistent challenge for AI language models.

Who Can Use GPT-4?
While GPT-4 can process both text and image inputs, only the text-input feature is available to ChatGPT Plus subscribers for now. Software developers can sign up for a waitlist to access GPT-4 through OpenAI's API. The image-input capability is not yet publicly available.
