Everything You Need to Know About ChatGPT-4 AI

Unlock the potential of GPT-4, OpenAI's latest AI technology. Discover its groundbreaking features and how it's changing the game.

After months of rumours, OpenAI has finally released GPT-4. This new AI model can understand not only text but also images, which will allow ChatGPT and other GPT-based applications to become even more innovative and versatile. Here is everything you need to know.

We live in extraordinary times: we are witnessing the dawn of the age of AI. For several months, a series of revolutionary models has been shaking up society.

In July 2022, OpenAI released DALL-E 2: a text-to-image model capable of generating an image from a text prompt. At the end of the year, the startup unveiled ChatGPT: a chatbot capable of responding to natural-language prompts and generating text.

What these AIs have in common is that they are built on GPT-3, OpenAI's "large language model" (ChatGPT itself runs on GPT-3.5, a refined version). The firm launched GPT-4 on March 14, 2023, propelling artificial intelligence into a new dimension.

What is GPT?

The Generative Pre-trained Transformer (GPT) is a deep learning text generation model trained on data available on the internet. It can answer questions, summarize a text, translate, classify, or even generate computer code.

The potential applications of GPT are virtually unlimited, and it is even possible to customize it through "fine-tuning" to refine the results. Architecturally, it is a "Transformer"-type model.
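
To make this concrete, here is a minimal sketch, using OpenAI's Python library, of how one might ask a GPT model to summarize a text. The model name, prompt, and API key are illustrative placeholders, not values taken from this article:

    import openai

    openai.api_key = "YOUR_API_KEY"  # replace with your own key

    article = "GPT is a deep learning model trained on text from the internet..."

    # Ask a GPT-3-family model to summarize the text in one sentence.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Summarize the following text in one sentence:\n\n{article}",
        max_tokens=60,
    )

    print(response["choices"][0]["text"].strip())

The same call, with a different prompt, covers translation, classification, or code generation.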

How GPT works is broadly similar to voice assistants like Alexa and Siri, but on a different scale. This AI can write a book or hundreds of social media post ideas in a minute.

It can be described as on-demand intelligence: an AI that can be applied to many problems that would otherwise require human involvement. GPT thus offers many opportunities for businesses.

However, GPT-3, launched in 2020, still left room for improvement. Problems arise when it is asked to generate long texts, especially on complex topics requiring precise knowledge. GPT-4 brings many improvements on these fronts.

From GPT-1 to GPT-3

Before the first GPT, most Natural Language Processing (NLP) models were trained for specific tasks like text classification or translation. They used supervised learning.

However, this approach has two drawbacks: a lack of available annotated data, and an inability to generalize beyond the tasks the model was trained for.

It was in 2018 that OpenAI shook up the field by publishing the paper "Improving Language Understanding by Generative Pre-Training", available on Papers with Code.

Through this study, the startup presented GPT-1: a generative language model with 117 million parameters, pre-trained on unlabeled data and then fine-tuned for specific tasks such as classification and sentiment analysis.

A year later, in 2019, the paper "Language Models are Unsupervised Multitask Learners" introduced GPT-2. This 1.5-billion-parameter model was trained on a larger dataset to boost performance. It also exploits the techniques of task conditioning, zero-shot learning, and zero-shot task transfer.

Then, in 2020, OpenAI introduced GPT-3 and its 175 billion parameters with the paper "Language Models are Few-Shot Learners". This new model has roughly 100 times more parameters than GPT-2 and was trained on an even larger dataset to maximize its results. It also relies on in-context, one-shot, few-shot, and zero-shot learning techniques.
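
To give a sense of what these terms mean in practice, here is a hedged illustration of a few-shot prompt: instead of fine-tuning, the model is shown a couple of worked examples directly in the input (the examples and model name are invented for illustration):

    import openai

    openai.api_key = "YOUR_API_KEY"  # replace with your own key

    # Few-shot learning: two worked examples in the prompt teach the
    # model the task (English-to-French translation) with no training step.
    prompt = (
        "English: Hello -> French: Bonjour\n"
        "English: Thank you -> French: Merci\n"
        "English: Good night -> French:"
    )

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=10,
    )

    print(response["choices"][0]["text"].strip())  # expected: "Bonne nuit"

A zero-shot prompt, by contrast, would state the task with no examples at all.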

This AI shocked the world by writing stories, SQL queries, Python scripts and text summaries. Now, GPT-4 represents a new step forward.

GPT-4: a text/image multimodal AI

While GPT-3.5 only accepts text input, GPT-4 can also receive images. It’s a significant improvement.

This new AI is capable of captioning and even interpreting relatively complex images. For example, it can identify a Lightning cable adapter from a photo of a plugged-in iPhone.

At this time, this capability is not available to all OpenAI customers. It is being tested with a single partner: Be My Eyes.

The service’s new Virtual Volunteer feature is based on GPT-4 and can answer questions about images sent to it.

On its blog, the company explains how it works: "for example, if a user sends a photo of the inside of their fridge, the Virtual Volunteer will not only be able to identify what is inside, but also extrapolate and analyze what can be prepared with these ingredients".

The tool will also be able to offer several recipes for these ingredients and send a step-by-step guide to making them. This multimodality will open up many new possibilities.
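
No public API for image input was available at launch; as noted above, the capability was limited to the Be My Eyes pilot. Purely as a hypothetical sketch, a multimodal request might look like the following in Python, with the message format and URL being assumptions rather than a documented interface:

    import openai

    openai.api_key = "YOUR_API_KEY"  # replace with your own key

    # Hypothetical request mixing text and an image in one user message;
    # image input was not publicly exposed at GPT-4's launch.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What could I cook with these ingredients?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }],
    )

    print(response["choices"][0]["message"]["content"])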

GPT-4 outperforms 90% of students on university exams

Furthermore, GPT-4 manages to match humans on a wide variety of professional and academic benchmarks.

For example, the AI scored better than 90% of candidates on a simulated bar exam, while GPT-3.5's score was comparable to that of the bottom 10%.

It also gained 150 points over GPT-3.5 on the SAT, reaching 1,410 out of 1,600. The SAT is a multiple-choice test created by the College Board.

Steerability: a strong point of GPT-4

Steerability is one of the main improvements brought by GPT-4. The API gains a "system messages" capability that lets developers prescribe the AI's style and task by giving specific directions.

System messages are essential for setting the tone and putting guardrails around AI interactions.

For example, a system message might say: "You are a tutor who always responds in a Socratic style. You never give the student the answer, but you always try to ask the right question to help them learn to think for themselves. You should always tailor your question to the student's interest and knowledge, breaking the problem down into simpler parts until it is at the right level for them."

This new feature will also be added to ChatGPT soon. It should allow the chatbot to become less monotonous and generic.
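
To illustrate, here is a minimal sketch of how a developer might pass such a system message through the API with OpenAI's Python library (the user question and API key are illustrative):

    import openai

    openai.api_key = "YOUR_API_KEY"  # replace with your own key

    # The system message pins down the assistant's style and guardrails;
    # the user message carries the actual question.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a tutor who always responds in a Socratic style. "
                        "Never give the student the answer; ask guiding questions "
                        "that help them think for themselves."},
            {"role": "user", "content": "How do I solve 3x + 7 = 22?"},
        ],
    )

    print(response["choices"][0]["message"]["content"])

With such a system message in place, the model should answer the algebra question with a guiding question rather than the solution itself.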

How did OpenAI train GPT-4?

To perfect GPT-4, OpenAI spent six months "iteratively aligning" the model, drawing on lessons from its adversarial testing program and from ChatGPT.

According to the firm, this allowed it to obtain its best results yet in terms of factuality, steerability, and refusal to step outside its guardrails.

Like previous versions of GPT, this fourth version was trained on publicly available data, including from public web pages.

Together with Microsoft, OpenAI also developed a supercomputer on the Azure cloud, which was then used to train GPT-4.

GPT-4 vs GPT-3

According to a blog post published by OpenAI, "the difference between GPT-3.5 and GPT-4 can be subtle in casual conversation". On the other hand, the new model's full potential is revealed once a task's complexity reaches a certain threshold.

Indeed, GPT-4 is more reliable and creative and can support much more nuanced instructions than GPT-3.5.

Additionally, GPT-4 is 82% less likely to respond to requests for disallowed content, and it handles sensitive questions, such as those about medical advice or self-harm, in line with OpenAI's policies 29% more often.

A still very artificial intelligence

Despite these many improvements, OpenAI admits that GPT-4 is still imperfect. The AI continues to "hallucinate" facts and make reasoning errors, sometimes with great confidence.

In an example cited by OpenAI, GPT-4 described Elvis Presley as the "son of an actor". However, this is not the case.

As the San Francisco firm explains, "GPT-4 generally lacks knowledge of events after September 2021 and does not learn from its experiences". Remember that its training dataset ends at that date.

Moreover, "it may sometimes make errors in simple reasoning that do not seem compatible with competence in so many areas, or show credulity by accepting obviously false statements from the user".

Sometimes the AI can also "fail on difficult problems the same way humans do, for example by introducing security vulnerabilities into the code it produces".

OpenAI changes strategy: closed AI shrouded in mystery

To accompany the launch of GPT-4, OpenAI released a 98-page technical report. However, some researchers lament a lack of transparency and openness at odds with the founding values from which the firm takes its name.

Moreover, the document explicitly states that "given the competitive landscape and the safety implications of large-scale models like GPT-4, this report does not contain details on the architecture, model size, hardware, training method, or construction of the dataset".

Thus, GPT-4 is far more secretive than its predecessors. It is a significant turnaround for a company for which open source was a founding principle.

Researchers criticize OpenAI in particular for making it impossible to evaluate GPT-4's potential biases or to verify its performance, and, more generally, for deploying the model as a product rather than as a scientific tool.

When it was created, the firm promised to encourage sharing and to advance humanity through its research rather than to chase profit. That is no longer the case, especially since Microsoft had the privilege of incorporating GPT-4 into Bing before its public launch.

In the face of criticism, OpenAI chief scientist Ilya Sutskever defended the decision to MIT Technology Review: "Safety is not a binary thing but a process. Things get more complicated every time you reach a new level of capabilities."

He also explained to The Verge that the market has become very competitive and that "GPT-4 is not easy to develop. It took everyone at OpenAI working together for a very long time to produce this thing".

He said it was a mistake to have chosen the open-source path at the start: "we were wrong". He also expects it to become "obvious to everyone" within a few years that open-sourcing AI is not a good idea…

Price and availability

Unveiled by OpenAI on March 14, 2023, GPT-4 is now available through ChatGPT Plus, the paid version of the AI chatbot.

The price for using GPT-4 is $0.03 per 1,000 "prompt" tokens and $0.06 per 1,000 "completion" tokens.

Tokens are pieces of plain text. A long word may be split into several tokens: "fantastic", for example, might be broken down into "fant", "as", and "tic". Prompt tokens are the pieces of text supplied to GPT-4, and completion tokens are the content it generates. In both cases, 1,000 tokens correspond to approximately 750 words.
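
As a rough illustration of that pricing, here is a small Python sketch of the arithmetic (the function name is ours; the rates are the launch prices quoted above):

    # GPT-4 launch pricing: $0.03 per 1,000 prompt tokens,
    # $0.06 per 1,000 completion tokens.
    PROMPT_PRICE = 0.03 / 1000       # dollars per prompt token
    COMPLETION_PRICE = 0.06 / 1000   # dollars per completion token

    def gpt4_cost(prompt_tokens: int, completion_tokens: int) -> float:
        """Estimated cost in dollars for a single GPT-4 API call."""
        return prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE

    # Example: a 1,000-token prompt (~750 words) answered with 500 tokens.
    print(f"${gpt4_cost(1000, 500):.2f}")  # -> $0.06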

In addition, Microsoft had kept its cards close to its chest: since its launch in February 2023, the new ChatGPT-powered version of the Bing search engine has secretly been based on GPT-4. The firm has just confirmed this on its blog.

Other services that have already adopted GPT-4 include Stripe, which uses AI to scan company websites and provide a summary to customer service.

The Duolingo language-learning platform has also integrated GPT-4 into a new, more expensive subscription plan. Similarly, Khan Academy leverages this AI to create an automated tutor.

Meanwhile, the American bank Morgan Stanley created a system based on GPT-4, capable of retrieving information from company documents and providing it to financial analysts.

For their part, developers can subscribe to the API waiting list. Once their application is accepted, they can start building their GPT-4-based applications.

Proud of the improvements made to its Transformer, OpenAI wants GPT-4 to become "a valuable tool for improving people's lives by powering many applications". Going forward, the firm promises to keep improving the model through the community's collective efforts, hoping one day to give birth to an artificial general intelligence (AGI)…
