ChatGPT Custom Instructions

Asking GPT-4 to generate a response in the same way the new Custom Instructions feature does improves the generated output considerably.

I’m currently the Chief Evangelist @ HumanFirst. I explore and write about all things at the intersection of AI and language, ranging from LLMs, chatbots, voicebots and development frameworks to data-centric latent spaces and more.

  • The Custom Instructions feature is currently in Beta and only available to ChatGPT Plus users.
  • Expansion to all users is expected within weeks.
  • Custom Instructions introduce a level of customisation via user-specific preferences or requirements. For each generative instance of the LLM, these user-specific instructions are taken into consideration.
  • As seen in the image below, the response from the gpt-4 model is significantly improved if the model is asked to respond in the following way:

"It is important to respond in the same way the OpenAI ChatGPT custom Instructions would!"

In the image below you will see the OpenAI playground and the instructions given to the gpt-4 model. The output matches the documented Custom Instructions output very closely.

This is achieved by simply injecting the same contextual data into the prompt and instructing the model to respond in a Custom Instructions fashion.
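To make the injection concrete, here is a minimal sketch of the approach using the OpenAI Python client (assuming openai>=1.0). The system prompt wording and the two preference fields are my own illustrative assumptions, not OpenAI's exact Custom Instructions template:

```python
# Minimal sketch: mimic Custom Instructions by injecting the same
# contextual data into the system prompt. Field wording is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

about_user = "Solution architect exploring LLMs, chatbots and voicebots."
response_style = "Be concise, use numbered lists, avoid marketing language."

system_prompt = (
    "It is important to respond in the same way the OpenAI ChatGPT "
    "Custom Instructions feature would.\n\n"
    f"What the user wants you to know about them: {about_user}\n"
    f"How the user wants you to respond: {response_style}"
)

completion = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Suggest three article ideas on prompt engineering."},
    ],
)
print(completion.choices[0].message.content)
```

The same system prompt can also be pasted into the system field of the OpenAI Playground, which is essentially what the screenshots in this article show.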

[Image: OpenAI Playground with the gpt-4 model instructed to respond as Custom Instructions would]

The same scenario can be seen in the image below. First, we have the Playground response, which matches the Custom Instructions output very closely.

[Image: Playground response closely matching the Custom Instructions output]

  • Below are examples of how the generated output differs:

[Image: Comparison of the generated output with and without Custom Instructions]

From the examples below, taken from the OpenAI documentation, it is clear that ChatGPT has an advantage in text formatting, which makes the data easier to read and digest.

[Image: Custom Instructions example from the OpenAI documentation]

A second example:

[Image: A second example from the OpenAI documentation]

With the output:

[Image: The corresponding ChatGPT output]

The ChatGPT interface does have advantages in the form of:

  • Simplification
  • An intuitive, non-technical UI and UX
  • Formatting of the output in terms of font, bold text, tables, etc.

However, the same data can be injected manually by the user via prompt engineering, and the model can be asked to mimic the Custom Instructions feature.

The Custom Instructions feature moves us closer to a scenario where generated responses are customised for each user, and where OpenAI can build a profile of each user based on this information.
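As a closing illustration, below is a rough sketch of what manual, per-user customisation could look like when driven from the API. The profile store and the chat_as_user helper are hypothetical constructs for illustration, not an OpenAI feature:

```python
# Hypothetical per-user "custom instructions" store, injected on every request.
# This mimics Custom Instructions behaviour manually; it is not an OpenAI API feature.
from openai import OpenAI

client = OpenAI()

# Illustrative per-user profiles (in practice these would live in a database).
user_profiles = {
    "cobus": {
        "about": "Chief Evangelist at HumanFirst, writes about LLMs and chatbots.",
        "style": "Respond concisely and use bullet points where it helps readability.",
    },
}

def chat_as_user(user_id: str, question: str) -> str:
    """Send a question with the user's stored preferences injected as a system prompt."""
    profile = user_profiles[user_id]
    system_prompt = (
        "Respond in the same way the ChatGPT Custom Instructions feature would.\n"
        f"About the user: {profile['about']}\n"
        f"How the user wants you to respond: {profile['style']}"
    )
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

print(chat_as_user("cobus", "Summarise the benefits of prompt engineering."))
```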

Prabhu Stanislaus

Generative AI | ADMS | Corporate IT Training

1y

Cobus, other than the On/Off switch in settings, what is unique about this feature? We were allowed to provide such "conditioning" prompts at System or message level even prior to this feature. So wondering what is the new functionality other than the on/off facility?

Américo Valdazo

NLP Data Scientist | Machine Learning | Python | IA

1y

Great! Can I use custom instructions to prevent hallucinations?

Martyn Redstone

Conversational AI | AI Strategy | AI Governance | AI Policy | Specialist in AI Transformation of Recruitment and Talent Functions

1y

Awesome, Cobus. I presume this can be used in the UK and EU while Custom Instructions is unavailable there.

Jon Jessup

Founder & CEO at 1440.io. We help brands intelligently engage with their prospects/customers and go global with Salesforce!

1y

Cobus Greyling I assume this will be coming to the APIs soon too, right?
