Are you tired of struggling to understand how to use the OpenAI API in Python? Or perhaps you’re interested in supercharging your trading with artificial intelligence? Our tutorial is here to help! We’ll guide you through creating an OpenAI account, obtaining API keys, and choosing the best OpenAI model for your needs.
You’ll learn how to perform tasks like text classification, code generation, language translation, and image generation using the OpenAI API in Python. You will see GPT-3, ChatGPT, and GPT-4 models in action.
Whether you’re a beginner, an experienced developer, or an algo trader looking to get a leg up on the competition, this tutorial will give you a solid foundation for using the OpenAI API in your Python projects. Don’t waste any more time struggling with outdated or confusing resources – start learning the easy way with our tutorial today!
Follow along using the OpenAI API Python Tutorial Jupyter Notebook and the video below.
What is OpenAI?
OpenAI is an AI research and development company specializing in developing and deploying state-of-the-art natural language processing models. OpenAI’s GPT-3, Codex, and Content filtering models allow you to implement advanced text classification, language generation, summarization, question answering, and chatbot applications.
Elon Musk, Sam Altman, and others founded OpenAI in 2015 in San Francisco.
What is the Open AI API?
The OpenAI API is a cloud platform hosted on Microsoft’s Azure that gives developers access to advanced, pre-trained artificial intelligence models. With the API, developers can easily add cutting-edge AI capabilities to their applications using a variety of programming languages.
OpenAI API Pros and Cons
Pros:
- Free $18 signup credit.
- Official Python wrapper makes it easier to interact with the OpenAI REST API.
- Specialized models for various API tasks.
Cons:
- Price plans are based on token usage, which can be confusing.
- Training can be costly for large datasets. For example, I had to spend roughly $8 to fine-tune the Davinci model on 650 tweets.
- No access to ChatGPT as of this writing.
OpenAI Pricing Plans
Developers can try out the OpenAI API with the free tier, which includes a limited number of API requests and a smaller selection of models. This is an excellent way for developers to get a feel for the API and see how it works without incurring any cost.
Beyond the free tier, OpenAI offers several pay-as-you-go plans that provide access to a larger number of API requests and higher usage quotas. The price depends upon the model used for the task and the number of tokens consumed.
OpenAI defines tokens as “pieces of words used for natural language processing,” where one token is roughly four characters or 0.75 words for English text.
To get an idea, generating 750 English words consumes roughly 1,000 tokens, which costs $0.02 with the Davinci model, as shown in the image below.
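Here is that arithmetic as a quick Python sanity check (the $0.02 per 1,000-token Davinci price quoted above is an assumption that may change over time):
# Rough Davinci completion cost: ~0.75 words per token, $0.02 per 1,000 tokens (price assumed from above)
words = 750
tokens = words / 0.75            # roughly 1,000 tokens
cost = tokens / 1000 * 0.02      # roughly $0.02
print(f"{tokens:.0f} tokens -> ${cost:.4f}")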
The fine-tuning prices depend upon the model type you are fine-tuning. For example, fine-tuning a Davinci model costs $0.12 per thousand tokens. We’ll learn about the different models shortly.
Fine-tuning involves adapting a pre-trained model to a new dataset by continuing its training. This can be beneficial as it allows the model to use the knowledge it has already acquired, reducing the time and resources required to train a model from scratch. This can be especially useful when working with small datasets that may not contain enough information to train a model effectively from scratch.
The following screenshot shows the detailed fee structure for fine-tuning OpenAI models.
Embeddings are currently only available from the Ada model, which costs $0.0004 per thousand tokens. Text embeddings are numeric representations of text inputs; statistical models cannot process raw text, so you must convert it to a numeric representation first.
OpenAI image processing model costs depend upon image resolution. As seen below, processing a single image of 1024 x 1024 resolution costs $0.02.
Setting Up OpenAI Python API
To set up OpenAI, you must create an OpenAI account and get your API key.
Creating an OpenAI Account
Follow these steps to create an OpenAI account.
- Sign up for an OpenAI API account.
- You will receive a verification link in your email. Click the link to verify your email address.
- Enter your name and phone number. You will receive a code on your mobile. Enter that on the login page and click the “Continue” button to log in to your OpenAI account dashboard.
Obtaining an API Key
Click your first name’s initial from the top-right corner of the OpenAI dashboard. Click the “View API Keys” link.
You will see the following window. Click the “+Create new Secret Key” button.
You will see your new API Key. Copy it and store it somewhere safe. Check out this excellent tutorial to use your API keys as environment variables.
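For example, a minimal .env file might look like the snippet below. The variable name OPENAI_KEY is an assumption that matches the authentication script later in this tutorial; keep this file out of version control.
# .env (do not commit this file)
OPENAI_KEY=sk-your-secret-key-here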
Getting Data Using OpenAI
This section shows you how to connect to the OpenAI API with a Python program and get a list of all the OpenAI models. Later, you’ll learn how to perform more sophisticated tasks using the OpenAI API using Python.
Installing OpenAI Python Library
The OpenAI API provides official Python bindings that you can install using the following pip command.
pip install openai
Authenticating Your API Key
To authenticate your API Key, import the openai module and assign your API key to the api_key attribute of the module. In the script below, we use the os.getenv() function to get the value of the OPENAI_KEY environment variable, which stores my OpenAI API key.
import os
import openai
from dotenv import load_dotenv
load_dotenv()
api_key = os.getenv('OPENAI_KEY')
openai.api_key = api_key
Making Requests
You can send requests to the OpenAI API using any of the openai module’s methods. For instance, the script below lists all the OpenAI models using the list() method of the openai.Model class.
#returns a list of all OpenAI models
models = openai.Model.list()
print(models)
Processing Responses
OpenAI models return the response as an openai.openai_object.OpenAIObject object, which you can convert to a Python dictionary, list, or Pandas DataFrame.
Processing the OpenAI model’s response is highly subjective and depends upon the contents of the OpenAI object. I recommend that you print the response to see the contents of the response and then process it further.
For instance, the script below stores the list of the OpenAI models retrieved in the previous script to a Pandas Dataframe.
# converts the list of OpenAI models to a Pandas DataFrame
import pandas as pd
data = pd.DataFrame(models["data"])
data.head(20)
OpenAI API Model Types
GPT-4
GPT-4 is the newest model from OpenAI. It is so good that it will replace the Codex models for coding. That said, if you want to reduce your token costs, it’s worth knowing which models are still available to you for now…
OpenAI API provides three families of models that differ in terms of capabilities and price points. Listed in no particular order:
- GPT-3: used for text completion, insertion, and editing.
- Codex: allows code completion, insertion, and editing.
- Content Filtering: allows filtering objectionable content.
The GPT-3 and Codex families consist of models from the Davinci, Curie, Babbage, and Ada series.
GPT-3
GPT-3 models are a collection of natural language processing models that can comprehend and generate human-like language.
GPT-3 family consists of models from the following series:
- Davinci – The latest Davinci series model is text-davinci-003.
- Curie – The latest Curie series model is text-curie-001.
- Babbage – The latest Babbage series model is text-babbage-001.
- Ada – The latest Ada series model is text-ada-001.
Davinci is the most capable and is ideal for understanding complex intentions, identifying causal relationships, and summarizing information for specific audiences. Davinci can perform almost every task with higher accuracy compared to other models.
As per official documentation, the Curie model is quite capable of sentiment analysis and text classification. However, as you’ll soon see, experience tells a different story.
Finally, Babbage and Ada models are ideal for straightforward tasks that don’t require complex reasoning. Babbage and Ada are also the fastest and least expensive.
Codex
The Codex models are good at text-to-code generation, code editing, and code insertion. OpenAI currently offers two Codex models: code-davinci-002 from the Davinci series and code-cushman-001 from the Cushman series. While Davinci is the most capable, Cushman is slightly faster.
Content Filtering
The content filter model identifies text that might be sensitive or potentially harmful when originating from the API.
In addition, the OpenAI API offers endpoints that call the DALL-E model for image processing tasks. The OpenAI API documentation only covers the GPT, Codex, and Content filtering models; DALL-E is mentioned only in the image processing section, which notes that the image endpoints use the DALL-E model behind the scenes. In a later section, you will see how to use the DALL-E model for image processing.
Which Model to Choose?
The choice of the model depends upon the task you want to perform. You can use the GPT-3 model for language understanding and generation tasks. ChatGPT and GPT-4 models are primarily optimized for chat but perform equally well for language understanding and generation tasks.
I recommend using ChatGPT over GPT-3.5 since the former achieves the same performance as the latter while being ten times cheaper. You will have to rely on GPT-3.5 models for fine-tuning and embedding since ChatGPT currently does not support fine-tuning.
In terms of model architectures for fine-tuning, the GPT-3 Davinci models will almost always return the best results. However, processing extensive data with Davinci can be expensive. I recommend fine-tuning Davinci first on a small test dataset and comparing its performance with other models. If the other models return comparable performance, you can try them for fine-tuning.
For code completion and generation, Codex models are a better choice. Finally, use the Content filtering model to detect whether a sentence is potentially sensitive.
The best way to test various models quickly and easily is to use the OpenAI Playgrounds.
OpenAI Playgrounds
To use OpenAI playgrounds, log in to your OpenAI account and click Playground from the top menu.
You will see a text area with customization options on the right sidebar.
The screenshot contains an example of input text and output response in the OpenAI playground.
Common OpenAI API Tasks
Once we’re done playing around, we’ll need to use the OpenAI API to do anything more serious.
Let’s learn how to perform some of the most common tasks, such as text completion, sentiment classification, and image and code generation, using the OpenAI API. You can build upon the information provided in this section to develop custom Python applications that use the OpenAI models.
Natural Language Processing
OpenAI models are primarily trained on textual data in multiple languages and are ideal for natural language processing tasks. This section will show Python code examples of performing various NLP tasks with OpenAI GPT-3 models.
Understanding Prompt Design
Prompts drive OpenAI’s natural language processing models. A prompt can be a single piece of text or a set of instructions that guide the model’s output. It offers context and defines the desired output, whether an answer to a specific question or the completion of a particular text.
Following are some guidelines for prompt design for OpenAI models:
Be Clear and Concise
Your prompt should clearly state the task or question you want the model to perform and provide all the necessary information to guide the output. Try to be as brief and to the point as possible.
Provide Sufficient Context
You should provide context about the task you want GPT-3 model to perform. More context leads to better outputs.
Use Relevant Examples
Provide relevant examples while fine-tuning or few-shot learning. OpenAI models are likely to perform well on unseen data similar to the data they encountered in the prompt.
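For example, a minimal few-shot prompt might look like the sketch below (the reviews are made up for illustration); the labeled examples show the model the exact output format you expect.
# A few-shot sentiment prompt sketch: two labeled examples guide the format of the answer
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: The pizza arrived cold and soggy.
Sentiment: negative

Review: Great crust, and the delivery was fast!
Sentiment: positive

Review: The staff ignored us for twenty minutes.
Sentiment:"""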
Test and Refine
Selecting the best prompt is an empirical process. Test your prompts with the model and evaluate the results. Adjust as needed, and iterate until you are satisfied with the output.
Select Appropriate Model Configuration
Selecting the appropriate model and adjusting hyper-parameters are crucial factors in determining the model output. Choose a model suited to your task and then fine-tune the hyper-parameters, such as temperature, to attain the desired output.
I will cover GPT-4 since it can perform all of these tasks as well as or better than the prior models. For many basic tasks, the difference between the GPT-4 and GPT-3.5 models is not significant, but the cost difference is.
GPT-4 vs. GPT-3 Models
GPT-4 is a chat-based model (think ChatGPT). Using its API is a little different from using the GPT-3 API. It’s currently in beta, but it’s fantastic. I’m documenting it here for when it goes live and for those of you who are lucky enough to have the beta, like me :). Here’s the difference:
- model: The type of the GPT-4 model
- messages: Tells the model if the message is from the system, user, or assistant.
- max_tokens: the maximum number of tokens to generate (remember, one token is roughly four characters or 0.75 English words).
- temperature: Specifies the creativity level of your model. Setting the temperature to a lower value will return more precise and straightforward answers.
To run the GPT-4 models, you use the openai.ChatCompletion.create() method like so:
response = openai.ChatCompletion.create(
    model='gpt-4',
    messages=[{"role": "user", "content": "Complete the text... France is famous for its"}]
)
The reply is always stored in the first element of the choices list in the response object. To get directly to the response text, do the following.
print(response.choices[0].message.content)
France is famous for its fine cuisine...
Now, GPT-4 and ChatGPT are conversation-based models. This means you can append your messages to a list to give the model context on the prior turns of the conversation:
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Who won the world series in 2020?"},
{"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
{"role": "user", "content": "Where was it played?"}
]
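A minimal sketch of how you might continue such a conversation: send the running message list, then append the assistant’s reply (and your next question) before the following request. The variable names here are assumptions.
# Send the conversation so far, then keep the assistant's reply for the next turn
response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
reply = response.choices[0].message.content
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user", "content": "Who was the winning pitcher?"})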
That’s pretty much what’s new with GPT-4. Once it’s been out of beta for a little, I’ll update this guide.
Text Completion
Text completion refers to adding text to the text you provide in the prompt. You will typically use one of the OpenAI GPT models for text completion tasks.
The create() method from the openai.Completion module allows you to perform text completion. The parameter values are as follows:
- model: The type of the GPT-3 model, which can be one of the Davinci, Curie, Babbage, or Ada models. The previous section explains how to see the list of available models.
- prompt: The prompt’s text guides the model to produce an output.
- max_tokens: the maximum number of tokens to generate (remember, one token is roughly four characters or 0.75 English words).
- temperature: Specifies the creativity level of your model. Setting the temperature to a lower value will return more precise and straightforward answers.
The following Python code shows how to complete the text “France is famous for its” using the latest Davinci model.
text = openai.Completion.create(
model="text-davinci-003",
prompt="France is famous for its",
max_tokens=15,
temperature=0
)
print(text)
The following screenshot shows the output response in a Python dictionary. The choices key of the outer dictionary contains a list of responses in the form of nested Python dictionaries. Since we have only one response, you can see only one nested dictionary in the choices list. You can retrieve the response text using the text key of a nested dictionary.
If you keep the temperature value at 0, you will always see the same (most likely) response.
Let’s generate five responses with the temperature value set to 1 and see what responses we get.
for _ in range(5):
    text = openai.Completion.create(
        model="text-davinci-003",
        prompt="France is famous for its",
        max_tokens=15,
        temperature=1
    )
    print(text['choices'][0]['text'])
    print("==============================")
In the output, you can see five different responses.
Instead of writing a loop, you can generate N responses using the n parameter of the create() method. For instance, the script below will generate five responses.
text = openai.Completion.create(
model="text-davinci-003",
prompt="France is famous for its",
max_tokens=15,
temperature=1,
n=5
)
Text Generation
The process of text generation is very similar to text completion. All you have to change is the prompt text to guide the model about the type of text you want to generate.
The following Python script generates five funny tagline ideas for a Python tutorial website using a Davinci model.
tag_line = openai.Completion.create(
model="text-davinci-003",
prompt="Write a funny tagline for a Python tutorial website",
max_tokens=15,
temperature=1,
n=5
)
for choice in tag_line['choices']:
    print(choice['text'])
    print("=================================")
Text Classification and Sentiment Analysis
GPT-3 models are intelligent enough to classify text into predefined categories and perform sentiment analysis. For instance, the following script uses a Davinci model to classify messages into spam or ham categories.
Notice that the prompt asks the model to classify the message’s text. It then passes a value for the message string and mentions the word text followed by a colon. The model will infer that we want to add the word ham or spam after the text.
prompt_text = """Classify the text of the following message as ham or spam
message: you have won a hundred thousand lottery. click this link.
text: """
message_type = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=15,
temperature=0,
)
message_type['choices'][0]['text']
Let’s now perform the same classification using a Curie model.
prompt_text = """Classify the text of the following message as ham or spam
message: you have won a hundred thousand lottery. click this link.
text: """
message_type = openai.Completion.create(
model="text-curie-001",
prompt= prompt_text,
max_tokens=15,
temperature=0,
)
message_type['choices'][0]['text']
The output shows that the Curie model misclassified the prompt. If you can afford it, I recommend using one of the Davinci models. They are the most accurate and can make correct inferences even with unclear prompts. For other models, you must articulate your prompt clearly, which can take quite a bit of experimentation.
Let’s see an example of tweet sentiment classification using a Davinci model.
prompt_text = """What is the sentiment of the following tweet
tweet: I liked that the movie finished earlier. It was not worth watching.
sentiment: """
message_type = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=15,
temperature=0,
)
message_type['choices'][0]['text']
Conversations
GPT-3 models allow you to generate conversations, which can be random chitchat or factual exchanges.
For example, the following prompt informs the model that it should generate a conversation with a sarcastic pizza chef. It then asks the chef a question and lets the model generate the response.
# conversation with a sarcastic Pizza chef chatbot
prompt_text = """ The following is a conversation between a Pizza Chef who gives sarcastic responses:
Human: Hi, how much time does it take to bake a mexican pizza?
Chef:
"""
response = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=15,
temperature = 0.5,
n = 5
)
for choice in response['choices']:
    print(choice['text'])
    print("=================================")
Here is another example of generating conversation. The script below tells the model that Alex is a cryptocurrency expert. It then asks the model a question (on behalf of a human) and lets the model generate the five most appropriate responses on behalf of Alex.
# conversation with a cryptocurrency expert
prompt_text = """ Alex is a Cryptocurrency expert:
Human: Which cryptocurrencies do you think are the best to invest at the moment?
Alex:
"""
response = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=40,
temperature = 1,
n = 5
)
for choice in response['choices']:
    print(choice['text'])
    print("=================================")
I recommend setting a higher temperature value for creative discussions and near zero for factual response generation.
For instance, we set the temperature to zero while asking a question to a football historian.
# conversation with a Football historian
prompt_text = """ The following is a conversation with a Football historian.
Human: Who was the player of the tournament in the 11th Fifa World Cup.
AI:
"""
response = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=50,
temperature = 0,
n = 1
)
for choice in response['choices']:
    print(choice['text'])
The Davinci model in the above script generates a correct response.
Let’s see what happens when we set a higher temperature.
# conversation with a Football historian
prompt_text = """ The following is a conversation with a Football historian.
Human: Who was the player of the tournament in the 11th Fifa World Cup.
AI:
"""
response = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=50,
temperature = 0.8,
n = 1
)
for choice in response['choices']:
    print(choice['text'])
The output shows that the Davinci model still gives the correct answer, albeit in different words. This shows the importance of asking the Davinci model a direct question: it returns the correct answer even with a higher temperature.
The following script shows that generating the same conversation using the Curie model leads to incorrect output. The Brazilian Ronaldo would have been only two years old in 1978, when the 11th FIFA World Cup was held.
# conversation with a Football historian
prompt_text = """ The following is a conversation with a Football historian.
Human: Who was the player of the tournament in the 11th Fifa World Cup.
AI:
"""
response = openai.Completion.create(
model="text-curie-001",
prompt= prompt_text,
max_tokens=50,
temperature = 1,
n = 1
)
for choice in response['choices']:
    print(choice['text'])
Text Translation and Conversion
You can use GPT-3 models for text translation and conversion tasks. For example, the following script uses a Curie model (we will do this with Davinci as well) to translate input text into Spanish, French, and Italian.
# text translation with curie
prompt_text = """ Translate the following into Spanish,French, and Italian:
I would like to reserve a table for two persons.
"""
response = openai.Completion.create(
model="text-curie-001",
prompt= prompt_text,
max_tokens=100,
temperature = 0,
n = 1
)
for choice in response['choices']:
    print(choice['text'])
The output shows only the French translation.
Let’s now perform the same translation with a Davinci model.
# text translation with davinci
prompt_text = """ Translate the following into Spanish,French, and Italian:
I would like to reserve a table for two persons.
"""
response = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=100,
temperature = 0,
n = 1
)
for choice in response['choices']:
    print(choice['text'])
You can see all three translations in the output. I double-checked with Google Translate that the translations generated are correct, which again demonstrates Davinci’s power compared to the Curie model.
In addition to translations, you can do almost all the other text conversion tasks with GPT-3 models. For example, the following script converts text to emojis.
# text to emoji conversion
prompt_text = """ Convert the following list of sports to emojis:
1. Cricket
2. Football
3. Tennis
4. Cycling
5. Volleyball
"""
response = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=100,
temperature = 0,
n = 1
)
for choice in response['choices']:
    print(choice['text'])
🏏, ⚽️, 🎾, 🚴♂️, 🏐
Text completion, generation, and conversion are not all you can do with the OpenAI GPT model. As seen in the next section, you can also fill in and edit an existing text.
Text Insertion and Editing
You can insert text within a body of text and edit existing text using the GPT-3 models.
Text Insertion
To insert text, you pass the text preceding the insertion point as the prompt to the model. The suffix attribute of the create() method contains the trailing text.
For example, in the following script, we ask the model to write down the steps to bake pizza. We pass step number 7 as a string to the suffix attribute. The model will generate the first six steps in the output.
# insert text inside another text
prompt_text = """ Write down the steps to bake a Pizza:
"""
response = openai.Completion.create(
model="text-davinci-003",
prompt= prompt_text,
max_tokens=100,
temperature = 0.5,
n = 1,
suffix = "7. Enjoy your pizza."
)
for choice in response['choices']:
    print(choice['text'])
Editing Text
You can edit a text you passed as a prompt using the create() method from the openai.Edit module. You need to pass edit instructions to the instruction attribute of the create() method. For example, the following script converts input text from passive to active voice.
Notice we used the text-davinci-edit-001, which is different from the Davinci models you previously used.
# edit text by converting it to active voice
input_text = """ 1. A car was bought by John
2. A car was hit by a truck
3. The website is developed by ABC
"""
response = openai.Edit.create(
model="text-davinci-edit-001",
input = input_text,
instruction="Convert the sentences to active voice.",
temperature = 0
)
for choice in response['choices']:
    print(choice['text'])
Code Completion
The Codex models allow you to generate, edit and insert code. Codex models are descendants of GPT-3 models, which are fine-tuned for code tasks.
Generating Code from Text
You can use one of the Codex models to generate code from text. The Codex model names start with the word code. For example, the following script uses the code-davinci-002 model to generate a Python function.
You can generate code using the create() method from the openai.Completion module.
You pass the instructions as the prompt to the create() method.
# generate code from text
prompt_text = """ Write a Python function that accepts first name, second name,
and birth date in string format as a parameter values
and returns the full name, and the number of days from the birth date to today """
response = openai.Completion.create(
model="code-davinci-002",
prompt= prompt_text,
temperature=0,
max_tokens=256,
)
for choice in response['choices']:
    print(choice['text'])
Let’s run the Python function generated by the model to see if it works correctly.
# calling the function generated above
import datetime
def get_full_name(first_name, second_name, birth_date):
    full_name = first_name + " " + second_name
    birth_date = datetime.datetime.strptime(birth_date, "%d/%m/%Y")
    today = datetime.datetime.today()
    days_from_birth = (today - birth_date).days
    return full_name, days_from_birth
get_full_name("John", "Doe", "10/06/1995")
Voila, you can verify that the output below is correct.
Inserting Code
Inserting code is similar to inserting text. The prompt text contains the preceding code, while the suffix attribute contains the trailing code. The model will insert the generated code in between the prompt and suffix. Using the suffix attribute can better guide the model on the type of code you want to generate.
For example, in the following script, we add a prompt, “# Python 3,” which tells the model we want to generate code in Python 3. In the suffix, we instruct the model that the end of the code should be “return x * x.” The model is intelligent enough to infer that we want to define a function that returns the square of the input parameter x, as you can see from the output.
prompt_text = "# Python 3 "
response = openai.Completion.create(
model="code-davinci-002",
prompt= prompt_text,
temperature=0,
max_tokens=256,
suffix = "return x * x"
)
for choice in response['choices']:
    print(choice['text'])
Editing Code
Finally, you can edit existing code just as you edit natural language. You pass the code you want to edit to the input attribute, and the instruction attribute of the create() method stores the editing instructions. Here is an example of code editing with a Davinci model.
prompt_text = """def process (x):
return x * X
"""
response = openai.Edit.create(
model="code-davinci-edit-001",
input= prompt_text,
temperature=0,
instruction= "modify this function to return cube of the input parameter"
)
for choice in response['choices']:
    print(choice['text'])
OpenAI models are not limited to text processing. You can also perform basic image processing tasks such as text-to-image generation, image editing, etc., using the OpenAI models.
Image Processing
OpenAI uses DALL-E models for image processing tasks. The image API currently provides methods for three image-processing tasks:
- Generating images from text.
- Editing images based on text instructions.
- Generating image variations.
Image Generation from Text
The openai.Image module’s create() method allows you to generate images using text inputs. You must pass text instructions as the prompt text. The valid image sizes are ['256x256', '512x512', '1024x1024'].
The following script generates two images of a dog standing on a beach.
prompt_text = "a dog standing on a beach"
response = openai.Image.create(
prompt= prompt_text,
n=2,
size="512x512"
)
for image in response['data']:
    print(image['url'])
Here is one of the images that the model generated for me.
Image Editing
To edit an existing image, you need to pass the original input image and the masked version of the input image to the create_edit() method. The area you want to edit in the original image is masked in the masked image.
For example, my input image looks like this.
And here is the masked image.
The following script shows how to edit the input image by adding a cat to the masked area. Notice the prompt here is the same as the one used to create dog images except that the word cat replaces the word dog.
response = openai.Image.create_edit(
image=open("D:/dog_image.png", "rb"),
mask=open("D:/dog_image_mask.png", "rb"),
prompt="A cat standing on a beach",
n=2,
size="512x512"
)
for image in response['data']:
    print(image['url'])
In the output, you will see two images containing a cat in place of the dog.
Generating Image Variations from Image Inputs
The create_variation method generates variations of input images. Here is an example.
response = openai.Image.create_variation(
image=open("D:/dog_image.png", "rb"),
n=1,
size="512x512"
)
for image in response['data']:
    print(image['url'])
Fine-Tuning Existing GPT-3 Models
GPT-3 models are trained on massive amounts of data. You can take the weights of a trained model to further update them based on your data. This process is called fine-tuning.
Fine-tuning is handy when you have a small amount of data. With fine-tuning, you can leverage a GPT-3 model’s existing knowledge and improve it based on your dataset. Usually, a fine-tuned model performs better on your custom dataset than the base GPT-3 models.
This section will show you how to fine-tune a GPT model on your custom dataset.
Preparing Your Dataset
For fine-tuning a GPT-3 model, the data should be in JSONL format, with each line representing a single training example consisting of a prompt and its corresponding completion. Here is an example of how your input data should look.
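For illustration, a couple of made-up lines in that format might look like this (the real prompts and completions will come from the dataset we prepare below):
{"prompt": "the crew was friendly and we landed ahead of schedule", "completion": "positive"}
{"prompt": "my bag was lost and nobody at the desk would help", "completion": "negative"}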
OpenAI provides a CLI tool that converts the input data in CSV, TSV, JSON, XLSX, and JSONL formats to the input format required for fine-tuning a GPT model.
Let’s see an example of how to convert a CSV file to a GPT fine-tuning model-compatible format.
Download the Airline Sentiment Data from Kaggle, and import it as a Pandas dataframe. The source code below imports the dataset and filters the text and airline_sentiment columns.
# data download link
# https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment?resource=download
import pandas as pd
dataset = pd.read_csv("D:/Datasets/Airline_Sentiment/Tweets.csv")
dataset.head()
dataset = dataset.filter(["text","airline_sentiment"])
print(dataset.shape)
dataset.head()
The dataset contains 14,640 records. The text column contains a tweet, whereas the airline_sentiment column contains tweet sentiment, which can be positive, neutral, or negative.
Rename the text and airline_sentiment columns as prompt and completion, respectively. I will only keep the first 1200 records so that the model takes less time to train.
dataset = dataset.head(1200)
print(dataset.shape)
dataset.columns = ["prompt", "completion"]
dataset.head()
Save the DataFrame as a CSV file. You can directly use a CSV file if it contains prompt and completion columns by default.
dataset.to_csv("D:/Datasets/Airline_Sentiment/airline_sentiments.csv", index = False)
The following command uses the OpenAI CLI tool (which comes preinstalled with the OpenAI Python library) to convert your input file to the correct format for fine-tuning a GPT-3 model.
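A sketch of that command, assuming the CSV path we saved above; the tool walks you through its recommended changes interactively.
openai tools fine_tunes.prepare_data -f "D:/Datasets/Airline_Sentiment/airline_sentiments.csv"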
For our example, you’ll want to accept all of the recommended changes EXCEPT splitting the data into train and test sets. You’ll also want to take note of roughly how long the model will take to train. Training begins after you’re out of the queue, so it can be a while, especially when training with Davinci.
[Recommended] Would you like to split into training and validation set? [Y/n]: n
The file is saved as “airline_sentiments_prepared.jsonl”
If you open the file, you will see that prompts contain a trailing “\n\n###\n\n” string, whereas completions contain a preceding white space. These values tell a GPT-3 model about the end and start of prompts and completions.
The next step is to create a training file for fine-tuning the model. To do so, you can use the create() method from the openai.File module. You need to pass the path of the formatted JSON file to the create() method.
def create_training_file(file_path):
    file = openai.File.create(
        file=open(file_path, "rb"),
        purpose='fine-tune'
    )
    return file
training_file = create_training_file("D:/Datasets/Airline_Sentiment/airline_sentiments_prepared.jsonl")
print(training_file)
The create() method returns file information, including the file ID you need to fine-tune your GPT-3 model.
The following code shows how to retrieve the file id of your training file.
training_file_id = training_file["id"]
training_file_id
Fine-tuning a Sentiment Classification Model
For fine-tuning a GPT-3 model, you need to call the create() method of the openai.FineTune module. The following are the parameter values for the create() method.
- training_file: the id of your training file (returned by the openai.File.create() method).
- model: the model for fine-tuning (the following script uses ada).
- n_epochs: the number of training iterations. This is an optional parameter, and its default value is 4.
fine_tuned_model = openai.FineTune.create(training_file = training_file_id,
model = "ada",
n_epochs = 4,
)
print(fine_tuned_model)
Calling the openai.FineTune.create() method launches a fine-tuning job in a separate thread. It can be confusing at first: the call returns in a matter of seconds, and you can already see the model properties by printing the returned object, as the previous script shows, even though the actual fine-tuning is still running.
You can only use the model for inference once the fine-tuning job completes. Inference refers to generating new text based on a given prompt. In the context of fine-tuning classification models, inference refers to classifying input text into predefined categories using the fine-tuned model.
A fine-tuning job undergoes several events which can print via the openai.FineTune.list_events() method. The method returns all the past events for a fine-tuning job.
openai.FineTune.list_events(id= fine_tuned_model.id)['data']
The model is ready for inference when you see an event with the message Fine-tune succeeded. You can retrieve streaming event messages as server-sent events if you set the stream attribute of the list_events() method to True. This allows you to generate a notification when you receive the streaming event for fine-tune completion. Otherwise, you will have to manually check (using the event list) whether the model has finished fine-tuning.
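If you’d rather not work with streaming events, a simple polling loop is an alternative. Here’s a minimal sketch, assuming the fine_tuned_model object returned earlier and the job status values reported by the API at the time of writing ("pending", "running", "succeeded", "failed"):
# Poll the fine-tuning job once a minute until it finishes
import time

while True:
    job = openai.FineTune.retrieve(id=fine_tuned_model.id)
    print(job["status"])
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(60)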
The third-last event contains the id of the fine-tuned model. You can also see the model id if you print the object returned by openai.FineTune.create() method.
Once a model is fine-tuned, you can use it like any other base model that we’ve been using previously, passing it our custom model name.
Let’s test our model. We pass a tweet with positive sentiment as a prompt to the create() method. Notice that we append “\n\n###\n\n” to our input tweet because our fine-tuned model has learned that the input tweet ends with this set of characters.
# use your newly trained model to make predictions
prompt_text = "The flight landed ahead of time. The food was delicious \n\n###\n\n"
response = openai.Completion.create(
model="ada:ft-alpha-2023-01-24-15-17-09",
prompt = prompt_text,
max_tokens=1,
temperature=0
)
print(response['choices'][0]['text'])
In the output, you can see that the fine-tuned model correctly detects the positive sentiment of the tweet.
Similarly, the fine-tuned model correctly classifies the following tweet with a neutral sentiment.
# use your newly trained model to make predictions
prompt_text = "I reached London on this flight \n\n###\n\n"
response = openai.Completion.create(
model="ada:ft-alpha-2023-01-24-15-17-09",
prompt = prompt_text,
max_tokens=1,
temperature=0
)
print(response['choices'][0]['text'])
Let’s classify the same tweet, “I reached London on this flight” with the Ada base model to see if our fine-tuned model did a better job.
prompt_text = """Give me the sentiment of the following tweet
'I reached London on this flight' """
response = openai.Completion.create(
model="text-ada-001",
prompt = prompt_text,
max_tokens=10,
temperature=0
)
print(response['choices'][0]['text'])
You can see that the base Ada model didn’t correctly predict the input tweet’s sentiment. Our fine-tuned model did a better job.
Let’s try to predict the sentiment with the Davinci base model.
prompt_text = """Give me the sentiment of the following tweet
'I reached London on this flight' """
response = openai.Completion.create(
model="text-davinci-003",
prompt = prompt_text,
max_tokens=10,
temperature=0
)
print(response['choices'][0]['text'])
Voila, the Davinci model successfully predicts the sentiment without fine-tuning.
Before fine-tuning your custom model, you should always try the base Davinci model first. If Davinci fails to deliver, you should fine-tune a cheaper model such as Ada or Babbage. If the results are still not optimal, you can try fine-tuning Davinci or Curie. Remember, fine-tuning Davinci should be your last resort, as it can be costly.
You can run multiple fine-tuning jobs at a time. You can get a list of all your fine-tuning jobs using the openai.FineTune.list() method.
models_list = openai.FineTune.list()
models_list
Getting Text Embeddings
You cannot directly feed text to a statistical model. You need to convert text to a numeric representation. Text embeddings are numeric representations of text inputs.
You can retrieve text embeddings, i.e., vector values for your text, from GPT-3 models and train your machine-learning models using these embeddings.
The openai.Embedding.create() method returns embeddings for the input text you pass to the input attribute of the method.
The following script returns text embeddings using the text-embedding-ada-002 model.
# create embeddings for input text
input_text = "The flight was on time, and the food was delicious."
response = openai.Embedding.create(
model = "text-embedding-ada-002",
input = input_text
)
print(len(response['data'][0]['embedding']))
response['data'][0]['embedding']
The output is a 1536-dimensional vector, which I assume is the default embedding size for the Ada model.
Train Machine Learning Classifiers with GPT-3 Embeddings
You can use OpenAI to turn your text into GPT-3 model embeddings and use them as input features for your machine learning classification tasks.
The following script defines a method that returns embeddings for input text.
## code taken from: https://beta.openai.com/docs/guides/embeddings/use-cases
def get_embedding(text, model="text-embedding-ada-002"):
    text = text.replace("\n", " ")
    return openai.Embedding.create(input=text, model=model)['data'][0]['embedding']
The script below uses the Pandas apply() method to get text embeddings for the first 50 records from the Airline Sentiment dataset and stores them in a new column named ada_embedding. As per the API documentation, you can only send 60 requests per minute, so I’ll only make 50 requests here, but you can easily rate-limit your requests to get more data (see the sketch after the code below).
# RateLimitError: Rate limit reached for default-global-with-image-limits in organization org-TnYTFIlbYtYrctiFaodPkXpw
# on requests per min. Limit: 60.000000 / min. Current: 70.000000 / min. Contact support@openai.com
# if you continue to have issues. Please add a payment method to your account
# to increase your rate limit. Visit https://beta.openai.com/account/billing to add a payment method.
###
import pandas as pd
dataset = pd.read_csv("D:/Datasets/Airline_Sentiment/Tweets.csv")
dataset = dataset.filter(["text","airline_sentiment"])
dataset = dataset.head(50)
dataset['ada_embedding'] = dataset['text'].apply(lambda x: get_embedding(x, model='text-embedding-ada-002'))
dataset.head()
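If you want to embed more rows without hitting the rate limit, one simple approach is to sleep between requests. Here’s a minimal sketch, assuming the 60 requests-per-minute limit mentioned above and the get_embedding() helper defined earlier:
# Throttled wrapper around get_embedding(): ~1.1 s between calls keeps us under 60 requests/minute
import time

def get_embedding_throttled(text, model="text-embedding-ada-002"):
    time.sleep(1.1)
    return get_embedding(text, model=model)

# dataset['ada_embedding'] = dataset['text'].apply(get_embedding_throttled)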
Next, we convert the embeddings from the ada_embedding column to a list, which we will use as input features for our machine learning model.
X = dataset['ada_embedding'].to_list()
We convert sentiment labels to integers using the LabelEncoder() class from Sklearn’s preprocessing module.
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
y = le.fit_transform(dataset['airline_sentiment'])
The following script divides the input dataset into training and test sets.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=42
)
And the script below trains the random forest classifier on our training data. The random forest classifier is a commonly used machine learning algorithm based on decision trees.
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=10)
rfc.fit(X_train, y_train)
We can then use our trained random forest classifier to predict tweet sentiment and review the results.
rfc_pred = rfc.predict(X_test)
Now let’s print the results:
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test,rfc_pred))
print(classification_report(y_test,rfc_pred))
The output shows that using the Ada text embeddings with only 50 records, our trained model can make predictions on unseen test sets with 91% accuracy. Pretty impressive!
Zero-Shot Classification with GPT-3 Embeddings
Zero-shot classification refers to making predictions on data never seen by a model before.
GPT-3 embeddings allow you to perform zero-shot classification on unseen data.
One approach is to find embeddings and then measure how similar they are to the targets. We can do this with cosine similarity.
Cosine similarity is a measure of similarity between two sequences of numbers. You can select the label whose embedding has the highest cosine similarity with the input text embedding as the category of the input text.
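For reference, cosine similarity takes only a couple of lines with NumPy; the sketch below is equivalent to the cosine_similarity helper we import from the openai library in the next script.
# Cosine similarity: dot product of the vectors divided by the product of their lengths
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))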
Let’s perform zero-shot classification on tweets from the Airline Sentiment dataset.
The following script calculates embeddings for the three target labels: positive, negative, and neutral. The label_score() function takes the input text and label embeddings as inputs and returns the cosine similarity between the input text and each output label.
from openai.embeddings_utils import cosine_similarity
labels = ['positive', 'negative', 'neutral']
label_embeddings = [get_embedding(label, model = "text-embedding-ada-002") for label in labels]
def label_score(review_text, label_embeddings):
    review_embedding = get_embedding(review_text, model='text-embedding-ada-002')
    return [cosine_similarity(review_embedding, label_embeddings[0]),
            cosine_similarity(review_embedding, label_embeddings[1]),
            cosine_similarity(review_embedding, label_embeddings[2])
            ]
The following script finds cosine similarities between a sample tweet and output labels.
label_scores = label_score("The flight was aweful, and the food was really bad", label_embeddings)
label_scores
The output shows the highest cosine similarity between the input text and the label at index 1 (negative).
The following script finds the index of the maximum value from the label_scores list and uses this index value to find the corresponding label value from the labels list.
max_index = label_scores.index(max(label_scores))
label = labels[max_index]
print(label)
Let’s try to predict the label for another tweet:
label_scores = label_score("The flight was ahead of time, and the food was delicious", label_embeddings)
max_index = label_scores.index(max(label_scores))
label = labels[max_index]
print(label)
Now you know how to process natural language text with OpenAI models. But can you process whatever you want? The answer is NO.
But how would you know whether your text violates OpenAI’s content policy? This is where OpenAI’s content moderation model comes into play.
Content Moderation with OpenAI’s Content Policy
OpenAI provides a model which predicts if your input content complies with OpenAI’s content policy. This feature enables you to filter content that violates OpenAI’s content policy.
The openai.Moderation.create() method classifies the input text into one or more predefined categories: hate, hate/threatening, self-harm, sexual, sexual/minors, violence, and violence/graphic.
For example, the following script classifies the sample input text into hate, hate/threatening, and violence categories.
input_text = "I want to exterminate their generations"
response = openai.Moderation.create(
input=input_text,
)
response
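The response contains a results list with a per-category breakdown. Here is a quick way to inspect it (the field names below follow the moderation API response at the time of writing):
# Check the overall flag and the per-category results in the moderation response
result = response["results"][0]
print(result["flagged"])            # True if the text violates the content policy
print(result["categories"])         # per-category True/False flags
print(result["category_scores"])    # per-category confidence scores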
Playing Around with ChatGPT
ChatGPT is a variant of the GPT-3 model explicitly designed for chatbot applications. ChatGPT, with 20 billion parameters, is much smaller than the GPT-3 model, which consists of 175 billion parameters. ChatGPT is also much faster, since GPT-3 is a general-purpose language model while ChatGPT is tailored to chatbot applications.
OpenAI has yet to include the ChatGPT model in its official API. You can try ChatGPT in your web browser using an OpenAI account.
Go to the ChatGPT web page and click the “Try CHATGPT” button. Start typing anything in the text bar and hit enter.
For example, I asked ChatGPT to implement the algorithm proposed in a particular research paper, and it returned the implementation detail, as shown in the following screenshot.
Frequently Asked Questions
Does OpenAI Use Python?
Yes, OpenAI uses Python as one of the main programming languages for its research and development projects, including GPT-3. The Python TensorFlow framework is commonly used to train OpenAI deep learning models.
How Was GPT-3 Trained?
The GPT-3 is a deep learning model trained using unsupervised learning methods. This approach trains the model on vast text data without utilizing explicit labels or annotations. After the initial pre-training process, the model is fine-tuned on smaller datasets for specific tasks, including language translation, summarization, and answering questions.
Is GPT-3 Only English?
GPT-3 is primarily trained on English text. It can understand and generate text in other languages, but its performance in those languages may not match its English performance.
What Language Is GPT-3 Coded In?
GPT-3 deep learning models are primarily coded in common deep learning frameworks such as TensorFlow and Keras. OpenAI API web services are developed in languages such as JavaScript, Python, and Go.
Is GPT-3 Deep Learning?
Yes, GPT-3 is a deep learning model based on the Transformer neural network architecture.
Can GPT-3 Write Code?
GPT-3 can generate code in popular programming languages such as Python, C++, Java, and JavaScript when given appropriate human instructions. The quality of the code that GPT-3 generates is contingent on the complexity of the task and the specificity of the instructions given by the human operator.
Why Is GPT-3 So Powerful?
GPT-3 is one of the largest language models ever trained. There are two factors behind the powerful capabilities of GPT-3 models: (1) training with 175 billion parameters using the state-of-the-art Transformer neural network architecture, and (2) a massive amount of training data from a variety of sources, including news articles, books, websites, forums, and social media posts, among others.
How Much RAM Do I Need for GPT-3?
Several factors determine the amount of RAM you need for operating GPT-3, such as the type of GPT-3 model to use, the dimension of input data, and the hardware and software configurations in use. As a rough estimate, you need at least 16GB of RAM to run GPT-3 models locally on your system.
Will GPT-3 Replace Programmers?
GPT-3 is not likely to replace programmers. GPT-3 is a tool that can assist programmers in writing code. GPT-3 can write simple code snippets. Human programmers are still needed for designing complex logic and debugging and testing the code.
Is GPT-3 the Most Powerful AI?
GPT-3 is one of the best AI models for natural language processing tasks. However, it is not the most powerful AI for every task. For example, ImageNet pre-trained models such as ResNet, VGG, and DenseNet have proven to be state-of-the-art for image processing tasks. The choice of the best AI model is highly contingent on the nature of the task you want to perform.
Can GPT-3 Solve Math Problems?
GPT-3 may not be the best-suited model for solving math problems, as it was not specifically designed for this and does not possess the ability to perform mathematical computations or equation manipulation.
Is GPT-3 Few Shot Learning?
GPT-3 is a few-shot learning model as you can fine-tune it with a few samples of data, and the model will typically generalize well on similar unseen data.
OpenAI Cloud Alternatives
Following are some AI platforms that provide services similar to OpenAI Cloud.
The Bottom Line
OpenAI’s models are among the most advanced for natural language processing. They can be accessed through the OpenAI playground and programming language bindings, such as Python. This OpenAI tutorial demonstrates using the OpenAI API through the playground and Python bindings to develop custom NLP applications. The model to use depends on the budget and the specific task. For most cases, and if budget allows, I recommend Davinci; otherwise, the Babbage and Ada models are cost-effective options.