AI, Blog, Machine Learning

Let’s Try the OpenAI API with Python! A One-Stop Solution for Almost Every NLP Problem!

Why wait for the ChatGPT API? You can learn how to use the OpenAI API with your own code today!

In this guide, we will be exploring the OpenAI API and how it can be used in conjunction with Python. It is important to note that OpenAI recently announced that the ChatGPT model will soon be available through their API. While the exact timing of this release is currently unknown, familiarizing ourselves with the OpenAI API now will allow us to take advantage of its powerful capabilities, such as using GPT-3 for natural language tasks, Codex for code generation, and DALL-E for image creation and editing.

OpenAI API can be used for a wide variety of natural language processing tasks, such as:

  • Language Generation: Generating text, stories, poems, and more.
  • Language Translation: Translating text from one language to another.
  • Text Summarization: Summarizing long articles or documents into a shorter version.
  • Text Classification: Classifying text into different categories or labels.
  • Named Entity Recognition: Identifying and extracting named entities from text, such as people, organizations, and locations.
  • Text Parsing: Analyzing the grammatical structure of text.
  • Sentiment Analysis: Determining the sentiment or emotion expressed in text.
  • Language Modeling: Generating text that follows a specific language pattern or structure.
  • Question Answering: Answering questions based on a provided text or context.
  • Text Generation: Generating text based on a given prompt or context.
  • Speech Recognition: Transcribing spoken words into text.
  • Speech Synthesis: Generating speech from text.
  • Machine Translation: Translating text between multiple languages.
  • and many more.

Additionally, the OpenAI API allows you to fine-tune models to your specific use case by building on top of base models such as GPT-3. Note that the API is a hosted service, so an internet connection is required to use it.

It’s worth noting that OpenAI is constantly updating and adding new features to the API, so it’s a good idea to check the OpenAI API documentation for the latest updates and capabilities.

Generate Your API Key

Before we start working with the OpenAI API, we need to log in to our OpenAI account and generate an API key, as shown in the picture below.

It is important to remember that once you generate your API key on OpenAI, it will not be displayed again, so make sure to copy and save it for future use. One way to keep your API key secure is to create an environment variable, such as “OPENAI_API_KEY”, that will store your key and can be easily accessed in the future.
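For example, the key can then be read from the environment at runtime instead of being hardcoded. A minimal sketch (the variable name OPENAI_API_KEY matches the suggestion above; the fallback placeholder is only for illustration):

```python
import os

# Prefer reading the key from the environment over hardcoding it.
# The setdefault call only supplies a placeholder for this demo;
# in practice you would export OPENAI_API_KEY in your shell.
os.environ.setdefault("OPENAI_API_KEY", "your_api_key")
api_key = os.getenv("OPENAI_API_KEY")

# The openai library is then configured with:
# openai.api_key = api_key
```

Keeping the key out of your source files also means it never ends up in a shared repository by accident.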

To utilize the OpenAI API with Python, we must first install the official Python bindings by running:

pip install openai

Once this is done, we can start exploring the various capabilities and functionality of the OpenAI API.

The OpenAI API allows developers to access powerful models such as GPT-3, Codex and DALL-E and use them for a variety of tasks. Here are a few examples of what can be done with the OpenAI API using Python:

  1. Generating text using GPT-3:
import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='What is the capital of France?',
)
print(response["choices"][0]["text"])
# Example output: The capital of France is Paris.

2. Translating natural language to code using Codex:

import openai

openai.api_key = "your_api_key"

# Codex models are accessed through the standard Completion endpoint
response = openai.Completion.create(
    engine="code-davinci-002",
    prompt='# Write a Python function that takes a list and returns the first element',
    max_tokens=64,
)
print(response["choices"][0]["text"])

3. Creating images using DALL-E:

import openai

openai.api_key = "your_api_key"

response = openai.Image.create(
    prompt='Draw a picture of a robot cat wearing a party hat',
    n=1,
    size="512x512",
)
print(response["data"][0]["url"])

4. Language Translation:

import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='Translate "Hello, how are you?" to Spanish:',
    max_tokens=60,
)
print(response["choices"][0]["text"])

5. Text Summarization:

import openai

openai.api_key = "your_api_key"

article = 'Recent studies have shown that regular exercise can have a variety of health benefits, including reducing the risk of chronic diseases, improving mental health, and increasing lifespan. Exercise has also been shown to improve cognitive function and memory, as well as helping with weight management. Despite these benefits, many people still struggle to incorporate regular exercise into their daily routine. Experts recommend finding an activity that you enjoy, setting realistic goals, and tracking your progress to help stay motivated.'
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=f'Summarize the following article:\n\n{article}\n\nSummary:',
    max_tokens=100,
)
print(response["choices"][0]["text"])

6. Question Answering:

import openai

openai.api_key = "your_api_key"

context = 'The United States of America is a federal republic consisting of 50 states and a capital district. Its capital is Washington, D.C.'
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=f'{context}\n\nQuestion: What is the capital of the United States?\nAnswer:',
    max_tokens=20,
)
print(response["choices"][0]["text"])

7. Text Generation:

import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='Generate a story about a robot who finds love',
    temperature=0.5,
    max_tokens=256,
)
print(response["choices"][0]["text"])

8. Text Classification:

import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='What is the sentiment of this tweet? "I had a great day today"\nSentiment:',
    max_tokens=5,
)
print(response["choices"][0]["text"])

9. Named Entity Recognition:

import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='List the named entities (people, organizations, locations) in this sentence: "Barack Obama was the 44th president of the United States"',
    max_tokens=60,
)
print(response["choices"][0]["text"])

10. Text Parsing:

import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='Describe the grammatical structure of this sentence: "The cat sat on the mat"',
    max_tokens=100,
)
print(response["choices"][0]["text"])

11. Speech Recognition:

import openai

openai.api_key = "your_api_key"

# Speech-to-text uses the Whisper model via the Audio endpoint
with open('audio.mp3', 'rb') as audio_file:
    response = openai.Audio.transcribe("whisper-1", audio_file)
print(response["text"])

12. Speech Synthesis:

Note that, at the time of writing, the OpenAI API does not provide a speech synthesis (text-to-speech) endpoint; the Whisper model only handles transcribing and translating audio into text. To generate speech from text, you would need to combine the API's text output with a separate text-to-speech service.

13. Machine Translation:

import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='Translate "Hello World" to German:',
    max_tokens=60,
)
print(response["choices"][0]["text"])

14. Language Modeling:

import openai

openai.api_key = "your_api_key"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt='I love to eat',
    max_tokens=5,
)
print(response["choices"][0]["text"])
15. Sentiment Analysis:

import openai
import os

openai.api_key = os.getenv("OPENAI_API_KEY")


# for GPT-3 
response = openai.Completion.create(
 model="text-davinci-003",
 prompt="Classify the sentiment in these tweets:\n\n1. \"I can’t stand homework\"\n2. \"This sucks. I’m bored 😠\"\n3. \"I can’t wait for Halloween!!!\"\n4. \"My cat is adorable ❤️❤️\"\n5. \"I hate chocolate\"\n\nTweet sentiment ratings:",
 temperature=0,
 max_tokens=60,
 top_p=1.0,
 frequency_penalty=0.0,
 presence_penalty=0.0
)
print(response)


#for GPT-3.5 and GPT-4

import openai

response = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)
print(response)
# for GPT-3
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n1. Negative\n2. Negative\n3. Positive\n4. Positive\n5. Negative"
    }
  ],
  "created": 1674453740,
  "id": "cmpl-6bkCKjvkhjrvOlVg8jU5t9tFgncis",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 20,
    "prompt_tokens": 73,
    "total_tokens": 93
  }
}

# for GPT-3.5 and GPT-4
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.",
        "role": "assistant"
      }
    }
  ],
  "created": 1677664795,
  "id": "chatcmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW",
  "model": "gpt-3.5-turbo-0613",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 17,
    "prompt_tokens": 57,
    "total_tokens": 74
  }
}

Try these yourself; since the snippets are short, I'm not including outputs for all of them, and I'd encourage you to experiment on your own.

16. Function Calling

In an API call, you can describe functions to gpt-3.5-turbo-0613 and gpt-4-0613, and have the model intelligently choose to output a JSON object containing arguments to call those functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code.

The latest models (gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature. With this capability also comes potential risks. We strongly recommend building in user confirmation flows before taking actions that impact the world on behalf of users (sending an email, posting something online, making a purchase, etc).

Under the hood, functions are injected into the system message in a syntax the model has been trained on. This means functions count against the model’s context limit and are billed as input tokens. If running into context limits, we suggest limiting the number of functions or the length of documentation you provide for function parameters.

The basic sequence of steps for function calling is as follows:

  1. Call the model with the user query and a set of functions defined in the functions parameter.
  2. The model can choose to call a function; if so, the content will be a stringified JSON object adhering to your custom schema (note: the model may generate invalid JSON or hallucinate parameters).
  3. Parse the string into JSON in your code, and call your function with the provided arguments if they exist.
  4. Call the model again by appending the function response as a new message, and let the model summarize the results back to the user.
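Since the model may emit invalid JSON in step 2, it is worth parsing the arguments defensively before calling your function. A small sketch (the helper name parse_function_args is mine, not part of the API):

```python
import json

def parse_function_args(arguments_str):
    """Parse a model-generated arguments string; return {} if it is not a JSON object."""
    try:
        args = json.loads(arguments_str)
    except json.JSONDecodeError:
        return {}
    # The model can also produce valid JSON that is not an object; insist on a dict.
    return args if isinstance(args, dict) else {}

good = parse_function_args('{"location": "Boston, MA"}')
bad = parse_function_args('{"location": "Boston, MA"')  # truncated JSON -> {}
```

Returning an empty dict lets the calling code fall back to defaults (or re-prompt the model) instead of crashing on a malformed response.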

You can see these steps in action through the example below:

import openai
import json


# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    weather_info = {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)


def run_conversation():
    # Step 1: send the conversation and available functions to GPT
    messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
    functions = [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        }
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=functions,
        function_call="auto",  # auto is default, but we'll be explicit
    )
    response_message = response["choices"][0]["message"]

    # Step 2: check if GPT wanted to call a function
    if response_message.get("function_call"):
        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            "get_current_weather": get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = response_message["function_call"]["name"]
        function_to_call = available_functions[function_name]
        function_args = json.loads(response_message["function_call"]["arguments"])
        function_response = function_to_call(
            location=function_args.get("location"),
            unit=function_args.get("unit"),
        )

        # Step 4: send the info on the function call and function response to GPT
        messages.append(response_message)  # extend conversation with assistant's reply
        messages.append(
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            }
        )  # extend conversation with function response
        second_response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=messages,
        )  # get a new response from GPT where it can see the function response
        return second_response


print(run_conversation())

Hallucinated outputs in function calls can often be mitigated with a system message. For example, if you find that a model is generating function calls with functions that weren’t provided to it, try using a system message that says: “Only use the functions you have been provided with.”

In the example above, we sent the function response back to the model and let it decide the next step. It responded with a user-facing message telling the user the temperature in Boston, but depending on the query, it may choose to call a function again.

If you want to force the model to call a specific function you can do so by setting function_call: {"name": "<insert-function-name>"}. You can also force the model to generate a user-facing message by setting function_call: "none". Note that the default behavior (function_call: "auto") is for the model to decide on its own whether to call a function and if so which function to call.
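As a rough sketch, forcing the weather function from the example above would only change the function_call argument. The request is shown here as a plain dict rather than a live API call, with the function schema abbreviated:

```python
# Keyword arguments for openai.ChatCompletion.create(...), sketched as a dict.
request_kwargs = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "What's the weather like in Boston?"}],
    "functions": [
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {"type": "object", "properties": {}},  # schema abbreviated
        }
    ],
    # Force this specific function instead of the default "auto" behavior:
    "function_call": {"name": "get_current_weather"},
}

# openai.ChatCompletion.create(**request_kwargs) would then always respond
# with a function_call for get_current_weather.
```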

Please note that the API is subject to change, so adjust the code accordingly if there are changes in newer versions of the OpenAI library.

Dall E

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

#Artificial Intelligence
#Machine Learning
#Data Science
#Naturallanguageprocessing
#OpenAI
