Rise of the ChatBots (1) - Chat Completions

By Charles LAZIOSI
  1. Introduction
  2. First integration with the official library (Python)
  3. Integration with the API (Rust)
  4. Chat Completions streaming (OpenAI Library)
  5. Chat Completions streaming with Rust

Introduction

The OpenAI Chat Completions endpoint powers ChatGPT and provides a simple way to turn text input into model-generated output using a model like GPT-4. It creates a model response for a given conversation: you send a list of messages and receive a model-generated message in return. It is a powerful tool for adding conversational AI to applications and has been used to build chatbots, content-writing tools, and more.
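
Concretely, a request is just a list of role-tagged messages (system, user, assistant), and the API answers with the assistant's next message. A minimal sketch of that shape in Python:

# A chat "conversation" is a list of role-tagged messages.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the world series in 2020?"},
]

# The API replies with the assistant's next message, for example:
# {"role": "assistant", "content": "The Los Angeles Dodgers won the 2020 World Series."}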

It's time to get hands-on with a first example of integration in Python.

First integration with the official library (Python)

To integrate the OpenAI Chat Completions API with Python, you will need to perform a series of steps, which I'll outline below. Before you start, ensure you have an OpenAI API key, which is required to authenticate your requests.

Here's a step-by-step guide:

Step 1: Set up your environment

Make sure Python is installed on your system. It’s also a good practice to create a virtual environment for your project to manage dependencies:

python -m venv openai-env
source openai-env/bin/activate  # For Unix or MacOS
openai-env\Scripts\activate  # For Windows

Step 2: Install the openai package

Install the openai package using pip, which is the official library provided by OpenAI:

pip install openai

Step 3: Authenticate your API request

You’ll need to set up authentication using your OpenAI API key. How do you proceed with an environment variable?

To set up authentication for the OpenAI API using an environment variable, you can follow these steps:

  1. Locate your OpenAI API Key:

    • Sign in to your OpenAI account on the OpenAI website.
    • Navigate to the API section and find your API keys. If you don't have one, create a new API key.
  2. Setting Up the Environment Variable: Depending on your operating system, the method for setting up an environment variable will differ slightly.

For Linux or macOS:

  • Open a terminal.
  • Use a text editor like nano or vim to open your shell's configuration file (e.g., ~/.bashrc or ~/.zshrc)
nano ~/.bashrc
  • Add the following line at the end of the file:
export OPENAI_API_KEY='your_api_key_here'
  • Save and close the file.
  • To make the changes effective, you can either restart your terminal or source your configuration file with:
source ~/.bashrc

For Windows:

  • Open Start Search, type in "env", and choose "Edit environment variables for your account".
  • Click on "New" under User variables.
  • In the Variable Name field, enter OPENAI_API_KEY.
  • In the Variable Value field, enter your actual API key.
  • Click OK and Apply as necessary to save this variable.
  3. Accessing the Environment Variable in Your Code: When writing code that needs to access this environment variable (for example, in Python), you can use:

    import os
    
    # Retrieve API key from environment variable
    openai_api_key = os.getenv('OPENAI_API_KEY')
    
    if openai_api_key is None:
        raise ValueError("Please set up OPENAI_API_KEY as an environment variable.")
    
    # Now you can use this key with OpenAI's client library or HTTP requests
    
  4. Using the Environment Variable with OpenAI's Client Library: If you're using Python and have installed OpenAI's official client library (openai), you can initialize it like so:

    from openai import OpenAI
    
    # With no api_key argument, the client automatically looks for the
    # 'OPENAI_API_KEY' environment variable.
    client = OpenAI()
    
    # Now you are ready to make calls to OpenAI's services using the library
    

By storing your API key in an environment variable, it remains secure and separate from your codebase, which is especially important if you're sharing code or using a version control system like Git. Remember never to hard-code secrets directly into your source code.
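
If you prefer keeping the key in a project-local file, the python-dotenv package (an extra dependency, not used elsewhere in this article) mirrors the Rust dotenv crate we'll use later. A minimal sketch, assuming pip install python-dotenv:

from dotenv import load_dotenv
import os

# Read variables from a .env file into the process environment.
load_dotenv()

api_key = os.getenv('OPENAI_API_KEY')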

Step 4: Write the integration code

Here's a basic example of how you can use the Chat Completions endpoint:

# Import the OpenAI client class and the os module for environment variable access.
from openai import OpenAI
import os

# Create a client instance. For security, the API key comes from an
# environment variable rather than being hard-coded.
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

# Call the chat completion endpoint of the GPT model specifying:
# - The model version as "gpt-4-1106-preview".
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "user", "content": "Who won the world series in 2020?"}
    ]
)

# Print out the response from the model.
print(response.choices[0].message.content)

This is how you can integrate OpenAI's Chat Completions endpoint into a Python application for generating chatbot responses, having conversations, and more. Always remember to follow best practices when handling API keys and sensitive data.
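
On that note, network calls can fail. Here is a minimal sketch of defensive error handling using exception types exported by the openai package; the retry budget and exponential back-off are arbitrary choices, not an official recipe:

from openai import OpenAI, APIError, RateLimitError
import time

client = OpenAI()

def ask(question: str, retries: int = 3) -> str:
    # Send one user message; retry with exponential back-off on rate limits.
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4-1106-preview",
                messages=[{"role": "user", "content": question}],
            )
            return response.choices[0].message.content
        except RateLimitError:
            # Back off before retrying (1s, 2s, 4s, ...).
            time.sleep(2 ** attempt)
        except APIError as e:
            # Any other API-side failure: surface it to the caller.
            raise RuntimeError(f"OpenAI API error: {e}") from e
    raise RuntimeError("Still rate limited after several retries.")

print(ask("Who won the world series in 2020?"))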

Integration with the API (Rust)

Integrating OpenAI's Chat Completions API in Rust involves several steps, similar to the integration in Python. Here is a step-by-step guide to get started:

Step 1: Set up your Rust environment

Ensure that you have Rust installed on your system. If not, you can install it using rustup, which is the recommended tool for managing Rust versions and associated tools. You can follow the instructions at https://www.rust-lang.org/tools/install.

Step 2: Create a new Rust project

Create a new Rust project using Cargo, which is Rust’s build system and package manager:

cargo new openai_chat_integration
cd openai_chat_integration

Step 3: Add dependencies

You will need an HTTP client to make requests to the OpenAI API. One of the most popular HTTP clients in Rust is reqwest. You'll also need serde and serde_json for JSON serialization and deserialization. Add these dependencies to your Cargo.toml:

[dependencies]
reqwest = { version = "0.11", features = ["json"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tokio = { version = "1", features = ["full"] }
dotenv = "0.15" # To manage environment variables.

Step 4: Set up environment variable handling (optional)

It's good practice not to hard code sensitive information like API keys in your source code. Instead, use environment variables or another secure method of configuration.

Create a .env file in the root of your project and add your OpenAI API key:

OPENAI_API_KEY=your-api-key

Make sure you add .env to your .gitignore file to prevent accidentally committing it.

Step 5: Write the integration code

Now let's write some code in main.rs:

use serde::Serialize;
use serde_json::json;
use dotenv::dotenv;
use std::env;

#[derive(Serialize)]
struct Message {
    role: String,
    content: String,
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    dotenv().ok(); // Load .env file if available
    
    let api_key = env::var("OPENAI_API_KEY").expect("OPENAI_API_KEY must be set");
    
    let client = reqwest::Client::new();
    let conversation = vec![
        Message {
            role: "system".to_string(),
            content: "You are a helpful assistant.".to_string(),
        },
        Message {
            role: "user".to_string(),
            content: "Who won the world cup in 2018?".to_string(),
        },
    ];

    let response = client.post("https://api.openai.com/v1/chat/completions")
        .header("Authorization", format!("Bearer {}", api_key))
        .json(&json!({
            "model": "gpt-4-1106-preview",
            "messages": conversation,
        }))
        .send()
        .await?;

    let response_text = response.text().await?;
    
    println!("{}", response_text);
    
    Ok(())
}

This example uses async/await syntax, which is common in modern Rust for handling asynchronous operations such as network requests.

With this basic setup, you've integrated OpenAI's Chat Completions endpoint into a simple Rust application that sends messages to GPT-4 or another supported model. Note that the example prints the raw JSON body; in a real application you would deserialize it (for instance with serde) and extract choices[0].message.content, as the Python example does.

Always remember that proper error handling and security practices should be adhered to when dealing with external APIs and sensitive data like API keys.

Chat Completions streaming (OpenAI Library)

Now let's interact with the Chat Completions endpoint using a streaming approach, where you handle output incrementally instead of waiting for the full completion. The following Python code sends messages to the model (in this case, GPT-4) and receives the response as a stream.

from openai import OpenAI
import os

# Create a client; the API key is read from the OPENAI_API_KEY environment variable.
client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))

# Request a streamed response: the reply arrives as a sequence of "delta" events.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "user", "content": "Who is Charlemagne?"}
    ],
    stream=True
)

# Print each fragment of the response as it arrives.
for event in response:
    answer = event.choices[0].delta.content

    # The final event carries no content (None), so substitute an empty string.
    if answer is None:
        answer = ''
    print(answer, end='', flush=True)
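
A common variation is to accumulate the streamed deltas while printing them, so you also end up with the complete reply once the stream closes; a minimal sketch reusing the client from above:

# Collect the streamed fragments while printing them.
full_reply = []
stream = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Who is Charlemagne?"}],
    stream=True,
)

for event in stream:
    delta = event.choices[0].delta.content
    if delta:
        print(delta, end='', flush=True)
        full_reply.append(delta)

answer = ''.join(full_reply)  # the whole response once the stream ends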

At the time of writing, no official OpenAI library exists for Rust. It might be an amusing challenge, then, to attempt crafting a streaming solution without the aid of a library.

Chat Completions streaming with Rust

We're working in Rust without any extra tools or code libraries to help us out. Things can get tricky: OpenAI streams its response as server-sent events carrying JSON chunks, and a chunk sometimes arrives incomplete, so we have to buffer the data ourselves. We can expect this rough edge to be smoothed over eventually; in the meantime, dealing with it is a good way to learn how OpenAI sends messages piece by piece.

Here is the code that solves this:

// Supporting definitions the excerpt below relies on. The error type is
// assumed to be anyhow::Result; any compatible Result alias works.
use std::env;
use std::io::{self, Write};

use anyhow::Result;
use reqwest::Client;
use serde::Deserialize;
use serde_json::json;

const COMPLETION_URL: &str = "https://api.openai.com/v1/chat/completions";

// Minimal structures matching the fields we read from each streamed chunk.
#[derive(Deserialize)]
struct ChatCompletionChunk {
    choices: Vec<Choice>,
}

#[derive(Deserialize)]
struct Choice {
    delta: Delta,
}

#[derive(Deserialize)]
struct Delta {
    content: Option<String>,
}

/*
    Text Completion with ChatGPT OpenAI API (streaming)
    This function takes a question as input and streams the answer from the AI
    to stdout as it arrives.
*/
pub async fn ask_ai_streaming(question: &str) -> Result<()> {
    // Create a new HTTP client
    let client = Client::new();

    // Simplify request building with chained calls
    let mut response = client
        .post(COMPLETION_URL)
        .bearer_auth(env::var("OPENAI_API_KEY")?)
        .json(&json!({
            "model": "gpt-4-0613",
            "messages": [{"role": "user", "content": question}],
            "temperature": 0.7,
            "stream": true
        }))
        .send()
        .await?;

    // Buffer for incomplete chunks
    let mut buffer = String::new();

    // Read the response body as chunks
    while let Some(chunk) = response.chunk().await? {
        // Convert chunk bytes to string and add it to the buffer
        buffer.push_str(&String::from_utf8_lossy(&chunk));

        // Process each line separately within the buffered data
        while let Some(pos) = buffer.find("\n\n") {
            let line = &buffer[..pos]; // Get one line from the buffer

            if line == "data: [DONE]" {
                return Ok(());
            }

            // Parse the line as JSON
            if let Some(json_data) = line.strip_prefix("data: ") {
                match serde_json::from_str::<ChatCompletionChunk>(json_data) {
                    Ok(chat_chunk) => {
                        if let Some(choice) = chat_chunk.choices.get(0) {
                            if let Some(content) = &choice.delta.content {
                                print!("{}", content);
                                io::stdout().flush()?;
                            }
                        }
                    }
                    Err(e) => eprintln!("Error parsing JSON: {}", e),
                }
            }

            // Remove the processed line from the buffer including delimiter "\n\n"
            buffer.drain(..=pos + 1);
        }
    }

    Ok(())
}

If you are interested in running this project, you can check out the full code on my GitHub: https://github.com/claziosi/RustAI