Simple Bot using OpenAI API
Date:
This note builds a minimal Python “bot” (bot.py) that uses the OpenAI API to classify customer feedback as Negative or Positive, and—when negative—generate a preliminary, helpful response that a support agent could refine.
The goal is to demonstrate:
- a clean project setup (virtual environment, dependency install)
- safe API key handling (environment variables)
- the core message structure (system/developer vs user messages)
- practical controls (temperature/top-p concepts)
- a small, production-style Python script
Problem statement
Customer support teams often receive large volumes of feedback. We want a lightweight program that:
- Takes a feedback message as input.
- Classifies it as Negative or Positive.
- If Negative, produces a short preliminary solution or next-step suggestion (e.g., troubleshooting steps or escalation guidance).
- Outputs structured, predictable results.
Getting an OpenAI API key
- Sign in to the OpenAI developer platform and open the API Keys page.
- Create a new secret key.
- Store the key securely (do not hard-code it into source code).
Recommended: put it in an environment variable.
Project setup
Create a folder and files like this:
openai-cmd-bot/
  bot.py
  README.md  (optional)
Create a virtual environment (recommended)
macOS / Linux / WSL
python3 -m venv .venv
source .venv/bin/activate
python -m pip install --upgrade pip
Windows (PowerShell)
py -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
Install packages
Install the OpenAI Python SDK:
pip install openai
The official Python library uses the Responses API as the primary interface for text generation.
How OpenAI “messages” work (theory)
Roles: system/developer vs user
In modern OpenAI APIs, messages often use roles in a hierarchy:
- System / Developer: high-level instructions that define behavior and constraints (e.g., output format, safety, task definition).
- User: the actual end-user request or input text.
Think of it like this:
- System/Developer = “how you should behave”
- User = “what I want right now”
(Exact role naming can vary by API style, but the instruction hierarchy concept is consistent.)
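As a concrete sketch, the message list passed to the API has this shape (the content strings here are illustrative placeholders; bot.py below builds the real ones):

```python
# A sketch of the role-based message structure described above.
# The task definition lives in the system/developer message, so the
# end user's text cannot override the bot's behavior or output format.
messages = [
    {
        "role": "system",  # "developer" in some newer API styles
        "content": "You are a customer support triage assistant. Output JSON only.",
    },
    {
        "role": "user",
        "content": "Customer feedback:\nThe app keeps crashing when I open settings.",
    },
]

roles = [m["role"] for m in messages]
```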
Sampling controls: temperature, top_p, top_k (theory)
temperature
Controls randomness:
- Lower (e.g., 0.0–0.3) → more deterministic, consistent outputs
- Higher (e.g., 0.7–1.0) → more diverse, creative outputs
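To make the effect concrete, here is a toy sketch (not the provider's actual sampler) of how temperature rescales a probability distribution before a token is drawn:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw token scores to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # low T: top token dominates
hot = softmax_with_temperature(logits, 2.0)   # high T: distribution flattens
```

With a low temperature the most likely token absorbs nearly all the probability mass, which is why classification tasks like this bot's behave more consistently at low temperatures.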
top_p (nucleus sampling)
Instead of choosing from all tokens, the model considers only tokens in the smallest set whose total probability ≥ top_p.
- Lower top_p → more conservative
- Higher top_p → more variety
General best practice: tune temperature OR top_p, not both aggressively.
top_k
Some AI libraries (outside OpenAI’s standard parameters) use top_k, which means “only consider the top K most likely tokens.” OpenAI APIs commonly expose top_p rather than top_k. If you’ve seen top_k, it’s usually from other ecosystems.
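A toy sketch (again, not any provider's actual implementation) of how top_p and top_k each shrink the candidate set before sampling:

```python
def top_k_filter(probs, k):
    """Keep the indices of the k most likely tokens."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return ranked[:k]

def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability >= p."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in ranked:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= p:
            break
    return kept

probs = [0.55, 0.25, 0.12, 0.05, 0.03]  # already sorted for readability
```

Note that top_p adapts to the shape of the distribution (a confident model may keep only one token), while top_k always keeps a fixed number.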
Important: model compatibility
Some GPT-5 family models only support certain sampling parameters in specific modes; OpenAI's latest-model guidance notes these compatibility constraints and in many cases recommends reasoning/verbosity controls instead.
In this note, we keep the demo simple and avoid passing unsupported sampling fields for gpt-5-nano.
bot.py
Create a file named bot.py:
"""
A minimal customer feedback classifier using the OpenAI Responses API.
Features:
- Reads an API key from environment variables
- Classifies feedback as Positive/Negative
- If Negative, generates a brief preliminary support response
- Produces structured JSON output for reliability
Usage:
python bot.py "The app keeps crashing when I open settings"
"""
from __future__ import annotations
import json
import os
import sys
from dataclasses import dataclass
from typing import Any, Dict
from openai import OpenAI
@dataclass(frozen=True)
class Result:
sentiment: str
confidence: float
summary: str
suggested_reply: str | None
def to_dict(self) -> Dict[str, Any]:
return {
"sentiment": self.sentiment,
"confidence": self.confidence,
"summary": self.summary,
"suggested_reply": self.suggested_reply,
}
def _get_api_key() -> str:
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
raise RuntimeError(
"OPENAI_API_KEY is not set. "
"Set it as an environment variable and retry."
)
return api_key
def classify_feedback(client: OpenAI, text: str) -> Result:
"""Classify input feedback and optionally propose a preliminary reply."""
system_instructions = (
"You are a customer support triage assistant.\n"
"Task:\n"
"1) Classify the user's feedback as Positive or Negative.\n"
"2) Provide a 1-sentence summary of the feedback.\n"
"3) Provide a confidence score from 0.0 to 1.0.\n"
"4) If Negative, produce a short suggested support reply with steps.\n\n"
"Output MUST be valid JSON with exactly these keys:\n"
"{\n"
' \"sentiment\": \"Positive\" | \"Negative\",\n'
' \"confidence\": number,\n'
' \"summary\": string,\n'
' \"suggested_reply\": string | null\n'
"}\n"
"Do not output any extra keys or surrounding text."
)
user_message = f"Customer feedback:\n{text}"
response = client.responses.create(
model="gpt-5-nano",
input=[
{"role": "system", "content": system_instructions},
{"role": "user", "content": user_message},
],
)
output_text = response.output_text
try:
data = json.loads(output_text)
except json.JSONDecodeError as e:
raise RuntimeError(
"Model returned non-JSON output. "
"Try tightening the instruction or adding retry logic."
) from e
sentiment = str(data.get("sentiment", "")).strip()
confidence = float(data.get("confidence", 0.0))
summary = str(data.get("summary", "")).strip()
suggested_reply = data.get("suggested_reply", None)
if suggested_reply is not None:
suggested_reply = str(suggested_reply).strip() or None
if sentiment not in {"Positive", "Negative"}:
raise RuntimeError(f"Unexpected sentiment value: {sentiment!r}")
confidence = max(0.0, min(1.0, confidence))
return Result(
sentiment=sentiment,
confidence=confidence,
summary=summary,
suggested_reply=suggested_reply,
)
def main() -> int:
if len(sys.argv) < 2:
print('Usage: python bot.py "your feedback text here"')
return 2
feedback = " ".join(sys.argv[1:]).strip()
if not feedback:
print("Error: empty feedback text.")
return 2
client = OpenAI(api_key=_get_api_key())
result = classify_feedback(client, feedback)
print(json.dumps(result.to_dict(), indent=2, ensure_ascii=False))
return 0
if __name__ == "__main__":
raise SystemExit(main())
Set your API key (environment variable)
macOS / Linux / WSL
export OPENAI_API_KEY="YOUR_KEY_HERE"
For persistence, add an export line to ~/.bashrc or ~/.zshrc, as OpenAI's key-safety guidance recommends.
Windows (PowerShell)
setx OPENAI_API_KEY "YOUR_KEY_HERE"
Open a new terminal after setx so the variable is available.
Run the bot
python bot.py "The delivery arrived late and the packaging was damaged."
Example outputs
Example 1 (Negative)
Input:
The app keeps crashing when I open Settings. I already tried reinstalling twice.
Output (example):
{
  "sentiment": "Negative",
  "confidence": 0.91,
  "summary": "The user reports repeated app crashes when opening Settings despite reinstall attempts.",
  "suggested_reply": "Sorry about the crashes—please share your device model and OS version, then try clearing the app cache/data and updating to the latest build; if it still crashes, we can escalate with logs and a crash timestamp."
}
Example 2 (Positive)
Input:
Support resolved my issue quickly and the new update feels much faster.
Output (example):
{
  "sentiment": "Positive",
  "confidence": 0.86,
  "summary": "The user is happy with support and perceives performance improvements after the update.",
  "suggested_reply": null
}
Notes on production hardening (theory)
If you deploy this beyond a demo:
- Add retries for transient errors and strict JSON validation
- Add logging (but never log API keys)
- Store secrets in a secret manager (not in code)
- Add evaluation sets to track quality over time
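For illustration, the first two points could be combined into a small retry wrapper around the parse-and-validate step. The names here are hypothetical (not part of the OpenAI SDK); `call_model` stands in for any function that returns the model's raw text, such as a wrapper around `client.responses.create(...).output_text`:

```python
import json
import time

def parse_with_retries(call_model, max_attempts=3, backoff_seconds=0.0):
    """Call `call_model()` until it returns valid JSON with the expected keys.

    Retries on malformed JSON or missing keys, with a simple linear backoff.
    """
    required = {"sentiment", "confidence", "summary", "suggested_reply"}
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            data = json.loads(call_model())
            if not required.issubset(data):
                raise ValueError(f"missing keys: {required - set(data)}")
            return data
        except (json.JSONDecodeError, ValueError) as e:
            last_error = e
            time.sleep(backoff_seconds * attempt)  # back off between tries
    raise RuntimeError(f"gave up after {max_attempts} attempts") from last_error
```

Because the model call is injected as a function, this wrapper can be unit-tested with a stub that returns bad output first and valid JSON afterwards, without touching the network.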
