Getting Started with Python AI
How do I get started?
Same philosophy as the Java side — just start. But here's a roadmap.
**1. Install the SDK**

```shell
pip install openai
```

That's your foundation. Everything else builds on this.
**2. Get an API key**

Head to platform.openai.com, create a key, and set it as an environment variable:

```shell
export OPENAI_API_KEY="sk-..."
```

Or on Windows (PowerShell):

```shell
$env:OPENAI_API_KEY = "sk-..."
```

The SDK picks it up automatically. No hardcoding keys in code — ever.
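Once the variable is set, you can sanity-check that Python actually sees it. This is just a standalone check, not part of the SDK:

```python
import os

def api_key_present() -> bool:
    # The SDK reads OPENAI_API_KEY from the environment automatically,
    # so this is the same lookup the client will do.
    return os.environ.get("OPENAI_API_KEY") is not None

print("API key set:", api_key_present())
```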
**3. Make your first call**

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What's the capital of Nepal?"}
    ]
)

print(response.choices[0].message.content)
# "The capital of Nepal is Kathmandu."
```

That's it. You just called an LLM from Python.
Understanding the Message Structure
Every OpenAI call takes a list of messages. Each message has a role and content. Three roles matter:
| Role | Purpose |
|---|---|
| `system` | Sets the persona and rules — the model follows these instructions |
| `user` | The human's input |
| `assistant` | The model's previous responses (for multi-turn conversations) |
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful librarian who recommends books."},
        {"role": "user", "content": "I liked The Metamorphosis. What should I read next?"}
    ]
)
```
The system message shapes how the model responds. The user message is what it responds to. Simple, but powerful.
Progressive Complexity
Just like the Spring AI side, let's walk through the levels:
Level 1: Basic prompt → response
```python
messages = [
    {"role": "user", "content": "Hey!"}
]
# Response: "Hello! How can I help you today?"
```
Level 2: Add context via system prompt
```python
messages = [
    {"role": "system", "content": "The user's name is Sky. Always greet them by name."},
    {"role": "user", "content": "Hey!"}
]
# Response: "Hey Sky! What can I do for you?"
```
Level 3: Inject domain knowledge (proto-RAG)
```python
books_context = "Available books: The Metamorphosis by Kafka, 1984 by Orwell, Dune by Herbert"

messages = [
    {"role": "system", "content": f"You help users find books. Here's our catalog: {books_context}"},
    {"role": "user", "content": "Do you have anything by Kafka?"}
]
# Response: "Yes! We have The Metamorphosis by Kafka."
```
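Real RAG narrows the context first: retrieve only the catalog entries relevant to the question, then inject those into the system prompt. A toy keyword matcher shows the shape (this `retrieve` helper is illustrative, not a library function):

```python
import re

def retrieve(catalog, query):
    """Naive retrieval: keep entries that share a meaningful word with the query."""
    def words(text):
        # Lowercase word set, skipping short filler words like "by" or "do".
        return {w for w in re.findall(r"\w+", text.lower()) if len(w) > 2}
    q = words(query)
    return [entry for entry in catalog if q & words(entry)]

catalog = ["The Metamorphosis by Kafka", "1984 by Orwell", "Dune by Herbert"]
books_context = ", ".join(retrieve(catalog, "Do you have anything by Kafka?"))
# books_context is now just "The Metamorphosis by Kafka" — only the relevant entry.
```

A vector store does this matching by embedding similarity instead of shared keywords, but the pipeline — retrieve, then inject — is the same.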
Level 4: Multi-turn conversation
```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My name is James."},
    {"role": "assistant", "content": "Nice to meet you, James!"},
    {"role": "user", "content": "What's my name?"}
]
# Response: "Your name is James."
```
Notice: you're managing the conversation history yourself. The model doesn't remember anything — you pass the full message list each time. This is the fundamental difference from a stateful chat app.
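A minimal pattern for managing that history yourself: append each turn before the next call. Here `call_model` is a stand-in for whatever wraps `client.chat.completions.create`:

```python
def chat_turn(history, user_text, call_model):
    """Append the user's message, call the model with the full history, record the reply."""
    history.append({"role": "user", "content": user_text})
    # With the real SDK, call_model would be something like:
    #   lambda msgs: client.chat.completions.create(
    #       model="gpt-4o", messages=msgs).choices[0].message.content
    reply = call_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful assistant."}]
```

Every call sends the whole `history` list, which is also why long conversations cost more tokens over time.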
Streaming Responses
For longer responses, streaming gives you tokens as they're generated instead of waiting for the full response:
```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about Python programming"}],
    stream=True
)

for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)
```
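If you also want the complete text once streaming finishes, collect the deltas as they arrive. A small helper (illustrative, not part of the SDK):

```python
def stream_text(stream):
    """Yield the non-empty text deltas from a chat-completions stream."""
    for chunk in stream:
        content = chunk.choices[0].delta.content
        if content:  # some chunks (e.g. the final one) carry no content
            yield content

# full = "".join(stream_text(stream)) gives the whole response afterwards.
```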
Async Support
The SDK has first-class async support. If you're building a web app with FastAPI or similar:
```python
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def ask(question: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content
```
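The payoff of async is concurrency: several questions in flight at once instead of one after another. A sketch using `asyncio.gather`, where `ask` is any coroutine shaped like the one above:

```python
import asyncio

async def ask_many(questions, ask):
    """Run one ask() per question concurrently; answers come back in input order."""
    return await asyncio.gather(*(ask(q) for q in questions))

# In a plain script (outside FastAPI's event loop) you'd run:
#   answers = asyncio.run(ask_many(["Q1", "Q2"], ask))
```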
Now What?
You've got the basics. From here, the paths branch:
- Tools/Function calling — let the model call your functions
- Structured output — get JSON back instead of prose
- RAG — ground responses in your own data
- Agents — let the model plan and execute multi-step tasks
Each of these builds on what you just learned. The message list is always the foundation.
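As a taste of the structured-output path: you constrain the model to emit JSON (the SDK supports `response_format={"type": "json_object"}` on `chat.completions.create`) and parse the reply instead of reading prose. The reply string below is an illustrative shape, not a real API response:

```python
import json

def parse_book(reply_text):
    """Parse a model reply that was constrained to JSON."""
    data = json.loads(reply_text)
    return data["title"], data["author"]

# Example of what a JSON-constrained reply might look like:
reply = '{"title": "Dune", "author": "Frank Herbert"}'
print(parse_book(reply))
```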
Installing LangChain (When You Need It)
You don't need LangChain to start. But when you want RAG, agents, or complex chains:
```shell
pip install langchain langchain-openai langchain-community
```
LangChain wraps the same OpenAI calls you just learned, but adds orchestration layers on top. Learn the raw SDK first, then LangChain will make a lot more sense.