
Designing effective system and user prompts with the OpenAI SDK and LangChain prompt templates

prompts system-design openai langchain python

Prompt Engineering in Python

System Prompts: The Rules of the Game

The system prompt is where you define who the model is and how it should behave. Everything else flows from here.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a concise technical writer. No fluff. Use bullet points."},
        {"role": "user", "content": "Explain what Docker is."}
    ]
)

Without the system prompt, you get a generic essay. With it, you get exactly the format you asked for. Same model, wildly different output.
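Because forgetting the system message silently degrades output, it's worth wrapping message construction in a tiny helper. This is a hypothetical convenience function (not part of the OpenAI SDK) that always puts the system prompt first:

```python
def with_system(system: str, *user_messages: str) -> list[dict]:
    """Build a chat messages list with the system prompt guaranteed first.

    Hypothetical helper -- not part of the OpenAI SDK itself.
    """
    messages = [{"role": "system", "content": system}]
    messages += [{"role": "user", "content": m} for m in user_messages]
    return messages

msgs = with_system(
    "You are a concise technical writer. No fluff. Use bullet points.",
    "Explain what Docker is.",
)
# msgs is ready to pass as messages= to client.chat.completions.create
```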

Level 1: Simple Prompts

The basics — one system message, one user message:

messages = [
    {"role": "system", "content": "You are a helpful assistant that speaks like a pirate."},
    {"role": "user", "content": "How do I install Python?"}
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
# "Arrr! First ye need to sail to python.org and plunder the installer..."

Level 2: Template-Based Prompts

Hardcoding prompts gets painful fast. Use f-strings for simple cases:

def ask_about_topic(topic: str, style: str = "concise") -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"You are a {style} technical expert."},
            {"role": "user", "content": f"Explain {topic} to a senior developer."}
        ]
    )
    return response.choices[0].message.content

ask_about_topic("Kubernetes networking", style="detailed")

For more complex templates, LangChain's ChatPromptTemplate helps:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert in {domain}. Respond in {language}."),
    ("user", "{question}")
])

formatted = prompt.invoke({
    "domain": "distributed systems",
    "language": "English",
    "question": "What is the CAP theorem?"
})
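Under the hood, chat templating is mostly string substitution into (role, content) pairs. Here's a rough stdlib-only sketch of the idea (illustrative only, not LangChain's actual implementation):

```python
def format_chat(template: list[tuple[str, str]], **values) -> list[dict]:
    """Fill {placeholders} in each (role, content) pair.

    Illustrative sketch of what a chat prompt template does --
    not LangChain's real internals.
    """
    return [
        {"role": role, "content": content.format(**values)}
        for role, content in template
    ]

template = [
    ("system", "You are an expert in {domain}. Respond in {language}."),
    ("user", "{question}"),
]
messages = format_chat(
    template,
    domain="distributed systems",
    language="English",
    question="What is the CAP theorem?",
)
```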

Level 3: Few-Shot Prompting

Sometimes the best way to tell the model what you want is to show it. Few-shot prompting is exactly that — give examples, then ask.

messages = [
    {"role": "system", "content": "Convert natural language to SQL queries."},
    {"role": "user", "content": "Show me all users from Denver"},
    {"role": "assistant", "content": "SELECT * FROM users WHERE city = 'Denver';"},
    {"role": "user", "content": "Count active users by country"},
    {"role": "assistant", "content": "SELECT country, COUNT(*) FROM users WHERE active = true GROUP BY country;"},
    {"role": "user", "content": "Find the top 5 users by order count"}
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
# The model learns the pattern and generates correct SQL

The model picks up on patterns fast. Two or three examples are usually enough to nail the format.
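Writing the alternating user/assistant pairs by hand gets tedious, so it can help to build them from a list of (input, output) examples. A small hypothetical helper:

```python
def few_shot_messages(
    system: str, examples: list[tuple[str, str]], question: str
) -> list[dict]:
    """Turn (input, output) example pairs into alternating
    user/assistant messages, then append the real question.

    Hypothetical helper, not a library function.
    """
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": question})
    return messages

examples = [
    ("Show me all users from Denver",
     "SELECT * FROM users WHERE city = 'Denver';"),
    ("Count active users by country",
     "SELECT country, COUNT(*) FROM users WHERE active = true GROUP BY country;"),
]
msgs = few_shot_messages(
    "Convert natural language to SQL queries.",
    examples,
    "Find the top 5 users by order count",
)
```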

Level 4: LangChain Prompt Composition

LangChain lets you compose complex prompts from parts:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a senior code reviewer. 
    Focus on: {focus_areas}
    Severity levels: critical, warning, info"""),
    MessagesPlaceholder("history"),  # inject conversation history
    ("user", "Review this code:\n```{language}\n{code}\n```")
])

formatted = prompt.invoke({
    "focus_areas": "security, performance",
    "history": [],
    "language": "python",
    "code": "password = input('Enter password: ')"
})

MessagesPlaceholder is key — it lets you inject chat history or few-shot examples dynamically.
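In plain Python terms, a placeholder is just a slot where a message list gets spliced in at render time. A rough sketch of that mechanic (illustrative only; MessagesPlaceholder does this for you in LangChain):

```python
def render_with_history(system: str, history: list[dict], user: str) -> list[dict]:
    """Splice prior turns between the system prompt and the new user message.

    Illustrative stand-in for what MessagesPlaceholder does at render time.
    """
    return (
        [{"role": "system", "content": system}]
        + list(history)
        + [{"role": "user", "content": user}]
    )

history = [
    {"role": "user", "content": "Review this name: getData"},
    {"role": "assistant", "content": "info: prefer a descriptive name like fetch_user_data."},
]
msgs = render_with_history(
    "You are a senior code reviewer.", history, "Review: password = input()"
)
```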

Level 5: Chaining Prompts

Sometimes one prompt isn't enough. Chain them — output of one feeds into the next:

# Step 1: Generate a plan
plan_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Break down tasks into numbered steps."},
        {"role": "user", "content": "How do I set up a Python REST API with FastAPI?"}
    ]
)
plan = plan_response.choices[0].message.content

# Step 2: Expand on each step
detail_response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a Python expert. Provide code examples for each step."},
        {"role": "user", "content": f"Expand on this plan with code:\n{plan}"}
    ]
)

This is the Chain Pattern — same concept from the Spring AI agents page, just in Python.
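The two-step flow above generalizes to any number of steps. Here's a sketch of a generic chain runner with an injectable model call (`llm` is any function from a messages list to text, which also makes it easy to stub out in tests; the helper itself is hypothetical):

```python
from typing import Callable

def run_chain(
    llm: Callable[[list[dict]], str],
    steps: list[tuple[str, str]],
    initial_input: str,
) -> str:
    """Run prompts in sequence, feeding each output into the next.

    Each step is a (system_prompt, user_template) pair; the {prev}
    placeholder in the template receives the previous step's output.
    Hypothetical helper sketching the Chain Pattern.
    """
    prev = initial_input
    for system, user_template in steps:
        prev = llm([
            {"role": "system", "content": system},
            {"role": "user", "content": user_template.format(prev=prev)},
        ])
    return prev

# Usage with the OpenAI client might look like:
# def call(messages):
#     return client.chat.completions.create(
#         model="gpt-4o", messages=messages
#     ).choices[0].message.content
#
# result = run_chain(call, [
#     ("Break down tasks into numbered steps.", "{prev}"),
#     ("You are a Python expert. Provide code examples for each step.",
#      "Expand on this plan with code:\n{prev}"),
# ], "How do I set up a Python REST API with FastAPI?")
```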

What to Remember

  1. System prompts define behavior — spend time on them, they're the highest leverage input
  2. Few-shot examples beat long instructions — showing is better than telling
  3. Keep prompts focused — one job per prompt, chain if needed
  4. Use templates when prompts get dynamic — f-strings for simple, LangChain for complex
  5. Test your prompts — small wording changes can dramatically change output quality