
Building practical agents in Python with OpenAI tools and LangChain agent executors

agents openai langchain tools routing python

Agentic Patterns in Python

What is an agent here?

A normal prompt call returns one answer. An agent can decide its own steps, call tools, and keep going until the task is done.

Does every use case need an agent? No. If a simple chain works, use that first.

Pattern 1: Chain (simple and predictable)

from openai import OpenAI

client = OpenAI()

def run_chain(user_input: str) -> str:
    # Step 1: classify the request into a one-word category.
    step1 = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Classify this request into one word."},
            {"role": "user", "content": user_input}
        ]
    ).choices[0].message.content

    # Step 2: answer as a specialist in that category.
    step2 = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"You are a {step1} expert."},
            {"role": "user", "content": user_input}
        ]
    ).choices[0].message.content

    return step2

Good when the flow is known upfront.

Pattern 2: Router (my favorite for production)

One model decides who should handle the request.

SPECIALISTS = {
    "billing": "You are a billing support specialist.",
    "technical": "You are a technical support specialist.",
    "general": "You are a general assistant."
}

def route_request(text: str) -> str:
    route = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Return only one label: billing, technical, or general."},
            {"role": "user", "content": text}
        ]
    ).choices[0].message.content.strip().lower()

    # Unknown or malformed labels fall back to the general assistant.
    prompt = SPECIALISTS.get(route, SPECIALISTS["general"])

    return client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": text}
        ]
    ).choices[0].message.content

Same routing idea as the Spring AI version, just SDK-native Python.
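One failure mode worth handling explicitly: router models sometimes return `Billing.` or `"technical"` instead of the bare label. A small sketch of a normalizer you can run before the dict lookup (the helper name is mine, not from any SDK):

```python
# Hypothetical helper: coerce whatever the router model returns
# into a known label before dispatch.
VALID_ROUTES = {"billing", "technical", "general"}

def normalize_route(raw: str) -> str:
    """Strip whitespace and stray punctuation, lowercase, validate."""
    label = raw.strip().strip(".!\"'").lower()
    return label if label in VALID_ROUTES else "general"
```

Cheap insurance: the fallback to "general" means a weird label degrades gracefully instead of crashing the dispatch.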

Pattern 3: Tool-Using Agent (LangChain)

from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate

@tool
def get_order_status(order_id: str) -> str:
    """Get order status by order id."""
    return f"Order {order_id} is in transit"

llm = ChatOpenAI(model="gpt-4o")
tools = [get_order_status]

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a support assistant. Use tools when needed."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}")
])

agent = create_tool_calling_agent(llm, tools, prompt)
# max_iterations caps the tool loop (see guardrails below)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True, max_iterations=5)

result = executor.invoke({"input": "Track order A123"})
print(result["output"])

Pattern 4: OpenAI Assistants API (managed agent)

If you want OpenAI to manage the tool orchestration itself, use the Assistants API: it hosts the threads, runs, and tool dispatch for you. More managed, less custom.

Use it when you want quick setup and hosted orchestration. Use raw SDK/LangChain when you want full control.

Guardrails (keep agents from going weird)

  1. Keep tool descriptions very clear
  2. Limit tool set per agent
  3. Add max step/iteration limits
  4. Log tool calls and responses
  5. Validate tool args before execution
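Guardrails 3 and 5 are easy to sketch in plain Python. Everything below (the schema dict, the function names) is illustrative, not tied to any framework:

```python
# Sketch of guardrails 3 and 5: a hard step cap plus argument
# validation before any tool runs.
MAX_STEPS = 5

# Expected argument names and types per tool (illustrative).
TOOL_SCHEMAS = {
    "get_order_status": {"order_id": str},
}

def validate_args(tool_name: str, args: dict) -> bool:
    """Reject unknown tools, missing/extra keys, and wrong types."""
    schema = TOOL_SCHEMAS.get(tool_name)
    if schema is None:
        return False
    if set(args) != set(schema):
        return False
    return all(isinstance(args[k], t) for k, t in schema.items())

def run_with_limit(steps):
    """Execute at most MAX_STEPS validated tool calls, skip the rest."""
    executed = []
    for i, (tool_name, args) in enumerate(steps):
        if i >= MAX_STEPS:
            break
        if validate_args(tool_name, args):
            executed.append(tool_name)
    return executed
```

The point is that both checks happen outside the model: the model proposes, your code disposes.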

What to Remember

  1. Agent = model + tools + loop
  2. Router pattern is usually the best first agentic move
  3. LangChain reduces boilerplate when orchestration gets complex
  4. Don't start with mega-agents — start with focused specialists
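To make point 1 concrete, here's a minimal loop with a stubbed model, so the model + tools + loop shape is visible without any API calls (the stub and helper names are mine):

```python
# Agent = model + tools + loop, with the model replaced by a stub
# that returns one tool call and then a final answer.
def get_order_status(order_id: str) -> str:
    return f"Order {order_id} is in transit"

TOOLS = {"get_order_status": get_order_status}

def fake_model(history):
    # Stand-in for an LLM: ask for the tool once, then finish.
    if not any(role == "tool" for role, _ in history):
        return {"tool": "get_order_status", "args": {"order_id": "A123"}}
    return {"final": history[-1][1]}

def agent_loop(user_input: str, max_steps: int = 5) -> str:
    history = [("user", user_input)]
    for _ in range(max_steps):
        decision = fake_model(history)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append(("tool", result))
    return "Stopped: step limit reached"
```

Swap `fake_model` for a real chat-completions call and you have the skeleton every pattern above builds on.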