LangGraph
1. Initial Setup
import os

import nora

# Set API key via environment variable or directly
os.environ["OPENAI_API_KEY"] = "your-openai-key"

# Initialize Nora
nora_client = nora.init(
    api_key="your-nora-api-key",
    environment="langgraph-test"
)
2. Define LangGraph State
Define the state shared by the LLM/agent nodes as a TypedDict.
from typing import TypedDict, List

from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    messages: List[BaseMessage]
The messages list can include HumanMessage, AIMessage, and SystemMessage.
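For illustration, an initial state mixing the three types might look like this (plain langchain_core message classes; the contents are arbitrary):

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

state: AgentState = {
    "messages": [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="Hi!"),
        AIMessage(content="Hello! How can I help?"),
    ]
}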
3. Define Nodes
3-1. Preprocessing Node
from langchain_core.messages import SystemMessage

def preprocessing_node(state: AgentState):
    """Always add a system message as the first message"""
    # Defined as a sync node so the graph works with both invoke and ainvoke
    messages = state["messages"]
    if not messages or not isinstance(messages[0], SystemMessage):
        messages.insert(0, SystemMessage(content="You are a helpful assistant."))
    return {"messages": messages}
3-2. Agent Node
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

def agent_node(state: AgentState):
    """Call the LLM"""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}
3-3. Postprocessing Node
def postprocessing_node(state: AgentState):
    """Post-process the result messages"""
    # Placeholder: passes the state through unchanged; add formatting or
    # validation of the final messages here if needed
    return state
4. Configure Graph
from langgraph.graph import StateGraph, END
graph = StateGraph(AgentState)
graph.add_node("preprocessing", preprocessing_node)
graph.add_node("agent", agent_node)
graph.add_node("postprocessing", postprocessing_node)
graph.set_entry_point("preprocessing")
graph.add_edge("preprocessing", "agent")
graph.add_edge("agent", "postprocessing")
graph.add_edge("postprocessing", END)
agent = graph.compile()
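As a quick sanity check of the wiring, the compiled graph can be rendered; a minimal sketch, assuming your LangGraph version provides the get_graph()/draw_mermaid() helpers:

# Print the compiled graph topology as a Mermaid diagram
print(agent.get_graph().draw_mermaid())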
Important: Nodes and edges are automatically tracked by Nora, so you only need to create a Trace Group at the point of agent execution.
5. Execute Agent with Trace Group (invoke/ainvoke)
5-1. Synchronous Execution
from langchain_core.messages import HumanMessage

with nora_client.trace_group(name="StreamingAgent"):
    result = agent.invoke(
        {"messages": [HumanMessage(content="Hello, LangGraph!")]}
    )

# The last message in the state is the model's reply
print(result["messages"][-1].content)
5-2. Asynchronous Execution
import asyncio

async def async_run():
    async with nora_client.trace_group(name="StreamingAgentAsync"):
        result = await agent.ainvoke(
            {"messages": [HumanMessage(content="Hello async LangGraph!")]}
        )
    print(result["messages"][-1].content)

asyncio.run(async_run())
A Trace Group only needs to be created at the point of agent invoke/ainvoke; LLM calls and message processing inside nodes are tracked automatically.
Streaming events are also fully captured when executed inside a Trace Group, as in the sketch below.
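For example, a streamed run inside a Trace Group (a minimal sketch; the trace-group name is arbitrary, and stream() uses LangGraph's default update-per-node mode):

with nora_client.trace_group(name="StreamingAgentStream"):
    # stream() yields a state update after each node finishes
    for chunk in agent.stream(
        {"messages": [HumanMessage(content="Hello, streaming LangGraph!")]}
    ):
        print(chunk)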
6. Example: Tool Calling
def get_weather(location: str) -> str:
    return f"Weather in {location}: Sunny, 22°C"

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"]
            }
        }
    }
]
from langchain_core.messages import AIMessage

def agent_with_tools_node(state: AgentState):
    from openai import OpenAI
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    # Map LangChain message types to OpenAI chat roles
    role_map = {"human": "user", "ai": "assistant", "system": "system"}
    messages = [
        {"role": role_map.get(m.type, m.type), "content": m.content}
        for m in state["messages"]
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        tools=tools,
        tool_choice="auto",
        max_tokens=100
    )
    reply = response.choices[0].message
    # Wrap the OpenAI reply in an AIMessage so it matches AgentState,
    # keeping any requested tool calls in additional_kwargs
    tool_calls = [tc.model_dump() for tc in (reply.tool_calls or [])]
    return {"messages": [AIMessage(content=reply.content or "",
                                   additional_kwargs={"tool_calls": tool_calls})]}
Execution with Trace Group:
with nora_client.trace_group(name="StreamingAgentWithTools"):
    result = agent.invoke(
        {"messages": [HumanMessage(content="What's the weather in Seoul?")]}
    )
Again, the Trace Group only needs to wrap the agent execution; node-level LLM and tool calls are tracked automatically.
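Note that this example only requests a tool call; it never runs get_weather. One way to close the loop is an extra node that executes the requested call and appends the result. A minimal sketch under this guide's simple state (the tool_node name is hypothetical, and for brevity the result is appended as a plain AIMessage rather than replayed to the model as an OpenAI tool message):

import json

def tool_node(state: AgentState):
    """Run any tool calls requested by the last AI message"""
    messages = list(state["messages"])
    last = messages[-1]
    for call in last.additional_kwargs.get("tool_calls", []):
        if call["function"]["name"] == "get_weather":
            args = json.loads(call["function"]["arguments"])
            messages.append(AIMessage(content=get_weather(**args)))
    return {"messages": messages}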