Span Types

Span types for the Nora Observability Python SDK.
The span_type parameter categorizes operations. Types are case-insensitive and converted to lowercase internally.
Available Span Types
Decision-Generating Types:
llm: Language model calls
rag: Retrieval-augmented generation
tool: Tool or function execution
select: Selection from options
retrieval: Data retrieval
Reference Types:
workflow: End-to-end process
agent: Agent-specific operations
router: Routing logic
policy_eval: Policy evaluation

Custom types are also allowed; see the sketch below.
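A brief sketch of passing span_type with the decorator used throughout this page; the function names and the custom type value are illustrative, not part of the SDK:

@nora_client.trace(span_type="llm", name="Generate")
def generate(prompt):
    ...

# Case-insensitive: "LLM" is normalized to "llm" internally.
@nora_client.trace(span_type="LLM", name="GenerateAlt")
def generate_alt(prompt):
    ...

# Custom types are accepted (and lowercased) as well.
@nora_client.trace(span_type="reranker", name="RerankResults")
def rerank_results(candidates):
    ...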
Decision Generation Requirements
A decision represents a choice made during execution. Functions must return data in specific formats to generate decisions.
For rag and tool types:
# Works the same for span_type="tool"; the function name is illustrative.
@nora_client.trace(span_type="rag", name="RetrieveCandidates")
def retrieve_candidates(query):
    return {
        "options": [
            {"content": "Option 1 text", "score": 0.95},
            {"content": "Option 2 text", "score": 0.87},
            {"content": "Option 3 text"}
        ]
    }

The score field is optional.
For select type:
@nora_client.trace(span_type="select", name="SelectBest")
def select_best(documents):
# documents parameter becomes options automatically
return {
"content": "Selected document text",
"score": 0.92
}

Input should be a list of options. Output should be the selected option with content and an optional score.
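A hedged usage sketch: passing the options as content/score dicts is an assumption here, since the page only says the input should be a list of options.

# Hypothetical candidate list; the exact accepted shape is not confirmed above.
candidates = [
    {"content": "First candidate document", "score": 0.88},
    {"content": "Second candidate document", "score": 0.61}
]
best = select_best(candidates)  # the list becomes options; the return value is the selected option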
For llm type:
When tools are provided to an LLM call, they automatically become options. The tool actually invoked becomes the selected option. No manual formatting is needed.
# Function tools for the Responses API are defined flat (no nested "function" key).
tools = [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get weather information",
        # Minimal JSON Schema; the property names are illustrative.
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"]
        }
    },
    {
        "type": "function",
        "name": "search_database",
        "description": "Search database",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"]
        }
    }
]
response = client.responses.create(
model="gpt-5",
input=[
{
"role": "user",
"content": "What's the weather in Seoul?"
}
],
tools=tools,
tool_choice="auto"
)

This automatically generates:
options:
[{"content": "get_weather", "score": null}, {"content": "search_database", "score": null}]

selected_option:
[{"content": "get_weather", "score": null}]
For retrieval type:
No specific format is required for general retrieval. To generate decisions, return the same options format used for rag.
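For instance, a minimal sketch of a retrieval span that emits a decision by reusing the rag options format; the function name and document contents are illustrative:

@nora_client.trace(span_type="retrieval", name="SearchDocs")
def search_docs(query):
    # Returning the options format makes this span decision-generating.
    return {
        "options": [
            {"content": "Doc A excerpt", "score": 0.91},
            {"content": "Doc B excerpt", "score": 0.74}
        ]
    }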
Understanding Execution Spans and Decisions
Execution Span:
An execution span records the execution of a function decorated with @nora_client.trace(). It captures input, output, duration, and errors. Execution spans are only created inside trace groups.
Decision:
A decision is extracted from an execution span when the span represents a choice between options. Decisions track which option was selected during execution. Examples include which tool an LLM chose or which document was selected from a RAG retrieval.
Decisions require the input/output format described above.
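To make the trace-group requirement concrete, a minimal sketch using the decorator and trace-group APIs shown on this page; the function names are illustrative:

@nora_client.trace(span_type="tool", name="Tokenize")
def tokenize(text):
    return text.split()

@nora_client.trace_group(name="TokenizeDemo")
def demo(text):
    # Inside a trace group: recorded as an execution span.
    return tokenize(text)

tokenize("no group")  # Outside any trace group: no execution span is created.
demo("hello world")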
Structure:
trace_group
├─ execution_span (function execution)
│ └─ decision (if applicable)
├─ execution_span
│ └─ decision
└─ llm_call (automatically tracked)

RAG Example
import nora
from openai import OpenAI
nora_client = nora.init(api_key="your-nora-api-key")
client = OpenAI(api_key="your-openai-key")
@nora_client.trace(span_type="retrieval", name="DocumentRetrieval")
def retrieve_documents(query):
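    # A plain list is recorded as an execution span only; returning
    # {"options": [...]} (the rag format) would also generate a decision.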
documents = [
"AI is a branch of computer science focused on creating intelligent machines.",
"Machine learning is a subset of AI that enables systems to learn from data.",
"Deep learning uses neural networks with multiple layers."
]
return documents
@nora_client.trace(span_type="llm", name="GenerateAnswer")
def generate_answer(query, context):
context_text = "\n".join(context)
response = client.responses.create(
model="gpt-5",
input=[
{
"role": "system",
"content": f"Context:\n{context_text}"
},
{
"role": "user",
"content": query
}
]
)
return response.output_text
@nora_client.trace_group(name="RAG_Pipeline", metadata={"type": "WORKFLOW"})
def rag_pipeline(query):
documents = retrieve_documents(query)
answer = generate_answer(query, documents)
return answer
result = rag_pipeline("What is machine learning?")
print(result)

Routing and Policy Evaluation
import nora
from openai import OpenAI
nora_client = nora.init(api_key="your-nora-api-key")
client = OpenAI(api_key="your-openai-key")
@nora_client.trace(span_type="router", name="IntentRouter")
def route_by_intent(user_message):
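    # router is listed as a reference type above, so this span records the
    # routing step itself rather than a decision with options.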
response = client.responses.create(
model="gpt-5",
input=[
{
"role": "system",
"content": "Classify intent. Reply with one of: support, sales, general."
},
{
"role": "user",
"content": user_message
}
]
)
intent = response.output_text.strip().lower()
return intent
@nora_client.trace(span_type="policy_eval", name="ContentPolicy")
def check_content_policy(message):
response = client.responses.create(
model="gpt-5",
input=[
{
"role": "system",
"content": "Check if the message violates policy. Reply with: safe or unsafe."
},
{
"role": "user",
"content": message
}
]
)
result = response.output_text.strip().lower()
return result == "safe"
@nora_client.trace(span_type="agent", name="SupportAgent")
def support_agent(message):
return "Support team will contact you."
@nora_client.trace(span_type="agent", name="SalesAgent")
def sales_agent(message):
return "Let me help you with pricing."
@nora_client.trace_group(name="SmartRouter")
def smart_routing(user_message):
is_safe = check_content_policy(user_message)
if not is_safe:
return "Message flagged by content policy."
intent = route_by_intent(user_message)
if intent == "support":
return support_agent(user_message)
elif intent == "sales":
return sales_agent(user_message)
else:
return "How can I help you today?"
result = smart_routing("I need help with my account")
print(result)

Complex Workflow Example
import nora
from openai import OpenAI
nora_client = nora.init(api_key="your-nora-api-key")
client = OpenAI(api_key="your-openai-key")
@nora_client.trace(span_type="retrieval", name="FetchUserHistory")
def fetch_user_history(user_id):
return {
"user_id": user_id,
"recent_queries": ["AI trends", "ML frameworks"],
"preferences": ["technical", "detailed"]
}
@nora_client.trace(span_type="retrieval", name="SearchKnowledgeBase")
def search_knowledge_base(query, context):
return [
"Document about AI trends 2024",
"Latest ML frameworks comparison"
]
@nora_client.trace(span_type="llm", name="SynthesizeResponse")
def synthesize_response(query, documents, user_context):
prompt = f"""
User preferences: {user_context['preferences']}
Recent queries: {user_context['recent_queries']}
Documents:
{chr(10).join(documents)}
Question: {query}
"""
response = client.responses.create(
model="gpt-5",
input=[
{
"role": "user",
"content": prompt
}
],
max_output_tokens=300
)
return response.output_text
@nora_client.trace(span_type="tool", name="FormatMarkdown")
def format_as_markdown(text):
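    # Returns a plain string rather than the options format, so this tool
    # span records execution without generating a decision.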
return f"# AI Assistant Response\n\n{text}"
@nora_client.trace_group(name="PersonalizedQA_Workflow", metadata={"type": "WORKFLOW"})
def personalized_qa(user_id, query):
user_context = fetch_user_history(user_id)
documents = search_knowledge_base(query, user_context)
response = synthesize_response(query, documents, user_context)
formatted = format_as_markdown(response)
return formatted
result = personalized_qa("user_123", "What are the latest AI trends?")
print(result)