Understanding Task Results
Learn how to interpret Task outputs, analyse execution data, and understand performance metrics.
Response Structure
Complete Response Format
After running a task, you receive a comprehensive response containing:
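As a rough illustration, the response can be thought of as a structure like the following. Only the `response`, `input_processor_timing`, and `charged_credits` field names appear in this guide; every other name here is an illustrative assumption, not the actual API schema.

```python
# Hypothetical sketch of a task run response. Field names other than
# "response", "input_processor_timing", and "charged_credits" are assumptions.
task_result = {
    "task_id": "task_abc123",       # assumed name: identifier of the Task definition
    "revision_id": 4,               # assumed name: revision that processed this run
    "run_id": "run_xyz789",         # assumed name: identifier of this execution
    "response": "The model's output text.",
    "usage": {                      # assumed name: token consumption
        "input_tokens": 1200,
        "output_tokens": 350,
    },
    "input_processor_timing": 180,  # time spent preparing inputs, e.g. in ms
    "charged_credits": 2.4,         # credits consumed by this run
    "metadata": {"model": "example-model"},
}

# The model output lives in the "response" field; everything else is
# supporting execution data.
print(task_result["response"])
```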
Key Response Components
Your model output is contained in the response field; the rest of the task response carries supporting information such as performance metrics and metadata.
The response is organised into a response section, run data, performance metrics, and metadata.
Core Identifiers
- The unique identifier of the Task definition
- The specific revision that processed this run
- A unique identifier for this specific execution
Performance Metrics
Token Usage Analysis
Understanding token consumption in your results:
What to Look For
- Consistent token usage across similar inputs.
- Unexpected spikes in token consumption.
- Patterns that indicate optimisation opportunities.
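A simple statistical check can surface the unexpected spikes described above. This is a minimal sketch assuming you have already collected per-run token totals yourself; the numbers are illustrative.

```python
from statistics import mean, stdev

# Hypothetical per-run total token counts gathered from task results.
token_counts = [1540, 1555, 1560, 1548, 2900, 1552]

# Flag runs whose usage sits more than two standard deviations above the
# mean — a crude but effective spike detector for similar inputs.
avg, sd = mean(token_counts), stdev(token_counts)
spikes = [(i, n) for i, n in enumerate(token_counts) if n > avg + 2 * sd]
print(spikes)
```

Consistent inputs should produce an empty list; any entry it returns is a run worth inspecting for prompt or input anomalies.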
Timing Breakdown
Each execution provides timing metrics for three phases: input processing, LLM processing, and total response time.
Input Processing Time (input_processor_timing) includes:
- URL fetching duration
- Document extraction time
- Image preprocessing
- RAG context retrieval
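The three timing phases can be related with simple arithmetic. In this sketch, only `input_processor_timing` is named by this guide; the other two field names are assumptions, and the values are illustrative milliseconds.

```python
# Hypothetical timing fields, in milliseconds. "input_processor_timing" comes
# from this guide; "llm_timing" and "total_timing" are assumed names.
timings = {
    "input_processor_timing": 180,  # URL fetch, extraction, preprocessing, RAG
    "llm_timing": 2400,             # assumed: time spent in the model call
    "total_timing": 2650,           # assumed: end-to-end response time
}

# Whatever is left over is overhead outside input processing and the model
# call (queueing, serialisation, network transfer).
overhead = (
    timings["total_timing"]
    - timings["input_processor_timing"]
    - timings["llm_timing"]
)
print(f"overhead: {overhead} ms")
```

Breaking the total down this way shows where to optimise: a large input-processing share points at document or URL handling, while a dominant LLM share points at the prompt or model choice.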
Cost Tracking
Each execution shows credits consumed (charged_credits):
🧮 Cost Factors
- Model selection - higher-end models consume more credits per token
- Token count - both input and output tokens contribute to cost
⚙️ Optimisation
- Compare model performance vs. credit consumption
- Simplify prompts to minimise unnecessary tokens
- Reduce output verbosity where possible
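One way to act on the first point above is to compare each model's measured quality against the credits it consumes. This is a hedged sketch: the model names, credit figures, and quality scores are invented, and `charged_credits` is the only field name taken from this guide.

```python
# Hypothetical per-model results: credits charged per run and a quality
# score you measured yourself on your own evaluation set.
runs = [
    {"model": "small-model", "charged_credits": 0.8, "quality": 0.72},
    {"model": "large-model", "charged_credits": 3.2, "quality": 0.81},
]

# Quality gained per credit spent — a rough basis for trading off model
# performance against credit consumption.
ratios = {r["model"]: r["quality"] / r["charged_credits"] for r in runs}
for model, ratio in ratios.items():
    print(f"{model}: {ratio:.2f} quality per credit")
```

In this invented example the smaller model delivers far more quality per credit; whether that trade-off is acceptable depends on the absolute quality your task requires.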
