Understanding Task Results

Learn how to interpret Task outputs, analyse execution data, and understand performance metrics.

Response Structure

Complete Response Format

After running a task, you receive a comprehensive response containing:

{
  "task_id": "0195d1ff-1f05-437a-95ac-6de8969cb47b",
  "task_revision_id": "0195d1ff-1f42-f14e-8b65-641baf9dc32e",
  "response": {
    "sentiment": "negative",
    "image_match": true,
    "image_description": "The image shows a severely damaged toaster..."
  },
  "run_data": {
    "submitted": {
      "customer_review": "My toaster exploded during breakfast..."
    },
    "files": [
      "be8d9e69-9f2a-4bfd-bbf4-559d6b4eb5d0.jpeg"
    ]
  },
  "id": "0195d207-32bb-d03d-cfdc-f4516e9222c8",
  "created": "2025-03-26T10:37:15.687874Z",
  "input_tokens": 2051,
  "output_tokens": 130,
  "total_tokens": 2181,
  "input_processor_timing": 0.0001468900591135025,
  "llm_call_timing": 4.773190421052277,
  "charged_credits": "9.00"
}

Key Response Components

Your model output is contained in the response field; the remaining fields carry metadata and performance metrics for the run.

task_id (string)
Unique identifier of the Task definition

task_revision_id (string)
Specific revision that processed this run

id (string)
Unique identifier for this specific execution
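
When working with results programmatically, the first step is usually to separate the model output from the surrounding metadata. The sketch below assumes the run result has already been parsed into a Python dict shaped like the example above; how you obtain it (SDK call or HTTP request) is up to your client code, and the values shown are copied from the example for illustration.

```python
# Minimal sketch: separating the model output from run metadata.
# `result` stands in for the parsed JSON of a single task run; the
# field names match the example response above.

result = {
    "task_id": "0195d1ff-1f05-437a-95ac-6de8969cb47b",
    "task_revision_id": "0195d1ff-1f42-f14e-8b65-641baf9dc32e",
    "response": {"sentiment": "negative", "image_match": True},
    "id": "0195d207-32bb-d03d-cfdc-f4516e9222c8",
    "total_tokens": 2181,
    "charged_credits": "9.00",
}

model_output = result["response"]  # what the Task actually produced
metadata = {k: v for k, v in result.items() if k != "response"}

print(f"Run {result['id']} of task {result['task_id']}")
print(f"Sentiment: {model_output['sentiment']}")
print(f"Tokens used: {metadata['total_tokens']}, credits charged: {metadata['charged_credits']}")
```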

Performance Metrics

Token Usage Analysis

Understanding token consumption in your results:

Input Token Components
  • User and system prompt length
  • Input variable content size
  • Injected context (if using RAG)

Output Token Components
  • Complexity of output structure
  • Verbosity of model responses
  • Number of fields in output format

What to Look For
  • Consistent token usage across similar inputs
  • Unexpected spikes in token consumption (one way to surface these is sketched below)
  • Patterns that indicate optimisation opportunities
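
A simple way to spot such spikes is to compare each run's token count against the average of comparable runs. The sketch below is illustrative only: the runs list and the 1.5x threshold are assumptions you would replace with your own stored results and tolerance.

```python
# Minimal sketch: flagging runs whose token usage deviates sharply from
# the average of comparable runs.

from statistics import mean

runs = [
    {"id": "run-1", "total_tokens": 2181},
    {"id": "run-2", "total_tokens": 2240},
    {"id": "run-3", "total_tokens": 2105},
    {"id": "run-4", "total_tokens": 6894},  # likely an outlier worth inspecting
]

avg = mean(r["total_tokens"] for r in runs)
threshold = 1.5  # flag anything 50% above the mean; tune to your workload

for r in runs:
    if r["total_tokens"] > avg * threshold:
        print(f"{r['id']}: {r['total_tokens']} tokens "
              f"(average is {avg:.0f}) - check the input for unexpected size")
```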

Timing Breakdown

Each execution provides timing metrics:

Input Processing Time (input_processor_timing)
  • URL fetching duration
  • Document extraction time
  • Image preprocessing
  • RAG context retrieval

LLM Call Time (llm_call_timing)
  • Time spent on the model call itself
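
To see where a run spends its time, you can compare the two reported timings directly. The sketch below uses the values from the example response; what each stage covers follows the list above.

```python
# Minimal sketch: breaking a run's latency into its reported components.
# Field names and values are taken from the example response above.

run = {
    "input_processor_timing": 0.0001468900591135025,  # seconds
    "llm_call_timing": 4.773190421052277,             # seconds
}

total = run["input_processor_timing"] + run["llm_call_timing"]
for name, value in run.items():
    share = value / total * 100
    print(f"{name}: {value:.4f}s ({share:.1f}% of measured time)")
```

In this example the LLM call dominates, which is expected when the input is plain text and no heavy preprocessing such as URL fetching or document extraction is involved.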

Cost Tracking

Each execution shows credits consumed (charged_credits):

🧮 Cost Factors

  • Model selection - higher-end models consume more credits per token
  • Token count - both input and output tokens contribute to cost

⚙️ Optimisation

  • Compare model performance vs. credit consumption (see the sketch after this list)
  • Simplify prompts to minimise unnecessary tokens
  • Reduce output verbosity where possible
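
A straightforward way to compare cost efficiency is to normalise charged credits by token count. The sketch below is illustrative: the model names and the second run's figures are made up, and since charged_credits arrives as a string in the response, it is converted before the arithmetic.

```python
# Minimal sketch: comparing credit efficiency across runs or models.
# `charged_credits` is a string in the response, so convert it first.

from decimal import Decimal

runs = [
    {"model": "model-a", "total_tokens": 2181, "charged_credits": "9.00"},
    {"model": "model-b", "total_tokens": 2240, "charged_credits": "3.50"},
]

for r in runs:
    credits = Decimal(r["charged_credits"])
    per_1k = credits / Decimal(r["total_tokens"]) * 1000
    print(f"{r['model']}: {credits} credits, {per_1k:.2f} credits per 1K tokens")
```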

Next Steps