***

title: Introduction
description: >-
  Inject intelligence where it matters – Rightbrain lets you wire AI into your tech stack, making your existing platforms smarter.

***

Rightbrain isn't another agent or automation platform – it's **intelligence exactly where your team and customers need it**, making your existing platforms and workflows smarter.

This is achieved through Tasks: custom AI functions that can be triggered from anywhere in your stack – through an API, webhook, scheduled job, or user action. Tasks are LLMs with set instructions that process incoming data and respond in a pre-defined output format, perfectly suited for surfacing in an app's user interface, storing in your database, or initiating another step in a wider workflow.

#### Core components of Tasks:

| Property       | Description                                                          |
| -------------- | -------------------------------------------------------------------- |
| **Stateless**  | Self-contained, with no dependencies between runs and no drift       |
| **Triggered**  | Activated by events – API calls, workflows, or user actions          |
| **Consistent** | Pre-defined instructions and output format for maximum compatibility |
| **Composable** | Can be chained, reused, or embedded into any process                 |

By wiring Tasks into existing systems, teams can safely add AI to any workflow **without having to build or maintain an AI platform**.

### Example Tasks

* **Content Moderator** – Review user-generated content for compliance
* **Invoice Processor** – Extract line items, totals, and vendor details
* **Data Classifier** – Categorise and route support tickets or feedback
* **Image Generator** – Create branded visuals from text prompts
* **Competitor Analyser** – Research and summarise a company’s top competitors

### Where It Fits

Rightbrain Tasks can be called through **REST API** or **MCP** – the flow is identical in each case.
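To make the API-triggered path concrete, the sketch below shows what invoking a Task over REST might look like. The base URL, Task identifier, endpoint path, and field names are illustrative assumptions, not the documented Rightbrain API surface; only the OAuth 2.0 bearer-token pattern comes from this page.

```python
import json

# Hypothetical example of a Task invocation over REST.
# RB_API, TASK_ID, and the /run path are assumptions for illustration.
RB_API = "https://api.rightbrain.ai/v1"  # assumed base URL
TASK_ID = "content-moderator"            # assumed Task identifier


def build_task_request(access_token: str, inputs: dict) -> dict:
    """Assemble the pieces of a hypothetical Task run request."""
    return {
        "method": "POST",
        "url": f"{RB_API}/tasks/{TASK_ID}/run",
        "headers": {
            "Authorization": f"Bearer {access_token}",  # OAuth 2.0 bearer token
            "Content-Type": "application/json",
        },
        "body": json.dumps({"inputs": inputs}),
    }


req = build_task_request("YOUR_TOKEN", {"post_text": "Check this content"})
print(req["url"])
```

The same request shape would apply whether the trigger is a webhook, a scheduled job, or a user action — only the caller changes.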
They connect business rules with AI, making your existing tools smarter, faster, and more adaptive.

[Learn how Rightbrain works →](/docs/getting-started/how-it-works)

***

## How Teams Use Rightbrain

* Build AI-powered features using natural language
* Enrich existing products without extra infrastructure
* Test, tune, and launch features fast in the dashboard
* Deploy safely with version control and rollback
* Monitor performance and costs as they happen
* Enable teams and remove bottlenecks with custom AI tooling
* Role-based permissions for every team
* Track usage, costs, and performance in real time
* Roll out new AI tools safely with A/B testing
* Monitor and audit AI responses
* Embed AI into any app with a single API call
* Built-in OAuth 2.0 for secure access
* Handle text, images, files, URLs, and web searches as dynamic inputs
* Get predictable JSON, text, document, image, or audio outputs
* Monitor, debug, and ship with full observability

## For AI Agents & Assistants

Using Cursor, Claude, or another AI coding assistant? We provide **machine-readable documentation endpoints** optimised for AI agents. Drop our docs into any LLM to give it instant context on Rightbrain's capabilities and APIs.

## Build with Rightbrain

Build your first Task in our no-code dashboard

Everything developers need to get started

***

**Ready to ship AI that actually works?** Start your 7-day free trial and join over 1,000 teams already building with Rightbrain.

[Start Free Trial →](https://app.rightbrain.ai)

***

title: Why Rightbrain?
description: Most AI projects fail before they reach production. Rightbrain changes that.

***

## The Problem: Most AI never makes it to production

95% of AI proof-of-concepts fail before deployment.\
Not because the models don’t work – but because everything around them breaks.

Teams spend months building scaffolding: APIs, infrastructure, observability, compliance, rollback logic.
By the time it’s ready, the opportunity has passed.

**8 months** is the average time it takes a team to move from prototype to production. Most never make it.

***

## The Real Bottleneck: Infrastructure, Not Intelligence

Prototypes are easy. Production is hard.

### 1. The Infrastructure Trap

“Week 1: We built a ChatGPT wrapper!”\
“Week 4: We need error handling.”\
“Week 8: Where are the logs?”\
“Week 16: The compliance team has questions.”\
“Week 32: Let’s start again – properly this time.”

Teams build everything from scratch, only to realise they needed versioning, audit trails, and rollbacks from day one.

### 2. The Integration Maze

AI features end up scattered across different systems – Python scripts, Lambda functions, third-party tools – with no shared monitoring, no consistent schema, and no control.

### 3. The Scaling Wall

The proof-of-concept works on one input. In production, it breaks under real data, real users, and real expectations.

***

## The Rightbrain Approach: From Prompt to Production

Rightbrain replaces custom infrastructure with a **Task-based architecture** – small, modular AI units that can be deployed anywhere, securely, and at scale.

| The Old Way (Monolithic AI)        | The Rightbrain Way (Task-Based)               |
| ---------------------------------- | --------------------------------------------- |
| Models buried in custom code       | Modular **Tasks** you can call via API        |
| No version control or rollback     | **Versioned and auditable** by default        |
| Inconsistent results               | **Structured, predictable outputs** every run |
| Hard to test and debug             | **Observable and testable** units             |
| Works in demo, fails in production | **Built for reliability at scale**            |

Rightbrain turns AI from an experiment into an operational system.

***

## Proof in Action: PAL.health

**Goal**\
Launch a production-ready AI health coach app – without building any AI infrastructure.

**Solution**\
Rightbrain Tasks power every AI feature in their platform.
**Results**

* 15 production Tasks deployed in weeks
* 200+ optimisations per Task
* Feature ideas created by the founder, with the underlying Task APIs handed off to the mobile developers

> I can switch models, fix issues, and ship updates daily without waiting on engineers. Without Rightbrain, this simply wouldn’t be possible.
>
> – Chris Davison, Founder & CEO, [PAL.](https://pal.health)

***

## Why It Matters

Companies that win with AI aren’t the ones with the biggest models.\
They’re the ones that **ship fast, iterate safely, and scale predictably**.

Rightbrain gives you the foundation to do exactly that –\
from **natural language idea → production-ready AI feature** in a fraction of the time.

***

## Is Rightbrain Right for You?

| Perfect Fit                                      | Consider Alternatives                        |
| ------------------------------------------------ | -------------------------------------------- |
| You need production AI inside existing products  | You’re only exploring basic prototypes       |
| You want fast iteration without building infra   | You prefer full in-house platform teams      |
| You need governance and compliance from day one  | You’re experimenting with single-use scripts |
| You want predictable costs and model flexibility | You don’t plan to scale AI features          |

***

Build your first Task in our no-code dashboard

Learn the architecture behind Rightbrain Tasks

***

title: How Rightbrain Works

***

Rightbrain provides composable AI capabilities that integrate directly into your applications. No complex infrastructure. No orchestration layers. Just production-grade AI that works.

## The Integration Pattern

Every Task follows the same four-step flow, whether you're calling it via REST API, MCP, or agent protocols:

An event fires in your system – a user action, API call, or webhook that needs AI processing.

Data from the trigger event is sent to the Rightbrain Task. This can be text, structured data, files, or images.
The Task executes its defined AI logic – your chosen model, prompt, and schema, with the right context provided at runtime.

Validated, structured output is returned to your application – ready to store, display, or feed into other Tasks.

This pattern is consistent across all integration methods, making it simple to build once and deploy anywhere.

## Integration Methods

**Call Tasks like any REST API**

Send a request with your input data, get structured output back. OAuth 2.0 authenticated, production-ready.

**Perfect for:**

* Server-side applications
* Batch processing
* Synchronous workflows

**Make Tasks available to AI assistants**

Tasks can be exposed via the Model Context Protocol, making them discoverable and usable by any MCP-compatible AI client.

**Perfect for:**

* AI agent workflows
* Claude, Cursor, and other MCP clients
* Cross-platform AI tool distribution

**Receive results asynchronously**

Configure Task Forwarders to send results to your endpoints when processing completes.

**Perfect for:**

* Long-running tasks
* Event-driven architectures
* Queue-based processing

## Governance Built In

🧭 **Observability** – Every execution logged with inputs, outputs, timing, and costs

🔐 **Access Control** – OAuth 2.0 authentication with role-based permissions

🌿 **Versioning** – Revisions enabling A/B testing and weighted traffic

🛡️ **Compliance** – Audit trails, data residency, and enterprise-grade security

## TL;DR

Rightbrain removes all the complexity between your application and AI capabilities:

* No infrastructure to manage
* No models to host
* No prompt versioning to build
* No monitoring to implement
* No governance to figure out

**You focus on functionality.
We handle everything else.**

***

Build your first Task in our no-code dashboard

Example use cases to inspire your own

***

title: Create Your First Task
description: Build and deploy a production-ready AI tool in 5 minutes

***

Let's build your first Rightbrain Task. In this quickstart, we'll draft a prompt, structure your outputs, and choose an appropriate model for the job.

**What You'll Build**: A Task that analyses customer reviews, extracts sentiment, and validates product images – all in one API call.

## Understanding Task Components

Before we start, let’s understand the four core components of every Task:

💬 **Instructions** – Craft clear prompts that tell the model exactly what to do and how to behave.

🧩 **Dynamic Inputs** – Use placeholders, such as `{customer_review}`, to insert real input data at runtime.

📊 **Structured Outputs** – Define your expected fields and schema to ensure consistent, predictable responses.

🧠 **Model Selection** – Choose the model that best balances speed, cost, and capability for your use case.

## Step 1: Set Your Prompts

First, we'll compose the Task instructions using the user prompt. For this demo, we're building a sentiment analysis Task that can also verify product images.

The user prompt tells the AI what to do with the input data. Use clear, specific instructions for best results.

### User Prompt

Navigate to the user prompt field and enter:

```
Please conduct a comprehensive sentiment analysis on the {customer_review} and describe the product image (if one is included). Verify that the image matches the product described in the review.
```

Notice the `{customer_review}` variable – this is a dynamic placeholder that gets replaced with actual data when the Task runs. The Task is also designed to handle optional image inputs.
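To make the placeholder mechanics concrete, here is a minimal Python sketch of the substitution that happens at run time. The template text mirrors the user prompt above; the sample review and the exact substitution mechanism are illustrative assumptions.

```python
# Illustrative sketch only: how a dynamic placeholder like {customer_review}
# is filled with real input data when the Task runs.
prompt_template = (
    "Please conduct a comprehensive sentiment analysis on the "
    "{customer_review} and describe the product image (if one is included)."
)

# Sample input, invented for this example.
review = "Arrived quickly, works exactly as described. Five stars!"

# At run time the placeholder is replaced with the actual review text.
resolved_prompt = prompt_template.format(customer_review=review)
print(resolved_prompt)
```

The same pattern applies to any variable you reference in your prompts — each `{placeholder}` becomes an input field on the Task.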
### System Prompt (Optional)

You can add a system prompt to define the AI's role and behaviour:

```
You are an expert product review analyst specializing in e-commerce sentiment analysis and visual verification.
```

## Step 2: Configure Inputs and Outputs

Now we'll define what data goes in and what structure comes out.

### Define Input Variables

Your Task can accept:

* **Text strings** – simple text inputs like support tickets, reviews, or any other text data feed.
* **Images** – visual inputs such as product photos, receipts, or screenshots for multimodal analysis and verification.
* **Documents** – commonly Word, PDF, PPT, or CSV files for data extraction, summarisation, or classification.
* **URLs** – web addresses that the Task can fetch and analyse for content, metadata, or linked resources.
* **Web Search** – a live search via Perplexity that returns up to the top 20 results for further analysis.

For our Task, the `{customer_review}` variable is automatically detected from your prompt. You can add additional inputs if needed.

### Specify Output Structure

Define exactly what data structure you want back. This ensures every Task execution returns consistent, parseable results.

Add these outputs to your Task manually or by pasting the JSON schema:

* `sentiment` – Whether the review is positive, negative, or neutral
* `image_description` – A description of the product image. Returns "N/A" if no image is provided
* `image_match` – Whether the image matches the product described in the review

```json
{
  "sentiment": {
    "type": "str",
    "description": "Whether the review is positive, negative, or neutral"
  },
  "image_description": {
    "type": "str",
    "description": "A description of the product image. Returns \"N/A\" if no image provided"
  },
  "image_match": {
    "type": "str",
    "description": "Whether the image matches the product described in the review"
  }
}
```

**Pro tip**: Always provide clear descriptions for your outputs. This helps the model understand exactly what you expect and improves accuracy.
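Because the output schema is fixed, every run returns JSON with exactly these fields, which makes the response trivial to consume in application code. A quick sketch of parsing and checking such a response — the payload shown here is invented for illustration:

```python
import json

# Invented example payload, shaped by the output schema defined above.
raw = json.dumps({
    "sentiment": "positive",
    "image_description": "N/A",
    "image_match": "N/A",
})

EXPECTED_FIELDS = {"sentiment", "image_description", "image_match"}

# Parse the response and confirm every schema field is present.
result = json.loads(raw)
missing = EXPECTED_FIELDS - result.keys()
assert not missing, f"missing fields: {missing}"

print(result["sentiment"])  # → positive
```

This is the practical payoff of structured outputs: your code can rely on the field names instead of scraping free-form model text.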
## Step 3: Choose Model and Configure Settings

### Select Your Model

Different models excel at different tasks. For our sentiment analysis with image verification, we need a vision-capable model:

**Claude Sonnet 4.5** 🖼️

* Excellent for combined text and image analysis
* Cost-effective for production use
* Fast response times

Other vision-capable models are available through the model selection interface, including: `Gemini 2.5 Pro`, `GPT 5`, `Gemini 2.5 Flash`, `GPT 5 Mini`, `Claude Haiku 4.5`, `Llama 4 Maverick`

### Configure Task Settings

**Temperature** – Set to `0.3` for consistent, focused analysis.\
Lower values (0.1–0.3) produce more deterministic outputs, ideal for classification tasks.

**Fallback model** – You can specify a fallback model in the Task settings. If your primary model fails four consecutive times, the Task will automatically route subsequent runs to the designated fallback model.

**Congratulations!** You've created your first Task. It's now available via the API endpoint under the Integrate tab.

## What's Next?

Run and test your Task

Integrate via REST API

***

title: Running Your Task
description: Test and execute your Tasks through the Rightbrain dashboard

***

Once you've created a Task, it's time to test and run it. This guide covers how to execute your Task through the Rightbrain dashboard.

## Accessing the Run Task View

Navigate to the **Run Task** view to test your Task execution. You can access this directly from your dashboard homepage or right after creating a Task.
**Task run view**

Let's run through what you can expect to see in this view:

**Configuration & Inputs**

* LLM model selection – primary and fallback
* Input fields for your variables
* Image upload for vision tasks
* User & system prompts
* Output format schema

**Results & History**

* Run history logs
* Usage & performance statistics
* Output results display
* Token and credit tracking

## Understanding Input Variables

Tasks are designed to run on dynamic input variables (e.g., `{customer_review}`). The platform automatically detects these variables from your prompts and creates corresponding input fields.

**Plain Text Variables**

* Direct text entry for string variables
* Support for long-form content
* Special characters and emojis supported
* No size limit for testing

**Example variables**: `{customer_review}`, `{product_description}`, `{user_query}`

**Images**

* JPEG, PNG formats
* Enable with `image_required: true`
* Automatic image optimisation available
* Multiple images per Task supported

**Documents**

* PDF, DOCX, TXT, CSV, XLSX
* Content automatically extracted
* Requires a corresponding filename

**Web Content**

* Complete URLs with `https://`
* Content fetched automatically
* Must be publicly accessible
* Handles HTML, JSON, and plain text
* Optional fallback text if the URL fetch fails

**Example variables**: `{webpage_url}`, `{api_endpoint}`, `{article_link}`

**Search Results**

* Conducts a Perplexity web search for your variable
* Can be configured to return the top 5–20 results
* Can be augmented with additional text, such as 'Top competitors for `{variable}`', which runs a search for the full search term rather than just the variable
* Search results can be used for further analysis

**Example searches and variables**: 'top competitors for `{company}`', 'latest news related to `{topic}`', 'weekly investments in `{industry}`'

**Multiple Input Types**: Tasks can combine different input types.
For example, a Task might compare text to images, summarise documents with web content, or analyse URLs alongside uploaded files.

## Providing Test Data

### Text Input Example

For our sentiment analysis Task, let's provide a value for the input variable `{customer_review}`:

```plaintext
My toaster exploded during breakfast, sending flaming bread across the kitchen! 😱 On the bright side, I've discovered a new way to heat up the whole house. But seriously folks, this isn't just a hot topic – it's a fire hazard! The warranty card didn't mention anything about impromptu fireworks displays. 🎆
```

We simply hit **Run Task** and we'll get a response.

Now let's try adding an accompanying image: