Build a fully functional AI chat system that answers questions using your own documents. This tutorial walks you through every step: creating a knowledge base, building a RAG workflow, and deploying it as a chat interface.

What You’ll Build

A chat system that:
  1. Accepts a user question (via chat, API, or AI assistant)
  2. Searches your knowledge base for relevant information
  3. Uses an AI model to generate an accurate answer grounded in your documents
  4. Returns the answer with source references
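Conceptually, these four steps form a simple retrieve-then-generate loop. Here is an illustrative Python sketch of that loop; search_kb and ask_llm are hypothetical stand-ins for the JINBA_VECTOR_SEARCH and OPENAI_INVOKE steps you will configure later, not real Jinba APIs:

def search_kb(question, top_k=5, threshold=0.3):
    """Stand-in for JINBA_VECTOR_SEARCH: return the most relevant chunks."""
    return ["(retrieved chunk 1)", "(retrieved chunk 2)"]

def ask_llm(prompt):
    """Stand-in for OPENAI_INVOKE: return the model's grounded answer."""
    return "(answer with source citations)"

def answer_question(question):
    chunks = search_kb(question)                       # step 2: retrieve
    prompt = (
        "Answer ONLY from the context below; cite sources.\n\n"
        "Context:\n" + "\n".join(chunks) +
        "\n\nQuestion: " + question
    )                                                  # step 3: ground the prompt
    return ask_llm(prompt)                             # step 4: return the answer

print(answer_question("What is your return policy?"))  # step 1: accept a question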

Prerequisites

  • A Jinba Flow account (Sign up here)
  • Documents to upload — PDFs, DOCX, or text files you want your chat to reference
  • Basic familiarity with Jinba Flow concepts — Flows, Steps, and Tools
No coding experience is required. You can build the entire system using the chat panel or graph editor. The YAML manifests shown below are for reference and can be copied directly.

Architecture Overview

Before we start building, here's how all the pieces fit together. This tutorial uses Jinba Knowledge Base as the primary backend; for other backends, see Alternative Backends near the end.

Part 1: Set Up Your Knowledge Base

Step 1: Create a Knowledge Base

1. Navigate to Storage

In your workspace sidebar, click Storage, then select the Knowledge Bases tab.
2. Create a New Knowledge Base

Click Create Knowledge Base. Enter a descriptive name (for example, "Product Documentation" or "Company FAQ").
3. Note Your Knowledge Base ID

After creation, open your new knowledge base and confirm it appears in the list. Note its ID; you'll need it when storing credentials and configuring the search step. In this tutorial, the example knowledge base is named RAG Tutorial KB.

Step 2: Upload Documents

1. Open Your Knowledge Base

Click on your newly created knowledge base from the Storage page.
2. Upload Files

Click Add files to open the upload dialog. Select your PDFs, DOCX, Markdown, or text files from your computer.
3. Wait for Processing

Files are automatically processed through the following pipeline:

| Stage | What Happens |
| --- | --- |
| Parsing | Text is extracted from your documents |
| Chunking | Documents are split into searchable chunks |
| Embedding | Chunks are converted to vector embeddings |
| Indexing | Vectors are stored for fast similarity search |

Wait until all files show Ready status. Processing may take a few minutes depending on file size. Do not proceed until your files show a completed status.
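To build intuition for what this pipeline does, here is a minimal Python sketch of the four stages. It is purely illustrative; the real parser, embedding model, and index are managed by Jinba, and these helper functions are invented for the example:

def parse(file_bytes):
    # Parsing: extract plain text (real parsers handle PDF/DOCX structure)
    return file_bytes.decode("utf-8", errors="ignore")

def chunk(text, size=512, overlap=128):
    # Chunking: split into overlapping windows so context survives boundaries
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(piece):
    # Embedding: map text to a vector (fake 2-dim stand-in here)
    return [len(piece) / 512.0, piece.count("refund") / 10.0]

index = []  # Indexing: store (vector, chunk) pairs for similarity search
for piece in chunk(parse(b"...your document text..." * 50)):
    index.append((embed(piece), piece))
print(f"indexed {len(index)} chunks")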

Step 3: Store Your Credentials

Before building the flow, store the required secrets in your workspace.
1. Navigate to Credentials

Go to your workspace credentials page.
2. Add Secrets

Add the following secrets:

| Secret Name | Value | Purpose |
| --- | --- | --- |
| JINBAFLOW_WS_API_KEY | Your Jinba API token | Authenticates vector search |
| KNOWLEDGE_BASE_ID | Your knowledge base ID from Step 1 | Identifies which knowledge base to search |
| ANTHROPIC_API_KEY | Your Anthropic API key (optional) | For Claude-based generation |

If you do not have an OpenAI API key, you can still use OPENAI_INVOKE with Jinba API credit.

Part 2: Build the RAG Flow

Now let’s create the workflow that powers the chat system.

The Complete Manifest

You can paste this directly into the YAML Coding Panel:
# Step 1: Receive the user's question
- id: user_question
  name: Receive Question
  tool: INPUT_TEXT
  input:
    - name: value
      value: ""

# Step 2: Search the knowledge base for relevant content
- id: search_knowledge
  name: Search Knowledge Base
  tool: JINBA_VECTOR_SEARCH
  config:
    - name: token
      value: "{{secrets.JINBAFLOW_WS_API_KEY}}"
  input:
    - name: query
      value: "{{steps.user_question.result}}"
    - name: knowledgeBaseId
      value: YOUR_KB_ID_HERE
    - name: topK
      value: 5
    - name: threshold
      value: 0.3
  needs:
    - user_question

# Step 3: Generate an answer using the retrieved context
- id: generate_answer
  name: Generate Answer
  tool: OPENAI_INVOKE
  config:
    - name: version
      value: gpt-4o
  input:
    - name: prompt
      value: |
        ## Instructions
        You are a helpful assistant. Answer the user's question based ONLY
        on the provided context. If the context doesn't contain the answer,
        say "I don't have enough information to answer that question."

        Always cite which source document(s) your answer comes from.

        ## Context (from knowledge base)
        {{steps.search_knowledge.results | dump}}

        ## User Question
        {{steps.user_question.result}}

        Please provide a clear, concise answer based on the context above.
  needs:
    - search_knowledge

# Step 4: Output the answer
- id: output_answer
  name: Output Answer
  tool: OUTPUT_TEXT
  input:
    - name: value
      value: "{{steps.generate_answer.result.content}}"
  needs:
    - generate_answer
Fastest path: paste the full YAML manifest above. If you prefer building flows visually, you can also add the four nodes manually in the editor and configure them one by one.

Alternative: Build the Flow Manually in the Editor

If you want to learn how each node is assembled visually, follow this manual path instead of pasting YAML.
1. Add the Input Text node

Open the node picker, search for Input Text, and add it as the first node in the graph. After adding it, rename the node to Receive Question so it matches the tutorial manifest.
2. Add and configure the Vector Search node

Click the + connector under the first node, search for Vector Search, and add the JINBA_VECTOR_SEARCH node. Then configure it with:
  • Token: JINBAFLOW_WS_API_KEY secret
  • Query: {{steps.user_question.result}}
  • Knowledge Base ID: your knowledge base ID
  • Top K: 5
  • Threshold: 0.3
3. Add and configure the OpenAI Invoke node

Add an Invoke node and choose OPENAI_INVOKE. Set the model version to gpt-4o, then paste the same prompt shown in the YAML example so the model answers only from retrieved knowledge base context.
4. Add and configure the Output Text node

Add an Output Text node as the final step. Rename it to Output Answer, and set its value to:
{{steps.generate_answer.result.content}}

Step-by-Step Walkthrough

Step 1: Receive Question (INPUT_TEXT)

- id: user_question
  name: Receive Question
  tool: INPUT_TEXT
  input:
    - name: value
      value: ""
The INPUT_TEXT tool creates an input parameter for the flow. When executed manually, it shows a text box. When called via API, this becomes a parameter in the request body. Learn more about input tools.
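For example, when the flow is called via the REST API (see Part 3), the question arrives as an arg in the request body; in this tutorial's manifest, the arg name user_question matches the input step's id:

# Sketch of the request body sent when running the flow via API;
# the arg name matches the INPUT_TEXT step id in this manifest.
payload = {
    "args": [{"name": "user_question", "value": "What is your return policy?"}],
    "mode": "sync",
}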
Step 2: Search Knowledge Base (JINBA_VECTOR_SEARCH)

- id: search_knowledge
  name: Search Knowledge Base
  tool: JINBA_VECTOR_SEARCH
  config:
    - name: token
      value: "{{secrets.JINBAFLOW_WS_API_KEY}}"
  input:
    - name: query
      value: "{{steps.user_question.result}}"
    - name: knowledgeBaseId
      value: YOUR_KB_ID_HERE
    - name: topK
      value: 5
    - name: threshold
      value: 0.3
  needs:
    - user_question
This step performs semantic search — finding content by meaning, not just keywords.
| Parameter | Value | Why |
| --- | --- | --- |
| topK | 5 | Returns the 5 most relevant chunks |
| threshold | 0.3 | Filters out low-relevance results |
The needs: [user_question] ensures the search waits for the user’s question before executing. See Step Module Options for more details.
Start with topK: 5 and threshold: 0.3. If answers lack context, increase topK. If irrelevant content appears, increase threshold. See the Vector Search reference for more guidance.
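To see what the threshold actually filters, here is a small cosine-similarity sketch. The vectors and filenames are made up for illustration; real scores come from the knowledge base embeddings:

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.9, 0.1, 0.0]                       # embedded user question
chunks = {
    "refund-policy.md": [0.8, 0.2, 0.1],      # about the same topic
    "office-map.md":    [0.0, 0.1, 0.9],      # unrelated content
}
scored = sorted(
    ((name, cosine(query, vec)) for name, vec in chunks.items()),
    key=lambda t: -t[1],
)
hits = [t for t in scored if t[1] >= 0.3][:5]  # threshold filter, then topK cap
print(hits)  # office-map.md (~0.01) is filtered out by the 0.3 threshold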

Step 3: Generate Answer (LLM)

- id: generate_answer
  name: Generate Answer
  tool: OPENAI_INVOKE
  config:
    - name: version
      value: gpt-4o
  input:
    - name: prompt
      value: |
        ## Instructions
        You are a helpful assistant. Answer the user's question based ONLY
        on the provided context. If the context doesn't contain the answer,
        say "I don't have enough information to answer that question."

        Always cite which source document(s) your answer comes from.

        ## Context (from knowledge base)
        {{steps.search_knowledge.results | dump}}

        ## User Question
        {{steps.user_question.result}}

        Please provide a clear, concise answer based on the context above.
  needs:
    - search_knowledge
The | dump filter serializes the search results into the prompt so the LLM can read all retrieved chunks and their sources. Learn more about Variables & Templates.
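The effect is comparable to JSON-serializing the result object before interpolation. A rough Python analogue (the field names here are hypothetical; the actual shape comes from JINBA_VECTOR_SEARCH):

import json

# Hypothetical search results; field names are for illustration only
results = [
    {"content": "Returns are accepted within 30 days.",
     "source": "faq.pdf", "score": 0.82},
]
# Like "| dump": the LLM receives readable text, not opaque objects
print(json.dumps(results, indent=2))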

Step 4: Output Answer (OUTPUT_TEXT)

- id: output_answer
  name: Output Answer
  tool: OUTPUT_TEXT
  input:
    - name: value
      value: "{{steps.generate_answer.result.content}}"
  needs:
    - generate_answer
The OUTPUT_TEXT step captures the final answer as the flow’s output, making it available when calling the flow via API or chat.
Alternative: Use Anthropic Claude

To use Claude instead of GPT-4o, replace the generation step with:
- id: generate_answer
  name: Generate Answer
  tool: ANTHROPIC_INVOKE
  config:
    - name: version
      value: claude-3-5-sonnet-20241022
    - name: token
      value: "{{secrets.ANTHROPIC_API_KEY}}"
  input:
    - name: prompt
      value: |
        ... (same prompt as above)
  needs:
    - search_knowledge
See the Anthropic tool reference for details.
Alternative: Use Google Gemini

To use Gemini instead, replace the generation step with:
- id: generate_answer
  name: Generate Answer
  tool: GEMINI_INVOKE
  config:
    - name: version
      value: gemini-1.5-flash
    - name: token
      value: "{{secrets.GEMINI_API_KEY}}"
  input:
    - name: prompt
      value: |
        ... (same prompt as above)
  needs:
    - search_knowledge
See the Gemini tool reference for details.

Test Your Flow

1. Execute the Flow

Build the four-step flow in the editor, then click the Run button in the top right.
2. Enter a Test Question

When prompted, enter a question that your uploaded documents should be able to answer.
3. Review Results

Check the execution results. The answer should reference information from your knowledge base.

Part 3: Deploy as a Chat Interface

Your RAG flow works. Now make it accessible to users.

Choose one of three deployment options:
  • Jinba App Chat: best for end users who need a ready-made chat UI
  • REST API: best for custom applications and integrations
  • MCP Tool: best for AI assistant integration

Option A: Jinba App Chat

Deploy your RAG flow as a chat connector in Jinba App for the simplest end-user experience.
1. Publish Your Flow

In the flow editor, click the Publish button. A dialog will ask "Who will trigger this workflow?":

| Option | Description | When to choose |
| --- | --- | --- |
| My team | Simple interface anyone can use | ✅ Choose this for the chat UI path (Option A) |
| Engineers | Call it from code via API | Choose this for the API path (Option B) |
| Automatic | Runs on schedule or when events happen | Choose this for scheduled/event-driven flows |

Select My team and click Continue →.
2. Understand the Jinba Flow → Jinba App Relationship

After selecting "My team", an education screen appears explaining that Jinba Flow (where you build) and Jinba App (where your team uses it) are separate products:
  • Separate products, by design — Jinba Flow is the builder; Jinba App is the chat interface your team uses
  • Enterprise-grade security — Jinba App has its own authentication and access controls
  • Simple for your team — No code, no complexity — just a familiar chat interface
Click Got it, continue to proceed to the MCP setup.
3. Enable MCP and Connect to Jinba App

The Create an MCP dialog appears. This shows a preview of how your flow will appear as a chat tool:
  • A chat preview showing @your_flow_name with a description
  • A Demo button to preview the chat experience
  • An Enable MCP for this flow toggle
You must turn on the Enable MCP for this flow toggle. This is what creates the connection between Jinba Flow and Jinba App — the “Connect with Jinba App” button only appears after you enable it.
Once enabled, additional options appear:
  • Workspace Token — used for authentication
  • Connection Snippet — JSON config for MCP clients (Claude Desktop, Cursor, etc.)
  • Connect with Jinba App button — click this to open the connector in Jinba App
Enabling MCP also lets AI assistants (Claude, Cursor, etc.) call this flow as a tool — you get both Jinba App chat and MCP tool access with a single toggle.
See Publish for details.
4. Configure MCP Connection Settings

After enabling MCP, you'll be taken to the MCP → Connect tab. This page has several important sections:
After enabling MCP, you’ll be taken to the MCP → Connect tab. This page has several important sections:
  1. Your Token — your workspace authentication token (keep this secret)
  2. 1-Click Connect — click Connect to instantly link this flow to Jinba App
  3. Visibility — defaults to “Unlisted” (only users in your access rules can use it). Change to “Listed” if you want all workspace members to see it
  4. Access Scope Settings — configure who can access this tool using JWT claims (e.g., email whitelist)
  5. MCP Configuration JSON Snippet — copy this to use the flow in external MCP clients (Claude Desktop, Cursor, etc.)
You must click the 1-Click Connect button to make the flow available as a tool in Jinba App. Without this step, the tool won’t appear in Jinba App even though MCP is enabled.
Add your team members’ emails to the Access Scope Settings so they can also use this tool. Click + Add Rule to add more email rules.
5. Chat with Your RAG System

Open Jinba App and start a new chat:
Open Jinba App and start a new chat:
  1. Click New Chat
  2. Click the connectors icon (⚙️) at the bottom of the chat input
  3. In the “Search agents and connectors…” dropdown, find your workspace’s MCP connector (e.g., “Tutorial Demonstrations MCP … 1 tool”)
  4. Click into the MCP connector to see your RAG Chat Demo tool listed
  5. Select the tool — it appears as a tag in the chat input bar
  6. Type your question and press Enter
The tool runs automatically — you'll see the Arguments (your user_question) and Result (the RAG response content) in an expandable section, followed by the AI's formatted answer.
You can also use Auto Select mode — Jinba App will automatically choose the right tool based on your question, so you don’t need to manually select the connector each time.
See Jinba Flow Connectors for the full guide.
You can also create a Jinba App Agent that bundles this connector with custom instructions.

Option B: REST API

Expose your RAG flow as an API endpoint for custom applications.
1. Publish Your Flow

Follow the same publish steps as above, but select Engineers in the "Who will trigger this workflow?" dialog instead. This optimizes the flow for API access.
2. Get Your API Key

After publishing, navigate to your flow's settings to find the auto-generated API key.
3. Call the API

curl -X POST https://api.jinba.dev/api/v2/external/flows/{flow-id}/published-run \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "args": [
      {"name": "user_question", "value": "What is your return policy?"}
    ],
    "mode": "sync"
  }'
Python example:
import requests

# Run the published flow synchronously; the arg name matches
# the flow's input step id (user_question).
response = requests.post(
    "https://api.jinba.dev/api/v2/external/flows/{flow-id}/published-run",
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "args": [
            {"name": "user_question", "value": "What is your return policy?"}
        ],
        "mode": "sync"
    }
)
response.raise_for_status()  # surface HTTP errors early
answer = response.json()
print(answer)
See the full API reference for async mode, error handling, and more.

Option C: MCP Tool (AI Assistant Integration)

Make your RAG flow available as a tool for AI assistants like Claude Desktop.
1. Publish as MCP

Publish your flow with MCP enabled. Navigate to the MCP tab in your workspace settings.
2. Configure Your AI Assistant

Add the Jinba Flow MCP server to your AI assistant's configuration:
{
  "mcpServers": {
    "jinbaflow": {
      "command": "npx",
      "args": [
        "-y",
        "supergateway",
        "--streamableHttp",
        "https://api.jinba.io/api/v2/workspaces/YOUR_WORKSPACE_ID/mcp",
        "--header",
        "Authorization: Bearer YOUR_TOKEN"
      ]
    }
  }
}
See the MCP guide for the full setup process.
3. Use from Your AI Assistant

Your RAG flow now appears as a tool. The AI assistant can invoke it to answer questions using your knowledge base.

Part 4: Advanced Patterns

Multi-Step RAG with Query Refinement

For complex questions, add a refinement step that improves search results:
- id: user_question
  name: Receive Question
  tool: INPUT_TEXT
  input:
    - name: value
      value: ""

- id: initial_search
  name: Initial Search
  tool: JINBA_VECTOR_SEARCH
  config:
    - name: token
      value: "{{secrets.JINBAFLOW_WS_API_KEY}}"
  input:
    - name: query
      value: "{{steps.user_question.result}}"
    - name: knowledgeBaseId
      value: YOUR_KB_ID_HERE
    - name: topK
      value: 10
  needs:
    - user_question

- id: refine_query
  name: Refine Query
  tool: OPENAI_INVOKE
  config:
    - name: version
      value: gpt-4o
  input:
    - name: prompt
      value: |
        Based on the initial search results, generate a more specific search query.

        Original Question: {{steps.user_question.result}}

        Initial Results:
        {{steps.initial_search.results | dump}}

        Generate a refined search query that will find more specific information.
        Output ONLY the refined query, nothing else.
  needs:
    - initial_search

- id: refined_search
  name: Refined Search
  tool: JINBA_VECTOR_SEARCH
  config:
    - name: token
      value: "{{secrets.JINBAFLOW_WS_API_KEY}}"
  input:
    - name: query
      value: "{{steps.refine_query.result.content}}"
    - name: knowledgeBaseId
      value: YOUR_KB_ID_HERE
    - name: topK
      value: 5
    - name: threshold
      value: 0.4
  needs:
    - refine_query

- id: final_answer
  name: Generate Final Answer
  tool: OPENAI_INVOKE
  config:
    - name: version
      value: gpt-4o
  input:
    - name: prompt
      value: |
        Answer the user's question using the refined search results.

        Question: {{steps.user_question.result}}

        Refined Search Results:
        {{steps.refined_search.results | dump}}

        Provide a detailed, accurate answer with source citations.
  needs:
    - refined_search

- id: output_answer
  name: Output Answer
  tool: OUTPUT_TEXT
  input:
    - name: value
      value: "{{steps.final_answer.result.content}}"
  needs:
    - final_answer

Handling No Results Found

Use conditional execution to handle cases where the knowledge base has no relevant content:
- id: check_results
  name: Check Results
  tool: PYTHON_SANDBOX_RUN
  input:
    - name: code
      value: |
        results = {{steps.search_knowledge.result}}
        has_results = len(results.get('results', [])) > 0
        print(has_results)
    - name: data_type
      value: STRING
  needs:
    - search_knowledge

- id: generate_answer
  name: Generate Answer
  tool: OPENAI_INVOKE
  config:
    - name: version
      value: gpt-4o
  input:
    - name: prompt
      value: |
        ... standard RAG prompt with context ...
  needs:
    - check_results
  when: "'{{steps.check_results.result}}' == 'True'"

- id: no_results_response
  name: No Results Response
  tool: PYTHON_SANDBOX_RUN
  input:
    - name: code
      value: |
        result = "I couldn't find relevant information in the knowledge base to answer your question. Please try rephrasing your question or contact support for assistance."
    - name: data_type
      value: STRING
  needs:
    - check_results
  when: "'{{steps.check_results.result}}' == 'False'"

Keeping Your Knowledge Base Updated

Create a scheduled flow that automatically updates your knowledge base with new documents:
- id: fetch_new_docs
  name: fetch_new_docs
  tool: PYTHON_SANDBOX_RUN
  input:
    - name: code
      value: |
        new_docs = [
            "https://example.com/updated-faq.pdf",
            "https://example.com/new-product-guide.pdf"
        ]
        result = new_docs
    - name: data_type
      value: STRING

- id: add_to_kb
  name: Add to Knowledge Base
  tool: JINBA_KNOWLEDGE_BASE_FILE_ADD
  forEach: "{{steps.fetch_new_docs.result}}"
  config:
    - name: token
      value: "{{secrets.JINBAFLOW_WS_API_KEY}}"
  input:
    - name: knowledgeBaseId
      value: YOUR_KB_ID_HERE
    - name: file
      value: "{{item}}"
    - name: executionMode
      value: "SYNCHRONOUS"
    - name: chunkerSettings
      value:
        chunkSize: 512
        chunkOverlap: 128
You can schedule this flow to run daily or weekly.

Alternative Backends

The primary tutorial uses Jinba Knowledge Base, but two additional backends are available for teams with specific requirements.

When to Choose Each Backend

| Feature | Jinba Knowledge Base | Pinecone | Azure AI Search |
| --- | --- | --- | --- |
| Setup complexity | ⭐ Simplest | ⭐⭐ Moderate | ⭐⭐⭐ Advanced |
| External dependency | None | Pinecone account | Azure subscription |
| Document upload | UI + API | API only | Azure portal / ADLS |
| Metadata filtering | — | ✅ Rich filter syntax | ✅ OData filters |
| Namespace isolation | — | ✅ | Indexes |
| Reranking | — | ✅ Built-in models | ✅ Semantic ranker |
| Indexer / pipeline | Automatic | Manual | ✅ Built-in indexers |
| Cost | Included in Jinba plan | Separate Pinecone billing | Azure billing |
| Best for | Most use cases, quick start | Advanced search requirements | Enterprise / Azure-native organizations |

Alternative A: Pinecone

If you need metadata filtering, namespace isolation, or reranking, use Pinecone as your vector backend.

Pinecone RAG Manifest

Replace the search step with Pinecone:
- id: search_pinecone
  name: Search Pinecone
  tool: PINECONE_QUERY
  config:
    - name: apiKey
      value: "{{secrets.PINECONE_API_KEY}}"
  input:
    - name: indexName
      value: my-knowledge-base
    - name: query
      value: "{{steps.user_question.result}}"
    - name: topK
      value: 5
    - name: includeMetadata
      value: true
    - name: rerankModel
      value: bge-reranker-v2-m3
    - name: rerankTopN
      value: 3
  needs:
    - user_question
Then adjust the generation step to use Pinecone’s output format:
- id: generate_answer
  name: Generate Answer
  tool: OPENAI_INVOKE
  config:
    - name: version
      value: gpt-4o
  input:
    - name: prompt
      value: |
        You are a helpful assistant. Answer based on the provided context.

        ## User Question
        {{steps.user_question.result}}

        ## Relevant Documentation
        {{steps.search_pinecone.result.matches | dump}}

        Answer accurately based on the documentation above.
  needs:
    - search_pinecone
See the full Pinecone tool reference for index creation and document upsert.

Alternative B: Azure AI Search (Enterprise)

For enterprise organizations already using the Azure ecosystem, Jinba Flow supports Azure AI Search as an external knowledge base backend. This option provides advanced indexing, semantic ranking, and integration with Azure Data Lake Storage Gen2.
Azure AI Search integration is an enterprise feature. Contact your Jinba administrator or the Jinba sales team to enable this for your workspace.

How It Works

Azure AI Search integration operates differently from the built-in knowledge base:
  1. Documents are stored in Azure — uploaded to Azure Data Lake Storage Gen2
  2. Indexing is handled by Azure — Azure AI Search indexers process and index documents
  3. Search queries go through Azure — either directly or via Azure API Management (APIM)
  4. Results flow back to Jinba Flow — where the LLM generates answers
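For reference, a direct-mode query against Azure AI Search's public REST API looks roughly like this. The service name, index name, and key are placeholders, and an APIM deployment would substitute its own endpoint and subscription key:

import requests

SERVICE = "my-search-service"   # placeholder Azure AI Search service name
INDEX = "my-index"              # placeholder index name

resp = requests.post(
    f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/docs/search"
    "?api-version=2023-11-01",
    headers={"api-key": "YOUR_SEARCH_KEY",
             "Content-Type": "application/json"},
    json={"search": "What is your return policy?", "top": 5},
)
resp.raise_for_status()
for doc in resp.json().get("value", []):
    print(doc.get("@search.score"), doc)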

Setup Overview

1. Configure Azure Connection

In your workspace settings, navigate to the External Knowledge Base configuration. You can connect in two modes:

| Mode | When to Use |
| --- | --- |
| APIM Mode | Routes through Azure API Management — recommended for production |
| Direct Mode | Connects directly to Azure AI Search — simpler for development |
2. Set Up Azure Resources

Ensure you have:
  • An Azure AI Search service with an index configured
  • Azure Data Lake Storage Gen2 for document storage
  • API keys or APIM subscription keys
  • An indexer configured to process uploaded documents
3. Upload Documents

Upload documents through the Jinba workspace UI. Files are automatically sent to Azure Data Lake Storage, and the configured indexer processes them into the search index.
4. Search in Your Flow

The search step uses the External Knowledge Base configuration from your workspace. The exact tool and parameters depend on your enterprise deployment.

Key Configuration Options

| Setting | Description |
| --- | --- |
| Endpoint URL | Azure AI Search or APIM endpoint |
| API Key | Authentication key |
| Index API Path | Path for index operations |
| Search API Path | Path for search queries |
| ADLS API Path | Path for file storage operations |
| Indexer Name | Indexer to trigger after file uploads |
Connection settings can be configured per workspace through the UI, with environment variables as fallbacks.

Azure AI Search Advantages

  • Semantic ranking: Azure’s built-in semantic ranker improves result relevance
  • Hybrid search: Combine vector search with keyword search
  • Built-in indexers: Automatically extract and index content from various file formats
  • Enterprise compliance: Data stays within your Azure tenant
  • Azure ecosystem integration: Works well with other Azure services such as Azure OpenAI

Tuning & Best Practices

Chunking Configuration

When adding files, tune chunk parameters for your content:
| Content Type | Chunk Size | Overlap | Why |
| --- | --- | --- | --- |
| FAQ / Short answers | 256–512 | 64 | Precise, focused retrieval |
| Technical docs | 512–1024 | 128 | Balance precision and context |
| Long-form content | 1024–2048 | 256 | Maintain narrative context |
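A quick way to reason about these settings: with a sliding window, a document of length N yields roughly (N - overlap) / (size - overlap) chunks. A small sketch of the tradeoff (the token counts are illustrative):

import math

def num_chunks(doc_len, size, overlap):
    # Approximate chunk count for a sliding window with overlap
    return max(1, math.ceil((doc_len - overlap) / (size - overlap)))

doc_len = 8000  # illustrative document length in tokens
for size, overlap in [(256, 64), (512, 128), (1024, 256)]:
    print(f"size={size} overlap={overlap} -> {num_chunks(doc_len, size, overlap)} chunks")
# Smaller chunks: more, tighter hits. Larger chunks: fewer, more context each.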

Similarity Threshold Guide

| Threshold | Behavior | Use When |
| --- | --- | --- |
| 0.7–1.0 | Very strict, near-exact matches only | Precise factual lookups |
| 0.4–0.7 | High relevance, closely related | Most Q&A use cases |
| 0.2–0.4 | Moderate, may include tangential results | Exploratory or broad questions |
| 0.0–0.2 | Very broad, many results | Not recommended for production |

Prompt Engineering Tips

  1. Be explicit about grounding: Tell the LLM to answer only from the provided context
  2. Request citations: Ask the LLM to reference source filenames
  3. Handle uncertainty: Instruct the LLM to say “I don’t know” when context is insufficient
  4. Set the tone: Add persona instructions for your use case (formal, casual, technical)

What’s Next?

  • Knowledge Base Docs: deep dive into knowledge base management, chunking, and RAG patterns
  • Vector Search Reference: full parameter reference and advanced search examples
  • Pinecone Reference: external vector database with filtering and reranking
  • API Reference: complete guide to calling flows via REST API
  • MCP Integration: connect flows to AI assistants via MCP
  • Jinba App Agents: wrap your RAG flow in an agent for enhanced chat