Configure your Llama model credentials to use local or cloud-based Llama models in your Jinba Flow workflows.

Prerequisites

  • Access to Llama models (local or cloud-based)
  • API key from your cloud Llama provider (cloud setups only)

Getting Your API Key (Cloud Services)

For Cloud Llama Services (e.g., Replicate, Ollama Cloud)

  1. Visit your chosen Llama cloud service provider
  2. Sign in to your account or create a new one
  3. Navigate to the API keys or credentials section
  4. Generate a new API key
  5. Copy the generated key (you can sanity-check it with the curl sketch below)
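Before pasting the key into Jinba Flow, it is worth confirming it actually authenticates. A minimal sketch, assuming Replicate as the provider (other providers use their own endpoints and header formats) and with REPLICATE_API_TOKEN standing in for the key you just copied:

  # Verify the token by fetching the account it belongs to.
  curl -s https://api.replicate.com/v1/account \
    -H "Authorization: Bearer $REPLICATE_API_TOKEN"

A 200 response with your account details means the key is usable; a 401 usually means it was copied incompletely or has been revoked.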

For Local Llama Models

If you’re running Llama models locally (e.g., with Ollama), you typically don’t need an API key, but you’ll need the connection details: the local server URL and the name of a model you have installed.
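With a default Ollama install, one request returns both of those details: the server answers on port 11434, and its tags endpoint lists every installed model. For example:

  # List the models installed on a local Ollama server (default port 11434).
  curl -s http://localhost:11434/api/tags

The response is a JSON object whose "models" array holds one entry per installed model (e.g., "llama3.1:8b"); those names are what you enter in the setup steps below.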

Setting Up in Jinba Flow

Cloud-based Llama

  1. Go to your workspace settings in Jinba Flow
  2. Navigate to the “Secret” section
  3. Find “Llama” in the list of available credentials
  4. Click “Configure” or “Add New”
  5. Enter your API key in the designated field
  6. Select your preferred Llama model version
  7. Click “Save” to store your credentials

Local Llama Models

  1. Go to your workspace settings in Jinba Flow
  2. Navigate to the “Secret” section
  3. Find “Llama” in the list of available credentials
  4. Click “Configure” or “Add New”
  5. Select “Local” as the connection type
  6. Enter your local server URL (e.g., http://localhost:11434)
  7. Specify the model name you have installed (the sketch below shows how to confirm it responds)
  8. Click “Save” to store your credentials
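Before using the credential in a workflow, you can send a one-off prompt straight to the local server to confirm the model name you entered responds. A minimal sketch against Ollama’s generate endpoint, assuming the defaults above:

  # Request a single non-streamed completion.
  # Replace llama3.1:8b with the model name you configured.
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "llama3.1:8b", "prompt": "Say hello.", "stream": false}'

If Ollama reports that the model is not found, pull it first (see Local Setup below) and try again.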

Available Models

Llama 3 Models

  • Llama 3.1 8B - Fast and efficient for most tasks
  • Llama 3.1 70B - More capable for complex reasoning
  • Llama 3.1 405B - Most capable model (cloud only)

Llama 2 Models

  • Llama 2 7B - Lightweight model
  • Llama 2 13B - Balanced performance
  • Llama 2 70B - High performance
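When running these models through Ollama, the names above correspond to Ollama tags; cloud providers document their own model identifiers, so treat the tags below as the local naming only:

  # Llama 3.1 family (405B is cloud only, per the list above)
  ollama pull llama3.1:8b
  ollama pull llama3.1:70b
  # Llama 2 family
  ollama pull llama2:7b
  ollama pull llama2:13b
  ollama pull llama2:70b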

Local Setup (Optional)

If you want to run Llama models locally:
  1. Install Ollama
  2. Pull your preferred model: ollama pull llama3.1:8b
  3. Start the Ollama service: ollama serve
  4. Configure the local connection in Jinba Flow
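Put together, the whole local flow can be run from a terminal. A sketch assuming a Linux machine (on macOS or Windows, install Ollama from ollama.com instead of using the script):

  # 1. Install Ollama (official Linux install script).
  curl -fsSL https://ollama.com/install.sh | sh
  # 2. Download a model.
  ollama pull llama3.1:8b
  # 3. Start the server (listens on http://localhost:11434 by default).
  ollama serve
  # 4. In another terminal, confirm the model answers.
  curl -s http://localhost:11434/api/generate \
    -d '{"model": "llama3.1:8b", "prompt": "ping", "stream": false}'

Once this works, enter http://localhost:11434 as the server URL and llama3.1:8b as the model name in the Jinba Flow steps above.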