Llama Credentials
Configure your Llama model credentials to use local or cloud-based Llama models in your Jinba Flow workflows.
Prerequisites
Access to Llama models (local or cloud-based)
API key for cloud Llama services (if using cloud)
Getting Your API Key (Cloud Services)
For Cloud Llama Services (e.g., Replicate, Ollama Cloud)
Visit your chosen Llama cloud service provider
Sign in to your account or create a new one
Navigate to the API keys or credentials section
Generate a new API key
Copy the generated key
For Local Llama Models
If you’re running Llama models locally (e.g., with Ollama), you typically don’t need an API key, but you’ll need to configure the connection details.
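If you are running Ollama locally, you can confirm the server is reachable and see which models are installed before configuring anything in Jinba Flow. A minimal check, assuming Ollama's default port of 11434:

```shell
# Ask the local Ollama server for its list of installed models.
# /api/tags is Ollama's model-listing endpoint; the response is JSON
# with a "models" array. Adjust the port if you changed the default.
curl -s http://localhost:11434/api/tags
```

If this fails to connect, start the server (`ollama serve`) before continuing.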
Setting Up in Jinba Flow
Cloud-based Llama
Go to your workspace settings in Jinba Flow
Navigate to the “Secret” section
Find “Llama” in the list of available credentials
Click “Configure” or “Add New”
Enter your API key in the designated field
Select your preferred Llama model version
Click “Save” to store your credentials
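It can save debugging time to verify a cloud API key outside Jinba Flow before saving it. As one illustration, assuming Replicate is your provider (the endpoint and header below follow Replicate's HTTP API; other providers differ):

```shell
# Sanity-check a Replicate API token before entering it in Jinba Flow.
# Assumes the token is exported in the REPLICATE_API_TOKEN variable.
curl -s -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/account
# A valid token returns your account details as JSON; an invalid one
# returns an authentication error.
```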
Local Llama Models
Go to your workspace settings in Jinba Flow
Navigate to the “Secret” section
Find “Llama” in the list of available credentials
Click “Configure” or “Add New”
Select “Local” as the connection type
Enter your local server URL (e.g., http://localhost:11434)
Specify the model name you have installed
Click “Save” to store your credentials
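To confirm that the URL and model name you entered actually work, you can send a one-off prompt straight to the local server. A sketch using Ollama's /api/generate endpoint, assuming the llama3.1:8b model is installed:

```shell
# Send a single non-streaming prompt to the local Ollama server.
# The "model" value must match the model name configured in Jinba Flow.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Say hello in one short sentence.",
  "stream": false
}'
```

If the model name is wrong or not pulled, the server responds with an error instead of a completion.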
Available Models
Llama 3 Models
Llama 3.1 8B - Fast and efficient for most tasks
Llama 3.1 70B - More capable for complex reasoning
Llama 3.1 405B - Most capable model (cloud only)
Llama 2 Models
Llama 2 7B - Lightweight model
Llama 2 13B - Balanced performance
Llama 2 70B - High performance
Local Setup (Optional)
If you want to run Llama models locally:
Install Ollama
Pull your preferred model: ollama pull llama3.1:8b
Start the Ollama service: ollama serve
Configure the local connection in Jinba Flow
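Before wiring the local setup into Jinba Flow, a quick check that the pull succeeded and the model responds (assuming the llama3.1:8b tag from the step above):

```shell
# List locally installed models; llama3.1:8b should appear after the pull.
ollama list

# One-shot prompt to confirm the model loads and answers.
ollama run llama3.1:8b "Reply with the word ready."
```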