Prerequisites
- Access to Llama models (local or cloud-based)
- API key for cloud Llama services (if using cloud)
Getting Your API Key (Cloud Services)
For Cloud Llama Services (e.g., Replicate, Ollama Cloud)
- Visit your chosen Llama cloud service provider
- Sign in to your account or create a new one
- Navigate to the API keys or credentials section
- Generate a new API key
- Copy the generated key
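Once copied, the key is usually stored as an environment variable rather than hard-coded into workflows. A minimal sketch in Python, assuming a variable named LLAMA_API_KEY (the variable name and the demo fallback value are placeholders, not anything Jinba Flow requires):

```python
import os

# "demo-key-1234" is a placeholder so the sketch runs standalone;
# in practice the variable should already be set in your shell.
api_key = os.environ.get("LLAMA_API_KEY", "demo-key-1234")

# Mask the key before any logging so the full value never leaks into output.
masked = api_key[:4] + "..." + api_key[-2:]
print(masked)
```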
For Local Llama Models
If you’re running Llama models locally (e.g., with Ollama), you typically don’t need an API key, but you’ll need to configure the connection details.
Setting Up in Jinba Flow
Cloud-based Llama
- Go to your workspace settings in Jinba Flow
- Navigate to the “Secret” section
- Find “Llama” in the list of available credentials
- Click “Configure” or “Add New”
- Enter your API key in the designated field
- Select your preferred Llama model version
- Click “Save” to store your credentials
Local Llama Models
- Go to your workspace settings in Jinba Flow
- Navigate to the “Secret” section
- Find “Llama” in the list of available credentials
- Click “Configure” or “Add New”
- Select “Local” as the connection type
- Enter your local server URL (e.g., http://localhost:11434)
- Specify the model name you have installed
- Click “Save” to store your credentials
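Before saving, it can help to confirm that the URL you entered actually points at a running server. A minimal sketch using Ollama’s /api/tags endpoint, which lists locally installed models (the base URL matches the example above; adjust if yours differs):

```python
import json
import urllib.request

# Base URL as configured above; 11434 is Ollama's default port.
base_url = "http://localhost:11434"

# Ollama's /api/tags endpoint lists locally installed models, which is a
# quick way to verify the connection details before saving them.
endpoint = base_url.rstrip("/") + "/api/tags"

try:
    with urllib.request.urlopen(endpoint, timeout=5) as resp:
        models = [m["name"] for m in json.load(resp).get("models", [])]
        print("Installed models:", models)
except OSError as exc:
    print("Could not reach Ollama at", endpoint, "-", exc)
```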
Available Models
Llama 3 Models
- Llama 3.1 8B - Fast and efficient for most tasks
- Llama 3.1 70B - More capable for complex reasoning
- Llama 3.1 405B - Most capable model (cloud only)
Llama 2 Models
- Llama 2 7B - Lightweight model
- Llama 2 13B - Balanced performance
- Llama 2 70B - High performance
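Whichever connection type you use, you’ll need the model identifier as a string. As a rough guide, the Ollama-style tags for the sizes above can be kept in a small lookup; the tag strings follow Ollama’s public naming and should be verified against your provider, since cloud services may use different identifiers:

```python
# Ollama-style model tags for the sizes listed above; verify against
# your provider before use.
MODEL_TAGS = {
    ("llama3.1", "8b"): "llama3.1:8b",
    ("llama3.1", "70b"): "llama3.1:70b",
    ("llama3.1", "405b"): "llama3.1:405b",  # cloud only in practice
    ("llama2", "7b"): "llama2:7b",
    ("llama2", "13b"): "llama2:13b",
    ("llama2", "70b"): "llama2:70b",
}

def tag_for(family: str, size: str) -> str:
    """Return the model tag for a family/size pair, or raise KeyError."""
    return MODEL_TAGS[(family, size)]

print(tag_for("llama3.1", "8b"))
```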
Local Setup (Optional)
If you want to run Llama models locally:
- Install Ollama
- Pull your preferred model: ollama pull llama3.1:8b
- Start the Ollama service: ollama serve
- Configure the local connection in Jinba Flow
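Once the service is running, you can sanity-check it outside Jinba Flow with a direct request. A minimal sketch against Ollama’s /api/generate endpoint; the prompt is a placeholder, and the model tag must match one you have pulled:

```python
import json
import urllib.request

# Request body for Ollama's /api/generate endpoint; "stream": False asks
# for a single JSON response instead of a token stream.
payload = {
    "model": "llama3.1:8b",  # must match a model you pulled
    "prompt": "Say hello in one short sentence.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=30) as resp:
        # The generated text is returned in the "response" field.
        print(json.load(resp).get("response", ""))
except OSError as exc:
    print("Ollama not reachable:", exc)
```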