Creating a Tool
Open the ToolSet
Navigate to the ToolSet where you want to add a tool. Make sure you are on the Tools tab.
Click Add Tool
Click Add Tool. A modal will ask for:
- Slug — URL-safe identifier (e.g. post-message). Used in API endpoints.
- Name — Human-readable name (e.g. “Post Message”).
- Description — What the tool does.
The Tool Editor
The Tool Editor is a resizable, multi-panel workspace:

Code Editor
A Monaco-based editor (the same engine as VS Code) with syntax highlighting, auto-completion, and error markers for TypeScript and Python.
Test Panel
Enter JSON input and run the tool in a live sandbox. View output, errors, and logs without leaving the editor.
Settings Panel
Edit the tool’s slug, name, and description. Trigger auto-generation of metadata from code.
AI Copilot
A built-in chat assistant that can generate code, suggest fixes based on execution output, and update test inputs.
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Cmd/Ctrl + S | Save the tool |
| Cmd/Ctrl + Enter | Run the tool with current test input |
| Cmd/Ctrl + B | Toggle the AI Copilot sidebar |
| Cmd/Ctrl + , | Toggle the Settings panel |
| Cmd/Ctrl + . | Toggle the API panel |
| Escape | Close the active panel or modal |
Writing Tools in TypeScript
TypeScript tools use Zod for schema definitions. Export `input` and `output` schemas and an async `run` function:
- Import `z` from `"zod"` — it is pre-installed in the sandbox.
- Export `input` and `output` as Zod schemas. These are used for validation and to generate JSON Schema for the API.
- Export `run` as the entry point. It receives the validated input and must return an object matching the output schema.
- You can import any packages listed in the ToolSet’s Packages tab.
- Environment variables are accessible via `process.env`.
Writing Tools in Python
Python tools use Pydantic for schema definitions. Define `Input` and `Output` models and a `run` function:
- Define `Input` and `Output` as Pydantic `BaseModel` subclasses.
- Define `run` as the entry point. It receives the validated input and must return an instance of the `Output` model.
- You can import any packages listed in the ToolSet’s Packages tab.
- Environment variables are accessible via `os.environ`.
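The equivalent Python shape looks like this (a minimal sketch; the greeting logic and field names are illustrative):

```python
from pydantic import BaseModel, Field


class Input(BaseModel):
    # Field descriptions are surfaced in the generated JSON Schema.
    name: str = Field(description="Name of the person to greet")


class Output(BaseModel):
    greeting: str = Field(description="The generated greeting")


def run(input: Input) -> Output:
    # Entry point: receives validated input, returns an Output instance.
    return Output(greeting=f"Hello, {input.name}!")
```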
Input and Output Schemas
Schemas serve two purposes:

- Validation — The sandbox validates inputs before execution and outputs after execution.
- Documentation — Schemas are exposed through the API and MCP endpoints so that AI agents understand what a tool expects and returns.
Use `.describe()` (Zod) or `Field(description=...)` (Pydantic) on each field to provide clear descriptions. These descriptions appear in the API documentation and are used by AI agents to understand how to call your tool.
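To see how a field description flows into the schema that agents consume, here is a sketch using Pydantic v2's `model_json_schema()` (the model and field are hypothetical):

```python
from pydantic import BaseModel, Field


class Input(BaseModel):
    city: str = Field(description="City name, e.g. 'Tokyo'")


# The field description is carried into the generated JSON Schema,
# which is what the API and MCP endpoints expose to AI agents.
schema = Input.model_json_schema()
print(schema["properties"]["city"]["description"])
```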
Auto-Generate Metadata
Click Auto Generate in the Settings panel to automatically extract:

- Slug — derived from the function and schema names in your code.
- Name — a human-readable name inferred from the code.
- Description — a summary of what the tool does.
AI Copilot
The AI Copilot sidebar provides a chat interface that is context-aware of your tool’s code, language, and execution output. Use it to:

- Generate code — Describe what you want the tool to do and the Copilot will write the implementation.
- Debug errors — After a failed test run, ask the Copilot to analyze the error and suggest a fix.
- Apply code — The Copilot can push generated code directly into the editor and auto-save.
- Update test input — The Copilot can suggest and apply JSON test inputs to the Test panel.
Open the Copilot with `Cmd/Ctrl + B` or the bot icon in the sidebar.
API Panel
Toggle the API panel (Cmd/Ctrl + .) to view:
- The REST API endpoint URL for executing this tool.
- The MCP endpoint URL (if MCP is enabled on the ToolSet).
- Example `curl` commands and SDK usage snippets.
- The tool’s input schema in JSON Schema format.
Saving and Deleting
- Save — Click the Save button or press `Cmd/Ctrl + S`. Changes are saved to the draft state and do not affect published versions.
- Delete — Click the Delete button in the header. A confirmation dialog will appear. Deleting a tool removes it from the draft state. If it was included in published versions, those snapshots remain unchanged.
Best Practices
- Keep tools focused. Each tool should do one thing well. Use the ToolSet to group related tools.
- Write descriptive schemas. Good field descriptions help AI agents call your tools correctly.
- Test before publishing. Use the Test panel to verify behavior with various inputs before publishing a new version.
- Use environment variables for secrets. Never hard-code API keys or tokens in tool code. Store them in the ToolSet’s Environments tab.
- Leverage the Copilot for iteration. After a failed test run, the Copilot has access to the error output and can suggest targeted fixes.
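As a sketch of the secrets practice above, tool code can read a key from the environment instead of embedding it (the variable name `SERVICE_API_KEY` is hypothetical; configure the real name in the ToolSet’s Environments tab):

```python
import os

# Read an API key configured in the ToolSet's Environments tab.
# Falling back to an empty string lets the tool fail with a clear
# error instead of shipping a hard-coded secret.
API_KEY = os.environ.get("SERVICE_API_KEY", "")

if not API_KEY:
    print("SERVICE_API_KEY is not set; configure it in the Environments tab.")
```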