# Getting Started with llamactl

LlamaAgents uses the `llamactl` CLI for development. `llamactl` bootstraps an application server that manages running and persisting your workflows, and a control plane for managing cloud deployments of applications. It has some system prerequisites that must be installed in order to work:
- Make sure you have `uv` installed. `uv` is a Python package manager and build tool. `llamactl` integrates with it in order to quickly manage your project's build and dependencies.
- Windows support is experimental (as of version 0.3.14) and requires some adjustments to run `llamactl` without issues: see our dedicated guide on the topic. For a better user experience, it is still advisable to use WSL2 (e.g., Ubuntu) and follow the Linux instructions. See Install WSL.
- Likewise, Node.js is required for UI development. For macOS and Linux, we recommend installing Node via `nvm` to manage versions. You can use your Node package manager of choice (`npm`, `pnpm`, or `yarn`). For Windows, we recommend using Chocolatey for the installation process.
- Ensure `git` is installed:
  - macOS: Install via Xcode Command Line Tools or Homebrew (`brew install git`)
  - Linux: Follow your distro instructions: git-scm.com/download/linux
  - Windows: use Chocolatey
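If you want a quick sanity check that these tools are available, the snippet below is a minimal sketch (not part of llamactl) that looks each one up on your PATH using only the Python standard library:

```python
# Minimal sketch: verify the prerequisite tools are on PATH.
# Illustrative only -- not part of llamactl.
import shutil

for tool in ("uv", "node", "git"):
    status = "found" if shutil.which(tool) else "MISSING"
    print(f"{tool}: {status}")
```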
## Install

Choose one:

- Try without installing:

  ```bash
  uvx llamactl --help
  ```

- Install globally (recommended):

  ```bash
  uv tool install -U llamactl
  llamactl --help
  ```
## Initialize a Project

`llamactl` includes starter templates for both full-stack UI apps and headless (API-only) workflows. Pick a template and customize it.
Since `llamactl` uses symlinks when initializing, you might run into permission issues on Windows. We advise you to activate Developer Settings using `start ms-settings:developers` before running `llamactl init`.
```bash
llamactl init
```

This will prompt for some details, and create a Python module that contains LlamaIndex workflows, plus an optional UI you can serve as a static frontend.
When you run `llamactl init`, the scaffold also includes AI assistant-facing docs: AGENTS.md, CLAUDE.md, and GEMINI.md. These contain quick references and instructions for using LlamaIndex libraries to assist coding. These files are optional and safe to customize or remove; they do not affect your builds, runtime, or deployments.
Application configuration is managed within your project's `pyproject.toml`, where you can define Python workflow instances that should be served, environment details, and configuration for how the UI should be built. See the Deployment Config Reference for details on all configurable fields.
## Develop and Run Locally

Once you have a project, you can run the dev server for your application:

```bash
llamactl serve
```

`llamactl serve` will:
- Install all required dependencies
- Read the workflows configured in your app's `pyproject.toml` and serve them as an API
- Start up and proxy the frontend development server, so you can seamlessly write a full-stack application
For example, with the following configuration, the app will be served at http://localhost:4501/deployments/my-package. Make a POST request to `/deployments/my-package/workflows/my-workflow/run` to trigger the workflow in `src/my_package/my_workflow.py`.
```toml
[project]
name = "my-package"
# ...

[tool.llamadeploy.workflows]
my-workflow = "my_package.my_workflow:workflow"

[tool.llamadeploy.ui]
directory = "ui"
```

```python
# src/my_package/my_workflow.py
# from workflows import ...
# ...
workflow = MyWorkflow()
```

At this point, you can get to coding. The development server will detect changes as you save files. It will even resume in-progress workflows!
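To trigger a run of the workflow above while the server is running, POST to its run endpoint. The snippet below is a minimal sketch using only the Python standard library; the request body shape shown (an empty JSON object) is an assumption, since the actual payload depends on your workflow's start event and the App Server API.

```python
# Minimal sketch: trigger a workflow run over HTTP.
# The URL follows from the configuration above; the empty JSON body
# is an assumption -- see the App Server API reference for the real shape.
import json
import urllib.request

url = "http://localhost:4501/deployments/my-package/workflows/my-workflow/run"
req = urllib.request.Request(
    url,
    data=json.dumps({}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```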
For more information about available CLI flags, see llamactl serve.
For a more detailed reference on how to define and expose workflows, see Workflows & App Server API.
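As a quick illustration of what `src/my_package/my_workflow.py` might contain, here is a minimal sketch. The imports and event API shown are assumptions based on the LlamaIndex `workflows` package; check its documentation for the version you have installed.

```python
# Minimal sketch of src/my_package/my_workflow.py.
# Imports and event API are assumptions -- verify against your
# installed version of the workflows package.
from workflows import Workflow, step
from workflows.events import StartEvent, StopEvent


class MyWorkflow(Workflow):
    @step
    async def greet(self, ev: StartEvent) -> StopEvent:
        # Replace with real workflow logic.
        name = getattr(ev, "name", "world")
        return StopEvent(result=f"Hello, {name}!")


# The instance referenced by [tool.llamadeploy.workflows] in pyproject.toml.
workflow = MyWorkflow()
```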
## Create a Cloud Deployment

LlamaAgents applications can be rapidly deployed just by pointing to a source git repository. With the provided repository configuration, LlamaCloud will clone, build, and serve your app. It can even access private GitHub repositories once the GitHub app is installed.
Example:
```bash
git remote add origin https://github.com/org/repo
git add -A
git commit -m 'Set up new app'
git push -u origin main
```

Then, create a deployment:
```bash
llamactl deployments create
```

The first time you run this, you'll be prompted to log into LlamaCloud.
Username/password sign-in is not yet supported. If you do not have a supported social sign-in provider, you can use token-based authentication via `llamactl auth token`. See llamactl auth for details.
This will open an interactive Terminal UI (TUI). You can tab through fields, or even point and click with your mouse if your terminal supports it. All required fields should be automatically detected from your environment, but can be customized:
- Name: Human-readable and URL-safe; appears in your deployment URL
- Git repository: Public HTTP or private GitHub (install the GitHub app for private repos)
- Git branch: Branch to pull and build from (use `llamactl deployments update` to roll forward). This can also be a tag or a git commit.
- Secrets: Pre-filled from your local `.env`; edit as needed. These cannot be read again after creation.
When you save, LlamaAgents will verify that it has access to your repository (and prompt you to install the GitHub app if not).
After creation, the TUI will show deployment status and logs.
- You can later use `llamactl deployments get` to view it again.
- You can add secrets or change branches with `llamactl deployments edit`.
- If you update your source repo, run `llamactl deployments update` to roll out a new version.
Next: Read about defining and exposing workflows in Workflows & App Server API.