Frequently Asked Questions
Which LlamaCloud services communicate with which database/queue/filestore dependencies?
- Backend: Postgres, MongoDB, Redis, Filestore
- Jobs Service: Postgres, MongoDB, Filestore
- Jobs Worker: RabbitMQ, Redis, MongoDB
- Usage: MongoDB, Redis
- LlamaParse: Consumes from RabbitMQ; reads from and writes to the Filestore
- LlamaParse OCR: None
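If you need to confirm these connections against a running cluster, one rough check is to inspect each workload’s connection-related environment variables. This is a sketch only: the deployment names below assume the default `llamacloud` release name, so adjust them to match your deployment (`kubectl get deploy`).

```bash
# Deployment names are assumptions based on the default "llamacloud" release name.
for d in llamacloud-backend llamacloud-jobs-service llamacloud-jobs-worker llamacloud-usage llamacloud-llamaparse; do
  echo "--- $d ---"
  kubectl exec "deploy/$d" -- env | grep -iE 'postgres|mongo|redis|rabbit|s3' || true
done
```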
Which Features Require an LLM and what model?
- Chat UI: This feature requires the customer’s OpenAI key to have access to the text-only models and/or the multi-modal model (if the index is multi-modal).

  (As of 09.24.2024) These keys are set up via the Helm chart:

  ```yaml
  backend:
    config:
      openAiApiKey: <your-key>
      # If you are using Azure OpenAI, you can configure it like this:
      # azureOpenAi:
      #   enabled: false
      #   existingSecret: ""
      #   key: ""
      #   endpoint: ""
      #   deploymentName: ""
      #   apiVersion: ""
  ```
- Embeddings: Credentials for connecting to an embedding model provider are entered directly in the application during the Index creation workflow.
- LlamaParse Fast: Text extraction only; no LLM is used.
- LlamaParse Accurate: This mode uses `gpt-4o` under the hood, and the key can be configured here:

  ```yaml
  llamaParse:
    config:
      openAiApiKey: "<your-key>"
      # If you are using Azure OpenAI, you can configure it like this:
      # azureOpenAi:
      #   enabled: false
      #   existingSecret: ""
      #   key: ""
      #   endpoint: ""
      #   deploymentName: ""
      #   apiVersion: ""
  ```
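Both keys can also be supplied at install or upgrade time instead of being hard-coded in a values file. A minimal sketch, assuming a release named `llamacloud`; the chart reference is a placeholder for wherever you source the chart from, and the value paths mirror the snippets above:

```bash
# Release name and chart reference are assumptions; the --set paths mirror the YAML above.
export OPENAI_API_KEY="<your-key>"
helm upgrade --install llamacloud <chart-reference> \
  --set backend.config.openAiApiKey="$OPENAI_API_KEY" \
  --set llamaParse.config.openAiApiKey="$OPENAI_API_KEY"
```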
LLM API Rate Limits
You may run into rate limits with your LLM provider. The easiest way to debug is to check the logs: if you see a 429 error, increase your tokens-per-minute limit with the provider.
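For example, a quick way to scan recent logs for rate-limit responses (deployment names assume the default `llamacloud` release; adjust to yours):

```bash
# HTTP 429 = Too Many Requests from the LLM provider.
kubectl logs deploy/llamacloud-backend --since=1h | grep -i "429"
kubectl logs deploy/llamacloud-llamaparse --since=1h | grep -i "429"
```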
What auth modes are supported at the moment?
As of 07-25-2025, we support both OIDC and Basic Auth for self-hosted deployments. For more information, please refer to the Authentication Modes documentation.
Known Issues
BYOC Port-Forwarding with Custom Helm Release Names
Issue: When testing BYOC deployments without ingress setup (using port-forwarding), the backend service must be reachable at `http://llamacloud-backend:8000`. This works correctly when the Helm release name is `llamacloud`, but breaks when using a different release name.
Affected Setup:
- BYOC deployments without ingress configuration
- Using `kubectl port-forward` for testing
- Helm release name is not `llamacloud`
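A quick way to see the problem (service names here are assumptions): the Service under the expected name only exists when the release itself is called `llamacloud`.

```bash
# Present only when the Helm release is named "llamacloud":
kubectl get svc llamacloud-backend
# With a custom release name, the backend Service typically appears as "<release-name>-backend" instead:
kubectl get svc | grep backend
```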
Workarounds (until a permanent fix is available):
- Manual Service Creation: Create an additional backend service with the expected name (see the sketch after this list).
- Setup Ingress: Configure proper ingress instead of relying on port-forwarding. See the Ingress Configuration documentation for details.
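For the first workaround, one possible shape is an `ExternalName` Service that aliases the expected name to the backend Service your release actually created. This is a sketch only: the namespace and the actual backend Service name below are assumptions, so substitute the values from your deployment (`kubectl get svc`).

```yaml
# Sketch: make "llamacloud-backend" resolvable even though the release is named "my-release".
apiVersion: v1
kind: Service
metadata:
  name: llamacloud-backend            # the name other components expect
  namespace: llamacloud               # assumed namespace
spec:
  type: ExternalName
  # Assumed name of the backend Service created by a release called "my-release".
  externalName: my-release-backend.llamacloud.svc.cluster.local
```

Apply it with `kubectl apply -f` in the same namespace as the rest of the deployment, and remove it once proper ingress is configured.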
Recommendation: For production deployments, always use proper ingress configuration rather than port-forwarding.