
Docker Setup

This guide walks you through pulling the Docker image and running it as a server. For the system to start and send outbound requests, you must configure the following environment variables: PASSWORD, LLM_PROVIDER, LLM_API_URL, and LLM_API_KEY, plus optionally OPENAI_API_KEY or GEMINI_API_KEY depending on your LLM provider. PASSWORD secures internal authentication; the LLM_* variables select which language model provider and API endpoint to use.

Docker Repository for ObjectWeaver

Access the Docker image for ObjectWeaver via the following link:

ObjectWeaver - Docker Hub Repository

This repository hosts the containerized version of the application, making it easy to deploy and run in various environments.

Prerequisites

  • Docker installed on your system
  • Docker Compose installed on your system
  • Valid API keys for your chosen LLM provider (OPENAI_API_KEY, GEMINI_API_KEY, or custom provider)
  • A password for internal authentication (PASSWORD)

Steps to Pull and Set Up the Docker Image

1. Pull the Docker Image

First, pull the Docker image from the repository (Docker Hub repository names are lowercase):

docker pull objectweaver/objectweaver:latest

2. Create a Docker Compose File

Create a docker-compose.yml file in your project directory with the following content:

services:
  objectweaver:
    image: objectweaver/objectweaver:latest
    ports:
      - "2008:2008"
    environment:
      # Required: Authentication password for internal communication
      PASSWORD: YOUR-PASSWORD

      # Required: LLM Provider Configuration
      LLM_PROVIDER: openai # Options: openai, gemini, or custom provider name
      LLM_API_URL: https://api.openai.com/v1 # API endpoint URL for your LLM provider
      LLM_API_KEY: YOUR-LLM_API_KEY # API key for your chosen LLM provider

      # Optional: Provider-specific API keys (if using those providers directly)
      OPENAI_API_KEY: YOUR-OPENAI_API_KEY # OpenAI API key
      GEMINI_API_KEY: YOUR-GEMINI_API_KEY # Google Gemini API key

      # Optional: LLM Configuration
      LLM_BACKOFF_STRATEGY: exponential # Backoff strategy for rate limiting
      LLM_USE_GZIP: true # Enable gzip compression for LLM requests

      # Optional: Server Configuration
      PORT: 2008 # Server port (default: 2008)
      ENVIRONMENT: development # Options: development, production (default: production)
      USER_TIER: 1 # User tier for rate limiting

      # Optional: TLS/Security Configuration
      GRPC_UNSECURE: true # Set to true for non-TLS gRPC; false or empty for TLS
      CERT_FILE: /path/to/cert.pem # Path to TLS certificate file (required if GRPC_UNSECURE=false)
      KEY_FILE: /path/to/key.pem # Path to TLS key file (required if GRPC_UNSECURE=false)

      # Optional: Debugging
      VERBOSE_LOGS: false # Enable verbose logs for detailed debugging

      # Optional: Epistemic Validation Configuration
      LLM_AS_JUDGE: false # Enable LLM-as-judge for epistemic validation (true/false)
      KMEAN_AS_JUDGE: false # Enable K-Means clustering for epistemic validation (true/false; takes precedence over LLM_AS_JUDGE)
      LLM_JUDGE_MODEL: gpt-4o-mini # Model used for LLM-as-judge (default: gpt-4o-mini)
      KMEAN_EMBEDDING_MODEL: text-embedding-3-small # Embedding model for K-Means clustering (default: text-embedding-3-small)
      EPSTIMIC_WORKER_COUNT: 3 # Number of worker threads for epistemic validation (default: 3)
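
Hard-coding secrets in docker-compose.yml risks leaking them into version control. Docker Compose supports variable substitution from a `.env` file in the same directory; a minimal sketch using the variable names above (trimmed to the required settings):

```yaml
# .env (same directory as docker-compose.yml; keep it out of version control):
#   PASSWORD=my-secure-password-123
#   LLM_API_KEY=sk-proj-xxxxxxxxxxxxx

services:
  objectweaver:
    image: objectweaver/objectweaver:latest
    ports:
      - "2008:2008"
    environment:
      PASSWORD: ${PASSWORD}       # substituted from .env when compose runs
      LLM_PROVIDER: openai
      LLM_API_URL: https://api.openai.com/v1
      LLM_API_KEY: ${LLM_API_KEY}
```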

3. Set Up Environment Variables

Replace the placeholder values with your actual configuration:

  • PASSWORD: Your password for secure internal authentication
  • LLM_PROVIDER: The name of your LLM provider (e.g., openai, gemini)
  • LLM_API_URL: The API endpoint URL for your chosen provider
  • LLM_API_KEY: Your API key for the chosen provider
  • OPENAI_API_KEY / GEMINI_API_KEY: Provider-specific keys if using OpenAI or Gemini
  • TLS Certificates: If using secure gRPC (GRPC_UNSECURE=false), provide paths to your certificate and key files
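
For PASSWORD, any sufficiently long random string works. One way to generate one using Python's standard library (illustrative only; ObjectWeaver does not mandate any particular format):

```python
import secrets

def generate_password(n_bytes: int = 32) -> str:
    """Generate a URL-safe random password with n_bytes of entropy."""
    return secrets.token_urlsafe(n_bytes)

if __name__ == "__main__":
    print(generate_password())
```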

Example Configuration:

    environment:
      # Required
      PASSWORD: my-secure-password-123
      LLM_PROVIDER: openai
      LLM_API_URL: https://api.openai.com/v1 # Optional: not needed unless LLM_PROVIDER is local
      LLM_API_KEY: sk-proj-xxxxxxxxxxxxx

      # Optional - OpenAI specific
      OPENAI_API_KEY: sk-proj-xxxxxxxxxxxxx

      # Optional - Server settings
      PORT: 2008
      ENVIRONMENT: production
      USER_TIER: 1

      # Optional - Security
      GRPC_UNSECURE: false
      CERT_FILE: /certs/server.crt
      KEY_FILE: /certs/server.key

      # Optional - LLM Configuration
      LLM_BACKOFF_STRATEGY: exponential
      LLM_USE_GZIP: true

      # Optional - Debugging
      VERBOSE_LOGS: false

      # Optional - Epistemic Validation
      LLM_AS_JUDGE: false
      KMEAN_AS_JUDGE: false
      LLM_JUDGE_MODEL: gpt-4o-mini
      KMEAN_EMBEDDING_MODEL: text-embedding-3-small
      EPSTIMIC_WORKER_COUNT: 3
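
LLM_BACKOFF_STRATEGY: exponential refers to how retry delays grow when the LLM provider rate-limits requests. The server's actual implementation is internal; the following is a generic sketch of exponential backoff with jitter, for intuition only (parameter names are illustrative, not ObjectWeaver settings):

```python
import random

def backoff_delays(base: float = 1.0, factor: float = 2.0,
                   max_delay: float = 30.0, retries: int = 5) -> list[float]:
    """Exponential backoff: the delay doubles on each retry, is capped at
    max_delay, and is jittered to avoid synchronized retry storms."""
    delays = []
    for attempt in range(retries):
        delay = min(base * factor ** attempt, max_delay)
        delays.append(delay * random.uniform(0.5, 1.0))  # apply jitter
    return delays
```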

4. Run the Docker Container

Navigate to your project directory and start the container with Docker Compose:

docker-compose up -d

Because the Compose file references a prebuilt image, there is nothing to build: this command pulls the image if it is not already present and starts the container in the background, mapping port 2008 on your host to port 2008 in the container.

5. Verify the Setup

Once the container is running, confirm the server came up cleanly: check the container status with docker-compose ps, watch the startup logs with docker-compose logs -f, and then send a test request to the server.
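
If you do not have a client handy, a quick sanity check is whether anything is listening on the mapped port. A small sketch (the hostname and port are assumptions taken from the Compose file above):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("server reachable:", port_open("localhost", 2008))
```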

Environment Variables Reference

Required Variables

  • PASSWORD: Authentication password for internal communication
  • LLM_PROVIDER: LLM provider name (e.g., openai, gemini)
  • LLM_API_URL: API endpoint URL for the LLM provider
  • LLM_API_KEY: API key for the chosen LLM provider

Optional Variables

  • OPENAI_API_KEY: OpenAI-specific API key
  • GEMINI_API_KEY: Google Gemini-specific API key
  • LLM_BACKOFF_STRATEGY: Backoff strategy for rate limiting
  • LLM_USE_GZIP: Enable gzip compression (true/false)
  • PORT: Server port (default: 2008)
  • ENVIRONMENT: Environment mode (development/production; default: production)
  • USER_TIER: User tier for rate limiting
  • GRPC_UNSECURE: Enable non-TLS gRPC (true/false)
  • CERT_FILE: Path to TLS certificate file
  • KEY_FILE: Path to TLS key file
  • VERBOSE_LOGS: Enable verbose logging (true/false)

Epistemic Validation Variables

These variables configure the optional epistemic validation features:

  • LLM_AS_JUDGE: Enable LLM-as-judge for epistemic validation (true/false)
  • KMEAN_AS_JUDGE: Enable K-Means clustering for epistemic validation (true/false; takes precedence over LLM_AS_JUDGE)
  • LLM_JUDGE_MODEL: Model used for LLM-as-judge (default: gpt-4o-mini)
  • KMEAN_EMBEDDING_MODEL: Embedding model for K-Means clustering (default: text-embedding-3-small)
  • EPSTIMIC_WORKER_COUNT: Number of worker threads for epistemic validation (default: 3)
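
The required/optional split above can be checked before launch. A small helper that mirrors the list on this page (it is an illustration, not official ObjectWeaver tooling):

```python
import os

# Required variables, per the reference above
REQUIRED = ["PASSWORD", "LLM_PROVIDER", "LLM_API_URL", "LLM_API_KEY"]

def missing_required(env=None) -> list[str]:
    """Return the required variable names that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_required()
    if missing:
        raise SystemExit(f"missing required variables: {', '.join(missing)}")
    print("all required variables set")
```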