Kubernetes (Helm)

Two Helm charts cover the full deployment: ai-standalone for the AI backend and gateway-service for the routing and authentication layer.

Download the charts

component-ai.zip — AI backend chart
gateway-service.zip — Gateway chart

Figure: Kubernetes deployment steps for a full Uxopian AI stack.

Figure: Kubernetes service topology. Each service maintains its own Hazelcast cluster for session state.

Prerequisites

  • Kubernetes 1.24+
  • helm 3.x
  • kubectl configured for the target cluster
  • Image pull secret (regcred) in the target namespace
  • An OpenSearch instance reachable from the cluster

Image pull secret

kubectl create secret docker-registry regcred \
  --docker-server=artifactory.arondor.cloud:5001 \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --namespace=<your-namespace>
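If the charts expose the conventional imagePullSecrets value (an assumption; check each chart's values.yaml), reference the secret in your values overrides so pods can pull from the registry:

```yaml
# Hypothetical values fragment; key name assumes the standard
# Kubernetes/Helm imagePullSecrets convention.
imagePullSecrets:
  - name: regcred
```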

Deploy ai-standalone

Configure values

Create a values-ai.yaml overriding the defaults for your environment:

image:
  registry: artifactory.arondor.cloud:5001
  repository: uxopian-ai
  tag: "2026.0.0-ft3"

appConfig:
  openSearch:
    host: "opensearch-service"   # OpenSearch service name in the cluster
    port: "9200"
  externalUrls:
    appBaseUrl: "https://your-domain.example.com"
    renditionBaseUrl: "http://arender-rendition-broker:8761"   # ARender only
    flowerDocsWsUrl: "http://flowerdocs-core:8081/core/"       # FlowerDocs only
  contextPath: "/gui/gateway/uxopian-ai"
  llm:
    defaultProvider: "openai"
    defaultModel: "gpt-4.1"
    defaultPrompt: "basePrompt"
    contextSize: "10"

resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "1Gi"
    cpu: "1000m"

Create the API keys secret

The chart reads LLM API keys from a Kubernetes secret named ai-standalone-secrets. Create it before deploying:

kubectl create secret generic ai-standalone-secrets \
  --from-literal=OPENAI_API_KEY=sk-your-key \
  --namespace=<your-namespace>

Add keys for any other providers you use (ANTHROPIC_API_KEY, GEMINI_API_KEY, AZURE_OPENAI_API_KEY, NUEXTRACT_API_KEY). Keys for unused providers can be omitted.
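As an alternative to kubectl create secret, the same secret can be declared as a manifest and applied with kubectl apply. This is a sketch; the key names come from the provider list above, and the values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ai-standalone-secrets   # must match the chart's expected secret name
  namespace: <your-namespace>
type: Opaque
stringData:
  OPENAI_API_KEY: sk-your-key
  ANTHROPIC_API_KEY: sk-ant-your-key   # only if you use Anthropic
```

Declaring the secret as a manifest keeps it alongside your other deployment files, but remember to keep real keys out of version control.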

Install

helm install ai-standalone ./component-ai \
  -f values-ai.yaml \
  --namespace <your-namespace>

Upgrade

helm upgrade ai-standalone ./component-ai \
  -f values-ai.yaml \
  --namespace <your-namespace>

Deploy gateway-service

Configure values

Create a values-gateway.yaml. The most important section is files.application.yaml, which defines the routes and authentication provider.

image:
  repository: artifactory.arondor.cloud:5001/uxopian-gateway
  tag: "2026.0.0-ft3"

hazelcast:
  enabled: true
  clusterName: "uxopian-gateway-cluster"
  javaOpts: "-Xmx256m -Xms256m"

files:
  application.yaml: |
    app:
      routes:
        - id: uxopian-ai
          uri: http://ai-standalone-service:8080
          prefix: /gui/gateway/uxopian-ai/
          path: /gui/gateway/uxopian-ai/**
          provider: FlowerDocsProvider
          security:
            - path: /.well-known/**
              public: true
            - path: /assets/**
              public: true
            - path: /v3/**
              public: true
            - path: /actuator/health
              public: true
            - path: /prompt/**
              roles: ["ADMIN"]
            - path: /goal/**
              roles: ["ADMIN"]
            - path: /prompt-statistics
              roles: ["ADMIN"]
        - id: uxopian-ai-ws
          uri: ws://ai-standalone-service:8080
          path: /gui/gateway/uxopian-ai/ws/**
          prefix: /gui/gateway/uxopian-ai/ws/
          security:
            - path: /**
              public: true
    server:
      port: 8085
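Additional backends can be exposed through the same gateway by appending entries to app.routes. A hypothetical second route, following the same schema as the uxopian-ai route above, might look like:

```yaml
# Hypothetical route entry; the id, uri, and paths are illustrative.
- id: another-backend
  uri: http://another-service:8080
  prefix: /gui/gateway/another/
  path: /gui/gateway/another/**
  provider: FlowerDocsProvider
  security:
    - path: /actuator/health
      public: true
```

The prefix is stripped before the request is forwarded to the backend, so each backend's context path must line up with its route prefix, as uxopian-ai's appConfig.contextPath does above.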

Install

helm install gateway-service ./gateway-service \
  -f values-gateway.yaml \
  --namespace <your-namespace>

Embedded config files

The chart mounts a ConfigMap at /app/config/ containing four YAML files. You can override any of them inline under files in your values-ai.yaml.

The most common override is switching the ECM integration. flowerdocs and alfresco are mutually exclusive — enable one or the other:

files:
  application.yml: |+
    plugins:
      tools:
        enabled-tags: flowerdocs,files

Alternatively, set the environment variable PLUGINS_TOOLS_ENABLED_TAGS directly instead of overriding the file.

You can also override hazelcast.yml, llm-clients-config.yml, and prompts.yml the same way — paste the full file content under the corresponding key.
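For example, overriding hazelcast.yml follows the same pattern. The inner structure below is a sketch (the real file ships with the chart; paste its full content and edit only what you need):

```yaml
files:
  hazelcast.yml: |
    # Sketch of an override; the actual keys inside the embedded
    # hazelcast.yml may differ. Start from the file shipped in the chart.
    hazelcast:
      cluster-name: uxopian-ai-cluster
```

Because the override replaces the whole file, a partial fragment like this would drop any settings you omit; always start from the complete embedded file.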

Hazelcast clustering

Both services discover cluster peers through a Kubernetes headless service. The headless service (<name>-headless) and the required RBAC resources (ServiceAccount, ClusterRole, ClusterRoleBinding) are created automatically by the charts.

The two Hazelcast clusters are independent:

  • uxopian-ai-cluster — session token cache shared across ai-standalone replicas
  • uxopian-gateway-cluster — session validation cache shared across gateway replicas

No additional configuration is needed beyond setting hazelcast.enabled: true (the default).
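If you want the settings explicit in values-ai.yaml anyway, the block mirrors the gateway example above (assuming the two charts use symmetric keys, which should be confirmed against the ai-standalone values.yaml):

```yaml
hazelcast:
  enabled: true                        # K8s peer discovery (the default)
  clusterName: "uxopian-ai-cluster"    # keep distinct from the gateway cluster
  javaOpts: "-Xmx256m -Xms256m"
```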

Verification

Check that both pods are running:

kubectl get pods -n <your-namespace>

Test the gateway health endpoint:

curl https://your-domain.example.com/gui/gateway/uxopian-ai/actuator/health

Expected response:

{"status":"UP"}
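The same endpoint can back a readiness probe so Kubernetes only routes traffic to gateway pods that report UP. This is a sketch, assuming the chart exposes probe configuration under a readinessProbe value (check the chart's values.yaml; port 8085 matches the server.port in the gateway route config above):

```yaml
readinessProbe:
  httpGet:
    path: /gui/gateway/uxopian-ai/actuator/health
    port: 8085
  initialDelaySeconds: 20
  periodSeconds: 10
```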

Key values reference

| Path | Description |
| --- | --- |
| image.tag | Image version; use 2026.0.0-ft3 |
| appConfig.openSearch.host | OpenSearch service hostname |
| appConfig.openSearch.port | OpenSearch port (default 9200) |
| appConfig.openSearch.forceRefresh | Force index refresh after writes (default false; enable only for debugging) |
| appConfig.externalUrls.appBaseUrl | Public URL of the application (used in generated links) |
| appConfig.externalUrls.renditionBaseUrl | ARender DSB URL (ARender integration only) |
| appConfig.externalUrls.flowerDocsWsUrl | FlowerDocs core web service URL (FlowerDocs integration only) |
| appConfig.contextPath | Servlet context path; must match the gateway route prefix |
| appConfig.llm.defaultProvider | Default LLM provider (openai, anthropic, gemini, etc.) |
| appConfig.llm.defaultModel | Default model name |
| appConfig.llm.defaultPrompt | ID of the default system prompt (default basePrompt) |
| appConfig.llm.contextSize | Number of conversation turns kept in context (default 10) |
| appConfig.javaOpts | JVM flags (default -Xmx768m -Xms512m) |
| existingSecret | Name of the K8s secret holding LLM API keys (default ai-standalone-secrets) |
| hazelcast.enabled | Enable K8s-based Hazelcast peer discovery |
| files.application.yml | Override the embedded application.yml (plugins, context path, etc.) |
| files.hazelcast.yml | Override the embedded hazelcast.yml (cluster name, discovery) |
| files.llm-clients-config.yml | Override the embedded LLM provider configuration |
| files.prompts.yml | Override the embedded prompt definitions |