LLM providers

Uxopian AI supports nine LLM providers. Provider configurations are loaded from llm-clients-config.yml into OpenSearch at startup and can then be managed at runtime via the Admin API.

Provider overview

| Provider | Bean name | Models |
| --- | --- | --- |
| OpenAI | openai | gpt-5.1, gpt-4.1, gpt-4o |
| Anthropic | anthropic | claude-sonnet-4, claude-opus-4 |
| Azure OpenAI | azure-openai | gpt-4o via deployment |
| AWS Bedrock | bedrock | claude-3-sonnet, cohere |
| Google Gemini | gemini | gemini-2.5-pro, flash |
| Mistral AI | mistral-ai | mistral-large, mistral-small |
| HuggingFace | huggingface | Mistral-7B, Llama-3-8B |
| Ollama | ollama | llama3 (local) |
| NuExtract | nu-extract | specialized extraction |

Configuration structure

Each provider is configured in llm-clients-config.yml under llm.provider.globals:

```yaml
llm:
  default:
    provider: ${LLM_DEFAULT_PROVIDER:openai}
    model: ${LLM_DEFAULT_MODEL:gpt-5.1}
    base-prompt: ${LLM_DEFAULT_PROMPT:basePrompt}
    context: ${LLM_CONTEXT_SIZE:10}
  provider:
    globals:
      - provider: openai
        defaultLlmModelConfName: gpt5
        globalConf:
          apiSecret: ${OPENAI_API_KEY:}
          temperature: 1
          timeout: 60
          maxRetries: 3
        llModelConfs:
          - llmModelConfName: gpt5
            modelName: gpt-5.1
            multiModalSupported: true
            functionCallSupported: true
```

Global configuration fields

| Field | Description |
| --- | --- |
| provider | Provider identifier (see Supported providers below) |
| defaultLlmModelConfName | Default model configuration name for this provider |
| globalConf.apiSecret | API key or secret credential |
| globalConf.endpointUrl | Base URL for the provider API |
| globalConf.temperature | Sampling temperature (0.0 to 1.0+) |
| globalConf.timeout | Request timeout (e.g., 60s) |
| globalConf.maxRetries | Number of retry attempts on failure |
| globalConf.extras | Provider-specific additional parameters |
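
For example, globalConf.extras carries provider-specific settings such as the AWS parameters listed for Bedrock under Supported providers below. A minimal sketch, assuming extras is a flat key/value map (the conf name and environment variable names are illustrative):

```yaml
llm:
  provider:
    globals:
      - provider: bedrock
        defaultLlmModelConfName: claude3          # illustrative conf name
        globalConf:
          apiSecret: ${BEDROCK_API_KEY:}          # illustrative env variable
          extras:                                 # assumed flat key/value map
            AwsRegion: eu-west-1
            AwsAccessKey: ${AWS_ACCESS_KEY:}
            AwsSessionToken: ${AWS_SESSION_TOKEN:}
        llModelConfs:
          - llmModelConfName: claude3
            modelName: claude-3-sonnet
```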

Model configuration fields

| Field | Description |
| --- | --- |
| llmModelConfName | Internal name used to reference this model |
| modelName | Actual model name sent to the provider API |
| multiModalSupported | Whether this model accepts image inputs |
| functionCallSupported | Whether this model supports function calling (required for tools) |
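
A provider can declare several model configurations and designate one as its default via defaultLlmModelConfName. A sketch for Anthropic, using the model names from the overview table (conf names, capability flags, and the env variable are illustrative):

```yaml
# Entry under llm.provider.globals:
- provider: anthropic
  defaultLlmModelConfName: sonnet
  globalConf:
    apiSecret: ${ANTHROPIC_API_KEY:}    # illustrative env variable
  llModelConfs:
    - llmModelConfName: sonnet          # internal reference name
      modelName: claude-sonnet-4        # name sent to the provider API
      multiModalSupported: true
      functionCallSupported: true       # required to use tools
    - llmModelConfName: opus
      modelName: claude-opus-4
      multiModalSupported: true
      functionCallSupported: true
```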

Supported providers

Each provider is implemented as a Spring @Service bean. The bean name (the provider value in the table below) is the identifier used in llm-clients-config.yml and in the admin UI.

| Provider | provider value | Auth fields | Extra parameters | Streaming |
| --- | --- | --- | --- | --- |
| OpenAI | openai | apiSecret, endpointUrl (optional) | — | Yes |
| Anthropic | anthropic | apiSecret, endpointUrl | — | Yes |
| Azure OpenAI | azure-openai | apiSecret, endpointUrl | — | Yes |
| AWS Bedrock | bedrock | apiSecret | AwsRegion, AwsAccessKey, AwsSessionToken | Yes |
| Google Gemini | gemini | apiSecret | — | Yes |
| Mistral AI | mistral-ai | apiSecret, endpointUrl | — | Yes |
| Ollama | ollama | endpointUrl | — | Yes |
| HuggingFace | huggingface | apiSecret | — | No |
| NuExtract | nu-extract | apiSecret, endpointUrl | modelId | Yes |
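
As the table shows, a local Ollama instance authenticates with endpointUrl alone, with no apiSecret. A minimal sketch, assuming Ollama's usual local address (the conf name is illustrative):

```yaml
# Entry under llm.provider.globals:
- provider: ollama
  defaultLlmModelConfName: llama3-local   # illustrative conf name
  globalConf:
    endpointUrl: http://localhost:11434   # assumed local Ollama address
  llModelConfs:
    - llmModelConfName: llama3-local
      modelName: llama3
```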

All providers are implemented as LangChain4J wrappers. See Write a custom LLM client for the full source code of each provider and instructions to create your own.

Credential encryption

API keys are encrypted with AES/GCM before being stored in OpenSearch. The encryption key is configured via app.security.secret-key in application.yml. If not set, a default development key is used. Set a unique key in production.
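
In application.yml this looks like the following (the environment variable name is illustrative):

```yaml
app:
  security:
    secret-key: ${APP_SECURITY_SECRET_KEY:}   # set a unique value per production deployment
```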

Default provider and model

The following environment variables set the defaults used when a request does not specify a provider or model:

  • LLM_DEFAULT_PROVIDER: provider identifier (default: openai)
  • LLM_DEFAULT_MODEL: model name (default: gpt-5.1)
  • LLM_DEFAULT_PROMPT: base prompt ID (default: basePrompt)

These defaults can be overridden per request via the provider and model query parameters on the requests endpoint.
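
For example, the defaults can be set through these environment variables in a docker-compose file (the service and image names are illustrative):

```yaml
services:
  uxopian-ai:                       # illustrative service name
    image: uxopian/ai:latest        # illustrative image name
    environment:
      LLM_DEFAULT_PROVIDER: anthropic
      LLM_DEFAULT_MODEL: claude-sonnet-4
      LLM_DEFAULT_PROMPT: basePrompt
```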

Context size

llm.context (or LLM_CONTEXT_SIZE) controls how many previous requests from the conversation are included in each LLM call. Default: 10.

Tenant overrides

Per-tenant LLM provider configurations can be defined under llm.provider.tenants with a mergeStrategy of MERGE, OVERWRITE, or CREATE_IF_MISSING. This allows different tenants to use different API keys or models. See Multi-tenancy.
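
A sketch of the structure, assuming each tenant entry mirrors the globals format with a tenant identifier; apart from llm.provider.tenants and mergeStrategy, the field names here are assumptions, so check Multi-tenancy for the exact schema:

```yaml
llm:
  provider:
    tenants:
      - tenant: acme                      # assumed tenant-identifier field
        mergeStrategy: OVERWRITE          # MERGE | OVERWRITE | CREATE_IF_MISSING
        globals:
          - provider: openai
            globalConf:
              apiSecret: ${ACME_OPENAI_API_KEY:}   # illustrative env variable
```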