Uxopian AI overview

Uxopian AI is a framework for embedding AI assistants inside legacy enterprise applications, designed for content management and document-centric systems. Rather than replacing the host application, it adds a conversational interface, connects to multiple LLM providers, and exposes tools the LLM can call to operate on enterprise data.

Who this documentation is for

Audience                                      Entry point
--------------------------------------------  ------------------------------
System administrator deploying the stack      Quickstart with Docker Compose
Integration developer embedding the chat UI   Embed in a web application
Solution architect evaluating the system      System architecture
ARender integrator                            Integrate with ARender
Alfresco integrator                           Integrate with Alfresco
FlowerDocs integrator                         Integrate with FlowerDocs
Prompt or assistant developer                 Prompts and templating
Administrator configuring gateway routes      Configure gateway routes

System components

Figure: High-level component topology.

uxopian-ai

The primary application, built on Java 21 and Spring Boot's reactive WebFlux stack. It handles all business logic: conversations, requests, prompt rendering, tool execution, LLM calls, and the REST/WebSocket API.
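
For a feel of the stack, here is a minimal sketch of what a reactive endpoint looks like on Spring WebFlux. The route, payload, and streaming behavior are illustrative assumptions, not the actual uxopian-ai API.

```java
// Illustrative only: a reactive (WebFlux) endpoint in the style of this
// stack. The route and response shape are hypothetical, not the real API.
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class ChatStreamControllerSketch {

    // Streams response tokens as server-sent events, the usual pattern
    // for delivering chat output on a reactive stack.
    @GetMapping(value = "/conversations/{id}/stream",
                produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    Flux<String> stream(@PathVariable String id) {
        return Flux.just("Hello", " from", " a", " reactive", " endpoint");
    }
}
```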

uxopian-gateway

Spring Cloud Gateway acting as a reverse proxy. The only public entry point. Authenticates every request using a configurable AuthProvider and forwards identity headers to uxopian-ai. uxopian-ai must never be exposed directly to browsers.

OpenSearch

Primary persistence store. Stores conversations, requests, prompts, goal configurations, LLM provider configurations, and usage metrics. All data is tenant-scoped.
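
To make tenant scoping concrete, here is a hedged sketch of a read filtered by tenant ID using the opensearch-java client. The index name and field name are assumptions; the actual schema is internal to uxopian-ai.

```java
// Hypothetical sketch: every read is constrained to one tenant.
// "conversations" and "tenantId" are assumed names, not the real schema.
import java.io.IOException;
import com.fasterxml.jackson.databind.node.ObjectNode;
import org.opensearch.client.opensearch.OpenSearchClient;
import org.opensearch.client.opensearch._types.FieldValue;
import org.opensearch.client.opensearch.core.SearchResponse;

class TenantScopedReadSketch {
    static SearchResponse<ObjectNode> conversationsFor(
            OpenSearchClient client, String tenantId) throws IOException {
        return client.search(s -> s
                .index("conversations")            // assumed index name
                .query(q -> q.term(t -> t
                    .field("tenantId")             // assumed field name
                    .value(FieldValue.of(tenantId)))),
            ObjectNode.class);
    }
}
```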

Hazelcast

Distributed in-memory cache used by the gateway for session token caching. Configured via hazelcast.yml.
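
For intuition only, here is a sketch of the kind of session token caching the gateway could do with a Hazelcast map. The cluster name, map name, and TTL are assumptions, and in the real deployment the instance is configured through hazelcast.yml rather than in code.

```java
// Hypothetical sketch of session token caching in a Hazelcast map.
// Cluster name, map name, and TTL are illustrative assumptions.
import java.util.concurrent.TimeUnit;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

class SessionCacheSketch {
    public static void main(String[] args) {
        ClientConfig config = new ClientConfig();
        config.setClusterName("uxopian");          // assumed cluster name
        HazelcastInstance hz = HazelcastClient.newHazelcastClient(config);

        // Cache a token with a time-to-live so stale sessions expire.
        IMap<String, String> sessions = hz.getMap("session-tokens");
        sessions.put("sess-123", "opaque-token", 30, TimeUnit.MINUTES);
        System.out.println(sessions.get("sess-123"));
    }
}
```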

Supported LLM providers

Nine providers are supported out of the box:

Provider        Key identifier
--------------  --------------
OpenAI          openai
Anthropic       anthropic
Azure OpenAI    azure-openai
AWS Bedrock     bedrock
Google Gemini   gemini
Mistral AI      mistral-ai
HuggingFace     huggingface
Ollama          ollama
NuExtract       nu-extract

See LLM providers for configuration details.

Integration paths

Three integration paths are available:

  • Generic web application: load the JavaScript and CSS bundles from the gateway's /api/web-components/chat/script and /api/web-components/chat/style endpoints, then call window.createChat(). See Embed in a web application.
  • ARender document viewer: adds an AI menu to the ARender top panel; documents are accessed through the ARender DSB API. See Integrate with ARender.
  • FlowerDocs ECM: embeds the chat panel via FlowerDocs scope files and uses FlowerDocsProvider in the gateway. See Integrate with FlowerDocs.

Extension mechanisms

  • Custom tool plugins: write a @ToolService class, package it as a shaded JAR, and drop it in plugins/. The LLM can then call its methods as tools (see the first sketch after this list).
  • Custom ServiceHelpers: write a @HelperService class and expose it as a named expression object in Thymeleaf prompt templates.
  • Custom auth providers: implement the AuthProvider interface in the gateway to support any identity system (see the second sketch after this list).
  • Prompt and goal customization: define per-tenant overrides in prompts.yml and goals.yml, or manage them live via the Admin API.
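
To make the plugin mechanism concrete, here is a hedged sketch of a tool plugin. Only the @ToolService annotation is named above; the method-level @Tool annotation, its description attribute, and the SDK packages are assumptions about the plugin contract.

```java
// Hypothetical tool plugin. @ToolService comes from the uxopian-ai plugin
// SDK (import omitted; the package name is product-specific). The
// method-level @Tool annotation and its attributes are assumed.
@ToolService
public class InvoiceTools {

    // Once the shaded JAR is dropped in plugins/, the LLM can call this
    // method as a tool during a conversation.
    @Tool(description = "Look up the payment status of an invoice")
    public String invoiceStatus(String invoiceNumber) {
        // Query the enterprise system of record here.
        return "Invoice " + invoiceNumber + " is PAID";
    }
}
```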
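
Likewise, a hedged sketch of a custom auth provider. The AuthProvider interface is real (it is named above), but the method name, the reactive return type, and the Identity record here are assumptions about its shape.

```java
// Hypothetical AuthProvider: the interface exists in the gateway, but its
// exact shape (method name, return type, identity model) is assumed here.
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

public class ApiKeyAuthProvider implements AuthProvider {

    // Stand-in for the gateway's real identity type.
    record Identity(String user, String tenant) {}

    @Override
    public Mono<Identity> authenticate(ServerWebExchange exchange) {
        // Reject requests without the expected header; otherwise resolve an
        // identity the gateway can forward to uxopian-ai as headers.
        String key = exchange.getRequest().getHeaders().getFirst("X-Api-Key");
        if (key == null) {
            return Mono.error(new IllegalStateException("missing API key"));
        }
        return Mono.just(new Identity("user-for-" + key, "default"));
    }
}
```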

Key concepts

Concept        Description
-------------  -------------------------------------------------------------------------
Tenant         Primary isolation unit. All data is scoped to a tenant ID.
Conversation   A chat session. Contains a sequence of Requests.
Request        A single LLM round-trip: inputs, rendered prompt, response, token usage.
Prompt         A named Thymeleaf template defining a role and content.
Goal           A named group of ordered prompt references with optional filters.
Plugin         A shaded JAR in plugins/ loaded at startup by IntegrationLoader.

Next steps

  1. Follow Quickstart with Docker Compose to run a local stack.
  2. Read System architecture for the full component diagram and request flow.
  3. Read Authentication and gateway before any deployment.