#AI
Posts tagged #AI · 2 posts
- Domino IQ RAG: A Built-In Pipeline That Wires Your NSFs Straight Into a Local LLM
Domino 14.5.1 adds RAG (Retrieval-Augmented Generation) support to Domino IQ, running the LLM, embedding model, and vector database all on the Domino server itself — local execution, with NSF ACL and Readers fields enforced natively. This guide walks through prerequisites, the two-phase dominoiq.nsf configuration, updall vectorization, calling LLMReq from LotusScript, and why this is a different species from the OpenAI + Pinecone pipeline.
2026.05.05
- Domino IQ: What It Means to Run an LLM Inside the Domino Server
Domino 14.5 introduces Domino IQ — an AI inference engine baked into the Domino server backend, callable from LotusScript via NotesLLMRequest / NotesLLMResponse without ever leaving the box. This guide covers the architecture, hardware requirements, install flow, the two-phase dominoiq.nsf configuration, the Command and System Prompt document model, and why this trade-off works for existing Domino shops where bolting on OpenAI doesn't.
2026.05.05