- LLM Prompt Design: At the foundational level, all our prompts are crafted to anchor LLM responses to the context they analyze. We validate every prompt change by measuring a “hallucination rate” metric on internal validation datasets.
- Post-Processing Verification: After the LLM produces an answer, we run a separate “verification” step. This step uses another LLM together with a set of heuristics to confirm that the information supporting the answer genuinely exists in the cited sources.
- User Level: Ultimately, all answers seen by users are supported by explanations and citations that link to the sources where the answer was found. This approach enables straightforward auditing and verification of answers.
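The “hallucination rate” mentioned above can be thought of as the fraction of answers whose claims are not grounded in the analyzed context. A minimal sketch, with assumed names and dataset shape: `is_supported` stands in for the actual grounding judge, which in practice could itself be an LLM; the lexical check below is only an illustrative placeholder.

```python
def hallucination_rate(examples, is_supported):
    """Fraction of answers not grounded in their context.

    `examples` is a list of {"answer": ..., "context": ...} dicts;
    `is_supported(answer, context)` is any grounding check, e.g. an
    LLM judge or a lexical heuristic.
    """
    unsupported = sum(
        1 for ex in examples if not is_supported(ex["answer"], ex["context"])
    )
    return unsupported / len(examples)


def lexical_support(answer, context):
    # Crude stand-in judge: every answer token must appear in the context.
    return all(tok in context.lower() for tok in answer.lower().split())


dataset = [
    {"answer": "the cache holds 128 entries",
     "context": "The cache holds 128 entries."},
    {"answer": "retries default to 5",
     "context": "Retries default to 3."},
]
rate = hallucination_rate(dataset, lexical_support)  # second answer is unsupported
```

Tracking this number across prompt revisions is what makes prompt changes comparable: a revision ships only if the rate does not regress on the validation sets.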
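One heuristic the verification step could apply is a direct grounding check: every span the answer cites must actually occur in one of the source documents. The function names and whitespace normalization below are illustrative assumptions, not the production implementation.

```python
import re


def _normalize(text):
    # Collapse whitespace and case so formatting differences
    # don't cause a false mismatch.
    return re.sub(r"\s+", " ", text.lower()).strip()


def citations_grounded(cited_spans, sources):
    """Return True only if every cited span appears (after normalization)
    in at least one of the source documents."""
    normalized_sources = [_normalize(s) for s in sources]
    return all(
        any(_normalize(span) in src for src in normalized_sources)
        for span in cited_spans
    )


sources = ["The service retries failed calls up to three times."]
citations_grounded(["retries failed calls"], sources)   # grounded
citations_grounded(["retries ten times"], sources)      # not grounded
```

An LLM-based verifier would catch paraphrased support that this exact-match heuristic misses; the two are complementary, with the cheap check filtering the easy cases first.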