Understanding the AI Models Used by Vincent
Learn about Vincent's foundational models, and how our RAG-based legal knowledge base is engineered to use the best of OpenAI, Google, Anthropic, and others for legal research.
Summary
This guide provides a transparent overview of the technology and security behind Vincent AI. Learn about our agnostic model architecture (using OpenAI, Google, and Anthropic), our RAG framework for preventing hallucinations, and the strict governance policies that ensure your data remains confidential.
Why This is Important
Using generative AI in a legal context requires a higher standard of trust. Vincent is built on an agnostic model architecture, meaning we are not tied to a single provider. This allows us to give you the best-performing models on the market while wrapping them in a secure, legal-grade framework that prioritizes accuracy, confidentiality, and zero data retention.
1. Our Foundational Model Lineup
Vincent incorporates a range of large language models (LLMs) at different stages of the question-answering process, and our system intelligently selects the best model for each specific task. Our lineup currently includes leading models from:
OpenAI: Renowned for its powerful and versatile GPT models.
Google: Leveraging the advanced capabilities of its Gemini family of models.
Anthropic: Known for its focus on AI safety and producing reliable, helpful models like Claude.
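The task-based model selection described above can be pictured as a routing table that maps each kind of task to a preferred provider. This is only an illustrative sketch: the task categories, model names, and the `route_model` helper are assumptions for the example, not Vincent's actual implementation.

```python
# Illustrative sketch of task-based model routing. The task categories and
# model names below are assumptions, not Vincent's real configuration.
ROUTING_TABLE = {
    "summarization": ("OpenAI", "gpt-4o"),
    "long_context_review": ("Google", "gemini-1.5-pro"),
    "drafting": ("Anthropic", "claude-3-5-sonnet"),
}

# Fallback when a task type has no dedicated entry.
DEFAULT_MODEL = ("OpenAI", "gpt-4o")

def route_model(task_type: str) -> tuple[str, str]:
    """Pick a (provider, model) pair for a task, falling back to a default."""
    return ROUTING_TABLE.get(task_type, DEFAULT_MODEL)
```

The point of the pattern is that callers never hard-code a provider; swapping in a newer model is a one-line change to the table.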
2. Accuracy: The RAG Architecture
Our primary goal is to provide answers you can trust and minimize the risk of "hallucinations" (invented information). We achieve this through our Retrieval-Augmented Generation (RAG) architecture.
Grounding Answers in Verifiable Sources: Vincent does not answer questions from a generic, open-internet model. Instead, it first retrieves relevant documents from vLex's curated and authoritative legal database. The AI model is then instructed to generate its answer based only on these verified sources.
Mitigating Bias: We mitigate the risk of bias by sourcing our data exclusively from authoritative primary sources, such as courts and legislatures, and reputable publishers.
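The retrieve-then-generate flow described above can be sketched in a few lines. This is a generic illustration of the RAG pattern, not Vincent's pipeline: `search_legal_database` and `llm_generate` are hypothetical placeholders supplied by the caller.

```python
def answer_with_rag(question, search_legal_database, llm_generate, k=5):
    """Generic retrieval-augmented generation loop (illustrative sketch).

    1. Retrieve the top-k relevant documents from a curated database.
    2. Instruct the model to answer ONLY from those documents.
    Both callables are hypothetical placeholders, not real Vincent APIs.
    """
    sources = search_legal_database(question, limit=k)
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(sources))
    prompt = (
        "Answer using ONLY the numbered sources below. Cite them as [n]. "
        "If the sources are insufficient, say so rather than guessing.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm_generate(prompt), sources
```

Because the prompt forbids answering beyond the retrieved sources and asks for numbered citations, the output can be checked claim-by-claim against the documents that were actually retrieved, which is what makes hallucinations detectable.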
3. Model Security & Data Privacy
We ensure that your interactions with our AI models are always private and secure.
Trusted, Vetted Providers: We only integrate with state-of-the-art foundational models from trusted, industry-leading vendors that meet our security standards.
Zero Data Retention (ZDR): We have strict contractual agreements with our AI model providers that enforce a "zero data retention" policy. This means your data (prompts and documents) is never used to train their models and is not stored by them any longer than necessary to process your request.
4. Governance and Risk Management
We have a formal process to manage the risks associated with AI and to continuously improve our systems.
Continuous Testing: Before deploying any new model or feature, we conduct extensive testing to ensure its accuracy, reliability, and safety.
User Feedback Loop: Vincent includes a built-in feedback mechanism that allows you to report any concerns or errors. This feedback is reviewed by our team and is a critical part of our ongoing quality monitoring.
Best Practices & Pro Tips
Trust but Verify: Always use the verification features built into Vincent. Hover over the blue highlighted text in an answer to see the exact source quote, and click the links in the "Legal Authorities" panel to review the full document.
Focus on the Output: Because of our agnostic approach, you don't need to worry about which specific model is running in the background. Vincent automatically optimizes for the best balance of quality and speed for your specific query.