AI Transparency - TrustThis
How our Artificial Intelligence works
Last updated: October 15, 2025
TrustThis uses Artificial Intelligence to automate privacy and security analyses. This page documents how our AI works, what its limitations are, and how we ensure responsible governance. Algorithmic transparency is an essential part of our mission.
1. Overview: How Our AI Works
Our automated audit system follows a 5-stage pipeline with mandatory human review:
1. Public Data Collection: Automated scraping of privacy policies, terms of use, and public company documentation. No personal user data is sent to the AI.
2. Preprocessing: HTML cleaning, text normalization, and script removal. Reduces document size by 30-40% without information loss.
3. Evidence Mapping (optional): AI identifies the relevant sections for each criterion before the full audit. Improves accuracy by 20-30%.
4. Dual Audit (Gemini + DeepSeek): Two AI models independently analyze the 90 ISO 27001/27701 criteria. The AITS v3 system combines their results via logical union, which reduces false negatives.
5. Mandatory Human Review: A privacy expert reviews all results before publication. AI suggests; humans decide.
💡 Fundamental Principle:
AI is a support tool, not an oracle. Final decisions about scores and recommendations always go through qualified human review.
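To make the five stages above concrete, here is a minimal TypeScript sketch of the flow. The helper names and types (collectPublicPolicy, auditWithModel, CriterionResult, and so on) are assumptions made for this page rather than our internal code; what matters is the ordering of the stages and the fact that the pipeline always ends in a human review queue.

```typescript
// Illustrative sketch only: names, types, and signatures are assumptions, not internal code.

interface CriterionResult {
  criterionId: string; // one of the 90 AITS criteria
  approved: boolean;
  evidence: string[];  // excerpts cited from the public policy
}

// Stage 1: only public documents are fetched; no personal user data is involved.
async function collectPublicPolicy(url: string): Promise<string> {
  return await (await fetch(url)).text();
}

// Stage 2: HTML cleaning, text normalization, script removal (~30-40% size reduction).
function preprocess(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

// Stages 3-4: evidence mapping plus one independent run per model (stubbed here;
// the real calls go through OpenRouter, see section 4).
async function auditWithModel(model: string, policyText: string): Promise<CriterionResult[]> {
  return [];
}

// Stage 4 (combination): logical union (OR) - a criterion passes if either model approved it.
function combineByLogicalUnion(a: CriterionResult[], b: CriterionResult[]): CriterionResult[] {
  const byId = new Map(b.map((r) => [r.criterionId, r]));
  return a.map((r) => ({
    ...r,
    approved: r.approved || (byId.get(r.criterionId)?.approved ?? false),
  }));
}

// Stage 5: results are only ever queued for mandatory human review, never auto-published.
async function runAuditPipeline(policyUrl: string) {
  const text = preprocess(await collectPublicPolicy(policyUrl));
  const [gemini, deepseek] = await Promise.all([
    auditWithModel("gemini", text),
    auditWithModel("deepseek", text),
  ]);
  return {
    results: combineByLogicalUnion(gemini, deepseek),
    status: "pending_human_review" as const,
  };
}
```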
2. Principles and References
Our AI system is developed following these international frameworks and principles:
ISO 27001/27701/42001
- ISO 27001: Information security (baseline)
- ISO 27701: Privacy (90 AITS criteria derived from here)
- ISO 42001: Management System for AI (governance)
🇺🇸 NIST AI Risk Management Framework
- Govern: Defined policies and responsibilities
- Map: AI risks identified and documented
- Measure: Performance and fairness metrics
- Manage: Continuous risk mitigation
🇪🇺 EU AI Act
- Classification: High-risk system (Annex III, 5b)
- Transparency: Mandatory information for users
- Human Oversight: Human review implemented
- Contestation Rights: Right to contest decisions
🇧🇷 LGPD / 🇪🇺 GDPR
- LGPD Art. 20: Right to review of automated decisions
- GDPR Art. 22: Right not to be subject to solely automated decisions
- Explainability: Clear justifications for scores
🎯 AITS Framework (proprietary to TrustThis)
90-criteria system derived from ISO 27701, NIST Privacy Framework, and GDPR. Evaluates privacy and security compliance in a structured and auditable way.
3. Where We Use AI
AI is used only for specific tasks where the benefit demonstrably outweighs the risk:
✅ Privacy Criteria Classification
What it does: Analyzes privacy policies and classifies compliance with 90 ISO 27701 criteria.
Why AI here: Large text volume (10k-50k tokens), complex patterns, and multiple possible interpretations. A human alone would take 8-12 hours per audit; AI + human takes 30-60 minutes.
Human review: ✅ Mandatory before publication
✅ Policy Change Detection
What it does: Compares policy versions and identifies significant changes (new data collection, changes in legal bases).
Why AI here: A semantic diff outperforms a textual diff. AI detects "now we collect biometrics" even if the wording has completely changed (see the sketch below).
Human review: ✅ Mandatory for high-impact changes
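This page does not specify how the semantic comparison is implemented; one common approach, sketched below as an assumption, is to compare embeddings of corresponding policy sections rather than raw text, so a section whose meaning changed gets flagged even when no words match. The embed() stub and the similarity threshold are purely illustrative.

```typescript
// Hedged sketch: embedding-based change detection. The embed() stub stands in for a real
// embedding model, and sections are assumed to be aligned by position for simplicity.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function embed(text: string): Promise<number[]> {
  // Placeholder with a fixed dimension; a real system would call an embedding model.
  const vec = new Array(64).fill(0);
  for (let i = 0; i < text.length; i++) vec[i % 64] += text.charCodeAt(i) / 255;
  return vec;
}

async function findSemanticChanges(oldSections: string[], newSections: string[], threshold = 0.85) {
  const changes: { index: number; newText: string }[] = [];
  for (let i = 0; i < newSections.length; i++) {
    const oldText = oldSections[i] ?? "";
    if (oldText === "") {
      changes.push({ index: i, newText: newSections[i] }); // brand-new section
      continue;
    }
    const [oldVec, newVec] = await Promise.all([embed(oldText), embed(newSections[i])]);
    if (cosineSimilarity(oldVec, newVec) < threshold) {
      changes.push({ index: i, newText: newSections[i] }); // meaning drifted even if wording differs
    }
  }
  return changes; // high-impact changes still go through mandatory human review
}
```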
✅ Explanatory Summaries
What it does: Generates plain-language summaries of technical findings (e.g., "This company does not specify how long it keeps your data").
Why AI here: Translation of technical jargon to accessible language. Improves UX without compromising accuracy.
Human review: ✅ Always reviewed to avoid excessive simplification
❌ What We DON'T Do with AI:
- Final decisions without humans: AI suggests, expert decides
- Scoring people: Only companies/systems are audited, never individuals
- Training with customer data: Models are external (OpenAI, Google, DeepSeek). We don't train our own models.
- Hiring/credit decisions: Our scores should not be used for decisions about people
- Sentiment analysis: We don't do emotional or psychographic analysis
4. Models and Providers
We route requests through OpenRouter, which gives us the flexibility to choose the best model for each task.
Main Models (AITS v3 System):
Primary Model: Google Gemini 2.0 Flash Thinking
- Provider: Google LLC (via OpenRouter)
- Context Window: 1M tokens (allows processing extensive policies)
- Why we chose it: Excellent at legal/compliance reasoning, supports thinking mode (shows chain of thought)
- Approval rate: ~38-46% of 90 criteria (conservative, reduces false positives)
Secondary Model: DeepSeek R1
- Provider: DeepSeek (via OpenRouter)
- Context Window: 32k tokens
- Why we chose it: A different perspective (different training) helps detect Gemini's false negatives
- Dual System: Logical union (OR); a criterion is approved if either model approves it (sketched below)
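As an illustration of how a single criterion might be evaluated by each model through OpenRouter and then combined under the union rule, here is a hedged TypeScript sketch. The prompt, the JSON answer format, and the model identifiers are assumptions for this example, not our production prompts or exact model slugs.

```typescript
// Hedged sketch: one criterion, two models, combined via logical union (OR).
// Model slugs, prompt wording, and response parsing are illustrative assumptions.

interface ModelVerdict {
  approved: boolean;
  evidence: string; // excerpt of the policy the model points to
}

async function evaluateCriterion(model: string, criterion: string, policyText: string): Promise<ModelVerdict> {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model,
      messages: [
        {
          role: "system",
          content:
            'You assess privacy-policy compliance. Answer only with JSON: {"approved": boolean, "evidence": string}.',
        },
        { role: "user", content: `Criterion: ${criterion}\n\nPolicy:\n${policyText}` },
      ],
    }),
  });
  const data = await response.json();
  // Assumes the model honored the JSON instruction; production code would validate this.
  return JSON.parse(data.choices[0].message.content) as ModelVerdict;
}

async function dualAudit(criterion: string, policyText: string): Promise<ModelVerdict> {
  const [gemini, deepseek] = await Promise.all([
    evaluateCriterion("google/gemini-2.0-flash-thinking-exp", criterion, policyText), // assumed slug
    evaluateCriterion("deepseek/deepseek-r1", criterion, policyText),                 // assumed slug
  ]);
  return {
    approved: gemini.approved || deepseek.approved, // union reduces false negatives
    evidence: [gemini.evidence, deepseek.evidence].filter(Boolean).join(" | "),
  };
}
```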
Privacy Guarantees:
- No personal user data sent: Only public policies are processed
- Cache disabled: No data stored on AI provider servers
- Processing region: United States (OpenRouter)
- Contractual safeguards: DPAs signed, SCCs in effect
- We don't use data to train models: Models are external and pre-trained
Model change history: View complete AI changelog
5. Known Limitations and Risks
AI is not perfect. We document below the main known risks and how we mitigate them:
⚠️ Hallucination
What it is: AI "invents" facts that don't exist in the document.
Example: Claiming a company has ISO 27001 certification when there's no evidence in the policy.
Mitigation:
- Dual audit (two models analyze each criterion independently; a criterion is approved if at least one approves)
- Mandatory human review verifies the cited evidence
- The contestation system allows users to report inconsistencies
⚠️ Outdated Information
What it is: AI models are trained up to a cutoff date. Recent legal changes may not be reflected.
Example: A law approved in 2025, but the model was trained on data only up to 2024.
Mitigation:
- ISO criteria are stable (gradual changes)
- Human reviewers are continuously updated on legal changes
- Criteria are maintained and updated in code (they do not depend only on the model)
⚠️ Document Ambiguity
What it is: Vague or contradictory privacy policies lead to uncertain analyses.
Example: "We may share data with partners" - which partners? For what purposes?
Mitigation:
- AI signals when text is ambiguous ("insufficient evidence")
- Score reflects uncertainty (criterion not approved if there's no clarity)
- Recommendations include a suggestion that the company clarify the ambiguous language
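One simple way to represent this, assuming a verdict type invented for this page rather than our actual schema, is to make "insufficient evidence" an explicit outcome so that ambiguity lowers the score instead of being silently rounded to approved or rejected:

```typescript
// Illustrative assumption: an explicit "insufficient_evidence" verdict.
type CriterionVerdict = "approved" | "not_approved" | "insufficient_evidence";

interface CriterionFinding {
  criterionId: string;
  verdict: CriterionVerdict;
  evidence: string[];      // quoted policy excerpts; empty when evidence is insufficient
  recommendation?: string; // e.g. "Specify which partners receive data and for what purposes"
}

// Only explicit approvals count toward the score; ambiguity is never rewarded.
function countApproved(findings: CriterionFinding[]): number {
  return findings.filter((f) => f.verdict === "approved").length;
}
```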
Quality Assurance (QA) System:
- Audits of audits: 10% of audits are reviewed by a second expert
- Performance metrics: False positive and false negative rates are monitored
- Feedback loop: User contestations feed back into improvements
- Tests on known cases: A suite of 50 policies with expected results (see the sketch below)
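As a rough illustration of how such a known-case suite can be scored, the sketch below runs an audit function over policies with expected outcomes and reports false positive and false negative rates. The types and the audit function signature are assumptions, not our internal QA harness.

```typescript
// Hedged sketch of the QA idea above: regression-test the pipeline against known cases.

interface ExpectedCase {
  policyText: string;
  expected: Record<string, boolean>; // criterionId -> should it be approved?
}

type AuditFn = (policyText: string) => Promise<Record<string, boolean>>;

async function evaluateSuite(cases: ExpectedCase[], runAudit: AuditFn) {
  let fp = 0, fn = 0, actualPositives = 0, actualNegatives = 0;

  for (const testCase of cases) {
    const actual = await runAudit(testCase.policyText);
    for (const [criterionId, shouldApprove] of Object.entries(testCase.expected)) {
      const approved = actual[criterionId] ?? false;
      if (shouldApprove) actualPositives++; else actualNegatives++;
      if (approved && !shouldApprove) fp++; // AI approves, but the expected answer is "reject"
      if (!approved && shouldApprove) fn++; // AI misses a criterion that should pass
    }
  }

  return {
    falsePositiveRate: actualNegatives > 0 ? fp / actualNegatives : 0,
    falseNegativeRate: actualPositives > 0 ? fn / actualPositives : 0,
  };
}
```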
6. Human Review and Contestation
Human review is mandatory before any audit result is published. AI accelerates the process, but the final decision is always human.
✅ Human Review Process:
- Initial AI analysis: Gemini + DeepSeek process the 90 criteria (30-60 min)
- Technical validation: A privacy expert reviews the evidence cited by the AI (1-2 h)
- Consistency verification: Does the final score reflect the findings? Do the recommendations make sense? (30 min)
- Publication approval: The result becomes visible only after human approval
How to Contest an Analysis:
If you disagree with an audit result or believe there's an error, you can request a review:
1. Contact Us:
Email: privacy@trustthis.org
2. Provide Details:
- Name of audited company
- Specific criterion/criteria contested
- Evidence supporting your position (link to policy section, etc.)
- Why you believe the analysis is incorrect
3. Independent Review:
A second expert (not involved in the original analysis) reviews the case with evidence from both parties.
4. Response SLA:
- Receipt confirmation: 2 business days
- Preliminary analysis: 5 business days
- Complete review + response: 10 business days
5. Resolution:
If the review concludes the analysis was incorrect, the score is updated and a notification is sent. If the analysis was correct, we provide a detailed justification.
100% Human Audit (Optional):
For enterprise clients, we offer fully manual audit flow (without AI) upon request. Delivery time: 2-3 weeks. Additional cost applies. Contact: compliance@trustthis.org
7. AI Impact Assessments (AIA)
We conduct AI Impact Assessments periodically to identify and mitigate risks of bias, discrimination, and other adverse impacts.
When We Conduct AIAs:
- Before launching new AI system (baseline)
- Every 6 months (periodic review)
- After significant changes in models or criteria
- When we identify unexpected behavior
- Upon request from regulatory authorities
Fairness Metrics (Public Summary):
Last AIA: October 2025 (baseline)
Approval Rate by Category:
- Small companies (<50 employees): 32% criteria approved
- Medium companies (50-500): 38% criteria approved
- Large companies (>500): 42% criteria approved
- Analysis: Expected difference (larger companies have more compliance resources)
False Positive/Negative Rates:
- False positives: ~8% (the AI approves a criterion, but human review rejects it)
- False negatives: ~12% (the AI rejects a criterion, but dual audit plus human review approves it)
- Goal: Reduce FN to <5% by Q2 2026
Complete AIA reports: Available under confidentiality agreement for researchers and auditors. Contact: compliance@trustthis.org
8. Auditability
We understand that transparency includes allowing external auditing of our AI systems.
What We Provide (under NDA):
- Decision criteria: Source code of the 90 ISO criteria (already public at /lib/audit/criteria-definitions-iso.ts)
- Processing logs: Timestamps, models used, versions (anonymized)
- Performance metrics: Latency, approval rate, score distribution
- Prompts (structure): General prompt templates (without specific cases)
- AIA reports: Complete impact assessments
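For a feel of what a criterion definition can look like, here is a hypothetical TypeScript sketch. The field names and example values are assumptions for illustration only; the real schema is the one in the public file /lib/audit/criteria-definitions-iso.ts referenced above.

```typescript
// Hypothetical shape of a criterion definition; not the real schema of the public file.

interface CriterionDefinition {
  id: string;              // e.g. "AITS-042" (illustrative identifier)
  isoReference: string;    // the ISO 27701 / NIST / GDPR provision the criterion derives from
  title: string;
  description: string;     // what the policy must demonstrate to pass
  evidenceHints: string[]; // phrases the evidence-mapping stage looks for
}

const exampleCriterion: CriterionDefinition = {
  id: "AITS-042",
  isoReference: "ISO/IEC 27701 (retention-related control)",
  title: "Data retention period",
  description: "The policy states how long personal data is kept, or the criteria used to define that period.",
  evidenceHints: ["retention", "how long", "storage period", "delete after"],
};
```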
🚫 Scope and Limits:
- We don't provide: Word-for-word prompts (to avoid gaming) or complete model responses (intellectual property)
- We don't allow: Reverse engineering of the proprietary models (a limitation imposed by providers such as Google and DeepSeek)
- Audit limited to: Academic researchers, certified auditors, regulatory authorities
📧 How to Request Audit Access:
Contact: compliance@trustthis.org
Required information:
- Institution/Organization
- Audit purpose (research, compliance, due diligence)
- Desired scope (which system components)
- Timeline (completion deadline)
9. AI FAQ
Q: Do you use data from my audits to train AI?
A: No. We use external pre-trained models (Google Gemini, DeepSeek). We don't train our own models. Only public privacy policies are processed. User data is never sent to AI.
Q: Can I trust AI results 100%?
A: No. AI is a support tool, not an oracle. That's why we have mandatory human review and a contestation system. Always review results critically.
Q: How do you ensure AI is not biased?
A: We use dual audit (two models with different training), fairness metrics in periodic AIAs, and human review. We monitor approval rate by company category to detect bias.
Q: What happens if AI makes a serious error?
A: We have incident response process: (1) Immediate score correction, (2) Affected user notification, (3) Root cause analysis, (4) Public record in AI changelog, (5) Mitigation implementation to prevent recurrence.
Q: Can I opt for 100% human audit (without AI)?
A: Yes. For enterprise clients, we offer complete manual flow upon request. Delivery time: 2-3 weeks vs. 30-60min with AI. Additional cost applies. Contact: compliance@trustthis.org
Q: My company was poorly evaluated. Can I contest?
A: Absolutely. We have a formal contestation process with a 10-business-day SLA. See the "Human Review and Contestation" section above or contact: privacy@trustthis.org
Q: How do you choose which AI models to use?
A: We evaluate models on a suite of 50 known cases. Criteria: accuracy, latency, cost, long context window support, and legal/compliance reasoning capability. Changes are recorded in the AI changelog.
Q: Do you share data with Google/DeepSeek?
A: Only public privacy policies are sent for processing (available on the internet). Cache is disabled (data not stored). DPAs and SCCs in effect. No personal user data is shared.
TrustThis Commitment
Transparency about AI use is an essential part of our mission. If you have questions not answered here, disagree with an analysis, or want to audit our systems, contact us:
- Audit review: privacy@trustthis.org
- AI technical questions: compliance@trustthis.org
- General contact: team@trustthis.org
This page is part of our commitment to responsible AI governance and compliance with EU AI Act, NIST AI RMF, and ISO 42001.
Related Documents: