
AITS Framework

AITS — AI Trusty Score Framework — 20 privacy and AI governance criteria (ISO / LGPD)

What Is TrustThis and AITS?

TrustThis is a platform that provides the AITS (AI Trusty Score), an index that measures how transparently corporate software and digital services publicly communicate their privacy practices, with a specialized focus on AI features, AI governance, and AI management.

The index is based on 20 criteria (12 traditional privacy + 8 AI-specific) aligned with international ISO standards. Audits analyze only publicly accessible documents and information, such as privacy policies, terms of use, official statements, and AI usage documentation.

Why "AITS"?

"AI"

  • Specialized focus on AI governance and transparency
  • Evaluates how companies communicate their use of artificial intelligence
  • Aligned with emerging AI standards such as ISO/IEC 42001

"Trusty"

  • Measures reliability in privacy and data protection
  • Includes AI-specific privacy aspects
  • Based exclusively on public information

"Score"

  • It is a quantitative indicator, not a certification
  • A screening and benchmarking tool
  • Covers privacy criteria and AI-specific ones

Why Is AITS Important?

For Buyers

  • Efficient vendor screening
  • Risk reduction through transparency
  • Informed decisions on AI and privacy
  • Due diligence acceleration

For Vendors

  • Competitive differentiation
  • Transparency gap identification
  • Benchmark against competitors
  • Preparation for AI regulations

AI Governance and Transparency

Criteria 1-8 are specific to software that uses AI and evaluate:

  • Declaration of AI use in features
  • Documented AI governance policy
  • Transparency about AI-powered features
  • Explainability of automated decisions
  • Right to contest AI decisions
  • Risk assessment (bias, discrimination)
  • Human oversight in high-risk systems
  • Ethical principles in AI development/use

Hyper-Critical Criteria (Weight 5): AI Data Use (C13-C15)

Criteria C13-C15 have weight 5 (hyper-critical) because they answer the most important questions about how the company uses your data for AI:

  • C13 (Input/Output Retention): "How long do they keep my AI prompts and responses?"
  • C14 (Training Use): "Is my data used to train the AI models?"
  • C15 (Training Opt-out): "Can I prevent my data from being used for training?"

These 3 criteria represent 50% of the AI score (45 out of 90 points), reflecting their critical importance for privacy in artificial intelligence systems.
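To see where those numbers come from, here is a minimal arithmetic sketch. It assumes each of C13-C15 carries weight 5 at the maximum EV3 evidence level of 3, and that the remaining five AI-specific criteria carry weight 3; this breakdown is illustrative, not an official formula.

```python
# Illustrative ceiling for the AI portion of the score.
# Assumptions: C13-C15 at weight 5, the other five AI criteria at weight 3,
# and a maximum EV3 evidence level of 3 (not an official TrustThis formula).
MAX_EVIDENCE = 3
hyper_critical = 3 * 5 * MAX_EVIDENCE   # C13-C15: 45 points
other_ai = 5 * 3 * MAX_EVIDENCE         # remaining AI criteria: 45 points
total_ai = hyper_critical + other_ai    # 90 points in total
print(hyper_critical / total_ai)        # 0.5 -> 50% of the AI score
```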

Assessment Methodology

Criteria Structure

  • Criteria 1-8: specific to software that uses AI
  • Criteria 9-20: traditional privacy (always applicable)

Weight System

  • Weight 5: hyper-critical criteria (C13-C15: AI data use)
  • Weight 3: critical criteria

EV3 Evidence System

Each criterion is evaluated at three levels: 0 (no evidence), 1 (partial evidence), or 3 (consistent evidence).
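As a concrete illustration of how the weights and EV3 levels combine, the following Python sketch computes a weighted score as a percentage of the maximum attainable points. The class and function names are hypothetical and not part of the official TrustThis implementation.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    code: str      # e.g. "C13"
    weight: int    # 5 = hyper-critical, 3 = critical
    evidence: int  # EV3 level: 0 = none, 1 = partial, 3 = consistent

def aits_score(criteria: list[Criterion]) -> float:
    """Weighted score as a percentage of the maximum attainable points."""
    earned = sum(c.weight * c.evidence for c in criteria)
    maximum = sum(c.weight * 3 for c in criteria)  # 3 is the top EV3 level
    return 100 * earned / maximum

# Hypothetical example: the three hyper-critical AI data-use criteria
sample = [
    Criterion("C13", weight=5, evidence=3),  # input/output retention
    Criterion("C14", weight=5, evidence=1),  # training use
    Criterion("C15", weight=5, evidence=0),  # training opt-out
]
print(f"{aits_score(sample):.1f}%")  # 44.4%
```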

AITS Criteria

The AITS index uses 20 essential criteria to evaluate transparency in privacy and AI governance:

20 Essential Criteria: Privacy + AI Governance

  • 12 criteria for traditional privacy
  • 8 criteria specific to AI
  • Analysis time: 10-15 seconds
  • Use: public rankings and vendor screening
  • Coverage: LGPD, GDPR, CCPA, and ISO standards
  • Critical privacy and AI criteria

AITS Limitations

AITS is a transparency index, not a certification. It does NOT evaluate:

  • Actual technical implementation of controls
  • Internal practices not publicly documented
  • Full regulatory compliance
  • Technical system security
  • Actual performance or accuracy of AI systems

Legal and Regulatory Foundation

LGPD

The General Data Protection Law (Lei Geral de Proteção de Dados, Law 13.709/2018) is the main foundation for the transparency criteria, data subject rights, and data governance in Brazil.