
What is AITS? How the AI transparency index helps companies evaluate suppliers

AITS transforms public evidence into a comparable index of AI and privacy transparency. Discover how it standardizes supplier mapping and comparison under the EU AI Act.

Trust This Team

Share this article:
Last updated: February 7, 2026
What is AITS? How the AI transparency index helps companies evaluate suppliers

What is AITS and why was it created?

Companies that procure enterprise software face a critical challenge: how can they evaluate the AI compliance of dozens of suppliers quickly, in a standardized and auditable way? Manual analysis of AI governance policies is time-consuming, inconsistent across evaluators, and rarely allows fair comparison.

The AITS (AI Trust Score) methodology was developed by Trust This to solve exactly this problem. It is an index that measures how transparently software and digital services publicly communicate their AI governance and privacy practices, with special emphasis on systems that use artificial intelligence.

Unlike certifications or technical audits that require internal access to companies, AITS works exclusively with public evidence: AI governance policies, terms of use, official documentation, and statements about AI use. Each evaluation is based on specific URLs, versions, and dates, creating a complete and reproducible audit trail.

How does the AITS methodology work in practice?

The AITS methodology evaluates software through 20 criteria inspired by the EU AI Act and international AI standards (ISO/IEC 42001, ISO/IEC 23894, and ISO/IEC 42005).

The structure is divided into two layers:

  • Criteria 1-8: AI governance and transparency
  • Criteria 9-20: traditional privacy
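
As a purely illustrative sketch (the layer names and helper below are assumptions, not Trust This's internal schema), the two-layer structure can be pictured as a simple mapping from layers to criterion numbers:

```python
# Illustrative only: modeling the two AITS layers as criterion ID ranges.
# Layer names and the helper function are assumptions for this sketch.
AITS_LAYERS = {
    "ai_governance_and_transparency": range(1, 9),   # criteria 1-8
    "traditional_privacy": range(9, 21),             # criteria 9-20
}

def layer_of(criterion_id: int) -> str:
    """Return the layer a given criterion number belongs to."""
    for layer, ids in AITS_LAYERS.items():
        if criterion_id in ids:
            return layer
    raise ValueError(f"Unknown criterion: {criterion_id}")

print(layer_of(5))   # ai_governance_and_transparency
print(layer_of(14))  # traditional_privacy
```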

How does AITS evaluate transparency in AI systems?

When software declares AI use (criterion 1), 16 additional criteria specific to algorithmic governance are applied:

  • Explicit declaration of AI use in functionalities
  • AI governance policy publicly documented
  • Transparency about specific functionalities that use AI
  • Explainability of automated decisions by AI systems
  • Right to contest decisions made by AI
  • Specific risk assessment (algorithmic bias, discrimination)
  • Bias monitoring in AI systems implemented
  • Quality and origin of training data described
  • AI system impact assessment performed
  • Human oversight in high-risk systems guaranteed
  • Notification of AI interaction (chatbots, virtual assistants)
  • Limitations and appropriate contexts of AI use described
  • Provenance of data used in AI traceable
  • Ethical principles in AI development and use declared
  • Specific channel for AI-related issues available
  • Updates and retraining of AI models documented

What traditional privacy criteria does AITS evaluate?

The second layer of 12 criteria (9-20) covers fundamental aspects of privacy transparency, including:

  • Clear identification of DPO (Data Protection Officer) or contact channel
  • Detailed cookie and tracking technology policy
  • Legal basis for personal data processing specified
  • Sharing with third parties and subprocessors listed
  • Data subject rights and procedures to exercise them
  • International data transfers documented
  • Data retention and deletion with defined timeframes
  • Breach notification and incident response procedures
  • Consent and permission revocation explained
  • Transparency in collection and detailed purposes

Each criterion is scored on a three-point scale: 3 (clear, readily available information), 1 (generic information without concrete detail), or 0 (no information). This simplicity enables objective comparison between suppliers.
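
To make the scale concrete, here is a minimal scoring sketch; the 3/1/0 values come from the methodology described above, while the aggregation into a 0-100 index and the field names are assumptions for illustration:

```python
# Minimal sketch: aggregating per-criterion scores (0, 1, or 3) into an index.
# The percentage-style aggregation is an assumption, not the official formula.
VALID_SCORES = {0, 1, 3}

def aits_total(criterion_scores: dict[int, int]) -> float:
    """Sum per-criterion scores and normalize to a 0-100 scale."""
    if any(s not in VALID_SCORES for s in criterion_scores.values()):
        raise ValueError("Each criterion must be scored 0, 1, or 3")
    max_points = 3 * len(criterion_scores)
    return 100 * sum(criterion_scores.values()) / max_points

supplier_a = {1: 3, 2: 3, 3: 1, 4: 0}    # toy example with four criteria
print(round(aits_total(supplier_a), 1))  # 58.3
```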

Why is AITS based only on public evidence?

The choice to work exclusively with public information is strategic and brings unique benefits:

Scalability: it's possible to analyze hundreds of suppliers without depending on questionnaires, on-site audits, or access to internal systems. This drastically reduces evaluation time.

Auditability: each score comes with specific URLs, document versions, and access dates. Anyone can validate the evidence supporting the analysis.
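
For illustration, an evidence record of this kind could be captured with a small data structure; the field names and example values below are assumptions, not the platform's actual schema:

```python
# Hypothetical evidence record: one entry per scored criterion, so anyone can
# revisit the exact public document and version that supported the score.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Evidence:
    criterion_id: int
    score: int             # 0, 1, or 3
    source_url: str        # public document supporting the score
    document_version: str
    accessed_on: date

record = Evidence(
    criterion_id=9,
    score=3,
    source_url="https://example.com/privacy-policy",  # placeholder URL
    document_version="v4.2",
    accessed_on=date(2026, 2, 7),
)
```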

Comparability: by using the same criteria and public sources for all suppliers, AITS allows fair and objective comparisons, eliminating individual evaluator biases.

Reproducibility: the methodology can be applied consistently over time, allowing continuous monitoring and detection of changes in supplier transparency practices.

Democratization: companies of any size can access the same information and perform comparable evaluations, without needing resources for deep technical audits.

What limitations does AITS explicitly recognize?

Transparency about limitations is a fundamental part of the AITS methodology. The index does NOT evaluate:

  • Actual technical implementation of privacy controls or AI systems
  • Internal practices not publicly documented
  • Complete regulatory compliance or technical compliance
  • Technical security of systems or robustness of AI models
  • Internal governance processes or algorithm auditing
  • Performance, accuracy, or actual bias of AI systems in production

AITS is a screening tool based on public communication, not a certification or substitute for in-depth technical audits. It answers the question: "How does this supplier communicate its AI governance practices?" and not "Is this supplier 100% compliant with the EU AI Act?"

How to use AITS to compare suppliers?

The AITS methodology was designed to support different moments of supplier governance:

In initial RFP screening

Use the AI Trust Score to quickly filter dozens of candidates, identifying those with the strongest public transparency. This lets you focus in-depth due diligence efforts on the most promising suppliers.
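
A minimal screening sketch might look like the following; the vendor names, scores, and the 70-point cutoff are illustrative assumptions chosen by the buying team, not values from the platform:

```python
# Illustrative RFP shortlist: keep only suppliers whose AITS score clears a
# threshold, then rank them from highest to lowest. All values are made up.
candidates = {"Vendor A": 82, "Vendor B": 45, "Vendor C": 71, "Vendor D": 58}
THRESHOLD = 70

shortlist = sorted(
    (name for name, score in candidates.items() if score >= THRESHOLD),
    key=lambda name: -candidates[name],
)
print(shortlist)  # ['Vendor A', 'Vendor C']
```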

In the purchase decision process

Compare scores and specific criteria between finalists. For example, if two suppliers have similar functionalities but very different AITS scores, this indicates that one communicates its practices better — an important sign of governance maturity.

In contract negotiation

Identify specific gaps (criteria scored 0) and use this information to require compensatory contractual clauses. If a supplier doesn't document data retention, include explicit obligations on this point in the contract.

In continuous monitoring

Track changes in scores and public policies of already contracted suppliers. Significant changes may indicate the need for contract review or risk reassessment.
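
One way to operationalize this kind of monitoring (a sketch under assumed data structures, not the platform's API) is to compare two evaluation snapshots and flag criteria whose scores dropped:

```python
# Sketch of change detection between two evaluation snapshots of a supplier.
# Any criterion whose score fell is flagged for contract or risk review.
def flag_for_review(previous: dict[int, int], current: dict[int, int]) -> list[int]:
    """Return criterion IDs whose score dropped since the last evaluation."""
    return [c for c, old in previous.items() if current.get(c, 0) < old]

last_quarter = {1: 3, 2: 3, 9: 1}
this_quarter = {1: 3, 2: 0, 9: 1}
print(flag_for_review(last_quarter, this_quarter))  # [2]
```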

What is the technical process behind AITS?

Trust This uses an advanced AI pipeline to operationalize the AITS methodology at scale:

Automated collection: web scraping identifies and captures AI governance policies, terms of use, FAQs, technical documentation, and AI statements for each evaluated product.
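
A simplified version of this collection step could look like the sketch below; requests and BeautifulSoup are common choices for this kind of task, the URL is a placeholder, and this is not Trust This's production pipeline:

```python
# Simplified collection sketch: fetch one public policy page and keep its text
# together with the metadata needed for the audit trail.
from datetime import date
import requests
from bs4 import BeautifulSoup

def collect_policy(url: str) -> dict:
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return {
        "url": url,
        "accessed_on": date.today().isoformat(),
        "text": soup.get_text(separator="\n", strip=True),
    }

doc = collect_policy("https://example.com/ai-governance-policy")  # placeholder
print(doc["accessed_on"], len(doc["text"]), "characters collected")
```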

AI analysis: specialized models (Gemini 2.5 Pro, Claude 3.5 Sonnet, DeepSeek V3.1) process the documents, identify evidence related to the 20 criteria, and generate analyses in executive language.

Cross-validation: multiple models verify consistency in analyses, reducing chances of false positives or incorrect interpretations.
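
Cross-validation can be pictured as a simple consensus rule across model outputs; the majority-vote rule and the model labels below are assumptions for illustration, not the actual validation logic:

```python
# Toy consensus check: a criterion score is accepted only when a majority of
# the analysis models agree; otherwise it is routed to human review.
from collections import Counter

def consensus(scores_by_model: dict[str, int]) -> int | None:
    """Return the majority score, or None when no majority exists."""
    counts = Counter(scores_by_model.values())
    score, votes = counts.most_common(1)[0]
    return score if votes > len(scores_by_model) / 2 else None

print(consensus({"model_a": 3, "model_b": 3, "model_c": 1}))  # 3 -> accepted
print(consensus({"model_a": 3, "model_b": 1}))  # None -> human review
```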

Versioning: all analyses are dated and referenced to specific versions of public documents, allowing historical tracking and change detection.

Insight generation: beyond the numerical score, the system identifies patterns, market best practices, and transparency trends by software category.

Who benefits from the AITS methodology?

The AITS methodology was developed to serve different profiles within organizations:

DPOs and privacy professionals use AITS for quick supplier screening, replacing manual analysis that would take days with a standardized evaluation in minutes. The auditable score makes it easier to justify formal opinions.

CISOs and IT managers identify privacy risks in tools adopted by shadow IT and prioritize which suppliers deserve in-depth technical security analysis, optimizing limited resources.

Purchasing and Procurement teams find in AITS an objective criterion for tiebreaking between similar suppliers and defensible documentation for internal audit processes and approval committees.

Legal and Compliance accelerate contract analysis using gaps identified by AITS to require specific clauses, reducing rework and contractual exceptions.

EU AI Act consultants offer quick diagnostics to clients, justify the need for in-depth analyses, and create recurring supplier monitoring services.

How does AITS support decisions without replacing human expertise?

The AITS methodology doesn't seek to fully automate AI governance, but rather to enhance human decisions:

Professionals maintain final responsibility for evaluations and contracting decisions. AITS provides standardized and comparable diagnosis that serves as a basis for contextual and strategic analysis.

The index identifies where to dig deeper. Critical gaps flagged by AITS indicate points that deserve direct questions to the supplier, detailed legal analysis, or specific contractual requirements.

The evidence trail with URLs and versions allows teams to validate findings, verify original context, and make informed decisions — not just based on a number, but on documented and auditable facts.

What impact does AITS generate in the software market?

By making AI transparency measurable and comparable, the AITS methodology creates market incentives:

Suppliers are motivated to improve public communication about AI governance and privacy, as low scores become visible competitive disadvantages in purchasing processes.

Buyers gain negotiating power based on objective data, accelerating screening and reducing regulatory risks through more informed decisions.

The market evolves to higher transparency standards, better preparing for emerging AI regulations and creating competitive differentiation through best practices.

Conclusion

The AITS methodology doesn't replace audits or formal certifications, but fills a critical gap: the need for quick, standardized, and auditable supplier screening based on public evidence. In a world where companies manage hundreds of software suppliers and AI becomes ubiquitous, having a clear methodology to map and compare AI transparency is essential for effective governance.

Want to know the AITS scores of your current suppliers? Explore our platform and see how the methodology can support your purchasing and monitoring decisions.

#ai-transparency-index #eu-ai-act-compliance #supplier-evaluation #ai-governance #transparency-scoring #artificial-intelligence-assessment

Trust This Team