
Privacy RFI (Request for Information) for AI: what to ask before contracting software

An RFI (Request for Information) identifies privacy risks in AI software before you contract it. Discover what to ask about automated decision-making, explainability and model training under the EU AI Act.

Trust This Team

Last updated: February 07, 2026

Why do you need a privacy RFI before contracting AI software?

Your procurement team just received three CRM proposals with AI functionalities. All promise automation, predictive insights and productivity. But which one handles your data transparently? Which uses responsible AI? Which presents the lowest regulatory risk?

A well-structured RFI (Request for Information) answers these questions before you sign any contract.

An RFI is a formal request for information sent to suppliers before the negotiation phase. It works as an initial screening that identifies privacy, transparency and compliance risks — especially critical when the software uses artificial intelligence.

In this guide, you'll understand:

  • When to use a privacy RFI
  • Which questions to ask
  • How to score responses to make safer decisions

What is RFI and why is it essential when contracting AI software?

A Request for Information (RFI) is a stage prior to the RFP (Request for Proposal) or final quotation. While the RFP focuses on prices and commercial details, the RFI maps supplier capabilities, practices and risks.

For software using AI, a privacy RFI is doubly important because:

  • AI processes personal data in an automated way, generating decisions that can impact data subjects' rights (GDPR Art. 22; EU AI Act Art. 14)
  • Lack of algorithmic transparency is one of the main regulatory risks today
  • Model training with your corporate data can violate internal policies and legislation
  • Automated decisions without human review can expose your company to legal liabilities

A well-applied RFI can cut due diligence time by as much as 60% and avoids unpleasant surprises after contracting.

When to use a privacy RFI in software procurement processes?

An RFI should be applied at three main moments:

1. Initial screening of new suppliers

Before investing time in demonstrations and proofs of concept, use an RFI to eliminate suppliers with inadequate privacy and AI practices.

2. Renewal of high-risk contracts

Software that processes sensitive data (health, finance, children) or that recently implemented AI functionalities deserves reassessment via RFI.

3. Comparison between market alternatives

When you have multiple options that are similar in functionality and price, an RFI works as an objective tiebreaker based on transparency and governance.

Practical rule: if the software declares AI use, performs automatic scoring or classifications, or makes decisions without human intervention, apply a specialized RFI.

What are the essential questions of a privacy RFI for AI?

An effective privacy and AI RFI should cover six critical areas, totaling about 24 objective questions:

How to identify if software really uses AI?

  • Does the software use artificial intelligence or machine learning in any functionality?
  • Which specific functionalities use AI? (e.g.: recommendations, classifications, predictions, sentiment analysis)
  • Is AI use mandatory or optional for the end user?

Why this matters: Many suppliers use the term "AI" generically. You need to identify whether there is actually automated decision-making or merely automation of fixed rules.

What is the purpose and scope of AI processing?

  • What is the AI system's objective? (e.g.: optimize processes, detect fraud, personalize experience)
  • What types of personal data are processed by AI?
  • Are data used exclusively to deliver the contracted service or also for other purposes?

Why this matters: The GDPR requires that data processing have a legitimate, specific purpose that is communicated to the data subject. AI trained for secondary purposes without a legal basis violates the legislation.

How does automated decision-making work and is there human review?

  • Does the system make automated decisions that impact users' rights? (e.g.: credit approval, resume selection, dynamic pricing)
  • Is there a possibility of human review of these decisions? How can it be requested?
  • How long does the human review process take?

Why this matters: GDPR Article 22 gives data subjects the right not to be subject to solely automated decisions that significantly affect them, and EU AI Act Article 14 requires effective human oversight of high-risk AI systems. Absence of this mechanism is a regulatory red flag.

Does the system offer explainability of AI decisions?

  • Does the software provide explanations about how it reached a certain decision or result?
  • Are these explanations understandable for non-technical users?
  • Is technical documentation about the model publicly available?

Why this matters: Algorithmic transparency is a growing requirement in global regulations (GDPR Art. 22, EU AI Act) and AI ISO standards. Suppliers who cannot explain their decisions present high risk.

How are training and retraining data handled?

  • Are client data used to train or retrain AI models?
  • Is there an option to opt out of data use for training?
  • Where are training data located? Are they transferred internationally?
  • How does the company ensure that sensitive or proprietary data don't leak through the model?

Why this matters: Unauthorized use of corporate data for AI training is one of the biggest concerns of CISOs and DPOs. Cases like GitHub Copilot and the enterprise use of ChatGPT have highlighted this risk.

What is the organization's AI governance?

  • Does the company have a formal AI policy and algorithmic governance?
  • Is there a documented AI Impact Assessment?
  • How does the company monitor bias, discrimination and model quality over time?
  • Is there a specific channel to report problems related to AI decisions?

Why this matters: Suppliers with mature AI governance present lower risk of incidents, discrimination and future non-compliance.
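The six question areas above can be kept as a reusable template structure, which also supports the customization-by-category advice later in this guide. A minimal sketch in Python — the pillar names and sample questions are taken from this guide, but the schema itself is an illustrative assumption, not a standard:

```python
# Illustrative template: the six RFI pillars, each with a sample question
# drawn from this guide. Extend the lists per software category (CRM, HR, ...).
RFI_PILLARS = {
    "AI identification": [
        "Does the software use AI or machine learning in any functionality?",
        "Which specific functionalities use AI?",
    ],
    "Purpose and scope": [
        "What types of personal data are processed by AI?",
    ],
    "Automated decision-making": [
        "Is there a possibility of human review of these decisions?",
    ],
    "Explainability": [
        "Are explanations understandable for non-technical users?",
    ],
    "Training data": [
        "Is there an option to opt out of data use for training?",
    ],
    "AI governance": [
        "Is there a documented AI Impact Assessment?",
    ],
}

def render_rfi(pillars):
    """Flatten the template into a numbered question list to send a supplier."""
    lines = []
    for pillar, questions in pillars.items():
        lines.append(f"## {pillar}")
        lines.extend(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return "\n".join(lines)

print(render_rfi(RFI_PILLARS))
```

One template per software category can then be versioned alongside procurement records.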

How to score privacy RFI responses?

After receiving responses, apply a simple scoring system (2 points per question, for a maximum of 48 across the 24 questions) based on three categories:

🟢 GREEN FLAG (2 points): Complete, documented response, with public evidence and practices aligned with best standards (ISO/IEC 42001, EU AI Act, GDPR)

🟡 YELLOW FLAG (1 point): Partial response, with gaps or basic practices requiring additional contractual clauses

🔴 RED FLAG (0 points): Absent, evasive response, or clearly inadequate practice (e.g.: data use for training without opt-out, absence of human review in critical decisions)

Final classification:

  • 38-48 points (80-100%): Low-risk supplier, high transparency
  • 24-37 points (50-79%): Medium risk, requires specific contractual clauses
  • 0-23 points (<50%): High risk, consider alternatives or deep audit

Use this scoring to objectively compare different suppliers and justify decisions to internal committees.
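The scoring and classification rules above are simple enough to automate. A minimal sketch, assuming 2 points per question across the 24 questions (48 maximum) and the percentage bands from the classification list:

```python
def classify_supplier(flags):
    """flags: one score per RFI question — 0 (red), 1 (yellow) or 2 (green)."""
    max_points = 2 * len(flags)  # 48 for the full 24-question RFI
    total = sum(flags)
    pct = 100 * total / max_points
    if pct >= 80:
        risk = "Low risk: high transparency"
    elif pct >= 50:
        risk = "Medium risk: requires specific contractual clauses"
    else:
        risk = "High risk: consider alternatives or a deep audit"
    return total, round(pct), risk

# Example: a supplier with 14 green, 7 yellow and 3 red answers
scores = [2] * 14 + [1] * 7 + [0] * 3
print(classify_supplier(scores))
# → (35, 73, 'Medium risk: requires specific contractual clauses')
```

The same function works for partial RFIs, since the thresholds are computed as percentages of the questions actually asked.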

How to use RFI responses to negotiate better contracts?

An RFI is not just a compliance exercise — it's ammunition for negotiation. Here's how:

For each RED FLAG identified:

Demand a specific contractual clause that mitigates the risk

Example: if there's no opt-out for AI training, include a clause such as: "Supplier will not use Client data for training, fine-tuning or improving AI models without prior and explicit consent"

For YELLOW FLAGS:

Request an improvement roadmap with defined deadlines

Example: the supplier promises to implement human review within 6 months — this goes into the contract as an obligation

For multiple GREEN FLAGS:

Use them as justification to choose a supplier even if the price is slightly higher

Document it in the procurement process: "Supplier X presents 15% more transparency in AI governance than alternatives"

Golden tip: DPOs and legal teams value contracts with fewer exceptions and residual risks. A well-done RFI drastically reduces rework between teams.

What are the most common mistakes when applying privacy RFI?

Even mature companies make these mistakes:

Mistake 1: Too generic questions

  • Avoid: "Are you compliant with the EU AI Act?"
  • Prefer: "How do you ensure human review of automated decisions according to EU AI Act Article 14?"

Mistake 2: Not adapting RFI to software type

A CRM with AI needs different questions from HR or security tools. Customize the RFI for the context.

Mistake 3: Accepting vague responses as sufficient

If a supplier responds "we follow market best practices" without specifics, that's a RED FLAG. Ask for concrete evidence.

Mistake 4: Applying RFI too late in process

If you're already negotiating price, the RFI has lost its utility. Apply it during initial screening, before demonstrations and POCs.

Mistake 5: Not documenting responses

RFIs without adequate documentation won't support future audits or justify decisions. Always export and archive them.
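A lightweight way to avoid this mistake is to archive each supplier's responses as structured, machine-readable records. A minimal sketch using JSON — the file name and field names here are illustrative assumptions, not a standard schema:

```python
import json

# Hypothetical RFI response record: supplier name, questionnaire version,
# date, and one entry per question with the answer, flag, and evidence cited.
record = {
    "supplier": "Example CRM Vendor",
    "rfi_version": "privacy-ai-v1",
    "date": "2026-02-07",
    "responses": [
        {
            "question": "Are client data used to train or retrain AI models?",
            "answer": "Yes, with contractual opt-out.",
            "flag": "yellow",
            "evidence": "DPA section 4.2",
        },
    ],
}

with open("rfi_example_vendor.json", "w", encoding="utf-8") as f:
    json.dump(record, f, indent=2, ensure_ascii=False)

# Reading it back confirms the archive round-trips losslessly
with open("rfi_example_vendor.json", encoding="utf-8") as f:
    loaded = json.load(f)
assert loaded == record
```

Records like this can be attached to the procurement file and re-opened at contract renewal or during an audit.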

How can Trust This accelerate your RFI process?

Applying a complete RFI manually can take days or weeks. The Trust This platform offers three features that drastically accelerate this process:

1. Instant AITS score

Automated analysis of 86 transparency criteria in privacy and AI, based on the supplier's public documents. You discover in minutes which areas present risk.

2. Category benchmark

Instantly compare a supplier with market alternatives. Know whether its transparency is above or below average.

3. Personalized RFI templates

Receive additional question suggestions based on gaps identified in AITS analysis.

Real case: A financial sector company reduced the screening of 8 AI credit-analysis software suppliers from 3 weeks to 2 days, using AITS for pre-qualification and applying the detailed RFI only to the 3 finalists.

What's the difference between RFI, RFP and RFQ in software procurement?

It's common to confuse these three terms. Here's the practical difference:

RFI (Request for Information)

  • Objective: Understand capabilities, practices and risks
  • Timing: Initial phase, supplier screening
  • Focus: Technical qualification, transparency, governance
  • Result: Shortlist of qualified suppliers

RFP (Request for Proposal)

  • Objective: Receive detailed solution proposals
  • Timing: After RFI shortlist
  • Focus: Functionalities, architecture, integrations, price
  • Result: Supplier selection

RFQ (Request for Quotation)

  • Objective: Compare prices of already defined solutions
  • Timing: When you already know exactly what you want
  • Focus: Only price and commercial conditions
  • Result: Quick contracting

For AI software, the ideal flow is: RFI (privacy screening) → RFP (technical solution) → Negotiation with RFI-based clauses.

Next steps: implement privacy RFI in your company

You now have the knowledge needed to structure effective privacy and AI RFIs. To put it into practice:

Week 1

Identify which software procurement processes need an RFI (prioritize software with AI, sensitive data or high volumes of personal data)

Week 2

Adapt the RFI model to your company's context and create templates by software category (CRM, HR, security, marketing)

Week 3

Establish the scoring system and define approval criteria (example: a minimum score of 60% to advance to the RFP)

Week 4

Train procurement, IT and legal teams in RFI use and implement a "compare-first" rule in intake processes

Want to accelerate even more? Explore the Trust This platform for automated transparency analysis in privacy and AI. Discover the AITS scores of over 1,000 corporate software products.

Support Materials for Privacy RFI Implementation in AI Software

We provide three free reference infographics to structure your supplier evaluation process:

  • RFI → RFP → Contract Flow: visualize the three procurement process stages with key privacy and AI questions at each phase
  • Supplier Comparison Table: practical evaluation example with green/yellow/red flag system applied to AI and privacy criteria
  • RFI 6 Pillars Diagram: visual mapping of essential areas that cannot be missing in artificial intelligence software evaluation

Download complete PDF kit

#privacy-rfi #corporate-ai #software-procurement #automated-decision-making #eu-ai-act #ai-governance #due-diligence #procurement
