Artificial Intelligence, and particularly generative AI, is moving beyond experimentation into practical use cases that directly affect how Scheme Operators and Data Providers manage their responsibilities. Whether it is a Scheme Operator evaluating Participants under frameworks such as the EU’s Financial Data Access (FiDA) proposal, or a Data Provider assessing Data Recipients requesting access to Open Finance APIs under various national regulations, the ability to make consistent, transparent and efficient decisions at scale is becoming increasingly critical.
Why AI is Relevant Now
1. Consistency and Fairness
AI systems excel at applying rules in a uniform way. Where human decision makers may interpret requirements differently or be influenced by individual perspectives, AI can ensure decisions are consistently aligned with documented criteria. For both Scheme Operators and Data Providers, this consistency underpins fairness and trust.
2. Scalability of Decision Making
Evaluating Scheme Participants or Open Finance Data Recipients may be manageable in small numbers. However, as ecosystems grow, manual processes can quickly become expensive and time-consuming. AI provides the ability to scale decision making, allowing oversight bodies to handle hundreds or thousands of applications or reviews without a proportional increase in staff and cost.
3. Evaluation of Complex Evidence
Generative AI is particularly strong at working with unstructured documents. For example, a SOC 2 (Service Organisation Control 2, an auditing standard developed by the American Institute of Certified Public Accountants) report can run into hundreds of pages, each dense with technical information. AI can quickly extract relevant details and cross-reference them with structured data sources such as company registers, sanctions lists or credit reports. This produces a more complete and accurate assessment than manual review alone.
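To make the cross-referencing step concrete, here is a minimal sketch in Python. It assumes the generative AI stage has already turned a long report into a handful of structured fields (hard-coded below in place of real model output); every name, field and data source here is hypothetical, not a reference to any actual register or feed.

```python
# Hypothetical structured sources the extracted fields are checked against.
SANCTIONS_LIST = {"Globex Holdings Ltd"}
COMPANY_REGISTER = {"Acme Data Ltd": {"status": "active", "number": "01234567"}}

def cross_reference(extracted: dict) -> list[str]:
    """Return findings a human reviewer should see for the extracted fields."""
    findings = []
    name = extracted.get("legal_name", "")
    if name in SANCTIONS_LIST:
        findings.append(f"{name}: sanctions match - escalate")
    record = COMPANY_REGISTER.get(name)
    if record is None:
        findings.append(f"{name}: not found in company register")
    elif record["status"] != "active":
        findings.append(f"{name}: register status is {record['status']}")
    if extracted.get("soc2_opinion") != "unqualified":
        findings.append(f"{name}: SOC 2 opinion is not unqualified")
    return findings

# Fields an LLM might have pulled from a SOC 2 report (illustrative only).
report_fields = {"legal_name": "Acme Data Ltd", "soc2_opinion": "qualified"}
print(cross_reference(report_fields))
```

The point of the design is that the AI output is never the final answer: it is reduced to checkable fields, and anything that fails a structured check surfaces as a finding rather than a silent decision.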
4. Cost and Resource Efficiencies
By automating the initial layers of review, AI reduces the need for large compliance or evaluation teams. Decisions can be made faster, freeing human experts to focus on edge cases, escalations or strategic considerations. This efficiency also makes it feasible to run more frequent reviews, improving ecosystem resilience and trust.
5. Reduction of Human Error and Bias
Fatigue, overload or unconscious bias can affect human reviewers. Properly designed AI systems can reduce the risk of errors and improve objectivity, supporting fairer outcomes across participants.
Risks and Limitations
1. Inconsistency Without Strong Governance
Although AI has the potential to deliver consistent outcomes, without strong governance it can produce different results for similar cases. Small differences in training data or prompt design can create variation, undermining confidence in the process.
2. Explainability and Transparency
Decisions need to be explainable. Generative AI models are often criticised as “black boxes”, which can create challenges if Data Recipients dispute a decision or if regulators require a clear audit trail. Without explainability, trust in AI-driven decision making is limited.
3. Bias in Training Data
AI can replicate systemic biases present in training data. If historic decisions were skewed toward certain types of entities, an AI model may perpetuate those patterns. This requires active monitoring and mitigation.
4. Over-Reliance on Automation
AI should be seen as augmenting human judgment, not replacing it. There is a risk of over-reliance, where unusual or novel cases are mishandled because they fall outside what the model has been trained to recognise. Human oversight remains essential.
5. Ethical and Legal Considerations
Data privacy, accountability for outcomes and the perception of fairness are critical issues. Both Scheme Operators and Data Providers must ensure AI systems are designed and governed in line with applicable laws and ethical standards.
Striking the Right Balance
The promise of generative AI in this space lies in its ability to combine speed, scalability and consistency with human oversight and judgment. A hybrid model – where AI handles evidence gathering, cross-referencing and first-line evaluation, while humans provide oversight and adjudicate complex or exceptional cases – offers a pragmatic and balanced path forward.
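The routing logic in such a hybrid model can be sketched very simply: only unambiguous cases are automated, and everything else escalates to a person. The threshold, field names and outcomes below are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class Assessment:
    applicant: str
    ai_score: float               # 0.0-1.0 confidence that criteria are met
    flags: list[str] = field(default_factory=list)  # issues raised in review

def route(a: Assessment, approve_at: float = 0.9) -> str:
    """Only clean, high-confidence cases are automated; all others escalate."""
    if not a.flags and a.ai_score >= approve_at:
        return "auto_approve"
    return "human_review"

print(route(Assessment("Acme Data Ltd", 0.95)))                      # auto_approve
print(route(Assessment("Globex Holdings Ltd", 0.95, ["sanctions"])))  # human_review
```

Keeping the escalation rule this conservative is deliberate: any flag or any doubt defaults to human judgment, so automation only removes the clear-cut workload rather than the accountability.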
For Scheme Operators and Data Providers alike, generative AI is not just a tool for efficiency. It has the potential to reshape how decisions are made as ecosystems grow and regulatory expectations increase under frameworks like FiDA and the CFPB's rule under Section 1033 of the Dodd-Frank Act. The challenge is ensuring that its adoption is accompanied by strong governance, transparency and an ongoing role for human judgment.

Dickie Smith
Head of Product