On 7 June 2023, at the ATxAI Summit, Singapore launched the AI Verify Foundation, which aims to “harness the collective power and contributions of the global open source community” to develop the AI Verify testing tool for the responsible use of AI.

In this short post, we discuss this development, the AI Verify testing tool itself (first developed by Singapore’s Infocomm Media Development Authority (IMDA)), and the implications for organisations seeking to develop and deploy AI solutions.

What is AI Verify?

Background

AI Verify was developed by IMDA in consultation with companies across different sectors and of different scales, including AWS, DBS, Google, Meta, Microsoft, Standard Chartered and UCARE.AI. It was first released in May 2022 as a minimum viable product for an international pilot, attracting interest from over 50 local and multinational companies. Following the launch of the AI Verify Foundation on 7 June 2023, AI Verify is now available to the open source community and may be accessed here.

AI Verify’s testing framework

AI Verify is based on a testing framework that is aligned with internationally recognised AI ethics principles, guidelines and frameworks, such as those from the EU, the OECD and Singapore, and is organised around 11 AI governance principles, namely:

  • Transparency
  • Explainability
  • Repeatability/reproducibility
  • Safety
  • Security
  • Robustness
  • Fairness
  • Data governance
  • Accountability
  • Human agency and oversight
  • Inclusive growth, social and environmental well-being

The framework comprises both technical and process checks against these 11 principles. Technical tests are performed on fairness, explainability and robustness; process checks are applied to all 11 principles.
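
As a rough illustration of that split, the following Python snippet summarises, in a form of our own (it is not code from the toolkit), which principles attract a technical test in addition to a process check:

    # Our own summary of the framework's split between check types;
    # this is not code from the AI Verify toolkit.
    PRINCIPLES = [
        "Transparency", "Explainability", "Repeatability/reproducibility",
        "Safety", "Security", "Robustness", "Fairness", "Data governance",
        "Accountability", "Human agency and oversight",
        "Inclusive growth, social and environmental well-being",
    ]
    TECHNICALLY_TESTED = {"Fairness", "Explainability", "Robustness"}

    for principle in PRINCIPLES:
        checks = ["process check"]
        if principle in TECHNICALLY_TESTED:
            checks.append("technical test")
        print(f"{principle}: {' + '.join(checks)}")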

The toolkit

AI Verify is delivered as a single integrated software toolkit that operates within the user’s enterprise environment to execute the testing framework. Crucially, this allows the organisation’s data and model to remain within the organisation’s own environment.

Existing open source tools were packaged into the AI Verify toolkit, including:

  • IBM’s AI Fairness 360 and Microsoft’s Fairlearn for fairness testing;
  • IBM’s Adversarial Robustness Toolbox for robustness testing; and
  • the SHAP library and Salesforce’s OmniXAI for explainability testing.
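
To give a flavour of what the technical tests involve, here is a minimal Python sketch of the kinds of checks these libraries perform. It is our own illustration rather than AI Verify’s actual test harness, and the dataset, model and protected attribute are hypothetical:

    # A minimal sketch of fairness and explainability checks of the kind
    # the packaged libraries run. Not AI Verify's harness; the data, model
    # and protected attribute below are hypothetical.
    import numpy as np
    import shap
    from fairlearn.metrics import demographic_parity_difference
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))               # hypothetical feature matrix
    sensitive = rng.integers(0, 2, size=500)    # hypothetical protected attribute
    y = (X[:, 0] + 0.1 * sensitive > 0).astype(int)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    y_pred = model.predict(X)

    # Fairness: the gap in selection rates between the two groups
    # (0 indicates parity). AI Verify reports such metrics; as noted below,
    # it does not mandate tolerance levels.
    dpd = demographic_parity_difference(y, y_pred, sensitive_features=sensitive)
    print(f"Demographic parity difference: {dpd:.3f}")

    # Explainability: SHAP attributes each prediction to input features.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:10])
    print("Computed SHAP attributions for 10 sample predictions.")

Robustness testing with IBM’s Adversarial Robustness Toolbox follows a similar pattern, perturbing inputs and measuring how the model’s predictions change.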

The AI Verify toolkit supports the AI Verify testing framework by providing an integrated interface that tracks completion of 85 good practice steps across the 11 process checklists, and generates a summary of how the AI system aligns with the testing framework. A sample report generated by the AI Verify toolkit is available here. The process checklists are set out, and the technical tests summarised, in Annexes A and B of the sample report.

Key Takeaways

Use of the framework and the compliance/accountability trail

The AI Verify testing framework requires the deployer to work through good practice steps for AI deployment (which would support claims that the AI has been deployed responsibly), to upload documents showing how it has addressed each step and, conversely, to justify where it has not done so. It also includes some technical tests. It does not mandate tolerance levels or standards; rather, if worked through properly, it will describe how thorough and careful (or not) a deployer has been and record some key metrics about the AI system’s characteristics. Its use will accordingly create the basis for an effective accountability trail. The deployer can use the outputs to help determine whether the AI system will meet applicable standards and regulatory requirements.

Advantages of open source

The open source and extensible nature of the AI Verify toolkit also allows it to develop organically and incrementally in response to evolving AI governance norms and regulations, and to be adapted for particular use cases where necessary.

Governance

The toolkit gives a compact indication of the detailed steps that need to be taken in deploying AI and will be a useful resource in developing AI governance processes.

(We would like to thank our intern Jonathan Ong for his contribution to this post.)