On 16 January 2024, Singapore’s Infocomm Media Development Authority (IMDA), in collaboration with the AI Verify Foundation, announced a public consultation on its draft Model AI Governance Framework for Generative AI (Draft GenAI Governance Framework). The draft framework identifies areas where future policy interventions relating to generative AI may take place, as well as options for such interventions.

The Draft GenAI Governance Framework may be accessed here. Views on the Draft GenAI Governance Framework may be provided to the IMDA at info@aiverify.sg.

A brief summary of, and our key takeaways from, the Draft GenAI Governance Framework are set out below.

Singapore’s initiatives on AI governance

The Singapore government has been keeping a close watch on the AI landscape, as reflected in the following key initiatives:

  • National AI Strategy: In 2019, Singapore released its first National AI Strategy, detailing initiatives aimed at enhancing the integration of AI to boost the economy. To highlight the practical applications of AI, Singapore initiated national projects in sectors such as education, healthcare, and safety & security. Additionally, investments were made to bolster the overall AI ecosystem. The National AI Strategy was last updated in 2023.
  • Model AI Governance Framework: The Model AI Governance Framework was first introduced in 2019 to provide detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions. A second edition of the Model AI Governance Framework was issued in 2020[1].
  • AI Verify Foundation and the AI Verify testing tool: In June 2023, the IMDA released AI Verify, an open-source AI governance testing framework and software toolkit. The IMDA also set up the AI Verify Foundation to harness the collective power and contributions of the open-source community to further develop the AI Verify testing tools for the responsible use of AI[2].
  • Proposed Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems: In July 2023, the Personal Data Protection Commission (PDPC) issued proposed advisory guidelines under the Personal Data Protection Act 2012 concerning the use of personal data to develop machine learning (ML) AI models or systems, as well as the collection and use of personal data in such ML systems for decisions, recommendations, and predictions[3].
  • Discussion Paper on Generative AI: Implications for Trust and Governance: In June 2023, the IMDA, together with Aicadium, released a discussion paper outlining Singapore’s plans for the reliable and responsible adoption of generative AI. The paper discusses risk assessment methods and suggests six key dimensions for policymakers to enhance AI governance, addressing immediate concerns while investing in long-term outcomes.
  • MAS’s FEAT principles and Veritas Toolkit: In June 2023, the Monetary Authority of Singapore (MAS) introduced an open-source toolkit aimed at promoting responsible AI usage within the financial industry. Known as the Veritas Toolkit version 2.0, this toolkit facilitates financial institutions in conducting assessments based on the Fairness, Ethics, Accountability, and Transparency (FEAT) principles. These principles offer guidance to firms in the financial sector on responsibly utilising AI and data analytics in their products and services.

Against this backdrop, the Draft GenAI Governance Framework emerges as the latest instrument driving AI development in Singapore.

Summary of the Draft GenAI Governance Framework

Aligned with Singapore’s National AI Strategy, the Draft GenAI Governance Framework aims to propose a systematic and balanced approach to addressing generative AI concerns while continuing to facilitate innovation.

The Draft GenAI Governance Framework underscores the importance of global collaboration on policy approaches and emphasises the need for policymakers to work with industry, researchers, and like-minded jurisdictions.

To that end, the Draft GenAI Governance Framework identifies nine dimensions that address generative AI concerns while fostering ongoing innovation. They are summarised below.

1. Accountability: The Draft GenAI Governance Framework suggests allocating responsibility in the generative AI development chain according to the level of control of each stakeholder. It also proposes enhancing end-user protection by providing indemnities and updating legal redress and safety protection frameworks, ensuring that end-users have additional safeguards against potential harm from AI-enabled products and services.
2. Data: The Draft GenAI Governance Framework advises policymakers to clarify how existing personal data laws apply to generative AI and to encourage research into the creation of safer and more culturally representative models. Additionally, policymakers are urged to foster open dialogue between copyright owners and generative AI companies, facilitating balanced solutions for copyright issues related to data used in AI training.
3. Trusted Development and Deployment: The Draft GenAI Governance Framework proposes that the industry standardise several aspects of generative AI. First, it suggests adopting common best practices in the development of generative AI. Second, it recommends standardising the disclosure of models, akin to a “food label”, to enable comparisons between different AI models. Third, it suggests standardising the evaluation of generative AI models to implement a baseline set of required safety tests.
4. Incident Reporting: AI developers should establish processes to monitor and report incidents arising from the use of their AI systems. Simultaneously, policymakers need to determine the severity threshold at which AI incidents must be reported to the government.
5. Testing and Assurance: Policymakers are recommended to establish common standards for AI testing to ensure quality and consistency across the industry.
6. Security: The Draft GenAI Governance Framework suggests developing new testing tools to mitigate the risks associated with generative AI. One example is the creation of a digital forensic tool designed specifically for generative AI, aimed at identifying and extracting potentially malicious code concealed within the model.
7. Content Provenance: Because AI-generated content can amplify misinformation, policymakers are urged to collaborate with stakeholders across the AI content lifecycle on solutions such as digital watermarking and cryptographic provenance to reduce the risk of misinformation.
8. Safety and Alignment R&D: Policymakers are urged to accelerate investment in R&D to ensure the alignment of AI models with human intentions and values. Additionally, facilitating global cooperation among AI safety R&D institutes is essential to optimise limited resources and keep pace with commercial growth.
9. AI for Public Good: The Draft GenAI Governance Framework encourages governments to democratise AI access by educating the public on identifying deepfakes and using chatbots safely. It also emphasises the role of governments in leading innovation within the industry, especially among SMEs, through measures such as regulatory sandboxes. Furthermore, the framework recommends increased efforts to upskill the workforce and to promote the sustainable development of AI systems.

Key Takeaways

The Draft GenAI Governance Framework reflects Singapore’s broader efforts to contribute to AI governance and provides useful insight into the concerns of policymakers regarding the development and deployment of generative AI systems.

While the Draft GenAI Governance Framework is helpful for organisations seeking to understand the key policy implications of generative AI, it reads more as a discussion paper and does not prescribe specific practices for organisations to adopt or implement when deploying generative AI solutions. This approach is not entirely unexpected at this stage, as the technology is still developing rapidly and policymakers worldwide are still grappling with how to deal with the risks and concerns associated with generative AI.

We are closely observing this space to see how policymakers around the world will react to the upcoming EU AI Act and whether they will follow a similar approach. We also anticipate that the Singapore government will issue additional papers and guidance in the near future.

We would like to thank Judeeta Sibs, practice trainee at Ascendant Legal LLC, for her assistance with the preparation of this update.

[1] See our summary of the second edition of the Model AI Governance Framework here: https://www.dataprotectionreport.com/2020/02/singapore-updates-its-model-artificial-intelligence-governance-framework/

[2] See our summary on the AI Verify Foundation here: https://www.dataprotectionreport.com/2023/06/singapore-contributes-to-the-development-of-accessible-ai-testing-and-accountability-methodology-with-the-launch-of-the-ai-verify-foundation-and-ai-verify-testing-tool/

[3] See our summary of the public consultation on this development: Singapore Releases Proposed Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems | Data Protection Report