On 1 March 2024, Singapore’s Personal Data Protection Commission (PDPC) issued the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems (AI Advisory Guidelines). These AI Advisory Guidelines followed a public consultation which concluded in August 2023. Our blog post on the public consultation for the draft AI Advisory Guidelines can be accessed here.

Summary of the Advisory Guidelines

At the outset, it should be noted that the AI Advisory Guidelines focus on the use of personal data in AI recommendation and decision systems (AI Systems). They do not discuss the use of personal data in the context of generative AI (GenAI) solutions.

The AI Advisory Guidelines provide specific guidance on how the Personal Data Protection Act 2012 (PDPA) applies in three typical stages of AI System implementation:

  1. Development, Testing and Monitoring – When organisations act as AI developers and use personal data for training and testing AI Systems, as well as for monitoring the performance of AI Systems post-deployment.
  2. Deployment – When organisations deploy AI Systems that collect and use personal data (“business to consumer” or B2C).
  3. Procurement – When organisations engage service providers to develop bespoke AI Systems using personal data in the organisations’ possession (“business to business” or B2B).

Below, we summarise the applicable obligations and exceptions under the PDPA highlighted in the AI Advisory Guidelines that may apply at each stage of AI System implementation.

Development, Testing and Monitoring

Generally, organisations can use personal data for such activities only if they have obtained meaningful consent for such use.
 
Alternatively, organisations may rely on the following exceptions (subject to the relevant requirements) under the PDPA:
 
1. Business Improvement Exception – in situations where an organisation is developing a product or has an existing product that it is enhancing. This exception is also relevant when an AI System is intended to improve operational efficiency by supporting decision making, or if the AI System is intended to offer more or new personalised products and/or services to customers. It extends to intra-group sharing of personal data for such purposes (but not cross-company).
 
2. Research Exception – in situations where an organisation is conducting commercial research to advance science and engineering without a specific product development roadmap. It extends to cross-company sharing of personal data for commercial research.
Deployment

In deploying AI Systems, organisations need to be aware of the following obligations:
 
1. Consent and Notification Obligations – to obtain meaningful consent, organisations are encouraged to provide the following information when crafting notices (to the extent practicable):
 
a. the function of their product that requires collection and processing of personal data;

b. a general description of types of personal data that will be collected and processed;

c. an explanation of how such processing of personal data collected is relevant to the product feature; and

d. the specific features of personal data that are more likely to influence the product feature.
 
2. Legitimate Interests Exception – organisations may rely on the legitimate interests exception to process personal data without consent if the purpose for processing data falls within one of the specific purposes identified in the PDPA and if they meet the relevant requirements. To illustrate the operation of this exception, the AI Advisory Guidelines cite the use of personal data as input in an AI System for the purpose of detecting or preventing illegal activities.   
 
3. Accountability Obligation – organisations deploying AI Systems are encouraged to be transparent about their use of such systems by:
 
a. including in their written policies relevant practices and safeguards to achieve fairness and reasonableness;
 
b. providing greater detail in their written policies when seeking meaningful consent from individuals to process personal data, and/or providing information about the practices and safeguards adopted to protect the interests of individuals where organisations seek to rely on the relevant exceptions to consent; and
 
c. providing more information on data quality and governance measures taken during AI System development.
 
Procurement

Service providers (e.g., systems integrators) may be considered data intermediaries under the PDPA if they process personal data as part of developing bespoke or fully customisable AI Systems for their customers. In such situations, service providers will need to comply with the relevant obligations applicable to data intermediaries under the PDPA (i.e., the Protection and Retention Obligations). To satisfy the Protection Obligation, the AI Advisory Guidelines recommend that service providers adopt the following good practices:
 
1. At the pre-processing stage, use techniques such as data mapping and labelling to keep track of data that was used to form the training dataset.
 
2. Maintain a provenance record to document the lineage of the training data that identifies the source of training data and tracks how it has been transformed during data preparation.
 
Further, the AI Advisory Guidelines encourage these service providers to support organisations in meeting their Consent, Notification and Accountability Obligations. They may do so by:
 
1. being familiar with the types of information that contribute towards meeting their customers’ Consent, Notification and Accountability Obligations by paying attention to the context; and

2. designing their systems to facilitate the extraction of relevant information to meet their customers’ PDPA obligations.

Key Takeaways

Substantively, the published AI Advisory Guidelines are very similar to the draft version that was released for public consultation in July 2023, with an added explanation of how organisations may rely upon the legitimate interests exception when deploying AI Systems. The primary focus of these guidelines remains unchanged from the draft, aiming to clarify the PDPA’s application where personal data is used in the development and training of AI Systems, as well as when personal data is collected as input from data subjects for use in AI Systems.

As the AI Advisory Guidelines are targeted at AI recommendation and decision systems, they do not address generative AI, which raises distinct privacy concerns in the use of personal data to train foundation models or as input in applications. These concerns are currently being studied by the PDPC and have been highlighted in the recently released draft Model Governance Framework for Generative AI (GenAI MGF) by Singapore’s Infocomm Media Development Authority in collaboration with the AI Verify Foundation. Our blog post on the GenAI MGF can be accessed here.

That said, the AI Advisory Guidelines remain a valuable resource as companies increasingly turn to AI Systems to optimise processes and develop insights from the data that they have collected (e.g., for HR purposes, customer engagement, etc.). The AI Advisory Guidelines are also relevant to service providers, who increasingly play a role in helping companies develop AI Systems for use in their business operations.

As AI Systems often involve the processing of significant amounts of data (including personal data), it is critical for companies to understand how they can implement AI Systems in a way that complies with these guidelines.

We would like to thank Judeeta Sibs, practice trainee at Ascendant Legal LLC, for her contribution to this post.