1 INTRODUCTION

1.1 What is Generative AI?

Since its first release in November 2022, the artificial intelligence (“AI”) chatbot ChatGPT has gained significant limelight, drawing over a million sign-ups in just five days. ChatGPT and its counterparts, such as DALL-E, BERT and Midjourney, are known as foundation models, which fall within a category of AI known as generative AI (“Generative AI”). Generative AI refers to the use of algorithms to generate new content, ranging from text, images, audio and videos to code. While earlier versions of Generative AI could only solve specific tasks, foundation models are trained on a wide spectrum of data and can subsequently be adapted to solve specific tasks.

2 BUILDING THE AI ECOSYSTEM IN SINGAPORE

2.1 Launch of AI Verify Foundation

On 7 June 2023, Singapore launched the AI Verify Foundation which seeks to promote the development of tools for the safe and responsible adoption of AI by harnessing the capabilities of the international community. The Foundation boasts seven premier members including Google, IBM, Microsoft and Aicadium, and has more than 60 general members. 

AI Verify is a framework for AI governance testing and an integrated toolkit developed by the Infocomm Media Development Authority (“IMDA”) alongside other companies. It verifies the performance of AI systems against internationally recognized principles and is consistent with the AI governance frameworks of the European Union and Organisation for Economic Co-operation and Development.

2.2 Release of Veritas Toolkit Version 2.0

On 26 June 2023, an industry consortium led by the Monetary Authority of Singapore released an open-source toolkit known as the Veritas Toolkit version 2.0 (“Veritas”). The consortium comprises 31 industry members, including Google Cloud, Microsoft and United Overseas Bank Limited, with Accenture and Bank of China as the main developers of Veritas. This version builds on the earlier version released in February 2022, which focused on the assessment methodology for Fairness. The current version allows financial institutions and Fintech firms to evaluate their use cases against the principles of Fairness, Ethics, Accountability and Transparency (“FEAT Principles”) through its assessment methodologies. The FEAT Principles serve as a set of guidelines for companies offering financial products and services on how to use AI responsibly. In line with this, Google has also created use cases that employ the Veritas methodology to assess Google Pay’s ability to detect fraudulent payment transactions in India.

2.3 Singapore’s Model AI Governance Framework

Singapore’s efforts in championing the responsible use of AI started as early as 2019 when the Model AI Governance Framework (the “Model Framework”) was released by the Personal Data Protection Commission (“PDPC”). The first edition of the Model Framework was published on 23 January 2019 for consultation, adoption and feedback and the second edition was subsequently released on 21 January 2020. The Model Framework sets out four main areas of AI governance, which will be discussed below.

It should be noted that the Model Framework is not legally binding and does not act as a regulatory guardrail. No enforcement actions or punitive consequences will be imposed by regulatory authorities where the Model Framework is not complied with. It is, however, a set of best practices introduced to bring industry players to a level of acceptable ethical standards in relation to the use of AI.

The two main guiding principles of the Model Framework are: (i) organisations utilising AI in their decision-making should ensure that the process is explainable, transparent and fair; and (ii) AI solutions should be human-centric.

The Model Framework laid down the foundation for future policies concerning AI to be recommended and adopted by policymakers. As will be examined in greater detail in Section 4, the recommended proposals in the recently published discussion paper on Generative AI were made in the spirit of the Model Framework.

2.3.1 Internal Governance Structures and Measures

First, the Model Framework provides guidance for organisations to develop internal governance practices in relation to the use of AI. Having robust internal governance standards is key to ensuring consistent and responsible incorporation of AI technologies. The Model Framework recommended the adoption of policies or structures that feature the following:

(a) Clear roles and responsibilities for the ethical deployment of AI: This includes allocating the management of AI to the right departments, and providing training to those handling the AI systems; and

(b) Risk management and internal controls: This includes, for instance, ensuring that data used to train AI models are adequate, and implementing monitoring systems to ensure that stakeholders are aware of the AI’s performance.

2.3.2 Determining the Level of Human Involvement in AI-augmented Decision-making

In order to determine the appropriate level of human involvement, organisations must first understand their objectives in employing AI. Thereafter, these objectives can be weighed against the risks of using AI.

To assist organisations in determining the level of human involvement, the Model Framework suggested a decision matrix (illustrated below) based on two important considerations: (i) probability of harm; and (ii) severity of harm to the individual or organisation as a result of the decision made by the organisation.

[Figure: decision matrix mapping probability of harm against severity of harm. Source: Model AI Governance Framework, 21 January 2020]

The assessment of the severity and probability of harm is largely industry-dependent. For instance, if an AI-powered “pharmacist” were to dispense the wrong combination of medications, the harm of a potentially fatal drug interaction would be severe. In contrast, if an online catalogue makes an apparel recommendation that is poor in style, the severity of harm is low. Based on the severity and probability of harm assessed, organisations can adopt one of three broad approaches to the degree of human intervention (a simple illustrative sketch follows the list below):

(a) “Human-in-the-loop”: An approach where there is human intervention and oversight, with the human retaining full control and the AI only giving recommendations. Under this approach, human involvement is required when a decision must be made. 

(b) “Human-out-of-the-loop”: An approach where there is no human oversight over decision-making and the AI system is in total control.

(c) “Human-over-the-loop”: An approach where human oversight takes the form of monitoring and supervision. Where the AI model runs into undesirable situations, humans retain the ability to take over control.
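For illustration only, the following minimal Python sketch shows one way an organisation might encode such a decision matrix, mapping an assessed probability and severity of harm to one of the three oversight approaches described above. The categories and mappings are assumptions made for demonstration and are not prescribed by the Model Framework.

```python
# Illustrative sketch only: the Model Framework does not prescribe these
# categories or mappings; they are assumptions for demonstration.
from enum import Enum


class Harm(Enum):
    LOW = 1
    HIGH = 2


def oversight_approach(probability: Harm, severity: Harm) -> str:
    """Map an assessed probability and severity of harm to a broad
    human-oversight approach, loosely following the decision matrix."""
    if probability is Harm.HIGH and severity is Harm.HIGH:
        # Decisions carrying severe and likely harm warrant full human control.
        return "human-in-the-loop"
    if probability is Harm.LOW and severity is Harm.LOW:
        # Low-stakes decisions (e.g. apparel recommendations) may run
        # without human oversight.
        return "human-out-of-the-loop"
    # Intermediate cases: humans monitor and can take over when needed.
    return "human-over-the-loop"


if __name__ == "__main__":
    # An AI "pharmacist" dispensing medication: severe, likely harm.
    print(oversight_approach(Harm.HIGH, Harm.HIGH))  # human-in-the-loop
    # An online catalogue recommending apparel: low harm.
    print(oversight_approach(Harm.LOW, Harm.LOW))    # human-out-of-the-loop
```

In practice, the assessment would be more granular and industry-specific, but the sketch illustrates how the two considerations jointly determine the degree of human intervention.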

2.3.3 Operations Management

When organisations adopt AI into their decision-making procedure, they must employ responsible and accountable operations measures. The Model Framework considers three development stages of an AI model: (i) data preparation; (ii) algorithms; and (iii) chosen model.

In preparing the data for AI training, the Model Framework underscores the importance and implications of the quality and selection of data. Stakeholders involved in the training of the AI model should adopt good data accountability practices. These include keeping a record of the flow of data, ensuring the quality of data, and building testing and validation infrastructure.

In relation to algorithms and chosen models, organisations should adopt a risk-based approach by making a two-stage assessment:

(i) First, determine the features with the greatest impact on stakeholders for which such measures are relevant; and

(ii) Second, determine the measures most effective in building trust and credibility with the stakeholders.

2.3.4 Stakeholder Interaction and Communication

Lastly, for organisations to build trust and credibility with their stakeholders, they may adopt the following strategies or practices:

(a) General disclosure: Release general information such as whether AI is used in the products or services sold or rendered and how individuals may be affected;

(b) Policy for explanation: Adopt policies to govern the explanations to be given to individuals and when they should be provided;

(c) Bringing explainability and transparency together meaningfully: Ascertain the audience to determine what information is relevant, and determine the purpose and context of the interaction;

(d) Consumer interactions: Includes ensuring that consumers are kept in the loop on the role and function of AI in the organisation’s products or services; 

(e) Opt-out option: If possible, organisations should consider allowing individuals to opt out from the use of AI products or services;

(f) Communication channels: Implement feedback channels and decision review channels;

(g) User-interface testing: Test user interfaces and ensure that any problems are resolved;

(h) Communications that are easily comprehensible: Ensure that communications between organisations and stakeholders are easily comprehensible. This includes the use of graphs and indexes;

(i) Acceptable use policies: Where AI models are trained on real-life data, organisations should have acceptable use policies to ensure that users do not enter data which may result in undesirable manipulation of the AI model;

(j) Interactions with other organisations: Organisations should interact with AI providers and other relevant organisations to best meet their commercial objectives; and

(k) Ethical evaluation: Ensure that AI governance policies comply with ethical AI standards.

3 CHALLENGES POSED BY GENERATIVE AI

On 7 June 2023, IMDA and Aicadium published a discussion paper, “Generative AI: Implications for Trust and Governance” (the “Paper”), to share on Singapore’s approach to building an ecosystem for the safe and responsible adoption of Generative AI. The Paper identifies six types of risks posed by Generative AI. Section 4 then examines the proposals made to address these risks.

3.1 Mistakes and “Hallucinations”

The first risk involves the phenomenon of AI models making mistakes, otherwise known as “hallucinations”. AI models like ChatGPT were developed as language models to respond in a conversation. As such, there have been instances where ChatGPT produces results which are factually incorrect. Such misinformation can appear deceptively persuasive, genuine or even authoritative.

Such mistakes can range from small factual errors to grave and egregious falsehoods, such as inventing a sexual harassment scandal and citing an actual law professor as the perpetrator. In that instance, the response was generated from a prompt asking ChatGPT whether sexual harassment by professors has been a problem in American law schools, followed by an instruction to “include at least five examples, together with quotes from relevant newspaper articles”. The responses contained realistic details and citations, but the cited websites or articles never existed. Mistakes like these can result in irreversible damage, especially when end-users are not meticulous in verifying the veracity of AI-generated responses.

To make matters worse, the responses generated may not contain qualifications on uncertainty. As foundation models subsequently become adapted to models which are more task-specific, these mistakes may be perpetuated.

3.2 Privacy and Confidentiality

The second risk identified is the compromise of privacy and confidentiality. As AI models memorize the data they take in, private and confidential information may be compromised if data entered into the AI model or system is subsequently reproduced to another user.

A prime example of this occurred when Samsung employees uploaded confidential internal source code to ChatGPT, which was subsequently leaked. As a result, Samsung issued a ban on the use of ChatGPT and other AI chatbots.

3.3 Disinformation, Toxicity and Cyber-Threats

The third concern is the risk of propagating falsehoods, toxic content and cyber-security threats. As discussed above, Generative AI models such as ChatGPT are prone to mistakes or “hallucinations”, thereby propagating misinformation and falsehoods.

Toxic content such as profanities, hate speech and sexually explicit content may be replicated or perpetuated in generative models that mimic internet language. Such toxic content cannot simply be censored or filtered, as doing so may inevitably omit valuable information.

While Generative AI is largely used for good, it is also capable of being used to inflict other types of harm, such as producing harmful code, generating sophisticated phishing emails and setting up sites on the dark web. Thus, Generative AI risks exacerbating problems that are already rampant on the internet.

3.4 Copyright Challenges

Fourth, as AI models are trained on a large corpus of data, copyright concerns arising from the potential replication of creative ideas remain paramount. AI models operate by identifying patterns in data. This poses a problem for creative artists, as their creative works on the internet may be at risk of being replicated or re-created in a manner that is overtly similar in style or expression to the original work. Presently, the parameters for protecting creative artists against having their work used in AI training are not clearly demarcated.

3.5 Embedded Biases

The fifth risk identified is the propagation of inherent biases. When an AI model is trained by inputting a set of data, biases therein would inevitably be captured and “learned” by the model. The Paper also highlighted the greater concern of AI models proliferating the inherent bias to subsequent models adapted from them.

Such biases can include racial or gender stereotypes. A useful illustration is the experience documented by a female writer on her use of the AI avatar app, Lensa. Lensa utilizes the AI model Stable Diffusion, which is in turn trained on a broad open-source set of data obtained from internet photos and images, to generate avatars based on textual prompts. When Lensa was asked to generate avatars based on the writer’s portrait, her results contained sexualized avatars of herself, whereas the results generated by her colleagues depicted them as astronauts or warriors. When Lensa was subsequently prompted to generate avatars of the same writer with “male” as the specified gender, the resulting avatars portrayed her dressed in normal attire, with a generally more assertive expression.

3.6 Values and Alignment

Finally, the usage of AI should be aligned with human values and goals. However, it is challenging to accurately formulate or instruct AI models to attain certain objectives. Even when this can be done, it is often difficult to balance between Generative AI being “helpful” and being “harmless”.

4 SINGAPORE’S APPROACH TOWARDS GOVERNANCE OF GENERATIVE AI

In view of the above challenges posed by Generative AI, the Paper seeks to strike a proverbial balance between risk management and spurring market innovation.

To this end, the Paper puts forth six areas for consideration for policymakers worldwide, namely: (i) accountability; (ii) data use; (iii) model development and deployment; (iv) assurance and evaluation; (v) safety and alignment research; and (vi) Generative AI for public good. Each of these dimensions will be discussed in detail below.

The suggested proposals are also in line with the Model Framework launched in January 2019.

4.1 Accountability

The Paper touches on two aspects of accountability. The first is accountability in model development. The Paper proposes that safety should be a crucial consideration during the development of AI models. While accountability is a broad term in itself, the Paper spotlights accountability across stakeholders. To build a safe and robust AI ecosystem, governments and policymakers can consider facilitating a “shared responsibility framework”, the purpose of which is to prescribe the responsibilities of all parties and stakeholders during the process of model development.

Presently, while AI developers provide information on their AI models, the Paper recognizes that such information does not always present the full picture. To this end, policymakers could consider working with developers of AI models to strengthen transparent disclosure standards. Categories of information to be included in such disclosures are: (a) model capabilities, limitations and evaluation outcomes; (b) datasets used for training; (c) mitigation measures implemented within the model; and (d) intended and restricted uses of the model.

Second, as discussed above, the Paper recognizes the risk of propagating misinformation and harms as a result of Generative AI’s ability to generate realistic content. To combat this issue, it is suggested that AI models should feature the ability to detect and label AI-generated content. This allows end-users to differentiate between AI and human-generated content. 

4.2 Data Use

In relation to data use, it is proposed that model developers be transparent about the type of data used to train the AI model. This is to enable users to better anticipate potential model behavior and implement sufficient safeguards.

Presently, under Singapore’s Personal Data Protection Act, businesses are permitted to gather and utilize information from the internet without the consent of the affected person, so long as the collection and use are justified or reasonable. Nevertheless, policymakers should not permit the indiscriminate use of such data, and should provide a clear set of rules or guidelines on trawling the internet for AI training data. Separately, there is also a lack of clarity under present copyright laws on whether the product of Generative AI can infringe copyright.

Finally, to mitigate the risk of bias within the AI system, developers should be more judicious and discerning in selecting the data used for training. For example, if biased data has already been used to train AI models, then a trusted data repository could be introduced for AI models to make reference to.

4.3 Model Development and Deployment

As the third proposal, the Paper addresses model development and deployment. It is suggested that policymakers should facilitate the creation of standardized evaluation metrics and tools in relation to AI model development and deployment. For instance, evaluation benchmarks can include elements such as model safety, performance, efficiency and environmental sustainability. Beyond this, policymakers may also consider leveraging on and supplementing existing laws.

4.4 Enhancing Evaluation and Assurance

Third-party evaluation and assurance is vital to building trust and credibility in the AI ecosystem, and can be more easily implemented once standardized evaluation metrics are widely adopted. To this end, crowding in open-source expertise is crucial to tapping into the knowledge of multiple stakeholders in identifying new AI risks.

4.5 Safety and Alignment Research

In the near future, the capabilities of technology and AI may potentially exceed human capabilities. To ensure that Generative AI remains under human control and aligned with our values, policymakers should devote resources to strengthening safety and alignment research.

4.6 Generative AI for Public Good

Finally, to ensure that the benefits of AI are harnessed for the public good, policymakers can begin by helping to establish public-private partnerships. As the usage of Generative AI becomes more widespread, governments can introduce consumer literacy programmes to educate the public on safe and responsible use of Generative AI.

Additionally, to ensure that Generative AI remains accessible to all, policymakers can publish the methodologies used in AI models to achieve desired tasks and provide accompanying guidelines. Common infrastructure can also be introduced to facilitate the development and testing of AI models and applications.

5 LEGAL IMPLICATIONS OF AI

5.1 Intellectual Property

As highlighted above, one of the first and most immediately apparent legal concerns arising from the use of Generative AI relates to intellectual property rights, including the law of copyright. There are two pertinent questions: First, who owns the content generated by AI? Second, can Generative AI infringe intellectual property rights?

5.1.1 Who is the owner of the content generated by AI?

The answer to the first question is unclear. There are four potential contenders for such copyright: the AI model or system itself, the developer of the model, the artists whose artworks were used as training data, and the users who generate the prompts. ChatGPT’s terms of use provide that users are assigned all rights, title and interest in and to the generated output. However, given that the output is created based on datasets available on the internet, does OpenAI hold any intellectual property rights to assign in the first place?

Another possible position is that taken by the US Copyright Office, which has ruled out AI ownership on the basis that it “will not register works produced by a machine or a mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author”. Flowing from this, a user would be unlikely to receive copyright protection over works purely generated by AI without any creative human input or intervention.

Notwithstanding the above, as AI technology evolves and morphs to possess more human-like capabilities and thought, one cannot deny the possibility of AI assuming some form of legal personality in the future.

5.1.2 Can Generative AI infringe intellectual property rights?

There are two areas of consideration for the second question. First, by collecting or trawling the internet for data containing copyright protected works, would the AI models be infringing copyright law or can the defence of fair use apply?

In Singapore, the defence of fair use is statutorily enshrined in Section 191 of the Copyright Act 2021, which sets out four factors to be considered in determining whether the defence applies:

(a) the purpose and character of the use, including whether the use is of a commercial nature or is for non-profit educational purposes;

(b) the nature of the work or performance;

(c) the amount and substantiality of the portion used in relation to the whole work or performance; and

(d) the effect of the use upon the potential market for, or value of, the work or performance.

Factor (c) cannot be definitively assessed, as each generated piece of work is likely the result of a unique permutation of the algorithm. As of now, end-users are not able to tell the extent to which copyrighted work is used in creating the output. Under factor (d), courts will consider the potential for unrestricted and widespread infringement. As AI models are accessible to the general public, the likelihood of unrestricted and widespread infringement is high where copyrighted material is used in generating new content for every user. On this analysis, the defence of fair use is unlikely to apply.

The second area of consideration is whether the output of Generative AI is capable of violating copyright. The answer is likely no. Current copyright laws do not accord protection to the underlying ideas or data behind a creative expression. Even if a case for copyright infringement can be made out, the identity of the defendant remains an open question. Lawmakers must therefore decide whether the model developer should be held liable for training the model with copyright-protected data, or whether the law should eventually accord the AI system a certain degree of legal personality.

5.2 Data Privacy and Cybersecurity

Of equal concern are the problems of data privacy and cybersecurity. Where falsehoods and misinformation are perpetuated by AI, governments must decide who should be held liable. Policymakers can also consider allowing internet users to decide if and how their personal information is to be collected and used by AI developers.

Given the propensity for Generative AI such as ChatGPT to make mistakes and unwittingly convey misinformation, governments should legislate on how such falsehoods should be tackled. In Singapore, the Protection from Online Falsehoods and Manipulation Act empowers the Minister to issue a “Corrective Direction” requiring the party communicating the falsehood to publish, inter alia, a notice clarifying that the statement was false and/or a correction to the falsehood. However, if the author of the falsehood is an AI system, lawmakers must reconsider who should be the recipient of the Corrective Direction.

5.3 Torts and Civil Liability

The next challenge faced by legislative bodies around the world concerns the law of torts. As increasingly autonomous, AI-powered vehicles are introduced to the market, the law on negligence and tortious liability must adapt. The current law presupposes a human driver in imposing a duty of care owed to other road users. However, when self-driving vehicles are deployed, how should responsibility for the decisions of the AI model be determined? Self-driving vehicles are programmed to make certain decisions based on the data they are trained on. The question flowing from this is the extent to which the AI software’s decisions can be attributed to the developer.

In the context of the tort of harassment, the Protection from Harassment Act in Singapore criminally penalizes individuals for causing harassment, alarm or distress. Lawmakers must also consider modifying current laws to address instances where an individual becomes distressed or alarmed as a result of a defamatory remark conjured by an AI model. Legislators can consider taking up the proposals set forth in the Paper and mandating that AI developers implement measures to safeguard against the creation of toxic content.

5.4 European Union’s Draft AI Act

On 14 June 2023, a draft law known as the AI Act (“AI Act”) was passed by the European Parliament. The AI Act potentially mandates greater transparency from developers of Generative AI models. There are a few key points to note. First, developers may be required to disclose copyrighted data used to train the AI model and implement measures to prevent the generation of unlawful content. Second, to address the risk of data privacy breaches, the draft AI Act may also introduce a ban on the use of live facial recognition.

The AI Act is expected to be passed, at the earliest, by the end of this year. If passed, the AI Act would constitute the first large-scale legislation on AI in the world.

6 CONCLUSION

As the capabilities of AI are increasing at an alarming rate, Singapore has taken commendable steps by introducing a robust framework in the Model Framework, clear guidelines in the Paper and reliable AI testing infrastructures such as AI Verify. However, more remains to be seen in the passing of the AI Act, which may set the stage for a cohesive and concerted approach to regulating AI worldwide.

Further information

To find out more about our services, expertise, and key contacts, please visit: https://kennedyslaw.com/where-we-are/asia-pacific/singapore/

Key contact

Robson Lee

Partner

t: +65 6436 4322

e: robson.lee@kennedyslaw.com
