Trust in AI: Singapore’s Blueprint for Addressing AI Anxiety in Southeast Asia

As artificial intelligence (AI) becomes increasingly prevalent across society and the economy, there is a growing need for robust governance frameworks to ensure AI systems are developed and deployed responsibly and ethically. We have already written about how Singapore has emerged as a leader in Southeast Asia in this space, with its recent Model AI Governance Framework potentially serving as a model for regional guidance and collaboration.

How AI is used and perceived at both personal and national levels is a key focus of Blackbox’s and Cint’s quarterly ASEANScan studies, and contemporary sentiment in Southeast Asia tells us that more proactive government communication and regulation on AI are timely. Our data shows that more than two in three Southeast Asians (68%) now use generative AI (genAI) at least occasionally, and as this usage rate continues to grow, some common guardrails will be welcome.

Singapore’s Model AI Governance Framework: Addressing Shared Concerns

A review of Singapore’s framework shows that it directly addresses many of the key concerns expressed by Southeast Asians in our latest ASEANScan data (collected in April 2024). Its emphasis on accountability, data quality, security, and trusted development processes aligns closely with the apprehensions we discovered around AI usage in many sensitive domains, including government/state security, online banking, and healthcare settings, among others.

The chart above shows the proportion of Southeast Asians who say the use of AI in each of these areas “frightens” them. As we can see, despite widespread use of AI, a significant level of anxiety remains about how it is being used. Let’s delve into some of the key dimensions highlighted in the framework and see how each directly addresses the concerns uncovered by our data.

1. Accountability, Trust and Assurance 

  • The framework emphasizes accountability for AI systems. Our data shows significant levels of concern about AI in areas like government/state security (35%), online banking (30%), and healthcare settings (26%), indicating a need for clear accountability measures to build trust. Content provenance is another major issue, and the framework urges transparency around AI-generated content.

  • The framework also focuses on responsible AI development and deployment processes. The unease surrounding AI in critical domains like government/state security (35%), aircraft (32%), and healthcare settings (26%) underscores the need for trusted and transparent development methodologies.

  • Rigorous testing and assurance are highlighted in the framework. The relatively high levels of worry about AI in areas like online banking (30%), cars (24%), and personal health monitoring (21%) indicate a need for thorough testing and validation to assure safety and reliability.
     

2. Data Quality and Security 

  • Ensuring high-quality and unbiased data is a key dimension. Concerns around personal data usage in areas like online banking (30%), mobile phones (24%), and personal health monitoring (21%) highlight the importance of robust data governance practices.

  • Ensuring the security of AI systems is also critical. The levels of concern around government/state security (35%), online banking (30%), and mobile phones (24%) point to the importance of robust security measures, especially for sensitive applications.

3. Safety, Alignment and Incident Reporting 

  • The framework highlights the importance of research into safe and aligned AI systems. The apprehensions around AI in critical domains like government/state security (35%), aircraft (32%), and healthcare settings (26%) highlight the need for ongoing research to ensure safety and alignment with human values.

  • The framework includes provisions for incident reporting to ensure that any issues or malfunctions in AI systems are promptly addressed. This is crucial for maintaining trust and accountability, especially in sensitive areas where the impact of AI failures can be significant. 

4. AI for Public Good 

  • Lastly, this dimension encourages the use of AI for societal benefit. Our ASEANScan data shows a generally positive outlook towards AI in certain domains, with respondents indicating that its introduction there is most likely to help them. Education (70%), supermarkets (68%), and personal health monitoring (64%) garnered the strongest positive responses, pointing to broad public agreement that AI can have a beneficial impact and contribute to the public good in these areas.

Conclusion: A Foundation for Regional Consensus 

Singapore’s Model AI Governance Framework aligns closely with the sentiments expressed in our ASEANScan findings. It emphasizes the need for accountability, data quality, security, and trusted development processes, especially in sensitive domains like finance, healthcare, and government services. The framework's focus on safety, alignment research, and public good also resonates with the potential benefits and concerns around AI in various aspects of daily life. 

As AI continues to evolve and integrate into society, establishing shared principles and standards for its governance becomes imperative. By proactively addressing generative AI concerns through a balanced, collaborative, and internationally aligned framework, Singapore has positioned its Model AI Governance Framework as a viable foundation for fostering regional cooperation and consensus on the ethical governance of AI systems across Southeast Asia.  

