Singaporean regulator works on guidelines for AI use by financial institutions
Concerns that AI may be misused by certain entities are the primary driver behind the guidance, which is set to be ready by the end of the year.

The Monetary Authority of Singapore (MAS) has become the latest regulator to seek ways to ensure that artificial intelligence (AI) is used responsibly by financial institutions.
Earlier today, the regulator announced that it is collaborating with financial industry representatives to develop guidance on the responsible and ethical use of AI and data analytics by financial institutions.
The guidelines aim to set out key principles and best practices for the use of AI and data analytics, helping financial institutions to strengthen internal governance and reduce the risk of data misuse.
The guide is targeted for completion by the end of the year and will cover all segments of the financial sector, including FinTech firms, the regulator said.
The use of various AI solutions and novel FinTech products has attracted the attention of other regulators as well. Last week, for instance, the Hong Kong Securities and Futures Commission (SFC) clarified its stance on the provision of robo-advice.
The SFC notes that it is important for clients to understand how investment advice is generated and how algorithms are used to manage their accounts. Information provided to clients should include the limitations of the robo-adviser’s services and how and when the algorithm might rebalance a client’s portfolio.
The lack of transparency in how AI solutions reach their decisions is a persistent problem. In a way, AI systems work like “black boxes”: the programmers know their inputs and outputs but not their inner workings. As a result, it is not yet clear whether and how responsibility for errors should be attributed to the machines or to the people behind them.
In October last year, the UK Select Committee on Artificial Intelligence talked to academic experts about the ‘big picture’ issues associated with AI. The question of ethics, and whether AI could somehow be regulated, was also raised during the discussion. The experts agreed that the necessary codes of practice already exist in numerous areas, but broader AI-specific guidelines were deemed unnecessary.
Professor Michael Wooldridge, Head of Department and Professor of Computer Science at the University of Oxford, said:
“AI-specific ethical guidelines, I’m not convinced is something that is particularly necessary. Nor AI law. Looking at specific areas – the data protection legislation that we have, would make sense but a general AI law – no.”