Independent Research and Policy Advocacy

Principles of responsible and trustworthy AI in digital lending


Abstract

The adoption of artificial intelligence is steadily increasing across various industries. The financial sector, in particular, is at the forefront of this movement, harnessing AI to realise considerable commercial value while simultaneously enhancing operational effectiveness, enriching customer experience, strengthening risk management frameworks, and promoting innovation. Some of the benefits of AI adoption in FinTech are:

  1. Data processing abilities: Enhanced data processing abilities, including the ability to process qualitative and audio-based data, could lead to a deeper understanding of customers’ needs and improve product fitment. These insights could also significantly improve the customer journey by sensing customers’ needs and offering customised, relevant and timely support in a customer-friendly format at each stage of the journey. AI systems can also help financial service providers (FSPs) build stronger defences against fraudulent activity.
  2. Flexibility and scalability: AI systems exhibit a high degree of scalability and flexibility, which enables FinTech organisations to offer hyper-personalisation at scale.
  3. Process-rationalisation: AI systems help realise efficiency gains from process-rationalisation where providers can eliminate duplication of tasks by deploying AI.

When AI works as intended, it can deepen financial inclusion and enhance the relevance of financial services at population scale. For instance, GenAI can significantly enhance the ease of opening accounts for uninitiated customers. It can also nudge them to improve account usage, promote budgeting, and deepen financial literacy through relevant, customised and timely content. In the case of credit, fuelled by big data, algorithms could do a better job of predicting the creditworthiness of thin-file customers and credit invisibles. These gains, however, are tempered by attendant risks. AI-related risks could arise from the advanced processing of rich and personal data. This includes risks related to data privacy, bias and discrimination, AI hallucination and misinformation, and inconsistent accuracy of the AI system. These risks can take away from the gains presented by deeper processing capabilities. Further, these risks are aggravated by the relative scalability of algorithms.

Just like the benefits, the risks also scale with the model, affecting a very large number of customers at once. The difficulty of explaining AI processes and the complexity of the underlying algorithms could make an algorithm opaque and difficult to assess for accuracy, thereby reducing the scope for customers to question the algorithm, identify mistakes or seek remedial action. These risks can trigger adverse systemic shifts in the financial system and jeopardise customer safety. For instance, reliance on similar, off-the-shelf machine learning tools could encourage herd behaviour among lenders, which could intensify economic volatility. Similarly, a less-than-fit algorithm could enable lending to borrowers who may not have the wherewithal to repay the loans. This could affect the borrowers’ credit scores, cutting them off from formal credit markets, and severely erode the lenders’ portfolio quality.

Assessing and understanding the risks and benefits of using AI is essential for drafting informed policies on how AI can be integrated in the product development process. This must be supplemented with a detailed analysis of the assessments’ underlying causal mechanisms. AI governance must adopt a lifecycle or a value-chain approach which focuses on improving the visibility of the various components of an AI system and the stakeholders responsible for each. It allows organisations to identify the origins of the risks as well as the benefits along the value chain and allocate responsibilities accordingly. This visibility over the various components, their potential implications and the respective stewards also benefits the regulators.

Finally, the value-chain approach to governance is necessary for crafting an AI governance framework that focuses on responsible and trustworthy AI (RTAI). RTAI concerns itself with the AI system as a whole and not just the outcomes of AI adoption. It also requires AI systems to be technologically robust and aligned with socially desirable values such as non-discrimination and fairness, to minimise biases and any instances of data breach.
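To make the fairness requirement concrete, one widely used diagnostic is the demographic parity gap: the difference in approval rates between demographic groups in a lender's decisions. The sketch below is purely illustrative; the function name, data and threshold interpretation are assumptions and not part of the paper's framework.

```python
# Minimal sketch of one fairness check for lending decisions:
# the demographic parity gap (difference in approval rates across groups).
# All names and figures here are hypothetical illustrations.

def demographic_parity_gap(approvals, groups):
    """Largest difference in approval rates between any two groups.

    approvals: list of 0/1 lending decisions
    groups:    list of group labels, aligned with approvals
    """
    counts = {}
    for decision, group in zip(approvals, groups):
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + decision)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Group 'A' is approved 3 times out of 4; group 'B' only once out of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
labels = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
print(demographic_parity_gap(decisions, labels))  # 0.5
```

A gap near zero suggests similar approval rates across groups; a large gap, as in this toy example, would flag the model for closer review. Demographic parity is only one of several fairness metrics, and the appropriate choice depends on the lending context.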

Though responsible and trustworthy are often used interchangeably, this paper distinguishes the two as follows:

  1. Responsible pertains to the processes associated with the design and deployment of AI.
  2. Trustworthy pertains to the conduct and the outcomes of the AI system thus designed.

Thus, ‘responsible’ describes the processes and procedures put in place to ensure that the conduct, decision and outcomes of the AI systems are trustworthy.

The paper explores what RTAI means in the context of digital lending. The first section compiles principles of RTAI along with its essential components. The next section maps relevant tools for each principle. These tool recommendations can help lenders implement RTAI practices in their operations. The distance map—a checklist for technology teams of digital lenders—allows digital lenders to gauge how far their current AI safeguards are from the desired level and how they might close this gap. The teams should be able to review the checklist without external supervision and reflect on the overall intensity of their AI safeguards. The map serves as a diagnostic tool and offers guidance to lenders on how they might further strengthen their AI practices.
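A distance map of this kind can be thought of as a self-scored gap assessment: each safeguard is rated against a desired maturity level, and the "distance" is the shortfall. The sketch below illustrates the idea only; the safeguard names, the 0–3 maturity scale and the target level are hypothetical assumptions, not the paper's actual checklist.

```python
# Illustrative sketch of a "distance map" style self-assessment.
# Each safeguard is scored 0-3 for maturity; the distance is the gap
# to an assumed target level. Safeguards and scale are hypothetical.

DESIRED_LEVEL = 3  # assumed target maturity for every safeguard


def distance_map(scores):
    """Return per-safeguard gaps and an overall distance in [0, 1]."""
    gaps = {item: DESIRED_LEVEL - level for item, level in scores.items()}
    overall = sum(gaps.values()) / (DESIRED_LEVEL * len(gaps))
    return gaps, overall


self_assessment = {
    "bias testing before deployment": 2,
    "model explainability documentation": 1,
    "data privacy impact assessment": 3,
    "human review of declined applications": 0,
}

gaps, overall = distance_map(self_assessment)
for item, gap in gaps.items():
    print(f"{item}: {gap} level(s) from target")
print(f"overall distance: {overall:.2f}")  # 0.0 = fully aligned, 1.0 = no safeguards
```

The output ranks safeguards by how far they fall short, which is the kind of reflection the checklist is meant to prompt: a team can run the review internally and prioritise the largest gaps first.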

Read the full report here.
