RegTech – the smart future for model risk management
The inexorable advance of new technologies such as artificial intelligence (AI), machine learning (ML), big data and cloud computing is transforming financial institutions and markets. Machine learning in particular is rapidly gaining traction in the world of model risk management, where the growing number of models to be managed requires a consolidation of effort across the entire model landscape.
Despite infrastructure challenges and reservations about the transparency of algorithms, these technologies are being used in model risk management to alleviate the overhead of regulatory compliance: a key example of how RegTech can help banks achieve compliance in the smartest way.
Disruptive technologies are still at an early stage of adoption in most financial institutions, but the rapid expansion of big data, and the ability to purchase flexible infrastructure and data storage through the cloud, are driving the spread of ML algorithms into operational areas.
As I argued in my previous blog, Model Risk Governance – the Risk at the Heart of Finance, model risk management is increasingly important: financial institutions are creating ever more sophisticated models with ever-increasing dependencies, leading to heavier regulatory oversight. To mitigate the increased workload of regulation, ML is being fast-tracked as a tool that can underpin the management of models.
The lifeblood of financial models, whether for risk mitigation, valuation or trading, is high-quality data with sufficient coverage to support both the algorithms and their testing. The management function for these models needs to ensure this data is available, together with processes for data quality, data cleansing and issue resolution. ML can both assist with cleansing and make use of large data sets, whether structured or unstructured, in ways that human beings cannot. It can also apply supervised and unsupervised learning to validate the outputs of the models and direct human analysts to the ‘edge cases’ that require further attention.
When developing models, clustering algorithms can be used to identify whether new data is similar or related to data the model already recognises, and the ongoing self-training of the algorithms enables the model to predict variations in risk as the data changes, minimising the likelihood of underestimating that risk. This can be particularly useful for operational risk models, for example in identifying the risk of a cyber-attack.
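As an illustrative sketch of this idea (not a production approach), a simple novelty check compares each new data point against cluster centroids learned from historical data; the centroids and threshold below are hypothetical values chosen for the example:

```python
import math

# Hypothetical centroids learned from historical model-input data
# (in practice these would come from a clustering step such as k-means).
CENTROIDS = [(0.0, 0.0), (5.0, 5.0)]
NOVELTY_THRESHOLD = 2.0  # illustrative tolerance, not a calibrated value


def distance(a, b):
    """Euclidean distance between two points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def is_novel(point):
    """Flag a data point whose nearest centroid lies beyond the threshold."""
    return min(distance(point, c) for c in CENTROIDS) > NOVELTY_THRESHOLD


print(is_novel((0.5, 0.5)))    # False: close to a known cluster
print(is_novel((10.0, -3.0)))  # True: far from both clusters, worth a human look
```

Points flagged as novel would be routed to analysts rather than silently scored, which is exactly the "edge case" triage described above.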
Unsupervised learning algorithms, which detect patterns in the data without prior labelling, can help model risk management both identify back-testing trends and check whether model output remains within tolerance levels. ML is also seen as the solution to the repetitive checks that form part of model validation, used in conjunction with human oversight and counter-checks.
Another potential use for ML is in the required periodic validation and data cleansing, where algorithms can identify missing inputs and patterns far more efficiently than hard-coded programs. The algorithm can then learn whether further inputs would be valuable additions to the model. Beyond these applications, ML algorithms can be used to assemble and organise the data for building ‘challenger models’ that evaluate the primary model over time.
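The missing-input detection step can be sketched very simply; the field names and records below are made up for illustration, and a real pipeline would feed these counts into the learning step rather than just printing them:

```python
def missing_pattern(records, fields):
    """Count missing (None) values per field across a batch of inputs,
    so recurring gaps can be surfaced before the data reaches a model."""
    counts = {f: 0 for f in fields}
    for rec in records:
        for f in fields:
            if rec.get(f) is None:
                counts[f] += 1
    return counts


batch = [
    {"price": 101.5, "volume": 2000, "rating": "AA"},
    {"price": None,  "volume": 1800, "rating": "AA"},
    {"price": 99.7,  "volume": None, "rating": None},
]
print(missing_pattern(batch, ["price", "volume", "rating"]))
# {'price': 1, 'volume': 1, 'rating': 1}
```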
However, the increasing use of ML brings its own challenges. The risk infrastructure of banks tends to lean towards traditional legacy solutions, and taking advantage of new technologies such as cloud, big data and ML is not always as simple as it may first appear.
Another concern, both internally and for regulators, is that the self-learning of the algorithms, whether used in primary models or for validation, can result in a ‘black box’ in which even the creators can no longer explain how a model reached a particular conclusion, because the algorithm has become too complex to disassemble.
Model risk management is also subject to regulatory requirements for documentation and reporting, and regulators expect to see efforts to tag model outputs with explanations; however, these do not necessarily create a full audit trail of the decision-making process. There are also concerns about the transparency of aggregated model risk: self-learning models that are increasingly interdependent could lead to contagion. Further work is needed on the transparency of these decisions, and regulators may need to adjust their own frameworks to incorporate new governance and auditability rules into the process.
In conclusion, the landscape of model risk governance is changing fast. Full compliance with regulations, given the growing number and complexity of financial models, requires the adoption of new technologies. ML solutions are needed to interpret and manage big data, especially the growing volume of unstructured data. Many repetitive and time-consuming tasks can be taken out of human hands and, furthermore, the models themselves can become more relevant and efficient as the algorithms continually learn and rebuild themselves. There are certainly challenges along the way, but the growth of RegTech is now almost certainly irreversible, and if the risks are properly managed the benefits will clearly be substantial.