I'd like to know how Artificial Intelligence risks could be embedded in risk management practices, and what taxonomy and best practices exist for managing AI cyber/tech risks, especially in the financial sector.
I personally do not think that embedding it in risk management or best practices should change the way you currently do that for any other technology, process, or application.
As with any "new" technology, you must determine the risk to your organisation and then implement adequate security.
Where I see the largest risk from AI of any sort is the bi-directional flow of information. Much like Cloud Security, one needs to know whether they (their firm) are uploading anything, to where, and what protocols are in place.
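To make that bi-directional flow concrete, here is a minimal sketch in Python of screening text before it leaves the firm for an external AI service. The patterns and the function name screen_outbound_prompt are invented for illustration; a real deployment would rely on a proper DLP gateway rather than hand-rolled regexes.

```python
import re

# Illustrative patterns a firm might screen for before a prompt leaves the
# network; real deployments would use a dedicated DLP product.
SENSITIVE_PATTERNS = {
    "account_number": re.compile(r"\b\d{8,12}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_outbound_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for text about to be sent to an external AI service."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (len(findings) == 0, findings)

allowed, findings = screen_outbound_prompt("Summarise account 123456789 for client@bank.com")
print(allowed, findings)  # False ['account_number', 'email']
```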
Another risk that I see, and believe could harm a corporation, is plagiarism and privacy (I am not sure where you are based, but there are many regulations that need to be taken into account).
So I would do the following:
Assess the security threats (malware/ransomware, accidents, natural disasters, and others)
Analyse and assess the risks
What is the probability, and what could the potential impact be? (see the scoring sketch after this list)
Mitigate and monitor the risk
Plan and develop options to reduce the threats
Determine the strategy that works for your organisation
- Avoidance
- Reduction
- Sharing - not typical in financial situations
- Transference
- Acceptance
Others?
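As a rough illustration of the analyse/assess step, here is a Python sketch that scores each risk as probability × impact and maps the score onto one of the strategies above. The risk entries and thresholds are invented for the example; every organisation must calibrate them to its own risk appetite.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # 0.0 to 1.0, likelihood over the assessment period
    impact: float       # 0 to 10, severity if the risk materialises

def risk_score(risk: Risk) -> float:
    """Classic probability x impact scoring; many frameworks use a 5x5 matrix instead."""
    return risk.probability * risk.impact

def suggest_strategy(score: float) -> str:
    # Illustrative thresholds only; set your own based on risk appetite.
    if score >= 7.0:
        return "Avoidance"
    if score >= 4.0:
        return "Reduction"
    if score >= 2.0:
        return "Transference"  # e.g. insure against the loss
    return "Acceptance"

register = [
    Risk("Prompt data leakage to external AI service", 0.6, 9.0),
    Risk("Model output triggers a plagiarism claim", 0.3, 6.0),
    Risk("Training data privacy breach", 0.3, 8.0),
]

for risk in sorted(register, key=risk_score, reverse=True):
    print(f"{risk.name}: score={risk_score(risk):.1f} -> {suggest_strategy(risk_score(risk))}")
```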
My thoughts are about how the AI Ladder works: if you understand how the data is gathered, cleansed, and formatted, you have a good idea of how the process is meant to work.
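As a toy sketch of that gather → cleanse → format flow (my own stage names and checks, not the AI Ladder's exact terminology), in Python:

```python
# Toy pipeline illustrating gather -> cleanse -> format; a real pipeline
# would add schema validation, lineage tracking, and audit logging.
def gather(raw_records: list[dict]) -> list[dict]:
    """Collect raw records from source systems (stubbed as an in-memory list)."""
    return raw_records

def cleanse(records: list[dict]) -> list[dict]:
    """Drop records missing mandatory fields and strip stray whitespace."""
    cleaned = []
    for record in records:
        if record.get("customer_id") and record.get("amount") is not None:
            cleaned.append({k: v.strip() if isinstance(v, str) else v
                            for k, v in record.items()})
    return cleaned

def format_for_model(records: list[dict]) -> list[dict]:
    """Normalise types so downstream training/scoring sees consistent data."""
    return [{"customer_id": str(r["customer_id"]), "amount": float(r["amount"])}
            for r in records]

raw = [{"customer_id": " C001 ", "amount": "120.50"},
       {"customer_id": None, "amount": "99.00"}]  # second record gets dropped
print(format_for_model(cleanse(gather(raw))))
# [{'customer_id': 'C001', 'amount': 120.5}]
```

Knowing which records were dropped or reshaped at each step is exactly what lets you judge whether the model is being fed good data.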
I strongly suggest that a good set of governance principles and ethics be applied throughout the organisation itself; this will greatly help everyone use it wisely and carefully.
Understanding how bias can be incorporated into a developer's mindset will help you test the outputs and ensure you are really gathering good data, then validating and testing it.
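One concrete way to test outputs for bias is a simple demographic-parity check: compare approval rates across groups and flag large gaps. The data, group labels, and the resulting 0.25 gap below are all invented; real testing would use the fairness toolkit and thresholds your organisation adopts.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Demographic parity difference: max rate minus min rate across groups."""
    return max(rates.values()) - min(rates.values())

# Invented model decisions tagged by a protected attribute.
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)
rates = approval_rates(decisions)
print(rates, "gap:", round(parity_gap(rates), 2))  # a gap of 0.25 warrants investigation
```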
Regards
Caute_Cautim