Hi All
A very interesting post on cognitive bias: imagine creating new software, or programming your organisational Artificial Intelligence, without any awareness of your own biases.
https://www.visualcapitalist.com/50-cognitive-biases-in-the-modern-world/
A fascinating insight.
Regards
Caute_Cautim
See https://www.schneier.com/news/archives/2011/05/the_5_biggest_biases.html
See also https://www.schneier.com/tag/bias/
There is also an extensive literature on heuristics, biases and decision making. Try starting with the classic studies by Amos Tversky and Daniel Kahneman if interested.
My organization pounds anti-bias training into us at every opportunity it can. Makes you question every decision after a while, thanks 😉
- B/Eads
But ultimately you have to move forward and make decisions rather than be paralysed by the fear of deciding.
Some years ago I had an HR person tell me that discrimination was illegal. I just said: "Oh, really? So every job applicant must be offered a position, and we cannot choose between candidates, because that would be discriminating against those less likely to be able to do the job?" You have to decide, and so long as you don't discriminate on the basis of protected characteristics, choosing between candidates is both necessary and lawful.
@Beads The main point is that AI and ML only work within distinct boundaries. Go outside those boundaries and they simply produce unreliable results that cannot be trusted.
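To make that concrete, here is a minimal sketch (numpy only, with made-up data, so purely illustrative) of a model that looks accurate inside its training range and falls apart the moment it is asked to extrapolate beyond it:

```python
# Hypothetical example: a model fitted on a narrow input range is
# reliable inside that range and confidently wrong outside it.
import numpy as np

rng = np.random.default_rng(0)

# "Ground truth" the model never fully knows: a sine wave.
true_fn = np.sin

# Train only on inputs between 0 and 2 -- the model's "boundaries".
x_train = rng.uniform(0.0, 2.0, 200)
y_train = true_fn(x_train) + rng.normal(0.0, 0.05, x_train.size)

# Fit a cubic polynomial -- a stand-in for any learned model.
model = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# Inside the training boundaries the error is small...
x_in = np.linspace(0.0, 2.0, 50)
print("in-range mean abs error:    ", np.mean(np.abs(model(x_in) - true_fn(x_in))))

# ...outside them the same model produces nonsense without warning.
x_out = np.linspace(4.0, 8.0, 50)
print("out-of-range mean abs error:", np.mean(np.abs(model(x_out) - true_fn(x_out))))
```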
Regards
Caute_Cautim
Agreed. However, after rereading both articles in the OP, I only see human bias, not trained bias in AI/ML modelling, which is where my focus was aimed with both of my comments. My company provided three hours of anti-bias training this year alone, and none of it touched on AI bias, which is a mathematical construct that can be addressed through how the model is trained, but that would be a completely different topic altogether.
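For what it's worth, here is a rough sketch of what "training it out" can look like in practice, assuming scikit-learn and entirely made-up data: a model that inherits skew from an imbalanced training set, and the same model retrained with the classes reweighted so each contributes equally to the loss.

```python
# Hypothetical example: skew inherited from an imbalanced training set,
# and a training-time reweighting that counteracts it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two overlapping classes, but class 1 is rare (5% of the training data).
n_major, n_minor = 950, 50
X = np.vstack([
    rng.normal(0.0, 1.0, (n_major, 2)),
    rng.normal(1.5, 1.0, (n_minor, 2)),
])
y = np.concatenate([np.zeros(n_major), np.ones(n_minor)])

# A balanced test set, to see how each model treats the minority class.
X_test = np.vstack([
    rng.normal(0.0, 1.0, (500, 2)),
    rng.normal(1.5, 1.0, (500, 2)),
])
y_test = np.concatenate([np.zeros(500), np.ones(500)])

# Naive training: the model absorbs the skew and under-predicts class 1.
naive = LogisticRegression().fit(X, y)

# Reweighted training: each class carries equal weight in the loss.
reweighted = LogisticRegression(class_weight="balanced").fit(X, y)

for name, model in [("naive", naive), ("reweighted", reweighted)]:
    pred = model.predict(X_test)
    recall_minority = np.mean(pred[y_test == 1] == 1)
    print(f"{name:10s} minority-class recall: {recall_minority:.2f}")
```

The point being that this kind of bias is measurable and correctable at training time, unlike the human kind.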
If I missed the AI/ML conversation I would be happy to correct myself, but for now I fail to see it mentioned above, save for your comment.
Well, off to a parade and an afternoon at the VFW hall(s).
- B/Eads
The issue of AI/ML bias is explicitly called out in the ICO's paper on big data. From a privacy perspective, the regulator requires organisations to explicitly consider and document the potential consequences of automated decision making based on these techniques before they are used, which should go some way towards considering whether bias will result.
My focus here was on human bias, as I saw nothing above concerning machine bias, which entails a different set of rules and a different technical discussion.
- B/Eads