If it's used for "social scoring." (Whatever that is.) Or surveillance. Or to predict stuff. (But only if you're targeting vulnerabilities.)
Except for the Powers-That-Be. And the military.
Is that all clear?
Privacy laws in Europe have long had a provision allowing individuals to object to completely automated decision-making, and those provisions were strengthened when GDPR came into force in 2018. That makes the whole Big Data analytics and AI area problematic, since individuals can object. More problematic still for organisations is that privacy legislation flows from the same human rights declarations as anti-discrimination law. It would not surprise me to see class actions in future from groups who believe they've been systematically harmed by big data and AI use.
@rslade wrote:
If it's used for "social scoring." (Whatever that is.)
A few articles on it:
Also mentioned in a documentary called Coded Bias.