Have you seriously thought about deepfakes, and asked yourself whether you could pick out the real from the fake?
Can we educate ourselves to question whether or not what we are reading is actually the truth?
https://securityintelligence.com/articles/how-deepfakes-will-make-us-question-everything-in-2020/
Regards
Caute_cautim
Have I thought about 'deepfakes'? Yes, yes I have, and I have come to the conclusion that nothing good can come of them.
As for damaging, tarnishing, or otherwise muckraking another's image or reputation, the immediate impacts will be damning in the short term; but as these fakes become widely available, if not pedestrian in nature, they will also become cover for any and all bad behavior imaginable.
Deepfakes will give unneeded cover to all sorts of questionable to outright illegal behavior, as the accused will either declare the evidence a fake or claim 'someone' is out to get them, with absolute, technology-backed plausible deniability.
Basically, we all lose twofold. Nothing can be believed, and any evidence brought to light can be easily dismissed unless better forensic tools are brought to bear. Not everyone will have those resources, and many will subsequently lose in the court of public opinion.
I find the above to be a very sad commentary on the technology.
- b/eads
@Beads I agree, and it will only get worse:
Biased AI Is Another Sign We Need to Solve the Cybersecurity Diversity Problem
Artificial intelligence (AI) excels at finding patterns like unusual human behavior or abnormal incidents. It can also reflect human flaws and inconsistencies, including 180 known types of bias. Biased AI is everywhere, and like humans, it can discriminate against gender, race, age, disability and ideology. AI bias has enormous potential to negatively affect women, minorities, the disabled, the elderly and other groups. Computer vision has more issues with false-positive facial identification for women and people of color, according to research by MIT and Stanford University. A recent ACLU experiment discovered that nearly 17 percent of professional athlete photos were falsely matched to mugshots in an arrest database.
https://securityintelligence.com/articles/biased-ai-is-another-sign-we-need-to-solve-the-cybersecuri...
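To make the false-positive disparity concrete, here is a minimal sketch of how per-group false-positive rates are typically compared when auditing a system like the ones in the MIT/Stanford and ACLU findings. The group names and match results below are purely illustrative, not data from those studies:

```python
# Hypothetical sketch of a bias audit: compare false-positive rates
# (wrongful "match" declarations) across demographic groups.
# All data below is made up for illustration.

def false_positive_rate(actual, predicted):
    """FPR = false positives / all actual negatives."""
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    negatives = sum(1 for a in actual if not a)
    return fp / negatives if negatives else 0.0

# actual: 1 if the two photos really are the same person, else 0
# predicted: 1 if the system declared a match, else 0
group_a = {"actual": [0, 0, 0, 0, 1], "predicted": [0, 0, 0, 0, 1]}
group_b = {"actual": [0, 0, 0, 0, 1], "predicted": [1, 1, 0, 0, 1]}

fpr_a = false_positive_rate(group_a["actual"], group_a["predicted"])  # 0.0
fpr_b = false_positive_rate(group_b["actual"], group_b["predicted"])  # 0.5
print(f"Group A FPR: {fpr_a:.0%}, Group B FPR: {fpr_b:.0%}")
```

Both groups see identical accuracy on true matches here, yet one suffers far more wrongful matches; that gap between per-group false-positive rates, rather than overall accuracy, is what the studies above measured.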
So where do we go from here?
Regards
Caute_cautim