Hi All
More sobering reading from the AI Incident Database, whose latest summary of recent AI incidents shows a significant rise in fraud-related incidents, political misinformation and non-consensual deepfakes.
💰 AI-related incidents in business contexts:
▶ CFODive reports that 53% of U.S. and U.K. businesses have been targeted by AI-powered deepfake scams in 2024. Using AI to create realistic fake videos and audio of corporate executives, scammers have successfully stolen millions, including $25 million from British engineering group Arup (Incident 800, 3 September 2024).
💡 Responsible AI approach: Train your staff on the realities of deepfakes and how to spot red flags. Have clear policies and processes around identity verification.
▶ An LA school district's US $6 million investment in developing an AI chatbot "Ed," which was designed to provide academic and mental health support to students, failed when the contracted service provider collapsed due to financial difficulties (Incident 793, 1 July 2024).
💡 Responsible AI approach: Conduct robust due diligence on AI service providers, including as to their financial stability and risks of vendor lock-in.
🤥 Deepfakes in political contexts:
▶ Donald Trump shared AI-generated images that falsely suggested pop star Taylor Swift had endorsed him (Incident 766, 18 August 2024).
▶ On the same day, AI-generated images were published showing Kamala Harris at the Democratic National Convention with communist flags in the background (Incident 767).
▶ A US Senator was targeted in a deepfake Zoom video call by someone posing as a former Ukrainian foreign minister (Incident 805, 19 September 2024).
👿 Non-consensual deepfakes:
While there is often a focus on political deepfakes and those used for disinformation, the vast majority of deepfakes (98%) are deepfake pornography used to attack and silence women.
▶ At least 22 students at a US high school were targeted by deepfake nudes (Incident 765, 14 March 2024).
▶ Child predators are reportedly generating deepfake nudes of minors in order to extort them (Incident 784, 23 April 2024).
▶ A US soldier was accused of using AI to generate child pornography (Incident 780, 23 August 2024), while Incident 777 (28 August 2024) tracks a surge in explicit deepfake pornography in South Korea.
https://incidentdatabase.ai/blog/incident-report-2024-august-september/
https://incidentdatabase.ai/
Regards
Caute_Cautim