Not really industry news, but there isn't a "misc" category.
http://www.labsix.org/physical-objects-that-fool-neural-nets/
A 3D-printed version of an image classifier attack: an adversarial object that reliably fools machine learning models while looking unremarkable to humans.
Nifty research.
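For anyone curious how this class of attack works in principle, here is a minimal, illustrative sketch. This is not labsix's actual method (which optimises a 3D texture to survive many viewpoints); it shows the classic FGSM-style idea on a toy linear classifier with made-up weights, so all names and numbers here are hypothetical:

```python
import math

def predict(weights, x, bias=0.0):
    """Probability that x belongs to the 'target' class (logistic model)."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(weights, x, epsilon):
    """Fast-Gradient-Sign-style perturbation: nudge each input feature by
    epsilon in the direction that lowers the classifier's score.
    For a linear model the gradient of the logit w.r.t. x is just the
    weight vector, so its sign is sign(w)."""
    sign = [1.0 if w > 0 else -1.0 for w in weights]
    # Subtract to push the score away from the predicted class.
    return [xi - epsilon * s for xi, s in zip(x, sign)]

# Hypothetical classifier and input, for illustration only.
weights = [0.9, -0.4, 0.7]
x = [0.5, 0.2, 0.6]

before = predict(weights, x)          # ~0.69: confidently "target" class
adv = fgsm_perturb(weights, x, epsilon=0.5)
after = predict(weights, adv)         # ~0.45: prediction flips
print(before, after)
```

The point of the 3D-printed attack is that this kind of small, structured change to the input can be baked into a physical object's texture, so the misclassification survives real-world viewing angles and lighting.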
I know it's been a while since that news story, but I recently came across a similar attack on 3D machine learning systems. Interestingly, threats of this type are only growing in number, despite the constant development of both artificial intelligence in general and 3D machine learning in particular.
A telling indicator. So who should build in the self-checks and validate these 3D learning systems? Certainly not the programmers alone!! Do we need legislation and independent AI ethics boards put in place? We are always responsible for our decisions, whether as a programmer or a software architect.
It seems that just because automation and AI are involved, a laissez-faire approach to everything takes hold?
https://www.ibm.com/uk-en/artificial-intelligence/ethics
Regards
Caute_Cautim