Researchers from Tencent's Keen Security Lab reverse-engineered several of Tesla's automated processes to see how they reacted when the driving environment changed. One of the most striking discoveries was a way to cause Autopilot to steer into oncoming traffic. The researchers said Autopilot uses a function called detect_and_track to detect lanes and update an internal map that sends the latest information to the controller.
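For readers who want the shape of that pipeline, here is a minimal sketch of a detect-then-map-then-control loop. Everything except the detect_and_track name (which comes from the researchers' description) is hypothetical and greatly simplified; the point is only that the controller acts on whatever the perception stage writes into the map.

```python
from dataclasses import dataclass, field

@dataclass
class LaneMap:
    """Internal map of detected lane markings, refreshed each perception cycle."""
    lanes: list = field(default_factory=list)

    def update(self, detections):
        # Overwrite the stored lane estimates with the latest detections.
        self.lanes = detections

def detect_and_track(camera_frame):
    """Hypothetical stand-in for the vision routine the researchers name:
    extract lane-marking positions from one camera frame."""
    # The real system runs a neural network here; this sketch just reads
    # pre-labelled lane offsets out of a dict.
    return camera_frame.get("lane_markings", [])

def controller_step(lane_map):
    """Produce a steering command that aims at the centre of the mapped lane."""
    if not lane_map.lanes:
        return 0.0  # no lanes detected: hold the wheel straight
    lane_centre = sum(lane_map.lanes) / len(lane_map.lanes)
    return 0.1 * lane_centre  # proportional command toward the lane centre

def perception_control_loop(frames):
    lane_map = LaneMap()
    for frame in frames:
        lane_map.update(detect_and_track(frame))  # perception writes the map
        yield controller_step(lane_map)           # control reads the map

if __name__ == "__main__":
    # The second frame mimics spoofed markings that drag the apparent lane
    # centre sideways; the controller follows the map, so the steering
    # command moves with it.
    frames = [{"lane_markings": [-0.2, 0.2]}, {"lane_markings": [-1.8, -1.4]}]
    print(list(perception_control_loop(frames)))
```

That coupling is exactly why markings placed on the road surface were enough to redirect the vehicle: fool the detector, and the controller dutifully follows.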
The ability to influence self-driving cars by altering their environment isn't new. In late 2017, researchers showed how stickers affixed to road signs could cause similar problems. In a statement, Tesla said these are old issues that have already been patched. It also said that changes to the physical environment are generally considered outside the scope of attacks against its self-driving systems and are therefore not eligible for its bug bounty program. What a pity! That will just make researchers work harder and choose how they release their findings.
As a whole, at this point in time I see all self-driving cars as an alternative to euthanasia for idiots.
As fast as these cars are being developed, the attack vectors are growing even faster, and so is the pool of attackers.
It used to be about stealing car or plane designs; now it will be about injecting code into cars before they even leave the factory, to be initiated at a later date.
Brrrrrrrrr
The future is definitely there, but I am going to wait a looooong time after they start being released before I get in one.
I want to know who wrote the program, what biases they hold, how they were educated and brought up, and how any bias was removed from any Augmented Intelligence (AI) intrinsically used within the driving system. Did they apply SecDevOps throughout the whole development process? Was an SDLC followed, and who tested it? Can they be held accountable?
What communications systems are passively enabled? Is someone from Google, via a hijacked Wi-Fi access point or cell tower, likely to interfere with the system and its ability to get me from A to B?
Yes, I agree there would be fewer accidents - but at the end of the day, some human being wrote the original programs, and those have inherent bias built in. Were all scenarios tested? No, because it would cost too much and become uneconomic, but at least doing so would make people and organisations think about the safety of their employees. Perhaps if these systems were treated like aircraft, with international standards applied through appropriate authorities, safety would become the top priority?
However, once again, we will have to wait for a certain number of accidents to occur before this happens - it's called the human condition. Why not be proactive and prevent these accidents from occurring, or at least minimise their likelihood?
The statistical probability of any of the above being heeded is about one in the largest prime number we can currently compute on classical computing systems.
Regards
Caute_cautim