rslade
Influencer II

The ethics of who to kill in a crash ...

With the recent surge in self-driving vehicles (of many types), the question of how such vehicles should make decisions in "no-win" driving situations has become of great interest.  A recent study has identified some interesting cultural "clusters," distinguished by the different choices their members make on these questions.

 

It has also identified similarities in decisions.  Based on those, I'm in trouble.  I'm old, I'm overweight, I'm male, and I jaywalk just about every day going to the library.  Nobody is going to swerve to avoid me if there are other choices ...


............

Other posts: https://community.isc2.org/t5/forums/recentpostspage/user-id/1324864413

This message may or may not be governed by the terms of
http://www.noticebored.com/html/cisspforumfaq.html#Friday or
https://blogs.securiteam.com/index.php/archives/1468
5 Replies
Badfilemagic
Contributor II

Honestly, I don't think I would willingly get into a vehicle that I knew would choose to kill me if it had to, and I certainly wouldn't buy one.  But I have broader issues with self-driving vehicles as it is, particularly the gap between those being created by Silicon Valley companies and those from "real" engineering companies like automotive or aerospace firms. The cost of failure in Silicon Valley is low compared to the cost of failure in those other spaces. And I don't want someone "failing early and failing often" when lives are on the line, then going "oops, we'll just fix that for everyone else in the next over-the-air update! Our bad!"

 

No thanks, not for me.

-- wdf//CISSP, CSSLP
Kempy
Newcomer III

Totally agree: I would not buy, or get into, a vehicle that might choose to kill me.

But how would you know this if the code is proprietary?
CISOScott
Community Champion

Is this more of an ethics question or a risk question? Ethics means doing the right thing based on your morals/culture/upbringing. Risk means evaluating each situation and making the appropriate choice based on risk tolerance, cost, etc.

 

I know that if presented with the choice of 1) hitting a young child who runs into traffic or 2) hitting an oncoming car head-on, I would usually choose hitting the car, unless I was at very high speed (hopefully we don't have too many kids playing near the Autobahn unsupervised). The choice seems obvious because cars have more safety features to protect their occupants (crash cages, airbags, seatbelts, etc.) than an unprotected human body has.  The forces in the head-on collision will be greater, but there are more safety features to absorb them. BUT this changes as the variables change. What if I was driving a school bus full of kids and an 18-wheeler was coming at a high rate of speed? Or a motorcycle was coming the other way? Or there was a cliff on one side and the kid on the other?  Can the computer keep up with all of this in time to make the "BEST" decision?
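To make the "variables change" point concrete, here is a purely hypothetical toy sketch of a rule-based harm score. The function, the weights, and the numbers are all invented for this discussion; no real autonomous vehicle is claimed to work this way.

```python
# Purely hypothetical toy, invented for this thread: a crude "expected harm"
# score for one collision option. Every weight here is a made-up assumption.

def expected_harm(people_at_risk: int, protected: bool, impact_speed_kmh: float) -> float:
    # Assume occupants of a modern car (crash cage, airbags, seatbelts)
    # absorb far less harm than an unprotected pedestrian would.
    protection_factor = 0.1 if protected else 1.0
    return people_at_risk * protection_factor * impact_speed_kmh

# Scenario from the post: child in the road vs. an oncoming car, moderate speed.
child = expected_harm(people_at_risk=1, protected=False, impact_speed_kmh=50)
oncoming_car = expected_harm(people_at_risk=4, protected=True, impact_speed_kmh=100)
print(child > oncoming_car)    # True: the rule prefers the head-on collision

# Change one variable (a bus full of kids, an 18-wheeler at speed) and the
# very same rule flips its answer -- which is exactly the point above.
oncoming_truck = expected_harm(people_at_risk=40, protected=True, impact_speed_kmh=120)
print(child > oncoming_truck)  # False: now the rule prefers hitting the child
```

Even this toy shows the problem: every weight in it is a moral judgement somebody has to hard-code, and reasonable people will not agree on the numbers.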

Badfilemagic
Contributor II

@CISOScott

 

I think computers excel at keeping up with variables and acting at high speed. If they didn't, we wouldn't need or want them.  That said, "best" in terms of ethics is an ever-changing, open-ended question. If everyone could ALWAYS agree on what is ethically or morally "best", the world would be a much different place, even if not everyone always lived up to the standard.

 

Unfortunately, as the linked study shows, we don't have a universal view of what "best" might be even in the general cases, let alone the edge cases. This is what is so concerning about machines making "decisions."  Even if they're acting in accordance with an algorithm explicitly programmed by people, and not "making stuff up," for lack of a better term, via some opaque set of generalized machine-learning algorithms, if people can't agree then this is more of a political issue than a technical one.

 

And all of that even puts aside the questions of liability, etc. Right now, if I run over the kid, then I'm at fault, not BMW. If I buy a self-driving car, then it is my property (licensing and DRM and all of that other rent-seeking aside). If the self-driving car I own runs over the kid, do I own the liability for that or does the self-driving car manufacturer? If the car is supposed to NEVER pick "run over a kid" but fails to make that decision due to a bug, what happens to the individual programmer when the forensic audit checks the SCM and git blame points at Joe Blow?  We don't have anything like the PE (Professional Engineer) licensing scheme for software engineers.

 

If a civil engineer signs off on a bridge and it collapses and kills people, his license is pulled and he's not going to be building bridges again any time soon, if ever. If software is buggy, people have hitherto just gone "well, it's computers. What do you expect?" That's the mindset that scares me the most, and why I don't really want to get into one of these things until all of this is well hammered out from all aspects: legal, regulatory, ethical, technical, etc.

-- wdf//CISSP, CSSLP
rslade
Influencer II

The Faraday Auto Navigating Locomotive Company is proud to announce the 2019 Faraday Watt!

The Watt is our premier model, but priced for families.  It has the greatest range of options in its class, including 29 cup-holders (unprecedented for a five-seat model) and a 73-inch dashboard display.

It also has the greatest range of user-selectable moral driving options, including "don't kill me," "kill me but leave my passengers alive," and "I'm done for, you go on and marry Alice."

Watt! The fun moral driving solution!


............

Other posts: https://community.isc2.org/t5/forums/recentpostspage/user-id/1324864413

This message may or may not be governed by the terms of
http://www.noticebored.com/html/cisspforumfaq.html#Friday or
https://blogs.securiteam.com/index.php/archives/1468