Self-Driving Vehicles
#796
Lexus Test Driver
People want self-driving cars to prioritize young lives over the elderly
Today, MIT released the results of a global survey on the moral and ethical decisions that autonomous vehicles should be programmed to make. The survey reveals that general preferences include prioritizing human lives over animals, younger and healthier people over the elderly and saving more lives over fewer lives. People also preferred to spare bystanders (who were obeying the law) over jaywalkers.
The study is unique because of its sheer scale; over 2 million people from 200+ countries participated in the survey. It presented variations of the "Trolley Problem," a classic ethical dilemma that asks participants to choose whom to save when an out-of-control trolley endangers people. When it comes to autonomous vehicles, the software may have to decide whether to swerve into a group of people to avoid a head-on collision, or whether to save its own passengers at the expense of lives in another vehicle.
While the survey results revealed general preferences, there were variations and trends based on where respondents were from. "The main preferences were to some degree universally agreed upon," lead author Edmond Awad, a postdoc at MIT, said in a release. "But the degree to which they agree with this or not varies among different groups or countries." An example is that in "eastern" countries, including many in Asia, respondents were not in favor of prioritizing young lives over the elderly.
The full results of the study will be published in the journal Nature. It will be interesting to see whether autonomous vehicle programmers take the results into account when determining the ethical and moral preferences of the vehicles they are working on.
This makes me effing sick.
#798
Lexus Test Driver
#800
It's also easier for the government to track our whereabouts.
#801
Lexus Test Driver
well that's certainly cruel and very darwinian but in a way makes sense. from a purely evolutionary standpoint, there's more "need" for younger people to stay alive, at least until they have kids. our society isn't run on purely evolutionary principles though so this obviously needs some more thought lol.
i think fully autonomous cars are as likely to happen as flying cars so i don't think this is a major concern.
#802
Lexus Test Driver
The problem with self-driving cars is the morality of the AI system. No matter how flawed we humans are, I bet the majority of us would rather swing the car into a tree than hit someone.
#803
Driver School Candidate
So you would prefer to hit someone else's parents rather than someone's children? There is no right answer to the question. I would prefer the car to hit no one.
#805
Pole Position
Oh that's super easy - you just use a forward looking camera and program it to look for a Cadillac or Buick badge
(just kidding @mmarshall )
On a serious note, the ethical debate here is complex, in no small part because it's obviously not the "car" making the potentially life-or-death decision. It's the engineers who developed the software who are effectively making the call, determining in advance and without any context who they think is expendable, who should be more likely to bite the dust, and who should be spared to live to see another day.
Last edited by swajames; 10-25-18 at 12:20 PM.
#806
Lexus Fanatic
Oh that's super easy - you just use a forward looking camera and program it to look for a Cadillac or Buick badge
(just kidding @mmarshall )
#808
Lexus Test Driver
Well, everyone would prefer that the car hit no one. This is referring to a (probably very unlikely) situation in which hitting a pedestrian is unavoidable and the AI has to choose which way to go. Either way is going to result in an injury, but in this case it would prioritize the safety of a younger person.
It's not saying the car is going to be a heat-seeking missile, hell-bent on hurting geriatrics.
Since you are the driver, you should hit a pillar and save other people first because you are the one liable behind the wheel.
There is someone in the comment section of the article who made a good point. I believe the person said, and I loosely quote: "the AI self-driving car should protect the people outside of the car before the people inside of the car, because the people inside agreed to let the car drive them and accepted the risks of a self-driving car, while the people outside never agreed to such terms."
#809
Pole Position
Yes. I do understand that the car, if programmed this way, would favor the younger person over the older person. I know it's not going to missile toward the elder folk. The problem is: if there is a situation where the AI car needs to hit someone, it's an immoral way of thinking that it's "okay" to hit the elder person rather than the younger person. At the end of the day, these are someone's kids, parents, or grandparents.
#810
Lexus Champion
There is a context to this, and it is this...
Autonomous vehicles will be computer-controlled, and computers are like young children: they are ignorant -- unaware and uninformed -- and know only as much as they have been taught in their short lives. A toddler can walk or pedal a tricycle but does not yet know how to steer, and will continue straight ahead, perhaps into a collision.
So what if an autonomous vehicle meets a situation in which it is heading straight for a collision? If it has not been taught -- not been programmed -- what to do, it will ignorantly continue on its way, straight into that obstacle.
This problem grows out of an undergraduate philosophy thought experiment known as the trolley dilemma: a runaway trolley is barreling down its tracks toward five people; as a bystander, you can pull a lever to divert the trolley onto a side track, but doing so puts it on a course that will result in a single person's death. What would you do?
So what if that straight-ahead obstacle is a woman on a bicycle (to borrow that Uber collision as an example)? Do you program the vehicle to proceed straight ahead (and hit the woman but save the car's passengers), or program it to swerve to avoid her? What if the choices then become:
- Swerve to the left, into oncoming traffic...
- Swerve to the right, into a young family with young children, waiting to cross the street...
- What if the oncoming traffic is another car, and hitting it would injure the car's passengers and also injure the other car's passengers, but save the bicyclist and the family from harm?
- What if the oncoming traffic is a large truck, and hitting it would likely seriously injure (or kill) the car's passengers but save the bicyclist and the family from harm?
But how do you determine (and rank) value-of-life? Are the car's passengers at the top of the list (save at all costs)? Who is next? Individuals or groups / families? Young or old? Is it allowable to injure the car's passengers in order to avoid injuring other people?
These are questions that philosophers involved in autonomous vehicle development are working on. They may need the help of local populations, especially since populations in different parts of the world where an autonomous vehicle may operate have different values (some cultures have great respect for their elders, while others value their young more).
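To see why the "rank value-of-life" question above is so uncomfortable, here's a toy sketch in Python. Everything in it -- the categories, the harm weights, the scenario names -- is invented purely for illustration; no real AV stack works this way. The point is only that for a "least harm" rule to exist at all, someone has to pick the numbers in advance:

```python
# Toy "least-harm" chooser. All names and weights below are invented
# for illustration, mirroring the hypothetical choices in the post above.

# Each possible maneuver maps to the people it would likely harm.
scenarios = {
    "continue_straight": ["cyclist"],
    "swerve_left_into_truck": ["passenger", "passenger"],
    "swerve_right_into_family": ["adult", "child", "child"],
}

# The contentious part: an engineer must assign these weights up front,
# in advance and without context.
harm_weight = {"child": 3, "adult": 2, "cyclist": 2, "passenger": 2}

def least_harm(options):
    """Return the maneuver with the lowest total weighted harm."""
    return min(options, key=lambda m: sum(harm_weight[p] for p in options[m]))

print(least_harm(scenarios))  # continue_straight, under these toy weights
```

Change one weight (say, rate the cyclist's life higher than the passengers') and the "right" answer flips -- which is exactly the debate the survey is trying to inform.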