Machines cannot have an operating mode where they can kill someone, even if in the grand scheme of things they are saving lives. Prime example: the Therac-25 radiotherapy machines - without a doubt these machines saved countless lives, but they had an error state where instead of treating the patient, they would deliver a lethal overdose. At that point we didn't go "oh well, a human-operated radiotherapy machine would have killed more people because humans are inherently worse at this stuff" - we took all of them out of service until the problem could be found, rectified, and the solution tested and implemented. It also contributed to much stronger industry testing standards for medical devices.
I'd say an autonomous car which can completely fail to react to a person in its path is firmly in the "Therac-level of failure" category - even if on average such cars crash much less than humans do. They should all be taken off the road and only allowed back once we determine, through rigorous industrial-grade testing, that it can't happen again.
1) Yes, absolutely. As a result of Therac machines being put on hold while the flaw was investigated, I am sure some people didn't receive life-saving radiation treatment when they were scheduled to - that's still preferable to operating unsafe machinery. And you word it as if the choice is 3000 deaths/day or nothing, with nothing in between - you must know very well it's not as simple as that. I suspect automatic emergency braking systems (soon mandatory in the US, already mandatory on new cars in the EU) will reduce that number close to zero long before anything with an "autonomous" badge is allowed anywhere near public roads.
2) You can do real-world testing without testing on public roads and endangering everyone around you - which is what Tesla was/is doing: all their cars were gathering data that wasn't actually used for autonomous driving yet, but they were able to see what the car would have done had it been running in autonomous mode.
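Roughly, that "shadow mode" idea works out to something like the sketch below: run the autonomy stack on live sensor data, but only log its proposals next to what the human driver actually did, and never actuate anything. All names here (Frame, planner.plan, the divergence threshold) are hypothetical illustrations, not Tesla's actual API:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    sensors: dict          # camera/radar readings for one timestep
    human_steering: float  # what the driver actually did
    human_braking: float

def shadow_mode_step(frame, planner, log):
    """Run the planner on live sensor data without actuating anything.

    The planner's proposed controls are only recorded alongside the human
    driver's real inputs, so divergences can be analysed offline.
    """
    proposed = planner.plan(frame.sensors)  # what the car *would* have done
    log.append({
        "proposed_steering": proposed.steering,
        "proposed_braking": proposed.braking,
        "human_steering": frame.human_steering,
        "human_braking": frame.human_braking,
        # hypothetical threshold for flagging a disagreement worth reviewing
        "diverged": abs(proposed.braking - frame.human_braking) > 0.5,
    })
    # Crucially, `proposed` is never sent to the actuators.
```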
3) No, I do not believe that. I do believe that a certain category of problems has been completely eliminated by rigorous testing, certification processes and engineering practices that prevent those problems in the first place. That's not the same as saying there will never ever be an issue with a radiotherapy machine. To bring the topic back to self-driving cars: processes should be developed that ensure the car cannot fail to react (like it did in this case) when there is a person in its path - it should be impossible from the software's point of view. If the hardware cannot provide clear enough data to guarantee that, then it simply shouldn't be allowed on the road.
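One common way to express that kind of invariant in safety engineering is as an independent supervisor layered over the planner, rather than trusting the planner to always get it right. A minimal sketch, where every name (PerceptionReport, the 30 m threshold, the braking scale) is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PerceptionReport:
    person_in_path: bool   # did perception detect a person ahead?
    distance_m: float      # distance to the nearest such detection

def safety_supervisor(report: PerceptionReport, planned_braking: float) -> float:
    """Independent check applied to the planner's output.

    If perception reports a person in the vehicle's path, full braking is
    forced regardless of what the planner proposed - "do nothing" is not a
    reachable state while a person is detected ahead.
    """
    if report.person_in_path and report.distance_m < 30.0:
        return 1.0              # override: full brake
    return planned_braking      # otherwise defer to the planner
```

The point of the design is that the supervisor is simple enough to verify exhaustively, which is exactly what you can't do for the planner itself.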
Agree completely. Training self-driving cars in "virtual reality", and collecting data from human-driven cars and then replaying that data through the self-driving engine, should be done extensively before they are ever allowed on public roads - see the sketch below.
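The offline counterpart to on-road shadow mode might look like this: feed recorded sensor frames through the driving stack and check its reactions against human-annotated ground truth. Again a hedged sketch - recordings, frame.labels and planner.plan are assumed interfaces, not any vendor's real ones:

```python
def replay_logged_drives(recordings, planner):
    """Replay recorded drives through the planner and collect failures.

    A frame annotated with a person in the path where the planner would not
    have braked at all is exactly the Therac-level failure mode discussed
    above.
    """
    failures = []
    for rec in recordings:
        for frame in rec.frames:
            decision = planner.plan(frame.sensors)
            if frame.labels.get("person_in_path") and decision.braking == 0.0:
                failures.append((rec.id, frame.timestamp))
    return failures
```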
The driver assists you mention will increase the safety of human-driven cars to the point that the self-driving safety argument becomes less and less relevant.
The idea of a fully autonomous self-driving car that can pick you up from your cabin by the lake and drive you into the city is in "flying car" territory for me. The AI required to execute that journey is so far off that society could have changed enough to make the idea obsolete by then.
Approved self-driving routes are the only way I see this working, e.g. goods haulage or reducing congestion in city centers.
> Machines cannot have an operating mode where they can kill someone
Yes, they can, and in fact many operational machines do. You probably mean should not, but even that is clearly not the standard our society applies. Except for machines designed to kill people, we generally try to mitigate the risk to the extent reasonable, but lots of machines that have killed people are retained in operation in the same configuration.
> Machines cannot have an operating mode where they can kill someone.
I agree that they shouldn't, but they certainly do. Strangely, we seem fairly OK with devices that kill us when we use them wrong (e.g. cars, power tools, weapons), but we are outraged when devices that make decisions for themselves kill us. For the victim it makes little difference, but somehow to us it does.
I believe the difference is that the decision is taken by someone who has a conscious mental model of how he's going to be punished for killing someone.
I think it's more that if someone is killed by something they control, it is seen as more acceptable than being killed by something they don't have any control over. For example, someone dying in a car crash on the way to work is not front-page news. However, if someone dies on the way to work when their train crashes, it is.
If that alone explains it, then the situation is even stranger: we are alarmed by an infrequent event but accept as normal that lots of people die because of cars. Not seeing the wood for the trees.