
for those who haven't seen it, the manufacturer of the lidar system believes their hardware was functioning normally, and the problem is in uber's software: https://www.forbes.com/sites/alanohnsman/2018/03/23/lidar-ma...


[flagged]


An inattentive driver killing a pedestrian isn't typically treated as criminal, as far as I know. As a pedestrian and a driver, I have mixed feelings about this.


Not sure about the US but in other countries it often is.

In Australia for example if you are using your phone and kill someone you absolutely will be seeing jail time.


> An inattentive driver killing a pedestrian isn't typically treated as criminal, as far as I know.

It's usually, though exact standards vary, vehicular homicide if there is criminal negligence, which can include inattention while driving, especially if it results in a citable (on its own, before considering the fatality) violation of a traffic law that causes the fatality.


A vehicle-involved death is always treated as a potential criminal case.


Sure, the police always check for intoxication, etc.


What about Toyota's faulty acceleration?

My... My understanding was that you can totally have this be treated as criminal but that most families don't want to press charges? Because, well, all of it sucks.


If you charge the engineers criminally, the result is that they won't cooperate with investigations, and will do their best to cover things up.

With "no fault" investigations, the engineers work with the investigators to identify and fix the problems. The result is a lot better than the adversarial system.


We aren’t talking about a micro service going down, we’re talking about someone dying.

If that driver hadn’t been in a self driving car, they’d clearly be going to jail for looking at their phone while driving.


That's right, we're talking about someone dying. Is it better to have the problem fixed, or better to get revenge? In aviation, for example, some mechanics in Chicago were putting jet engines on using the wrong procedure. This resulted in a crack in the engine mount, the engine fell off during takeoff, resulting in a fiery crash where everyone died.

Everyone cooperated with the investigation. The mechanics were not criminally charged. The procedures were changed so it wouldn't happen again.

Under your proposal, the shop would have had every incentive to alter records, deny, obfuscate, and in general impede finding the truth as much as possible. Do you think that would have been better?

I know of no cases where mechanics, engineers, air traffic controllers, regulators or pilots were criminally charged in a fatal accident. The result is we have incredibly safe air travel. I don't know about you, but I'm happy about that result.


Has there ever been any back-stabbing after the engineers "cooperate" and volunteer potentially self-incriminating information?


> What about Toyota's faulty acceleration?

This is a great case to read up on. The testimony from experts in this case was top notch.

I believe the criminal charges in the case were related to the degree of deception Toyota engaged in with regards to unintended acceleration in their vehicles.

edit: Someone did a case study: https://users.ece.cmu.edu/~koopman/pubs/koopman14_toyota_ua_...


Toyota paid a huge fine and other penalties, but looks like the criminal charge has been dismissed: https://www.reuters.com/article/us-usa-toyota/u-s-judge-dism...


Families don't press criminal charges, the DA does. But if someone doesn't want to testify, a case can fall apart.


What faulty acceleration?

You mean https://en.wikipedia.org/wiki/2009%E2%80%9311_Toyota_vehicle... ?

> The most common problem was drivers hitting the accelerator pedal when they thought they were hitting the brake, which the NHTSA called "pedal misapplication.” Of the 58 cases reported, 18 were dismissed out of hand. Of the remaining 40, 39 of them were found to have no cause; the remainder being an instance of “pedal entrapment.”



So... the NHTSA started an investigation in 2010, and Toyota 'provided to the American public, NHTSA and the United States Congress an inaccurate timeline of events that made it appear as if TOYOTA had learned of the sticky pedal in the United States in “October 2009,”', when "In fact, TOYOTA had begun its investigation of sticky pedal in the United States no later than August 2009, had already reproduced the problem in a U.S. pedal by no later than September 2009, and had taken active steps in the months following that testing to hide the problem from NHTSA and the public."

But the NHTSA's 10-month investigation beginning in 2010 when Toyota had already informed them of the sticky pedals came to the conclusion that zero of the reported cases of unintended acceleration involved sticky pedals.

That's a case for lying to the government, and we see a judgment against Toyota for lying to the government. How is it a case for unintended acceleration?


Read the second line as well before drawing conclusions.


> An inattentive driver killing a pedestrian isn't typically treated as criminal, as far as I know

They certainly should be treated like a criminal. They used a deadly weapon to kill someone because they didn’t respect their responsibility.


A software developer fucked up. It could have been much worse. It's going to make our industry look worse if we aren't held accountable.


Nah, I'd rather not be held accountable for endless scope creep and 80 hour work weeks delivering a product I repeatedly called a pile of bugs


Engineers who are held responsible for their work and can go to jail when they get it wrong don't accept work conditions like you describe.

Software engineering might be a better profession to work in if it was regulated the same way other engineering fields are.


Makes you wonder if a governing body will ever come about - e.g. medical councils/boards, engineers' associations, etc. Presumably it's these types of problems that they came into existence for. It's a double-edged sword, but often works in the interest of the profession where I am. The biggest difficulty I'd see is that defining a software developer is not particularly easy.


It's generally considered a form of negligent homicide, which is a crime in the USA.


No, "ordinary" negligence justifies a charge of vehicular manslaughter, which has lesser penalties than plain manslaughter.

Even then, it seems like many states require "gross" negligence, like speeding or intoxication to charge with a crime. Someone who simply "didn't see the pedestrian" but was otherwise following the law will likely not be charged with a crime. They'll get sued, of course.


Criminal negligence statutes (I don't know Arizona law in particular) virtually always have a "gross negligence" requirement. A reasonable person would have had to have known that the action was likely to cause harm. It's not enough just to make a fuckup.

That just doesn't seem to apply here, absent evidence that we don't have (e.g. Uber knew the LIDAR was being ignored, something like that). This was just a software bug. You don't throw people in jail because they wrote buggy software.


The safety driver was texting. Virtually everyone knows if you're a driver you're not supposed to text and drive. Safety drivers are held to an even higher level of road safety knowledge, so they should understand that they're beta testing a literally lethal weapon, and take reasonable precautions, like staying focused on the road.


> You don't throw people in jail because they wrote buggy software.

That depends upon the application.

Too many programmers think "Meh. Bugs are no big deal." because their code doesn't interact with the real world.

The problem is that the useful code that doesn't interact with the world has basically all been written; code is now starting to interact with the world by default.

Hang on, programmers, this ride is about to get bumpy.


Do you have an example from anywhere ever of a programmer being prosecuted for a bug?


Behold, The Great Worm of yore! https://en.wikipedia.org/wiki/Robert_Tappan_Morris#Criminal_...

(I thought of the first computer-aided fatality, but the sources don't seem to mention any prosecution w/r/t Therac-25)


No, the Morris worm was criminal even if it didn't kill systems. The fact that it was accidentally killing systems surely changed the prosecutorial decisionmaking, but it wasn't relevant to the case itself. Morris was prosecuted for penetrating and abusing systems he didn't own, not for a bug in software that would have been non-criminal if it worked properly.


Which is it? You're saying we don't have enough details AND that it's just buggy software.


Upthread it was pointed out that the LIDAR manufacturer believes their system was operating correctly. Assuming that, it becomes a software bug more or less by definition.

But fine: let's say the LIDAR system was at fault. Do they go to jail, then? How about the bureaucrats who approved the test?

All this doesn't matter. I'm simply saying that ALL the facts in evidence point to this being a reasonable mistake, and reasonable mistakes don't qualify for negligence prosecution. If you want to argue to the contrary you need evidence, which doesn't seem to exist. Therefore the upthread question "shouldn't someone go to jail?" is answered with a firm "no".


How could you possibly know if the mistake was reasonable?


Because you're demanding jail time, and the burden of proof goes the other way. I don't need to know whether the mistake was reasonable, you need to prove it wasn't if you want to argue someone should go to jail.


To be horribly utilitarian - every week you set back the development of self-driving tech, you're 'killing' the future people who die in human-at-fault collisions during the delay. Counterintuitively, it is the most ethical choice to continue to accelerate self-driving car development even if it is sometimes conducted in an unsafe manner.


Other scenario: Waymo will beat Uber whether or not it is slowed, so any deaths prevented by stopping Uber are a net gain. For yours to be true, we have to believe that Uber is contributing to the future of self-driving tech. I think they will be perpetually playing catch-up to Waymo until their bankruptcy.


It would be utilitarian if we knew that self-driving tech was the best outcome out of all potential alternatives. We don't.


Not disagreeing, just curious: besides the obvious "drive it yourself", what are the in-the-works alternatives to self-driving cars? I'll admit I don't follow the space too closely.


I'm not in this crowd, but mass-transit advocates argue that the resources/attention poured into self-driving could be better allocated toward investment in mass transit infrastructure.


Mass transit infrastructure will not provide VC's with 100x returns.


This also assumes a good market of people who want to use mass transit vs something private.


Ehhh, I think given enough time and money just about any problem can be solved. Given what we know today, it seems pretty reasonable to say that we will achieve a car that can drive itself in any situation and not cause deaths...at some point in the future.


If everyone on earth gives me all their money and property and absolute power to rule, I will do so benevolently and in a way that maximizes the happiness of the world's population.

Therefore, every moment you spend not actively giving me money and property and power, and persuading others to do so, is a moment in which you are actively causing harm and suffering to billions of people who would have had happiness sooner if not for your delay.

You monster.


> Counterintuitively, it is the most ethical choice to continue to accelerate self driving car development even if it is sometimes conducted in an unsafe manner.

However, conducting self-driving car development in an unsafe manner is a good way of making people oppose self-driving cars, which leads to development of self-driving technology taking longer.


Who is setting back self-driving tech? Law enforcement who are trying to keep real people safe? Or Uber by being their cavalier selves about safety standards?


See, the issue is not just in utility. "Overeager operators disable safety checks for an experiment, death ensues, entire field of engineering suffers a major setback" has been seen before: have you heard of Chernobyl? (You have, thousands of times, just like anyone else from 1986 on) Yes, today's reactors are far safer and whatnot - yet the disaster is so ingrained that any discussion of nuclear power is practically destroyed just by bringing that name up.

"People behave rationally" is the greatest error assumed by utilitarianism, do not underestimate the reptile brain and its response to fear.


This assumes that self-driving cars will be safer. I think this is an unrealistic idea.

Look on the road some time. You see a lot of cars with dented bodywork, running one headlight, often poorly maintained, and depending on the make and model very cheaply made. Many of the cars may be ten years old, or older. They are often sold used, with limited warranty or dealer service.

And you really want to add self-driving to this? You think an 8-year-old Kia whose operator hasn't updated it because he can't afford the dealer service fees is going to be safer?


> This assumes that self-driving cars will be safer. I think this is an unrealistic idea.

I've seen this suggested a few times, and it makes me wonder if this is caused by religious beliefs or general pessimism.

The reason is that the only way it is an unrealistic idea is if we assume that human intelligence can not possibly be matched by a machine, and/or if we assume that progress towards general AI will be so slow that, for the intents and purposes of this debate, it will take very long to match it.

The only thing that will stop us from eventually matching human intelligence with machines is if there is some supernatural soul necessary to match it. Even then, for that to stop us, said soul would need to be a necessary condition for making self-driving cars safer than humans, which sounds even more implausible to me, given the many advantages self-driving cars can obtain:

Additional views of the road. Benefiting from accumulated knowledge from billions of hours of driving. Potentially wireless information exchange with the other automated cars in the vicinity (can't see through the fog very well? well, maybe the 10 cars around you can fill in blanks).

I think it's a totally unrealistic idea that self-driving cars won't get to a safety level where human drivers will be outlawed on public roads.


You completely missed the meat of the comment which was about how cars get old, worn out and damaged over time, and aren't always well maintained, and that this will apply to self-drivers too and affect their safety.


No, I didn't. There's no reason we can't build self-driving systems that are sufficiently hardened in terms of hardware that they will outlast the car, and/or with sufficient self-diagnostic ability to refuse to drive if they are degraded in any way.
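To make the idea concrete, a minimal sketch of such a "refuse to drive if degraded" gate - the sensor names and the check itself are invented for illustration, not anything Uber or any vendor actually ships:

```python
from dataclasses import dataclass

@dataclass
class SensorStatus:
    """Health report for one sensor, as a hypothetical diagnostic bus might emit it."""
    name: str
    healthy: bool

def preflight_check(sensors):
    """Return the names of failed sensors; an empty list means clear to drive."""
    return [s.name for s in sensors if not s.healthy]

sensors = [SensorStatus("lidar", True), SensorStatus("radar", False)]
failures = preflight_check(sensors)
if failures:
    # The point of the design: degraded hardware means the car refuses to start.
    print("refusing to drive, degraded sensors:", failures)
```

The hard part, of course, is not this gate but making the self-diagnostics trustworthy enough that "healthy" actually means healthy.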


Well that's alright then.


If we can make cars that actually drive themselves, I'm sure we can make them refuse to run if there are serious maintenance issues.


Ah, DRM will save us, you say? Today's farmers disagree - "make them refuse to run" will be misused by vendors, as it already is today. https://hn.algolia.com/?q=tractors+repair


That assumes Uber had a reasonable chance at advancing self-driving tech. Everything demonstrated to date makes them seem more like Theranos, and it would be difficult to find someone willing to argue they were advancing medical progress.


That amounts to "the ends justify the means." Some people go for that, some don't. Personally, I usually don't.


> That amounts to "the ends justify the means."

That, uh, is what "utilitarian" means.



That way eugenics lies.


But who would go to jail? The developer responsible for writing the faulty code? Or the QA team that missed it on review? The project lead who signed off and checked the code into the repo? The supervisors of any of these people? I'm really not sure how you apportion blame for something whose development is so widely distributed.


What about the 'safety' driver who paid no attention to the road until after the impact? The dash cam video showed them messing with their phone for the entire duration. They were legally in charge of, and driving, the vehicle. Much as I hate to throw the lowest member of the pecking order under a bus, this was absolutely their responsibility.


I'm confused why more people here aren't calling the driver out on this.

My understanding is that Uber knows these cars aren't ready and that's why they have a human at the controls.

I would have thought the driver would be instructed to maintain awareness of the road as though he were operating the vehicle and that looking away from the road in a manner that suggests he was using his phone would immediately relieve Uber of any responsibility.

Perhaps the collision still would have happened, but at least the driver would have the defence of "I was paying attention and meeting all the job-role requirements as safety-operator of the vehicle."


If the driver was paying attention but was unable to react in time, then I'd expect the same result as any other time that a commercial driver on public roads is involved in a fatal accident: Some kind of investigation, and if no fault is found, everyone shrugs and it's forgotten because shit happens.


The fault is with Uber because their software should have been able to use LIDAR to detect the person walking in the road.


Yep.


Typically the employee would be criminally negligent, and the employer would be civilly liable. Uber's $20B is a legal war chest, so this was fully anticipated.


You all keep calling it the 'safety driver', but for all intents and purposes it has been, since the beginning of this fray, the 'scapegoat driver'.


This is frustrating, and we've just begun seeing these fatal car accidents caused by self-driving cars.

These things and liability should have been decided before the cars were allowed on the road. But when Congress deregulated them at the federal level everyone hailed it as a good thing and completely ignored all the potential negatives of that.

Now, even if Uber is found guilty, it may get away with it, because there may be no law clearly attributing guilt to a self-driving car maker in case of accidents like these.


I think that by the letter of the law, liability would be strictly with the "safety driver" (aka scapegoat on duty). They were surely told to keep attentive all the time and - surprise - it did not happen.

The root cause, in my opinion, is allowing cars on the road that inevitably lull their nominal driver into inattentiveness.

There might be ways to keep a safety driver engaged so that the chance of attention failure is significantly lower than inevitable, but it doesn't seem like Uber had been looking for them very hard.


It's not a unique situation. Developers are responsible for software that kills.


This is idealistic, but generally, it should go up to the boss. Either the lead of the self-driving unit or the CEO of Uber. It's their responsibility to make sure the software is ready before testing it on the streets. They get paid a huge salary exactly because of the responsibility this entails.


Seems like there should be some kind of legal process for figuring this out. People have died from faulty software before.

(Someone went to jail for VW and there was no death directly as a result of the emissions cheating.)


There were dozens of statistical deaths caused by all that increased particulate pollution. Air pollution is not harmless and it does reduce people's lifespans.


Statistical deaths from pollution are weird, though, in that you are perfectly fine legally as long as you pollute within the accepted threshold. Driving 50 miles in a car that pollutes twice the allowed amount somehow makes you responsible for fractional statistical deaths in a way that driving 100 miles in a car that pollutes the allowed amount does not.

This observation in no way absolves the polluters, but the big picture would be incomplete without this perspective. We all have some blood from statistical deaths on our hands, some people more than others, and again some of them more than others because they broke rules we introduced to keep the deaths from running out of bounds.
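As a toy illustration of that asymmetry (the emission limit here is a made-up number, not any real regulatory figure): the two drivers put identical total pollution into the air, yet only one broke the rules.

```python
ALLOWED_GRAMS_PER_MILE = 0.05  # hypothetical per-mile pollutant limit

def total_emissions(miles, grams_per_mile):
    """Total pollutant emitted over a trip, in grams."""
    return miles * grams_per_mile

# 50 miles at double the limit vs. 100 miles exactly at the limit:
cheater = total_emissions(50, 2 * ALLOWED_GRAMS_PER_MILE)
compliant = total_emissions(100, ALLOWED_GRAMS_PER_MILE)

assert cheater == compliant == 5.0  # same grams emitted, very different legal status
```

The law draws the line at the rate of pollution, not the total, which is exactly why the "fractional statistical death" framing feels inconsistent.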


But VW was not polluting within the accepted threshold. They were very specifically defrauding regulators to try to get away with polluting far beyond the accepted threshold.


> (Someone went to jail for VW and there was no death directly as a result of the emissions cheating.)

That was an instance of deliberate fraud.


The developer’s boss(es) seems more useful in this case. Unless a single developer was grossly negligent, then throw them in there too, but even then their supervisors are on the hook for signing off on grossly negligent software.


Yes, the developer.


What if the developer had sent an email to their boss saying "hey I think this is a bad idea" and their boss pushed it anyway? What if there is an open bug report that the developer intends to fix, but management won't give them the free hours to do it?


Then it's up to courts to take that into account. This is not a completely alien situation on Mars; it decomposes into manageable problems, some of which have been solved previously.


I'd argue that prosecuting a developer because his employer used his code to kill someone would be comparable to prosecuting a gun manufacturer because a customer used their weapon to kill someone.


This is comparable to prosecuting the developer who works for a gun manufacturer when a new type of digital "safety" fails.


An equally fair comparison. I'm not a lawyer, this is not legal advice, and I'm really just spit-balling for discussion's sake and because I consider it interesting; please do poke holes in my reasoning.

Presumably the developer in this scenario is not responsible for guaranteeing to and/or misleading a customer that his code adequately renders a deadly weapon safe, more likely that responsibility falls on the managers whose project it was to implement such a digital "safety" and instructed the developer to write the code in the first place.

Assuming the developer did not write the code with the intent of bringing about a person's death, which might involve fooling his superiors as to the efficacy of his work, I'd wager he can't be guilty of a crime. Outside of that, I believe vicarious liability applies.

Perhaps he was a one-man department of this hypothetical gun manufacturer, implementing, deploying, and marketing his product himself and of his own initiative; I don't think there's any doubt he'd be liable in this case, the extent of which is dependent on his intention.

Perhaps our developer was self-employed as a contractor, and warranted to his client (the hypothetical gun manufacturer) that the code was safe; assuming the manufacturer used it as warranted, and it failed, I'd assume the developer would be mostly liable for the result.


Gun companies are a bad analogy because they are one of the only exceptions to having legal liability when their products kill people. Almost every other industry can be held liable if their product kills someone.


Ahh, I wasn't aware of that.

What about heavy machinery, like construction plant? Or power tools? Or common kitchen utensils? Would they satisfy the analogy?


Related, has anyone gone to jail for faulty aircraft surface control software?


Did you see the video?

Imagine if it was just a person at the wheel. Would a person have gone to jail for hitting that person, in the video?


The low quality dash-cam video that looks nothing like real life? Street lights and headlights cast more light than is evident in that video.

Do you ever drive at night? Then you should intuitively know that if it was really that dark, the headlights were off, or fog was present, or something else that would indicate a speed much lower than the posted limit was sensible (vs this car, which was speeding at the time).


That manipulated video does more to make me distrust Uber than the crash itself. They're showing what their true colors are, same as always. I don't trust them to do self-driving cars safely.


Pedantry: video is probably not manipulated ("has not been postprocessed"), but it is manipulative ("released with intent to manipulate audience"): I've seen similarly crappy, unedited videos from cheap dashcams. Indeed, it would make sense to buy the worst-quality scapegoat camera - you could then claim "see, it's really, really dark out there, nothing we could have done!"


The selective choice of what to release, even though they clearly have much better information, is what's manipulative. No disagreement there.


If the person knew they were there, yes. That's the case with the LIDAR.


It would be involuntary manslaughter at worst. Hard to prove, right?


Why? The car should have all the logs, shouldn't it? It won't be a 100% account of what happened, but it should be pretty close - unless neither Uber's lidar nor radar was functioning, in which case that could ironically save the company.


Would a human driver be prosecuted criminally?


In Germany, almost certainly (involuntary manslaughter).


If they had LIDAR capability, yes.


> Edit: It's disgusting you guys are down voting me over this.

Adding this is not a mature response to downvotes. I was indifferent to your original comment. I downvoted because of this addendum.


No.


Over what? Correct me if I am wrong, but the car had the right of way and the person illegally crossed. Regardless of any improvements in what the self driving car/driver could/should have done, it seems clear cut to me that the root cause lies with the person.


So reading this article [0], it seems as though both parties are evaluated for their actions and blame is assigned proportionally. So while the woman would probably get the bulk of the blame for jaywalking, the driver/Uber might also get some due to software error/driver inattention.

My guess for why the cops are not really going after Uber is that the woman was reportedly homeless, so she does not have any family to fight for her case.

[0]: http://www.alllaw.com/articles/nolo/auto-accident/pedestrian...


Just because you have right of way does not allow you to murder someone.


Murder? This was not a premeditated killing. It was an accident that everyone involved certainly wishes did not happen.

Although, you might be able to argue for involuntary manslaughter but I'm no expert.


I’m not sure about Arizona, but murder in common law doesn’t require intent, only that a reasonable person could have predicted the action would lead to someone’s death.


I know you're using hyperbole, but murder does imply a pre-determined intent to kill. At worst this is involuntary manslaughter.



