Driver who died in a Tesla crash using Autopilot ignored 7 safety warnings (washingtonpost.com)
58 points by _1 on June 20, 2017 | hide | past | favorite | 109 comments


From a UX (and potentially legal) standpoint, it matters less how many warnings he ignored in the few minutes prior to the crash than how many warnings he—and others—ignored on other drives.

By way of extreme example, if every time anyone engaged autopilot, it issued a warning every minute, you would quickly ignore them. Information fatigue.


> Since the crash, those reports have said, Tesla has updated its Autopilot feature to include a strikeout system, whereby drivers who repeatedly ignore safety warnings risk having their Autopilot disabled until the next time they start the car.

Looks like this concern has been addressed already.


Our new Chrysler Pacifica has a lane departure detection system that will nudge the steering wheel to always keep you in your lane. Combined with adaptive cruise control, it gets close to a rudimentary autopilot-ish system. I've experimented with it (without other traffic present, of course), and have discovered that not only will it beep and complain at me if I keep my hands off the wheel for more than a few seconds, but it will also disable the lane departure detection, just to make sure that I'm not using it for that purpose. That seems like a sensible approach for this level of autonomous system.


Interesting. How and when does it handle informing you that it’s disabled lane-departure due to your antics? Does it do this mid-drive?


The beeps are quite loud, and it displays the warnings on the screen in the instrument cluster—it's hard to miss. It also is advertised as an assistance feature, with no "autopilot"-type branding.


Somewhat, at least. It'd be nice if it did something like exponential backoff/decay on the number of ignored warnings it allowed per drive or per unit of time. Repeat offenders should probably be required to prove more rigorously that they're paying attention for the safety of the driver and others.
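
A minimal sketch of what that backoff could look like, assuming a hypothetical per-drive warning budget (the class name, numbers, and recovery rate are made up for illustration, not any vendor's actual logic):

    # Each ignored warning halves the remaining tolerance; the budget
    # recovers only slowly with clean driving time.
    class WarningBudget:
        def __init__(self, initial_allowance=8.0, recovery_per_min=0.2):
            self.initial = initial_allowance
            self.allowance = initial_allowance
            self.recovery_per_min = recovery_per_min

        def on_ignored_warning(self):
            """Halve the remaining tolerance; True means lock out autopilot."""
            self.allowance /= 2.0
            return self.allowance < 1.0

        def on_clean_driving(self, minutes):
            """Decay back toward the initial allowance while driving cleanly."""
            self.allowance = min(self.initial,
                                 self.allowance + minutes * self.recovery_per_min)

Repeat offenders burn through the budget after a handful of ignored warnings, while an occasional lapse is forgiven over time.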


The "addressing of concerns" seems to happen only when people die in accidents with Tesla. They never seem to be too pro-active about this sort of safety features, despite the fact that they like to tout Teslas as the safest cars.


...next time might be too late. I'm pretty sure BMWs will pull over if you ignore the steering wheel for too long.


One concern is also whether this is fair criticism or talking shit about a dead person. Any changes that came after the crash are completely irrelevant to that concern.


They're relevant because much of this could have been easily foreseen. Some of these safety warnings have been on other brands' cars since the features were first deployed. Tesla didn't include them and marketed the feature as 'autopilot' in a way that really seems kind of reckless to me.


> talking shit about a dead person

Being dead does not automatically command more respect. If someone's fuck-up caused them to die, there's nothing wrong with criticizing them for their fuck-up.


Did I say it is? I said that one concern is "whether this is fair criticism or talking shit". And for that question the state of the autopilot then is relevant, not any improvements afterwards. Speaking of autopilots, I'm really curious what thought processes, if any at all, led to such downvotes and responses. It doesn't get simpler than basic chronology, after all.


> Did I say it is? I said that one concern is "whether this is fair criticism or talking shit"

You said "whether this is fair criticism or talking shit about a dead person", which can easily be interpreted as saying you think people that are dead deserve an extra level of respect simply for being dead.

Obviously your message has been misinterpreted by multiple people.


My Subaru has loud alarms and beeps when you depart a lane or when the Eyesight tech is having issues due to really bad weather. It catches your attention. I think the key is for it to be sharp, cutting and to not stop until you correct the issue. Think of all of those space movies where something goes wrong and alarms are blaring for sensors and systems.


> beeps when you depart a lane

Which is super annoying when driving on narrow two-lane roads, since you're usually almost on top of the line when you're driving perfectly straight down the center of the road.


The feature should be disabled then. That should also be an option.


I'd love to read anything related to in-driving implicit communication channels. If anyone knows books, articles, ideas, or pioneers (cars, bikes, boats, planes, NASA), please share.


You're supposed to have your hand on the wheel even with Tesla's autopilot. But you're right; the correct way of enforcing this behavior is that if you disregard it for too long, it parks you safely on the shoulder.


There's really never a good place for a car to stop. The car is dependent on a human to choose that place. So there's really no way to present the user with an ultimatum besides disabling autopilot on their next drive.

edit: I hear the cars engage hazard lights and slow down and stop if one's hands are not on the wheel. I guess that's fine.


I don't want to nitpick, but parking on the shoulder is not safe. Don't do it unless it's a real emergency.


It appears that not having your hands on the wheel while autopilot is engaged could perhaps qualify as some kind of emergency, or at the very least, a risky enough situation to justify pulling over.


Agreed. "Safe" is not accurate; this is just the best of many bad options, and is what should happen if the car decides for any other reason that it is not auto-drivable.


Exactly!


Watching the Youtube videos of Teslas getting confused while on Autopilot made me immediately realize that the tech has a ways to go--there are roads that have odd paint or other odd features that confuse the car and make it choose a path that would get you killed if you didn't manually intervene in time.


I'm not sure I like the focus on the driver here. I do think he was a fool for trusting the device.

It genuinely sounds like the autopilot failed to prevent a collision that, whether rightly or wrongly, we would expect it to prevent before broadly deploying these devices.

A semi is a very large and pretty reasonably defined visual mass that intersected with the path of the vehicle.

What the autopilot did, and when it did it, seems to be missing from this round of reports on this incident, and that's pretty lame.

Edit- Notice there is no mention of a COLLISION warning being issued? Only nagging "hands on wheel".


Despite the terrible name, Tesla's system is supposed to be assistive; the driver is still supposed to be in the loop.

If the driver totally abdicated all involvement, the accident is his fault. He's the final safety system and it sounds like he didn't even try to do his job.


Names are important. Calling it "autopilot" implies that it does the driving for you, and people will take that at face value. It should be named in a way that accurately describes its abilities.

If it is actually just assistive, and isn't autopilot, then calling it such is incredibly dangerous. It'd be like calling homeopathic junk a "cancer medicine", which is illegal, mind you.


"Autopilot" in an aircraft doesn't imply it flies for you. And in an aircraft, there are multiple levels.

"Autopilot" is no worse than "cruise control" in this regard.


“Autopilot” is worse, not because of what “autopilot” actually means in aviation, but because of what “autopilot” suggests to the car-buying public, who are, generally, not aviators.


What does the car-buying public think those folks in blue suits are doing at the front of the plane?


Giving each other handskys!


No, I think many people just don't like the term and so decided everyone is confused by it. They're not.

It's not self-driving, it's autopilot. That's the contrast they're drawing, and it actually IS clear to the car-buying public. A car is the second biggest single purchase you're going to make in your life, so people aren't going to /not/ figure out what the term means before buying the car.


If the market is confused about the term and it's not clear in practice, it's still not a good term to use.

I definitely think it's worse than cruise control as you suggest. Cruise control systems control your cruising speed. Autopilot has way too many asterisks right now.


Pilots require years of specialized training. Airplane autopilots have been around for decades and you aren't going to have an issue with unskilled pilots attempting to rely on them for things they can't handle based on a name alone.

Also, airplane autopilots do fly the plane for you (including take-off and landing). They can certainly do so more safely than Tesla's "autopilot" does. A lot of this has to do with the aviation environment being much more controlled.

This whole comment thread seems like a distraction in the form of a non sequitur, though. If Tesla is falsely claiming capabilities that their system does not actually possess right in the name, and that causes deaths, then they could easily be liable, and at a minimum they need to stop.


There is no evidence that Tesla owners are getting misled by the name.

This very article describes a situation where the name of the feature doesn't seem to have played any role in the accident.


This is a case where bad word choice in marketing could literally kill people.


Agreed, with a couple caveats:

1. Any system that abstracts away human control bears some fault in failures of the human-machine system. Perhaps Tesla can get out of this legally through some fine print, or even ethically because the driver bears the final responsibility in the system. But they ought not design a system that is pleasant and convenient to use irresponsibly, which is what they have here.

2. Autopilot is supposed to handle issues like this. The Ars article says:

> ...a Florida highway crash when the Tesla he was driving struck a tractor-trailer as the semi was crossing an intersection of a divided highway that did not have a traffic signal.

So what if the intersection is unmarked. So what if the semi pulled out in front of him. So what if he ignored warnings. Autopilot should have seen the hazard and braked.


That's also the fundamental problem with all existing forms of autonomous vehicle AI. Google was the furthest along with Level 4 and 5 autonomy but even they have backed off that development track as they couldn't muster the resources (a massive fleet collecting data) necessary to make the next leap in progress.

Anything less than Level 4 autonomy will require constant driver attention. While there is technically Level 3 autonomy, the premise of a system which can operate with the operator intervening within some "limited span of time" in an emergency is fundamentally flawed. The situations in which an AI will have to kick the decision making back to the human driver are inherently time sensitive. Any sort of time-sensitive response from the driver would have to occur in such a short window that one could never safely remove their attention from the driving process for any meaningful amount of time. Even a time window as short as 5 seconds is too long to react to a situation at highway speeds (the car would have traveled 440 feet if it's at 60mph).
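
For reference, the arithmetic behind that 440-foot figure, as a back-of-the-envelope check (the second speed is just an added example):

    # Distance covered during a handover window: 1 mph = 22/15 ft/s.
    def handover_distance_ft(speed_mph, window_s):
        return speed_mph * 22.0 / 15.0 * window_s

    print(handover_distance_ft(60, 5))  # 440.0 ft, as stated above
    print(handover_distance_ft(75, 5))  # 550.0 ft at faster highway speeds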

My point is that this is an existential problem in the field of autonomous vehicles to a point that even the most ambitious players are admitting defeat. Most, if not all, the commonly discussed benefits of an autonomous car (all the things you can do with a comfy seat and an internet connection in the time one spent driving, also no worrying about DUIs) are unachievable with Level 0-3 technologies. They might make driving safer if they're implemented as a fail-safe for bad drivers but if they're allowed to take full command for any period of time, they are fundamentally unsafe and undesirable in terms of what the public is demanding from the promise of a truly driverless car.


I am going to disagree there. Tesla names its system Autopilot. It uses the fact that an autopilot that includes collision avoidance reduces accidents by 40%, and then claims that their system is safer than a human (an assisted human is safer than a human, genius!). Elon Musk has a statement every month about self-driving autonomous driverless cars. They even have the gall to say that new cars have "full self-driving hardware"; is proof of outrageous statements even required? And all this hype/misleading advertising is fine because Tesla showed a few warnings? And the driver didn't have sufficient reason to believe the car was capable of complete autonomy? A simple Google search will tell you that this driver wasn't the only one.


> What the autopilot did, and when it did it, seems to be missing from this round of reports on this incident, and that's pretty lame.

I am pretty sure this is because it has been reported on pretty thoroughly.

https://www.tesla.com/blog/tragic-loss

https://www.nytimes.com/interactive/2016/07/01/business/insi...

The general sense is that the system is designed to apply the brakes when both the radar and the camera system agree there is an obstacle (this may no longer be true, since there were indications that decoupling the two was being worked on).

It has been suggested that the white trailer against a bright blue sky didn't register on visual detection. Therefore there was no consensus between radar and visual that there was an obstacle, so the brakes were not applied.
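
If that reading of the reports is right, the gating amounts to a simple AND across the two sensors. A toy sketch, not Tesla's actual code (the detection inputs are hypothetical stand-ins):

    # Consensus-gated braking: only fires when BOTH sensors report an
    # obstacle, so a camera miss (a white trailer against a bright sky)
    # vetoes a radar hit.
    def should_brake(radar_detects, camera_detects):
        return radar_detects and camera_detects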

> we would expect it to prevent before broadly deploying these devices.

And this is why there is focus on the driver. What is reasonable expectation? Are there adequate warnings and documentation about the capabilities of the system? Autopilot is not marketed as an autonomous driving system...it (at the moment) explicitly says drivers should continue to monitor the system to manually intervene if required.

Furthermore, NHTSA research suggests as much as a 40% crash reduction since the wide introduction of Tesla's Autopilot mode:

https://techcrunch.com/2017/01/19/nhtsas-full-final-investig...

This is a single accident...and arguably the driver was the final safety gate and they failed to apply corrective action (in addition to the automated safety systems). It does not appear as if the driver had sufficient reason to believe the system was capable of complete autonomous operation. And it does not appear as if there was a critical flaw or bug in the system that was putting consumers at risk.

And just briefly going over the items in the roughly 500-page report, there does appear to be a lot of detailed examination of the systems.


Awesome reply, thanks.

I'm actually even more bothered by this.

This looks like a super common scenario people in cars face daily.

I acknowledge the technical obstacles to "wide view" forward sensors, but wow, this system seems very, very limited.

""“The limited time the target is in the field of view prior to impact challenges the system’s ability to perform threat assessment and apply the CIB system. A target is usually recognized very late or not at all prior to impact.”"""

It seems like the FOV of the sensors is not adequate.

Shouldn't the system also be looking backward to assess the danger of sudden braking?

Sensor fusion should resolve ambiguity via probability... one line in the reporting suggested it needs both systems to "agree", which is binary; this is probably just reporter confusion.
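
For what it's worth, a probabilistic fusion would combine per-sensor confidences and brake past a threshold instead of requiring binary agreement. A minimal sketch, with made-up confidence values and threshold:

    # Naive independent-sensor fusion: probability that at least one
    # detection is real, rather than an all-or-nothing AND gate.
    def fused_probability(p_radar, p_camera):
        return 1.0 - (1.0 - p_radar) * (1.0 - p_camera)

    BRAKE_THRESHOLD = 0.9  # made-up tuning value
    print(fused_probability(0.95, 0.3))  # 0.965, above the threshold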

This system doesn't seem ready for anyone younger than 21, full stop.

If it doesn't radically improve it's going to have many more problems as it gets in the hands of a wider range of drivers beyond affluent people in middle age+.

These systems are "lane keepers" and really the marketing and driver education should be tightly focused around what they DO NOT do.

I genuinely look forward to fewer humans in control of cars, but this is going to be a rough road.


There has been discussion about the limitations of the system, including a lot about how a lidar system would have mitigated this. I think this underscores the fact that drivers must view the Tesla system as a collection of safety features rather than a fully autonomous system.

And I think you are correct that there is driver maturity required in order to exercise restraint/use the system responsibly given this and its potential abuse.

But a 40% decrease in crashes involving Teslas after the wide deployment of the Autopilot safety features would still seem to indicate (to me) that the overall safety of both Tesla drivers and the public around them is improved by its existence and availability.

> If it doesn't radically improve it's going to have many more problems as it gets in the hands of a wider range of drivers beyond affluent people in middle age+.

This is a very interesting and astute observation. While I question whether or not the current driver sample/population is materially more or less responsible than those that would have access to the upcoming more affordable Model 3, it is something worth monitoring (by the NTSB).

At the end of the day this is an evolving area. While this incident highlights the limits of Tesla's system, it is only one and there is evidence to suggest that overall safety is improved despite its limitations.


I believe Tesla rolled out an update shortly after the crash that fixed the 'bug' of it not recognising the white trailer against the blue sky.


> It does not appear as if the driver had sufficient reason to believe the system was capable of complete autonomous operation. And it does not appear as if there was a critical flaw or bug in the system that was putting consumers at risk.

I am going to disagree there. Tesla names its system Autopilot. It uses the fact that an autopilot that includes collision avoidance reduces accidents by 40%, and then claims that their system is safer than a human (an assisted human is safer than a human, genius!). Elon Musk has a statement every month about self-driving autonomous driverless cars. They even have the gall to say that new cars have "full self-driving hardware"; is proof of outrageous statements even required?

And all this hype/misleading advertising is fine because Tesla showed a few warnings? And the driver didn't have sufficient reason to believe the car was capable of complete autonomy? A simple Google search will tell you that this driver wasn't the only one.


You're saying this is a matter of "driver shaming"? Could be, especially given the stakes for Tesla. It would be helpful to know exactly what these warnings look/sound like, because as a previous poster noted, many drivers learn to ignore warnings if they're persistent, such as a check engine light (or "low washer fluid", which I receive in my BMW, and whose sound is identical to that for more serious issues.)


There is no mention of a "collision" warning, is there?

That is very disturbing and why I'm bothered by this round of reporting.

The warnings seem to be just "hey, you aren't supposed to be hands-off for this long", which, as others point out, are just the type of notifications that people start to ignore as they grow more comfortable with technologies.


>Could be, especially given the stakes for Tesla.

This is a really important thing to remain vigilant about - Tesla and other self-driving car developers have an absolute incentive to blame/excuse/deny when it comes to fault for any autonomous vehicle accident.

My money is on us seeing a LOT of 'this dead driver was a total idiot' and 'this totally crazy thing happened that the autopilot could never have predicted and 100% of human drivers would have failed too'.

The hard truth is a lot of people are going to die in the process of self-driving vehicles reaching maturity. This truth is not politically tenable for any party to admit.

The real risk in this pattern of accidents followed by blame avoidance is that valuable lessons might not be learned from incidents. The aviation industry accident investigation process went through a very similar early life and many lives were lost unnecessarily before it matured into the robust, root-cause-driven, appropriately powerful system it is today. That was what it took for aviation safety to reach what I think most would agree is a suitable level of governance, and TBH I think autonomous vehicles are going to need to go through this same process and will hopefully get to a similar level of maturity without the loss of too many lives.


> The hard truth is a lot of people are going to die in the process of self-driving vehicles reaching maturity. This truth is not politically tenable for any party to admit.

Is it acceptable for reasonable, rational people? A lot of people die today in auto accidents. A lot of people will die in the next 50 years as self-driving cars are developed. But I expect that fewer people will be dying in auto accidents 50 years from now. Does that make tragedies during development an acceptable sacrifice?


Not gonna go there personally in terms of judging what level of human tragedy would or would not be acceptable for future human lives saved.

However, the concern here is not (primarily) that someone died, it's that the reasons they died are not likely to be fully and impartially assessed, and all possible lessons learned from their death, because nearly everyone involved with any power is incentivised for that to not happen.


I think Tesla rolled it out too soon and marketed it as something it is not yet capable of and people died because of it.


Sounds like he either committed suicide or was unconscious/disabled before the accident.

An autopilot shouldn't just hand over control to the driver and pray when it needs help and the driver is unresponsive; it should try to come to a stop on the side of the road, then shout for help. Shouting via a cell call to 911 (or country-specific equivalent) wouldn't be a bad idea.

"This is a Tesla P85. My driver is unresponsive. The car is stopped on I-95 southbound, 2.1 miles south of exit 4 past I-295. The car is red. Please send help. This message will repeat three times."


But this wasn't a disengagement of the autopilot; he just didn't have his hands on the steering wheel.

Not to mention that you don't actually need to have autopilot running to get the emergency braking feature. Autopilot or not the car should brake before going full throttle into the back of a semi.

For example: https://www.youtube.com/watch?v=eD89Fc_ofXc


It's not a terribly informative video, as one can't tell what the car did and what the driver did. Did the car or the driver steer around the car in front? Who or what operated the brake first? Did the driver have to specify the degree of braking, or was that all down to the vehicle? Was the driver watching the car in front of the vehicle he was following, or did the vehicle do that? What, if anything, did the vehicle react to? At what point does the driver receive control back from the vehicle, if it did take over, and was it full or partial control, and up to what point?

None of that appears in the video AFAICT?

Extra credit: if the vehicles in front were attempting a carjacking is it possible for the Tesla driver to floor it and push the car in front out of the way?


Description says autopilot was disabled, and the quick successive beeps you hear are the emergency braking system detecting an impending collision and braking. The driver is steering (and was before; no autopilot).

This is how emergency braking works in every car that has it.


Apparently they added a feature to do this after the crash.


That reads like a Bruce Sterling novel.


The article mentioned how newer Tesla vehicles have a "strikeout" system where the autopilot software disables itself until the next car startup if the driver repeatedly ignores warnings.

While this is definitely an improvement, how does that actually work? If the driver isn't paying attention, how do the newer vehicles force the driver back into paying attention and back to taking full control of the car? This seems like a really hard problem, and the article doesn't really dig into it.

Related: https://en.wikipedia.org/wiki/Dead_man%27s_switch
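
For illustration, a rough sketch of how a strikeout escalation in the spirit of a dead man's switch might be wired up (the strike limit and action strings are guesses based on the article, not Tesla's actual parameters):

    # Escalating dead man's switch: repeated ignored warnings end in a
    # strikeout that disables autopilot until the next ignition cycle.
    MAX_STRIKES = 3  # assumed limit, not confirmed

    def handle_ignored_warning(strikes):
        strikes += 1
        if strikes < MAX_STRIKES:
            return strikes, "escalate: louder chime + instrument cluster warning"
        # Past the limit: lock out autopilot for the rest of the drive and,
        # if nobody takes over, turn on hazards and slow to a stop.
        return strikes, "strikeout: disable autopilot, hazards on, slow to stop"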


The vehicle starts to slow down (eventually coming to a stop) and turns the hazard lights on.

Here is a video: https://youtu.be/UiNDVdmF9ws?t=47s


While some (elsewhere) are arguing the system could have gracefully handed back control by slowing down to an eventual stop, it's pretty clear the driver was at fault. It's also not clear how the autopilot could have done better - is slowing down on a high speed road any less dangerous? It needed a human in the loop.

The evidence was that the driver was wilfully ignoring the safety warnings, just like ignoring a fence or sign above a steep cliff. That increased the fatal risk to him and other road users. Any other (less litigious) country would immediately place the fault where it clearly lies.


> it's pretty clear the driver was at fault

It's pretty clear both that the driver is primarily at fault and that Tesla's poorly designed system (as it existed at the time) compounded the driver error.

> is slowing down on a high speed road any less dangerous

Yes, slowing down gradually and pulling over is safer than careening down the road with zero input from the human driver, who, as you admit, needs to be "in the loop".


Let's forget the monthly Musk talk of self-driving cars, or Tesla's claims of "fully self-driving hardware". Let's assume that Tesla didn't even have the misleading name of Autopilot and merely advertised the assistive technology separately.

Would you be fine if a car with collision avoidance ran full speed into a collision? Would you be fine with a company selling an incapable system as a safety feature and face no responsibility because they mentioned somewhere that the system can fail?


I know they changed the software somehow (although I don't know exactly how).

It seems like, if the driver isn't interacting and following the rules (like keeping their hands on the wheel), why not slow the car and have it pull over with hazard lights on?

My car's systems (though not as advanced) will simply turn off if you're not doing your part.


They did. From the article: "Since the crash, those reports have said, Tesla has updated its Autopilot feature to include a strikeout system, whereby drivers who repeatedly ignore safety warnings risk having their Autopilot disabled until the next time they start the car."


That's good. Like I said I didn't know the exact change they made (WaPo is paywalled, I read the Ars article).

Thanks.

As long as they are so advanced having the system eventually pull over seems like a good idea. Turning the system off won't work great if the driver falls asleep and the beeps don't wake them. At least if it pulled over they'd be safe. Probably a really hard thing to do though.


It doesn't pull over yet, but if you repeatedly ignore warnings it will put on flashers and slowly pull to a stop.


Really? Excellent. That's most of the way there.


I'd love to compare the results of this final analysis with the FUD that I remember coming out on Reddit et al. following this crash, with a series of people saying 'I told you so'.

Complex systems of risk mitigation tend to get pushed in the media as an ideal solution to these accidents well before the complete facts come in. Common-sense precaution is typically ignored in the culture of risk-averse superiority that floats to the surface after these events, and that culture would ultimately result in very few of these experimental, innovation-driving products being tested (as is clearly the case in risk-averse cultures such as Japan and Switzerland).

With any early adoption of automation whose success is partially dependent on human intervention (e.g., flying planes), stories of failures, of knowing when or when not to listen to the warnings provided by the interface over your own intuition, are helpful not only in training but also in the design of these systems.

To not expect a certain degree of these types of failure situations where the system largely acted as intended is naive and ultimately defeatist.

I hope the drivers realize the risk they are engaging in here and make appropriate behavioural changes as a result, but I can't imagine many formal systems that would practically be effective at deterring these situations, other than maybe adapting the educational training and the content of the warning systems.


As someone who doesn't own a car but is often a cyclist or pedestrian, I don't want to be reliant on drivers' (potentially lacking) common sense for my safety.


That's why when I bike daily in Toronto I take full lanes where it's obvious there isn't much room for bikes on the side or cars are turning right.

I also pass on the left when cars will clearly be turning right on a green or red light.

I'm amazed that this is a rare thing for a biker to do, when it should be common sense, but it's always best to design systems for the lowest common denominator. Still, I'm amazed so many bikers put so much trust in other cars and assume drivers are looking into their mirrors when they turn or merge.

The best advice I've heard as a cyclist is that it's safest to assume no driver is paying attention and act accordingly. This is common sense precaution.

When it comes to designing automated driving systems, there is also the safe assumption that common sense (from other people) can't be expected, to a certain degree, so prepare for edge cases.

But in this particular case I'm not sure what you can do beyond running those 7 warnings before the crash. All I said is that the language and severity of the warnings could be tested in experiments, along with reconsidering how the efficacy of these alpha systems is marketed to early adopters. Basically, eliminating overconfidence in existing oversight mechanisms.


This was the crash where the Tesla drove at full-speed into the back of a trailer yes?

I don't see how the "7 safety warnings" are relevant here; that was just the car's reminder that he should put his hands back on the steering wheel. It does not mean the car detected it was losing vision and would stop steering the vehicle shortly. The safety warnings had nothing to do with the Tesla failing to recognize the obstacle ahead and brake.

I don't think the future looks good for autonomous driving if the NTSB is going to accept an explanation that there is no vehicle failure here because it gave some regular reminders to put your hands on the steering wheel.


Hands on the steering wheel are a proxy for paying attention. That's really the key point of these "assist" features: you need to be paying attention just as much as you would be otherwise. Unfortunately, this reduces their effectiveness to being a backup.


That's great, if you're looking to take someone's license away. That is not what the NTSB does. The assist feature failed to recognize the obstacle ahead and, worse yet, it failed to recognize its own failure. This is what the NTSB is investigating.

The autopilot did not disengage. If it disengages and tells the driver to take over immediately, then hey, maybe (big maybe) we can accept that this was not a technical failure and was solely the driver's fault. But that didn't happen.


At no point should the system have to "[tell] the driver to take over immediately", because the driver should always be in control. That is the point of the assist technology: to act as a backup for driver error. The driver is not acting as backup for the autopilot error.


Until we get to real autonomous cars, we'll probably have more issues with people and their hubris. My car has a bunch of these features, and I have them only in case of messing up. They are redundancy. Nothing more.

Some people are way overestimating what these systems can do, or are paying less attention because they think the car will correct for their mistakes. We're going to be in for a rocky few years or so where these advanced systems may lead to more crashes from inattentive drivers.

Having used these various technologies, I can firmly say that I can't wait for fully autonomous cars.


Mine does too, but instead of calling it 'Autopilot' it's 'adaptive cruise control' and 'lane keep assist'. The names make it clear the car is not driving itself (although it's clear to me it's close to capable for the simple case).

The car makes it VERY clear you MUST steer. The car mostly keeps itself in the lane but is a little bit unsmooth about it (I assume to encourage drivers to do it themselves), and it will yell at you loudly if you go more than 10-15 seconds without steering yourself, then disable itself.

It's a great feature and helps a ton with high crosswinds... but it's very clear it's not meant to be relied upon to drive the car.


> My car has a bunch of these features, and I have them only in case of messing up. They are redundancy. Nothing more.

Thank you for applying a healthy strategy to not killing people with your car.


What you're describing seems like undue faith in the technology to do things that it's not yet mature enough to achieve. That has roughly zero to do with hubris.


Maybe the biggest problem is Tesla calling it autopilot. Just call it lane assist or adaptive cruise control. I know many of the cars will stop working if you take your hands off the wheel too long.


But their plan is to slowly evolve it into a full autopilot feature. I think Elon says they're demoing a CA->NYC no-hands trip sometime in the next year.

Maybe they could have branded it differently early on?


Call it autopilot when it works like an autopilot, but right now it really doesn't do all of that.


How about "co-pilot"? It assists the "pilot" rather than usurping them.


The crash had nothing to do with the driver ignoring autopilot warnings. For some reason the automatic emergency braking feature of the Tesla did not work. This is a serious problem and AEB should be completely separate from autopilot. What is the point of a safety feature which is supposed to stop you from crashing if it doesn't activate? In this case the driver appears to be an idiot who wasn't paying attention, but imagine if he had a seizure, fell asleep, got cut off and brake checked, or something else?


To be fair, a vehicle coming across you at a crossroads is hard to judge the speed of, hard to see in time, and hard to decide what to do about. If you brake, will it slam into the side of you? Do you need to floor it, or maintain speed? What if you're in amongst traffic and can't see the vehicle (visually or via tech assist)?

Is there other info you're using to come to your conclusions here, the article doesn't seem particularly informative about the crash (location, circumstances, etc.).


From the NTSB Report (https://dms.ntsb.gov/public/59500-59999/59989/604694.pdf):

> In concluding the interview, the witness offered the opinion that both drivers had sufficient time to have seen the other vehicle and should have been able to slow or stop sufficiently to have avoided the crash.

The full witness statement is on page 19. It sounds like the truck was making a left turn across the Tesla's path and according to the witness they both had enough time to see the other car - the truck was moving slowly and the Tesla took around three seconds to reach the intersection from the time when the witness first saw it.


Worth noting that the driver seemed particularly enthusiastic about autopilot, to the point of frequently recording his experiences and publishing them on YouTube:

https://www.nytimes.com/2016/07/02/business/joshua-brown-tec...

> CANTON, Ohio — Joshua Brown loved his all-electric Tesla Model S so much he nicknamed it Tessy.

> And he celebrated the Autopilot feature that made it possible for him to cruise the highways, making YouTube videos of himself driving hands-free. In the first nine months he owned it, Mr. Brown put more than 45,000 miles on the car.

> “I do drive it a LOT,” he wrote in response to one of the hundreds of viewer comments on one of his two dozen Tesla-themed videos. His postings attracted countless other Tesla enthusiasts, who tend to embrace the cars with an almost cultish devotion.

That he was a Navy SEAL who specialized in electronics and bomb defusal probably also gave him additional confidence/hubris when driving.


Why does autopilot continue to drive the car without someone holding the steering wheel in the first place? If the car can't drive itself without driver input, then why isn't letting go of the steering wheel the equivalent of disengaging autopilot?

Tesla's technical hubris could have killed more than just the driver.


https://www.tesla.com/blog/tragic-loss

I think Tesla's original statement is relevant to post here. Although likely somewhat biased, it provides a succinct explanation of why the Tesla did not brake automatically to avoid the collision.

From the statement: "Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S."


Has anyone driven one of these using Autopilot?

It strikes me as an eerie feature in that semi-autonomous means you're not controlling the car, but at any moment you may have to jump in, generally unaware of the forces the car/tires have been experiencing.

It just seems like a major mindset shift from mostly idle passenger to active driver and offhand I feel like until the car can reliably drive itself I'd rather just drive the whole way.


1. I think any autopilot system which makes users keep their hands on the steering wheel and pay equal attention is totally useless. In fact, it's detrimental, since it gives a false sense of control and naturally leads users to lapse.

2. Anyone who has read Don Norman knows this is usually the fault of the design of the system, not the user. The system needs to do more to assist users in these edge cases.


1. Yep, which is why they never should have named it that. That was a very dangerous decision.

2. No, it's the fault of the marketing and system designers for giving users the impression it was a 'true' autopilot (see #1)


For a high tech vehicle with so many safety features built in, I don't understand why an automatic collision avoidance system didn't override the autopilot system and start braking well ahead of this high speed impact. Maybe this model doesn't have such a system, or the traffic conditions changed so rapidly there was no time left to brake?


This again brings us back to the fundamentals.

Do you trust your self-driving car's AI algorithm? Yes? Sure, let's remove the car's controls (accelerator, brake, and steering wheel). And then safety and other liability is on the car manufacturer.

If you don't have this level of trust in your algorithm, you don't really have a self-driving car.


Our family is looking at getting a Tesla, and my wife brought up this incident. I told her that Teslas are not fully capable of driving themselves and that the guy was probably not paying attention to what he was doing. I'm sad this guy lost his life, but ignoring safety warnings, for whatever reason, earned him a Darwin Award.


I am going to disagree there. Tesla names its system Autopilot. It uses the fact that an autopilot that includes collision avoidance reduces accidents by 40%, and then claims that their system is safer than a human (an assisted human is safer than a human, genius!). Elon Musk has a statement every month about self-driving autonomous driverless cars. They even have the gall to say that new cars have "full self-driving hardware"; is proof of outrageous statements even required? And all this hype/misleading advertising is fine because Tesla showed a few warnings? And the driver didn't have sufficient reason to believe the car was capable of complete autonomy? A simple Google search will tell you that this driver wasn't the only one.


Ars Technica has an article that's not paywalled:

https://arstechnica.com/tech-policy/2017/06/tesla-model-s-wa...


Yes, and you pay in the form of giving away your data to advertisers.

To be sure, WaPo also sells data to their advertisers. At the same time, WaPo has broader coverage and probably needs a larger newsroom as a result. Ars Technica is a subsidiary of Conde Nast, and Conde Nast as a whole has paywalled publications (like New Yorker).

In general, I think it's perfectly fine to promote paywalled content on HN so long as the content itself is good. We have to fund quality reporting/writing, and HN can serve as a curation mechanism to surface "content worth paying for".


I tend to see 1-2 articles a month I want to read from WaPo, but it's about $50/year for a subscription. And many of those articles aren't much more than what other sites have (because they're not deep investigative features).

I'm not interested in paying about $3.33/article.

When they do big investigations I'd be happy to pay $1 or $2 to read it.

This? No.


You might want to try subscribing to WaPo through Apple News or Google Newsstand. They have monthly quotas on the number of articles you can read.

While the principle of "per article" pricing is appealing, it probably won't work in practice: generating content worth paying for is not like spinning up servers, because the cost of production occurs upfront _and_ has a non-zero marginal cost.


I'm not surprised the pay-per-article thing has never taken off.

I've never used Apple news, I'm kind of hesitant to. I already track the sites I care about through RSS and see many of the same things through Twitter first. I don't really want to start using a third thing that overlaps with the other two.


I agree. The quality of the journalism from many paid websites is not very deep, and then I'm taking a risk of buying into a particular political slant or ideology. I really like Bloomberg's reporting lately. They've been writing deep, well-researched longform pieces and leaving the political ideology out.


There's enough good stuff coming out of the New York Times (and I like the Crossword) that I looked into subscribing to them.

At $50 a year or so I would be quite happy to. But it turns out it's a couple hundred. That price is just way too high for the frequency I want to read their content.

So instead if one of these places does really good journalism I end up writing a summary of it on another site.

Now that Apple Pay on the web exists, I've been hoping more places would allow access to a single article for one dollar, but it hasn't happened (I assume due to the transaction fees).


Ya know, I'll walk back that comment slightly and say that I have seriously given thought to subscribing to the WSJ.


They're in the right ballpark for pricing compared to NYT, I just don't think I'd get enough use out of it.


But I am not going to pay for one article that I happen to come across on HN.


Some people just want to see the information without whipping out their credit card, not get into a huge red herring about which economic model we should support.


While this comment is off-topic or meta, I think this is an important point about how to view paywalls. If paywalled articles are being shared and voted for, that's a signal to how valuable the content behind the paywall is. If paywalled articles are always replaced, we will not have this signal.


I don't know. I posted basically the same article from Ars Technica. But I'm guessing this one got posted first and got more traction, so it hit the front page, where it got even more traction.

Once the article is on the front page people posting the same thing from other sources are never going to get voted up.

I think this is more of a first mover advantage thing. This is based on a government report, it's not a deep investigative piece.


Ok, I agree with you on the specific, but my point was directed at the general case where there is some value added by one article over another.


This is why I use Brave. It blocks all advertising and silly paywall scripts.



