
I agree completely. As far as I can tell, the driver did not even have hands on the steering wheel. How hard would it have been to put sensors on the steering wheel to require both hands? They didn't even do that. Although even if they had, I agree with your statement that "[t]here is no way to make a human pay that kind of attention without actually driving the car."


Not difficult at all, and you can make them keep reasonable attention. Look at the new Cadillac driver assist: sensors in the wheel for hand placement -and- eye tracking. If the driver isn’t watching the road/holding the wheel, they get escalating alarms until the autopilot disengages.
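
For illustration, a minimal sketch of what that escalation loop might look like (all thresholds, responses, and sensor callables below are invented, not Cadillac's actual logic):

    import time

    # Escalation schedule: seconds of continuous inattention -> response.
    # Both the thresholds and the responses are made up for illustration.
    ESCALATION = [
        (2.0, "flash dashboard warning"),
        (5.0, "sound audible alarm"),
        (8.0, "disengage assist, begin controlled stop"),
    ]

    def monitor_attention(hands_on_wheel, eyes_on_road, respond,
                          poll_hz=10, clock=time.monotonic):
        """Poll (hypothetical) sensors, escalating until disengagement."""
        inattentive_since = None
        while True:
            if hands_on_wheel() and eyes_on_road():
                inattentive_since = None          # attentive again: reset
            else:
                now = clock()
                if inattentive_since is None:
                    inattentive_since = now
                elapsed = now - inattentive_since
                reached = [r for t, r in ESCALATION if elapsed >= t]
                if reached:
                    respond(reached[-1])          # highest stage reached so far
                    if reached[-1] == ESCALATION[-1][1]:
                        return                    # assist has disengaged
            time.sleep(1 / poll_hz)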

And that’s consumer driver-assist tech, not “we are experimenting with full autopilot” tech, where I’d think such safety measures would be even more appropriate.

This is a solvable and solved technical challenge. Uber just didn’t devote any resources to it because they don’t appear to give a shit beyond acquiring a legal fig leaf to shift liability from themselves to an individual.


Frequent, randomly scheduled disengagements should keep the driver quite on edge, preventing them from becoming a passenger. But each and every one of them would create additional risk, so the net improvement might be negative. There is just no way to get this right, short of being reluctant to push to scale. With all the hype, wishful thinking, and investor pressure, that clearly isn't happening.
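
To make the tradeoff concrete, a back-of-envelope calculation with entirely invented numbers (none of these rates come from real data):

    # Hourly risk added by forced handovers vs. risk avoided by keeping
    # the driver alert. Every number here is an assumption for illustration.
    disengagements_per_hour = 4        # assumed random-disengagement rate
    p_handover_incident = 1e-5         # assumed risk each handover adds
    p_inattention_incident = 5e-5      # assumed hourly risk, zoned-out driver
    attention_benefit = 0.6            # assumed fraction of that risk removed

    added = disengagements_per_hour * p_handover_incident      # 4e-5
    avoided = p_inattention_incident * attention_benefit       # 3e-5
    print(f"net hourly risk change: {added - avoided:+.1e}")   # +1.0e-05

With these made-up rates the scheme comes out net negative, which is exactly the worry: the answer hinges on handover and inattention rates that nobody has measured yet.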


I've been thinking about this for the last couple of days, and it's definitely a hard problem -- even with steering wheel sensors and eye tracking, nothing stops people from zoning out and not being ready to react.

I did wonder if you could require the driver to make control inputs that aren't actually used to control the car but are monitored for being reasonably close to how the computer is controlling it, with the automation disengaging (with a warning) if the driver is not paying sufficient attention. I then realised that may be _worse_: in the event of a problem, the driver would have to switch to real inputs that override, which may delay action and not be something they do automatically. It would mean they pay closer attention to whether the automation is making errors in situations where there's more time to react, though (e.g. a sensor failure causing erratic behaviour that hasn't yet led to an emergency).

I wonder if a hybrid approach might be viable -- fake steering is used to ensure that the driver is alert and an active participant, but the driver hitting the brakes takes effect immediately and disengages the automation.
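
A sketch of that hybrid, where every threshold and function name is hypothetical: the driver's steering is shadow input checked against the computer's, while any brake press overrides instantly.

    STEER_TOLERANCE = 0.15    # assumed max |driver - computer| steering delta
    GRACE_TICKS = 20          # assumed ticks of divergence before warning

    def control_step(state, driver_steer, driver_brake, computer_steer):
        """One control tick. Returns (steering command, still engaged?)."""
        if driver_brake > 0.0:
            # Brakes are a real input: instant takeover, no mode switch.
            return driver_steer, False
        # Driver steering is "fake": compared to the computer, never applied.
        if abs(driver_steer - computer_steer) > STEER_TOLERANCE:
            state["divergent_ticks"] += 1
        else:
            state["divergent_ticks"] = 0
        if state["divergent_ticks"] > GRACE_TICKS:
            # Driver isn't tracking the road: warn and disengage.
            print("WARNING: attention check failed, automation disengaging")
            return computer_steer, False
        return computer_steer, True

    state = {"divergent_ticks": 0}
    steer, engaged = control_step(state, 0.10, 0.0, 0.05)  # tracking: engaged

Note this only resolves the handover-delay worry for braking; a sudden steering emergency would still require the fake-to-real mode switch described above.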




