
I have to agree. Just like a normal camera has issues in low light, it is clear that this camera is understating how well-lit the road ahead was. While I can't say confidently that I would have been able to stop in time to avoid hitting them, watching the video in full screen does lead me to believe that I would have seen them and been able to apply the brakes at least enough to reduce the impact. Also, watching the interior video it is clear the driver was looking at a phone or doing something else just prior to the impact. This alone leaves me skeptical about how much could have been done to prevent this accident.


This is pretty much the experience I have with my dash cam, a Yi. In its recorded video, its automatic exposure control makes it look like everything outside the headlight cone is pitch black, but it actually isn't. I have seen deer and possums by the side of the road, and debris etc., that did not show up when I later checked the video for the same period. There is enough spillover light from modern headlights that a human whose eyes are dilated and adjusted to dark conditions will see a pedestrian standing on the median, stepping off it, and crossing the inner lane towards the car's current lane. More than enough time to begin to brake and possibly swerve. I have dodged animals in situations similar to this.


Yep, the exposure control / sensor quality of the dash cam in the video was rubbish. My own Blackvues produce far, far better results than that. Just look at how nothing is illuminated by street lights; this clearly has the effect of making the poor rider appear "out of nowhere". Also agree it appeared the driver was on a smartphone most of the time, thus not in control of the vehicle, and thus had no business being on the road, as these are systems UNDER TEST.

If that's the best Uber can produce then they ought to hang their heads in shame. Unless it was doctored... as I find it hard to believe they'd put such rubbish-quality cameras in their trials.


Do you trust Uber to provide all the data, or would they selectively produce data favorable to them?

Do you trust Uber to provide unedited raw video, or would they process it to increase contrast, make it appear that nothing was visible in the dark areas of the frame, reduce the resolution, drop frames, etc.?


It's funny how the internal camera which shows how distracted the driver was has way better night vision than the external road camera...


The key here is contrast; plus, an IR light at 2 feet works great, at 60 feet... not so much.


The internal camera (let's be honest and call it the scapegoat camera, because that's the only practical use for human "safety drivers" when they are not permanently engaged) must take almost all its light from IR, because we don't see anything of the smartphone screen glare that the eye movement so clearly hints at.


I don't think the driver is looking at her smartphone. I think she's checking the car's monitor (as in a computer screen). Although to be fair, that should be showing the car's view of its surroundings so I don't know what's going on there.

Edit: Nevermind. Someone posted a picture of the car's interior, below and there's no computer screen.


Link?


Sorry - I can't find it. This thread has grown rather.


Ok, so this is getting old now, but I just came across the following - which shows what I'd expect the roads to look like, and geesh, was Uber ever full of crap to release their video, which pretty much had the effect of exonerating them.

https://arstechnica.com/cars/2018/03/police-chief-said-uber-...

Please check the videos out.


Yep, exactly..


> Yep that exposure control / sensor quality of the dash cam in the video was rubbish.

Is that the same cam used by the AI to detect obstacles?

I would expect a safe self-driving car to include IR cameras so that it can be more cautious about moving warm-blooded creatures.

Surely some more detailed telemetry data would reveal whether the main issue is with the sensors or with the algorithm.


I highly doubt that camera is part of the perception pipeline.


> I have seen deer and possums by the side of the road

Both of those have eyes that act as reflectors and you can see their eyes well before you can actually see the whole animal.

This[0] suggests that the total time required for a human to avoid an incident like this is 3.6s (at 35 mph, casual googling suggests the car was doing 40). Even if we add 1 second of extra time to deal with it I'm not sure that makes the cut.

0) http://www.visualexpert.com/Resources/pedestrian.html


Other people in the thread have pointed out the woman stepped out in a darker area between where the street lights are placed. Reflecting eyes are not the only way to detect an object. A person watching the road would have seen her dark silhouette contrasting to the next patch of light.

Also remember she was not a stationary object. She was in the act of crossing the road. Human eyes/brains are good at detecting motion in low light even if we can't 100% make out what the object is.

I have lived in Tempe and know that part of town well. There are apartments, gas stations, hotels, strip malls, fast food restaurants and a strip club. It's not a pitch black country road.


I know what you're talking about with the eyes; I spend a lot of time driving rural WA highways at night, but no. I have seen deer that had their heads facing the other way and were standing in the shoulder/ditch area, in conditions where I can definitely make out the shape of the deer and its location but the dash cam sensor misses it entirely.

Your last paragraph is a valid calculation if this were a case of a person stepping directly off a curb into the lane of traffic. However, it appears that they were probably standing on the median looking to cross, then stepped off into the left-most lane of traffic, an empty lane, proceeded across that lane towards the lane in which the car was traveling. In this sort of situation human intuition will recognize that a person standing on the median of a high-speed highway is likely to do something unusual. Particularly when you observe the visual profile of, as media has reported, a homeless person who is using the bicycle with numerous plastic bags hanging off it to collect recycling.


Driver didn't see this person because the driver was occupied with smartphone, only occasionally glancing up.

Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.

Perhaps the video was intentionally darkened to simulate this effect. :P


>Also, has anyone here talked about the effect on the eyes of watching a (typically) bright white screen vs letting them adjust to the light of the night yet? This point deserves to be brought up.

Using bright interior lighting at night is something that we've known not to do for more than a century. If the driver couldn't be expected to see the pedestrian because the interior lighting or UX was too bright, that does not reflect favorably upon Uber.


I wonder if the driver is liable.


That's their only purpose. Nobody in their right mind could expect human observers to stay as alert as an actual driver when cruising for days with an AI that is good enough to not require interventions all the time. Passengers add nothing to safety, and an almost reliable AI will make anyone a passenger after a short while.


Completely agreed, but the law needs to take this into account. Human psychology can't just be ignored on this.


I'd like to have an interior view of what driver was actually looking at. It couldn't have been a FLIR monitor, for sure.. it seems more likely to be a phone held in the right hand? Bit hard to tell with the quality of the footage, but driver looked rather tired to boot.

If so (a hand held phone), in Australia that driver would be going to jail for culpable driving causing loss of life.


It could have been anything readable. I got the feeling it was either a Kindle or something like that, or maybe even a hardcopy of something printed or written on paper. This was just a hunch, but I think it's being validated in my mind by the fact that there was no light seeming to shine on the driver's face - though that's probably due to the night-vision camera not picking up that type of light? I don't really know. My mind is filling in a lot of gaps here, I realize.

EDIT: Upon re-watching the video a third time and really paying attention to this, I don't think there is any real way for us to know without confirmation from the driver themselves or an official report on the incident. My mind was definitely deciding things that just aren't discoverable from the video itself.


Here we have it, I believe:

"Uber also developed an app, mounted on an iPad in the car’s middle console, for drivers to alert engineers to problems. Drivers could use the app anytime without shifting the car out of autonomous mode. Often, drivers would annotate data at a traffic light or a stop, but many did so while the car was moving"

https://mobile.nytimes.com/2018/03/23/technology/uber-self-d...

The whole project seemed designed for an outcome like this, e.g. allowing the app to be used whilst on the move, after reducing from 2 operators to 1. Culpability ought to lie with Uber.


Here's a picture of the Uber car from inside. No FLIR, just GPS:

https://cdn.geekwire.com/wp-content/uploads/2018/02/Front-_-...

A different picture from that article shows that under the GPS is the gear stick, an emergency button, and a cellphone charger.


According to the filename, those are iPads, which implies they could have been displaying anything (not just GPS).


I think he is. At least, I've never heard of any law that removes responsibility from a driver when driving a self-driving car. I think this will also apply to empty cars: if they get into an accident, the owner is liable.


If I recall correctly the military has done a lot of studies on this.


I compare it to the backup camera in my car. While close up at night it is good, if something or someone is a short distance away I can barely make them out. However, looking in my mirrors I can see them or at least make out that someone or something is there.


A camera can have pretty good dynamic range at night, but it needs a big sensor and a fast lens to operate with a fast shutter speed. In the video, you can already see the motion blur, indicating the shutter speed is slower than what it needs to be to identify nearby objects in low light.
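For a rough sense of scale, the smear can be ballparked from the shutter time and the object's angular speed. All the numbers below are illustrative guesses, not the actual dashcam's specs:

```python
def blur_px(lateral_speed_mps, shutter_s, distance_m, focal_px=1400):
    """Approximate smear, in pixels, of an object moving sideways across
    the frame. focal_px is the focal length expressed in pixels - a guess
    for a 1080p dashcam with a wide lens."""
    angular_rate = lateral_speed_mps / distance_m  # rad/s, small-angle approx
    return angular_rate * shutter_s * focal_px

# A pedestrian crossing at 1.4 m/s, 20 m ahead, with a 1/30 s exposure
# smears across a few pixels. Halving the shutter time halves the smear,
# but also halves the light gathered - the low-light trade-off.
smear = blur_px(1.4, 1 / 30, 20)
```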

Autonomous cars are never going to be viable. Just looking at the cost of high-end SLR sensors and lenses that you'd need to match human eye dynamic range, and you're already looking at an expensive setup, before we even get to things like 360-degree vision and IR/LIDAR/Hyperspectral imaging. And that's in addition to all the compute problems.

Sorry Silicon Valley tech-bros, but it's a fantasy you're chasing that's never going to happen. The quicker we can end this scam industry, the better.

People really need to be told "no".

Probably better to chase after flying cars..


I think you’re comparing object detection to high quality photography though. There are plenty of options that can detect objects at night. Even cheap infrared technology, I would think, would be sufficient for picking up moving objects at night.


Wetware is astonishing stuff. All the propaganda to anthropomorphize machines is showing here... cheap IR sensors are not the issue. AI is not intelligent and inanimate objects have no self.

They should pivot to augmenting drivers, not attempting to drive for them. I would happily utilize a properly designed HUD (meaning I have source access) connected to a fast MerCad or bolometer array.


Sorry for the lack of input or varied discussion, but I just had to stop and say how goddamn friggin cool it would be to have bolometers hooked up to a smart HUD that didn't interfere with your vision of the road. Something really translucent that smartly blended its color scheme so as not to interfere with the coloration of signs and details beyond your view on the road / around the road.

But you are right, though. I think augmenting drivers sounds like a great idea in the sense you talk about. The kind of augmenting drivers I don't want are those stupid headbands you'd wear that beep like crazy if your head starts tilting in a way that resembles falling asleep. If you are in danger of falling asleep at the wheel and need a device like that I think it's pretty obvious one should take a nap on the side of the road or in a free parking lot, haha. Hopefully if we do wind up headed in that direction the people inventing will have a similar way of thinking and inventing.


> Wetware is astonishing stuff.

It really is. The eye can detect a single photon. Fingertips can detect 13 nm bumps, smaller than a transistor on Coffee Lake CPUs.

We're better off acknowledging machine limits to work on other problems instead.


High-quality exists because the human eye is that sensitive and discerning. And there aren't plenty of options that can detect objects at night. IR isn't any cheaper, and then you have to figure out what IR bands you want to detect.


> A camera can have pretty good dynamic range at night

No. A camera's dynamic range is pretty much fixed. If you can capture low-light objects it means high-light objects are completely blown out.


I've read (see [1]) that humans have a low-light ability that approximates ISO 60,000, a pretty large value and larger than simple video cameras provide. However, very high end pro/enthusiast SLR's go considerably higher, see this real-time astrophotography with the Sony a7s at ISO 409,600 (youtube video [2]). The same Sony will work great in full sunlight too.

The Canon ME20F-SH is a video camera that reaches ISO 4,000,000. This camera has a dynamic range of 12 stops and is available at B&H for $20,000. [4]

Of course, this isn't exactly the challenge that cameras face when assessing a scene. The dynamic range happens within a single scene all at the same time. Wide dynamic range (WDR) is the term I've seen used in describing video cameras that can handle both bright and dim areas within the same scene.

[1] http://lightartacademy.com/blog/tutorials/camera-vs-the-huma...

[2] https://www.youtube.com/watch?v=ZRzXgSMbBu0

[3] https://www.cambridgeincolour.com/tutorials/dynamic-range.ht...

[4] https://www.bhphotovideo.com/c/product/1187825-REG/canon_100...


The extreme numbers are for static cameras mounted on a tripod with a slow shutter speed. They won't do a tenth of that in a moving car at 45 mph.


No that's not how ISO works. The Canon ME20F-SH shoots high definition video at professional video shutter speeds and has an available ISO range of 800 to 4,560,000. At $20,000 I'm not suggesting that this exact camera would be appropriate for use in autonomous vehicles, but I am pointing out that video systems can now exceed the capabilities of human eyes.

There are a number of video samples shot on the Canon ME20F-SH on YouTube. In these one can see that under low-light situations the camera is shooting at ordinary video speed (the camera supports frame rates from 24 to 60 fps). I'm not trying to push the Canon ME20F-SH; I don't have any association with Canon. The manual for this camera is available online if you'd like to read up on it: [1].

The actual exposure of a video frame or image depends upon the f-stop of the camera's lens (aperture), the shutter speed, and the ISO of the image sensor. See [2].

Basically, each doubling or halving of the shutter speed corresponds to one "full stop" in photography. Each full stop of exposure doubles or halves the amount of light reaching the sensor. Changing the aperture of the camera's lens by full stops also doubles or halves the amount of light reaching the sensor; full stops for camera lenses are designated as f1, f1.4, f2, f2.8, f4, f5.6, etc. The light sensitivity of the film or sensor is also customarily measured in full stops: very slow fine-grained color film is ISO 50 and is usually used in full sunlight, ISO 100 is a bit more flexible, and ISO 400 used to be considered a "fast" film for situations where more graininess would be acceptable in exchange for low-light performance. Each doubling of the ISO number corresponds to a full stop.

So a photo taken at ISO 400, f2, with a 1/1000-second shutter would have the same "brightness" as a picture taken at ISO 100, f2.8, with a 1/125-second shutter (less 2 stops of ISO, less 1 stop of aperture, plus 3 stops of shutter speed). Naturally, other factors come into play: the behavior of film or digital sensors at extremely slow or extremely fast shutter speeds isn't linear, and there are color differences and noise issues too. See [3] if you are interested in more about how photography works.
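The stop arithmetic above can be sanity-checked mechanically. This little sketch (mine, not from any camera vendor) totals exposure in stops relative to an arbitrary baseline:

```python
import math

def exposure_stops(iso, f_number, shutter_s):
    """Total exposure in stops relative to ISO 100, f/1, 1 s.
    Doubling ISO or shutter time adds a stop; each full f-stop
    (a sqrt(2) step in f-number) subtracts one, since the aperture
    area scales as 1/f_number^2."""
    return (math.log2(iso / 100)
            - 2 * math.log2(f_number)
            + math.log2(shutter_s))

# The two settings from the paragraph above come out essentially equal;
# the tiny residual is because the marked stop "f2.8" is really f/(2*sqrt(2)).
a = exposure_stops(400, 2.0, 1 / 1000)
b = exposure_stops(100, 2.8, 1 / 125)
```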

[1] https://www.usa.canon.com/internet/portal/us/home/products/d...

[2] https://photographylife.com/what-is-iso-in-photography

[3] https://www.amazon.com/Negative-Ansel-Adams-Photography/dp/0...


They could have 2 cameras--one for low light, one for high-light fairly cheaply.


1. A good start would have been to put a good camera in the first place because that one is absolute crap.

2. I'm not sure you can fix the exposure on "fairly cheap" dashcams.

3. "high-light" in night settings are likely much lower than standard daylight, let alone bright daylight.

Though I guess you could have an auto-exposure dashcam standard and add a low-light one which is only active in low light conditions.


Which is the same as with our eyes. There are ways around that limitation.


IR can help with that. Headlights are limited by the disincentive of blinding oncoming drivers...

Now that I think about it, self driving cars may be paralyzed by other self driving cars running IR boosted headlights.



The footage from a normal camera should not matter; a self-driving car is equipped with stuff that works regardless of light conditions, like LIDAR or IR cameras. This looks to me like a software failure.


The footage from the normal camera does matter in that it's the main way that we (humans) can process the scene. The parent comments are just pointing out that the camera footage is likely darker than the actual scene in person.


The video footage being presented is beyond useless because it is misleading. The important data to determine whether the system misbehaved would look like this: https://www.theguardian.com/technology/video/2017/mar/16/goo...

Waymo cars are capable of sensing vehicles and pedestrians at least half a block away in every direction. I was reserving any judgement on whether this collision could have been prevented, but seeing the video tells me that 1) a human driver might have hit the victim regardless, and 2) I'm very surprised that the LIDAR sensor didn't cause the car to come to a halt much, much earlier. This is exactly the kind of situation where I would expect self-driving cars to be better than human drivers.


I agree that dashcam/external cam footage is going to be limited and possibly misleading, and I would think/hope such footage isn't the primary factor in evaluating accident cases. But I do think there's value to it. I shouldn't have said that it is the "main" way for us to process a scene, but the most accessible/relatable way.

What you posted looks pretty cool, I don't know enough about it to understand what I should be prioritizing focus on, but we can chalk that up to ignorance. The benefit that driver-view footage has is that it is a viewpoint all of us are familiar with. If you ask me to watch dashcam footage to assess some kind of traffic thing, there's a general expectation of where I keep my eyes and what I notice.

This normal-human-view mode is probably going to be necessary in AV cases in which we determine whether the car's AI did the right thing. Presumably, as AV becomes mainstream and extremely safe, these accidents will involve edge cases and outliers which are poorly interpreted by sensors/non-human-vision. Seeing the scene as a human driver does might be a necessary starting place?

But the Uber case in AZ, IMO, proves your point. The Tempe police quickly made a judgement call based on what seems to be inadequate video. Everyone who can now view the video will also be inclined to think how impossible it would be to avoid hitting the victim, even if the actual scene in-person has much more light. And of course, we don't want to judge AV solely on whether it performs as well as normal humans.


> we don't want to judge AV solely on whether it performs as well as normal humans

You don't think performing as well as normal humans should be sufficient to allow them on the road?

Or are you saying they should be allowed even if their performance is worse than human? (...as long as some other criterion is met?)


Jaywalking, at night, no reflectors, in the dark.

Even if the camera was brighter, Uber isn't at fault anyway....


Uber may not be at fault, legally speaking. That's up to the legal authorities to decide.

However, as a society and civilization, and even more so, as engineers and scientists, we are going to expect that the autonomous car matches or exceeds human-level performance in critical situations like this.

Therefore the time spent on investigating, understanding, and discussing the root causes of the accident is worthwhile. Accidents like these generally do not happen due to a single factor. It is necessary to understand all the contributing factors if we want to make autonomous driving systems more reliable.

At the very least we need to understand whether the pedestrian appeared in the other sensors that a human could have identified by looking at the sensor data, and if yes, whether the autonomous system matched or exceeded human-level performance by detecting the pedestrian, and if the pedestrian was indeed detected, why the autonomous driving system failed to respond to the situation.


In North America, isn’t the vehicle owner usually liable, regardless of who is driving?


Surely not? Cars are routinely driven by people who are not owners, and liability for traffic offences (including that the vehicle must be insured) is with the driver.


In my experience typically only minor infractions like parking violations are assigned to the registered owner of the vehicle, but in other case – accidents, running red lights etc. – the driver is liable regardless of who owns the car.


Parking violations are assigned to the registered owner because the driver is not present at the moment they are imposed.


I mean in terms of who gets sued for personal injuries. Or to repair damaged vehicles.


No. The general rule is that negligence is required to be held responsible. If I let my next door neighbor borrow my car to go to the grocery store, and he hits someone, I'm not responsible. Unless, the person can prove "negligent entrustment", i.e. it was irresponsible just to let this person borrow my car, e.g. they're a habitual drunk, or blind, or 11.

However, most auto liability insurance covers whoever you permit to drive the vehicle, so the owners policy does typically cover the fender bender on the way to the grocery store.


Correct, the owner's insurance policy is the primary coverage when the owner lends their car to a 3rd party. Obviously in the case of a moving violation the driver is at fault and receives the penalty, but damage is still covered by the owner's policy. In the case where the other driver is at fault, that car's owner's insurance is liable.


This car failed the moose test. Legal details aren't relevant, it's plain rubbish. This is test track pre-alpha stuff, for crying out loud.


The bike probably had a reflector on its pedals.

I would be very interested to learn whether or not the car's autonomous system identified a bicycle at any point prior to the collision.


The car likely didn't identify an obstacle at all, let alone a bicycle, as it didn't apply the brakes.


Exactly this. What's the response time of software? It ought to be close to zero and significantly faster than a human's. Let's say it's a generous 0.5s - no brakes were applied at all, and even with the crappy darkened video we got (the place isn't that dark: https://www.youtube.com/watch?v=1XOVxSCG8u0 ) the pedestrian was in view for 2 to 3 seconds.

The car didn't see it at all, even in those last moments.


Which is weird, because regardless of reflectors, it should show up on IR and lidar imaging.


Usually other reflectors are required when riding at night.

But this was a pedestrian, not a cyclist.

Personally, while riding at night, I look like a Christmas tree. $10 on EBay goes far these days in the reflective tape and bike light department:


Well it was a pedestrian but they were walking their bike across the road. It's not like the software should make a distinction between a cyclist in the way and a bicycle with no rider in the way.


In some places (UK for example) you need to have lights on your bike as well--not just reflectors.


If everyone followed the law, this wouldn't have happened for a multitude of reasons. Alas, here we are.


Indeed, it's hard to find pedals without them. Even ones that cost $10 a pair have reflectors. Unfortunately, pedal reflectors are ineffective when the bicycle's path of travel is perpendicular to the light source. The video doesn't reveal evidence of other reflectors, such as the common spoke-mounted ones whose purpose it is to highlight a bicycle traveling crosswise. For a moment, the bicycle is clearly illuminated by the headlights; I don't see any spots of light on the wheels or elsewhere.


When travelling perpendicular to the car, bike pedal reflectors are not visible.

What is surprising is that the bike didn't seem to have Tire reflectors like these:

https://www.wired.com/2011/11/fiks-reflective-rim-strips-for...

They are mandatory in lots of countries, to the point that it's impossible to buy tires without them. All brands come with them.


I have literally never seen that. Wow, that's a good idea!


For a side view, the reflectors on the tires (visible at the end of the video) are way better indicators of “watch out! Bicycle” than those reflectors.


It was a side impact, so pedal reflectors aren’t going to be visible.


Wheel spoke reflectors ought to have been - from what I've seen in the video, there were none (they're surprisingly bright at night).


See this video for a comparison of visibility (not in English, but that's immaterial - set speed to 2x ;)): starting with a "bike ninja" and going all the way to "Christmas tree" https://youtu.be/oAFQ2pAnMFA?t=1m0s


This video looks too dark, as if the camera didn't have enough sensitivity.


It's from 2011, there's been a lot of improvement in consumer-grade cameras since. Even so, it fits my perception IRL: even a small reflector is orders of magnitude better than no reflector, and adding multiple (esp. covering 360 viewing angles) makes you stand out at night; same goes for pedestrians.


The human driver didn't have their eyes on the road the majority of the time. And yes, the video footage is _nearly_ useless.

Maybe the outcome will be that thermal infrared will be mandated on all sensor packs?


This 100%. When I drive, I watch the road. I don't watch my mobile phone, I don't watch the kids behind, I don't watch my wife. I don't watch the sky. I don't watch the GPS.

I just watch the road in front of me.

My guess is that the car had been behaving well for a long time and consequently the driver lowered their vigilance. Big mistake.


> I just watch the road in front of me.

Unlike many others, sadly - even when they don't have any self-driving tech at all


Any chance the backup driver was looking at a driving-related computer screen? (Speedometer/Video of road/LIDAR/IR camera)


That screen is mounted high in Uber's cars. The driver was looking low.


A fully attentive human driver might have hit this person regardless. Would they have hit them while taking no evasive action whatsoever? No swerving, no brakes?


I don't think so: the dash cam video is misleading. I had multiple ninjas jump at me before, and although I did notice and avoid them, they were not visible on the dashcam until the very last moment. Surely Uber would not release data to intentionally mislead the public?


Well, right, I think even if the camera isn't at all misleading you could have hit the brakes and hit the person at a lower speed.


Even so, I count a full second from when a human paying attention would have seen something just using this video as eyes, until impact. The stopping distance at 35mph is 136ft, which is 2.65 seconds at 35mph, so the accident would still happen but the impact speed could be lower.
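Under back-of-the-envelope assumptions (the 1 s of sighting time counted above, ~7 m/s^2 of braking on dry asphalt; none of these are Uber's actual figures), the impact speed for a given sighting distance works out like this:

```python
import math

def impact_speed(v0_mps, sight_dist_m, reaction_s=1.0, decel=7.0):
    """Speed (m/s) at which the car reaches the obstacle; 0.0 if it stops
    first. Assumes constant deceleration once braking starts."""
    braking_dist = sight_dist_m - v0_mps * reaction_s  # road left after reacting
    if braking_dist <= 0:
        return v0_mps  # obstacle reached before the brakes even engage
    v_sq = v0_mps ** 2 - 2 * decel * braking_dist
    return math.sqrt(v_sq) if v_sq > 0 else 0.0

# 35 mph is about 15.6 m/s. Seeing the pedestrian 30 m out still means
# hitting them, but well below the initial speed - the accident happens,
# yet braking earlier lowers the impact speed.
v_hit = impact_speed(15.6, 30.0)
```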


Yeah, but at that speed, it's more than possible to swerve around an obstacle rather than screeching to a halt before touching it. Even turning slightly to the left/right would have made a dramatic difference in the outcome to this person's life. Not to mention the person in the car that might have also been severely injured if this was a heavier obstacle.

This was purely bad software, and no failure scenario being programmed in. I really don't think it's that difficult to program split-second reaction to obstacles that appear into the driving path. We need to get to a point where these vehicles can do stuff like this, even in a 2-dimensional way:

https://youtu.be/uLasBsoZBi0?t=1m40s


Well, there's also a question of whether the hardware is up to the task.


That’s a pretty significant difference — check out how quickly the fatality rates increase over 30mph or so:

https://nacto.org/docs/usdg/relationship_between_speed_risk_...

Getting hit at 10mph still is going to suck but it’s a lot more likely to be broken bones and road-rash.


Even if it had just managed to slow from the 38 mph that it was clocked at down to 30 mph, the probability of death would drop from about 45% to 10%.


https://en.wikipedia.org/wiki/Stopping_sight_distance

They seem to use 2.5 seconds as the standard for drivers to perceive and react to an obstacle, which based upon studies covers 90% of all drivers. 1.5 seconds to perceive, 1 second to react. Then you have maneuver time on top of that 2.5 seconds.

Given this, 1 second seems very low. A large percentage of drivers would probably plow into them at full speed.
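Those design values plug into the standard formula: perception-reaction travel plus braking distance. A quick sketch, using the cited 2.5 s and a conservative design-manual deceleration of 3.4 m/s^2 (assumptions, not measured data from this crash):

```python
def stopping_sight_distance(v_mps, perception_s=2.5, decel=3.4):
    """Distance covered while perceiving and reacting, plus the distance
    needed to brake to a full stop at constant deceleration."""
    return v_mps * perception_s + v_mps ** 2 / (2 * decel)

# At 40 mph (~17.9 m/s) this gives roughly 92 m, about 300 ft - which is
# why a pedestrian first visible only a second out is essentially
# unavoidable for a typical driver.
ssd = stopping_sight_distance(17.9)
```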


Your link says that 2.5 is to allow for worst case situation and below average drivers.

If all but the slowest 10% can react in 2.5 seconds then I would think many would do a fair bit better.

Edit: Apparently the average person is closer to 1.1 seconds. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.372...


Why haven't they released the infrared footage? That would be less deceiving.


This dashcam footage was released by the local police. It's likely they don't have the ability to access the autonomous car's working telemetry. Given Uber's legal history I doubt they'll release anything until they're compelled to by law. Personally I find it borderline irresponsible of Tempe PD to release this video and statements based on this video so early in the investigation.


That's probably the exact reason it wasn't released.


> it is clear that this camera is diminishing exactly how light the road ahead was

Someone else thought the same thing and went to get their own footage of the road.

https://www.youtube.com/watch?v=1XOVxSCG8u0


This shows how worrying autonomous cars are in a low-light environment. I suppose LIDAR should be able to pick that up, but sadly it failed miserably.


Even if there were no street lights, the car's headlights should give enough light to see the pedestrian on an empty road.


Low beams at high speed do not give enough advance warning to reliably prevent a collision; as your lights are turned downward, you see a pedestrian only when they're quite close.

In general, traffic safety requires that road planners ensure that one of three conditions always applies:

a) the roads are lighted from above; b) cars are able to use high beams; c) there are no pedestrians crossing the highway.

This can generally be done, mostly through investment in infrastructure to ensure lighting, or in isolated highways wherever traffic density doesn't allow driving with high beams.


> clear the driver was looking at his phone or doing something ...

Seriously, what else can you expect? These companies that put these things on the road with the justification that "there is a human behind the wheel" should be taken out back and shot in the head... Just pull the plug. No more self-driving cars for them. Those are just the kind of tech companies we don't want around...

See, it is not a mistake that they are making. They know well enough that the human behind the wheel is as useless as a dummy. But they do it anyway. What does it say about them?


There are other cases where having a backup-driver might help: mechanical malfunction, sabotage, or a more obvious un-sensed danger.


I feel sorry for the 'safety driver' here, as it seems likely much of the liability will fall on her. As a transgender ex-felon she can't have had a lot of fantastic job opportunities. I wonder how much Uber was paying her to sit in the hot seat.


Ok, she didn't have job options. That doesn't excuse her not doing her job and getting someone killed.


I didn't say I excused her. I said I felt sorry for her.


> Seriously, what else can you expect.

I see waymo drivers all the time actively paying attention to the road.


The difference between Waymo and Uber here should be the difference between being allowed to continue, or getting barred from further self-driving research.


So you think having a driver behind the wheel who could potentially intervene is as bad having absolutely no humans in the car at all?


1. A driver who is not looking at the road cannot "potentially intervene", and is as good as no driver at all.

2. These companies seem to be doing nothing to make sure that the drivers always pay attention and are always in a position to intervene. They even seem to allow smartphone usage while the drivers are in the car.

So, according to them, the human behind the wheel is just a decoy to prevent backlash from officials and the public, so that they can always say, "look, there is a human behind the wheel if something goes wrong"...

Also, even if they implement some measures, they can only make sure that the driver has eyes on the road, not that they are actually paying attention. A driver who is actively driving the car will notice a lot more than a passenger who is just looking at the road. There is no way to make a human pay that kind of attention without actually driving the car. So at best, your "driver behind the wheel" is as good as a passive passenger.

And as told before, the companies are not even trying to make sure of that.


I could be wrong, but I believe part of the reason for having a human behind the wheel is that it allows the testing to take place under existing driving laws. At some point prior to an unmanned vehicle being allowed on the road, lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel, a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.

In that context, the landscape changes significantly - instead of a self driving car that mowed down a pedestrian, we have a driver who was too busy looking at her phone to pay attention to what her vehicle was doing. From the various articles, it seems that she's not an engineer, and is there in effectively the same capacity as any other Uber driver. If that's the case, she's putting far too much trust into an experimental system. I agree that Uber could do more in the way of technological means to ensure the driver is paying attention, but at some point, an adult with a job needs to be responsible for doing that job.


>lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel..

The framework should have been in place before these vehicles were ever put on the roads. For example, there should have been formally specified tests a self-driving vehicle must pass before it can be put on the road, even with a backup driver.

> a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.

Anything that does not require drivers to keep their hands on the wheel is not a driver assist. It IS the driver. So there should be tests that verify the competence of the tech in the driver's seat.

I don't know how people let this happen!


>they can only make sure that the driver has eyes on the road. Not that they are actually paying attention. //

I'm certain that if you can design and build a self-driving car, you can design a simple human-attention monitoring system that causes the car to pull over if the attention level is too low.

Gaze monitoring that checks for looking downwards or away from the carriageway for extended or too-often-repeated periods would probably be enough.

I imagine the attention of the "vehicle operator" is vital to the proper training of the vehicles -- if they don't see near misses, or failures to slow for potential hazards, or failures to react to other road users, then how can the software's faults be corrected? Do they get a human to review all footage after the drive?


I agree completely. As far as I can tell, the driver did not even have her hands on the steering wheel. How hard would it have been to put sensors on the steering wheel to require both hands? They didn't even do that. Although even if they did, I agree with your statement that "[t]here is no way to make a human pay that kind of attention without actually driving the car."


Not difficult at all, and you can make them keep reasonable attention. Look at the new Cadillac driver assist: sensors in the wheel for hand placement -and- eye tracking. If the driver isn’t watching the road/holding the wheel, they get escalating alarms until the autopilot disengages.

And that’s consumer driver-assist tech, not “we are experimenting with full autopilot” tech, where I’d think such safety measures would be even more appropriate.

This is a solvable and solved technical challenge. Uber just didn’t devote any resources to it because they don’t appear to give a shit beyond acquiring a legal fig leaf to shift liability from themselves to an individual.
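The escalation logic itself is trivial to express. A minimal sketch, with thresholds and action names made up purely for illustration (not taken from Cadillac or any real product):

```python
def attention_action(seconds_eyes_off_road):
    """Map continuous eyes-off-road time to an escalating response.

    Thresholds are illustrative only; a real driver-monitoring system
    would tune these against gaze-tracker noise and road conditions.
    """
    if seconds_eyes_off_road < 2:
        return "ok"
    if seconds_eyes_off_road < 4:
        return "visual_warning"      # e.g. flashing light bar on the wheel
    if seconds_eyes_off_road < 6:
        return "audible_alarm"
    return "disengage_and_slow"      # escalate: pull over safely
```

The hard part is the gaze tracking itself, not this policy, and gaze tracking already ships in consumer cars.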


Frequent, randomly scheduled disengagements would keep the driver on edge and prevent them from becoming a passenger. But each and every one of them would create additional risk, so the net improvement might be negative. There is just no way to get this right, except by being reluctant to push to scale. With all the hype, wishful thinking and investor pressure, that clearly isn't happening.


I've been thinking about this for the last couple of days, and it's definitely a hard problem -- even with steering wheel sensors and eye tracking, it doesn't stop people zoning out and not being ready to react.

I did wonder if you could require the driver to make control inputs that aren't actually used to control the car but are monitored for being reasonably close to how the computer is controlling the car, and then the automation disengages (with a warning) if the driver is not paying sufficient attention. I then realised that may be _worse_ - in the event of a problem, the driver would have to switch to real inputs that override, which may delay action and not be something they do automatically. It would mean they are paying attention more to see if the automation is making errors where they have more time to react though (e.g. sensor failure that is causing erratic behaviour but not led to an emergency situation).

I wonder if a hybrid approach might be viable -- fake steering is used to ensure that the driver is alert and an active participant, but the driver hitting the brakes immediately takes effect and disengages the automation.


He was clearly not paying attention. So why not cut the crap and say it like it is: the car is driving itself with no supervision.


You are so right. To me it clearly looks like they are reading something below the dash. A book or phone, perhaps.

Looking forward to seeing this play out in court.


BTW isn't it a she?


> Also, watching the video of the interior it is clear the driver was looking at his phone or doing something else just prior to the impact. This alone leaves me skeptical to just how much could have been done to prevent this accident.

Wait, aren't you meant to have your hands on the wheel at all times? I don't see what there is to be skeptical about; if he had just followed the law this could have been avoided.

It seems to me the driver might be in for some legal trouble.


But this has got to be just the black-box camera, right? Surely the actual camera they use as a driving sensor is much better than this? Not to mention the LIDAR and all the other sensors that should have caught this.


> Also, watching the video of the interior it is clear the driver was looking at his phone or doing something else.

Probably checking the computer installed for diagnostics of the autopilot system. If it's in self driving mode and you are the engineer in charge, you'd want to constantly check what the system is seeing vs the actual conditions on the road.


If you're the driver of a car you're supposed to ensure safety by looking out, not verifying sensory information. If Uber designed their cars to show a rendering of the computer's perception to the driver, or other sensory output, they would violate that principle.

To me it looks like the guy is just falling asleep at a boring job. In all likelihood that was not an engineer, any more than any other taxi driver is an engineer.


If you're the driver of a car...

The software is the "driver" of this car, not the human behind the wheel. Take a look at job descriptions [0] for this role. They always include a bit about "operating in-vehicle computers". The fact is, we don't know what the person was doing.

0 - https://www.indeed.com/viewjob?jk=597616bf7d02d899&tk=1c96sl...


From that description: "Ensure the safe operation of our test vehicles on public roads."

I don't know about current regulations. Are companies now allowed to operate autonomous cars without a driver that pays attention?


The driver, Rafaela Vasquez, is a "vehicle operator," not an engineer. https://heavy.com/news/2018/03/rafaela-vasquez-uber-driver-s...


I am pretty sure Uber uses an iPad app for its autonomous vehicles. The driver is looking at that iPad application periodically along with the physical windshield view.


What kind of things does the iPad display? Should the safety driver even be tasked with looking at it while in 'control' of the vehicle?


If you search "Uber autonomous vehicle" you can see some videos of the display. From what I gather, it basically renders the sensor signals into a human-readable model. In general I wouldn't have recommended this driving style, but it might have been too dark to see much anyway.


I don’t understand this, I’ve seen a few people comment in the same vein.

People can safely drive in total darkness with the aid of their amazing human eyes and high-beams.

If for some other reason visibility is low you slow down - not rely on glancing at a backlit display ruining your own night vision and taking your eyes off the road for seconds at a time.


> apply the brakes

Or swerve out of the way.


Or flick your high beams, quick beeps, adjust speed... I do all these things if I see anything on a collision course with my vehicle.

It is surprising to learn that these vehicles are operating at night. To collect training data, since nighttime driving is inevitable, perhaps there are ways to simulate night to the computer vision systems during daytime so the human supervisor can still see clearly.


Lol, the "human supervisor", looking at his knees, probably on reddit or tweeting.

Would you trust this system that didn't even manage to slow down at all with a pedestrian slowly pushing a bike directly in front of it, artificially adjusted to be even worse, driving during the day??

I wouldn't.


Human eyes have the same issue: if you are next to a bright light source, areas with less light will look much darker. I assume cameras work the same way?


Cameras work the same way, but much, much more poorly. A human eye can take in a range of light-to-dark multiple orders of magnitude wider at the same time. The accepted estimate is that the human eye can handle about a 1,000,000:1 range of photon intensity from light to dark.
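In photographic terms that ratio is usually quoted in stops, i.e. doublings of light. A quick check of the figures (the ~1,000,000:1 eye estimate is from the comment above; the ~4096:1 camera figure is an assumed ballpark for a cheap sensor's single exposure, not a measured spec of this dash cam):

```python
import math

def stops(contrast_ratio):
    """Dynamic range in photographic stops: log base 2 of the contrast ratio."""
    return math.log2(contrast_ratio)

# Adapted human eye:            ~1,000,000:1 -> ~19.9 stops
# Cheap sensor, one exposure:       ~4,096:1 ->  12.0 stops
```

A gap of roughly 8 stops means the camera clips to black scenes the eye would still resolve comfortably.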


Uh I think it’s a girl


The driver has been described as male in news reports:

> "The driver said it was like a flash, the person walked out in front of them," Moir said, referring to the backup driver who was behind the wheel but not operating the vehicle. "His first alert to the collision was the sound of the collision."

> "The driver, Rafael Vasquez, 44, …"

https://www.bloomberg.com/news/articles/2018-03-20/video-sho...



What do doctors identify him or her as?


Cheers!


Also note that hitting the brakes was not the only option: steering to avoid the collision was another, maybe more effective. Still, I feel the same as you do: I cannot guarantee I would have avoided this.


I can confidently assert that Asian, or at least Indian, drivers would almost assuredly not hit the pedestrian in this scenario; we have trained our eyes and senses to watch out for these things, as they happen all the time.

EDIT: What I meant, in light of the downvotes, is that humans can train themselves to see, and that folks driving in Asia have a heightened sense of alertness due to their environment. Hope it came out alright.



