March 21, 2018

Autonomous Vehicle Kills Pedestrian Walking with a Bike

With the death of a pedestrian, the seemingly relentless march forward of autonomous vehicles has taken a pause, as reported by the New York Times. From a legislative standpoint, autonomous vehicles (AVs) are operating in a piecemeal legal environment, and the state of Arizona was an early adopter, inviting these vehicles to be tested on the state’s road network in a “regulation-free zone.” “Then on Sunday night, an autonomous car operated by Uber — and with an emergency backup driver behind the wheel — struck and killed a woman on a street in Tempe, Ariz. It was believed to be the first pedestrian death associated with self-driving technology. The company quickly suspended testing in Tempe as well as in Pittsburgh, San Francisco and Toronto. The accident was a reminder that self-driving technology is still in the experimental stage, and governments are still trying to figure out how to regulate it.”
The Uber car, a Volvo XC90 sport utility vehicle outfitted with the company’s sensing system, was in autonomous mode with a human safety driver at the wheel but carrying no passengers when it struck Elaine Herzberg, a 49-year-old woman, on Sunday around 10 p.m. Sgt. Ronald Elcock, a Tempe police spokesman, said during a news conference that a preliminary investigation showed that the vehicle was moving around 40 miles per hour when it struck Ms. Herzberg, who was walking with her bicycle on the street. He said it did not appear as though the car had slowed down before impact and that the Uber safety driver had shown no signs of impairment. The weather was clear and dry.
There has been early discussion of the computer-based “ethics” of autonomous vehicles, and the fact that the vehicle was being designed to save its occupants first. Autonomous vehicles have been hailed as a way to stem the more than 37,000 annual road deaths (2016 figures) through safer, logical control. But the technology is only a decade old, and is “now starting to experience the unpredictable situations that drivers can face.”
“This tragic incident makes clear that autonomous vehicle technology has a long way to go before it is truly safe for the passengers, pedestrians, and drivers who share America’s roads,” said Senator Richard Blumenthal, Democrat of Connecticut. While autonomous vehicle testing has been temporarily halted following this death, investigators are examining what led to the vehicle’s failure to recognize the pedestrian. Vehicle developers have described the challenge of teaching these systems to adjust for unpredictable human behaviour. As a professor at Arizona State University put it: “We’ve imagined an event like this as a huge inflection point for the technology and the companies advocating for it,” he said. “They’re going to have to do a lot to prove that the technology is safe.”

If you love this region and have a view to its future please subscribe, donate, or become a Patron.

  1. This was unquestionably a tragic accident. However, I think the point being missed (from reports I’ve seen) is that there was a driver at the controls, who apparently had no time at all to react when the victim abruptly walked into the traffic lane. I’d suggest it’s likely that a severe injury or fatality would have occurred regardless of whether the car was an AV or one under human control. Even though I don’t like the headline, here is a brief but factual article:

    1. Indeed, there are at least two sides to each story. It makes for spectacular headlines that it was an AV. If it had been a normal car we would never have heard about it, as traffic accidents like this happen all the time, all over the world.
      Likely AVs are far more reliable than humans, on average. But because the technology is new and not yet licensed in most places, it makes a good story.
      Uber will check their software and their sensors, and once they determine it could not have been prevented, they will issue a report and continue testing, as they should.
      Deleted as per editorial policy

      1. If you look at the location, it’s designed so that the natural walking desire line crosses the road at that spot. So instead of redesigning it so that the natural place to cross is a safer place, they just put up a sign and call it a day.
        This is an example of car-centric design that’s common all over. If you’re not in a car, you’re not included in the design and only considered when you’re a nuisance to the dominant class.

    2. I know there are a lot of people who are concerned about AVs. I have some concerns myself, but I feel that generally we will be safer than we are with humans who are prone to error, especially distracted ones.
      As far as this particular case goes, it would be good to have the video released so that we can confirm what happened. AVs can’t stop on a dime either, and maybe the car should have been able to anticipate what might happen, but none of us have seen the video, so any comments on cause are not based on facts.

    3. Now that the video from the crash has been posted, we can see that:
      1) The road was poorly lit
      2) The road was terribly designed
      3) The car was going too fast for the visibility
      4) The ‘safety driver’ was busy looking at their phone, not at the road
      A competent driver travelling at a reasonable speed might’ve avoided it. A good road design could’ve avoided it. A properly tested and reliable self-driving car likely could have avoided it. None of those things were present in this case, and a woman was killed.

      1. I agree with all four of your points – but not your conclusion. “A properly tested and reliable self driving car likely could have avoided it” begs the question of whether anyone is capable of building such a machine.
        “it’s difficult to understand why Uber’s self-driving system—with its lidar laser sensor that sees in the dark—failed to avoid hitting Herzberg, who was slowly, steadily crossing the street, pushing a bicycle.”
        The wired article says that this was an ideal scenario for the car to detect and avoid the collision. It didn’t swerve. It didn’t even brake.
        The video is misleading, possibly even tampered with. From a comment on Slashdot: “If you framegrab the images and then histogram the light curve it’s hard edged at zero. Someone deliberately made the blacks blacker so it seems like no one could have seen her. Perhaps this is an artifact of the video compression algorithm or the camera itself.”
        In broad daylight, there would be no excuse whatsoever for this collision regardless of whether the pedestrian was following the rules. The only factor in the car’s favour is the darkness, which has been exaggerated in the video and which should not be a factor because of the car’s lidar. If this is the level of performance in an ideal scenario, what makes us think we can build cars that can cope with less-than-ideal conditions?
        I’m afraid I haven’t the link, but I recall an expert explaining that if a self-driving car on a freeway encounters an immobile car in its lane, it *cannot see it*. It cannot distinguish the stationary car from background road signs etc., and is likely to accelerate into it. We are nowhere near autonomous driving.
        The public’s faith in computer systems in general, and “artificial intelligence” in particular, boggles my mind. And it is faith: the public really have no idea what these systems are or how they work. These systems have no “intelligence”: they assign patterns to categories. Instead of AI, we should be talking about VI – virtual intelligence – i.e., not intelligence at all.
        One of the scariest aspects of this story is how people have responded to it. People seize on explanations for why it was all her fault. News stories gave the impression that the pedestrian suddenly stepped on to the road. Nothing of the sort happened – she crossed nearly two whole lanes before being struck.
        Setting aside problems with the video, in that darkness, the car was over-driving the conditions. It was going too fast to see obstacles in time to avoid hitting them. It’s the perfect metaphor for our romance with autonomous cars. We are speeding forward into the darkness, hoping that some yet-to-be-invented algorithm will give us light before we crash. We believe because we want to believe.

        1. Normally I would argue it is unlikely the video was tampered with, but this is Uber we’re talking about and that would be right up their alley.
          It does seem likely that even if the video wasn’t tampered with, it gives a false impression of how dark it was. The human eye is generally better at low-light vision than standard video cameras. It is quite possible that the victim would’ve been clearly visible to a driver paying attention.
          The behaviour of the human driver, while reprehensible, was also completely predictable. In 2015 Chris Urmson, at the time the head of Google’s self-driving program, described how one of their employees in the car completely ignored the road, digging around in the back seat for a phone charger.
          Google (now Waymo) responded to this problem by realizing that human drivers wouldn’t behave safely because they’d lose attention and trust the car too much, so they switched to designing cars with no controls at all, cars that absolutely had to be able to handle every situation. This is a much harder task, which made it slower for them to get to market. Uber, of course, decided to continue with the known-to-be-unsafe approach of ‘safety drivers’.
          If you watch the TED talk above you’ll see that even 3+ years ago the cars were quite capable of recognizing stationary vehicles and other obstacles. This is not to say they are perfect. Right now the good ones are perhaps a bit better than human drivers overall, but have different failure cases. There are situations where human drivers wouldn’t make a dangerous mistake and the car will, and vice versa.
          Responsible companies are continuing to carefully push forward because of the potential, both for saving millions of lives and for making lots of money. They have detailed data on exactly how safe the cars are over the literally millions of miles they’ve driven. While they aren’t there yet they are getting closer and closer to a future in which cars don’t kill hundreds or thousands of people per day.
          That won’t make cars stop being cars, and it won’t fix the problems they cause in city design, inactivity, massive public and private expenditures, etc. It will likely cause new types of problems, as the perceived cost of driving drops and people drive more than ever.
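
The Slashdot check quoted earlier in this thread (grab frames, histogram the luminance, look for a hard edge at zero) can be sketched in a few lines. This is a minimal illustration using synthetic frames, not the actual dashcam footage; the `blacks_are_crushed` helper and its thresholds are assumptions for demonstration only.

```python
import numpy as np

def blacks_are_crushed(frame, spike_ratio=0.05):
    """Heuristic: the luminance histogram is 'hard edged at zero' when a
    large spike sits in bin 0 with almost nothing in the next few bins,
    i.e. shadow detail was clipped to pure black rather than captured.
    `frame` is an (H, W) array of 8-bit luminance values."""
    counts, _ = np.histogram(frame, bins=256, range=(0, 256))
    spike = counts[0] / frame.size               # fraction of pure-black pixels
    near_black = counts[1:8].sum() / frame.size  # fraction of almost-black pixels
    return bool(spike > spike_ratio and near_black < spike / 10)

# A synthetic low-light frame: noisy shadows taper smoothly toward zero.
rng = np.random.default_rng(0)
natural = rng.normal(30, 12, (480, 640)).clip(0, 255).astype(np.uint8)

# The same frame with blacks deliberately crushed: every dim pixel -> 0.
crushed = natural.copy()
crushed[crushed < 25] = 0

print(blacks_are_crushed(natural))  # False: shadows taper into zero
print(blacks_are_crushed(crushed))  # True: hard edge at zero
```

A naturally dark frame still scatters pixels through the near-black bins; deliberate crushing (or aggressive compression) empties them, leaving the hard edge at zero that the commenter describes.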

  2. There’s going to be a 5-10 year period (starting now, I guess) where people are going to be as outraged about AV-caused deaths as they *should* be about “normal” traffic deaths.
    Cynics will say we shouldn’t be concerned because these deaths would happen anyways.
    Those of us who care about traffic safety need to use this opportunity to set the terms for the future of transportation. We lost the previous battle in the middle of the 20th century, and now thousands of traffic deaths every year are normal and acceptable. We should fight to make sure that that’s not the case with AVs.

  3. Allegedly, the Uber car should have sensed “human on road” and at least braked.
    “… this is strongly suggestive of multiple failures of Uber and its system, its automated system and its safety driver …”
    We shall see how Uber responds. With Facebook in the news this week amid data-sharing scandals and a multi-billion-dollar loss in value, it has not been a good week for high tech and its alleged benefits to mankind.
    Do we need humans after all?


Subscribe to Viewpoint Vancouver
