We accept that driving a car comes with risk, and sometimes (too often) people are hurt or die. Except... we don't accept that at all when a computer is behind the wheel. Will autonomous vehicles ever be safe enough for society to accept them? Listen in as I discuss this topic (and many more) with Emil Michael and Leah Mahtani in today's delightful episode of The Startup Podcast Reacts.
As with many things, people are oblivious to the downsides of the default option.
There's a big difference in concept between a human decision that leads to a mistake and a computer decision that leads to the same mistake. We have been trained to believe that technology is relatively infallible and that anything released for general consumption is well-tested and safe. Being in the technology space, we know that is absolutely bullshit. We hold technology to a higher standard because for a lot of things it gives us consistent results (typing numbers and operations into a calculator in a particular order will give you the same result every time). This translates into expecting a self-driving vehicle to consistently make the *right* decision — when in reality the vehicle makes the *same* decision, even though the real-world variables are rarely completely consistent.