[ExI] Autonomous car ethics

Mike Dougherty msd001 at gmail.com
Mon Jun 27 12:39:33 UTC 2016


On Mon, Jun 27, 2016 at 1:44 AM, spike <spike66 at att.net> wrote:
> Reason I bring it up: the self-driving cars cannot see through any car.
> Every car to the self-driver is a big opaque hulk.  So... the following
> distance for every self-driver increases as the square of the velocity, so
> it can stop in time if the car ahead of it slams into a big immobile object.
>
> This could create a problem.  On a crowded freeway, if the self-driver
> leaves that much space, proles will be constantly diving into it.  So the
> self-driver will need to react by slowing down.  This will encourage proles
> behind the self-driver to dodge around it and swerve back in ahead, causing
> the self-driver to drop back even harder.  Picture that in your mind and
> think it over.
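Spike's point that the following distance "increases as the square of the velocity" can be sketched numerically. This is a minimal illustration, not from the thread itself: the reaction time and deceleration figures below are assumed values, and the worst case modeled is the one he describes, where the car ahead stops dead against an immobile object, so the follower needs its entire stopping distance as the gap.

```python
# Hedged sketch (assumed numbers, not from the thread): worst-case
# following gap when the lead car can stop instantly against an
# immobile object. The gap must cover reaction distance plus full
# braking distance, so it grows with the square of speed.

def following_gap_m(speed_mps, reaction_s=0.5, decel_mps2=7.0):
    """Distance needed to stop from speed_mps before reaching the
    point where the lead car stopped dead: reaction distance plus
    v^2 / (2a) of braking distance. Quadratic in speed."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

for kmh in (50, 100, 150):
    v = kmh / 3.6  # convert km/h to m/s
    print(f"{kmh} km/h -> {following_gap_m(v):.0f} m gap")
```

Doubling the speed more than doubles the gap, which is why the opened-up space in front of the self-driver keeps inviting proles to dive in at freeway speeds.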

Don't you expect these cars to talk to each other?  They should also
be communicating with the road itself.  In the same way your
GPS-enabled route-finder knows about construction-related slowdowns
and accidents ten miles ahead, the self-driving car can "see through"
not only the car ahead but the entire route.

I also expect the human-driven cars will be communicating with the
self-drivers.  If you've looked at the data those "good driver"
insurance applications track, you'll see they profile the kind of
driver you are (either over several sessions or since the car last
started), and this profile will likely be shared with the traffic
network (central and peer-to-peer).

Also, the kind of aggressive driving you are picturing will be
immediately and continuously reported via this network.  In a world
where self-driving cars are capable of transporting proles who have
trouble following traffic regulations, how many strikes do you think
it will take before your license/driving privilege is revoked?



More information about the extropy-chat mailing list