That’s stupid. Autonomous cars require one and only one directive:
You shall not crash into, in decreasing order of priority, people, animals and property.
That’s all. “In the event of unavoidable accident…” is downright retarded; how the ■■■■ is the AI supposed to determine that a crash is unavoidable? In the event of an imminent crash, you reduce energy (cut the engine, brake until contact, release the brakes right before collision to increase the elasticity of the impact) and trigger the active safety systems, not speculate about the avoidability of the crash!
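In code terms it would be something this dumb; a minimal Python sketch, where the whole `Vehicle` interface (`cut_throttle`, `time_to_contact` and friends) is invented for illustration, not any real car API:

```python
from typing import Protocol

class Vehicle(Protocol):
    """Hypothetical actuator interface, made up for this sketch."""
    def cut_throttle(self) -> None: ...
    def apply_max_braking(self) -> None: ...
    def release_brakes(self) -> None: ...
    def trigger_active_safety(self) -> None: ...
    def time_to_contact(self) -> float: ...

BRAKE_RELEASE_WINDOW_S = 0.05  # release just before contact, as described above

def mitigate_imminent_crash(car: Vehicle) -> None:
    """Shed energy and arm the safety systems; don't judge 'avoidability'."""
    car.cut_throttle()              # stop adding energy
    car.trigger_active_safety()     # pretension belts, prime airbags, etc.
    while car.time_to_contact() > BRAKE_RELEASE_WINDOW_S:
        car.apply_max_braking()     # bleed speed until the last moment
    car.release_brakes()            # un-lock the brakes right before impact
```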
Philosophers always muddle everything…
Yes, and safe AI only requires Isaac Asimov’s 3 laws of robotics
Heuristics and simulations. It can do that in parallel with all the other important tasks it has to do. It can find an alternative maneuver to improve the outcome and the chances of survival for all living beings involved. Letting the AI determine the best possible way to react, as far as that can be determined in such a short time frame, isn’t “downright retarded”, it’s a perfectly sensible option.
The AI doesn’t need to make moral judgements. It needs to tell whether it’s a person or not, not its age, race or behavior. The goal is to not crash into it, not to crash purposely into A to avoid hitting B. And this might be related to managing collision energy: if there’s a chance of a “child-from-behind-a-truck”, the correct behavior is what sensible drivers do: slow down and keep a pace where, even if you hit the child behind the truck, you don’t smash all his bones. This is not about turning hard and avoiding the child by hitting a car in the opposite direction; it’s about keeping speed below the fatality threshold when the environment has obstructions.
That is, you-shall-not-crash, not “please, pick whether you roll over the dog, hit the driver’s friend, a child or three elderly people”. You don’t effin crash into people / animals / stuff.
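To put a number on “keep speed below the fatality threshold”, here’s a back-of-the-envelope Python sketch; the deceleration, reaction time and the 8 m/s survivability figure are assumptions picked for illustration, not standards:

```python
# Cap speed so that, if something steps out from behind an occlusion, the
# residual impact speed stays below a survivability threshold.
# All numbers here are illustrative assumptions, not regulatory values.

FATAL_IMPACT_SPEED_MS = 8.0   # ~30 km/h, a commonly cited pedestrian-survivability bound
MAX_DECEL_MS2 = 7.0           # assumed full-braking deceleration on dry asphalt
REACTION_TIME_S = 0.2         # assumed sensing-to-braking latency

def max_safe_speed(gap_to_occlusion_m: float) -> float:
    """Highest speed from which braking across the visible gap still leaves the
    car below FATAL_IMPACT_SPEED_MS when it reaches the occlusion."""
    a, t, vf, d = MAX_DECEL_MS2, REACTION_TIME_S, FATAL_IMPACT_SPEED_MS, gap_to_occlusion_m
    # Require v*t + (v^2 - vf^2) / (2*a) <= d, i.e.
    # v^2 + 2*a*t*v - (vf^2 + 2*a*d) <= 0, then solve the quadratic for v:
    disc = (2 * a * t) ** 2 + 4 * (vf ** 2 + 2 * a * d)
    return (disc ** 0.5 - 2 * a * t) / 2

# Parked truck 10 m ahead hiding the kerb:
# max_safe_speed(10.0) -> about 13 m/s (~47 km/h) under these assumptions
```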
As for crashes, there is no need to determine whether one is unavoidable or not so that the AI can switch on moral judgement. If all trajectories end in a crash, the AI must reduce energy for all parties involved, within the physical limits of the occupants, i.e. not brake or turn so hard as to break their necks. Engineers may make a moral judgement on where to put the “imminent crash” threshold, which would probably be “the point where no human reaction to the incoming projectile can avoid impact”. And even then, it might be preferable to crash into a parked car than to be hit by the higher-speed incoming vehicle.
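And the “reduce energy for everyone, within occupant limits” part needs no moral calculus either; a toy sketch (the Trajectory fields and the 6 g limit are invented for the example):

```python
from dataclasses import dataclass

MAX_OCCUPANT_DECEL_MS2 = 60.0   # assumed tolerable peak deceleration, roughly 6 g

@dataclass
class Trajectory:
    impact_speed_ms: float   # predicted closing speed at contact (0 if no contact)
    peak_decel_ms2: float    # braking/steering load imposed on the occupants
    hits_person: bool        # does this path intersect a person at all?

def pick_least_bad(options: list[Trajectory]) -> Trajectory:
    """Prefer paths that avoid people entirely; among the rest, shed the most
    energy without braking or swerving harder than the occupants can take."""
    feasible = [t for t in options if t.peak_decel_ms2 <= MAX_OCCUPANT_DECEL_MS2]
    if not feasible:
        feasible = options  # degraded case: nothing respects the occupant limit
    return min(feasible, key=lambda t: (t.hits_person, t.impact_speed_ms ** 2))
```

In this framing the parked car beats the high-speed incoming vehicle simply because the impact energy is lower; no “who deserves to die” table is needed.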
The thing is that cars could drive safely even if there were people trying to throw themselves under the wheels from behind a wall. But that would not be how humans are used to driving. People can assume there will not be a suicidal man behind every corner and at every crossing. Even when breaking the rules themselves, they can assume that others are not breaking them.
If by any means someone programmed the car to ignore certain things because the people inside must go fast, then it is those humans’ fault and those humans’ ethics. It’s still narrow intelligence, not an AGI that could add something of its own, thinking creatively, developing a position, maybe even making errors while learning.
The thing is… XKCD already pointed it out:
In most situations, the AI just needs to make the same kinds of assumptions a human driver does. Most people don’t throw themselves in front of cars for insurance fraud. Most people don’t crash their car into others on purpose.
Of course, as you point out, avoiding crashes in situations where the AI can’t be sure about safety conditions should be implemented: not driving fast near schools, not driving fast in cities with obstacles blocking the sensors, slowing down if you don’t know what’s going to happen in the next 0.5 seconds… this kind of designed, built-in moral choice.
What upsets me are stupid ■■■■■■■■ like the “tram dilemma”, because the solution to the “tram dilemma” is to a) see the people on the tracks from far enough away to brake and b) effin’ stop the tram. That is, AI safety must be behavioral rather than reactive; not reacting to moral dilemmas, but avoiding them by moving at sensible speeds according to the environment and sensor conditions.
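“Behavioral rather than reactive” basically means “never outrun your sensors”. A toy sketch, where the 0.5 s horizon and the confidence scaling are my own assumptions:

```python
MAX_DECEL_MS2 = 7.0      # assumed full-braking deceleration
SENSING_HORIZON_S = 0.5  # "if you don't know what happens in 0.5 s, slow down"

def speed_limit_from_visibility(clear_distance_m: float, confidence: float) -> float:
    """Speed from which a full stop still fits inside the confidently sensed gap."""
    usable_gap = clear_distance_m * max(0.0, min(confidence, 1.0))
    # Require v*horizon + v^2/(2*a) <= usable_gap, then solve the quadratic for v.
    a, t = MAX_DECEL_MS2, SENSING_HORIZON_S
    return max(0.0, ((a * t) ** 2 + 2 * a * usable_gap) ** 0.5 - a * t)

# Near a school with 20 m of confidently clear road:
# speed_limit_from_visibility(20.0, 0.9) -> about 12.8 m/s (~46 km/h)
```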
That is, you shall not crash into people/animals/stuff.
I love potatoes too.
Classical music in the last scene.
I wonder what they will listen to in the future, though.
Happy Mothers Day!
That would be one hell of a weird Pornhub promotion…
Pornography?!? I endorse that product AND service.
Hello lovelies and Happy Mothers’ Day!