AI Drivers: Death and Taxes (okay, no Taxes)

by John Lakness, AI Scientist

How Safe is Safe?

If you’ve ever spent more than a few minutes considering how to train a car to drive on a highway full of other cars, or have ever played Grand Theft Auto, or even if you can just remember back to the first time you drove a car on the highway yourself, you’ve inevitably come to the realization that absolutely nobody out there is safe.

Perfect Safety

Let’s just assume that you have trained an artificial intelligence to be 100% safe: it bears no risk of injury to its passengers and does no harm to any other living thing. Put that AI on an empty road and it does fine. Put it in the rain and snow and gale-force wind and it moves a bit more cautiously, but it’s fine. It knows the laws of physics. The uncertainty of its environment is bounded and it proceeds within those limits.

But that’s not really the problem, is it? We wouldn’t have over 5 million vehicle crashes per year if human drivers were all alone out there either. Put that vehicle out on the road with other vehicles, pedestrians, and cyclists, and you have a completely different problem. Consider driving along a two-lane highway with an oncoming truck. There is no absolute guarantee that the truck will not swerve to run you over. So now the AI has a problem: there is an agent with unknown behavior posing an imminent threat, and there is no way to guarantee that it won’t kill everybody. However, the AI was programmed to prefer, in all cases, non-lethal injury over any chance of death, so the car swerves off the road into a grove of trees, safe from the truck. No fatalities, but probably not what the passengers were hoping for.

Probably Safe

If we want useful AI drivers, we have to teach them heuristics about what humans will ‘probably’ do. That truck will ‘probably’ follow the road markings. That baby on the side of the road will ‘probably’ stay right where she is. The cross traffic will ‘probably’ stop when the light is green. That pedestrian walking toward the street will ‘probably’ not walk right into it. So we teach the AI to give up notions of perfect safety and instead rely on heuristics.

But here we make far greater assumptions about behavior than perhaps we should. Next time you’re driving on the highway, just try to convince yourself that you won’t be dead in 10 seconds if the truck next to you breaks those assumptions. You survive because humans are ‘probably’ safe, but unfortunately the average level of safety doesn’t mean much. If each human you encounter is a Bernoulli trial for expected behavior, then survival on the road follows a geometric distribution, and even a tiny fraction of unsafe behavior turns into a death sentence over time.
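A toy calculation makes the compounding concrete. The numbers below are invented, and real encounters are neither independent nor identically distributed, but the shape of the curve is the point:

```python
# Toy model: if each encounter with another driver independently goes wrong
# with probability p, then surviving n encounters has probability (1 - p)**n,
# the tail of a geometric distribution.

def survival_probability(p_unsafe: float, n_encounters: int) -> float:
    """Probability that none of n independent encounters goes wrong."""
    return (1.0 - p_unsafe) ** n_encounters

# Even a one-in-a-million rate of unsafe behavior adds up over a driving life:
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} encounters -> survival {survival_probability(1e-6, n):.6f}")
# ~0.999000, ~0.904837, ~0.000045
```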

I think about this all the time because I ride a bike to work, constantly at the mercy of erratic drivers. I’ve been lucky every time so far, but the message is clear: humans are not predictable. There is no bound on human uncertainty. Roads full of human drivers are unsafe for everybody. Probably safe is not good enough.

The Emergence of Ethics

So we need AI drivers, and we know there is no such thing as safety in the company of humans, but we hope at least that they can perform tasks ‘ethically’. This implies a balance between objectives. When we say we want ‘safe and efficient’, we’re really proposing some kind of tradeoff between the two that we don’t really understand or agree on.

To consider how we can train a machine to act in an ‘ethical’ manner (whatever that means), we have to consider the two basic models for training an artificial intelligence to act:

  1. End-to-end replication
  2. Agent behavior response

End-to-End Training

End-to-end is conceptually quite simple: given a history of sensory input data, train a model to replicate the desired response. You can do this through replication of human behavior, or reinforcement feedback, or some combination of both. Most machine learning models work, in the general sense, by fitting a certain structure of manifold to the available data in order to optimize some objective measurement of that fit. This fails in circumstances where the manifold structure is unable to represent the data distribution, the data is an incomplete representation of the problem domain, or the optimization method is unable to find the proper manifold.

Let’s explore that a bit. The sensor data comes in mostly as video and lidar. This is fed into convolutional and recurrent neural networks with some width, depth, and choice of activation function. There is nothing about this particular class of manifold which would suggest an ability to represent the real world, but it does show a pretty good capability to encode fairly arbitrary signals. That is, in fact, its strength as a method, but also its liability to overfitting.
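To make that concrete, here is a minimal sketch of what such an end-to-end driver looks like in code. It assumes a PyTorch-style setup; the architecture, sizes, and single training step are illustrative stand-ins, not anyone’s production system:

```python
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    """Illustrative only: a frame history goes in, a control command comes out."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        # A small convolutional encoder stands in for the perception stack.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # A recurrent layer carries state across the sensor history.
        self.temporal = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # [steering, throttle]

    def forward(self, frames):             # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.temporal(feats)
        return self.head(out[:, -1])        # action for the latest frame

# Behavior cloning: regress toward whatever the human driver did.
model = EndToEndDriver()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
frames = torch.randn(4, 8, 3, 64, 64)       # stand-in sensor history
human_action = torch.randn(4, 2)            # stand-in recorded controls
loss = nn.functional.mse_loss(model(frames), human_action)
loss.backward()
optimizer.step()
```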

The data itself must also be sufficient to represent the scenarios that will be encountered. Obviously this is impossible to do exhaustively, but there is a question to be asked about whether it can be done ‘advantageously’. What I mean by that is that the AI must outperform humans, and to do so it must be supplied with data that covers a large fraction of crash scenarios with a representative sample. We have to ask questions about the diversity of crashes: if every crash is sufficiently unique within the manifold representation, it will be impossible to learn anything useful about avoiding one.

The optimization method is also both critical and highly suspect in its present form. The vast majority of driving data is without incident, and the manifold is trained with stochastic gradient descent. Consider an unsafe driving practice that is commonly performed in the data without incident. It is quite likely that the optimization method would prefer to fit the vast majority of incident-free dangerous driving rather than the rare times it had catastrophic results.
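A toy calculation shows why. The counts and per-example penalties below are invented, but the arithmetic is the incentive the optimizer actually sees:

```python
# Suppose a risky maneuver appears 1,000,000 times in the data without incident
# and 10 times ending in a crash. A model that learns "this maneuver is fine"
# is penalized only on the rare crashes.

n_fine, n_crash = 1_000_000, 10
loss_if_wrong = 1.0   # per-example penalty for mispredicting the outcome

avg_loss = (n_crash * loss_if_wrong) / (n_fine + n_crash)
print(f"average loss of the unsafe shortcut: {avg_loss:.6f}")   # ~0.00001

# Plain averaging, which is what SGD over a uniformly sampled data stream
# approximates, gives almost no incentive to learn that the maneuver is
# dangerous unless the rare outcomes are re-weighted or over-sampled.
```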

There are ethical considerations in managing all three of these issues to produce a desired result, but ultimately the end-to-end AI is not an ethical machine. Once trained, it will simply respond to inputs in black-box form without consideration of any set of value objectives. If it has been poorly trained for a particular situation, it will react unpredictably, or, more likely, as if the unfamiliar components were not there at all. There is little difference, mind you, between a paper bag and a baby in the middle of the road if the training data has never encountered the baby.

Agent Behavior Response

Agent behavior response, on the other hand, gives agency to each object by modeling what actions it may take in the situation. It takes the sensor data and creates an abstraction of the scene objects to explicitly derive a high-order representation of the scene. The driver must then predict what each object is likely to do next, through simulation of physics and behavior. Physics is fairly well understood in most situations, but behavior is impossibly complex, with all the same considerations as in end-to-end training. Finally, a decision must be made, and this is where we apply our own ethical reasoning algorithm to derive an action.

Implicitly, however, we’ve been learning ethical strategies in this framework all along in order to predict agent behavior. A sensible person has an ethical weighting of personal safety, safety for others, and achievement of their goal in driving. We would also learn that a baby or a dog has no ethic of self-preservation in the face of oncoming traffic. Given a perfect AI, it would not be hard to replicate or improve upon the ethical behavior of human drivers. Surely this would bring up some interesting court cases if we ever got this far. It would be a great day, actually, if we advanced AI to the point of ethical considerations rather than merely functional ones.
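As a rough sketch of what that weighting could look like at the decision step, consider scoring candidate actions by an expected cost over risk to our passengers, risk to others, and progress toward the goal. Everything here is hypothetical, and choosing the weights is exactly where the ethics lives:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Prediction:
    """One hypothesized future for one other agent, with its probability."""
    probability: float
    risk_to_self: float     # risk to our passengers under this future
    risk_to_other: float    # risk we pose to that agent under this future

def score(cost_to_goal: float, predictions: List[Prediction],
          w_self: float = 10.0, w_other: float = 10.0, w_goal: float = 1.0) -> float:
    """Lower is better. The weights ARE the ethics; someone has to pick them."""
    expected_self = sum(p.probability * p.risk_to_self for p in predictions)
    expected_other = sum(p.probability * p.risk_to_other for p in predictions)
    return w_self * expected_self + w_other * expected_other + w_goal * cost_to_goal

def choose(actions: dict) -> str:
    """actions maps an action name to (cost_to_goal, predictions)."""
    return min(actions, key=lambda name: score(*actions[name]))

# The two-lane-highway moment from earlier: hold the lane, or swerve into the trees.
actions = {
    "hold_lane": (0.0, [Prediction(0.999, 0.0, 0.0), Prediction(0.001, 1.0, 0.2)]),
    "swerve_off_road": (5.0, [Prediction(1.0, 0.3, 0.0)]),
}
print(choose(actions))   # "hold_lane" with these weights
```

In this toy version the risks are single numbers and the cost is linear; a version closer to the opening scenario would separate the chance of death from the chance of injury, so that ‘prefer any injury to any chance of death’ becomes one extreme setting of the weights.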

Market Forces

Unfortunately, we’re probably not headed in that direction. Agent-behavior modeling is extremely difficult to develop, and it’s not clear that we have a proper framework for understanding behavior. While this was the dominant modeling approach for the first 30 years of AI, it has proven to give slow progress toward functionality, while end-to-end modeling has proven to be fast and easy for a relatively small team to deliver an apparently functional AI. In the race to market, this means that nearly all development today must converge to the more efficient solution.

Apparently Functional

Without knowing any details of the Uber pedestrian fatality, we can consider three hypotheses, alone or in combination:

  1. The AI was unable to consider the effects of its actions
  2. The AI was not properly trained to recognize the situation
  3. The situation broke expectations

A Spiritual Machine?

A lot of literature in the domain of AI ethics would suggest that today’s AI is perfectly able to consider all possibilities and consciously make the decision to do harm. I can’t vouch for the advancement of AI at Uber, but unless the Uber AI has already taken over, enslaved us all, and placed us in a simulation to obfuscate its existence, conscious ethical reasoning is just not anywhere near the present state of autonomous driving AI. Getting to this point is something that we should hope for rather than worry about.

But that’s not to say that it’s unsafe, or even less safe than human drivers. It can be safer than humans without being ethically conscious. The end-to-end AI that dominates the current race to market for autonomous driving is not an ethical intelligence. Even if it is safe, the failure modes of this method are numerous, poorly characterized, and completely opaque to the scientists designing them. Surely this is well understood by researchers oriented toward safety, so the general approach is to work backward from the perception-action loop to insert various levels of abstracted reasoning about objects and behavior.

The first step along this path is generally to split the problem into object scene representation and action. In this abstraction, the perceptual system recognizes objects and their configuration in the scene. The scene data is then fed into an action model trained on the driver data. By also creating a parallel path through an ‘expert system’ that confines the behavior, we can insert the first inklings of an ethical decision-making process. But both parts of this process are fairly dumb. Perception may be flawed, and the objects have behavior of their own.
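A minimal sketch of that split, with hypothetical names throughout: a learned policy proposes an action, and a hand-written rule layer confines it. The thresholds and rules are placeholders, not a real safety case:

```python
from typing import List, NamedTuple

class DetectedObject(NamedTuple):
    label: str             # e.g. "car", "pedestrian", "unknown"
    distance_m: float
    closing_speed_mps: float

def learned_policy(objects: List[DetectedObject]) -> dict:
    """Stand-in for the trained action model (a black box in practice)."""
    return {"throttle": 0.4, "brake": 0.0, "steer": 0.0}

def expert_constraints(action: dict, objects: List[DetectedObject]) -> dict:
    """A parallel rule layer that can override the learned proposal.

    This is where the first explicit, human-readable values live, e.g.
    'never accelerate toward anything nearby and closing fast, whatever
    the perception system thinks it is.'"""
    for obj in objects:
        time_to_contact = obj.distance_m / max(obj.closing_speed_mps, 0.1)
        if time_to_contact < 2.0:          # hard-coded safety margin (illustrative)
            return {"throttle": 0.0, "brake": 1.0, "steer": action["steer"]}
    return action

scene = [DetectedObject("unknown", 12.0, 8.0)]   # something 12 m ahead, closing at 8 m/s
print(expert_constraints(learned_policy(scene), scene))
```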

Perceptual Shortcuts

There are many ways that the AI may have been insufficiently trained, but for the sake of understanding, it’s useful to focus on a particular type of failure in the perception system. Training a computer vision system to recognize an object generally results, at best, in the simplest shortcut rule that can separate most instances of that object type from most of the other object types. There are three basic ways that this can go wrong.

Firstly, the shortcut may actually have nothing to do with the intrinsic properties of the object. For instance, if you peek under the hood of some of the most elaborate video activity recognition models, you will find that the model is dominated by very simple rules, such as the color of the background, because, for instance, soccer is usually played on grass while gymnastics happens on a blue mat. Babies are found in cribs, not in the middle of the highway.

Secondly, the training data obscures the massive complexity of objects. If there is a person in a car, the data tells the AI that it is a car, and not a person. The shortcut rule is then highly contextual, and essentially learns that a person cannot be in the middle of the street. We don’t have to tell a human intelligence that there are people inside cars, but that is effectively what we hide from the machine by annotating the object as a car instead of a person. Perhaps the machine will understand, but we don’t really know; it doesn’t ask questions.

The third point is technical: the training method of stochastic gradient descent may enable these failures, because its stochastic nature will necessarily average out the influence of these infrequent situations. Face plus sidewalk equals person. Face plus road equals car.

The interesting thing is that human brains work in the opposite manner. We pay attention to and learn from surprises, while ignoring everything else. We learn subtleties by paying attention to the small differences that arise in a rare situation. This is actually our own perceptual failure mode, extensively documented. Stage magicians know we can’t even pay attention to ordinary-looking things if we try, and newscasters know they can create mass panic over extremely rare events simply because they are rare. Attention makes learning efficient, and rare events are the most efficient things to learn from, but it’s not well-understood how to replicate this type of attention.
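One common approximation of ‘learn from the surprises’ is to sample training examples in proportion to how badly the current model handles them, in the spirit of prioritized experience replay. The sketch below is illustrative only, and I’m not claiming it closes the gap with human attention:

```python
import random

def surprise_weighted_sample(examples, loss_fn, k=32, eps=1e-3):
    """Draw a training batch biased toward examples the model currently gets wrong.

    A rough analogue of 'attend to surprises': the per-example loss becomes a
    sampling weight, so rare, badly-predicted situations are revisited far more
    often than their raw frequency would suggest."""
    weights = [loss_fn(x) + eps for x in examples]
    return random.choices(examples, weights=weights, k=k)

# Toy usage: ten thousand routine frames and a handful of near-misses.
examples = ["routine"] * 10_000 + ["near_miss"] * 5
losses = {"routine": 0.01, "near_miss": 5.0}   # the model finds near-misses surprising
batch = surprise_weighted_sample(examples, lambda x: losses[x])
print(batch.count("near_miss"), "of", len(batch), "samples are near-misses")
# Uniform sampling would almost never put a near-miss in a batch of 32.
```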

Breaking Expectations

Assuming that the AI runs an agent behavior response model, it has to be able to create an accurate behavior model, either by assuming rationality, or by learning from observation, which presumes at least predictability. But we know that both are only ‘probable’ at best. If 99% of humans are perfectly rational and ethical 99% of the time, there is still roughly 2% of the time (1 - 0.99 × 0.99 ≈ 0.02) where we face unbounded risk. We either assume that humans act rationally, or we forget that they don’t. We may be more predictable than rational, but even that doesn’t help much. We miss sensory inputs. We see and hear things that aren’t there.

But even in a situation when humans do act rationally, or predictably 100% of the time, we have the problem that other people’s behavior is a reaction to our own, and therefore, we can’t change our own behavior without breaking the assumptions of the other agent’s behavior model. When you consider that their behavior is the result of their prediction of our behavior, which is determined by our own expectation of theirs, you see that the rabbit hole goes extremely deep.

Equilibrium

It’s not clear at all that the complexity of this situation has an ethical solution. If you fundamentally can’t predict the behavior of another agent in response to your own, then you can’t effectively draw causality or blame. Human intelligence is remarkable in that somehow, given all the complexity of this dynamic system, we’ve found an equilibrium behavior that only kills us once every 100 million miles or so. To me, that’s incredible, and it’s something that we’ve learned almost completely from scratch with only minor incomplete suggestions from laws and driver’s education.

So if we don’t really understand human intelligence, but we can replicate it with enough data, where does that get us? Perhaps the best we can expect is a hyper-vigilant human, and certainly that would be better than a fallible human. If we were training to conform to the data, then this would be roughly our goal: the AI sits behind the wheel, sees a situation, estimates behavior, considers possibilities, estimates results, and chooses the best action roughly the way that a human would. How much safer would this be? How much more efficient? I don’t think we really have a clear answer. The data appear to suggest that a lot of crashes stem from remarkable foibles of human intelligence, which is good news, but humans are also remarkably good at avoiding crashes, often by breaking rules and norms and coordinating in strange ways.

The last time I was on the highway, the bumper fell off the truck in front of me. Everybody reacted together and we all survived. How? Would an AI so readily replicate this complex behavior? Many manufacturers now ship safety warning systems, which are extremely helpful as a backup, but in most (though not all) cases I’ve seen, the warnings trail the alert human’s reaction. The hope is that more data and better vigilance will create a much safer outcome, but the AI is a different kind of learner than a human, and I’m not sure we can be certain that the result will improve from this alone. We are fallible, but actually very smart and creative, and we spend a lot of time learning at the wheel.

There is also the question of law and ethics if we ever get to this point, and clearly there is liability on both sides if a situation is ever reached where a decision must be made between the lives of passengers and the lives of other motorists. I won’t propose a solution, but I will say that there is considerable danger in deviating from established norms, given that the other agent’s behavior cannot be known: they will react by predicting our behavior with, at best, a projection of their own ethics, or possibly a worst-case guess about what others might do.

Communication

My dog thinks that cars are giant bugs, and he’s not that far off from a behavioral perspective. Like insects, cars driven by humans exhibit complex social behavior with extremely rudimentary communication, just three flashing lights, except in the case of BMWs, which have their turn signals disabled.

Now I will make a bold claim: If there is any great leap forward in the safety and efficiency of transportation systems, it will be achieved mostly through communication among vehicles.

Communication completely changes the equilibrium. In game theory, the ability to communicate and commit is what dissolves dilemmas like the prisoner’s dilemma. For autonomous driving, communication eliminates most of the issues of behavior modeling, reducing the AI to a rather simple tracker of anomalous objects. It also moves coordination among agents from a probabilistic behavior-estimation problem to a multi-agent optimization. Complex coordinated behavior, from accelerating at green lights, to highway drafting, to avoiding a couch falling off a truck on the highway, becomes possible with some certainty, rather than simply hoping that others will react as we assume. While it’s true that an AI will be more predictable in this setting, and therefore allow better coordination, the coordination is still not perfect if the sensor inputs are not shared, and behavior remains hard to predict inside the black box of a complex ML model, because we don’t know exactly what would change it.
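As a toy illustration of that shift, here is a sketch of intent broadcasting between vehicles. The message format, fields, and margins are entirely hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IntentMessage:
    """Hypothetical V2V broadcast: who I am, where I am, what I intend to do."""
    vehicle_id: str
    position_m: float           # distance along the lane
    speed_mps: float
    planned_decel_mps2: float   # 0.0 means "no braking planned"

def plan_following_decel(me: IntentMessage, ahead: IntentMessage,
                         min_gap_m: float = 10.0) -> float:
    """With declared intent there is nothing to guess: if the car ahead has
    announced hard braking, match it (plus a margin) instead of waiting for
    brake lights and inferring what they 'probably' mean."""
    gap = ahead.position_m - me.position_m
    if ahead.planned_decel_mps2 > 0.0 or gap < min_gap_m:
        return max(ahead.planned_decel_mps2, 1.0) + 0.5   # small extra margin
    return 0.0

# A couch falls off the lead truck; it broadcasts its emergency stop once,
# and every follower in the chain reacts on the same tick.
truck = IntentMessage("truck", 120.0, 27.0, planned_decel_mps2=6.0)
cars: List[IntentMessage] = [
    IntentMessage("car_1", 90.0, 27.0, 0.0),
    IntentMessage("car_2", 60.0, 27.0, 0.0),
]
lead = truck
for car in cars:
    car.planned_decel_mps2 = plan_following_decel(car, lead)   # plan propagates down the chain
    print(car.vehicle_id, "brakes at", car.planned_decel_mps2, "m/s^2")
    lead = car
```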

But I admit that there is a lot that communication doesn’t solve. It’s possible, for instance, that turn signals and brake lights are the best that humans can do, given limited ability to synthesize and respond to information. It’s also a reality that most cars will not have communication capabilities, nor automated systems to participate in the safety and efficiency of the group.

Is It Ready?

The controversy over AI ethics in autonomous driving boils down to this single question. At some point, somebody has to define a statistical test that can be run against data to determine whether an autonomous driving AI is good enough. And it all must be done on some limited set of data collected in conditions similar to, but not exactly, full autonomy. Should the test simply require that the vehicle be safer than a human? How would we ensure that we get the same results in the real world as we got in the analysis? And when the analysis has been fully vetted, and the engineers have done as much as they can, and clearly everybody is working for the common good, but something bad happens nonetheless, what is the proper response?
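Even the crudest version of such a test shows the scale of the problem. A back-of-the-envelope calculation, assuming fatal crashes arrive as a Poisson process and taking the roughly one-fatality-per-100-million-miles human figure from earlier, gives the miles of flawless driving needed just to claim parity with humans:

```python
import math

HUMAN_RATE = 1e-8   # fatalities per mile, the approximate human baseline from above

def miles_to_show_parity(confidence: float = 0.95) -> float:
    """Miles of driving with zero fatalities needed before we could reject
    'the AI is no safer than a human' at the given confidence level."""
    # P(zero fatalities in M miles at the human rate) = exp(-HUMAN_RATE * M)
    return math.log(1.0 / (1.0 - confidence)) / HUMAN_RATE

print(f"{miles_to_show_parity():.2e} miles")
# ~3e8: hundreds of millions of crash-free miles just to claim parity,
# before any argument about whether test conditions match real deployment.
```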
