Sublating Binaries and an Intelligent Vehicles Case Study


Posted on Thursday, March 28, 2019 in News, The Ethics of Artificial Intelligence (AI).

Written by Douglas H. Fisher and Haerin Shin

We recently passed the halfway mark of our University Course on the Ethics of Artificial Intelligence, and much has happened since our introductory blog post. In this and the next post, we want to illustrate a theme that we face frequently in class, and that we believe is important for understanding technology and its societal implications. We have observed that our class discussions and virtual forum posts often show signs of “binarism,” whereby two scenarios or ideas are framed as an “either/or” choice – that is, as mutually exclusive. A great deal of discussion in our society exhibits this pattern as well. While binary or dichotomous analysis can be useful for charting strengths/weaknesses or benefits/drawbacks of particular AI applications, it can also deteriorate into identitarian politics or essentialist ontologies (gender, race, class, merit/intelligence, physical or mental ‘fitness,’ and more). Such an approach readily lends itself to absolutist ethics, whereby modes of existence and intelligence get mapped onto hierarchies such as superiority/inferiority or right/wrong. Binarism can also contribute to gross misunderstandings of technology and its power to shape our society.

In this post we introduce one way in which we sublate binaries: we suggest that when we are presented with an apparent binary choice, we should try to imagine finer-grained characteristics and circumstances concerning the choice, which inevitably leads to a much larger set of nuanced options. In Ethics of AI, this requires that we address the capabilities of AI, both current and projected, in greater detail, thereby integrating technological understanding into our discussion of ethics and societal implications. Indeed, sometimes the primary pedagogical motivation for sublating binaries is to take a deeper dive into the technology that we discuss with students.

For instance, the “moral choices” that so-called autonomous vehicles will have to make when faced with life-and-death situations offer a good scenario for illustrating this tactic of sublating binaries. In our class readings (the “Moral Machine” and “ethical vehicle” simulations) and elsewhere, thought experiments like the “trolley problem” – previously used almost exclusively in texts and lectures on moral decision making – are given new life. An in-depth analysis of the parameters involved shows that intelligent vehicles of the future may have to choose among hitting three adults, hitting a child, or killing the driver, by veering left, veering right, or staying on course, respectively. This is a ternary, but still mutually exclusive and highly constrained, choice. What should a “smart” vehicle do under such circumstances, and for what reason?
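
To make the structure of such a constrained choice concrete, here is a minimal, purely illustrative sketch in Python. The action names, outcomes, and harm scores are our own hypothetical assumptions, not any real vehicle's decision procedure; the point is only that a naive "count the lives" rule does not resolve the dilemma.

```python
# Hypothetical sketch: the ternary trolley-style choice as naive cost minimization.
# Actions, outcomes, and harm scores are invented for illustration.
ACTIONS = {
    "veer_left":      {"outcome": "hit three adults", "harm": 3.0},
    "veer_right":     {"outcome": "hit a child",      "harm": 1.0},
    "stay_on_course": {"outcome": "kill the driver",  "harm": 1.0},
}

def choose_action(actions):
    """Pick the action with the smallest harm score (a crude utilitarian rule)."""
    return min(actions, key=lambda a: actions[a]["harm"])

# The tie between the two single-fatality options is broken by dictionary
# order, not by any ethical principle.
print(choose_action(ACTIONS))
```

Note that the two single-fatality options tie, so the "decision" falls to arbitrary tie-breaking: counting lives alone cannot settle the question the scenario poses.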

Such life-and-death choices are interesting and important to consider, but they would also be binary (or ternary), last-minute decisions made by myopic, reactive vehicles that probably lie at the “not really smart” end of the (potential) AI spectrum. Beyond the moral ambiguities that undergird the priorities involved (e.g., age, gender, quantity, cultural practice, and even species-specific preferences), one key shortfall we identified is that in most scenarios presented in such sources, smart vehicles are neither communicating with one another nor receiving inputs from sensors and cameras on the streets. Moreover, we observed, they do not seem to be driving defensively – in other words, they fail to predict and anticipate such abrupt developments. We sublate the binary of the last-minute reaction by considering the multitude of decisions that the vehicle could make beforehand, and the even greater range of possible outcomes that an AI should be able to weigh in making those decisions, as the sketch below illustrates. The breadth and scale of these options dissolve the binary reaction, attesting to the multivalence of the issue at stake.
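
One way to see how anticipation sublates the binary is to note that, seconds earlier, the vehicle faces not two forced options but a graded menu of mild actions whose expected risks it can compare. The following sketch illustrates that shape of reasoning; the action names, probabilities, and severity scores are all invented for illustration.

```python
# Hypothetical sketch: deciding earlier turns a forced binary into a graded menu.
# Probabilities and severity scores below are invented assumptions.

def expected_risk(p_hazard: float, severity: float) -> float:
    """Expected harm = probability of the hazard arising times its severity."""
    return p_hazard * severity

# Candidate anticipatory actions, evaluated well before any crisis point.
candidates = {
    "maintain speed":        expected_risk(0.10, 10.0),
    "ease off the throttle": expected_risk(0.04, 6.0),
    "brake gently now":      expected_risk(0.01, 3.0),
    "change lanes early":    expected_risk(0.02, 5.0),
}

best = min(candidates, key=candidates.get)
print(f"least-risk anticipatory action: {best}")
```

Seconds before any crisis, every option here is mild; the life-and-death binary only appears once these earlier, lower-stakes choices have been squandered.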

To get a handle on how a smart vehicle might operate, we can consider how one drives at one’s very best. We could, hypothetically, imagine ourselves on a motorcycle, which presents an even more precarious situation. At a minimum, we should not linger in another driver’s blind spot; we should slow down as we approach a blind corner; we should refrain from tailgating; if we cannot make eye contact with a driver waiting to turn onto the street where we are traveling, we should slow down and give that driver room, since they might not see us; we should steer clear of a weaving vehicle ahead of us; and we should avoid the heavy foot and vehicle traffic on West End leading up to a Vandy football game. These are only some of the ways in which we might drive defensively; a few are written out as explicit rules in the sketch that follows.
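
As a thought experiment, such heuristics can be written down as explicit rules. The sketch below does so in Python; the perception fields and thresholds are hypothetical placeholders we invented, not any real vehicle’s sensor API.

```python
# Hypothetical sketch: the defensive-driving heuristics above as explicit rules.
# Perception fields and thresholds are invented placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class Perception:
    in_blind_spot: bool            # are we sitting in another driver's blind spot?
    approaching_blind_corner: bool
    following_gap_s: float         # time gap to the vehicle ahead, in seconds
    eye_contact_with_turner: bool  # did the waiting, turning driver see us?
    vehicle_ahead_weaving: bool

def defensive_advisories(p: Perception) -> list[str]:
    """Map perceived conditions to defensive-driving advisories."""
    advice = []
    if p.in_blind_spot:
        advice.append("move out of the blind spot")
    if p.approaching_blind_corner:
        advice.append("slow down before the corner")
    if p.following_gap_s < 2.0:  # the common two-second rule
        advice.append("increase following distance")
    if not p.eye_contact_with_turner:
        advice.append("slow down and give the turning driver room")
    if p.vehicle_ahead_weaving:
        advice.append("steer clear of the weaving vehicle")
    return advice

print(defensive_advisories(Perception(
    in_blind_spot=True, approaching_blind_corner=False,
    following_gap_s=1.2, eye_contact_with_turner=False,
    vehicle_ahead_weaving=False)))
```

A real system would of course learn and weigh such behaviors rather than enumerate them by hand, but even this toy rule set shows how defensive driving is a matter of many continuous, anticipatory adjustments rather than one climactic choice.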

It is within “easy” reach of AI technology that smart vehicles of the future will be able to drive more defensively than we ever could, or would want to, perhaps almost entirely eliminating last-minute, reactive life-and-death moral decisions. The morality of the machine is manifest in the defensiveness and informedness with which it drives. In the next post we sketch some possibilities to that end, thereby sublating oversimplified ethical scenarios by digging deeper into AI understandings. A single unified version of the two posts can be found on the Ethics of Artificial Intelligence website.

