Sublating Binaries: Smart Vehicles as Defensive Drivers

Posted on Monday, April 1, 2019 in News, The Ethics of Artificial Intelligence (AI).

Written by Douglas H. Fisher and Haerin Shin

In our previous post, we began discussing the theme of “binarism,” whereby scenarios or ideas are framed as two (or a very few) mutually exclusive choices. Binarism grossly oversimplifies discussions of, and actions on, technology and ethics.

We have split our discussion into two posts, one introducing the strategy of sublating binaries and one to dive into our illustration of smart vehicles as idealized defensive and informed drivers. A single unified version of these two posts can be found on the Ethics of Artificial Intelligence website.

One way to sublate binaries is, when presented with a binary choice, to imagine finer-grained choices that are subordinate to the original. Our illustration of this general strategy sublates the life-or-death “moral choice” of a not-very-smart vehicle: in its place is a multitude of subordinate choices made by a vehicle that drives more defensively and with more information than its myopic, reactive sibling.

Consider that an idealized smart vehicle will have more high-function sensors, cameras, knowledge bases, image processing, and other reasoning capabilities than vehicles do today. Hopefully, it will be able to anticipate potentially dangerous situations before they happen. It will observe an erratic vehicle on the road, steer clear, and notify law enforcement without asking for permission. A smart vehicle could also identify the models of all vehicles in the vicinity and be aware of their capabilities, ranging from good old-fashioned manually operated vehicles to those that are as smart as it is, smarter, or not quite as smart, and in what ways. It will notice whether the driver of a manually driven vehicle that wants to turn into its trajectory has made eye contact (implicit communication). The smart vehicle will hopefully be able to reason about possible near-future outcomes with such observations factored in.
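
To make the idea of a vehicle that “knows” its neighbors’ capabilities concrete, here is a minimal Python sketch. The model names, capability fields, and the rule for choosing a following gap are our own illustrative assumptions, not a real automotive knowledge base.

```python
# Hypothetical sketch: looking up capability profiles for recognized vehicle models.
# All model names and numbers below are made up for illustration.

from dataclasses import dataclass

@dataclass
class VehicleProfile:
    automation_level: int      # SAE level 0 (manual) through 5 (fully autonomous)
    can_communicate: bool      # supports vehicle-to-vehicle messaging
    braking_distance_m: float  # typical stopping distance at 50 km/h

# Toy knowledge base keyed by the recognized model of a nearby vehicle.
KNOWN_MODELS = {
    "legacy-sedan-2005": VehicleProfile(0, False, 28.0),
    "smart-suv-2030":    VehicleProfile(4, True, 22.0),
}

def following_gap_m(model_id: str, default_gap: float = 40.0) -> float:
    """Choose a following gap: give unknown or manually driven vehicles more room."""
    profile = KNOWN_MODELS.get(model_id)
    if profile is None or profile.automation_level == 0:
        return default_gap * 1.5   # be extra defensive around manual or unknown drivers
    return default_gap

print(following_gap_m("legacy-sedan-2005"))  # 60.0
print(following_gap_m("smart-suv-2030"))     # 40.0
```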

Idealized smart vehicles will also be able to reason with uncertainty (e.g., the animate “object” two blocks ahead on the right-hand curbside is a human with confidence 0.98, and if so, a man with confidence 0.85, who appears to be staggering, confidence 0.6, and staring at something, probably a smartphone, confidence 0.7). Uncertainties will be integrated with the AI system’s ability to analyze possible future outcomes. For example, a defensive AI driver will judge the rate of vehicles entering the highway in front of it and estimate the probability of a vehicle emerging when it, the defensive AI driver, reaches the intersection.
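
The arithmetic behind this kind of reasoning can be sketched in a few lines. The confidence values below are the illustrative numbers from the example above; the naive independence assumption and the Poisson model of vehicles arriving at an on-ramp are simplifications chosen only to show the flavor of the computation, not how a production system would work.

```python
# Sketch of uncertainty arithmetic for a defensive AI driver (illustrative only).

import math

# Chained detections: probability of a distracted pedestrian, assuming the
# individual detections are independent (a strong simplifying assumption).
p_human      = 0.98
p_staggering = 0.60
p_phone      = 0.70
p_distracted_pedestrian = p_human * p_staggering * p_phone
print(f"P(distracted pedestrian ahead) ~ {p_distracted_pedestrian:.2f}")  # ~ 0.41

# Defensive driving: if vehicles enter the highway at `rate` per minute, the chance
# that at least one emerges during the `window` minutes it takes us to reach the
# on-ramp is 1 - exp(-rate * window) under a Poisson arrival model.
def p_vehicle_emerges(rate_per_min: float, window_min: float) -> float:
    return 1.0 - math.exp(-rate_per_min * window_min)

print(f"P(vehicle at on-ramp) ~ {p_vehicle_emerges(rate_per_min=2.0, window_min=0.5):.2f}")  # ~ 0.63
```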

Smart vehicles will communicate with each other, and already do, with a vehicle downstream reporting traffic conditions to vehicles and personal devices upstream. In the future, all manner of information that we have mentioned above could be communicated amongst smart vehicles, perhaps through centralized servers. And vehicles could ask each other questions, for instance, about their intents (e.g., “Are you taking the off ramp that is coming up in 2 miles?”).
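
A toy version of such an intent query might look like the following. The message fields and the responder logic are hypothetical; real vehicle-to-vehicle stacks (e.g., DSRC or C-V2X message sets) define their own standards, which this sketch does not attempt to reproduce.

```python
# Hypothetical vehicle-to-vehicle intent query and reply, serialized as JSON.

import json
from typing import Optional

def make_intent_query(sender_id: str, target_id: str, question: str) -> str:
    """Serialize a question for a nearby vehicle, e.g., about an upcoming exit."""
    return json.dumps({
        "type": "intent_query",
        "from": sender_id,
        "to": target_id,
        "question": question,
    })

def answer_intent_query(raw: str, planned_exit: Optional[str]) -> str:
    """Toy responder: reply whether we plan to take the exit asked about."""
    query = json.loads(raw)
    taking_exit = planned_exit is not None and planned_exit in query["question"]
    return json.dumps({"type": "intent_reply", "to": query["from"], "taking_exit": taking_exit})

query = make_intent_query("veh-17", "veh-42", "Are you taking the off ramp in 2 miles?")
print(answer_intent_query(query, planned_exit="off ramp in 2 miles"))
# {"type": "intent_reply", "to": "veh-17", "taking_exit": true}
```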

In a smart, connected city, a smart vehicle may have access to the information known to city infrastructure, such as images that are already tagged as accidents, reckless drivers, or a band of schoolchildren on a field trip. The smart vehicle could broadcast to this infrastructure as well, in an interactive manner. One such potential that we examined is the smart intersection.
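
As a rough illustration of how a vehicle might consume such city broadcasts, here is a small publish/subscribe sketch. The event tags and the in-process “broker” are assumptions for illustration only; a deployed smart city would rely on real messaging infrastructure.

```python
# Toy publish/subscribe feed standing in for smart-city infrastructure broadcasts.

from typing import Callable, Dict, List

class CityFeed:
    """In-process stand-in for city infrastructure that broadcasts tagged events."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, tag: str, handler: Callable[[dict], None]) -> None:
        self._subscribers.setdefault(tag, []).append(handler)

    def publish(self, tag: str, event: dict) -> None:
        for handler in self._subscribers.get(tag, []):
            handler(event)

feed = CityFeed()
feed.subscribe("accident", lambda e: print("Rerouting around accident at", e["location"]))
feed.subscribe("school_group", lambda e: print("Slowing near school group at", e["location"]))

# The city (or another vehicle) broadcasts tagged observations:
feed.publish("accident", {"location": "5th & Main"})
feed.publish("school_group", {"location": "Elm St crosswalk"})
```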

In sum, we do not want an AI system that waits until the last minute to make a reactive binary decision in an emergency, based on solitary judgments. We want one that anticipates future outcomes based on information it receives from its own sensors, other vehicles, and the city itself, acquiring the ability to preempt such emergencies and make a multitude of smart choices throughout a given journey. An AI system in the near future could, and should, be the best defensive, networked, and thus multilaterally informed driver in the world.

Whether this ideal of the smart vehicle comes to fruition or not, the exercise of sublating binaries in cases like those considered above reveals the complexity of real and potential AI systems, with functionalities like image processing, reasoning with prediction and uncertainty, as well as social connectedness. These course lessons on the capabilities of AI systems are repeatedly and cumulatively reinforced in consultation with other societal sectors (e.g., education and environmental sustainability) and in a variety of media representations (e.g., film, literature, social media activities, and scholarly research). Our goal is that students become better commentators on popular depictions of AI and their social implications. We also hope that our students, who will be the inhabitants, producers, and designers of the future, will be equipped to serve these roles well. These exercises also remind the instructors of the inherent follies of binary choices, for we ourselves are not immune to the lull of oversimplified scenarios.

This post and the previous one focus on the technology of smart vehicles and what is possible in an idealized, yet hopefully foreseeable future. Our in-class and virtual forum discussions on this topic, and of smart cities generally, were also informed by other readings and issues, notably implications for privacy, which present other lessons on sublating binaries. Our discussions benefited from the in-class participation of Professor Abhisek Dubey of the Vanderbilt Initiative for Smart-City Operations Research (VISOR), led by Professor Gautam Biswas.
