Entanglement
Machine learning and human ethics in driver-less car crashes
DOI: https://doi.org/10.7146/aprja.v6i1.116013

Abstract
This paper takes as its starting point driver-less car technology as currently being developed by Google and Tesla, two companies that amplify their work in the media. More specifically, I focus on the moment of real and imagined crashes involving driver-less cars, and argue that the narrative of the 'ethics of driver-less cars' indicates a shift in the construction of ethics: ethics becomes an outcome of machine learning rather than a framework of values. Through applications of the 'Trolley Problem', among other tests, ethics is transformed into a valuation based on the processing of big data. Ethics-as-software thus enables what I refer to as big data-driven accountability. In this formulation, 'accountability' is distinguished from 'responsibility': responsibility implies intentionality and can only be assigned to humans, whereas accountability draws in a wider net of actors and interactions (in Simon). 'Transparency' is one of the more established and widely acknowledged mechanisms for accountability, based on the belief that seeing into a system delivers the truth of that system and thereby a means to govern it. There are, however, limitations to this mechanism in the context of algorithmic transparency (Ananny and Crawford).
License
Copyright (c) 2024 Maya Indira Ganesh
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.