Presentation
"mongrel ways"
comes from a feminist activist background
she wants to avoid Aristotle - would this be possible?
Interested in a crisis of ethics in big data, the algorithmic regulation of everyday life.
need for an ethics of algorithms - too much attention to the algorithm; nobody wants to do an "algorithm of ethics"
The trolley problem and abortion
http://www.pitt.edu/~mthompso/readings/foot.pdf
a simplistic problem flattening ethics into a binary choice.
anxieties about algorithms, and Silicon Valley entrepreneurial enthusiasm too.
emergence of technology in society.
reducing ethics to programmable outcomes.
technology design.
Instead proposes ethics as a relationship
Ethical research: how can ethics be approached without relating to the philosophical arguments – how is ethics done?
more than programmable rules: a sociotechnical framework for evaluation and accountability.
BMW, Daimler and Audi jointly bought the mapping company to develop software for driverless cars.
Understanding it is not about ethics, but about infrastructure.
the system needs to be interfaced with the German civil space.
Error is one part of a much larger ethnography of ethics.
Established history of the mathematization of risk, from crash-testing to its computer-simulated descendants.
heteromation of labour - when the machine calls for help? Help somewhere else, by somebody else.
a continuum between automation, augmentation and heteromation: "the machine calls for help"
how does the link between ethnography and ethics work cross-culturally? You mention the US and German contexts, but what else?
There was this film on German-speaking TV (also in Switzerland and in Austria) recently where the Trolley Problem was enacted. Basically, a pilot decides whether or not to shoot down a hijacked plane (with around 164 passengers on board) that would otherwise crash into a football stadium of 70,000 people. He is then judged in court. The 'audience' was allowed to vote on what their judgment would have been, guilty or innocent, and on the complexities involved.
http://www.daserste.de/unterhaltung/film/terror-ihr-urteil/index.html
http://www.vice.com/de/read/ich-bin-rechtsanwalt-und-finde-dass-terrorihr-urteil-volksverdummung-ist
Comments & Questions
mutual composition of ethics and aesthetics // aesthetics of/with "almost minds" // cultural-technical normalisation of deviance
I have a question about driverless cars - if you have to keep your hands on the wheel, then why use them? What are they for - if this is not just a step towards a moment when we don't have to keep our hands on the wheel?
* the question Christian is asking about 'control' - if it is not controlled, then is it even a car any more? Is it even fulfilling the 'role of car'? (And this is a gendered question!) -- there is a reference which might be interesting here to the way William S. Burroughs writes about control.
http://eng7007.pbworks.com/w/page/18931079/BurroughsControl
Also, bearing this in mind, it seems difficult in a project which is *about* ethics to elide the co-evolution of this technology with military applications.
in synthesis, automation, reproduction - we have seen how granular analyses can re-produce real speech; can this be done with 'real ethics'?
Discussion
Søren: your paper has a sociotechnical approach.
Nice because when technology enters complex sociological situations, 'shit happens' :-)
A suggestion: pilot: we need instruments that show which instruments are failing. We need to see through the interface to see what's going wrong.
is the system or are the instruments failing?
Peter Bøgh Andersen made some studies of navigation and instruments/interfaces on ships that might be relevant - you can find it here:
http://imv.au.dk/~pba/
(I'm a bit unsure which one it is...).
Sometimes automation makes us believe that technology can do more than it actually can; smooth interfaces can be the cause of accidents because the user will trust the machine too much.
Maya: German authorities have told Tesla that "autopilot" is a misleading term.
When dealing with ethics - how?
Maya: Exploring several things. Inspired by ethnography (Renee: "critical ethnography"). Taking a slice and going quite deep. Exploring the moment of emergence. Does not want to do anthropology. Geoff: "auto-ethnography"
Martino:
Binarisation/technologisation of ethics, applied ethics. It emerged in a weird way, now ready to be applied. What came before?
The Pascalian Wager?
https://en.wikipedia.org/wiki/Pascal's_Wager //
http://plato.stanford.edu/entries/pascal-wager/
The footnote explains "It is problematic to assign a value of 100 to this choice". I don't want to accept this. Ethics has to get out of this input-output mode.
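(A worked illustration of that input-output mode, using generic expected-utility notation rather than anything from the paper under discussion: a Pascalian choice simply maximises a sum of probability-weighted payoffs,

\[ a^{*} = \arg\max_{a} \sum_{s} P(s)\, U(a, s), \]

so that assigning an arbitrary utility such as \( U = 100 \) to an outcome is exactly the move that makes the "ethical" choice computable in the first place.)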
The sound of cars is necessary for people to feel they are in power and acting in accordance with accountability.
goes back to visions of cities by car companies, e.g. Citroën asking Le Corbusier
The Google car and the Google City. A history of imagining the urban landscape and the role of cars. Smart city - machine-readable city
Distribution of authority is difficult to locate in an input-output model?
Geoff:
'autoethnography'
Study of primates (Donna Haraway) changes when the understanding of familial placements changes.
'auto' as human-machine relation
or even, a driver-less approach for the research method or research project as a whole?
Lilly Irani (work on Mechanical Turk: http://limn.it/microworking-the-crowd/)
how do you talk about your work in different contexts? Irani: "You need to allow your ethnography to be light on your feet"
Dave: what would be a driver's test for a driverless car?
cars are tested in the making; the technology has passed the test, but is not reliable at all
The car interface is a particularly interesting case – for (perhaps) two reasons:
A) The cultural implication of "driving" – of this particular interface. Cars are emblematic of being "in control" (a particular gendered practice of "accountability"?). Can we imagine a car without a driving interface? (People have been opposed to the electric car for much less – because it doesn't have the necessary 'roar', the audible/perceptible response of control.)
B) The historical relation between cars and imaginations of cities: e.g., Le Corbusier, who was asked by Citroën to design a city for cars (la ville radieuse)... What would Google's city look like today? A city without the tensions that follow from control interfaces/accountability? A city that is readable for machines, smooth, frictionless and very different from the constant events and negotiations that we normally associate with the urban experience?
Daphne: enjoyed ethics as a relationship. Reminded of the project:
Liam Young: Where the City Can't See.
https://vimeo.com/157920130
interest in locating glitches/errors
ethical apps to make you shop responsibly. Ridiculous!
The Florian Cramer text referred to in this and the previous discussion (a design fiction of a future city optimised for driverless cars is included within it):
http://cramer.pleintekst.nl/essays/crapularity_hermeneutics/
---
title: Maya Ganesh – An Ethnography of Error
slug: an-ethnography-of-error
id: 262
link:
https://machineresearch.wordpress.com/2016/10/07/an-ethnography-of-error/
guid:
https://machineresearch.wordpress.com/?p=262
status: publish
terms: Uncategorized
This post has been written in relation to, and as a subset of, a body of work – an 'ethnography of ethics' – that follows the emergence of the driverless car in Europe and North America. An ethnography of ethics is an acknowledgment of the need for a "thick" reading of what ethics means – and does not mean – in the context of big data: how it is constituted in relation to, and by, social, economic, political and technical forces; how it is put to work; and what its place is in a moment when autonomous vehicles and artificially intelligent computing receive significant interest and support. I argue that ethics is not necessarily an end-point or outcome, but a series of individual and system-level negotiations involving socio-technical, technical, human and post-human relationships and exchanges. This includes an entire chain encompassing infrastructure, architectures, actors and their practices, but is more than its constituent parts. Thus, what emerges as ethics is a culture around the maintenance, role and regulation of artificial intelligence in society.
There are 48 synonyms for error according to Roget's English Thesaurus. Error, as a category, is as big as, and keeps defining, its opposite, which is, perhaps, not even an opposite, but another part of it. Error is a twin, the Cain to the Abel of accuracy and optimisation. Rather than cast error out, or write it off, I want to write it in, and not just as a shadow, or in invisible ink, as a footnote, or awkward afterthought.
Lucy Suchman is a feminist theoretician who thinks about what it means to be with and alongside technologies. She asks about "the relation between cultural imaginaries – that is, the kind of collective resources we have to think about the world – and material practices. How are those joined together?" (2013). In that vein I want to think about what it means to be in close relationships with, and working with, machines that, in a sense, rely on human judgment and control for optimisation.
I believe it may be important to think through error differently because of how increasingly complex it is to think about responsibility and accountability in quantified systems that are artificially intelligent[i]. How do you assign accountability for errors in complex, dynamic, multi-agent technical systems?
Take the case of the recent Tesla crash, the first death of a human being in a driverless car context. In May 2016, a US Navy veteran was driving a car and watching a Harry Potter movie at the same time. The man was a test driver for a Tesla semi-autonomous car in Autopilot mode. The car drove into a long trailer truck whose height and white surface were misread by the software as the sky. The fault, it seemed, was the driver's, for trusting the Autopilot mode. The company's condolence statement clarifies the nature of Autopilot (Tesla 2016):
When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.
Herein lies a key idea that runs like a deep vein through the history of machine intelligence: that machines are more accurate and better than humans in a wide variety of mechanical and computational tasks, but that humans must have overall control and responsibility because of their (our) superior abilities, because of something ephemeral, disputed, and specific that we believe makes us different. Yet, we are allowed to, and even expected to, make mistakes.
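Read purely as a protocol, the hands-on-wheel behaviour quoted above amounts to a simple monitoring loop: poll for hands on the wheel, alert after a grace period, then gradually reduce speed until contact is detected again. The following is a hypothetical Python sketch of that logic only; the car interface, method names and thresholds are invented for illustration and are not Tesla's implementation.

import time

HANDS_OFF_GRACE_S = 5.0    # illustrative grace period before alerts fire
SLOWDOWN_STEP_KMH = 5      # illustrative speed reduction per check

def hands_on_monitor(car):
    # Hypothetical sketch of the hands-on-wheel check described in the quoted statement.
    hands_off_since = None
    while car.autopilot_engaged():
        if car.hands_on_wheel_detected():
            hands_off_since = None
            car.clear_alerts()
        else:
            if hands_off_since is None:
                hands_off_since = time.monotonic()
            elif time.monotonic() - hands_off_since > HANDS_OFF_GRACE_S:
                car.show_visual_alert("Always keep your hands on the wheel.")
                car.play_audible_alert()
                # gradually slow the car until hands-on is detected again
                car.set_target_speed(max(0, car.target_speed() - SLOWDOWN_STEP_KMH))
        time.sleep(1.0)    # polling interval, also illustrative

What such a sketch makes visible is how much of the "control and responsibility" assigned to the driver is, mechanically, a timer, an alert and a speed decrement.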
For machines, error comes down to design and engineering, at least according to Google. Early in its history the Google driverless car was a little too perfect for humans: it followed the rules exactly as it was programmed to do. Humans, however, break the rules: they make mistakes and take short cuts (Naughton, 2015):
Google is working to make the vehicles more "aggressive" like humans — law-abiding, safe humans — so they "can naturally fit into the traffic flow, and other people understand what we're doing and why we're doing it," Dolgov said. "Driving is a social game."
"It's a sticky area," Schoettle said. "If you program them to not follow the law, how much do you let them break the law?"
The Tesla crash outcome follows a certain historical continuity. American scholars Madeleine Elish and Tim Hwang (2014) show that in the history of cars and driving in America, human error tends to be cited as the most common reason for accidents; the machine is not flawed, it is human error in managing the machine. In the 1920s-30s, when a number of crashes occurred, 'reckless driving' rather than poor design (of which there was a lot back then) was blamed for accidents (Leonardi 2010). There has been a tendency to "praise the machine and punish the human", say Elish and Hwang. So, machines are assumed to be smart but not responsible, capable but not accountable; they are "almost minds", as Donna Haraway famously said of children, AI computer programs and non-human primates (1985).
One of the other ways in which error and accountability are being framed can be understood through the deployment of the “Trolley Problem” as an ethical standard for driverless car technology. In this, responsibility for accuracy and errors is seen to lie with software programming. The Trolley Problem thus also determines what appropriate driving is in a way that has never quite been outlined for human drivers.
The Trolley Problem is a classic thought experiment developed by the Oxford philosopher Philippa Foot (originally to discuss the permissibility of abortion). The Trolley Problem is presented as a series of hypothetical, inevitably catastrophic situations in which consequentialist (or teleological) and deontological ethics must be reconciled in order to select the lesser of two catastrophes. In the event of catastrophe, should more people be saved, or should the most valuable people be saved? In short: how can one human life be valued over another?
Making this difficult decision is presented as something artificial intelligence will have to achieve before driverless cars can be considered safe for roads; the problem is that software has not yet been programmed to tackle this challenge. If machine learning is to be relied on to solve the problem, it first needs a big enough training database to learn from, and such a database of outcomes from various work-throughs of the Trolley Problem has not yet been built. Initiatives such as MIT's new Moral Machine project are possibly building a training database of human-level scenarios for appropriate action.
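To make concrete what framing ethics as "something that lends itself to software programming" looks like, here is a deliberately naive Python sketch of a consequentialist decision rule for a trolley-style choice. The types, numbers and function names are hypothetical illustrations, not any manufacturer's or MIT's actual code.

from dataclasses import dataclass

@dataclass
class Outcome:
    # one branch of a trolley-style dilemma: who is harmed if this action is taken
    action: str
    people_harmed: int

def choose_action(outcomes):
    # naive consequentialist rule: pick the action that harms the fewest people;
    # everything ethically relevant has to be flattened into one comparable number
    return min(outcomes, key=lambda o: o.people_harmed)

decision = choose_action([
    Outcome(action="stay on course", people_harmed=5),
    Outcome(action="swerve", people_harmed=1),
])
print(decision.action)  # prints "swerve"

Weighting "the most valuable people" would simply add another numeric field to Outcome; either way, everything ethically relevant must be flattened into comparable numbers, which is precisely the reduction of ethics to programmable outcomes discussed above.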
However, the Trolley Problem has since fallen out of favour in discussions of ethics and driverless cars (Davis 2015). Scholars such as Vikram Bhargava, working with the scholar Patrick Lin, have already identified limitations in the Trolley Problem and are seeking more sophisticated approaches to programming decision-making in driverless cars (2016). The Trolley Problem, like other ethical tests based on logical reasoning, has been one of the ways in which ethics has been framed: first, as a mathematical problem, and second, as something that lends itself to software programming.
There has been a call to look at the contexts of production of technology for greater transparency and understanding of how AI will work in the world (Crawford, 2016; Elish and Hwang 2016). Diane Vaughan's landmark investigation and analysis of the 1986 Challenger space shuttle tragedy gives us some indication of what the inside of technology production looks like in the context of a significant error. In this, Vaughan names the normalisation of deviance, rather than mala fide intent, as the culprit for the design flaw (Vaughan, 1997).
The 'normalisation of deviance' refers to a slow and gradual loosening of standards for the evaluation and acceptance of risk in an engineering context. The O-rings on the rocket boosters of Challenger that broke on that unusually cold January morning in Cape Canaveral, Florida, did so despite considerable evidence of their questionable performance in low-temperature conditions. The space shuttle's launch date was also repeatedly delayed for this very reason. Yet, in what is possibly one of the best-resourced space research organisations, NASA, how was this vital information overlooked? The normalisation of deviance is as much an organisational-cultural issue as it is about the technical details. Vaughan's detailed ethnography of the managerial, technical and organisational issues that led up to the Challenger disaster presents a valuable precedent and inspiration for the study of high-end technology production cultures and how errors, crises and mistakes are managed within engineering.
Design or use-case? Intuition or bureaucracy? Individual or organisation? The sites of being and error-ing only multiply.
This ethnography of error comes up against a planetary-scale error that queers the pitch. Australia is located on tectonic plates that are moving about seven centimetres north each year; over roughly the past two decades, the whole country has drifted by about five feet from where its official maps say it is. This may not mean much for human geography, but it means something for the shadow world of machine-readable geography: maps used by driverless cars, or driverless farm tractors, now have inexact data to work from (Manaugh 2016). It's difficult to say how responsibility will be assigned for errors resulting from this shift.
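(The arithmetic behind the five-feet figure, assuming roughly twenty-two years of drift since the country's mapped coordinates were last fixed in the mid-1990s: \( 7\ \text{cm/yr} \times 22\ \text{yr} \approx 154\ \text{cm} \approx 5\ \text{ft} \).)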
References
Bhargava, V (forthcoming) What if Blaise Pascal designed driverless cars? Towards Pascalian Autonomous Vehicles. in Patrick Lin, George Bekey, Keith Abney, and Ryan Jenkins (Eds.), Roboethics 2.0. MIT Press.
Crawford, K (2016) Artificial Intelligence’s White Guy Problem. The New York Times.
http://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0
retrieved July 25, 2016
Crawford, K. and Whittaker, M (2016) The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term. Symposium report.
https://artificialintelligencenow.com/media/documents/AINowSummaryReport_3.pdf
Retrieved October 2, 2016.
Davis, L.C (2015) 'Would you pull the trolley switch? Does it matter?' The Atlantic, October 9, 2015. Retrieved October 1, 2016
http://www.theatlantic.com/technology/archive/2015/10/trolley-problem-history-psychology-morality-driverless-cars/409732/
Elish, M and Hwang, T (2014) Praise the machine! Punish the human! The contradictory history of accountability in automated aviation. Comparative Studies in Intelligent Systems – Working Paper #1, Intelligence and Autonomy Initiative. February 24, 2015. Data & Society.
http://www.datasociety.net/pubs/ia/Elish-Hwang_AccountabilityAutomatedAviation.pdf
Retrieved September 23, 2015.
Elish and Hwang (2016) An AI Pattern Language Published by the Intelligence & Autonomy Initiative of Data & Society.
http://autonomy.datasociety.net/patternlanguage/
Retrieved October 5, 2016
Foot, P (1967) The Problem of Abortion and the Doctrine of the Double Effect. Oxford Review, No. 5. Included in Foot, 1977/2002 Virtues and Vices and Other Essays in Moral Philosophy.
Haraway, D (1990) Primate Visions: Gender, race and nature in the world of modern science. Routledge
Leonardi, P (2010) From Road to Lab to Math: The Co-evolution of Technological, Regulatory, and Organizational Innovations for Automotive Crash Testing. Social Studies of Science 40/2: 243–274.
Manaugh, G (2016) Plate Tectonics Affects How Robots Navigate. Motherboard
http://motherboard.vice.com/en_uk/read/plate-tectonics-gps-navigation
retrieved October 2, 2016
Orlikowski, W.J (2000) Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations. Organization Science, Vol. 11, No. 4 (Jul. – Aug., 2000), pp. 404-428.
Naughton, K (2015) Humans Are Slamming Into Driverless Cars and Exposing a Key Flaw, Bloomberg Technology News, December 17, 2015:
https://www.bloomberg.com/news/articles/2015-12-18/humans-are-slamming-into-driverless-cars-and-exposing-a-key-flaw
retrieved February 5, 2016
Spector, M (2016) 'Obama Administration Rolls Out Recommendations for Driverless Cars', Wall Street Journal, September 19, 2016.
http://www.wsj.com/articles/obama-administration-rolls-out-recommendations-for-driverless-cars-1474329603
Retrieved October 1, 2016
Suchman, L (2013) Traversing technologies: Feminist research at the digital/material boundary. From video and transcript of a talk at the University of Toronto at the colloquia series Feminist and Queer Approaches to Technoscience:
http://sfonline.barnard.edu/traversing-technologies/lucy-suchman-feminist-research-at-the-digitalmaterial-boundary/
Tesla (2016). A Tragic Loss. Blog post on Tesla website.
https://www.teslamotors.com/blog/tragic-loss
retrieved June 2016
Vaughan, D (1997) The Challenger launch decision: Risky technology, culture and deviance at NASA. University of Chicago Press.
[i] I follow the definition of artificial intelligence proposed by Kate Crawford and Meredith Whittaker, that it is a "constellation of technologies comprising big data, machine learning and natural language processing", as described in the recent symposium AI Now: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term. Symposium report available here:
https://artificialintelligencenow.com/media/documents/AINowSummaryReport_3.pdf
Retrieved October 2, 2016.