Presentation

30 seconds of silence: background noise of machines + voice inside head, similar to the WaveNet babbling..?

surprise of combining "machine" + "learning": is learning a metaphor, an analogy...? what is in the learning?
taking the term machine learning at "face value", even though it can of course be questioned whether it "really" concerns learning. Learning in a mathematical understanding: what kind of pedagogy should we use to talk about this kind of learning? What pedagogies are already there in algorithmic models?
specific idea of 'training' --> un-supervised learning

Lilly Irani, Amazon Mechanical Turk -> badly paid, precarious contracts, monitored & made to respond to what is expected from them. "integrating workers' output directly in their algorithms": no way to capture 'pure' knowledge from them; human input becomes dehumanized so that it fits the API. APIs for humans.

at the side of the machine: pedagogy. look into how neural networks were conceived: experiments with animals in labs / vivisection, electroshocks.
What the Frog's Eye Tells the Frog's Brain < Lettvin, Maturana, McCulloch, Pitts (1959) http://192.168.42.103/var/www/lib/whatthefrogseyetellsthefrogsbrain.pdf
Neural networks were literally developed next to labs for vivisection. Reflex capture of sensation is what NNs were created for.
Pre-processing of the visual: manual for Learning OpenCV, Bradski & Kaehler (2008), O'Reilly. The mouse in the laboratory --> learning machine metaphor, reinforcement learning.

Paulo Freire (pedagogue) meeting Papert! // AI meets (the pedagogy of) the oppressed! http://192.168.42.103/var/www/lib/Paulo_Freire_Pedagogy_of_the_Oppressed.epub
3 strong concepts from Pedagogy of the Oppressed (written in Chile, in exile):
* banking pedagogy: learners are not empty slates
* not taking for granted that the person who comes to the process is human; pedagogy as a humanizing process
* pedagogy & liberation: never alone -> mutual liberation, internalization

Nicolas' questions:
* can machine learning become a dialogical process from the start?
* what kind of machine reflexivity can trigger human reflexivity & vice versa?
* what doesn't the trainer know s/he knows? what doesn't the algorithm know it knows?

Questions & comments

from the question "what doesn't the algorithm know it knows?", I wonder: is there a pedagogical legacy in machine learning algorithms? if so, it might be urgent to feed it back and forward. -> liberation may already be prototyped, latent in the learner-machine's legacy.
re. alienation - makes me think of Artaud's solution to alienation - cruelty. Is it possible to be cruel to an algorithm? How might cruelty play into the liberation of human-machine learning?
A: absolutization of ignorance
is the legacy of the algo just math?
Is a frog not able to differentiate a tasty fly from a poisonous fly by remembering past actions?
are you looking into the translation of pixels/words into bytes?
SorenP: good critique of machine learning, and of smartness (and intelligence) as a way of avoiding conflicts; maybe interesting to make conflicts instead. Most materials are about optimising the learning (of algorithms). Where is the agency of the workers behind the algorithms? The kind of pedagogy you put in place produces exactly that kind of knowledge. Is it about improving worker conditions? Freire is interesting for binding learners to teachers. Need to work from both sides. We are here (using the etherbox, with interest in and access to the tool): a different attitude/pedagogical context when you type together vs a lonely interface.
Kristoffer: Colonisation of workers by machines http://mediaartscultures.eu/xmlui/handle/10002/439 (self-promoting an old text that is very related to your talk here)

Nicolas: tries to formulate reasons why he is frustrated by the answers. Modes of conversation, the ways relationships between algorithms and humans are imagined, are usually frustrating/limited. https://turkopticon.ucsd.edu/

Zach Blas: Fag Face (http://www.zachblas.info/works/facial-weaponization-suite/); Face Cages: masks that escape face recognition algorithms, produced from the training data (http://www.zachblas.info/works/face-cages/). A feedback loop (not a negation).

I wonder where you situate the edge of machine reading, and where the platforms are that would be able to facilitate a dialogue between the machine and the public that 'feeds' the machine. Does it extend to the stages of a system before it is in operation (the planning of a system like the Mechanical Turk), and how do you imagine that these conversations could take place? Permanent beta, maybe?

Jara: play with the notion of legacy. Is there a pedagogical legacy in machine learning algorithms? What latent liberation does the algorithm have?
Nicolas: Freire: absolutisation of ignorance; the process = making you aware of what you already know.

Very nice how you take on the problem of the analogical frame: the way in which the "machine learning" or "neural network" concepts construct a double bind that seems very difficult to break out from, and how you decide to work within it, reversing the analogy and introducing instead the pedagogy of the oppressed. But are these analogies reversible? Is it possible for meaning to flow against the currents that establish them, however clear the urge?

Nicolas: What would an algorithm say if it could speak?
Geoff: For sure not that it is alone!

What about the 'art of approximation' in the pedagogical process, talking back to the legacy of statistics?

It would be interesting to keep up the comparison exercise with other critical pedagogies, like those of Illich (Tools for Conviviality) or Rancière (The Ignorant Schoolmaster).

Interesting discussion about the legacy - maybe there are other legacies in computing - e.g. if one reads Jacob Gaboury's queer history of computing, http://rhizome.org/editorial/2013/feb/19/queer-computing-1/ ! Computing was also developed through a resistance to determination and to the traditions of measurement in math. So there is a theoretical-philosophical tradition in computing that is not about deterministic facts but about developing math as a language and the computer as a post/meta medium. Maybe thinking the computer along this legacy is a way to continue this thinking.

Interesting how the theme of 'manual labour' arises again as a raw material in smart/automated systems - thinking in terms of the pedagogical relationship between the 'human' trainer and the machine, they are working towards a point where the machine can become autonomous, such that the human is no longer a necessary component of the system. In terms of Freire's ideas of mutual support, what might this process of machine-autonomisation mean? Is it "liberation", and for whom? Is the human facilitating their own redundancy as a worker?

Christian: I am reminded of Alan Blackwell's critique of HCI in the age of "smart interaction", where he argues for a "Humane" Computer Interaction (as opposed to HCI/Human Computer Interaction).
He addresses the history of AI in HCI and compares "expert systems" with machine learning systems --- saying that if the Turing test is ever completed with ML, it will not be because the machine has become human (as was the aspiration in an expert system) ... but because the human has become more like the machine. Your image of the Amazon Mechanical Turk worker exemplifies this in tragic ways (isolated, alienated, anonymous).

Brian: ethical difference between supervised/unsupervised learning? -> I didn't get the 2nd question
Brian: is it dangerous, in this more-than-human approach, to personify the algorithm and not the researcher?
Brian: pedagogy reduced to protocol (in the Galloway sense) -- Amazon the real site of power

Would there be ways to do 'machine relearning' or 'machine unlearning'?

Nicolas, I think this is the link, 6 years old, maybe 7, where Zittrain discusses Turking and the fact that the training is used for things like 'facial recognition software for the military' instead of just programming circuits, or matching patterns for 'neutral' reasons. https://www.youtube.com/watch?v=Dw3h-rae3uo

---
title: Nicolas Malevé – Machine pedagogies
slug: nicolas-maleve
id: 86
link: https://machineresearch.wordpress.com/2016/09/26/nicolas-maleve/
guid: https://machineresearch.wordpress.com/2016/09/26/nicolas-maleve/
status: publish
terms: Uncategorized
---

As a starting point, I would like to describe a few steps of the concrete process of training in a typical machine learning task [1]: the creation of annotations to be used by a computer program that will learn to classify images. A worker connects to Amazon Mechanical Turk (AMT) [2] and selects a task. In our example, she selects an image annotation task [3]. She faces a screen where a label and its definition are displayed. When she confirms she has read the definition, she is shown another screen where the label is followed by different definitions. The workflow is regularly interrupted by such control screens, as her requester suspects her of working without paying enough attention. When she clicks on the right definition, a list of 300 square images is displayed, from which she has to select the ones corresponding to the label. When she decides she has selected all the appropriate images, she clicks "next" and continues to her next task. The list of images she has to choose from contains "planted" images: images that the requester already knows to correspond to the label. If the worker misses the planted images, her task will be refused and she won't receive the 4 cents the requester pays for it. At least three workers will review the same 300 images for the same label, and the images selected by a majority of them will be included in the dataset. The worker will not be notified whether or not her selection matches another worker's selection. She works in isolation and anonymously.

The images and their labels are then grouped in classes of objects. A learning algorithm is fed with these data and trained to associate a label with a series of images. It will be shown a series of images containing both matching and non-matching objects. It will be "rewarded" or "penalized" whenever it appropriately detects in the images the object corresponding to the label. Every interpretation that doesn't correspond to the truth stated in the training set will be considered an error. It will be retrained multiple times until it finally matches the images most successfully according to the ground truth [4]. It is a very mechanistic approach to training.
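To make the workflow described above concrete, here is a minimal sketch of its aggregation logic: planted "gold" images police the workers, and a majority vote over the accepted reviewers decides which images enter the dataset. All names and values (GOLD, accept_task, the specific image ids) are hypothetical; this is an illustration of the protocol as described, not ImageNet's or AMT's actual code.

```python
from collections import Counter

# Hypothetical values: two "planted" images the requester already knows
# to match the label; each batch is reviewed by three workers.
GOLD = {"img_017", "img_203"}

def accept_task(selected):
    """A task is refused (and unpaid) if the worker missed any planted image."""
    return GOLD.issubset(selected)

def aggregate(selections, all_images):
    """Keep only the images selected by a majority of the accepted workers."""
    votes = Counter(img for sel in selections for img in sel)
    quorum = len(selections) / 2
    return {img for img in all_images if votes[img] > quorum}

# Three isolated workers annotate the same 300-image batch for one label.
workers = [
    {"img_017", "img_203", "img_005"},             # accepted
    {"img_017", "img_203", "img_005", "img_042"},  # accepted
    {"img_005", "img_203"},                        # missed img_017: refused, unpaid
]
accepted = [sel for sel in workers if accept_task(sel)]
dataset = aggregate(accepted, ["img_%03d" % i for i in range(300)])
# dataset == {"img_005", "img_017", "img_203"}: img_042, chosen by only one
# worker, is excluded; the third worker is never told why she wasn't paid.
```

Even in this toy form, the sketch shows how little room the protocol leaves for dialogue: a worker's disagreement with the ground truth is indistinguishable from an error, and simply goes unpaid.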
The machine is rewarded when behaving properly [5] and reinforces the kinds of associations that lead it to produce the satisfying answer. It is expected to exhibit the proper behavior, not to create a rich internal representation of the problem it needs to solve. The more the algorithm behaves as expected, the more it is granted a human quality. It becomes intelligent, a "thinking machine". The surge of neural network based algorithms over the last decade reinforces this tendency. The neural net model is inspired by the communication between neurons through the synapses observed in the brain. The algorithm doesn't only show an "intelligent" behavior, it also works in the image of the human brain. The greater its success, the greater the demand for more data and therefore for more human annotations. While the algorithm acquires the status of an intelligent entity, the AMT worker is increasingly assimilated to the machine. Frantically responding to the platform's requests, she routinely executes tasks that are too costly to implement algorithmically. Cheaper than an algorithm, she becomes a process available through an API.

What strikes me in this process is the relationship between learning and alienation. The agencies of the human worker and the algorithmic agents are both reduced and impoverished. The human worker is isolated (from her co-workers and from the algorithm whose "intelligence" she is preparing), her margin of interpretation is narrowly defined, and the indecent wage forces her into a tiring rhythm of work. The algorithm is trained like an animal in a lab, receiving signals to be interpreted unequivocally and rewarded or punished according to an established ground truth it cannot challenge.

If the training/teaching of machines implies a reflection on liberating practices of pedagogy, where should we look for inspiration? This question led me to examine a series of principles expressed in Pedagogy of the Oppressed, the seminal book by Paulo Freire. Freire, trained as a lawyer, chose to work as a secondary school teacher, and later became the minister of education of Pernambuco before he had to escape Brazil after the military coup. The book was written in Chile, in 1968, a few years before the election of Salvador Allende. For Freire, it only makes sense to speak of pedagogy if it includes the perspective of the liberation of the oppressed (Freire, 1970). As a Marxist, Freire sees his pedagogical method as a way for the oppressed to learn how to change the conditions under which they can transform a world made by and for their oppressor.

A first very important concept developed by Freire is what he calls "banking" pedagogy. The oppressor imposes a world in which only the members of a certain class have access to knowledge or are born to acquire it [6]. The others merely have the right to passively assimilate a never-ending recital: Lima is the capital of Peru, two and two make four, etc. The learners are considered empty entities in which their masters make "deposits" of fragments of knowledge. The empty oppressed is filled with the oppressor's content. But the master has no interest in the oppressed productively using this knowledge to improve his/her condition. What the learner learns in such a scheme is to repeat and reproduce. The knowledge "deposited" by the oppressor remains the oppressor's property. The pedagogy proposed by Freire is in total opposition to this idea.
For him, the oppressed never comes "empty" of knowledge, and the first stage of the educational process is to make the learner realize s/he has already produced knowledge, even if (and even more so when) this knowledge doesn't count as such in the traditional pedagogical framework.

This leads to a second point. The humanity of the subject with whom s/he engages in a pedagogical relationship is not taken for granted. The subject comes alienated and dehumanized. The category "human" is a problematic one, and it is only through the process of learning that humanization takes place. And what counts in the process of humanization is precisely getting rid of the oppressor the oppressed hosts inside him/her. The oppressed is made of the oppressor and has internalized his world view. Freire insists regularly on the fact that a teaching that fails to help the learner free him/herself from the oppressor's world view, and merely lets him acquire more power through knowledge, will ultimately fail to create a revolutionary subject. It would risk creating better servants of the current oppressor or, worse, new and more efficient oppressors.

The book's third striking point is the affirmation that nobody is a liberator in isolation and that nobody liberates him/herself alone. Liberation through pedagogy always happens when the learner and the "teacher" are mutually liberating each other. There is no a priori idea of what the liberating pedagogy should be. Both entities learn the practices that will lead to freedom from the relationship itself.

I would now like to use these three principles ("banking" pedagogy, the internalized oppressor and mutual liberation) to revisit the methods of learning used in machine learning, and to articulate prospective questions with them.

For Freire, the relationship between the learner and the teacher is considered a situation of mutual liberation. If we apply this to machine learning, we first need to acknowledge that both the people who teach machines and the machines themselves are entrapped in a relationship of oppression where both are losing agency. To free algorithms and trainers together, both need to engage in a relationship where an iterative dialog is possible and where knowledge can circulate. This should lead us to examine with great scrutiny how this relationship is being enframed and scripted. Usually, for instance, the data collection and the "ingestion" of the data by the algorithm are two distinct processes separated in time and space, making it impossible for a dialogical relationship to happen. How then to reconnect both processes and make machine learning a dialogical process from the start?

For Freire, one should not take for granted that a learner is "human" when s/he enters a pedagogical relationship. S/he will follow a process of humanization as the relationship unfolds. This resonates, although in a distorted manner, with a certain discourse in Artificial Intelligence that softly erodes the human/machine divide as the algorithm learns. What is different, though, is that Freire insists on maintaining the human/non-human demarcation. What he proposes is to base the distinction not on an a priori ontological quality of the beings but on their trajectory of liberation. What would matter then, for us, is how much humans and machines are able to fight their alienation.
The core of the learning practice should be found in a form of reflexivity through which one follows a process of humanization, managing to extract and get rid of the oppressor inside. We could then ask: "what kind of machine reflexivity can trigger human reflexivity, and vice versa?" And also: how can this cross-reflexivity help identify what constitutes the oppressor inside?

This leads us to a third of Freire's ideas: the banking principle, according to which the oppressed is considered an empty entity where knowledge should be stored and repeated. This represents a complete erasure of what the learner already knows without knowing it. What doesn't the trainer know s/he knows? What doesn't the algorithm know it knows? What they both ignore, if we follow Freire, is their own knowledge, and to what extent this knowledge unknown to them is the knowledge of their oppressor or their own. To answer these questions they have only one choice: to engage in a dialog where two reflexivities teach each other the contours of their alienation and, at the same time, how to free themselves from it.

References

Bradski G, Kaehler A (2008) Learning OpenCV, Sebastopol: O'Reilly Media, p. 461.

Freire P (1970) Pedagogia del oprimido, Mexico: Siglo XXI Editores.

Irani L (2015) Difference and dependence among digital workers: The case of Amazon Mechanical Turk, South Atlantic Quarterly, 114 (1), pp. 225-234.

Kobielus J (2014) Distilling knowledge effortlessly from big data calls for collaborative human and algorithm engagement, available from http://www.ibmbigdatahub.com/blog/ground-truth-agile-machine-learning [accessed 10 October 2016].

[1] The examples in this text focus on supervised learning. See https://en.wikipedia.org/wiki/Supervised_learning Ideally, the ideas discussed here should be nuanced and extended when applied to other forms of machine learning.

[2] Amazon Mechanical Turk is a "meeting place for requesters with large volumes of microtasks and workers who want to do those tasks" (Irani & Silberman, 2013). A requester, in AMT terminology, is a business that publishes a task for workers, human providers in AMT terminology, to complete. See The requester best practice guide, http://mturkpublic.s3.amazonaws.com/docs/MTURK_BP.pdf

[3] This example is inspired by one of the largest image annotation processes, ImageNet, a database of images for visual research that offers tens of millions of sorted, human-annotated images organized in a taxonomy. ImageNet aims to serve the training-data needs of computer vision researchers and developers. See http://image-net.org/

[4] "a baseline set of training data labeled by one or more human experts" (Kobielus, 2014).

[5] "When a mouse is running down a maze to find food, the mouse may experience a series of turns before it finally finds the food, its reward. That reward must somehow cast its influence back on all the sights and actions that the mouse took before finding the food. Reinforcement learning works the same way: the system receives a delayed signal (a reward or a punishment) and tries to infer a policy for future runs (a way of making decisions; e.g., which way to go at each step through the maze)." (Bradski and Kaehler, 2008)

[6] See Freire's insistence on addressing this question as a political problem rather than an ontological one in his discussion with Seymour Papert: http://www.papert.org/articles/freire/freirePart2.html
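As an aside to note [5]: the reinforcement scheme Bradski and Kaehler describe can be made concrete with a minimal tabular Q-learning sketch, in which a delayed reward at the end of a maze "casts its influence back" on earlier steps over repeated runs. The corridor maze, the reward of 1.0 and the parameter values are arbitrary choices for illustration, not anything from the cited manual.

```python
import random

N_STATES = 5            # a corridor maze: states 0..4, food (reward) at state 4
ACTIONS = [-1, +1]      # step left or step right
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

# One value per (state, action) pair; all start at zero.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for run in range(200):                  # repeated runs down the maze
    s = 0
    while s != N_STATES - 1:
        # mostly follow the best-known action, sometimes explore
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the food
        # the update propagates the delayed reward back along the path taken
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in ACTIONS)
                              - Q[(s, a)])
        s = s2

# The inferred policy: after training, every state prefers +1, i.e.
# "run toward the food" - the delayed reward has shaped all earlier choices.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```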