In a Covid-19 world increasingly organized through new technologies of algorithmic governance, racialized surveillance regimes, biometric data collection, and contact tracing, Louise Amoore’s Cloud Ethics: Algorithms and the Attributes of Ourselves and Others (2020) couldn’t be more timely. By fabulating an ethicopolitics of algorithmic systems, or what she terms a cloud ethics, Amoore contributes to a growing body of scholarship dedicated to critiques of artificial intelligence, surveillance regimes, and algorithmic governance. While much of this work holds that opening the “black boxes” of algorithmic decision making can forge better accountability, Amoore takes a more nuanced stance, engaging in complex conversations about authorship, madness, traces, and ethics as they relate to human-algorithmic relationships.

Amoore’s book visits an array of sites and studies, including a UK oncology hospital deploying Intuitive Surgical’s da Vinci medical robot, the ICITE securitization platforms of US intelligence agencies, the Geofeedia mapping platform—used by Baltimore County Police targeting protestors in the wake of Freddie Gray’s murder—and the AlexNet convolutional neural network—dedicated to large-scale visual recognition—to name a few. In these explorations, she exhibits both a deep technical understanding of algorithmic processes and an erudite ability to frame them within complex debates on ethicopolitics. Thus, this book should be of interest to those curious about contexts of algorithmic use in contemporary sites of securitization, policing, medical work, and corporate data accumulation, but also to those invested in philosophical debates on ethics and technopolitics.

Throughout her monograph, Amoore builds upon her own broader body of work dedicated to the geopolitics of data, algorithms, and securitization, while also drawing upon emergent scholarship in science and technology studies, media studies, and surveillance studies. Significantly, she also engages deeply with poststructural theory and feminist technoscience, frequently invoking scholars such as Jacques Derrida, Karen Barad, Michel Foucault, Judith Butler, and N. Katherine Hayles. As she demonstrates, there is much to learn from those who have long been accounting for relationships between the self and the other, algorithmic or otherwise. This facilitates Amoore’s intervention into the growing field of algorithmic and AI ethics—a field often (though not exclusively) funded by tech corporations that settle for partial disclosure of algorithmic source code in order to right prejudicial wrongs. Against this, Amoore places her field of inquiry beyond an episteme of transparency, accountability, and legibility. Instead, she carves out a place that “begins with the opacity, partiality, and illegibility of all forms of giving an account, human and algorithmic” (Amoore, 2020: 8). This gets at the heart of her argument: algorithms emerge from human-algorithmic illegibilities. For Amoore, an ethics of algorithms begins not simply by mapping their associated prohibitions and permissions, or by opening up black boxes, but rather by understanding algorithms as ethicopolitical entities themselves.

The book’s six chapters are divided into three parts. The first part, “Condensation,” begins with “The Cloud Chambers: Condensed Data and Correlative Reason.” Here Amoore dives into the materiality of clouds, looking to the early 20th-century particle physics experiments of Charles Thomson Rees Wilson, whose laboratory apparatus for reproducing cloud formations, known as the cloud chamber, rendered optical traces of subatomic particles observable. Cloud chambers, as Amoore historicizes, could induce a perceptibility of the subatomic beyond that of ordinary human observation. The cloud chamber serves throughout as her allegory for algorithmic spatiality, visuality, and invisibility. In this way, Amoore’s cloud ethics refers not only to the cloud computing era, but also to how actual clouds and algorithms alike embody spatial arrangements filled with potentiality.

Amoore breaks her cloud ethics into the categories of Cloud I and Cloud II. The first highlights the spatiality of algorithmic decision-making and infrastructures, from offshore data centers and tax-free jurisdictions used by Google to Amazon’s Elastic Compute Cloud architecture. Cloud I homes in on the linear logics of algorithmic observation, representation, and classification. Conversely, Cloud II, or what she calls a cloud analytic, looks at computational regimes that transform what is rendered perceptible. A cloud analytic allows scholars, in the words of Donna Haraway (2016), to “stay with the trouble” and refuse algorithmic coherence. As Amoore asks, what if we cannot actually see things in the cloud because algorithms alter the aperture of observation? Corporations and governments, she suggests, exploit this fuzziness to extract data patterns and features.

Amoore’s second chapter, “The Learning Machines: Neural Networks and Regimes of Recognition,” looks at how machine learning algorithms, from deep neural networks to cloud-based medical data systems, are connected to human practices. While we are in the midst of a moral panic about machine learning, the problem, she shows, is not machines surpassing what should be their limits, but rather the limits of autonomous human subjects. Machine learning algorithms and their use in border patrolling, surgery, facial recognition, fraud detection, and more reveal that the individual human authors of algorithms are not the only beings responsible for discriminatory results. This is because input biases continually generate new biases and futures authored by machines—a point Amoore develops across her subsequent chapters.

The book’s second part, “Attribution,” begins with “The Uncertain Author: Writing and Attribution.” This chapter focuses on how algorithms’ ways of being exceed what is present in a given moment, surpassing what is written into source code. Here Amoore begins with the New York City Algorithmic Accountability Bill, which called for the city to make public the algorithms and source code behind its automated decision-making systems. As she argues, “it would be insufficient to respond to the violences and injustices of an algorithm by ‘scrutinizing the source code’ because the act of writing these algorithms substantially exceeds that source code, extending into and through the scattered elements of us and multiple data points that gather around us” (Amoore, 2020: 88). In other words, algorithmic authorship is always heterogeneous, constantly being modified, edited, and recoded as algorithms engage with the world. Looking to natural language processing trained on the literature of deceased writers, Amoore points to how authorial language continues to proliferate in unexpected modes postmortem. By relying solely upon source code transparency for accountability, one reifies fictions of singular authorship. Thus, instead of attributing a NYC policing algorithm to a singular racist author, one should instead locate “the multiple and dispersed acts of writing that are its condition of being” (Amoore, 2020: 94). The question then becomes how to map these multiple acts and authors, and how to concretely mitigate their harms, posthumous or otherwise. Further, how does acknowledging that racist algorithmic source code is indeed haunted by a longue durée of racist violence alter our own points of intervention? To answer this, I suggest bringing Amoore’s framework into conversation with critical race and technology studies scholarship that explicitly centers how palimpsests of racial capitalism, coloniality, and more materialize in contemporary power relations (Atanasoski and Vora, 2019; Browne, 2015). Such work shows how race itself has long functioned as a technology of white social and political control, and how these histories inform racist conditions of possibility in the technological present. This work supplements Amoore’s, as it accords with her interrogation of complex and dispersed historical transits while offering concrete racial justice politics through which to ground action.

Amoore’s fourth chapter, “The Madness of Algorithms: Aberration and Unreasonable Acts,” looks at the phenomenon of algorithms appearing to deviate from rationality. There are ample instances of this: consider Microsoft’s Twitter bot Tay, which, less than a day after its release, began spewing white supremacist tropes on social media in patterns unplanned by its makers. Here Amoore suggests that a focus on what seem like algorithmic aberrations often overlooks how algorithmic rationality is in fact built upon madness, and how algorithmic logic itself modulates the threshold between reason and unreason. As she argues, moments of algorithmic madness are actually contexts in which algorithms are giving accounts of themselves. Opening up technological black boxes to secure algorithmic good behavior is not enough. Instead, we have to “think of algorithms as capable of generating unspeakable things precisely because they are geared to profit from uncertainty, or to output something that had not been spoken or anticipated” (Amoore, 2020: 111).

If opening up the black box of algorithmic decision-making is not enough for a cloud ethics, the question becomes: what are those invested in algorithmic ethics to do? Luckily the book’s final chapters map tactical routes for a cloud computing ethics. Chapter five, “The Doubtful Algorithm: Ground Truth and Partial Accounts,” suggests that doubtfulness can be a useful site for composing ethical intervention. This is in part because algorithms are overwhelmingly used to optimize decision-making in contexts of doubt and uncertainty. For instance, algorithms are frequently used to target doubtful voters and persuadable consumers. Yet, as Amoore shows, in such instances of doubt, algorithmic trees in fact flourish, forging multiple branching points, parameters, and options. This illuminates a world of algorithmic doubtfulness, in which algorithms express and proliferate doubt. As she writes, “an orientation to doubtfulness decenters the authoring subject, even and especially when this author is a machine, so that the grounds—and ground truths—are called into question” (Amoore, 2020: 151). Doubt, in this way, becomes a starting point for the political intervention of centering other ground truths.

Finally, Amoore’s last chapter, “The Unattributable: Strategies for a Cloud Ethics,” conceptualizes her call for multiple sites of attribution, responsibility, and intervention in crafting an ethical algorithmic stance. As she writes, “the methodology of a critical cloud ethics must also fabulate, must also necessarily invent a people whose concrete particularities are precisely unattributable, never fully enclosed within the attribute and its output” (Amoore, 2020: 158). Methodologically, a cloud ethics rejects anchoring narrative in one authorial source, instead allowing readers and users of source code to become part of an algorithm’s future as well. In this way, Amoore’s cloud ethics is a world apart from that of corporate-funded AI fairness and accountability conferences and journal articles eager to spell out simple steps for enacting algorithmic ethics. The journey, Amoore makes clear, is far more complex.

Yet Amoore’s cloud ethics also often reads as a world apart from organizing calls for more technologically just worlds. While Amoore suggests that an ethical algorithmic orientation in and of itself generates new ways of acting in the world, there are plenty of people organizing daily against algorithmic violence based more upon lived experience and movement-based work than upon a cloud ethics framework. This is not to deny the importance of a philosophy of cloud ethics, nor to simplify complex and often amorphous algorithmic entanglements. Rather, I wonder what form of cloud ethics, or perhaps cloud justice, might emerge from grounding such a philosophy in organizing frameworks. There is, after all, urgent work being done daily to curb algorithmic abuse by tech corporations and their users, especially that which exacerbates contexts of racial capitalism, sexism, transphobia, US imperialism, and more. I myself have learned a great deal from the tenants at Atlantic Plaza Towers in Brooklyn who successfully organized to thwart their landlord’s implementation of a heatmapping facial recognition system in their building, thereby staving off algorithmic materializations of racial surveillance, eviction, and gentrification. At the same time, there is an emerging world committed to building collective platforms, infrastructures, and computing systems rooted in antiracist, abolitionist, and decolonial technological future-making (Amrute and Murillo, 2020; Benjamin, 2019). The organizers of such projects often make strategic calculations about algorithmic utilization based upon lived geopolitical realities and justice-based horizons. Yet perhaps, following Amoore’s emphasis on heterogeneity, there are ways of conceptualizing an ethicopolitics attentive to algorithmic cloudiness while also supporting concrete efforts to mitigate algorithmic harms and build other algorithmic futures. This ethicopolitics might attend to the accumulation of various histories, authors, and traces as they impact the ever-changing present, but it might also look to the analytics, tactics, and strategies of those who have successfully thwarted the implementation of violent algorithmic systems, historically and in the present.

References

Amoore L (2020) Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.
Amrute, S., & Murillo, L. F. R. (2020) “Introduction: Computing in/from the South.” Catalyst: Feminism, Theory, Technoscience, 6(2). Doi: 10.28968/cftt.v6i2.34594.
Atanasoski, N & Vora K (2019) Surrogate Humanity: Race, Robots, And the Politics of Technological Futures. Durham: Duke University Press.
Benjamin, R (2019) Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge: Polity.
Browne, S (2015) Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Haraway D (2016) Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke University Press.

Erin McElroy is a postdoctoral researcher at New York University’s AI Now Institute, having earned a doctoral degree in Feminist Studies from the University of California, Santa Cruz, with a focus on the politics of space, race, displacement, and technology in postsocialist Romania and Silicon Valley. Erin is cofounder of the Anti-Eviction Mapping Project and the Radical Housing Journal, both projects dedicated to housing justice across gentrifying terrains.