In 2018, two years after Britain had voted to leave the European Union and after Donald Trump had been elected US president, Yasmin Green, the Research and Development Director for Google’s Jigsaw, gave a TED Talk on Google’s latest contribution to countering extremism. “When someone looking for extremist material searches something like ‘How do I join ISIS?’ they will see an ad that invites them to watch a YouTube video of a ‘cleric’ or ‘defector’; someone who has an authentic answer,” she explained. ‘Abdullah-X’ is an animated character that Google produced in partnership with Moonshot CVE. It first appeared in 2014, exploring “complex and challenging issues” facing young Muslims through videos on a tailored YouTube channel. Voiced by a “former [British Pakistani] extremist,” Abdullah-X offers his thoughts on an array of topics ranging across “the real meaning of Jihad,” “considerations for Muslims on Syria,” the media and propaganda, Islamism, Islamophobia, and free speech (after Charlie Hebdo). He is a central character in the Redirect Method project, which “aims to steer young minds away from extremism.” The project, which has produced a plethora of video content since 2014, uses the power of online advertising to target those identified as susceptible to ISIS’s online messaging, offering instead voices attuned to the discourse of the Countering Violent Extremism (CVE) industry to debunk that messaging.
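Stripped of its scale, the triggering logic Green describes is simple to illustrate. Below is a minimal, hypothetical sketch of that step – matching a search query against a curated keyword list and serving a counter-messaging ad in response – written in Python; the keyword list, ad content, URL and function names are all illustrative assumptions, not Google’s or Moonshot CVE’s actual implementation.

```python
# Minimal, hypothetical sketch of the Redirect Method's triggering step.
# All keywords, URLs and names below are illustrative inventions, not the
# actual implementation used by Google or Moonshot CVE.
from __future__ import annotations

# Curated list of query fragments deemed to signal susceptibility
# (illustrative placeholders only).
RISK_KEYWORDS = {"join isis", "hijrah to syria"}

# The "ad" served in place of ordinary sponsored results: a pointer to a
# counter-messaging video playlist (placeholder URL).
COUNTER_PLAYLIST_AD = {
    "headline": "Hear from people who have been there",
    "target_url": "https://youtube.com/playlist?list=EXAMPLE",
}

def maybe_redirect(query: str) -> dict | None:
    """Return a counter-messaging ad if the query matches a risk keyword."""
    normalized = query.lower()
    if any(keyword in normalized for keyword in RISK_KEYWORDS):
        return COUNTER_PLAYLIST_AD
    return None

if __name__ == "__main__":
    print(maybe_redirect("How do I join ISIS?"))  # serves the counter ad
    print(maybe_redirect("weather in London"))    # None: no intervention
```

Even this toy version makes the politics visible: someone must decide which queries count as “risky” and which voices count as “authentic” counter-messaging.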

Launched in its first wave in 2016, the Redirect Method program is one of Google’s outward-facing contributions to CVE. Drawing on artificial intelligence trained on mass behavioral data, Green was effectively outlining another way in which Google know-how could be used to identify, mediate and manipulate the actions of targeted audiences for desired outcomes. The latest iteration of surveillance tech being showcased was the ability to use artificial intelligence not just to profile populations and predict future behaviors but to alter and modify those behaviors. It is noteworthy, if unsurprising, that Yasmin Green’s talk did not provoke the same kind of uproar caused by the Cambridge Analytica scandal, which exposed the ways in which personal and psychographic data could be harvested to influence voting behavior, and was used by both Trump’s presidential campaign and the UK “Leave” campaign. Where the latter laid bare the power of algorithms and artificial intelligence, and brought home to those reeling from the “Leave” and Trump victories the full capacity of digital technology to upset the democratic process, the use of surveillance technologies for CVE remained in keeping with the normative narrative of security, aligned with the interests of liberal democracies. These contradictory public responses reflect the ways in which popular consensus for intrusive digital surveillance intervention is maintained.

By 2016, the use of digital tools to modify behavior was a relatively widespread strategy across commercial markets and the political sphere. It is worth noting that it was the burgeoning US state interest in developing counter-terrorism initiatives and strategies which provided early incentives and material support, as well as generous testing grounds, for the development of numerous digital technologies being cultivated in Silicon Valley. In 2013, some years before the voting scandals were exposed, SCL, the parent company of Cambridge Analytica, was contracted by Britain’s Ministry of Defence to carry out a pilot study of Target Audience Analysis, a research methodology promoted by SCL to “identify emerging groups, the motivations behind their formation and their likely behaviors in a given context.” The study, which focused on young unmarried males in a country under military occupation, appeared to use the same approach later deployed by Cambridge Analytica for the “Leave” campaign.

In the laboratory-like community surveillance programs that have flourished through the War on Terror, deployed in sites of US-UK military occupation and in the West’s domestic spheres, surveillance industries have burgeoned, fashioning a plethora of surveillance systems, products and technologies in competitive but also collaborative and synthesizing globalized markets. The profitability of surveillance-driven prediction products for commercial and security purposes means this sector now dominates the global economy. And though US-based Big Tech remain the prevailing forces – five of the world’s six largest companies by market value are Apple, Microsoft, Amazon, Alphabet and Facebook – China has arisen as the world leader in surveillance technologies, aided in large part by a narrative, borrowed from the West, of a War on Terror in Xinjiang, which has involved using the region both as a testing ground for mass surveillance systems and as an advert for their effectiveness (see Byler in this Forum). Yet while China’s approach of state-led political capitalism invites valid critique of its authoritarian modes of operation, the coercive modes of liberal digital capitalism operated by its Western competitors are generally less recognized for their calculating and manipulative effects. While the latter’s quiet and privatized approach to mass profiling and psychological and behavioral modification has been key to the success of the liberal model, the ascendance of China’s non-liberal state-steered approach presents a challenge to Silicon Valley. China’s position as surveillance hegemon is a thorn in the side of Silicon Valley, sharpening its own culture war between Palantir-led disaster nationalists and the liberals who advocate a more transparent version of surveillance capitalism. The contradictions inherent in the latter position are worth scrutinizing further for the particular way they entwine surveillance capitalism with liberal ideology.

In the words of Eric Schmidt, chair of the US Department of Defense’s Defense Innovation Advisory Board and former CEO and Chairman of Google and Alphabet, the name Jigsaw is intended to reflect an approach to the world as a “complex puzzle of physical and digital challenges” and to highlight Google’s objective of solving those challenges through Big Tech. Smart cities, ubiquitous computing, and the seemingly endless digital mediation of everyday encounters, actions and transactions are all solutions being proffered. The ways in which these solutions synthesize the digital and the physical illuminate well the securitizing nature of Google’s interventions. In centering its mission on forging “a safer internet for a safer world,” Jigsaw essentially operates as Google’s geopolitical arm. Through Jigsaw, Google has worked with local partners in Venezuela, Iraq, Ukraine and Kenya, not to mention France and New York, to address issues of censorship, disinformation and harassment as well as violent extremism. We might think of Google as the East India Company of the 21st century, except that Google is both the corporation of frontier digital capitalism and the missionary endeavoring to convert the masses to digital literacy, activity and ideology.

At the time Jigsaw was launched, Google’s global domination was looking relatively secure, and the think tank aimed to restore and reassert Google’s “righteous” reputation: that of the liberal do-gooder Big Tech company that puts ethics before profit. In reality, Jigsaw supports Google’s collaborative work with US foreign policy and defense agencies, serving US state communication and security interests overseas whilst also facilitating the digital company’s expansion into new territories. Where the fight for a “safer internet” goes, Google’s accrual of expanses of behavioral data and access to new markets generally follow. Its commitment to online security shows Jigsaw to be an exercise in civilizational tech, carrying the “white man’s burden”: marketing itself as protecting oppressed groups, granting access to freedom of information and expression, and reaching out to those left behind. Having on numerous occasions, apparently without irony, circulated images of himself meeting the natives of frontier regions – in the Chimbu region of Papua New Guinea or in Peshawar, Pakistan – Jared Cohen, Director of Jigsaw and previously a member of the US Secretary of State’s policy planning staff, explained in 2016 to the online magazine Wired, “Every single day, I want us to feel the burden of the responsibility we’re shouldering.”

Undeniably there are resemblances here to Western imperialism’s classical form. Just as the missionary front of liberal democracy was underwritten by colonial violence and the hierarchies, exploitations and subjugations of racial capitalism, so Google’s adoption of liberal-democratic ideals as its corporate identity shadows its historic involvement in and support for CIA-led counter-terrorism initiatives, based upon, as Arun Kundnani has shown, racist logics of cultural essentialism and dehumanization. The digital surveillance capabilities Google has developed through funding from US intelligence agencies have furthered the unfreedoms of infrahuman targets, whilst the same technologies have expanded and enhanced information access, search engine efficiency and individualized experience for service users as a whole. In this sense the outward contradiction between Google’s mission to suppress and exclude certain content in the name of countering extremism, and its simultaneous aim to counter online censorship in the name of freedom of speech, is perfectly consistent with the liberal paradox: the authoritarian elements of Google’s interventions in the counter-terrorism domain simply reflect the materialization of liberalism’s emphasis on the need to respond violently to (perceived) security threats, while at the same time reaffirming capitalism’s symbiotic relationship between militarism and markets.

But of course the age of surveillance capitalism, an age in which Google has pioneered the way, arrived at a time when post-democratic trends were already well underway in the West: when mass surveillance, especially in Britain, was institutionalized, and when policymaking was, as Richard Seymour has noted, increasingly technocratic, popular desires mattering less than population management. As the algorithmic interventions of digital capitalism have propelled these trends, the security state has equally benefited from the deep learning achieved via commercial markets, social networking sites, and everyday online interactions. Regardless of the route, both deregulation and the will for mass surveillance in the post-9/11 context have been critical in enabling the broadening and deepening of algorithmic control that now weighs upon us with its creeping totalitarian countenance.

The development of liberal authoritarian regimes of population management has been achieved via the openings made by neoliberal arrangements, alongside more entrenched structural racisms which allow for exceptional spaces of governance. Regarding the first, the authoritarian currents of liberal digital capitalism have some roots in the greater weight of power granted to private sector corporations under neoliberal capitalism, as well as in the acute push for securitization that defined the start of the twenty-first century. Deregulation, a defining feature of neoliberal capitalism and by the mid-1990s the politically normative approach to corporate governance, offered a gateway for Google, Facebook and other digital tech companies to extract and manage user data without independent scrutiny or oversight. The gain was not one way: US intelligence and defense agencies were keen to draw on the private sector to avoid political scrutiny in Congress. This reliance on the private sector for security has also been the approach in the UK since the 1980s, when the installation of CCTV cameras made Britain the most surveilled country in the world and, today, a highly desirable market for AI technology firms. It has been highly effective in decentering the state and dispelling charges of authoritarianism within the nominally “democratic” liberal polity.

Secondly, the post-9/11 ramp-up of securitization offered the perfect breeding ground for surveillance capitalism. Google, alongside other Silicon Valley start-ups, received its early nurturing, support for product development, and mentoring from personnel in or closely connected to the US military and intelligence arms of the state. The US security state’s aim – led by John Poindexter, former National Security Advisor to Reagan and Director of DARPA’s Information Awareness Office under Bush II – to achieve “Total Information Awareness,” to map social networks, life patterns and habits, and in general to “predict future behavior,” in a context where political consensus was amenable to foregoing privacy, was a critical factor in Google’s early fostering. This elective affinity between state intelligence desires and novice digital tech aspirations, what Shoshana Zuboff terms “surveillance exceptionalism,” paved the way for surveillance capitalism, which in its very configuration aimed to reduce the uncertainties of population behavior, to know and modify the subject psyche accordingly. For digital tech, funding and the chance to be involved in mass surveillance projects offer deep learning opportunities. As noted in a presentation to the US National Security Commission on Artificial Intelligence, surveillance remains “one of the first and best customers” of AI startups, and an entire generation of “AI unicorns is collecting the bulk of their early revenue from government security contracts.” For the state, digital tech provides information, intelligence and know-how with a speed and efficiency unachievable in the public domain.

Mobilizations of resistance, in the form of protests led by Big Tech workers over recent years, are a moving and hopeful counter to this authoritarian and over-reaching trend. Google employees’ condemnation of their employer’s collaboration with the Trump administration, demands that Google end Project Maven, a drone program it was developing for the Pentagon, and protests against Google’s secretive development of Dragonfly, a censoring search engine built for the Chinese market, have all had successful results: Project Maven was officially terminated, and Dragonfly was subsequently abandoned. But an effective reckoning with Big Tech overreach will require scrutiny of the spatial blurring between domestic and imperialist surveillance forms, the intimate connections between surveillance in liberal and authoritarian states, and Big Tech’s mass dependency on surveillance contracts, which encompass policing ‘suspect communities’ at home (Google’s continued support for “countering violent extremism” includes a £1 million donation to a London-based CVE fund, supplemented earlier this year by a further £400,000) as well as in the context of war.

Nisha Kapoor is Associate Professor of Sociology at the University of Warwick. Her research interests are in critical race and postcolonial theory; race, gender and the War on Terror; and her most recent work centres on border regimes, surveillance and the security state. She is author of Deport Deprive Extradite: 21st Century State Extremism (2018, Verso).