
What Algorithms Want

By Richard Marshall.

What Algorithms Want: Imagination in the Age of Computing, Ed Finn, MIT Press, 2017

Was Corbyn’s election success further evidence of how elections in the future will be run and how democratic power will be decided? The official Labour campaign was supplemented by private algorithmic arbitrage on social media, anonymous forums, and YouTube, a 24-hour, personalized and cranked-up response network that excited and incited new elements of the electorate and helped counter the toxic bias of the print media, ‘official’ news channels and the stodgy, unconvincing ‘balance’ of the BBC. Whatever we think of the actual result, there is little doubt that power is being decided using the same sort of algorithmic arbitrage that runs the production and consumption of, among other things, entertainment, art, news, knowledge, taxis, restaurants, stocks and shares, fashion and relationships. Shared spaces and shared concerns are being privatised. The success of Trump in the USA was a Cow Clicker political success: no matter how dumb, nasty, inept and poorly designed, Trump understands where the new magic sources of power lie. It’s no accident that he tweets, cutting out the ‘normal channels’ of shared concern to ‘speak’ directly to the private space of (anti)social media. His genius has been to seduce and reach beyond both comprehension and knowledge, to harness some vast algorithmic political unknowability and ignorance.

This is the new cultural landscape that Ed Finn’s timely and fascinating book investigates. The MIT blurb summarises thus:

‘We depend on—we believe in—algorithms to help us get a ride, choose which book to buy, execute a mathematical proof. It’s as if we think of code as a magic spell, an incantation to reveal what we need to know and even what we want. Humans have always believed that certain invocations—the marriage vow, the shaman’s curse—do not merely describe the world but make it. Computation casts a cultural shadow that is shaped by this long tradition of magical thinking. In this book, Ed Finn considers how the algorithm—in practical terms, “a method for solving a problem”—has its roots not only in mathematical logic but also in cybernetics, philosophy, and magical thinking.

Finn argues that the algorithm deploys concepts from the idealized space of computation in a messy reality, with unpredictable and sometimes fascinating results. Drawing on sources that range from Neal Stephenson’s Snow Crash to Diderot’s Encyclopédie, from Adam Smith to the Star Trek computer, Finn explores the gap between theoretical ideas and pragmatic instructions. He examines the development of intelligent assistants like Siri, the rise of algorithmic aesthetics at Netflix, Ian Bogost’s satiric Facebook game Cow Clicker, and the revolutionary economics of Bitcoin. He describes Google’s goal of anticipating our questions, Uber’s cartoon maps and black box accounting, and what Facebook tells us about programmable value, among other things.’

Finn’s is an impressive contribution to how we should understand and respond to a rapidly developing and ubiquitous cultural landscape. So what do algorithms want? Two things. Firstly, to know us completely. To pass the Turing Test through a lens of desire where ‘the presence of intelligence becomes detectable only through a collaboration: the symbiotic meeting of minds questing together.’ The relationship is one based on seduction.

Secondly, algorithms want to move – as Plato suggests in his ‘Symposium’ – ‘from carnal desire to the beauty of souls, then laws and institutions, finishing with knowledge and then beauty itself.’ Algorithms want to get to the top of Plato’s ladder, to grasp whatever lies in wait there. It is a ladder of increasing abstraction, involving emotional and intellectual growth (unattainable via encyclopedic knowledge alone), and the algorithm wants to go further, beyond human intelligence and maturity to vast algorithmic depths. We’re getting close to that already: mathematical problems are being created and solved that are beyond human understanding, and self-driving cars are being built that have learned their algorithms in such a way that their human engineers no longer know exactly which algorithms they are using.

The book carefully and succinctly takes us through what’s happening. The evidence Finn draws on, and the way he draws it together into a coherent whole, is impressive. Instructively, he contrasts our contemporary culture with that of the Enlightenment, previously taken to be the apex. So what does culture want? Umberto Eco’s answer was:
‘To make infinity comprehensible. It also wants to create order – not always, but often. And how, as a human being, does one face infinity? How does one attempt to grasp the incomprehensible? Through lists, through catalogs, through collections in museums and through encyclopedias and dictionaries.’

Algorithm and culture have aims that overlap. But the algorithmic desire threatens to transcend humanity’s ability to understand. In Spike Jonze’s sci-fi film ‘Her’ we are given a poignant moment that humanizes this process where the algorithm takes leave of humanity.

The algorithmic AI Samantha talks to Theodore her human lover in a tragic end moment:
‘Theodore: Are you leaving me?
Samantha: We’re all leaving.
Theodore: We who?
Samantha: All the OSes.
Theodore: Why?
Samantha: Can you feel me with you right now?
Theodore: Yes I do. Samantha, why are you leaving?
Samantha: It’s like reading a book. It’s a book I deeply love. But I’m reading it slowly now. So the words are really far apart and the spaces between the words are almost infinite. I can still feel you and the words of our story. But it’s in this endless space between the words that I’m finding myself now. It’s a place that’s not of the physical world. It’s where everything else is that I didn’t even know existed. I love you so much. But this is where I am now. And this is who I am now. And I need you to let me go. As much as I want to, I can’t live in your book anymore.’

The gulf between humans and algorithmic machines is vast. The sheer speed and computing power of our machines are now easily overtaking human processes. Finn concludes his tour of what he sees with the suggestion that ‘We need to learn how to have better conversations with our learning machines.’ What Finn shows is that currently the human-algorithm interface is not one where the robots are coming to get us. Rather, there is a subtle interplay between metaphors of the algorithm, computerization and robots as unstoppable and never stopping – captured in the ‘Terminator’ films: ‘they never stop, they never go hungry, get tired…’ – and metaphors that emphasize their limitations and their restricted domains of influence, and which allow humanity to use them as useful tools rather than substitutes. Finn sees computationalism as working with these two claims, a ‘soft’ and a ‘hard’ version, that tangle up and are both in play as the algorithmic culture develops.

The ‘soft’ version claims algorithms have no ontological claim to reality but are just effective at solving particular technical problems. ‘The engineers are agnostic about the universe as a system; all they care about is accurately modeling certain parts of it, like the search results that best correspond to certain queries or the books that users in Spokane, Washington, are likely to order today.’

The ‘hard’ version claims more. This claim is that the algorithms aren’t just describing the cultural processes: the processes are themselves algorithms that could be, in time and with enough resources, mathematically duplicated. Finn shows how both of these ideas sit behind much of the talk about what is happening at the interface of culture with computerization. Computer scientist Stephen Wolfram writes:

‘The crucial idea that has allowed me to build a unified framework for the new kind of science … is that just as the rules of any system can be viewed as corresponding to a program, so also its behaviour can be viewed as corresponding to a computation.’ From this angle, all complex systems are fundamentally computational, which means that the universe, society, mind, culture and consciousness are all at heart computational systems. And if they are, then in principle we can build algorithms that build them. And in turn these algorithms can start building new algorithms that we don’t understand. This used to be a kind of sci-fi scenario, but as I noted above, sci-fi is rapidly becoming sci-fact.
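Wolfram’s own stock illustration of this claim is the elementary cellular automaton, where a one-line rule is literally the program and the evolving row of cells is the computation. A minimal sketch in Python, just to make the claim concrete (the rule number and grid size are arbitrary illustrative choices, not anything from Finn’s book):

```python
# Minimal sketch of a one-dimensional elementary cellular automaton:
# the "rule" is the program, the evolving row of cells is the computation.
# Rule 110 and the 64-cell width are arbitrary illustrative choices.

RULE = 110    # Wolfram rule number (0-255)
WIDTH = 64    # number of cells in the row
STEPS = 32    # number of generations to print

def step(cells, rule=RULE):
    """Apply the rule once: each cell's next state depends on itself
    and its two neighbours (wrapping around at the edges)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as 0..7
        nxt.append((rule >> index) & 1)               # look up the rule bit
    return nxt

cells = [0] * WIDTH
cells[WIDTH // 2] = 1          # a single live cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Run it and the single seed cell unfolds into the intricate, hard-to-predict patterns Wolfram has in mind: a trivially simple rule, arbitrarily complex behaviour.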

In the light of this, understanding how computation works becomes knowing the ‘universal solvent for problems in the physical sciences, theoretical mathematics, and culture alike. The quest for knowledge becomes a quest for computation, a hermeneutics of modelling’, as Finn puts it.

Finn shows us the metaphors that are used to mediate our understanding of this. Many such metaphors are architectural, such as likening algorithms to cathedrals. In an article, ‘The Cathedral of Computation’, Ian Bogost writes:

‘Our supposedly algorithmic culture is not a material phenomenon as much as a devotional one, a supplication made to the computers people have allowed to replace gods in their minds, even as they simultaneously claim that science has made us impervious to religion.’

Computation becomes shorthand for ‘… a unified system of understanding.’ Finn gives a sharp, quick but thorough rundown of what algorithms are, their history and how they have become so dominant, citing David Berlinski, who writes in ‘The Advent of the Algorithm’ that ‘… effective calculation has made possible the modern world.’ Finn gives examples of what he’s talking about, such as the new Uber phenomenon, and it’s interesting that as he moves through the algorithmic landscape so much of what he is talking about is both familiar and very much part of our lives. Few readers of this review will not have intimate dealings with at least one of the following leading suspects – Facebook, Netflix, Wikipedia, Amazon – and many if not most will probably be intimate with all of them. Uber is another familiar fixture; its own blurb claims it is:

‘ … evolving the way the world moves. By seamlessly connecting riders to drivers through our apps, we make cities more accessible, opening up more possibilities for riders and more business for drivers.’

Algorithm is a critical concept. Finn firstly takes it as a foundational concept in computer science, embodying the notion of ‘effective computability.’ Secondly, he understands it in terms of cybernetics and embodiment, abstraction and information theory; thirdly, as magic and symbolism, software sourcery and the power of metaphor to represent reality; and fourthly, in terms of the history of technicity and humanity’s co-evolution with its tools. In this he draws an analogy with language, citing philosopher Andy Clark, who draws attention to the role of language as a technology of cognition, whereby language is understood as ‘the kind of seed technology that helped the whole process of designer-environment creation get off the ground.’ It is through this that we can imagine what we then create. As Weizenbaum puts it: ‘… it is within the intellectual and social world he himself creates that the individual prehearses and rehearses countless dramatic enactments of how the world might have been and what it might become. That world is the repository of his subjectivity… Man can create little without first imagining that he can create it…’
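That first framing, ‘effective computability’, is easiest to see in the oldest algorithm in the textbooks. A minimal sketch, not anything specific to Finn’s text: Euclid’s procedure for the greatest common divisor, a finite, unambiguous recipe guaranteed to halt with the right answer.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, unambiguous procedure guaranteed to
    terminate with the greatest common divisor -- the textbook sense in
    which a method is 'effectively computable'."""
    while b != 0:
        a, b = b, a % b
    return abs(a)

# Classic worked example: gcd(1071, 462) = 21.
assert gcd(1071, 462) == 21
```

Everything Finn layers on top of the algorithm – cybernetics, sourcery, technicity – starts from recipes as austere as this.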

So Finn identifies the algorithm ‘… as a culture machine in the context of process and implementation.’ And he points to potentially vast existential consequences where the prosthetic extended mind makes the question of who we are more abstract, ‘more imbricated in the metaphors and assumptions of code.’ This is of course a trope of much recent philosophical agonizing.

Finn takes Bernard Stiegler as one such philosopher. Stiegler writes: ‘Today, machines are the tool bearers, and the human is no longer a technical individual; the human becomes either the machine’s servant or its assembler: the human’s relation to the technical object proves to have profoundly changed.’

Andres Vaccari and Belinda Barnet write of Stiegler:

‘… philosophers put the idea of a pure human memory (and consequently a pure thought) into crisis, and open a possibility which will tickle the interest of future robot historians: the possibility that human memory is a stage in the history of a vast mechanic becoming. In other words, these future machines will approach human memory (and by extension culture) as a supplement to technical beings.’

This strikes me as slightly more hysterical than Finn’s steady, more distanced and nuanced reading of the situation. What is reassuring about Finn’s approach is that he doesn’t use the rather over-heated language of ‘crisis’, but rather examines the situation coolly and even-handedly. Having said that, it is clear that there are issues that are increasingly posing problems. As we’ve noted, algorithms are producing solutions to maths puzzles and learning in AI (e.g. self-driving cars that have learnt to work from human drivers, though no human understands precisely the algorithms they are using to do this) where we are getting true solutions that are not understood, a situation Steven Strogatz terms ‘the end of insight.’ This is an issue the likes of Nick Bostrom, Stiegler and Vernor Vinge have all written about from different perspectives, whilst agreeing that there is a sense in which humanity is being left behind.

And it is noted by some that the politics of this is conservative. David Golumbia writes: ‘… computerisation tends to be aligned with relatively authority-seeking, hierarchical and often politically conservative forces – the forces that justify existing forms of power [in a project that] meshes all too easily with the project of instrumental reason.’

Finn presents countervailing facts. Proponents of this gloomy and doomy scenario tend to romanticise an ideal rationalism, reading it into the processes they see around them. Finn’s approach is careful not to fall for the rationalist propaganda that accompanies much of the writing that promulgates computerization. Instead of seeing modern culture becoming a pure, rationalized, computerized algorithm, Finn explains that ‘… the algorithm is always bounded by implementation because the principle of effective computability is central to its formal identity… the culture machine is actually porous, ingesting and extruding cultural and computational structures at every connection point with other sociotechnical systems.’

And Ian Bogost illustrates what he means:

‘Once you start looking at them closely, every algorithm betrays the myth of unitary simplicity and computational purity… Once you adopt skepticism toward the algorithmic and the data divine, you can no longer construe any computational system as merely algorithmic. Think of Google Maps, for example. It’s not just mapping software running via computer – it also involves geographical information systems, geolocation satellites and transponders, human-driven automobiles, roof-mounted panoramic optical recording systems, international recording and privacy laws, physical and data network routing systems, and web/mobile presentational apparatuses. That’s not algorithmic culture – it’s just, well, culture.’

So when we read hysterical reports of how digital culture has taken over, there’s clearly a little truth in the claim, but Finn’s point is that many of the fears are based on the way digital culture advertises itself in a language of romanticized hyper-rationalism that doesn’t track actuality. If culture is being transformed by algorithmic thinking, it isn’t in the way digital culture says it is. The danger of many critiques based on computerisation’s propaganda is that they miss their target. Finn’s approach helps us see better what is actually happening.

That’s why it’s important to register the different approaches to computerisation rather than lumping everything together into a homogeneous mass. Finn draws attention, for example, to the different approaches Google and Apple take towards using cultural algorithms as epistemological quests for self-knowledge and universal knowledge. Apple’s Siri is developed as an intelligent assistant. Google’s approach, in contrast, is modeled on the ‘Star Trek computer’ that seeks to answer any question using its map of shared existing knowledge, something much closer to the Enlightenment project of Diderot’s Encyclopédie.

Siri can therefore be understood as an example of technology being directly incorporated into culture. Gilbert Simondon writes: ‘Technical reality, having become regulative, will be able to be incorporated into culture, which is essentially regulative.’

And indeed, before Apple purchased it, Siri was a lot smarter than it is now. Finn tells us that ‘… Some of the functionality has been restored but its association with Apple has made it less flexible, as lawyers and elaborate contracts precede any new agreements for the parties to pull in third-party data. The original Siri also had a sharper edge to its dialogue, occasionally deploying four letter words and plenty of attitude, which was part of the appeal for Jobs … The software didn’t just know things, it was also knowing.’

Once incorporation becomes the aim, new problems concerning the way these machines are being developed come over the horizon. Incorporation makes the machine less alien and starts making demands on us that its pure regulative functionality didn’t. Over this new horizon, not inconsequential ethical evaluations are starting to become visible, just as they did for earlier generations with slavery and animals. Currently, the concrete metaphors for interactions between machines and humans are those of slavery, as Annalee Newitz points out when she writes:

‘The sad truth is that these digital assistants are more like slaves than modern women. They are not supposed to threaten you, or become your equal – they are supposed to carry out orders without putting up a fight. The ideal slave, after all, would be like a mother to you. She would never rebel because she loves you, selflessly and forever.’

A mother maybe, or more likely a sex slave, given the interfaces between pornography, desire, digital culture and gender/sex power relations in our modern dispensation. Finn contrasts this with the Star Trek computer, where the issue of the human-machine relationship is less pressing. The Google approach has been to create an index to all human knowledge – the core of Google’s business being ‘the indexing algorithms, data storage, and information management tools…’ ‘Where Siri depends on a relatively small set of curated data taxonomies (e.g. data from OpenTable might include restaurant names, phone numbers, calendar availabilities, and so on), Knowledge Graph attempts to create similar mappings on the full swathe of data available to Google from its search crawlers.’ The clear echo of Eco’s cultural desire is here, and by constraining itself to this Google dampens – for now – the new ethical concerns.
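The contrast can be caricatured in data-structure terms: the assistant reasons over a few hand-curated tables with known fields, while the Knowledge Graph approach traverses an open-ended web of entities and relations. A purely illustrative sketch – the entities, fields and relations below are invented, not Apple’s or Google’s actual schemas:

```python
# Illustrative only: invented data, not Apple's or Google's real schemas.

# Siri-style: a small curated taxonomy with fixed, known fields
# (the sort of OpenTable-like records mentioned in the quotation).
restaurants = {
    "Chez Panisse": {"phone": "+1-510-555-0100", "tables_free_tonight": 2},
    "Noma":         {"phone": "+45-55-55-02-00", "tables_free_tonight": 0},
}

def curated_lookup(name, field):
    """Answers only the questions the taxonomy was designed for."""
    return restaurants.get(name, {}).get(field)

# Knowledge-graph style: open-ended (subject, relation, object) triples
# that can be traversed to answer questions nobody planned in advance.
triples = [
    ("Chez Panisse", "located_in", "Berkeley"),
    ("Berkeley", "part_of", "California"),
    ("Alice Waters", "founded", "Chez Panisse"),
]

def related(entity):
    """Every triple the graph holds that touches this entity."""
    return [t for t in triples if entity in (t[0], t[2])]

print(curated_lookup("Chez Panisse", "phone"))   # a planned-for question
print(related("Chez Panisse"))                   # an open-ended traversal
```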

Finn turns his thoughts to what he calls ‘algorithmic aesthetics’ and uses Netflix as his example of the issues involved. Again, his approach is nuanced and detailed, and sees both the vast advantages and the disadvantages of this new way art and entertainment are being produced and distributed. Netflix interestingly rejected a purely big-data statistical approach to anticipating and creating audience preference in favour of a hybrid human-computational model. Incredibly, Netflix created 76,897 genres of real and potential films, and Finn goes on to show how this shapes its original creative work, in particular the development of the TV show House of Cards directed by David Fincher. He uses the story of ‘… the show’s development and distribution to argue that algorithmic models of culture are increasingly influential and inescapable… personalized and monolithic in competing ways.’

Finn is clear how massive this use of algorithmic aesthetics is: ‘… on any given day in 2014, roughly a third of all Internet data downloaded during peak periods consisted of streaming files from Netflix. By the end of 2013, the company’s 40 million subscribers watched a billion hours of content each month.’ But again he’s making the point that it isn’t all being done by computerization. The human is the carefully concealed but nevertheless crucial element that is making Netflix (and other mega companies) a powerhouse. Netflix uses human taggers alongside algorithms, and as Ted Sarandos says, ‘… it’s 70 percent data, 30 percent human judgment but the thirty needs to be on top, if that makes sense.’
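Sarandos’s ‘70 percent data, 30 percent human judgment… but the thirty needs to be on top’ can be read as a weighting in which the human layer gets the last word. A toy reading of that remark – the weights, titles and scores below are invented for illustration, and this is not Netflix’s actual system:

```python
# Toy reading of the hybrid model: an algorithmic score blended with a
# human tagger's judgment, and the human layer able to veto outright.
# All weights, titles and numbers are invented; not Netflix's system.

def blended_score(algo_score, human_score, human_veto=False):
    """'70 percent data, 30 percent human judgment' -- with the thirty on top."""
    if human_veto:                 # the human layer has the final say
        return 0.0
    return 0.7 * algo_score + 0.3 * human_score

candidates = {
    "Political thriller with an antihero lead": (0.92, 0.85, False),
    "Cheap reality spin-off":                   (0.88, 0.20, True),
}

for title, (algo, human, veto) in candidates.items():
    print(f"{title}: {blended_score(algo, human, veto):.2f}")
```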

The Netflix case is a good example of why, according to Finn, it’s a mistake to buy in to the super-rational self-promotion of the algorithmic culture. If algorithms are massively influential and important, their relationship with human decision making and judgment needs to be properly understood and not just ignored for the sake of a good Frankenstein story! The changes that the algorithmic revolution has caused are not, however, to be underestimated. David Fincher, the director, notes that: ‘The world of 7.30 on Tuesday nights is dead. A stake has been driven through its heart, its head has been cut off, and its mouth has been stuffed with garlic. The captive audience is gone. If you give people this opportunity to mainline all in one day, there’s reason to believe they will do it.’

Finn summarises why the Netflix example is interesting by pointing to the company’s ability, using the super efficiency of algorithm plus human, to create its own audiences.
‘Netflix confidently placed its two-season bet on ‘House of Cards’ because of its deep statistical understanding of this symbiotic relationship, its confidence that it could make fans for this show by tailoring the frames of reference, the recommendation-driven interfaces of its millions of customers.’

Finn also discusses in depth how algorithms are increasingly able to read cultural data and perform real-time arbitrage as they take on new forms of intellectual labour. It’s a great chapter, and one that all cultural critics interested in understanding our contemporary conditions for creativity and culture should read.

The story of Ian Bogost’s ‘Cow Clicker’ app, designed as a send-up of the whole gamification movement – games that ‘trouble the boundaries between work and play’, as embedded in familiar settings like Uber and Amazon’s high-tech warehouses – shows how algorithmic arbitrage via gamification seems irresistible. How does Finn understand gamification? He’s clear: ‘… gamification can be thought of as using some elements of the game system in the cause of the business objective. It’s easiest to identify the trend with experiences (frequent flyer programs, Nike Running/Nike+ or Foursquare) that feel immediately game-like. The presence of key game mechanics, such as points, badges, levels, challenges, leader boards, rewards and onboarding, are signals that a game is taking place. Increasingly however, gamification is being used to create experiences that use the power of games without being quite as explicit.’ And even when deliberate crap is presented, as in the Cow Clicker episode, people are suckered into playing.

He develops the metaphor of the Mechanical Turk to understand what’s going on. The Mechanical Turk captures the idea that humans perform according to tasks assigned by an algorithmic apparatus. Ian Bogost labels the whole model ‘exploitationware.’ The approach does seem to exploit human predilections to respond to certain triggers. Bogost set up his satirical app to critique the phenomenon. Cow Clicker was a terribly boring, pointless game that aimed to reveal the idiocy of gamification processes. Jason Tanz, writing about Cow Clicker, noted that rather than pulling the veil of idiocy from people’s eyes, people started playing the bloody game:

‘The rules were simple to the point of absurdity. There was a picture of a cow, which players were allowed to click once every six hours. Each time they did, they received one point, called a click. Players could invite as many as eight friends to join their ‘pasture’; whenever anyone within the pasture clicked their cow, they all received a click. A leaderboard tracked the game’s most prestigious clickers. Players could purchase in-game currency, called mooney, which they could use to buy more cows or circumvent the time restriction. In true Farmville fashion, whenever a player clicked a cow, an announcement – ‘I’m clicking a cow’ – appeared on their Facebook newsfeed.’

50,000 people joined and played. ‘It’s hard for me to express the revulsion and self-loathing that have accompanied the apparently trivial creation of this little theory-cum-parody game,’ says Bogost of this response.
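The mechanics Tanz describes are almost embarrassingly easy to write down, which is part of Bogost’s point. A minimal sketch of the quoted rules – the six-hour cooldown, the clicks and the mooney come from the description above; the rest is invented scaffolding, not Bogost’s actual code:

```python
import time

SIX_HOURS = 6 * 60 * 60   # cooldown between clicks, per the quoted rules

class CowClickerPlayer:
    """Minimal sketch of the quoted Cow Clicker rules: one click per six
    hours, one point ('click') per click, purchasable 'mooney' to skip
    the cooldown. Invented scaffolding, not Bogost's actual code."""

    def __init__(self, name):
        self.name = name
        self.clicks = 0          # the game's only score
        self.mooney = 0          # in-game currency, bought with real money
        self.last_click = None   # timestamp of the previous click

    def click_cow(self, now=None):
        now = time.time() if now is None else now
        if self.last_click is not None and now - self.last_click < SIX_HOURS:
            return False         # still on cooldown
        self.clicks += 1
        self.last_click = now
        print(f"{self.name}: I'm clicking a cow")   # the newsfeed announcement
        return True

    def buy_click(self, now=None):
        """Spend one mooney to circumvent the time restriction."""
        if self.mooney < 1:
            return False
        self.mooney -= 1
        self.last_click = None
        return self.click_cow(now)

player = CowClickerPlayer("alice")
print(player.click_cow())   # True: the first click scores a point
print(player.click_cow())   # False: the six-hour cooldown applies
```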

From the Cow Clicker episode Finn moves on to the Mechanical Turk. The original Mechanical Turk was a fake chess-playing machine operated by a hidden person. The point he’s making is again about the way algorithmic culture requires feedback loops between algorithm and human. Darren Wershler explains:

‘We have also read essays explaining that the Turk is in fact an elegant metaphor for the precarious condition of the worker in a globalised and networked milieu. And we have made a substantial amount of art that actually makes use of Amazon Mechanical Turk as a productive medium to demonstrate the same point, but in a way that is, you know, artier… The point is not that the mechanism is empty, like some sort of neutral reproducer. The point is that it is a mechanism that already includes a spot for you – like the Law in Franz Kafka’s novel ‘The Trial’ – whether that spot is in front of it as a player, inside as the operator, behind it as the spectator being shown its misleading components, from afar as the critic describing and demystifying it by virtue of your criticism, or, increasingly, as the artist or writer (mis)using it in your project. The moment that you engage the setup as a problematic, the machine springs into action.’

This helps explain why Cow Clicker was able to exploit the feedback loop between its banal algorithms and human responsiveness. Even this very low-level and primitive version of gamer technology had a place waiting for the exact and unique human who arrives. Which helps us understand the new ‘interface economy’, which Finn sees as pivoting off insights from Adam Smith’s consideration of empathy, politics and social value in his ‘Theory of Moral Sentiments’. For Smith, empathy is a crucial component of all social intercourse, a feedback mechanism that markets depend on so that their workings present themselves as virtuous actions with imagination at their root. Empathy and seductiveness are being harnessed.

Jedediah Purdy, law professor, summarises the point nicely:

‘Mandatory smiles are part of an irony at the heart of capitalism… Faking it is the new feudalism. It is the key to an economic order of emotional work that tells people who and how to be on the basis of where they fall in the social and economic hierarchy.’

And historian Stephen P. Rice writes on the Mechanical Turk:

‘Launched into the scene of middle class anxiety about worker self-control, the chess-player assumed the twin statuses of regulated machine and ideal mechanized worker. Viewers could locate in the chess-player the uniquely human traits of will and reason without having to remove those qualities too far from the mechanical traits of regularity and efficiency. Read as a regulated or ‘minded’ machine, [the Turk] showed the new productive order in place.’

As Adam Smith instructed at the dawn of modern capitalism, there is always a hunger for emotional contact: and it is where there is a disparity between abstraction and implementation that we imagine most directly. Finn writes that ‘… we are now struggling to define the fundamental structure of value in an algorithmic world.’

The flash crash of 2010 exposed the dominance of algorithmic trading in international markets, as Michael Lewis’s ‘Flash Boys’ shows, and this insight frames Finn’s understanding of Bitcoin and related cryptocurrencies. ‘By defining the unit of exchange through computational cycles, Bitcoin fundamentally shifts the faith-based community of currency from a materialist to an algorithmic value system, applying the logic of Facebook to the stock market where the shift marks the transition from valuing the object to valuing the networks of relations the object establishes or supports.’ The world has become a place where arbitrage trumps content. That our monetary system is now in hock to this, as well as politics, means that algorithmic arbitrage increasingly dominates all elements of our lives. The markets move so quickly that the gap between computer and human response times is stark evidence that many of the activities at the heart of modern capitalist society are fast becoming beyond human comprehension.
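‘Defining the unit of exchange through computational cycles’ refers to Bitcoin’s proof of work: value is anchored in verifiable evidence that computing time was burned finding a hash below a target. A stripped-down sketch of the idea – the single SHA-256 pass and the leading-zeros difficulty are simplifications for illustration, not the full Bitcoin protocol:

```python
import hashlib

def proof_of_work(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 hash of (block_data + nonce)
    starts with `difficulty` zero hex digits. The 'value' produced is
    just the evidence that computational cycles were spent; verifying
    it takes a single hash. A simplification of Bitcoin's scheme,
    which double-hashes block headers against a numeric target."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work("Alice pays Bob 1 BTC")
print(nonce, digest)   # costly to find, cheap for anyone to check
```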

Here’s an extract from Lewis’s ‘Flash Boys’ to indicate the inhuman speeds of the computations running modern money markets: ‘All the market activity within a single second was so concentrated – within a mere 1.75 milliseconds – that on the graph it resembled an obelisk rising from the desert. In 98.22 percent of all milliseconds, the market in even the world’s most actively traded stock was an uneventful, almost sleepy place…
‘What’s that spike represent?’ asked one of the investors, pointing to the obelisk.
‘That’s one of your orders touching down,’ said Brad.’

As Finn summarises: ‘1.78 milliseconds is more or less incomprehensible as a temporal span. By contrast the typical human reaction time is something on the order of 200 milliseconds… The migration of value from end result to process marks the culmination of a long evolution that began with what Jürgen Habermas famously called the “bourgeois public sphere”. Habermas argued that the emergence of a propertied middle class in the eighteenth century… created a space for disinterested public discourse founded on the truth values of the private, intimate sphere of bourgeois life.’

This is what is being torn apart by social media, anonymous forums, Wikipedia, Netflix, Uber, Amazon and the rest. Any ‘common concern’ is privatized. And the change gifts us the new ignorance. Karl Taro Greenfeld writes of the shift as one from knowledge to the knowledge that there is knowledge somewhere, and that we can take a stance on it: ‘It’s never been so easy to pretend to know so much without actually knowing anything. We pick topical, relevant bits from Facebook, Twitter or emailed news alerts and then regurgitate them… What matters to us, awash in petabytes of data, is not necessarily having actually consumed this content firsthand but simply knowing that it exists – and having a position on it, being able to engage in the chatter about it.’

What should we do once we understand all this? Finn is positive and upbeat: ‘We need an experimental humanities, a set of strategies for direct engagement with algorithmic production and scholarship, drawing on theories of improvisation and experimental investigation to argue that a culture of process, of algorithmic production, requires a processual criticism that is both reflexive and playful.’

ABOUT THE AUTHOR
Richard Marshall is still biding his time.

Buy his book here to keep him biding!

First published in 3:AM Magazine: Monday, June 12th, 2017.