go hack yourself

Samir Chopra interviewed by Richard Marshall.

Samir Chopra is a philosopher of liberation who broods deeply on FOSS’s liberatory capacities, on the threats and opportunities of a cyborg world, on why we should hack ourselves, on robots and the law, on the threat of Amazon, on resisting Harry Potter, on why better treatment of artificial agents could help animals, on why there needs to be more women and non-white philosophers and on cricket and the relationship between nationalism and franchise. As Roy Batty says, ‘All those moments will be lost in time, like tears in rain.’ But not yet…

3:AM: What made you become a philosopher?

Samir Chopra: During my school years, I had wanted to be a student of the humanities, and thought I would study literature and/or history at university. But I was growing up in a developing country and thanks to a great deal of societal pressure I became convinced studying the humanities was self-indulgent, that the only possible careers lay in the usual areas: technology, the sciences, medical school etc. So I wound up studying mathematics, statistics and computer science. During my graduate work in computer science, I thought a bit about the social implications of computing but made nothing of it (my advisors Murray Turoff and Roxanne Hiltz were pioneers in what is now called social networking software). I went on to work at Bell Laboratories and soon became bored. I had read some philosophy of science and physics, some existentialist literature, and I had a girlfriend who was studying literary theory and whose books I raided all the time. I realized what was common to all these things was philosophy and that if I wanted to be happy perhaps I should study that. My mother passed away around the same time, and I was deeply, deeply unhappy. I thought perhaps philosophy could help me to reconcile myself to a world that seemed bizarrely cruel and in which, in Dostoevsky’s words, everything was possible. So there you have it, I started studying philosophy as a kind of therapy. But it was also a return to a kind of intellectual domain that I had wanted to participate in for a long time.

3:AM: You wrote ‘Decoding Liberation’ some time ago to look at issues arising from free and open source software (FOSS). You saw FOSS as ‘a liberatory enterprise in several dimensions’ and take a Marxist view on this vis-à-vis a critique of modes of production. So how is FOSS potentially liberatory?

SC: In Decoding Liberation: The Promise of Free and Open Source Software Scott Dexter and I set out to explore FOSS’s liberatory capacities in terms of its associated political economy, in the autonomy—from proprietary vendors—that it promised its users, in the liberation of proprietary computer science, in the freedom of software users to share with each other, in the freedom of, ultimately, our cyborg selves in a world that is underwritten by computers and their software. We think of it as something that could set us free in many ways, sometimes as a model to be emulated, sometimes as a radical intervention in fossilized ways of thinking: perhaps by freeing workers from outmoded modes of production, property and ownership, by freeing programmers from proprietary models of software production and unlocking their creativity by letting them freely interact with a larger community of programmers, by freeing computer science from the constraints of proprietary science (a great deal of computer science is kept closed and violates scientific standards of objectivity and peer-review when done in the proprietary mode).

One of the biggest blights on modern culture is the rampant growth of ‘intellectual property’ talk, an incoherent notion that threatens to choke creativity and innovation in many of its sectors. Free software shows how its strictures can be upended, how they can be creatively worked around, how communities can evolve new methods of cooperation and sharing in the creation of cultural artifacts.

The Aaron Swartz case shows us the tragic consequences of IP talk run amuck; or consider how academic publishing is still held hostage by these journals that don’t let us share our work with each other. FOSS provides a model that might work in these areas and bears closer looking at.

3:AM: You were fearing at the time that the liberatory moment was slipping away and that the utopian hopes were under threat. Were you right to be worried back then? Are things better or worse now?

SC: I am not sure. In one dimension some of the most fervid debates about FOSS have died down, GNU/Linux has become quite entrenched in many domains, and free software licenses are becoming more widely used. In yet others, there does not seem to be any holding back the proprietary juggernaut (desktop and gaming for instance). Copyright regimes don’t seem to show any signs of being ameliorated by the new technical order (even though the music industry and publishing industry are changing). Patent regimes are still onerous and expensive. It’s hard for me to tell whether real change is possible given the entrenchment of the incumbents and the economic and political power they command and wield. Within FOSS itself, as we noted in Decoding Liberation, there was a co-optation of some of the early idealism of the movement to make it more palatable to a corporate audience. Perhaps that will continue and it might all become too watered down.

3:AM: One aspect of the book that was particularly interesting to me was your vision of a world full of code, a cyborg world where ‘distinctions between human and machine evanesce’ and where ‘personal and social freedoms in this domain are precisely the freedoms granted or restricted by software.’ Can you say something about what you argued for there?

SC: I think what we were trying to get at was that it seemed the world was increasingly driven by software, which underwrote a great deal of the technology that extends us and makes our cyborg selves possible. In the past, our cyborg selves were constructed by things like eyeglasses, pencils, abacuses and the like—today, by smartphones, wearable computers, tablets and other devices like them. These are all driven by software. So our extended mind, our extended self, is very likely to be largely a computational device. Who controls that software? Who writes it? Who can modify it? Look at us today, tethered to our machines, unable to function without them, using software written by someone else. How free can we be if we don’t have some very basic control over this technology? If the people who write the software are the ones who have exclusive control over it, then I think we are giving up some measure of freedom in this cyborg society. Remember that we can enforce all sorts of social control over people by writing it into the machines that they use for all sorts of things. Perhaps our machines of tomorrow will come with porn filters embedded in the code that we cannot remove; perhaps with code in the browsers that mark off portions of the Net as forbidden territory, perhaps our reading devices will not let us read certain books, perhaps our smartphones will not let us call certain numbers, perhaps prosthetic devices will not function in ‘no-go zones’, perhaps the self-driving cars of tomorrow will not let us drive faster than a certain speed; the control possibilities are endless. The more technologized we become and the more control we hand over to those who can change the innards of the machines, the less free we are. What are we to do? Just comply? This all sounds very sci-fi, but then, so would most of contemporary computing to folks fifty years ago. We need to be in charge of the machines that we use, that are our extensions.

We, in short, should be able to hack ourselves.

3:AM: This links up with the issues of autonomous artificial agents and the law. This is partly about robots and how we should treat them isn’t it? Some might think that issue some sort of sci-fi spoof but it’s an increasingly pressing issue isn’t it? Are we getting close to a time when AI will pass the Turing test?

SC: The kinds of robots that most people worry about are still a while away but we already have self-driving cars, drones, and even things like mobile robotic guns. These, on their own, cause many legal headaches. In my book on artificial agents and the law, we began with a very simple, mundane problem that legal scholars have been thinking about for years and which almost immediately gets into philosophical issues: Can agents enter into contracts? Can they manifest the intention to do so? Can they ‘want’ to enter into a contract? We suggest yes, and rely on the intentional stance to do so, and think they could be treated like legal agents, even if not legal persons. Once you treat them as legal agents then you have to think about other legal and philosophical issues: can they know something, which can be then attributed to their legal principals? After all, corporations are attributed the knowledge gained by their legal agents. I provided a knowledge attribution analysis for artificial agents, one that I think makes it coherent to say that a ‘thing’ like a program or a robot could know something. So it turns out that even if you begin with a very straightforward problem, you immediately start to encounter interesting philosophical problems. These are pressing issues because artificial agents are very widely used and have tremendous amounts of executive power. People might imagine that automation’s problems will only begin once we have robots asking for civil rights but there are some interesting issues to be dealt with right now. The good folks over at the Concurring Opinions legal theory blog organized an online symposium on my book and we had some really wonderful discussions there.

I agree with many philosophers who suggest that the Turing test is not the most interesting question to be asked of AI. But there is still an important insight at its heart: we are beings with an overwhelming first-person perspective and we extend our community to those with whom we can see ourselves sharing these perspectives. What happens when we encounter beings that match us externally—in behavior and perhaps even in verbal utterances—and for whom we can use third-person descriptions but we feel they don’t share our subjective point of view? What then? Will they ever be members of our societal groupings? Will we ever consider them persons? What’s stopping us?

3:AM: So what are the issues as you see them for legal regulation of these artificial agents?

SC: I think the most important ones are: how are they to be slotted into our legal categories given their autonomous and quasi-intelligent behavior, and given that autonomy is not a binary concept but more of a position in a wide spectrum? We have these new beings in our midst, and it behooves us to understand their capacities and see how we can fit them into our existing agency categories (like personhood, responsibility and free will). Can they be described as ‘agents’ in the philosophical sense? This is a pretty fascinating issue; how do we understand agency in the first place? How do we identify agents in this world? Can they be described as ‘legal agents’? If they are legal agents, can they have duties toward their principals? Can they be attributed knowledge? Does that knowledge become the principal’s? This makes us think a bit more about what knowledge is. Who is liable for their actions? What is the distinction between moral responsibility and legal responsibility? Can they be legal persons? Which of course, gets us into the issue of how we distinguish legal persons from metaphysical or moral persons.

I’ve tried to answer some of these questions but I think I’ve only scratched the surface. It’s made me think a great deal about how law intersects with philosophy and how the two areas of theorizing feed into each other. Law is an outcomes-oriented practice; it is pragmatism writ large in many ways, and we will often find that the answers it provides to these questions will reflect that orientation. I find that fascinating, and think it will affect philosophical thinking about these very fundamental questions.

3:AM: So do you think that Amazon, for instance, knows stuff, as opposed to any humans working for Amazon? Is privacy under threat and is that a bad thing?

SC: Yes, I think Amazon does. Both the corporation and its artificial agents. In the third chapter of my book on artificial agents and the law, as I said above, I offered an analysis of knowledge attribution for artificial agents, under which it makes sense to say that it does. When you see knowledge in terms of that analysis, it turns out that yes, privacy is under threat because artificial agents can be said to know things about us that we might not want them to know. In one section of the book, we explore whether we can say Gmail knows what we are emailing and if Google knows it too. The answer according to our analysis is yes. It is irrelevant to argue, as Google and the NSA for that matter often do, that humans are not reading your emails. What matters is what the capacities of the entity reading your email are, what it can do with that information. This has implications for all kinds of surveillance activities that are carried out in automated fashion. Think of the government’s Echelon program for instance or all the snooping the NSA does or deep packet inspection that ISPs and content holders carry out on the ‘Net. They all rely on the Google Defense: don’t worry, humans aren’t looking at your data. That’s nonsense. Programs can ‘know’ things too, and that knowledge can hurt us.

3:AM: A surprising issue arising out of this is your support for a campaign not to buy Harry Potter books. What’s the issue here?

SC: Ha! I’m impressed that you tracked that link down. I’d even forgotten I had that on my home page. Most broadly, it was about pushing back on restricting access to information, about copyright law run amuck, about the kinds of ludicrous control that content owners, publishers and distributors have taken on themselves. The original issue is quite minor, but the related issues are not. If you read the link, you’ll see the publisher is almost trying to enforce a real Orwellian kind of control. In general, I’m quite worried about the kinds of restrictions that might be placed on readers in the future by e-books through things like DRM. We are used to many freedoms associated with our paper books: we can tear out a page, mark them up, copy them, loan them in turn to many people, resell them and so on. E-books can be restricted in many ways that can reduce their usefulness to us. That, I think, is a crucial issue and is one that libraries should be very aware of when they turn their budgets over to e-collections. Not everything digital is ‘more free’; sometimes the digital can be less free if the code works that way. And of course, if we are not free to change the code, not free to change the law, which binds us, then we have submitted to a form of intellectual and political control.

3:AM: Donna Haraway writes of a ‘cyborg manifesto’ in terms that include socialist-feminism. Do you see this as part of the liberatory enterprise you linked earlier with FOSS, which also seems to connect with issues of law towards technologies?

SC: I think the linkage with what Haraway writes about takes place at a very basic stage; after that her theorizing diverges. Like her, I think that our identities are very likely to be different down the line when technologies merge boundaries between us and the rest of the world, between us and other kinds of beings. It might be that the political and moral categories that we place ourselves in will change; and this I think connects with the issues I mentioned above, about how the laws we design to regulate our society and its interactions with artificial agents and us will change because our categories and concepts will have been changed by technology. We might then find that the categories and identities of political and moral philosophy need to be changed and that the theorizing that is built on them will too. Perhaps technologies will, as they sometimes seem to, blur the physical differences between sexes, between man and machine. What will then happen to the reams of theorizing that has been explicitly or implicitly built on these differences? Perhaps the law will suggest machines could be persons. How will future generations think about ‘person’ then? There is a lot of scholarship that I’m not sufficiently familiar with, in say, animal studies or transhumanism, that might have something to say about this too.

But in general, I would say that we have constructed some arbitrary and self-serving lines between ourselves and the world, between man and animal, man and machine. These could crumble in the face of technological advances. Interestingly enough, sometimes I get this response to my talking about personhood, even legal personhood, for artificial agents, where people say I’m denigrating humans. My response is, no, not really, I think thinking like this could help us treat animals better!

3:AM: Talking of feminism, you’ve written about the place of women in academic philosophy. Many of the interviewees of this series have been concerned about the lack of women philosophers and the way they are treated in the academy and you too have concerns don’t you? What’s your diagnosis of the problem and are there any solutions you can see?

SC: Well, I’ve tried to address this problem in a couple of posts on my blog. In one, I addressed the often rude and crude discursive environment present in many philosophy departments and fora, which I called the Dickhead Theory (‘there are too many dickheads—male ones—in philosophy’). In many ways, philosophy simply mirrors our sexist, patriarchal society: bad-tempered and ill-mannered men consistently rule the roost, impose their standards on others, and because they are in positions of power, others seek to emulate them. The McGinn affair is a good reminder of this. I think it’s terrible that in philosophy of all disciplines we see so much unphilosophical behavior. It never ceases to amaze me that grown men with Ph.Ds in philosophy bicker like children and throw all sophisticated reasoning out the window while arguing. It’s like philosophy is a contact sport or something. All of this—the modes of discourse, the sexual harassment—creates a very hostile environment for women.

I really wish there were more women in philosophy; it would be so much better for the discipline. We could have new perspectives, new ways of thinking, new areas of philosophical investigation. I don’t know what the solution is: perhaps more aggressively recruit women students, provide close mentoring, pay attention to sexual harassment complaints, start a discipline-wide conversation about this (as, I hope, seems to be happening now, in the wake of the McGinn affair), and so on. The moves to include more women speakers at conferences, to recognize their scholarship adequately, should all help. Women should be seen as equal participants in the philosophical enterprise.

On a side note: I have a selfish interest in seeing feminism succeed: I think it will make the world a better place for men too. I think men are oppressed—sexually, politically—by patriarchy too, but are somehow suckered into thinking otherwise.

3:AM: Does your work begin to link with issues in metaphysics where people like Dave Chalmers raise the possibility of consciousness in all objects in a kind of new Leibnizian mode?

SC: I’m not sufficiently familiar with this aspect of Chalmers’ work, so my answer here is going to be quite tentative. I will say that the question of artificial agents does raise the question of what kind of conscious or subjective experience those kinds of entities could have in a very interesting way. What if they develop a rich vocabulary of qualitative predicates and use them just like we do? Would we be back at the situation that Putnam talked about a long time ago in ‘Robots: Machines or Artificially Created Life?’ We attribute conscious, subjective experience to other humans as an abductive inference, because we are members of the same linguistic community and so on. What if some of these conditions—especially the linguistic ones—are met with artificial agents?

3:AM: You’re also a cricket expert. You’ve written about the changes that face the game and see a conflict between old-guardism of Australia and England and the new guard of India. So what could and should lie ahead for the game – and given that the changes seem driven by new technologies, political power and super-money franchises are there philosophical issues here that link with your interests about the law and AI?

SC: This is a hard question. I wrote a whole book on this! But still, I would say that cricket needs to find a way forward, to find a balance between the longer and shorter forms of the game, to protect Test cricket, to incorporate technology properly, for national boards to become better organized and more professional, to treat players as professionals and not patriots in whites, to listen more to fans, to involve them and players in the administration of the game, to stop being so greedy about television rights money, to sort out the scheduling issues that render the modern cricket calendar so ludicrous. One good thing about the franchise presence in cricket is that it has made us think a bit more about how national boards run the game and how they appeal to the idea of the nation constantly to manipulate players. It’s also made us think about cricket’s labor market and how players are still treated in feudal fashion by their boards. We need industrial action in cricket: more player unions, for instance. (I also write about this over at my blog on ESPN-Cricinfo; cricket is a very rapidly changing game and plenty of change, not all of it good, has happened and continues to happen.)

I don’t know about connections with law and AI. I think one thing that the current umpiring-technology dispute going on in cricket—this whole DRS affair—shows is that technological systems are poorly understood. We think of them as just artifacts, but in fact they are quite complex assemblages of artifacts, protocols and humans. Their boundaries are poorly understood and they do not exist autonomously, independent of the humans that operate them and use them.

In the realm of political philosophy, I would say cricket shows the power of nationalism in a very interesting way. It is country, not club, dominated, but franchises have a chance to change that.

3:AM: Your cultural heritage is Indian and you’re in New York and you’re a philosopher in the academy. We’ve mentioned the place of women in philosophy but non-white philosophers in the academy are also rare. Is this another issue that needs addressing?

SC: I think so. There are very few non-white philosophers as you point out. The Graduate Center at CUNY has very few. But, I think some steps could be taken by hiring more aggressively, by looking to recruit more international students from diverse backgrounds and then providing them good mentoring, by seeking to cultivate a diversity of opinions. It does not help, of course, that in general Eastern philosophy, or indeed most non-Western philosophy, is still considered esoteric or marginal or not serious. The representatives of those traditions then tend to be seen as not being serious either. This issue needs addressing for some of the same reasons that the presence of women in philosophy should be: philosophy, as an activity, as a discipline, is considerably impoverished by a narrow focus and a restricted membership. We simply aren’t doing justice to the human condition or our place in the world by being so narrow in our ambit. Our claims to universality and atemporality ring quite hollow when they emanate from such a parochial standing. Bringing in women and non-white philosophers gives us a chance to change this. Different histories will be introduced and perhaps our philosophical bookshelves will be stocked differently. I can’t see a downside to this.

3:AM: And finally, for readers here at 3:AM, are there five books other than your own that you could recommend for us that will take us further into your philosophical world?

SC: Reading James, Nietzsche, Quine, Foucault, Dewey, Wittgenstein, Rorty, Freud and Marx would do it, I think. On cricket Gideon Haigh, CLR James, Ramachandra Guha are wonderful. On free software Richard Stallman is the man. On law and AI, just read my book.

Richard Marshall is still biding his time.

First published in 3:AM Magazine: Friday, November 1st, 2013.