Like it means something

James Gleick’s The Information starts with a simultaneous appearance in 1948 of both the first transistor and the first scientific discussions of ‘the bit’ as a fundamental unit of measurement. Overall, the book tells the story of how those two technologies — the engineering breakthrough contained in that now-ubiquitous miniaturized building block of digital machines and the scientific paradigm shift of that now-universal way of measuring just what is being stored — conspired to transform our experience of the world. His intention is to recapture some of the credit for the massive social upheavals occasioned by the digital revolution on behalf of ideas: not to reject the importance of the technical knowledge that allows us to build transistors, but to make room as well in the historical account for the radical shift in theoretical knowledge that renders it even sensible to imagine DNA as speech, tennis scores as music or an image as a coded message. Thinking about how to get more conversations over the same phone line, or how to ensure a message has been received correctly, or how to fit more patient data into a smaller space, or how to make a recorded song sound more like the original, will in each case require some metric of how much of the thing you have. We ended up in a world where we not only came up with measurements for each case, but with the same measurement for every one. Here’s Gleick on how big a change that represented:

For the purposes of science, information had to mean something special. Three centuries earlier, the new discipline of physics could not proceed until Isaac Newton appropriated words that were ancient and vague — force, mass, motion, and even time — and gave them new meanings. Newton made these terms into quantities, suitable for use in mathematical formulas. Until then, motion (for example) had been just as soft and inclusive a term as information. For Aristotelians, motion covered a far-flung family of phenomena: a peach ripening, a stone falling, a child growing, a body decaying. That was too rich. Most varieties of motion had to be tossed out before Newton’s laws could apply and the Scientific Revolution could succeed.
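The common metric the passage gestures at is Shannon’s bit. As a minimal illustration (mine, not the book’s), the same formula assigns a size in bits to any stream of symbols, whatever it happens to encode:

```python
import math
from collections import Counter

def entropy_bits(message):
    """Shannon entropy, in bits per symbol, of a sequence of symbols,
    using each symbol's empirical frequency as its probability."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# The same unit of measure applies to any sequence of symbols:
print(entropy_bits("aaaa"))  # 0.0 bits/symbol: no surprise at all
print(entropy_bits("abab"))  # 1.0 bit/symbol: a fair coin's worth
print(entropy_bits("abcd"))  # 2.0 bits/symbol: four equally likely symbols
```

The point is the one Gleick makes about Newton: once ‘information’ is a quantity, a phone call, a genome and a tennis score can all be weighed on the same scale.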

In my own work, trying to capture how policy makers and the state imagine capital (including in my recent rambling thoughts on the subject), I wrestle a lot with a similar set of transformations that occurred in the birth of finance as a discrete field. I just took a three-day seminar on the history of financial crises, and no one but me seemed to think it much mattered that ‘finance’ didn’t exist as a coherent object of reference until the 20th century, and lacked much of its current valence until the 1970s. Finance was a word that meant the means or capacity to pay one’s debts, and by the late 19th century, it also came to refer to careful thinking about income and expenses. There was banking (and banking failures), money (and currency crises), public finance (and power and territory reordered in the service of paying off royal debts). But when the word gets used today, it can’t be disentangled from images of the Wolves of Wall Street, can’t help but act as mediator between the interest rates set by the Fed and the dividends paid out by Apple (on which, see JW Mason’s solid analysis), can’t escape from a seemingly natural home in ‘the markets.’

For those in the know, the constitution of finance inevitably depends, in some inchoate way, on the Basel Committee on Banking Supervision; for those who aren’t, the Basel Committee is just one part of an arcane object, or one location in a country lying beyond the economic frontier, necessary but dangerous, complicated and obscure, wild but tamable for those who have the right kind of knowledge. But that obscurity results partially from a gradual expansion of referents over the last 200 years, from a term with a narrow meaning scarcely distinguishable from ‘bookkeeping,’ to a bloated pastiche that includes practices which used to be derided as immoral ‘speculation,’ sold as ‘insurance’, offered as opportunities for ‘investment’, or understood as ‘depositing money in a bank.’

But it occurred to me today that the transformation of the world hand in hand with the transformation of the word is not always a process that’s driven by the search for ordered, scientific clarity.

Consider, for example, that for the generation born after 1998, there will never be a world without a ‘like’ button. In the interaction with facebook, ‘like’, as a verb, takes on an active, social sense slightly askew from its prior usages. When I was 15 years old, liking Radiohead meant I possessed a preference that was stationary, inert and internal, ready to be dragooned into action only once I was forced to choose between alternatives, a thing I might take out to show a potential friend or choose to keep to myself, a feeling that related me as much to myself as to a network of my teenage classmates. To like something in the facebook era, by contrast, is not only to have something but, in the stronger sense, to act. It is to make a mark in the world. ‘To like’ becomes not only to possess an internal orientation — a feeling or an affect or an emotion — but to engage in a form of communication, one directed to a crowd of friends and acquaintances, plus a less-than-predictable network of relations of relations. In being inseparable from this act of communication, ‘to like’ something in this way leaves behind the world of private preferences, secret pleasures, silent joys.

The meaning of words lies not only in their use but in the networks of incoherent, sometimes contradictory meanings they are used to express. Words divide up the world into manageable categories, leaving certain senses behind even as they pick up new ones. Perhaps the current generation will never use ‘like’ in ways that are noticeably different from how I do. But the new sense is one possible future of the word, and of the world. To finance is no longer limited to its original sense in English of paying a ransom to release a prisoner. Nor is liking something bound to have quite the same freight, or carry quite the same information, as when we were young.


Cogs

I’ve gotten some pushback on my idea that the European Commission might be a place where it’s ever possible to exercise ethics or transcend dehumanized institutional logics. The point I was trying to take from Duncan Kennedy is that we cannot know, until we have spent some time engaging with an organization, whether it is so internally inflexible and on balance harmful that it should be consigned to the scrap heap of history.

Now, I have some sympathy for a kind of utopia where people get to keep remaking the institutions in which they work (this is, in particular, Roberto Unger’s utopia). If this is your utopia, you could say that any amount of bureaucracy, stability and institutional authority is a shortcoming that needs to be fought against. That’s fine so far as utopias go, but in the meantime we don’t live in that world and a person’s got to eat. On the other hand, we do live in a world where moments of individual judgment can not only make a difference for some individual or group of individuals, but actually shift the waters of history one way or another, even if in only a tiny way. I agree that people who are soothed into waking sleep may miss those moments, letting the spirit of the machine win out, but working inside an organization doesn’t necessarily end up that way.

The pushback came in the form of the claim that, when it comes to the Commission, we have left the a priori behind and obviously entered a black hole of ethical action and judgment. But this is too easy. Even a preliminary attempt at thinking about the possible scenarios reveals a complex of possibilities.

1. So, for example, maybe the EU is a broken, unquestionably harmful, and irredeemable political project, and the best thing that can happen for Europe, democracy, social justice, all those things we care about, is that the whole edifice crumble into dust: no matter what comes after, it will be better than what we have now.

2. Or maybe the EU is a broken, harmful political project that should have been stopped before it got to where it is, but it is hard to know whether it should be reformed or scrapped, because it’s quite possible that what comes after it will be much worse.

3. Or maybe: the EU is a politically conflicted, conceptually contradictory political project. Its institutional logics improve the lives of some and worsen the lives of others; they empower some democratic wills while suppressing others.

A. The ethical and political valence of the EU project are determined only by the players at the top: the Council, and maybe sometimes/to some degree the ECJ. EC bureaucrats only ever have one choice: to quit their jobs, or to put into practice the logics of the machine determined at the top.

B. Same as A, except EC bureaucrats have a third choice, which is to be obstructionist and slow-moving in the implementation of logics they find distasteful.

4. Or, same as 3 (politically conflicted, conceptually contradictory) but the institutional logics aren’t fully determined at the top. Instead, the contents of those logics or normative structures are so open, so indeterminate, that there are opportunities to choose or at least exercise some judgment all the way down.

5. Same situation as 4 (politically conflicted logics, real opportunities for judgment), but the institutional culture is so bland, the daily practices so thoughtless, that no one who both cares about how the world is organized and who is capable of discerning the existence of ethical and/or political choices in the implementation of the Commission’s multiple logics, actually sticks around long enough to have moments to exercise that judgment.

6. Or, say, the EU’s multiple institutional logics are actually associated with different parts of the EC as an institution. To the degree that you are politically committed to one of those logics–say, gender equality at work–being part of the EC bureaucracy can provide an opportunity to work in a setting driven by a political logic that you care about. Of course, by supporting this work, one also lends legitimacy and institutional power to the Commission and to the EU project as a whole.

A. And furthermore, it might be that this is true, but that working in that setting nonetheless provides few opportunities for judgment or ethical action. At best, one is, paradoxically, a cog in a machine that one feels contributes to justice; at worst, a cog in a machine that contributes to someone else’s idea of justice, but not yours.

B. Or, in a slightly different scenario, there are opportunities for judgment and ethical action, but they only come to people with patience, political savvy, the intellectual chops for academia and the charm for sales.

Part of my point when it comes to choosing a job is that I don’t know which of these situations corresponds to the real world of the Commission. This is just an off-the-top-of-my-head typology of the unknowns one faces when thinking about what it means to work inside an organization. Even in the best-case scenario that the real world is scenario 6A, a person who goes to work there may not find themselves in the particular part of the organization where their particular skill set and commitments actually empower them to do anything that they care about, or anything that feels like making a difference. The whole “a priori” point is that I am not sure it’s possible to answer these questions without spending some real time in the belly of the beast.

There is a whole lot more to be said about both the ‘inside the job’ and ‘outside the job’ practices that can make living in the world compatible with a sense of an ethical self. My key advice for people trying to balance security with their political ideals is to have patience and hedge heavily against the lifestyle that seems to come pre-packaged with a career choice: don’t get used to a level of comfort (a mortgage, private school for the kids, a second property, the annual Caribbean vacation) that you may have to abandon if (when) you discover the job is killing you.


Make Work

A friend, who has the intellectual chops for academia, charm enough for sales, and the ethical heart of a British-style social drama, writes to ask if I would “kill him” if he told me he was entertaining thoughts of working for the European Commission.

The background here being not only that he’s young enough to still be choosing a career, but that he had previously expressed particular distaste for certain of those among his peers who he saw as headed to Brussels to participate in the make-work at the centre of the EU’s log-rolling, authoritarian market-making machine in return for the promise of reasonable work-life balance, job security and 5 weeks a year of paid vacation. This being a sentiment which, I can’t lie, I had some sympathy for.

“I got rather seduced,” (no doubt) “by a lovely lady telling me how I would have a great life working on things that matter to me.” (A committed feminist, she was, apparently). “All that, and with the possibility of a life outside of my professional life: i.e. 5 weeks of paid holiday a year.” (His addendum, somewhat hyperbolically: “I don’t want to end up 40 and alone. Ahhh… what do I do with my life!?”)

Now, as an aspiring teacher to a profession that is known for taking in young idealists and turning out depressed sociopaths, I’ve actually mulled somewhat over the question of how to prevent professional momentum from taking young people places they don’t want to go. I myself had a number of years where the question of what to do with my life bore down with the strength of a thousand suns. My response to him boils down a lot of my thoughts on the issue:

You will do well wherever you go, so long as you refuse to give up your inquisitive mind and critical perspective. The Commission could benefit from people who haven’t bought into the European project hook, line and sinker, and who know especially that they have alternatives if they end up feeling like they aren’t contributing to anything that matters. It couldn’t hurt, for the purposes of bringing some value to the democratic accountability of the place, either, that you haven’t yet ‘transposed’ your ‘modalities’ into the arcane vocabularies of Brussels English. In all seriousness, though, so much of what matters in your work isn’t “what’s your job?” but “how do you do your work? how do you relate to your work? how do you, as someone with an identity and personality separate from that work, relate to this ‘job’, this ‘thing you do’?” Also, you aren’t choosing a career now. Find something to do for the moment, but never stop thinking of it as an awesome 7-year post-doc that will have something come after.

In other words, the question, when it comes to work that involves judgment, creativity and thought, isn’t “what will your work be?” but “what will you make of your work?” Not, will you win the prize, but what will you do with it when you do?

Postlethwaite’s speech here, at the end of Brassed Off, provides a good tie-in for three caveats: first, these thoughts are a bit partial, and much of what I have to say was, it turns out, largely foreshadowed in questions raised by Duncan Kennedy in the early 1980s. In Rebels from Principle [pdf], a piece for the Harvard Law School Bulletin, he wrote:

the locus of conflict between oppression and liberation can’t be conceptualized as always outside us. It is inside us as well, inside any liberal or left organization, and also inside the apparently monolithic opposing organizations, like corporate law firms. I think it follows that there are no strategies for social transformation that are privileged a priori — either in the sense that they designate the right place to struggle because struggling in that place will lead most certainly to the overthrow of illegitimate hierarchy and alienation, or even in the much more limited sense that some struggles have an absolute moral priority over others.

Second, I am troubled by the fact that this advice can be given to lawyers and certain other professionals, but seems a much poorer fit for, say, the heroes of all of those British-style social dramas. I suppose that the capacity to have some power, some say, in what your work means or how it’s organized, is one of the reasons for the success of the labour movement.

Third, none of this means that we should define ourselves by our job. In fact, I mean exactly the opposite. There are gardens to be planted, communities to be built, children to be raised and music to be played. Ultimately, there is a world to be (re)made.

But sometimes — often — these things, too, are work, and much of it takes judgment, and creativity, and thought. And we are defined in large part by how we do our work. Half the battle is choosing where to apply ourselves, and where not to: when the music matters, and when it matters bollocks.

Out of the wilderness

So I love–love–Freddie de Boer. There is, given the defensiveness in his writing, obviously a big slice of the American left-liberal blogosphere that absolutely hates him for his politics, or for the way he expresses his politics, or for the timing of his expression of his politics or…something. But I find his engagement with questions of ethics and strategy, his resistance to the fetishization of American machine politics as the sole locus of social change in the direction of justice, his earnest, forthright, sometimes fearless articulation of his own take on various moments—I find all of it inspiring, energizing, so often just on the nose. The fact that he is willing to say “maybe browbeating young people isn’t the best way to get people thinking about class and intersectionality on university campuses” while also having the capacity to powerfully express the essential and sublime irrationality of human generosity in the face of a culture addicted to stories starring homo economicus, is enough to give me some hope about the future of Western civilization.

But I want to talk about something else. I want to talk about jealousy. I simply cannot understand how Freddie writes so much. He is one, maybe two years younger than me. He has published papers, has other papers in the works, and is almost done his dissertation. He has had to spend most of the six years of his doctorate, unlike my set-up, handling teaching responsibilities alongside his own research, plus attending to various on-campus commitments. It’s clear, from his writing, that he doesn’t succeed in his professional life by unplugging from popular culture, either. Quite the contrary. His blog posts indicate that he is active on facebook, scouring his friends’ feeds for signs of the American pop-liberal zeitgeist, that he still finds time to read some fiction, that he has movies and kinds of movies that he likes.

I don’t know how he does it. But I have an inkling. Let’s set aside for a moment whether I am as smart as Freddie, whether I have his analytical capacity. Give me the benefit of the doubt for a moment that I’m a smart guy, that I can tackle and manipulate ideas with the best of them. Okay, so the question is: why am I not producing?

There are, to my mind, two ways to put the answer. On the one hand, I am tormented, haunted, by the breadth and depth of my ignorance. There is an old joke chart that points out that the real gift of learning isn’t so much knowledge as it is ignorance: you may increase the number of “things” you know over time, but the horizon of things of which you are ignorant also expands. Getting how something works, how it really works, always seems nearly within grasp, so that just one more article, one more book, will be all that’s needed to settle the questions that you set off with. It is, beyond this, extremely hard work for the curious mind to remember that not every point of confusion can be explained or explored now; that the journey into the wilderness started with a purpose, and that trekking through it without leaving a trail may mean adventure for you, but is ultimately of no use to anyone else.

The other way of putting it is in terms of fear rather than distraction. I often feel that the things I want to express are, if not complicated, at least a bit out of left field. It feels to me like it will be a waste of time, or an embarrassment (I can’t even spell embarrassment without a spell check), to write things online or even in publication that I haven’t fully thought out. Objectively, I think this is garbage: the world is generally full of generous, thoughtful people who want to check their own prejudices and intuitions against those of others. It’s also laughably narcissistic: how many people would really care enough about what I have to say that the relative quality of what I put out matters? Nonetheless, it’s the psycho-cognitive situation in which I find myself.

So wish me luck. I am going to try and put out more rough drafts–more missives from the wilderness. But I am also going to try sending out stuff that feels incomplete for potential publication. It’s like my master’s supervisor always said: academic work is an iterative process.

In the end, there is no ethical progress, and no strategy, in silence.

The Rub

Even if you hope that the Scots choose (choose!) to stay with England and Wales and Northern Ireland (and the Cornish :D), this piece by Irvine Welsh is an essential expression of what’s thrilling about the Scottish vote, which is that it represents a vindication of something true and real and powerful about the democratic principle. If the Scots choose to stay, it feels as if ‘politics’ will go back to process, to back and forth, to the channeling of imagination through a frame that, no matter how real the consequences, can’t help being weighed down by its similarity to American Idol. I don’t think that the Scots need to leave to feed this more expansive, imaginative, open idea of what democracy can be. But being given the opportunity to think more openly about how to organize society–aye, there’s the rub.

Teach the Controversy

Over at the Soros-funded Institute for New Economic Thinking, there have recently been a few blog posts about the potential of, and the need for, economics curriculum reform. In a recent example, Abdul Alassad characterises the problem as follows:

rational debates of ideas has been replaced by dogma, to the detriment of society. A dogma is a set of principles laid down by an authority as incontrovertibly true. Today, economics is taught as a set of assumptions that are unquestionably infallible, static, and undeniably true.

This misses the mark. Very few trained economists think that the dominant economic models are universal or incontrovertible. Rather, the danger of current economic teaching lies in the presentation of single models as the baseline for the analysis of economic problems, within a broader framework that relies on a single mode of economic analysis (the neoclassical synthesis). The result is that those whose exposure to economic concepts is limited to undergrad teaching come away with an attitude to heterodoxy equivalent to a grade-schooler’s belief that there’s no such thing as negative numbers. Until you teach students how to manipulate unfamiliar ideas, questions that depend on those ideas will seem nonsensical.

So what’s the alternative? The failure of neoclassical models to predict or prevent the financial crisis, and their complicity in the perpetuation of inequality under thirty years of neoliberalism, could be used as an argument for simply replacing the dominant paradigm with another. The push for a more historical approach to economics provides a different, and likely more fruitful, answer: teach the controversy. It’s not clear why undergrads shouldn’t be exposed to the incompatible models of the Keynesians and the monetarists, the marginalists and the institutionalists, Marx and Hayek, Friedman and Coase. For that matter, why shouldn’t they spend more time engaging with hard questions about the relationship between economic variables and real-world social practice, à la David Graeber or Thomas Piketty?

Of course, undergrads exposed to a variety of models with often conflicting predictions about how policy will affect outcomes, and to theoretical texts that raise questions about the true nature of economic practice, may end up somewhat confused about how the real world works. But this is exactly as it should be: if the last ten years have taught us anything, it’s that the world needs fewer, not more, people convinced that they know how to organize an economy.

We’re All Capitalists Now

Those concerned about inequality often place emphasis on the “income share of labour,” i.e. the share of national income doled out in wages rather than in profits, treating it as a useful index of “how workers are doing.” This is logical enough insofar as workers are the ones, so the story goes, who have to rely on wages to eat.

In this sharp if somewhat technical review of Piketty’s Capital in the 21st Century, Peter Lindert (there’s a link to the pdf here) reiterates how unhelpful this measure is.

Shares of labor versus capital in current income…have never proved to be good predictors of inequality, and continue to be poorly correlated with it over time and space.

One caveat that Piketty raises in Capital is that returns on capital, high or low, only matter for his analysis insofar as they are concentrated in the hands of the few. If capital wealth were equally held by everyone, or if returns on capital were doled out to everyone by the state on a per capita basis, then increasing the rate of return on capital would at worst have no impact on income inequality and, if returns to labour were unequally distributed, could actually increase overall income equality. Here’s Lindert again:

Having 60 percent of national income go to labor incomes could reflect perfect equality, with 60 percent of the population equally sharing labor incomes and the other 40 percent equally sharing property incomes. Or it could mean horrific inequality if the 60 percent going to labor were shared by everyone except one propertied ruling family.

Of course, one doesn’t need such extreme hypotheticals to make the point. In today’s industrialized economies, many if not most workers actually rely solely on income from capital for the last 10-25% of their lives (i.e. when they are retired). In such societies, one way that the income share of labour could fall is if workers all decided to be richer in their retirement than during their working years – i.e. to save more in the early days and spend more on the back end. Of course, to say “we are all capitalists now” does not deny the massive inequality in the holding of wealth, or that some are for all intents and purposes wholly excluded from any ownership of the commonweal. But it does demand that we shift attention away from abstract class categories toward questions of actual distribution and how economic structures shape its evolution.
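Lindert’s hypotheticals can be made concrete with a few lines of arithmetic. The sketch below (my own toy numbers, not Lindert’s) computes the Gini coefficient for two ten-person economies that have an identical 60 percent labour share but wildly different distributions:

```python
def gini(incomes):
    """Gini coefficient of a list of incomes (0 = perfect equality)."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Standard formula: G = sum_i (2i - n - 1) * x_i / (n * sum(x)), i = 1..n
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * total)

# Economy A: 6 workers share 60 units of labour income equally,
# 4 rentiers share 40 units of capital income equally -> everyone earns 10.
economy_a = [10.0] * 10

# Economy B: 9 workers share the same 60 units of labour income,
# one propertied family takes all 40 units of capital income.
economy_b = [60.0 / 9] * 9 + [40.0]

print(gini(economy_a))  # 0.0 -> perfect equality
print(gini(economy_b))  # 0.3 -> substantial inequality, same labour share
```

Both economies report a labour share of exactly 0.6, yet one is perfectly equal and the other markedly unequal: the labour share alone carries no information about the distribution of income.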

Lindert’s first example points to a very different problem. In most people’s minds, what Marx called exploitation–the idea that some were able to get an income from the social weal without working–was synonymous with the immiseration of the working classes. Yet one possible future (one that concerns some proponents of a basic income) is a world in which there is a reasonable level of income equality, but in which only some people have access to (or choose to get) the additional benefits derivable from work. It seems unlikely that workers in such a world could be called exploited; it is certain that the income share of labour would still tell us close to nothing about how just the society was.