Persistent Error


One of the things that I now worry about more than I used to, given the news, is that everything that goes on in the world is within an order of magnitude the result of people spinning the wheel on a random process that allows folly to linger as orthodoxy just long enough to be embarrassing to our descendants—like bellbottoms or crocs. I mean, we got to the moon, and iPods exist, so it can’t be performative error-echoes all the way down, right? Still, sometimes when it comes to efforts to order our lives together, it feels like every field of knowledge—law, economics, psychology, sociology—works this way.

This premise calls into serious question the lengths I go to to be rigorous in my work, to be fair to my sources, to feel satisfied that my claims are backed by relevant, substantial evidence. If winning ideas are drawn from a pot without any bias toward nuanced claims or good ideas or rigorous evidence, why do we bother doing what we do? It’s not just that maybe I could have been smoking this whole time if it doesn’t actually cause cancer, it’s more broadly a sense of “the worst are full of passionate intensity and the best/lack all conviction.”

Why be so fastidious about reading the bus schedule correctly, when all it means in practice is that, some person having convinced the bus driver that the schedule was wrong, we are sat in certainty that we got the time right, in a town the bus isn’t actually coming back to until after the Christmas holidays?



At the New York Review of Books, Masha Gessen writes about Trump and the language of the autocrat. It’s a great essay about an important subject, but it seems to go astray in its diagnosis.

Trump also has a talent for using words in ways that make them mean nothing. Everyone is great and everything is tremendous. Any word can be given or taken away. NATO can be “obsolete” and then “no longer obsolete”—this challenges not only any shared understanding of the word “obsolete” but our shared experience of linear time.

And then there is Trump’s ability to take words and throw them into a pile that means nothing.

Here is an excerpt, chosen from many similar ones, from his interview with the AP about his first hundred days in office:

Number one, there’s great responsibility. When it came time to, as an example, send out the fifty-nine missiles, the Tomahawks in Syria. I’m saying to myself, “You know, this is more than just like, seventy-nine [sic] missiles. This is death that’s involved,” because people could have been killed. This is risk that’s involved, because if the missile goes off and goes in a city or goes in a civilian area—you know, the boats were hundreds of miles away—and if this missile goes off and lands in the middle of a town or a hamlet …. every decision is much harder than you’d normally make. [unintelligible] … This is involving death and life and so many things. … So it’s far more responsibility. [unintelligible] ….The financial cost of everything is so massive, every agency. This is thousands of times bigger, the United States, than the biggest company in the world.

Here is a partial list of words that lose their meaning in this passage: “responsibility,” the number “fifty-nine” and the number “seventy-nine,” “death,” “people,” “risk,” “city,” “civilian,” “hamlet,” “decision,” “hard,” “normal,” “life,” the “United States.” Even the word “unintelligible,” inserted by the journalist, means nothing here, because how can something be unintelligible when uttered during a face-to-face interview?

This seems wrong to me. The words when used in these ways are not drained of meaning. They are not rendered meaningless. Words, when used these ways, are doing things, and those doings are part of the meaning. A bird that lands on a wire does not stop having wings. Someone who is pointing his fingers in opposite directions is still pointing. Catch a tire on your fish hook. You are still fishing, and you’ve still caught something. Ask yourself: why is Trump using these words, not others? ‘Responsibility’ is in the quote because it is supposed to be, and if Trump gets away with having it enter and exit sidewise, it is because people do not listen the way that we think: they hear buzz words connected by a frayed filament of grammar, and are thereby mollified, appeased, pacified. They fill in the blanks. The blanks do not work the way we think either. Yes, no, there is no propositional sense, no report. But a great deal is being communicated, and deliberately so, through connotation, association, subject-verb-object and Trump being the speaker.

We do not like it. But no one wants to eat a tire, either. And that does not make it correct to say that pappy caught nothing fishing, so we’re eating nothing for dinner. As Austin taught us, when we ask, mouth full of rubber, “is there any salt?” we expect more than just an account of what’s in the kitchen cupboard.


What’s Water

Andrew Sullivan is well known as the Catholic Republican who, because of his personal experiences, took on causes that were unpopular with other conservatives and thereby made himself something of a cause célèbre among American progressives. He deserves praise for the courage of standing against his tribe on principle, and he’s also a great writer. Nonetheless, I am always wary of his arguments, as they often derive from old-school, small-c conservative commitments: to fundamental human fallibility, to the resulting necessity of centralized authority and hierarchy, and to a suspicion of anything too new. I think in his recent and widely shared piece on the meaning of Donald Trump, this conservative reading of his sources has gotten the better of him.

The picture of America Sullivan draws has little to do with democracy as we know it (and reads selectively from Plato, to boot). At best, it describes the cultural commitments of an increasingly narrow slice of liberal middle-class America; at worst, only the nightmarish fantasies of its opponents. What is America actually suffering from? The bottom half of earners are doing worse now than in the mid-1970s. Black people are worse off than they were back then, too, and are incarcerated at four to five times the rate they were then. Wages are stagnant, the distribution of wealth is unthinkably uneven, and those who were responsible for the financial calamity that led to dispossession and despair were never held to account–“no banker went to jail,” as they say.

If their diagnoses differ, it is clear that Occupy Wall Street and the Tea Party, along with the supporters of both Bernie and Trump, have been inspired by a deep sense of injustice about the distribution of power and wealth in American society. And in that sense of injustice, they are right.

To describe America’s problem as a surfeit of democracy is thus to bend the meaning of the word to its breaking point. The influence of money on American politics, both before and after Citizens United, has corrupted the ideal (and here I steal from Larry Lessig) of a republic for, by and of the people. The centrality of wealth to the American political system, given the massive inequality of wealth, renders the system a loose oligarchy. The problem is not, per Sullivan, the democratic licentiousness of the populace. The problem is in the democratic deficit of the political sphere.

You might say that money doesn’t matter, because the candidates who win aren’t the ones who raise the most money. This ignores the question of what the field of candidates would look like, and what sorts of policies they would be promoting, if the whole process weren’t awash in cash. Members of Congress spend two-thirds of their time fundraising. Water doesn’t determine which fish will win a race, but it’s still pretty important to the outcome.

Sullivan’s version of America’s problem requires him to recount a just-so story about the rise of Trump that is neither credible in structure nor a good fit to the actual history. If his reading of Plato is right, his story goes, then after the elites have been toppled, a dictator arises by exploiting antipathy and distrust of the elites. But if the elites have been toppled, where is the political benefit in challenging them? Is it not a more believable hypothesis, given the evidence, that Americans are raging against elite corruption because there has in fact been a centralization of power, and a disproportionate allocation of benefits, to a narrow few? Beyond whether his parable makes sense on its face, there is the problem that he can fit Plato’s narrative about the slide from democracy to tyranny to the American case only by imagining that political systems somehow develop according to some evolutionary logic of ideal types. And that means letting the actual political actors off the hook.

As many have argued, Trump’s ascendance is hardly without precursors in American political discourse. Trump is the harvest of what the Republican party has sown: exploitation of racial difference for political gain? a disdain for any principle that stands in the way of electoral advantage? a willingness to sacrifice substance for rhetorical splash? Has Sullivan not heard of Karl Rove? But the Democratic party doesn’t get off scot-free, either. As well documented by this excellent historical review in n+1, Bill Clinton’s victory in the 1990s was rooted in his party’s turn away from labour, the middle class and the poor, expressed most clearly in the party’s simultaneous deification of free trade and its denial of trade’s distributive costs. Economists like to make great hay of the overall gains to be had from open trade, and depending on where you are standing, the big numbers do go up slightly. But the size of those numbers doesn’t do much for Flint, Michigan. For the last twenty-five years, the Democratic party as much as the Republican has been perfectly willing to embrace a policy that enriches the country at the expense of the working class, while blaming the working class for their resulting unemployment and penury, and actually making life harder for those who find themselves out of work.

Trump is certainly wrong to place the blame for any of this on China, Mexico, or the Muslims. And in his diagnosis of America’s ills, in his prescriptions to overcome them and in his campaign methods, he’s not only wrong but dangerous. But his popularity lies not only in the novelty of his scapegoating; he is also one of only two candidates in this election who have refused to look at the struggles of America’s popular classes and place the blame back on them.

America’s problem isn’t that there is too much democracy, but that there is too little. And the rise of Bernie Sanders, Occupy–even the Tea Party–suggests that Americans may be ready to re-balance the ledger. We don’t need Plato’s cynicism to see that clearly.

Rule Thyself/Read Together

Just Do It

One of the unexplored concepts for a themed blog or tumblr or twitter account or…—anyway, a concept which lies fallow for reasons that will quickly be made clear—would have a title something like “read together.”

To explain: the upsides of occupying my position in a global division of labour that nominally assigns me the task of reading books and articles, and writing down my thoughts about what I have read, cannot be overstated. Among the downsides is living in a professional community that is continually, constantly training me in the habit of using the passive voice; another is that the same voracious curiosity which undergirds the satisfaction I draw from my work is also a constant source of frustration, insofar as I am forced, in trying to make my way from the reading to the writing, to leave interesting thoughts by the wayside. And nothing is more frustrating than having to abandon an apparent parallel, a subtle link, between two sources. Part of the recipe for being a successful academic, apparently, lies in cultivating a boundless curiosity while maintaining a strict discipline over the paths we allow it to take us down. A capacity for caprice, certainly, but also the prudence to almost never exercise it.

The tone of “read together” would thus be imperative: it would offer two excerpts from my reading online, in a tone of invitation, and with an implied plea for the reader to do something with materials that I am sure could produce insight if only their relation to one another were fully explored. So, for example, from a masterful review of the issues raised by the trial of Oscar Pistorius, South Africa’s famed “blade runner,” for the murder of his girlfriend Reeva Steenkamp:

The full citation from Corinthians tattooed on Pistorius’s upper back reads:

I do not run like a man running aimlessly;
I do not fight like a man beating the air;
I execute each stride with intent;
I beat my body and make it my slave
I bring it under my complete subjection
To keep myself from being disqualified
After having called others to the contest.

The line about making my body my slave is not in most translations from Corinthians, nor is subjection described as ‘complete’. Pistorius was raising the stakes. He was also punishing, or even indicting, himself.

And, from a shorter piece on the causes of the recent rise in injuries in the NFL:

Advertisements are now composed entirely of jump cuts between rippling bodies yelling, barking, testifying to some endless purgatory of reps, sets, and routines. Menacing homilies about commitment linger on screen to be joined by this model of shoe or that style of gear. “Every single day,” we hear Tom Brady chant stoically, “every single day,” as his image, multiplied a thousandfold by technology, drills relentlessly with itself, perfectly in sync, in a macabre echo of authoritarian spectacle. “You are the sum of all your training,” Under Armour threatens us, before urging, finally, at the end, “Rule Yourself.” In its unalloyed praise for the eternal necessity of discipline, the sports commercial is a worthy heir to Puritan austerity. Excess physique is grace rewarded. Lean muscle is proof that God loves us and wants us to be strong.

See? There is something there. There is also, at a stretch, a cute echo back to the molding of the self in the academic life, in that “Rule Thyself.” But I tell myself I have other places that I must direct my energies, so let me just quote from Horkheimer and Adorno:

It is not merely that domination is paid for by the alienation of men from the objects dominated: with the objectification of spirit, the very relations of men—even those of the individual to himself—were bewitched.

No doubt somebody, somewhere, is already putting these pieces together. After all, somebody is always one step ahead, better prepared, more disciplined. That’s why we have to keep training, right?

Don't ask your female students to babysit

If you supervise a graduate student, or a student doing an honours thesis, the offer to do some research assistance for you can be an attractive proposition. If your student is lucky, he or she may actually get to do research in this research assistant job, for which he or she will get almost vanishingly small credit but through which, at least, he or she may actually learn things that are valuable to their development as an intellectual. True, in many cases research assistance work turns out to be little more than footnote checking, but even here the work allows the student to read a text he or she has some subject-matter interest in. While the work might sometimes end up having little to no relation to his or her actual academic interests, even in these marginal cases there may be some peripheral learning about the nitty-gritty ins and outs of the academic grind. Booking hotels, ordering letterhead and answering emails asking whether a conference participant can get partial reimbursement for a flight upgrade to business class (no, they can’t) may all be taxing, but there are certain inescapable, technical dimensions to academic life. While having a handle on the mess that goes into planning a conference (or getting a book published, etc.) may not always be what a student signed up for when he or she took on a “research assistant” position, being able to handle these technical details–and technical glitches–is still educational.

Now: even when such jobs get advertised across the department and include some vague description of the work to be performed, and even in the ideal case when there is an opportunity to discuss the details of the job with you before you start working together, the students who end up taking on the work are still inevitably entering a Faustian bargain. Working in most cases for substandard wages, justified by the perfect storm of university budget cuts, neoliberal managerialism that valorizes almost any labour-side cost-cutting, and wage floors for research assistants negotiated under university-wide collective bargaining that end up being treated as ceilings, these students justify the choice to themselves by a hope that the professors they work with will be able to write them a reference letter, a faith that work inside the university is somehow more intrinsically beneficial than work outside it, and a belief, as per above, that they may learn something more (at least by osmosis) working alongside an academic whose work they admire than in working part-time outside research proper.

How true is this narrative on the part of the student? The only fair answer is “it depends.” Sometimes a positive experience with a student will leave the two of you happy, lifelong collaborators; perhaps your experience will, if nothing else, provide you with sufficient interpersonal knowledge to write a sincere, supportive reference letter; if you are generous, the student may actually learn something from you that would have been impossible to get out of a classroom experience alone. On the other hand, some of your interactions with research assistants will inevitably be limited to a single meeting, a few documents emailed back and forth, and a few hours of their time that they will never get back but that, if you are lucky, will still have made a contribution to your academic projects. Tant pis.

Or more strongly: caveat emptor. For, despite all of the downsides of the Faustian bargain laid out above, we can at least say that it is a bargain, viz. a bilateral agreement. Nothing in this hypothetical forces the student to answer the job posting for a research assistant, nothing requires them to take the job once they have heard what it actually involves—or even to keep it once the contents turn out not to match what was on the label—and nothing stops them from finding some other, quite possibly more lucrative, part time job if they actually need financial support to complete their studies.

This story has a few small problems, and one big one. On the one hand, it is relatively easy to find holes to poke in this simple version. Some international students can only take on-campus jobs. Quitting any job is awkward, let alone a job under a professional whose field you want to continue working in after you quit. Students, unfortunately, don’t always know better, even if we can say they should. Furthermore, students shouldn’t be made to bear all the responsibility for professors who are simply terrible at delegating, and worse at managing.

But if the student taking the job is actually your student, the story fails much more catastrophically. The situation that plays out when you act both in an academic-supervisory relationship to the student and as their boss rehearses all the arguments about the nature of real power in employment relationships which labour lawyers have long offered to economists who believe that the existence of labour markets somehow implicitly disciplines employer behavior. Namely, it is a situation in which it is very difficult for your student to say no. Your students quite rightly believe that you are a central component in the apparatus they use to push along their academic career. They rely on you for reference letters, not only for subsequent degrees and future job applications, but also for funding applications (internal, national or international) and occasionally for conferences and symposia. There may be a collection of departmental administrative tasks you are required to fulfill on his or her behalf. You are expected, in every case, to help usher your student’s research project toward completion, which involves at minimum signing off once it has reached a stage where it can be read by other academics, but might also include, if the student is fortunate, reading drafts, discussing roadblocks, and suggesting paths for further research. Often, bless them, these students will look up to you, or admire you, or at the very least admire your work. But even if you have ended up together by chance, from the perspective of the student you essentially act as a monopolistic supplier of a large number of very important services.

It would be easy at this point to lapse into an overdetermined analogy from the economics of service provision between you and your students: overpricing of services under inadequate levels of competition, what could be accounted for as an underpricing of their services in the resulting barter relationship or, equivalently, as a mispricing of the services that they would provide to you. Luckily, there is a much easier way to explain the resulting conflict of interest. The dynamic the two of you face is this: your student depends on you to say ‘yes’ to a number of requests that they might make of you over the space of a year or longer, and every ‘no’ that greets a request you make of them will inevitably cast the shadow of some potential ‘no’ from you, no matter how remote. The relationship between you and your students is not reciprocal, cannot be. Yet the logic of reciprocation–I scratch your back, you scratch mine–is hard to shake, and harder to deny. You may want to believe that a student’s refusal to do some task for you, or their decision to quit a task halfway through, would have no effect on your treatment of them as a supervisor. But if so, you should ask yourself, of your past and current students, of those who have done work for you and those who haven’t, which you know more about, have spent more time with, or feel more generosity toward.

In part, the worst aspects of this dynamic can be avoided by the steps identified above: opening the job to all the students in the department, including details about the nature of the work, and discussing it with candidates before hiring someone. If the work involves substantive research, then your own charges are likely to be not only the most qualified, but also the most interested. Yet work that involves substantive research is also the least likely to be a bad deal, and therefore minimizes the chance that the student will feel stuck, by their inability to say ‘no’ to you, doing work they don’t want to do.

Okay. Now let us remind ourselves that we live in a world where women attend university at much higher rates than men, but are still underrepresented at the highest levels of politics, industry and academia. Let us take as an example that women, now nearly half of law school entrants in the United States, remain a tiny sliver of partners at high-profile law firms. But note specifically that though women are overrepresented at universities, they remain less than half of the population of doctoral candidates. That successfully tenured women are a smaller portion still of all tenured academics. That when academic job applications are submitted under female names, they are systematically rated as less competent, even when the content of the applications is otherwise the same. It’s probably worth thinking for a moment about how the perpetuation of these inequalities is linked simultaneously to women’s persistently outsized share of childcare responsibilities (and indeed, of all care responsibilities) and to the perpetuation of stereotypes about women’s natural role, reflected in the idea that any given woman is likely to take significant time off work to engage in child-rearing. Perhaps too we can think about how people’s perceptions of their capacities, and especially women’s perceptions of their capacities, are strongly influenced both by stereotypes and by how others characterize their capacities.

Is it really necessary for me to spell out the rest? Does the sense now swim into view of why asking your female students, and your female students alone, to perform childcare responsibilities for you might contribute to the perpetuation of academic inequality between men and women? When certain of your students are asked to spend some portion of their time not doing substantive research with you, not editing or footnoting your work, not even phoning an airline to ask for a free upgrade to business class for a conference keynote (“I am sorry, ma’am, but that’s just not possible”), but instead performing a task that is stereotypically in the bailiwick of women, is it clear why this can only be understood as a material disadvantage to them, given that the opportunity cost is precisely time that could be spent on their own intellectual and academic development, including by answering belligerent conference emails for some other professor? Is it clear how, when you ask not just some students but your students to do this work, that material disadvantage is very hard to attribute in any way to them, given the tribulations involved in saying no to one’s supervisor? Might asking your female students to do this work, when you would not ask your male students to come over and do your gardening, risk giving them a sense that you somehow view them in a less full light, academically, than you view their male colleagues? Might this not-so-subtle implication not only hurt their feelings, but detract from their desire or willingness or confidence in their own work, in ways that materially detract from their success? And if they did avoid such hurt feelings, could they do so other than by taking your request as an affront, which, even if it didn’t dent their confidence, would nonetheless sour your relationship with them? Might a soured relationship with you harm their academic careers as well?

Don’t do it. Just don’t. Childrearing is hard! Sometimes you need a babysitter. If you have no shame, go ahead and post the job through the departmental email list, and see if you get any bites. But there are professionals, trustworthy professionals, who can be hired to do this work for you. Finding them, it’s true, often takes research. Luckily, you are a research professional, and if you don’t feel like finding a babysitter is a good use of your time, you can always pay one of your students to find one for you. Experience with balancing childcare responsibilities with an academic career, after all, is a lesson we can all learn.

And now a rant from our sponsor


A friend writes with his impression of the Dutch:

Amsterdam is lovely, somehow a less offensive variety of gentrification and urban development, some of it quite stunning as with the incorporation of the old harbour to the north into the city. Weather can be a real bitch, but has been unseasonably warm. Going away for a few days to the Frisian Islands tomorrow, walking across the mudflats, biking across the barren landscape of the dunes. … I’m liking the Dutch. They’re very critical, yes, but it’s a positive disposition, not one of resignation. No wallowing in melancholy, so often touted as the hallmark of true interpersonal intimacy down south, but a sober, practical attitude that navigates and negotiates emotions in as far as they ultimately enable us to transform and move forward. Very affirmative. Less intuitive, perhaps, and not such élan and fatalism, but not inert, not shallow, and not cold.

One of the things that I realized, linguistically and philosophically, when I was forced into reading Adorno for three months, is that negation has no etymological relation to the bad, the unfortunate, or the miserable. The idea of positivity being associated with fortune and happiness seems to have arisen from the soft-headed, hippy-dippy psychological school of “positive thinking,” which presumed (and now preaches the idea) that, if you imagine something in your head, i.e. if you really try to “posit it” (whence positive) and take for granted the premise of its becoming, this will somehow bring it into real-world existence. I mean look, I’m a social constructivist, I think that wide-scale belief is the very substance of our social world, but shit like The Gift mistakes a sociological insight for a psychological one, and reduces the profound premise of existentialism (“we always have the freedom to act, even when there are consequences”) to a patently false pretense of self-help (“you can do anything if you set your mind to it!”). Anyway, as a result, “positivity” became associated with happiness and success and good tidings–and “negativity” with the sense of inviting their opposites.

Of course, this is doubly unfortunate: not only because it universalizes a misreading of “positive” that makes references to “positive law” and “positive social science” nigh-incomprehensible to anyone who lives outside the university, but also because to negate something need not mean replacing a thing with its opposite–it simply implies putting something else in its place. Thus, ideally, the “negative” encompasses that part of thought and practice that goes beyond the imagining of “what if things were such and such a way” to the more practical, fraught task of thinking “what if the nominally existent were replaced with something else,” or the even more charged practice of demanding “this nominally existent thing should be replaced with another.” To negate is simply to deny, to say no to the merely existent.

The critic is not the cynic, but literally one who judges, a person not only capable of saying both “yes” and “no” but also of stopping to ask “are you sure?” and especially “am I?” There is something sick, I think, about cultural practices rooted in the belief that problems can be solved simply by saying “yes” to any idea, new or old, so long as it is well-packaged and expressed with enthusiasm or certainty. I suppose, compared to the dominant strand of the American zeitgeist, that a country willing to raise a quizzical eyebrow, pause before jumping onto the wagon of every fad that bristles with enthusiasm, and reject the magical thinking of “by believing it, we can make it so” will look like an elephant graveyard of nay-saying Eeyores. But nothing could be further from the truth. For inasmuch as the Russian stereotype of fatalism is anything more than a stereotype, it has nothing to do with being critical and has everything in common with the eager-beaver American disease: whereas in the lands of Slavic stereotype, there is an almost overweening willingness to say yes to everything that already is–no matter how bad–and no to any idea about how things might be better, in the always-on digital Manhattan of Twitter, Entertainment Tonight and BuzzFeed, the almost-laughable but ultimately tragic logic of the TEDtalk circuit doles out gold stars to every nincompoop self-deluded enough to stand in front of a crowd and expound breathlessly on an idea that promises everything–everything–and at almost no cost.

What I am getting at here is, of course, that being critical is a constructive disposition, and even a “positive” one, just not in the insane sense in which that word is batted around the Oprah-bookclub lowlands of North American public discourse. The alternative to critique is a society where everyone is shitting themselves with excitement about a future in which we all get to be the next Steve Jobs, all while 2% of the population is in jail, literacy rates are declining and social mobility is lurching in the direction of the ancien régime. It is almost enough to drive you out of your house and into a bathtub in the street. I’ll take boring, slightly wry, but ultimately well-managed conservatism over that hokum any day.

Like it means something

James Gleick’s The Information starts with a simultaneous appearance in 1948 of both the first transistor and the first scientific discussions of ‘the bit’ as a fundamental unit of measurement. Overall, the book tells the story of how those two technologies — the engineering breakthrough contained in that now-ubiquitous miniaturized form of digital storage and the scientific paradigm shift of that now-universal way of measuring just what is being stored — conspired to transform our experience of the world. His intention is to recapture some of the credit for the massive social upheavals occasioned by the digital revolution on behalf of ideas: not to reject the importance of the technical knowledge that allows us to build transistors, but to make room as well in the historical account for the radical shift in theoretical knowledge that renders it even sensible to imagine DNA as speech, tennis scores as music or an image as a coded message. Thinking about how to get more conversations over the same phone line, or how to ensure a message has been received correctly, or how to fit more patient data into a smaller space, or how to make a recorded song sound more like the original, will in each case require some metric of how much of the thing you have. We ended up in a world where we not only came up with measurements for each case, but the same measurement for every one. Here’s Gleick on how big a change that represented:
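Gleick’s point that one yardstick serves every case can be made concrete with a small sketch (my own illustration, not from the book): Shannon entropy measures, in bits per symbol, the information content of any stream of symbols, whether those symbols encode speech, tennis scores, or pixels.

```python
from collections import Counter
from math import log2

def entropy_bits(message: str) -> float:
    """Shannon entropy of a message's symbol distribution, in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    # H = -sum(p * log2(p)) over the observed symbol frequencies
    return -sum((n / total) * log2(n / total) for n in counts.values())

# The same yardstick applies regardless of what the symbols stand for:
# a perfectly balanced two-symbol stream carries exactly 1 bit per symbol,
# while a stream of one repeated symbol carries none.
print(entropy_bits("ab" * 50))   # 1.0
print(entropy_bits("aaaa"))      # 0.0 (a constant stream tells you nothing)
```

The point of the sketch is only that the formula never asks what the symbols mean; that indifference is what makes the bit a universal measure.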

For the purposes of science, information had to mean something special. Three centuries earlier, the new discipline of physics could not proceed until Isaac Newton appropriated words that were ancient and vague — force, mass, motion, and even time — and gave them new meanings. Newton made these terms into quantities, suitable for use in mathematical formulas. Until then, motion (for example) had been just as soft and inclusive a term as information. For Aristotelians, motion covered a far-flung family of phenomena: a peach ripening, a stone falling, a child growing, a body decaying. That was too rich. Most varieties of motion had to be tossed out before Newton’s laws could apply and the Scientific Revolution could succeed.

In my own work, trying to capture how policy makers and the state imagine capital (including in my recent rambling thoughts on the subject), I wrestle a lot with a similar set of transformations that occurred in the birth of finance as a discrete field. I just took a three-day seminar on the history of financial crises, and no one but me seemed to think it much mattered that ‘finance’ didn’t exist as a coherent object of reference until the 20th century, and lacked much of its current valence until the 1970s. Finance was a word that meant the means or capacity to pay one’s debts, and by the late 19th century, also came to refer to careful thinking about income and expenses. There was banking (and banking failures), money (and currency crises), public finance (and power and territory reordered in the service of paying off royal debts). But when the word gets used today, it can’t be disentangled from images of the Wolves of Wall Street, can’t help but act as mediator between the interest rates set by the Fed and the dividends paid out by Apple (on which, see JW Mason’s solid analysis), can’t escape from a seemingly natural home in ‘the markets.’

For those in the know, the constitution of finance inevitably depends, in some inchoate way, on the Basel Committee on Banking Supervision; for those who don’t, the Basel Committee is just one part of an arcane object, or one location in a country lying beyond the economic frontier, necessary but dangerous, complicated and obscure, wild but tamable for those who have the right kind of knowledge. But that obscurity results partially from a gradual expansion of referents over the last 200 years, from a term with a narrow meaning little differentiable from ‘bookkeeping,’ to a bloated pastiche that includes practices which used to be derided as immoral ‘speculation,’ sold as ‘insurance’, offered as opportunities for ‘investment’, or understood as ‘depositing money in a bank.’

But it occurred to me today that the transformation of the world that goes hand in hand with the transformation of a word is not always driven by the search for ordered, scientific clarity.

Consider, for example, that for the generation born after 1998, there will never be a world without a ‘like’ button. In the interaction with Facebook, ‘like’, as a verb, takes on an active, social sense slightly askew from its prior usages. When I was 15 years old, liking Radiohead meant I possessed a preference that was stationary, inert and internal, ready to be dragooned into action only once I was forced to choose between alternatives, a thing I might take out to show a potential friend or choose to keep to myself, a feeling that related me as much to myself as to a network of my teenage classmates. To like something in the Facebook era, by contrast, is not only to have something, but in a stronger sense to act. It is to make a mark in the world. ‘To like’ becomes not only to possess an internal orientation — a feeling or an affect or an emotion — but to engage in a form of communication, one directed to a crowd of friends and acquaintances, plus a less-than-predictable network of relations of relations. In being inseparable from this act of communication, ‘to like’ something in this way leaves behind the world of private preferences, secret pleasures, silent joys.

The meaning of words lies not only in their use but in the networks of incoherent, sometimes contradictory meanings they are used to express. Words divide up the world into manageable categories, leaving certain senses behind even as they pick up new ones. Perhaps the current generation will never use ‘like’ in ways noticeably different from how I do. But it is one possible future of the word, and of the world. To finance is no longer limited to its original sense in English of paying a ransom to release a prisoner. Nor is liking something bound to have quite the same freight, or carry quite the same information, as when we were young.