
Don’t ask your female students to babysit for you

If you supervise a graduate student, or a student doing an honours thesis, the offer to do some research assistance for you can be an attractive proposition. If your student is lucky, he or she may actually get to do research in this research assistant job, for which he or she will get vanishingly small credit but through which, at least, he or she may actually learn things that are valuable to his or her development as an intellectual. True, in many cases research assistance work turns out to be little more than footnote checking, but even here the work allows the student to read a text he or she has some subject-matter interest in. While the work might sometimes end up having little to no relation to his or her actual academic interests, even in these marginal cases there may be some peripheral learning about the nitty-gritty ins and outs of the academic grind. Booking hotels, ordering letterhead and answering emails asking whether a conference participant can get partial reimbursement for a flight upgrade to business class (no, they can’t) may all be taxing, but there are certain inescapable, technical dimensions to academic life. While having a handle on the mess that goes into planning a conference (or getting a book published, etc.) may not always be what a student signed up for when he or she took on a “research assistant” position, being able to handle these technical details–and technical glitches–is still educational.

Now: even when such jobs get advertised across the department and include some vague description of the work to be performed, and even in the ideal case when there is an opportunity to discuss the details of the job with you before you start working together, the students who end up taking on the work are still inevitably entering a Faustian bargain. Working in most cases for substandard wages justified by the perfect storm of university budget cuts, neoliberal managerialism that valorizes almost any labour-side cost-cutting, and wage floors for research assistants negotiated under university-wide collective bargaining that end up being treated as ceilings, these students justify the choice to themselves with the hope that the professors they work with will be able to write them a reference letter, a faith that work inside the university is somehow more intrinsically beneficial than work outside it, and a belief, as per above, that they may learn something more in an experience working alongside an academic whose work they admire (at least by osmosis) than in working part time outside research proper.

How true is this narrative on the part of the student? The only fair answer is “it depends.” Sometimes a positive experience with a student will leave the two of you happy, life-long collaborators; perhaps your experience will, if nothing else, provide you with sufficient interpersonal knowledge to write a sincere, supportive reference letter; if you are generous, the student may actually learn something from you that would have been impossible to get out of a classroom experience alone. On the other hand, some of your interactions with research assistants will inevitably be limited to a single meeting, a few documents emailed back and forth, and a few hours of their time that they will never get back but that, if you are lucky, will still have made a contribution to your academic projects. Tant pis.

Or more strongly: caveat emptor. For, despite all of the downsides of the Faustian bargain laid out above, we can at least say that it is a bargain, viz. a bilateral agreement. Nothing in this hypothetical forces the student to answer the job posting for a research assistant, nothing requires them to take the job once they have heard what it actually involves—or even to keep it once the contents turn out not to match what was on the label—and nothing stops them from finding some other, quite possibly more lucrative, part-time job if they actually need financial support to complete their studies.

This story has a few small problems, and one big one. On the one hand, it is relatively easy to find holes to poke in this simple version. Some international students can only take on-campus jobs. Quitting any job is awkward, let alone a job under a professional whose field you want to continue working in after you quit. Students, unfortunately, don’t always know better, even if we can say they should. Furthermore, students shouldn’t be made to bear all the responsibility for professors who are simply terrible at delegating, and worse at managing.

But if the student taking the job is actually your student, the story fails much more catastrophically. The situation that plays out when you act both in an academic-supervisory relationship to the student and as their boss rehearses all the arguments about the nature of real power in employment relationships which labour lawyers have long offered to economists who believe that the existence of labour markets somehow implicitly disciplines employer behavior. Namely, it is a situation in which it is very difficult for your student to say no. Your students quite rightly believe that you are a central component in the apparatus they use to push along their academic career. They rely on you for reference letters, not only for subsequent degrees and future job applications, but also for funding applications (internal, national or international) and occasionally for conferences and symposia. There may be a collection of departmental administrative tasks you are required to fulfill on their behalf. You are expected, in every case, to help usher your student’s research project toward completion, which involves at minimum signing off once it has reached a stage where it can be read by other academics but might also include, if the student is fortunate, reading drafts, discussing roadblocks, and suggesting paths for further research. Often, bless them, these students will look up to you, or admire you, or at the very least admire your work. But even if you have ended up together by chance, from the perspective of the student you essentially act as a monopolistic supplier of a large number of very important services.

It would be easy at this point to lapse into an overdetermined analogy from the economics of service provision between you and your students: overpricing of services under inadequate levels of competition, which could be accounted for as an underpricing of their services in the resulting barter relationship or, equivalently, as a mispricing of the services that they would provide to you. Luckily, there is a much easier way to explain the resulting conflict of interest. The dynamic the two of you face is this: your students depend on you to say ‘yes’ to a number of requests that they might make of you over the space of a year or longer, and every ‘no’ that greets a request you make of them will inevitably cast the shadow of some potential ‘no’ from you, no matter how remote. The relationship between you and your students is not reciprocal, cannot be. Yet the logic of reciprocation–I scratch your back, you scratch mine–is hard to shake, and harder to deny. You may want to believe that a student’s refusal to do some task for you, or their decision to quit a task halfway through, might have no effect on your treatment of them as a supervisor. But if so, you should ask yourself of your past and current students, of those who have done work for you and those who haven’t, which you know more about, have spent more time with, or feel more generous toward.

In part, the worst aspects of this dynamic can be avoided by the steps identified above: opening the job to all the students in the department, including details about the nature of the work, and discussing it with candidates before hiring someone. If the work involves substantive research, then your own charges are likely to be not only the most qualified, but also the most interested. Yet work that involves substantive research is also least likely to be a bad deal, and therefore minimizes the chance that the student will feel stuck, because of their inability to say ‘no’ to you, doing work they don’t want to do.

Okay. Now let us remind ourselves that we live in a world where women attend university at much higher rates than men, but are still underrepresented in the highest levels of politics, industry and academia. Let us take as an example that women, now nearly half of law school entrants in the United States, remain a tiny sliver of partners at high-profile law firms. But note specifically that though women are overrepresented at universities, they remain less than half of the population of doctoral candidates. That successfully tenured women make up a smaller portion still of all tenured academics. That when academic job applications are submitted under female names, they are systematically rated as less competent, even when the content of the applications is otherwise the same. It’s probably worth thinking for a moment about how the perpetuation of these inequalities is linked simultaneously both to women’s persistently outsized contribution to childcare responsibilities (and indeed, to all care responsibilities) and to the perpetuation of stereotypes about women’s natural role reflected in the idea that any given woman is likely to take significant time off of work to engage in child-rearing. Perhaps too we can think about how people’s perceptions of their capacities, and especially women’s perceptions of their capacities, are strongly influenced by both stereotypes and how others characterize their capacities.

Is it really necessary for me to spell out the rest? Does the sense now swim into view of why asking your female students, and your female students alone, to perform childcare responsibilities for you, might contribute to the perpetuation of academic inequality between men and women? When certain of your students are asked to spend some portion of their time, not doing substantive research with you, not editing or footnoting your work, not even phoning an airline to ask for a free upgrade to business class for a conference keynote (“I am sorry, ma’am, but that’s just not possible”), but instead performing a task that is stereotypically in the bailiwick of women, is it clear why this could only be understood as a material disadvantage to them, given that the opportunity cost is precisely time that could be spent on their own intellectual and academic development, including by answering belligerent conference emails for some other professor? Is it clear how, when you ask not just some students but your students to do this work, this material disadvantage is very hard to attribute in any way to them, given the tribulations involved in saying no to one’s supervisor? Might asking your female students to do this work, when you would not ask your male students to come over and do your gardening, risk giving them a sense that you somehow view them in a less full light, academically, than you view their male colleagues? Might this not-so-subtle implication not only hurt their feelings, but detract from their desire or willingness or confidence in their own work, in ways that materially detract from their success? If they did avoid such hurt feelings, could they do so other than by taking your request as an affront, which would, even if it didn’t impact on their confidence, nonetheless sour your relationship with them? Might a soured relationship with you harm their academic careers as well?

Don’t do it. Just don’t. Childrearing is hard! Sometimes you need a babysitter. If you have no shame, go ahead and post the job through the departmental email list, and see if you get any bites. But there are professionals, trustworthy professionals, who can be hired to do this work for you. Finding them, it’s true, often takes research. Luckily, you are a research professional, and if you don’t feel like finding a babysitter is a good use of your time, you can always pay one of your students to find one for you. Experience with balancing childcare responsibilities with an academic career, after all, is a lesson we can all learn.

And now a rant from our sponsor

This image does not imply endorsement; it is simply a reminder of what impassioned speech can look like. Please contact me for removal for any reason.

A friend writes with his impression of the Dutch:

Amsterdam is lovely, somehow a less offensive variety of gentrification and urban development, some of it quite stunning as with the incorporation of the old harbour to the north into the city. Weather can be a real bitch, but has been unseasonably warm. Going away for a few days to the Frisian Islands tomorrow, walking across the mudflats, biking across the barren landscape of the dunes. … I’m liking the Dutch. They’re very critical, yes, but it’s a positive disposition, not one of resignation. No wallowing in melancholy, so often touted as the hallmark of true interpersonal intimacy down south, but a sober, practical attitude that navigates and negotiates emotions in as far as they ultimately enable us to transform and move forward. Very affirmative. Less intuitive, perhaps, and not such élan and fatalism, but not inert, not shallow, and not cold.

One of the things that I realized, linguistically and philosophically, when I was forced into reading Adorno for three months, is that negation has no origins in anything bad, unfortunate, or miserable. The idea of positivity being associated with fortune and happiness seems to have arrived from the soft-headed, hippy-dippy psychological school of “positive thinking,” which presumed (and now preaches the idea) that, if you imagine something in your head, i.e. if you really try to “posit it” (whence positive) and take for granted the premise of its becoming, this will somehow bring it into real-world existence. I mean look, I’m a social constructivist, I think that wide-scale belief is the very substance of our social world, but shit like The Gift mistakes a sociological insight for a psychological one, and reduces the profound premise of existentialism (“we always have the freedom to act even when there are consequences”) to a patently false pretense of self-help (“you can do anything if you set your mind to it!”). Anyway, as a result, “positivity” became associated with happiness and success and good tidings–and “negativity” with the sense of inviting their opposites.

Of course, this is doubly unfortunate: not only because it universalizes a misreading of “positive” that makes references to both “positive law” and “positive social science” nigh-incomprehensible to anyone who lives outside the university, but also because to negate something need not mean replacing a thing with its opposite–it simply implies putting something else in its place. Thus, ideally, the “negative” encompasses that part of thought and practice that goes beyond the imagining of “what if things were such and such a way” to the more practical, fraught task of thinking “what if the nominally existent was replaced with something else” or the even more charged practice of demanding “this nominally existent thing should be replaced with another.” To negate is simply to deny, to say no to the merely existent.

The critic is not the cynic, but literally one who judges, a person not only capable of saying both “yes” and “no” but also of stopping to say “are you sure” and especially “am I?” There is something sick, I think, about cultural practices rooted in the belief that problems can be solved simply by saying “yes” to any idea, new or old, so long as it is well-packaged and expressed with enthusiasm or certainty. I suppose, compared to the dominant strand of the American zeitgeist, that a country willing to raise a quizzical eyebrow, pause before jumping onto the wagon of every fad that bristles with enthusiasm, and reject the magical thinking of “by believing it, we can make it so” will look like an elephant graveyard of nay-saying Eeyores. But nothing could be further from the truth. For inasmuch as the Russian stereotype of fatalism is anything more than a stereotype, it has nothing to do with being critical and has everything in common with the eager-beaver American disease: whereas in the lands of Slavic stereotype, there is an almost overweening willingness to say yes to everything that already is–no matter how bad–and no to any idea about how things might be better, in the always-on digital Manhattan of Twitter, Entertainment Tonight and BuzzFeed, the almost-laughable but ultimately tragic logic of the TEDtalk circuit doles out gold stars to every nincompoop self-deluded enough to stand in front of a crowd and expound breathlessly on an idea that promises everything–everything–and at almost no cost.

What I am getting at here is of course that being critical is a constructive disposition, and even a “positive” one, but just not in the insane sense in which that word is batted around the Oprah-bookclub lowlands of North American public discourse. The alternative to critique is a society where everyone is shitting themselves with excitement about a future in which we all get to be the next Steve Jobs, all while 2% of the population is in jail, literacy rates are declining and social mobility is lurching in the direction of the ancien régime. It is almost enough to drive you out of your house and into a bathtub in the street. I’ll take boring, slightly wry, but ultimately well-managed conservatism over that hokum any day.

Like it means something

James Gleick’s The Information starts with the simultaneous appearance, in 1948, of both the first transistor and the first scientific discussions of ‘the bit’ as a fundamental unit of measurement. Overall, the book tells the story of how those two technologies — the engineering breakthrough contained in that now-ubiquitous miniaturized form of digital storage and the scientific paradigm shift of that now-universal way of measuring just what is being stored — conspired together to transform our experience of the world. His intention is to recapture some of the credit for the massive social upheavals occasioned by the digital revolution on behalf of ideas: not to reject the importance of the technical knowledge that allows us to build transistors, but to make room as well in the historical account for the radical shift in theoretical knowledge that renders it even sensible to imagine DNA as speech, tennis scores as music or an image as a coded message. Thinking about how to get more conversations over the same phone line, or how to ensure a message has been received correctly, or how to fit more patient data into a smaller space, or how to make a recorded song sound more like the original, will in each case require some metric of how much of the thing you have. We ended up in a world where we not only came up with measurements for each case, but the same measurement for every one. Here’s Gleick on how big a change that represented:

For the purposes of science, information had to mean something special. Three centuries earlier, the new discipline of physics could not proceed until Isaac Newton appropriated words that were ancient and vague — force, mass, motion, and even time — and gave them new meanings. Newton made these terms into quantities, suitable for use in mathematical formulas. Until then, motion (for example) had been just as soft and inclusive a term as information. For Aristotelians, motion covered a far-flung family of phenomena: a peach ripening, a stone falling, a child growing, a body decaying. That was too rich. Most varieties of motion had to be tossed out before Newton’s laws could apply and the Scientific Revolution could succeed.
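Shannon’s analogous move was to turn ‘information’ into just such a quantity, measured in bits. A rough sketch of what that measurement looks like in practice (the little function and the examples here are my own, purely illustrative, and nothing from Gleick’s book):

```python
import math
from collections import Counter

def entropy_bits(message: str) -> float:
    """Shannon entropy of a message's symbol frequencies, in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# The same yardstick applies to anything that can be written down as symbols:
print(entropy_bits("HTHTHTHT"))      # ~1.0: a fair coin toss is worth one bit per flip
print(entropy_bits("AAAAAAAA"))      # ~0.0: no surprise, no information
print(entropy_bits("hello, world"))  # ~3.0: repeated letters mean fewer bits than a uniform alphabet
```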

In my own work, trying to capture how policy makers and the state imagine capital (including in my recent rambling thoughts on the subject), I wrestle a lot with a similar set of transformations that occurred in the birth of finance as a discrete field. I just took a three-day seminar on the history of financial crises and no one but me seemed to think it much mattered that ‘finance’ didn’t exist as a coherent object of reference until the 20th century, and lacked much of its current valence until the 1970s. Finance was a word that meant the means or capacity to pay one’s debts and, by the late 19th century, had also come to refer to careful thinking about income and expenses. There was banking (and banking failures), money (and currency crises), public finance (and power and territory reordered in the service of paying off royal debts). But when the word gets used today, it can’t be disentangled from images of the Wolves of Wall Street, can’t help but act as mediator between the interest rates set by the Fed and the dividends paid out by Apple (on which, see JW Mason’s solid analysis), can’t escape from a seemingly natural home in ‘the markets.’

For those in the know, the constitution of finance inevitably depends, in some inchoate way, on the Basel Committee on Banking Supervision; for those who aren’t, the Basel Committee is just one part of an arcane object, or one location in a country lying beyond the economic frontier, necessary but dangerous, complicated and obscure, wild but tamable for those who have the right kind of knowledge. But that obscurity results partially from a gradual expansion of referents over the last 200 years, from a term with a narrow meaning little differentiable from ‘bookkeeping,’ to a bloated pastiche that includes practices which used to be derided as immoral ‘speculation,’ sold as ‘insurance’, offered as opportunities for ‘investment’, or understood as ‘depositing money in a bank.’

But it occurred to me today that the transformation of the world hand in hand with the transformation of the word is not always a process that’s driven by the search for ordered, scientific clarity.

Consider, for example, that for the generation born after 1998, there will never be a world without a ‘like’ button. In the interaction with facebook, ‘like’, as a verb, takes on an active, social sense slightly askew from its prior usages. When I was 15 years old, liking Radiohead meant I possessed a preference that was stationary, inert and internal, ready to be dragooned into action only once I was forced to choose between alternatives, a thing I might take out to show a potential friend or choose to keep to myself, a feeling that related me as much to myself as to a network of my teenage classmates. To like something in the facebook era, by contrast, is not only to have something, but is in the stronger sense to act. It is to make a mark in the world. ‘To like’ becomes not only to possess an internal orientation — a feeling or an affect or an emotion — but to engage in a form of communication, one directed to a crowd of friends and acquaintances, plus a less-than-predictable network of relations of relations. In being inseparable from this act of communication, ‘to like’ something in this way leaves behind the world of private preferences, secret pleasures, silent joys.

The meaning of words lies not only in their use but in the networks of incoherent, sometimes contradictory meanings they are used to express. Words divide up the world into manageable categories, leaving certain senses behind even as they pick up new ones. Perhaps the current generation will never use ‘like’ in ways that are noticeably different from how I do. But it is one possible future of the word, and of the world. To finance is no longer limited to its original sense in English of paying a ransom to release a prisoner. Nor is liking something bound to have quite the same freight, or carry quite the same information, as when we were young.

Cogs

I’ve gotten some pushback on my idea that the European Commission might be a place where it’s ever possible to exercise ethics or transcend dehumanized institutional logics. The point I was trying to take from Duncan Kennedy is that we cannot know, until we have spent some time engaging with an organization, whether it is so internally inflexible and on balance harmful that it should be consigned to the scrap heap of history.

Now, I have some sympathy for a kind of utopia where people get to keep remaking the institutions in which they work (this is, in particular, Roberto Unger’s utopia). If this is your utopia, you could say that any amount of bureaucracy, stability and institutional authority is a shortcoming that needs to be fought against. That’s fine so far as utopias go, but in the meantime we don’t live in that world and a person’s got to eat. On the other hand, we do live in a world where moments of individual judgment can not only make a difference for some individual or group of individuals, but actually shift the waters of history one way or another, even if in only a tiny way. I agree that people who are soothed into waking sleep may miss those moments, letting the spirit of the machine win out, but working inside an organization doesn’t necessarily end up that way.

The pushback came in the form of the claim that, when it comes to the Commission, we have left the a priori behind and obviously entered a black hole of ethical action and judgment. But this is too easy. Even a preliminary attempt at thinking about the possible scenarios reveals a complex of possibilities.

1. So, for example, maybe the EU is a broken, unquestionably harmful, and irredeemable political project, and the best thing that can happen for Europe, democracy, social justice, all those things we care about, is that the whole edifice crumble into dust: no matter what comes after, it will be better than what we have now.

2. Or maybe the EU is a broken, harmful political project that should have been stopped before it got to where it is, but it is hard to know whether it should be reformed or scrapped, because it’s quite possible that what comes after it will be much worse.

3. Or maybe: the EU is a politically conflicted, conceptually contradictory political project. Its institutional logics improve the lives of some and worsen the lives of others; they empower some democratic wills while suppressing others.

A. The ethical and political valence of the EU project is determined only by the players at the top: the Council, and maybe sometimes/to some degree the ECJ. EC bureaucrats only ever have one choice: to quit their jobs, or to put into practice the logics of the machine determined at the top.

B. Same as A, except EC bureaucrats have a third choice, which is to be obstructionist and slow-moving in the implementation of logics they find distasteful.

4. Or, same as 3 (politically conflicted, conceptually contradictory) but the institutional logics aren’t fully determined at the top. Instead, the contents of those logics or normative structures are so open, so indeterminate, that there are opportunities to choose or at least exercise some judgment all the way down.

5. Same situation as 4 (politically conflicted logics, real opportunities for judgment), but the institutional culture is so bland, the daily practices so thoughtless, that no one who both cares about how the world is organized and is capable of discerning the existence of ethical and/or political choices in the implementation of the Commission’s multiple logics actually sticks around long enough to have moments to exercise that judgment.

6. Or, say, the EU’s multiple institutional logics are actually associated with different parts of the EC as an institution. To the degree that you are politically committed to one of those logics–say, gender equality at work–being part of the EC bureaucracy can provide an opportunity to work in a setting driven by a political logic that you care about. Of course, by supporting this work, one also lends legitimacy and institutional power to the Commission and to the EU project as a whole.

A. And furthermore, it might be that this is true, but that working in that setting nonetheless provides few opportunities for judgment or ethical action. At best, one is, paradoxically, a cog in a machine that one feels contributes to justice; at worst, a cog in a machine that contributes to someone else’s idea of justice, but not one’s own.

B. Or, in a slightly different scenario, there are opportunities for judgment and ethical action, but they come only to people with patience, political savvy, the intellectual chops for academia and the charm for sales.

Part of my point when it comes to choosing a job is that I don’t know which one of these situations corresponds to the real world of the Commission. This is just an off-the-top-of-my-head typology of the unknowns one faces when thinking about what it means to work inside one organization. Even in the best-case scenario that the real world is scenario 6A, a person who goes to work there may not find themselves in the particular part of the organization in which their particular skill set and commitments actually empower them to do anything that they care about or which feels like making a difference. The whole “a priori” thing is that I am not sure it’s possible to answer these questions without spending some real time in the belly of the beast.

There is a whole lot more to be said about both the ‘inside the job’ and ‘outside the job’ practices that can make living in the world compatible with a sense of an ethical self. My key advice for people trying to balance security with their political ideals is to have patience and hedge heavily against the lifestyle that seems to come pre-packaged with a career choice: don’t get used to a level of comfort (a mortgage, private school for the kids, a second property, the annual Caribbean vacation) that you may have to abandon if (when) you discover the job is killing you.

 

 

Make Work

A friend, who has the intellectual chops for academia, charm enough for sales, and the ethical heart of a British-style social drama, writes to ask if I would “kill him” if he told me he was entertaining thoughts of working for the European Commission.

The background here being not only that he’s young enough to still be choosing a career, but that he had previously expressed particular distaste for certain of those among his peers who he saw as headed to Brussels to participate in the make-work at the centre of the EU’s log-rolling, authoritarian market-making machine in return for the promise of reasonable work-life balance, job security and 5 weeks a year of paid vacation. This being a sentiment which, I can’t lie, I had some sympathy for.

“I got rather seduced,” (no doubt) “by a lovely lady telling me how I would have a great life working on things that matter to me.” (A committed feminist, she was, apparently). “All that, and with the possibility of a life outside of my professional life: i.e. 5 weeks of paid holiday a year.” (His addendum, somewhat hyperbolically: “I don’t want to end up 40 and alone. Ahhh… what do I do with my life!?”)

Now, as an aspiring teacher to a profession that is known for taking in young idealists and turning out depressed sociopaths, I’ve actually mulled somewhat over the question of how to prevent professional momentum from taking young people places they don’t want to go. I myself had a number of years where the question of what to do with my life bore down with the strength of a thousand suns. My response to him boils down a lot of my thoughts on the issue:

You will do well wherever you go, so long as you refuse to give up your inquisitive mind and critical perspective. The Commission could benefit from people who haven’t bought in to the European project hook, line and sinker, and who know especially that they have alternatives if they end up feeling like they aren’t contributing to anything that matters. It couldn’t hurt, for the purposes of bringing some value to the democratic accountability of the place, either, that you haven’t yet ‘transposed’ your ‘modalities’ into the arcane vocabularies of Brussels English. In all seriousness, though, so much of what matters in your work isn’t “what’s your job?” but “how do you do your work? how do you relate to your work? how do you, as someone with an identity and personality that is separate from that work, relate to this ‘job’, this ‘thing you do’?” Also, you aren’t choosing a career now. Find something to do for the moment, but never stop thinking of it as an awesome 7-year post-doc that will have something come after.

In other words, the question, when it comes to work that involves judgment, creativity and thought, isn’t “what will your work be?” but “what will you make of your work?” Not, will you win the prize, but what will you do with it when you do?

Postlethwaite’s speech here, at the end of Brassed Off, provides a good tie-in for three caveats: first, these thoughts are a bit partial, and much of what I have to say was, it turns out, largely foreshadowed in questions raised by Duncan Kennedy in the early 1980s. In Rebels from Principle [pdf], a piece for the Harvard Law School Bulletin, he wrote:

the locus of conflict between oppression and liberation can’t be conceptualized as always outside us. It is inside us as well, inside any liberal or left organization, and also inside the apparently monolithic opposing organizations, like corporate law firms. I think it follows that there are no strategies for social transformation that are privileged a priori — either in the sense that they designate the right place to struggle because struggling in that place will lead most certainly to the overthrow of illegitimate hierarchy and alienation, or even in the much more limited sense that some struggles have an absolute moral priority over others.

Second, I am troubled by the fact that this advice can be given to lawyers and certain other professionals, but seems a much poorer fit for, say, the heroes of all of those British-style social dramas. I suppose that the capacity to have some power, some say, in what your work means or how it’s organized, is one of the reasons for the success of the labour movement.

Third, none of this means that we should define ourselves by our job. In fact, I mean exactly the opposite. There are gardens to be planted, communities to be built, children to be raised and music to be played. Ultimately, there is a world to be (re)made.

But sometimes — often — these things, too, are work, and much of it takes judgment, and creativity, and thought. And we are defined in large part by how we do our work. Half the battle is choosing where to apply ourselves, and where not to: when the music matters, and when it matters bollocks.

Teach the Controversy

Over at the Soros-funded Institute for New Economic Thinking, there have recently been a few blog posts about the potential of, and the need for, economics curriculum reform. In a recent example, Abdul Alassad characterises the problem as follows:

rational debates of ideas has been replaced by dogma, to the detriment of society. A dogma is a set of principles laid down by an authority as incontrovertibly true. Today, economics is taught as a set of assumptions that are unquestionably infallible, static, and undeniably true.

This misses the mark. Very few trained economists think that the dominant economic models are universal or incontrovertible. Rather, the danger of current economic teaching lies in the presentation of single models as the baseline for the analysis of economic problems, within a broader framework that relies on a single mode of economic analysis (the neoclassical synthesis). The result is that those whose exposure to economic concepts is limited to undergrad teaching come away with an attitude to the heterodoxy equivalent to the grade-schooler belief that there’s no such thing as negative numbers. Until you teach them how to manipulate unfamiliar ideas, questions that depend on those concepts will seem nonsensical.

So what’s the alternative? The failure of neoclassical models to predict or prevent the financial crisis, and their complicity in the perpetuation of inequalities under thirty years of neoliberalism, could be used as an argument for simply replacing the dominant paradigm with another. The push for a more historical approach to economics provides a different, and likely more fruitful, answer: teach the controversy. It’s not clear why undergrads shouldn’t be exposed to the incompatible models of the Keynesians and the monetarists, marginalists and institutionalists, Marx and Hayek, Friedman and Coase. For that matter, why shouldn’t they spend more time engaging with hard questions about the relationship between economic variables and real-world social practice, à la David Graeber or Thomas Piketty?

Of course, undergrads exposed to a variety of models, with often conflicting opinions about how policy will affect outcomes, and to theoretical texts that raise questions about the true nature of economic practice, may end up somewhat confused about how the real world works. But this is exactly as it should be: if the last ten years have taught us anything, it’s that the world needs fewer, not more, people convinced that they know how to organize an economy.

“Project, Opposition, and most Embarrassingly, Truth”

cross-posted from EUI Global and Transnational Perspectives Working Group


credit: McHugh-Russell

Over at n+1, an editor’s essay on the fragmented pasts and fraught promise of World Literature has spawned a small collection of thoughtful responses. In trying to capture a sense of what weltliteratur might be for, and why the contestants always seem to have fallen short of the mark (“Alas, Rushdie; alas, Naipaul.”), the editors string together an impressive array of traditions and examples, showing how each contributes to a synthesis that fails as much as a whole as in its individual parts.

Steeped as the genre is in the altogether modern desire to express the universal in the particular—i.e. to not only craft a particular voice, but to somehow choose voices that can stand in for the whole—the editors conclude that perhaps the disappointments of the genre arise not from the particular attempts that have been made of it, but from the shape of the ambition itself.

One of the responses, from Poorva Rajaram and Michael Griffith, dismisses the essay as a lament for a “right kind of universalism” that is not only unrealized, but unrealizable. They end by suggesting to those unsatisfied with the output of the spirit of capitalism as embodied in the publishing houses of northern capitals that they might simply “read more to their taste.”

Is the complaint fair? The essay is an attempt to investigate not what literature should be consumed, but how those engaged in its curation can support human connection across difference (what Rorty would call the “education of the sentimental imagination”) and stay grounded in a commitment to the political value of aesthetic freedom, without becoming Global Literature, i.e. “an empty vessel for the occasional self-ratification of the global elite.”

Joshua Cohen’s letter responds clearly to this ambition. In his view, the problem isn’t with the ends, but with the means. Literature’s relevance has passed: “Social consciousness has become the new beauty. The political has usurped the aesthetic.” Be that as it may, it’s not clear how much this differs from the essay’s own conclusions, which lay out, by reference to Trotsky (!), a blueprint for an alternative, “internationalist” literature. Rather than an aesthetic practice with universalist pretensions, the concept here would be an explicit project that beats the path to freedom and solidarity by countering prevailing politics and tastes, rooted in the effort to articulate truths.

The important thing that Rajaram and Griffith seem to have forgotten is that the essay’s authors are not anonymous readers of the stuff put out by those northern-capital publishers, but rather a group of 20- and 30-somethings who edit a surprisingly influential literary journal published just down the street from them. When those editors provide the outline of a project for literature, the curious reader might, instead of suggesting sources that fit the bill, inquire into what exactly those editors have been printing for the last ten years. Because when one starts to look at the diversity, anger, curiosity and honesty that one finds between the journal’s pages, one can only conclude that the essay is neither reading guide nor lamentation.

It’s a manifesto.