Smarter ducks

Over on the New York Times Economix blog, an argument for high taxation and robust government spending using data from, of all places, the conservative Heritage Foundation, aimed at those who think that people who pay low taxes are ‘lucky duckies.’ An example of the cute analysis:

Equatorial Guinea: According to the Republican-leaning Heritage Foundation, those who live in this small country in sub-Saharan Africa are lucky duckies indeed. Because of recently discovered oil deposits, the citizens of Equatorial Guinea pay less than 1 percent of the gross domestic product in taxes. The comparable figure for the United States is 26.9 percent of G.D.P., according to Heritage.

However, Equatorial Guinea doesn’t seem to be a very pleasant place to live. The people are poor and have little freedom. Heritage says that “persistent institutional weaknesses impede creation of a more vibrant private sector” and “the rule of law is weak.” This sounds suspiciously as if government is too small to do its job properly. But I’m sure that the citizens of Equatorial Guinea don’t mind having a dysfunctional government; after all, they’re lucky duckies.

Perhaps the most interesting part of this short piece – one of the clearest, quickest arguments for the idea that working markets require a strong, effective government – is that it comes from Bruce Bartlett, a former policy advisor to Reagan, Bush Sr., and Ron Paul. It demonstrates that even someone who has worked for a headstrong libertarian type sees the need for an effective government presence in any good society.

The conclusion of his article demonstrates another point, however. Bartlett believes that high taxes and low regulation (as in Denmark) are preferable to lower taxes and less ‘business freedom.’ So it’s worth keeping in mind that convincing people that government is important and necessary to a functioning economy doesn’t mean they’ll be convinced it should be on the side of a functioning society. Still, if you can lead a duck to water…

Remystifying “Digital Literacy”?

In the comments, my dear friend Everett remarks on a recent piece (http://nyti.ms/qhON3m) appearing in the NY Times:

It is a lament and a diatribe about the decline of the thinker and the rise of the information junkie in an increasingly “post-idea” and “post-Enlightenment” world where our capacity for rational thought has allegedly diminished, despite all of our technological advances. Neal Gabler contends that information itself might be partially to blame: “It may seem counterintuitive that at a time when we know more than we have ever known, we think about it less.” He remains skeptical about the possibilities afforded by social media and the Internet. They are part of the problem. While the online world excels at facilitating countless micro-discussions and exchanges on almost every conceivable topic, this hyperactive space tends to crowd out avenues for the slow churning of grand arguments and theorizations.

In one way, Everett’s regular commenting on my blog (and, as you’ll see, at least one element of this post itself) goes some way toward providing a quirky counter-current to Gabler’s position.

The argument of the piece (somewhat ironically, given its appearance in a print publication, if Gabler is right) is muddled on what the problem is, what its sources are, and what the implications might be, but his point about digital media can probably be summed up in his claim that “you can’t think and tweet at the same time…” His big idea: short-form media is bad for big ideas.

While it is true that the average blog post is shorter than the average book, the problem with critiques like this is that they are stuck using a metric of information density borrowed uncritically from the age of Gutenberg. Sure, it’s impossible to summarize big ideas in 140 characters. But a huge portion of people use Twitter not as a way to communicate directly, but as a way to point to other kinds of communication. Why does the NY Times have its own dedicated micro-URL? Because of how frequently people were using Twitter to link to articles in the Times. So when Gabler claims that Twitter is bad “because tweeting…is largely a burst of either brief, unsupported opinions or brief descriptions of your own prosaic activities…a form of distraction or anti-thinking…” he’s providing an unfairly narrow image of how social media is used.

Another example. It’s true that I waste some amount of time on Facebook watching videos of cats chasing lasers (though my favourite online video remains this classic of cats who shoot lasers). But most of my time there is spent following links posted by friends, reading the comments they write on these articles, commenting on their positions, and, when I’m lucky, getting into an even more extended conversation on these topics. The reality is that the majority of my discussion of “ideas” now happens not IRL, but on Facebook. This concerns me, certainly. But not because it heralds the doom of thought itself.

One could respond that, among those using online technologies, my network of friends is anomalous, and that, though Gabler’s vision of Twitter may be narrow, he’s right about the majority of online content. Well, fine – but then the only important question is: are people talking about big ideas more or less than they did before Twitter? Because I am willing to wager, at even odds, that most people in Western societies have always talked about the mundane details of their lives, most of the time. Were the biggest celebrities in 1899 intellectuals, actors, or war heroes?

But let me get back to Gutenberg: no doubt, reading a lot of articles online is different from reading an entire book. But it’s not clear to me which form of reading allows more thinking. As I read Gabler’s piece, I stumbled on his use of “Gresham’s Law” (which is sad, because I spent much of August reading political economy). So I looked it up on Wikipedia. It turns out that, basically, Gresham found (by accident) that bad money will always drive good money out of circulation where both must be accepted at the same face value. Which also implies that my bothering to provide a hard link to the Wikipedia page on Gresham is kind of silly, because, as my own experience indicates, if people stumble over ideas while traipsing through the blogosphere, they will do the legwork to find out more. Indeed, a lot of my online activity leads into a web of related readings: some followed links, some watched videos. It’s not deep reading but, rather, networked reading.

What are the implications of this change in the nature of reading – for thinking, for ideas, and for culture? A fascinating question, no doubt, and one which is being addressed obliquely in the literary sphere. But knowing the answers, like knowing whether we talk to each other less (or more) about ‘things that matter’ than we might have in 1899, would require actual research, rather than just rehashing the warning Plato gave in the Phaedrus against the written word (it’s also online) whenever a new communications technology appears. I suppose communications analysis suffers from its own Gresham’s Law.

Now, here’s a fascinating idea: I noticed at a public presentation yesterday that the habit people have of looking up unfamiliar terms or sources isn’t limited to reading online. They do it in public, too. I wonder what my blogosphere will think about that.

Demystifying “Digital Literacy”

Over at her New York Times blog, Virginia Heffernan quotes some pretty hyperbolic claims about the future of work in the United States – inter alia, that 65% of the jobs held by today’s grade-school kids will be unrecognizable to us – though admittedly, the claim may turn on how exacting a standard of ‘recognizable’ we apply. Any exaggeration is due to Cathy Davidson, a Duke scholar whose research focuses include the impact of technology on learning and higher education, and whose new book, Now You See It, turns on questions of attention and technology in learning.

What’s most hopeful, and surprising, about the collection of findings Heffernan cribs from Now You See It:

Online blogs directed at peers exhibit fewer typographical and factual errors, less plagiarism, and generally better, more elegant and persuasive prose than classroom assignments by the same writers.

That finding has now been quoted hundreds of times by bloggers, some presumably delighted that their particular medium, often the target of neo-luddite laments regarding the prospects for digital-age literacy, shows real promise as a mode of written communication (at least, it should be noted, among engaged top-tier undergrads).

The implications are more complex. A friend, now completing her PhD in rhetoric at the University of Waterloo, had intended to investigate the process by which students learn academic practices related to the use of sources. Yet one of the key lessons of her research is just how poorly most undergraduate assignments are designed. At best, such assignments – generally in the form of the poorly defined ‘review paper’ – require students to practice skills which will be useful to them neither in “the real world” nor in the academic practice of the professor who is teaching the class.

At first, Heffernan uses these and other results drawn from Davidson’s book to take somewhat arbitrary potshots at Tom Pynchon and Michael Ritchie’s film The Candidate. Attacking the content of critique and analysis in the undergraduate classroom is, of course, somewhat beside the point. Luckily, at the end of her post, Heffernan gets back on track, suggesting that higher education should take up the task of improving, not deriding, digital literacy. What my friend’s research highlights is that this is not simply a matter of insufficient room for collaboration, “web accountability” or multimedia savvy: improving learning outcomes may simply require designing assignments which allow students to write in a register that seems – and is – relevant: like a blog post.

Some notes on Greece

So, is the Greek government a massive overspender? Greek government spending as a portion of GDP is 49.5%. This compares to 56.2% for France, 43.8% for Canada, an OECD average of 44.5% and a Euro-area average of 50.5%. Those are 2010 numbers. In 2007, Greece was at 46.6%, France at 52.4%, Canada at 39.4%, and the Euro-area and OECD averages at 47% and 41.4%, respectively.

Are the Greeks, as many commentators suggest, a lazy nation which retires early? Recent numbers from Eurostat on employment rates [direct link, pdf] show that Greek participation in the labour market up to age 64 is not significantly lower than elsewhere in Europe. The overall employment rate for 20-64 year-olds is 59.6%, compared with a Euro-area average of 64%. Among 59-64 year-olds, the Greek rate is just over 3% below the Euro area’s 46% average. As this data shows, the lower Greek numbers are driven by a lower participation rate among women: the employment rate for men is 1% higher than the 75% reached on average in the Euro area. The gap between Greece and Germany, which currently has a 71% employment rate among 20-64 year-olds, would have a more significant impact on competitiveness were it not for the significantly lower GDP per capita in Greece.

So it’s true that Greeks retire somewhat earlier than elsewhere in Europe – but one might consider them entitled to it, considering that they work more hours per year than anyone else in Europe. Greek workers put in 25% more hours per year than the average European, and 200 hours per year – the equivalent of five weeks’ work – more than Americans. So, even taking into account the retired and unemployed, Greeks aged 20-64 still work, on average, 1222 hours per year, compared to a European average of 1040 hours per year. Lazy Greeks, indeed.
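Those two sets of figures hang together, by the way. Here is a quick sanity check – my own back-of-the-envelope arithmetic, using only the numbers quoted above, so the per-worker hours it produces are implied values rather than official Eurostat figures:

```python
# Average hours across the whole 20-64 population should equal the
# employment rate times the annual hours of the average employed worker.
emp_rate = {"Greece": 0.596, "Euro area": 0.640}     # employment rate, ages 20-64
pop_avg_hours = {"Greece": 1222, "Euro area": 1040}  # hours/year over the whole population

per_worker = {r: pop_avg_hours[r] / emp_rate[r] for r in emp_rate}
for region, hours in per_worker.items():
    print(f"{region}: ~{hours:.0f} hours per employed worker")
# Greece: ~2050, Euro area: ~1625

gap = per_worker["Greece"] / per_worker["Euro area"] - 1
print(f"Greek workers put in ~{gap:.0%} more hours")  # ~26%, close to the 25% cited above
```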

The question then becomes: how did the government get into such dire financial straits? The answer is that the richest and best-paid in Greece don’t pay their taxes: Germany collects 37% of its GDP in taxes, Greece 29.4%. As of 2010, two-thirds of Greek doctors self-reported incomes under €12,000 – in a country with a GDP per capita of twice that amount – which entitled them to pay no tax at all. The Greek crisis is not about the average worker; it is about Greece’s best-paid, and if they had been paying taxes for the last ten years, Greece would not be in the financial mess it is in right now. No doubt, the failure to collect those taxes falls on the shoulders of the government, but that is no justification for the calumny which is continually heaped upon the Greek people for this mess.
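To put a rough number on that counterfactual – again, a sketch of my own, using only the tax ratios above and ignoring interest, growth and behavioural effects:

```python
# Revenue forgone if Greece had collected taxes at the German rate
# (37% of GDP) rather than its actual 29.4% for ten years, expressed
# as a share of one year's GDP.
german_rate, greek_rate, years = 0.370, 0.294, 10

forgone = (german_rate - greek_rate) * years
print(f"Forgone revenue: ~{forgone:.0%} of one year's GDP")  # ~76%

# Greek public debt stood at roughly 145% of GDP in 2010, so taxes
# uncollected on this scale account for a large share of the overhang.
```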

Problem definition, regulatory logics and the incoherence of politics

Monday’s schedule included a well-organized forum held at the new Centre for Law in the Contemporary Workplace (I attended by videoconference). The discussions centred on the issues raised by the Supreme Court’s decision in Fraser, especially the extent of the constitutional protection for collective bargaining under the Charter of Rights and Freedoms.

Fraser‘s relevance to my own research derives not only from its focus on freedom of association, but also from the Court’s increasing reliance on international labour law. As has become typical of discussions of this issue since the release of the BC Health Services decision, the most controversial comments came from Brian Langille, a law professor at the University of Toronto. Without getting into too much detail, Langille’s criticisms (and his indictment of the majority was scathing) reiterated two themes of his recent work. First, he suggested that the court had lost sight of, or failed to correctly answer, the fundamental question: “what is it we are trying to do?” Second, he suggested that the court did a bad job of two forms of derivation: the transposition of international responsibilities into constitutional commitments, and the translation of constitutional principles into constraints on government law-making.

When it comes to international labour law, I think there’s a deep problem with Langille’s approach. His criticisms share a basic premise with formalist approaches to law: that rules can be correctly derived from higher-level principles, and that these principles can also resolve cases in which the application of different rules would conflict. Now, the original critique of this claim from the critical legal studies movement was that such derivation is non-deterministic: that there is no politically neutral, logically coherent process by which legal conclusions can be drawn regarding the application of principles in specific situations. However, it is not this claim which concerns me – even most “crits” have retreated from this version of it – but rather a precondition for its possibility. What bothers me is that in some cases it is not the interpretations of the principles which are contested, but the principles themselves.

Lawmaking, after all, is a political process. The players and participants in the process want different things. In a review of Bauer, Pool and Dexter’s 1963 study of the politics of postwar US trade policy,[1] Theodore Lowi notes an important finding:

The outcome depended not upon compromise between the two sides in Congress but upon whose definition of the situation prevailed. If tariff is an instrument of foreign policy and general regulation for international purposes, the anti-protectionists win; if the traditional definition of tariff as an aid to 100,000 individual firms prevails, then the protectionists win.

Langille’s framing of the question – “what are we trying to do?” – is helpful insofar as it sets aside debates between formalism and functionalism, and implicitly sides with those who see no divide between principle and policy: both are cast simply as a matter of what the law is meant to do, and how it ‘works’ to accomplish that task. Once the problem has been defined and the successful policy choice promulgated into law, legal adjudication and administration can be made to cohere on the basis of a purposive interpretation of the resulting rules.

Unfortunately, purposive interpretation in international labour law is not so easy. I have spent much of the last week scanning the record of the last ten years of discussions at the ILO’s Governing Body regarding the reform of standards and supervision processes. These are discussions of process, mind you, not discussions leading to actual international standard-setting. What they reveal is unsurprising: action taken and rules amended despite the absence of any consensus on problem definition. Without compromise at the level of problem definition – except for an agreement not to agree – the unfortunate result is a set of processes which reflect multiple, often incoherent logics. Each party tries to convert its interest into a principle, but no single principle prevails.

Such conflicts of problem definition are just as likely to be reflected in international labour standards. While it is true that the ILO Constitution sets out high-level normative aims, the relevance of international regimes relies on their possession not only of a goal, but also of an operative logic – that is, an understanding of how the policy or standard in question will realize specific norms. Reading those Governing Body decisions has made clear to me that the resulting rules or procedures may actually embody conflicting norms which are inherent to the system, not accidental; that the high-level aims may be purposefully vague and multivalent; and that the resulting institutions may rely on multiple, incoherent logics. What a ‘correct’ derivation from the resulting texts might look like, in this type of situation, is without question a non-deterministic inquiry.



[1] The book is Raymond A. Bauer, Ithiel de Sola Pool, and Lewis A. Dexter, American Business and Public Policy: The Politics of Foreign Trade (New York: Atherton Press, 1963); the review is Theodore Lowi, “American Business, Public Policy, Case-Studies, and Political Theory” (1964) 16:4 World Politics 677.

Human Rights Lip Service

In late October, Parliament voted against adopting Bill C-300, which would have provided some level of human rights accountability for Canadian mining (and oil and gas) companies operating beyond Canadian borders. The bill, originally introduced by Liberal MP John McKay, was defeated 140-134 by the votes of a unanimously opposed Conservative party, helped along by the absence of numerous Liberal MPs. Among those who chose not to appear was Liberal leader Michael Ignatieff.

The bill had passed two earlier parliamentary votes. Most believe its defeat was the result of aggressive, last-minute lobbying by industry representatives, who claimed that firms (read: “they themselves”) would be driven to incorporate elsewhere were the bill to become law.

What drastic measures would the law have imposed had a Canadian company been found involved in a ‘violation of international human rights standards’? As Canadian Business magazine put it concisely, the project in question “…would become ineligible to receive financial services from EDC, and the Canada Pension Plan could no longer invest in [the Corporation’s] securities.”

There may have been problems with the administrative scheme set up under the Bill – shortcomings which could have been overcome if more MPs had taken an interest in rendering the bill workable. One would expect that kind of effort from a party, and a leader, that have made their names promoting human rights values. Their decision can hardly be called disappointing, however, since that would require the result to be out of character. In practice, Ignatieff has made an art of supporting those values while undercutting the rights themselves.

How is it possible that human rights protection, seen as trumps in Canadian law, fell so easily to the wayside when it came to regulating business practice? Chris Brown, an international relations professor at LSE, has put the matter plainly: “The enforcement of rights by the international community has been determined, in practice, by the foreign-policy imperatives of the major powers, and political, commercial and financial considerations frequently get in the way of a high-priority, even-handed policy on human rights.”

The vote on C-300 draws that lesson squarely, but also sharpens the edge of its rule: in buying into a standard for the protection of human rights, even middle powers don’t feel that they can afford to pay for more than lip service.


Refs: Chris Brown, “Universal Human Rights? An Analysis of the ‘Human-Rights Culture’ and its Critics” in Robert G Patman, ed, Universal Human Rights? (Houndmills, UK: Macmillan Press, 2000) 31 at 40.

Dr. Pepper is hurting America

One could call a recent episode – in which employees at a Mott’s factory in Williamson, in upstate New York, face a $1.50-an-hour pay cut combined with other benefit reductions – just another day in the continued American slide toward inequality. Yet as New York Times writer Steven Greenhouse notes, the strike is interesting because the concessions are being demanded at a time when the parent company, the Dr. Pepper Snapple Group, is showing healthy profits.

As noted by Leo Casey over at Dissent Magazine’s blog, there’s nothing new about the race to the bottom which has undermined middle-class incomes over the past 40 years. Wages for the bottom 90% of American workers have stagnated for the last 30 years, while the gains have flowed to the top 10%. That’s 30 years of growth for which all of the benefits have gone to society’s richest.

There is no reasonable argument that this is fair – data shows that the change can’t be attributed to growing gaps in educational attainment.

Besides fairness, however, there is a growing understanding, backed up by evidence and theory, that inequality is a large part of what caused the financial crisis. Simon Johnson, former chief economist at the IMF, lays out arguments to that effect from Robert Reich and Raghuram Rajan, no economic slouches themselves. While acknowledging the long-term fiscal problems faced by the United States, Johnson points out that the immediate cause of the fiscal crunch was the cost of the financial crisis – one facilitated by 30 years of growing inequality.

Johnson’s argument is about the implications of this understanding for US fiscal policy, but it also provides a useful perspective on the Mott’s strike. A recent book by Richard Wilkinson and Kate Pickett, The Spirit Level (you can read a defence against their critics here), has demonstrated the almost unbelievable number of ways in which equality improves the lives of whole societies (that is, not just of the poor); the work of Johnson, Rajan and Reich simply adds another reason to conclude that US inequality has crossed the line from the unreasonable into the irresponsible.

Some public advocacy groups have taken a hard tack on inequality, yet awareness of its causes has so far gained much less traction among the public, and policy responses seem focused on tax measures alone. It is all well and good to focus on individuals and their earnings, but ultimately distribution is as much a result of the regulation of the market – if not more so – as it is of after-the-fact adjustment through taxes and transfers. The Mott’s strike demonstrates just one of the myriad ways in which corporations – empowered and informed by legal rules and government policies – are allowed to increase their share of the total economic pie. It is this wealth which has increasingly found its way into the hands of America’s richest.

If Americans want to do something about inequality – and the crisis has shown that we all have a stake in America rebalancing its economic pie – then they have to do more than raise taxes on the beneficiaries of corporate largesse. They have to go after the largesse itself, with policies which ensure a fairer distribution between business and workers in their common enterprise. That requires a political strategy which focuses not only on the individual workers, but on the larger economic ramifications of short-term corporate policies.

It requires progressives not only to stand in solidarity with the striking workers, but to point out to American independents, fiscal conservatives, and anyone else willing to listen, that this is not only a matter of Mott’s shortchanging a handful of workers. These policies, and others like them, have implications for American social outcomes, global financial stability and the nation’s fiscal health.

So even if it has the ring of comedy, we have to start pointing out the greater truth of the matter, much as Jon Stewart did when he called out the hosts of Crossfire: it’s not just that the demands for concessions in Williamson are bad. It’s more than that.

Dr. Pepper is hurting America.