
The University as a Creative Destroyer of Social Capital

Author: S. Fuller
Source: Nr. 3, 13. Jahrgang, S. 21-31
Date: December 2004

Special issue: Wissenspolitik - ein neues Forschungs- und Handlungsfeld? (Knowledge politics - a new field of research and action?)

by Steve Fuller, University of Warwick, UK

The university is distinguished as an institution of knowledge governance by its dedication to what the author calls the 'creative destruction of social capital'. That is, in their research function, universities create advantage; in their teaching function, they destroy it. This dual function has been historically tied to the university's institutional autonomy. However, as the university has incorporated more of society into its activities - and thereby truly universalized the knowledge it produces - it has opened itself to factors that threaten to dismember its institutional integrity. The author considers a series of these factors in this paper, arguing that their growing significance reflects the decline of the welfare state and the emergence of 'capitalism of the third order'.
This tendency has had many historical well-wishers, who together reveal liberalism's instinctive scepticism toward knowledge-bearing institutions combined with an openness to information technology. Moreover, as the state has shifted its role from provider of knowledge as public good to regulator of intellectual property, a curious rewriting of the politics of knowledge governance has occurred. Thus, much of the critical thrust of the paper focuses on the influential claim by Edmund Kitch that knowledge tends to escape its bearers, unless the state arrests its flight through legislation. Because the exact opposite is truer to history, the significance of the university as a knowledge-bearing institution tends to be grossly underestimated, and hence under threat in these neo-liberal times. The author addresses this threat in the final section of the paper, along with some ideas about how it may be overcome.

1     The University as the Ideal Knowledge-Bearing Institution

In the time-honored equation “knowledge is power", power involves both the expansion and contraction of possibilities for action. Knowledge is supposed to expand the knower's possibilities for action by contracting the possible actions of others. These others may range from fellow knowers to non-knowing natural and artificial entities. This broad understanding of the equation encompasses the interests of all who have embraced it, including Plato, Bacon, Comte, and Foucault. But differences arise over the normative spin given to the equation: Should the stress be placed on opening or closing possibilities for action? If the former, then the range of knowers is likely to be restricted; if the latter, then the range is likely to be extended. After all, my knowledge provides an advantage over you only if you do not already possess it. In this respect, knowledge is what economists call a positional good (Hirsch 1977), a concept that will loom large in the pages that follow. In this context, it helps to explain our rather schizoid attitudes toward the production and distribution of knowledge. We do research to expand our own capacity to act, but we teach in order to free our students from the actions that have been and could be taken by others.

By virtue of their dual role as producers and distributors of knowledge, universities are engaged in an endless cycle of creating and destroying “social capital”, that is, the comparative advantage that a group or network enjoys by virtue of its collective capacity to act on a form of knowledge (Stehr 1994). Thus, as researchers, academics create social capital because intellectual innovation necessarily begins life as an elite product available only to those on the cutting edge. However, as teachers, academics destroy social capital by making the innovation publicly available, thereby diminishing whatever advantage was originally afforded to those on the cutting edge. Recalling Joseph Schumpeter's (1950 [1942]) definition of the entrepreneur as the “creative destroyer” of capitalist markets, the university may be regarded as a meta-entrepreneurial institution that functions as the crucible for larger societal change. This process mimics the welfare state's dual economic function of subsidizing capitalist production and redistributing its surplus. Not surprisingly, then, universities magnified in size and significance during the heyday of the welfare state, and have now been thrown into financial and wider institutional uncertainty with the welfare state's devolution (Krause 1996).

Moreover, throughout its history, the university has been institutionally predisposed to engage in the creative destruction of social capital. In the Middle Ages, universities were chartered as permanent self-governing bodies in a world of limited sovereign reach. Keeping the peace was often the most that a realistic sovereign could hope to achieve. Thus, in exchange for loyalty to the local ruler, universities were legally permitted to set their own curricula, raise their own capital, and even help manage the region's everyday affairs. This was the context in which universities were chartered as among the first corporations (i.e., universitates, in Medieval law). This orientation marked a significant shift from the much more populous residential colleges of the Islamic world, the madrasas, which depended on the benefaction of intrusively pious patrons, or the more venerable, but also more routinized, training centers for civil servants in imperial China (Collins 1998). To be sure, like these institutions of higher learning, the Medieval universities were broadly dedicated to the reproduction of the social order. However, because the universities were founded in times and places that were profoundly disordered, academics were immediately thrown into situations where their words and deeds effectively brokered alternative futures.

Given these origins, it is not surprising that academics have found it relatively easy to seed social unrest, which invariably they have interpreted as bringing order to an otherwise disordered situation. Perhaps the signature case of universities' imposing order is the Humboldt-inspired research-and-teaching university of the modern era, which is fruitfully conceptualized as a social technology for incorporating large segments of the population into the production and distribution of knowledge (Fuller 2002b). For example, exemplary works by eccentric geniuses were transformed into employment schemes for ordinary trainee academics. Kuhn would later call this routinization the “disciplinary matrix" sense of “paradigm," which has become the backbone of modern graduate education (also known as normal science). Thus, modern academia transformed Newton's Principia Mathematica from an imperfectly realized masterwork to a blueprint for a collectively realizable project. More generally, this attempt to cast the university as a social technology for truly universal knowledge has accelerated the institution's tendency to drift from what I have called a monastic to a priestly mode (Fuller 2000a: chap. 5; Fuller 2002a: chap. 4): the former stressing the virtues of institutional autonomy, the latter those of societal transformation.

Perhaps the clearest epistemic marker of this drift is the benchmark for original research. In the monastic mode, the inquirer's empirical resources are typically confined to the university's grounds, which means a reliance on the campus library or oneself (or sometimes students) as primary databases. Under the circumstances, historical and philosophical studies provide the via regia to knowledge of the particular and the universal, respectively. But as the university has extended its political ambitions into the priestly mode, these two disciplines were replaced, respectively, by sciences focusing on ethnographic field work and experimental laboratory work. Accordingly, universities have undertaken substantial commitments to transform and govern areas, or “sites”, often far off-campus. This has not only driven a physical and psychological wedge between the university's teaching and research functions, but it has also recast the university as a participant in power structures about which many of its staff, over the years, have had serious reservations. Yet, at the same time, staff loyalty to particular universities has diminished, so that nowadays complainants are more inclined to look toward the greener pastures of other campuses than to try to reform their current institution.

However, the most obvious recent university policy that illustrates the university's priestly mission is affirmative action legislation, which quite explicitly takes forward the university's regulative ideal of creatively destroying societal advantage by giving priority to traditionally underprivileged groups in the hiring and promotion of academic staff, as well as in the selection and sometimes even the evaluation of students (Faundez 1994). This point, which generally goes unappreciated by the policy's many critics, highlights the distinctive sense in which universities (and other chartered corporations) have participated in the more general processes of societal reproduction. For here we have a legally self-perpetuating social institution whose process of inter-generational role replacement is not family-based. In other words, universities are pioneers in the decoupling of social reproduction from biological reproduction.

2     The Knowledge Society as Capitalism of the Third Order

To understand the integral role of universities in the latest phase of capitalism, consider two general ways of thinking about the nature of capitalism. The more familiar one is a first-order account about how producers are engaged in perpetual - and largely self-defeating (according to Marxists) - competition to make the most out of the least, and thereby generate the greatest return on investment, also known as 'profits'. Whatever its other merits, this account takes for granted that the relative standing of competing producers is self-evident, so that no additional work is required to identify the 'market leaders'. But in fact, such work is needed. This second-order account of how producers publicly demonstrate their productivity is the context in which the term 'capitalism' was coined by Max Weber's great German rival, Werner Sombart, in 1902 (Grundmann and Stehr 2001). What contemporaries, notably Thorstein Veblen, derided as the 'conspicuous consumption' of successful capitalists, Sombart treated as the principal means by which capitalists displayed their social standing in a world where social structure was no longer reproduced as a system of fixed heritable differences. Thus, capitalists had to spend more in order to appear more successful.

However, it would be misleading to think of these expenditures as allowing capitalists to luxuriate in their success. On the contrary, they spurred capitalists to be more productive in the ordinary, first-order sense, since their competitors were quickly acquiring comparable, if not better, consumer goods. Indeed, before long, the competition was so intense that it became necessary to spend on acquiring the connoisseurship needed to purchase goods that would be seen - by those who know how to see - as ahead of the competition's purchases. By the time we reach this 'third-order' capitalism, we are at the frontier of the knowledge society. That the 'knowledge society' might be a more polite way of referring to third-order capitalism should not be prima facie surprising. After all, the founding father of scientometrics, Derek de Solla Price, trawled through the welter of national economic statistics, only to find that the indicator that showed the strongest positive correlation with research productivity was not a measure of industrial productivity, but of electricity consumption per capita (Price 1993; Fuller 2002a, chap. 1).

A certain vision of economic history is implied in the above account of capitalism. In pre-capitalist times, consumption was done at the expense of production, which explained (for example) the fleeting success of Spain and Portugal as imperial powers. They failed to reinvest the wealth they gained from overseas; they simply squandered it. In contrast, capitalist consumption is second-order production supported on the back of increased first-order production. From a sociological standpoint, the most striking feature of this 'before-and-after' story is its suggestion that capitalism is innovative in altering the sense of responsibility one has for maintaining a common social order. In pre-capitalist times, this responsibility was, so to speak, equally distributed across its members, regardless of status. Lords and serfs equally bore the burden of producing the distinction that enabled lords to dominate serfs. Expressions like 'mutual recognition', 'respect', and 'honour' capture this symmetrical sense of responsibility. However, in capitalist times, it would seem that, like insurance in today's devolved welfare states, individuals bear this burden in proportion to their desire to be protected from status erosion. Thus, those who would be recognized as superior need to devote increasing effort to a demonstration of their superiority.

This last point becomes especially poignant in advanced capitalist societies, where at least in principle the vast majority of people can lead materially adequate lives while spending less time and effort on first-order productive pursuits. However, this situation simply leads people to intensify their efforts at second-order pursuits. As a result, for example, individuals spend more on education and firms on advertising, even though the advantage they gain in terms of first-order production is marginal or temporary. Yet, this expenditure is necessary for one to be seen as 'running with the pack'. Thus, we return to the concept of positional good introduced at the start of this article. The logic of producing such goods predicts that, over time, one's relative status will decline, unless it is actively maintained, which usually involves trying to exceed it, thereby raising the absolute standard that everyone needs to meet. Thus, an expanded production of positional goods, combined with increased efficiency in the production of material goods, results in the systemically irrational outcomes that we have come to expect (and perhaps even rationalize) as our 'knowledge society'. Specifically, the resources spent on acquiring credentials and marketing goods come to exceed what is spent on the actual work that these activities are meant to enhance, facilitate, and communicate.

Of course, such a classic case of means-ends reversal is not systemically irrational, if it marks a more-or-less conscious shift in values. Thus, it may not take much to be persuaded that we really do produce in order to have something to sell, and we take up particular jobs in order to have a platform for showing off our credentials. The struggle for recognition therefore overtakes the struggle for survival - the ultimate triumph of the German over the English tradition in political thought (Fukuyama 1992, chaps. 13-19). But this point acquires more of a sting in the case of so-called 'public goods', especially knowledge. In the case of such goods, producers are (supposedly) not only unable to recover fully the costs of production, but they would also incur further costs, were they to restrict consumption of their good. However, I would urge that so-called public goods be analysed as simply the class of positional goods that most effectively hide their production costs, specifically by everyone paying into a fund whose actual beneficiaries are undisclosed, perhaps because they are indeterminate (Fuller 2002a, chap. 1).

This abstract point may be illustrated by answering a concrete question: Why is Einstein not entitled to a patent for his theories of relativity? The answer is that Einstein's theories were innovative against a body of physical science whose development had been funded by the German state through taxation and other public finance schemes, major beneficiaries of which were institutions of higher education. These institutions were, in turn, open to anyone of sufficient merit, who would then be in a position to contribute to this body of knowledge. Einstein happened to take advantage of this opportunity that was in principle open to all taxpayers. But even if Einstein had not existed, it would have been only a matter of time before someone else would have come along to push back the frontiers of knowledge in a comparable manner. But as long as it remains unclear from what part of the population the next Einstein is to be drawn, the public finance of higher education is justified. In that case, Einstein does not deserve the economic advantage made possible by a patent because he simply exploited an opportunity that had been subsidized by his fellow citizens. I propose this as the 'deep rationale' for the production of public goods like university education and research that have been the hallmarks of welfare state regimes.

3     The Welfare State's Role in Making Knowledge Appear “Self-Protective"

That knowledge would be the paradigm case of a public good is itself no mystery. It may have required much effort for Edison and Einstein to come up with their ideas, but once those ideas were published, anyone could potentially benefit from them. A logical conclusion of this line of thought, exploited by the U.S. legal theorist Edmund Kitch (1980), is that knowledge resists commodification to such an extent that the state must intervene to restrict its flow through intellectual property legislation, which ensures that knowledge producers can reap at least some of the fruits of their labors. Kitch imagines that knowledge is so naturally protective of its own interests that, in effect, a special class of laws is needed to protect knowledge producers from the knowledge they produce!

Thus, Edison is entitled to a patent because of the likely commercial benefit afforded by his ideas, since once I understand how Edison invented the first incandescent light bulb, I am in a good position to design similar goods more efficiently that can then be sold more cheaply, and thereby corner a market that would otherwise belong to Edison. (In the economic history literature, this is sometimes called the “Japan Effect”, whereby it is always better to run second in unregulated market competition.) But why do similar worries not arise in the case of Einstein's discovery of relativity theory? In other words, suppose economists took seriously both the costs of acquiring the training needed to put Einstein's theory to any sort of use and the fact that this training would allow the trainee to earn a reasonable living as a physics instructor, if not design a way to supersede Einstein's theory that would merit the Nobel Prize. In that case, questions would be raised not only about whether Einstein might not also be entitled to some legal protection, but also whether knowledge is as naturally footloose as Kitch and other public goods theorists make it out to be.

Two interrelated issues need to be explored here. The first is the source of the difference in our normative intuitions concerning Edison and Einstein as knowledge producers: Why should the former but not the latter be entitled to legal protection? But the second, more general issue is the source of Kitch's influential intuition that knowledge is inherently “self-protective”. My response to the first question will lay the groundwork for answering the second question. I shall argue that by overlooking the background political economy of knowledge production, Kitch's thesis about the self-protective nature of knowledge gets matters exactly backwards. In short, specific, mostly state-based, institutions (most notably the university) have been required to ensure that knowledge possesses the sorts of properties that Kitch personifies as self-protective. It should come as no surprise that Paul Samuelson (1969), the most influential welfare state economist of the post-WWII era, coined the phrase “public good” (albeit to formalize the only non-protective function that Adam Smith prescribed for the state), or that the need for public finance schemes to support scientific research should have been first raised by a utilitarian philosopher with strong welfarist concerns, Henry Sidgwick (Lutz 1999, p. 110).

So let us ask: Why is Einstein not entitled to legal protection? Einstein's theory of relativity was innovative against a body of physical science whose development had been funded by the German state through taxation and other public finance schemes, the main beneficiaries of which were institutions of higher education. These institutions were, in turn, open to anyone of sufficient merit, who would then be in a position to contribute to this body of knowledge. Einstein happened to take advantage of this opportunity. But even if Einstein had not existed, it would have been only a matter of time before someone else would have come along to push back the frontiers of knowledge in a comparable manner - so it is assumed. But as long as it remains unclear from what part of the population the next Einstein is likely to be drawn, the public finance of higher education is justified (imagine a compulsory lottery). In that case, Einstein does not deserve the economic protection afforded by a patent because he exploited an opportunity that had been subsidized by his fellow citizens.

Now, why would the state have undertaken such a public finance scheme in the first place? Here we must resort to some political metaphysics. The state must presuppose that some knowledge is vital to the national interest, yet there is no natural incentive for any particular citizen to engage in its pursuit. Therefore, the state must provide the sort of universalized incentive scheme exemplified by free public education. Germany acquired this mindset courtesy of Helmuth von Moltke, the mastermind of its victory in the Franco-Prussian War of 1870-71. Von Moltke argued that a healthy nation was always ready for “total war”, that is, not merely strategic engagement with a definite goal in sight (the classical aim of warfare), but rather the ongoing removal of any threat to national security. This was the idea of a “permanent state of emergency”, which would come to be the signature stance toward research and education policy in Cold War America, a period of unprecedented university expansion (Noble 1991).

In a sense, then, Einstein received advance payment for the theory of relativity by having been allowed to obtain the training necessary for making his revolutionary breakthrough. To be sure, many other people underwent similar training and failed to arrive at anything of comparable significance. But that just underscores the risk that the state, on behalf of its citizens, undertakes when it raises taxes for mass public education: There is no guarantee that the benefits will outweigh the costs. In contrast, some situations that call for new knowledge are sufficiently obvious that citizens, regardless of prior training, will find it in their self-interest to try to meet them. In that case, an innovator is vulnerable to similarly oriented individuals who are in a position to make marginal improvements that end up displacing the innovator from the market. Edison's discoveries occurred in this environment, which justifies his entitlement to a patent.

Now, in either Einstein's or Edison's case, is knowledge self-protective? Clearly not in Einstein's case. On the contrary, the state had to seed opportunities for his kind of knowledge to be produced. Edison's case is a bit more ambiguous, but even here the answer is no. After all, the only people capable of capitalizing on Edison's innovation were those who were already thinking along similar lines. There is no reason to think that mass publication of the details of Edison's incandescent light bulb would have enabled most Americans to design such a product for home use, let alone mass consumption.

In order to address the more general question of the source of the idea that knowledge is somehow self-protective, I begin by returning to the eighteenth-century European Enlightenment to pose the problem in its most basic form: Should knowledge production be granted any special legal protection? What are the grounds, if any, for the regulation of intellectual property transactions - or, in less economically presumptuous terms, the regulation of intellectual life? Here laissez-faire and dirigiste responses can be distinguished.

The laissez-faire response is that once people enjoy sufficient wealth not to have to live hand-to-mouth, they ought to use their leisure to improve themselves and the polity. The implied analogy, perhaps made most explicit in the opening of Aristotle's Metaphysics, is the imperative to physical fitness among the well-fed as a sign of both one's superior status and preparedness to defend that superiority in warfare. Moreover, one would not be capable of advancing the frontiers of knowledge, were one not in a position to expend resources on lines of inquiry that might end up bearing no fruit. Thus, the fiscal benefit typically granted to the production of intellectual innovation in the eighteenth century was a prize, not a salary, grant, or for that matter, royalty. In other words, the reward consisted of a largely ceremonial event to mark the formal recognition of the innovation. Potential rivals for the prize were presumed to have independent means of material support, by virtue of either literal or adopted fathers: i.e., inheritance or patronage. (This is not the place to explore the Darwinian-cum-Freudian implications of this situation.) In either case, they harbored no expectations of living off their innovations, as today's royalty regimes potentially allow. The contest to solve some problem left over by Newton was regarded in the spirit of a game, in which even losers never lose so much that they cannot return to compete in the next battle of wits.

The dirigiste response is associated with the reasoning behind the patent law provision in the U.S. Constitution. The U.S. founding fathers, whose perspective on human nature owed more to Hobbes than Aristotle, did not believe that a free citizenry would be necessarily inclined toward the pursuit of knowledge. After all, a happy existence may be obtained through relatively effortless and unproductive means, like charging high rents to tenants on one's property. At the same time, the founding fathers also believed in the overall benefits of new knowledge to the progress of the common wealth. This led to a characteristically eighteenth-century strategy of converting private vices into public virtue by providing explicit financial incentives for people to engage in knowledge production, namely, the temporary monopoly on inventions afforded by a patent. Moreover, since the main economic impact of a successful invention is that it destabilizes, or creatively destroys, markets, as more people seek patents, everyone else will soon have reason to engage in the same activity in order to restore their place in the market. Thus, a lethargic economy dominated by rent-seekers is quickly transformed into a dynamic commercial environment.

Both the laissez-faire and dirigiste approaches to the regulation of intellectual life continue to have cultural resonance today. The idea that society is best served by individuals exercising their right to be wrong, a theme that unites civic republican democracy and Popperian philosophy of science, presupposes that inquirers are materially insulated from the consequences of their bold conjectures, just as the laissez-faire approach would have it (Fuller 2000a, chap. 1; Fuller 2002a, chap. 4). More controversially, the dirigiste sensibility lurks in the “orientalism" that has led political economists from Adam Smith onward to demonize the decadence of the East in favor of the industriousness of the West, with Western aristocrats consigned to the oriental side of the divide.

A feature strikingly common to the laissez-faire and dirigiste Enlightenment approaches to intellectual property regulation is the absence of any assumption that knowledge is self-protective. To be sure, both approaches presuppose that new knowledge is potentially available to any rational being inclined to pursue it. However, the inclination to inquiry is not itself universal. Certain economic conditions first need to be in place before the epistemic appetite is whetted. In the dirigiste case, it consists of a financial incentive to counteract the natural tendency to gain the most pleasure from the least effort; in the laissez-faire case, it is simply a generalized cultural expectation of people who are relatively secure in their material existence.

So, if that is the view from the Enlightenment, where does the idea of knowledge as self-protective come from? As so often happens with our ideas about knowledge, the answer lies in a syncretistic understanding of history. That is, factors of rather different origins are treated as contributing to a common contemporary effect.

I have already indicated the determining role of what Alvin Gouldner (1970) dubbed the “welfare-warfare state" in establishing the modern political economy for the production of knowledge as a public good. Each citizen, simply by virtue of performing the fiscal duties of a citizen, contributes to the capital needed to produce public goods and, of course, becomes a potential beneficiary of that investment. However, in fact, most citizens reap modest epistemic returns from their investment, namely, the assortment of skills that enable them to earn a living. The identities of the few who benefit as Einstein did are rather unpredictable, since they would not necessarily have been in direct contact with the researchers whose work theirs builds upon or, for that matter, overturns. Rather, these innovators encounter their precursors secondhand, through textbooks and their often undistinguished classroom interpreters.

Those still in the grip of Thomas Kuhn's mythic history of science easily forget how this very basic element of knowledge consolidation and transmission - a textbook usable by the entire range of a discipline's practitioners - first emerged in the context of nation-building efforts in the late nineteenth century (Olesko 1993). In earlier times, an aspiring intellectual innovator would not have appeared credible, had he not made personal contact with a recognized master of the innovator's discipline. By such cultish means, disciplinary practitioners jealously protected their knowledge so that it could not be easily appropriated by others. And while these ancient prejudices linger in academic hiring practices, the provision of free public education has sufficiently loosened their constraint on actual intellectual innovation to leave the impression that innovators can come from anywhere, thereby contributing to the illusion that knowledge is self-protective.

4     Conclusion: Will Universities Survive the Era of Knowledge Management?

Academics are too easily flattered by talk of “knowledge management" (Fuller 2002a). They often think it points to the central role of universities in society. Yet, the phrase signals quite the opposite - that society is a veritable hotbed of knowledge production, over which universities do not enjoy any special privilege or advantage. Academics have been caught off-guard because they have traditionally treated knowledge as something pursued for its own sake, regardless of cost or consequences. This made sense when universities were elite institutions and independent inquirers were leisured. However, there is increasing global pressure to open universities to the wider public, typically for reasons unrelated to the pure pursuit of knowledge. Today's universities are expected to function as dispensers of credentials and engines of economic growth. Consequently, academics are no longer in full control of their performance standards.

In this context, knowledge managers have their work cut out for them. Former Fortune editor Tom Stewart (1997) calls universities “dumb organizations” that have too much “human capital” but not enough “structural capital”. Behind these buzzwords is the view that a fast-food chain like McDonald's is a “smart organization” because it makes the most of its relatively ill-trained staff through the alchemy of good management. In contrast, business as usual in academia proceeds almost exactly in reverse, as department heads and deans struggle to keep track of the activities of their overeducated staff. If a McDonald's outlet is much more than the sum of its parts, a university appears to be much less.

Academics remain largely in denial about the impact of knowledge management. Nevertheless, the sheer increase in the number of university heads drawn from business and industry amounts to a concession that McDonald's and MIT may be, at least in principle, judged by the same performance standards. A glaring recent example is Richard Sykes, whose appointment as Rector of Imperial College London was based largely on his successful merger of two transnational pharmaceutical companies, Glaxo Wellcome and SmithKline Beecham. Not surprisingly, he has recently tried to merge Imperial and University College London into the UK's premier research-led university. Moreover, it is unreasonable to expect the increasing number of academics on short-term contracts to defend the integrity of an institution that cannot promise them job security. Even Ph.D.s quickly acquire the survival skills and attitudes of the much less trained disposable staff one finds at McDonald's. Thus, they become quite willing and able to move for better pay and working conditions (Jacob and Hellstrom 2000).

Indeed, many academics - and not just professional knowledge managers - have endorsed recent steps taken to disaggregate the unity of teaching and research that has defined the university since its modern reinvention in early 19th-century Germany. These steps occur daily with the establishment of each new on-line degree program and science park - the one reducing the university to a diploma mill, the other to a patent factory. Though they pull in opposing directions, these two “post-academic” organizations share an overriding interest in benefiting those who can pay at the point of delivery. In this context, universities appear quite vulnerable, as they have always been hard-pressed to justify their existence in such immediate cost-benefit terms. But it would be a mistake to place all the blame for this “service provider” view of universities on knowledge managers, or even on the recent wave of neo-liberal ideology.

Academics who nostalgically recall the flush funding of universities in the heyday of the welfare state often forget that service provision was precisely what made academia appealing to policymakers. The public was willing to pay higher taxes because either they (or, more likely, their children) might qualify for a course of study that would improve their job prospects, or academics might come up with a cure or a technique that would improve the quality of life in society. The same mentality operates today, only in an increasingly privatized funding environment.

In short, a Faustian bargain was struck during the era of the welfare-warfare state, one typically cloaked in social democratic rhetoric. Universities grew to an unprecedented size and significance, but in return they became the premier site of socio-economic reproduction. In the long term, this bargain has cost universities their political - and consequently their intellectual - independence, a point that becomes increasingly clear with the removal of state legal and financial protection. Having been in the service of all taxpayers and judged by the benefits provided to them, universities are now being thrown into a global market in which US universities already enjoy a long history of providing high-quality knowledge-based goods and services on demand.

At least, this is how the shifting political economy of academia appears from the European side of the Atlantic. It is now common for university heads to complain that lingering attachments to the welfare state prevent governments from charging the full student fees needed to compete with US universities on the world stage. They seem to assume that Americans are willing to pay a lot for higher education at the best institutions because these have a long track record of proving themselves in the marketplace. However, this does not explain how, say, the Ivy League manages to charge officially the world's highest fees, yet require only a third of its students to pay them in full. Time-honoured universalist, democratic, and meritocratic ideals may explain why the Ivy League has this policy, but the mystery for Europeans is how it has been pulled off.

As it turns out, the European understanding of the American scene - especially at the elite end - is seriously flawed. What makes the flaw so serious is that it involves forgetting what has historically made universities such a distinctive European contribution to world culture. I shall return to this shortly. But at an even more basic level, this flaw should remind us of the long-term corrosive effect that marginal utility thinking has had on how we conceptualize value. Both welfare state economics and the current wave of neo-liberalism agree that the economy is built from transactions in which the traders are simultaneously trading with each other and trading off against their own competing interests. Thus, the rational economic agent is willing to accept a certain price, but only for a certain amount of any good or service. Beyond that point, 'diminishing returns' set in and rational agents shift their spending elsewhere. This means that goods and services are judged by the prospect of their impact on the consumer in the relatively short term. Such a frame of reference is fundamentally antithetical to the character of the university.

To their credit, welfare economists have long realized that their conception of the economy tends to devalue benefits that accrue only in the long term, and especially to others not intimately connected to the agent (Price 1993). As we saw in the previous section, the welfare state conception of universities as both instances and producers of 'public goods' was meant to address this problem by arguing, in effect, that it is cheaper to indemnify everyone in a society than to identify the particular citizens who should bear the costs and enjoy the benefits. But to unsympathetic neo-liberal ears, this sounds like a concession that higher education is a market with an indeterminate price structure. Could this be because producers and consumers are impeded from effectively communicating with each other? Such a suspicion motivates the knowledge manager's general call for the removal of state barriers to the free competition of universities, which will quickly force them to restructure, and perhaps even devolve, in the face of market forces.

However, buried beneath this now familiar line of thought is its anchoring intuition: The paradigm case of all economic activity is the exchange of goods that might occur in a weekly village trade fair between parties trying to provide for their respective households. From that standpoint, the main practical problem is how to clear the market so that no one is left with unsold goods or unmet needs once the sun goes down. This formulation of the problem makes at least three assumptions that are alien to the economic situation in which the university has (always) found itself:

  1. Each trader is both a 'producer' and 'consumer'. In contrast, the two roles are clearly distinguished in any transaction between a university and a prospective client, including a student.
  2. No trader wants a surplus of goods, let alone to accumulate as many goods as possible. Unused goods will either rot or be the target of thieves. In contrast, the sheer accumulation of knowledge - be it in books, brains, or databanks - is central to the university's mission.
  3. There is a cyclical structure to each trader's needs that ideally corresponds to the trade fair's periodicity. There are no inherently insatiable desires, only recurrent desires that are met as they arise. In contrast, the idea of termination is so foreign to academic inquiry that attempts to arrest or even channel its conduct have tended to be treated as repressive.

However, universities can be managed as something other than multi-purpose service providers joined to their clients by discrete transactions that end once the academic goods have been delivered. Recall that what originally entitled a university to corporate status under Roman law (universitas in Latin) was its pursuit of aims that transcend the personal interests of any of its current members. This enabled universities to raise their own institutionally earmarked funds, which were bestowed on individuals who were “incorporated” on a non-hereditary basis. Incorporation typically required renegotiating one's identity through examination or election, as well as a willingness to become something other than one already was. Along with universities, the original corporations included churches, religious orders, guilds, and cities. In this respect, being a student was very much like being a citizen. Commercial ventures came to be regularly treated as corporations only in the 19th century. Before then, a business was either a temporary and targeted venture (akin to a military expedition) or an amplified version of family inheritance, the default mechanism for transmitting social status under Roman law.

The corporate origin of universities is of more than historical interest. The oldest and most successful US universities were founded by British religious dissidents for whom the corporate form of the church was very vivid. From the 17th century onward, American graduates were cultivated as “alumni” who regarded their time at university as a life-defining experience they wished to share with every worthy candidate. The resulting alumni endowments, based on the Protestant “tithing” of income, have provided a fund that allows successive generations to enjoy the same opportunity for enrichment. In return, the alumni receive glossy magazines, winning sports teams (which the alumni worship every weekend), free courses, and nominal - and occasionally not so nominal - involvement in university policy. Two-thirds of Ivy League students have their education subsidized in this fashion. Moreover, the leading public American universities display similar, and sometimes even stronger, tendencies in the same direction. Thus, UCLA, the University of Michigan, and the University of Virginia are “public universities” that are 70 % privately funded, relatively little of which comes from full payment of student fees.

In contrast, the two main strategies for “privatizing” universities in former welfare state regimes - market-driven tuition fees and income-based graduate taxes - operate with a long-term strategy for institutional survival that is nothing more than a series of short-term strategies. At most, these compulsory payment schemes would enable universities to replace the capital they invest in their students, but they would provide little incentive for graduates to contribute more than had been invested in them. If anything, such fees and taxes could become a source of resentment, non-compliance, and even overall fiscal failure, since in a world where knowledge is pursued as a positional good, it becomes harder to justify a high-quality university education on a short-term value-for-money basis.

Therefore, to overcome the knowledge manager's jibe that universities are “dumb organizations”, universities must endeavour to be wholes much greater than the sum of their parts. At the very least, this means that a university's value must be measured beyond the short-term benefits it provides for immediate clients, including students. The ideal of uniting teaching and research promised just such a breadth of organizational vision, one worth updating today. After all, universities are unique in producing new knowledge (through research) that is then consolidated and distributed (through teaching). In the former phase, academia generates new forms of social advantage and privilege, while in the latter phase, it eliminates them. This creative destruction of social capital entitles universities to be called the original entrepreneurial organizations. However, universities have been neither produced nor maintained in a social vacuum. With the slow but steady decline of the welfare state, it is time to recover the university as one of the original corporations, whose style of “privatization” is superior to the “trade fair” model that has dominated modern economic thought and today threatens the institution's integrity.

References

Collins, R., 1998:
The Sociology of Philosophies: A global theory of intellectual change. Cambridge: Harvard University Press

Faundez, J., 1994:
Affirmative Action: International perspectives. Geneva: International Labour Organization

Fukuyama, F., 1992:
The End of History and the Last Man. New York: Free Press

Fuller, S., 2000a:
The Governance of Science. Milton Keynes, U.K.: Open University Press

Fuller, S., 2000b:
Thomas Kuhn: A Philosophical History for Our Times. Chicago: University of Chicago Press

Fuller, S., 2002a:
Knowledge Management Foundations. Woburn, Mass: Butterworth-Heinemann

Fuller, S., 2002b:
The University: A social technology for producing universal knowledge. Technology in Society 25, pp. 217-234

Gibbons, M.; Limoges, C.; Nowotny, H.; Schwartzman, S.; Scott, P.; Trow, M., 1994:
The New Production of Knowledge: The dynamics of science and research in contemporary societies. London: Sage Publications

Gouldner, A., 1970:
The Coming Crisis in Western Sociology. New York: Basic Books

Grundmann, R.; Stehr, N., 2001:
Why is Werner Sombart not part of the core of classical sociology? Journal of Classical Sociology 1, pp. 257-287

Hirsch, F., 1977:
The Social Limits to Growth. London: Routledge and Kegan Paul

Jacob, M.; Hellstrom, Th. (eds.), 2000:
The Future of Knowledge Production in the Academy. Milton Keynes UK: Open University Press

Kitch, E., 1980:
The law and the economics of rights in valuable information. The Journal of Legal Studies 9, pp. 683-723

Krause, E., 1996:
The Death of the Guilds: Professions, States, and the Advance of Capitalism. New Haven, Conn.: Yale University Press

Lutz, M., 1999:
Economics for the Common Good. London: Routledge

Noble, D., 1991:
The Classroom Arsenal. London: Falmer Press

Olesko, K., 1993:
Tacit Knowledge and School Formation. In: Geison, G.; Holmes, F. (eds.): Research Schools. Osiris, New Series, Volume 8, Chicago: University of Chicago Press, pp. 16-29

Price, C., 1993:
Time, Discounting, and Value. Oxford: Blackwell

Samuelson, P., 1969:
Pure Theory of Public Expenditures and Taxation. In: Margolis, J.; Guitton, H. (eds.): Public Economics. London: Macmillan, pp. 98-123

Schumpeter, J., 1950 [1942]:
Capitalism, Socialism, and Democracy. New York: Harper and Row

Schumpeter, J., 1961 [1912]:
The Theory of Economic Development. Chicago: Galaxy Books

Stehr, N., 1994:
Knowledge Societies. London: Sage Publications

Stewart, Th., 1997:
Intellectual Capital. London: Nicholas Brealey

Contact

Prof. Steve Fuller
Department of Sociology
University of Warwick
Coventry CV4 7AL, United Kingdom
Tel.: +44 2476 523-940
E-Mail: s.w.fuller@warwick.ac.uk
Internet: http://www.warwick.ac.uk