Some notes about transaction costs, digital libraries, peer-to-peer computing, organized irrationality, and scientism. ** In response to my arguments against automatic face recognition in public places <http://dlis.gseis.ucla.edu/people/pagre/bar-code.html> some people have objected that I am using an invalid slippery-slope argument. Here it is: (3) Even if the only people in the database today are criminals, the forces pushing us down a slippery slope of ever-expanding databases are nearly overwhelming. Once the system is established and working, why don't we add alleged troublemakers who have been ejected from businesses in the past but have never been convicted of crimes? Then we could add people with criminal records who have served their time, people who have been convicted of minor offenses such as shoplifting, [etc]. And once those people are added, it is then a short step to add many other categories of people as well. One might ask why this argument is valid when I have objected to claims that anti-spam legislation would lead to broader restrictions on speech <http://www.newsbytes.com/news/01/168401.html>. Here is the difference. In the case of automatic face recognition in public places, the current debate is about whether to install a complex infrastructure of cameras, networks, databases of facial images, people trained to look at candidate face matches, etc. Let's say that we install that elaborate infrastructure for the ostensible purpose of finding terrorists. Then, having done all that, the additional step of adding other faces to the database to find more categories of people is trivial. Taking the first step, in other words, dramatically lowers the barriers to taking the subsequent steps. That's what makes it a slippery slope. In the case of spam legislation, by contrast, the proposed measures do not facilitate the imagined subsequent measures in any way. They do not make those subsequent measures technically easier, and they do not provide courts with legal precedents for them. So the face recognition argument is valid and the spam law argument is not. ** Organizational boundaries and the rising tide of standards. Here is a new way to understand transaction costs. Coase's theory of transaction costs, you may recall, explains a mystery: if markets are the most efficient way to organize production, why do capitalist economies give rise to the large command-and-control hierarchies called corporations? The reason, Coase argues, is that market mechanisms are not costless. In fact the costs of transacting business in the market are substantial, so it is often cheaper to organize some activities in hierarchies. Organizational boundaries are then determined by the balance between transaction costs and organizing costs. If transaction costs go down, then everything being equal organizations break up, and if organizing costs go down, then everything being equal organizations expand. The problem is that new technologies tend to affect both transaction costs and organizing costs, so that everything is rarely equal. As a result, the theory is often thought to have few testable consequences, though that hasn't stopped many theorists from deriving such consequences anyway by making unreasonable assumptions. The new way to understand transaction costs starts with an observation and an analogy. The observation is that business is fundamentally a matter of figuring out what to standardize and what to customize. 
The two imperatives conflict, since standardization lowers costs through economies of scale and customization makes products more appealing to specific customers. Because customers are diverse, all businesses must continually search for the optimal trade-off. Cyber gurus often say that information technology has rendered the age of standardization obsolete, and that new products are thoroughly customized. But that can't be right. Standardization is still massively with us, and in many areas it is even accelerating. The cyber gurus do have a point, though, and it is worth trying to articulate the point more precisely. Businesses are always looking for the optimal dividing-line between standardization and customization; that dividing-line can be drawn in many ways, and new technologies multiply the ways that the line can be drawn. You can, for example, standardize the mechanisms by which customized products are specified, manufactured, and distributed. You can settle on three standards for three different market segments. You can standardize some aspects of a product but not others. You can assemble a broad family of products from standardized modular components. And so on. Standardization and customization are both spreading. That's the observation. The analogy starts from Bill Mitchell's argument about the geographic consequences of pervasive networking. All of us, Mitchell says, are bound geographically to various things: family, work, recreation, community, climate, culture, and so on. Different people are bound to different things, and we experience different bonds of different strengths to different things in different places. To reconcile these conflicting bonds we may travel regularly, or we may move at certain points in our lives when the balance changes among the strengths of the various bonds. Computer networks and the general trend toward digitization, however, loosen some geographical bonds. We can use cheap communication technologies to stay in more continual contact with our families or customers, so we have less need to be geographically close to them. We can obtain access to digital libraries at a distance, so we have less need (relatively speaking) to live near a physical library. Cheap air travel, itself greatly facilitated by computing, also loosens geographic bonds, though perhaps that will change now. In any case, as the bonds change their strengths, it follows that people and things will rearrange themselves around the landscape. People will reckon the strengths of the bonds that attract them to one place or another, and they will relocate accordingly. The point is not that people will become totally unbound, but that the bonds that computer networks do not loosen will become relatively more important in determining settlement and travel patterns. The analogy I want to draw is between geographic and organizational bonds. Just as people are geographically bound to certain people, places, and things, likewise different activities are bound together within the same organization. Of course, the activities could still be conducted in separate organizations, but putting them in the same command structure helps to coordinate them. The greater the need for coordination, the greater the force that binds them into the same organization. No force operates in isolation, though, and every activity will be subject to numerous conflicting forces.
The boundaries of organizations will be determined by the resultant of these forces, just as geographic settlement patterns are determined by the forces that Mitchell describes. As computer networks loosen some bonds, it stands to reason that activities will group themselves into different organizational boundaries, with some coordination arrangements that were formerly conducted through command-and-control happening instead through the market, and vice versa. Let us put the observation and the analogy together into one story. Organizations are constantly searching for the right dividing-line between standardization and customization. Every organization is conducting this search simultaneously, and different organizations are likely to come upon similar answers. That is one reason why standards emerge across multiple organizations and even multiple industries. Conforming to industry standards is often valuable in itself, for example because of the ready supply of people who can work with the standard, and so new standards are likely to emerge from this process and take on a life of their own. These include product standards, but just as importantly they include process standards, accounting standards, evaluation and testing standards, training standards, and so on. Some of these standards will have official ISO standards numbers and others will be written into law, but many will be unofficial, even unrecognized, as employees carry solutions with them that have worked well in other jobs. The result of all this is a rising tide of standards, most of which will not be visible to customers. The great paradox of standards, familiar to anyone who understands the Internet, is that standards and customization are not in conflict. To the contrary, a standard often supplies a platform or building-block from which customized products can be made. So long as companies have identified accurately where the dividing-line between standardization and customization should fall, and so long as this dividing-line stands still long enough for new standards to actually take hold, the rising tide of standards will facilitate greater outward diversity even as it reduces diversity behind the scenes. Think of the rising tide of standards as the accretion of successive layers. In the old days, every organization was its own stovepipe, cooking its own custom solution to every problem. As standards become established, the bottommost reaches of the stovepipe are removed and the rest is moved onto the standards. As the sedimentation of standards grows deeper, the stovepipes get shorter and shorter. This is important because the binding force of different activities within an organization is determined in part by the number of nonstandard components they have. Industry standards effectively displace the effort of coordination, moving it from the firm level to the industry level, or to a monopoly that supplies the standard. As coordination becomes easier, binding forces are reduced. It does not follow that organizations fragment into separate pieces that are not bound to one another at all. It does follow, however, that they are likely to regroup. Some organizations will specialize in supplying a single standard, thus becoming a focused monopoly that provides one thin slice through industry as a whole (see Lowell Bryan et al, Race for the World, Harvard Business School Press, 1999). Other organizations will specialize in managing the boundary between standards and customization. 
Because customization is hard to manage on a large scale, these latter organizations will be major consumers of standards produced by others. This explanation of transaction costs is still very abstract. It depends heavily on an unexamined notion of the optimal boundary between standardization and customization -- a boundary that takes very different forms in different industries. It also, like most theories of organizations, presupposes that the forces it describes are the only forces in operation. That will rarely be the case. But at least it has some heuristic value. By posing the question of the exact nature of the trade-off between standardization and customization, it sets in motion in any particular case an inquiry that in my experience is quite productive. ** Digital libraries and the nature of texts. Let me suggest an analogy. In his analysis of social consciousness under capitalism, Marx complained that people are led into a certain mistake: treating commodities as if they were things unto themselves, when in fact they are embedded in webs of relationships among people. When commodities appear in the marketplace, they are clean and packaged, standardized and branded; they are portrayed in dream-world advertisements that ignore the complexities of real life. As a result, we tend to forget that commodities are made by particular people in particular circumstances, and that they are shaped by various institutional pressures. Marx's politics were wrong, but his descriptive analysis is often useful. The critique of commodities directs us to throw off the illusions of advertising and use the tools of social science to look at the place of commodities in chains of human relationship. Having done so, we can see them no longer in isolation but as pieces in a larger puzzle. Let me bring this perspective to a particular category of things, namely texts -- novels, news articles, scientific papers, and so on. I call this an "analogy" because not all texts are commodities; scientific papers, for example, may be purchased by libraries, but the social relations around them -- peer review, for example -- are largely organized on non-commodity terms. When texts are printed and sold on paper, they certainly seem thing-like. You can buy them, file them, shelve them, mark them with a highlighter pen, or read them on the subway. They seem fully detached from the circumstances in which they were produced. Michael Curry has pointed out that books in particular come with a certain subliminal promise, that you can take them anywhere. This is not really true, however, because every book presupposes that its reader lives in a world that is structurally related in some way to the world of the author. Scientific papers, for example, presuppose not simply the mastery of a certain jargon but participation in a certain ongoing dialogue among the field's members, as well as an appreciation of the practicalities of doing and reporting that particular type of science. The scientific paper's ideal reader, therefore, is nearly certain to be another member of the scientific community. The same is true in one way or another, to one degree or another, of many if not all categories of texts. This analysis may seem abstract, but it has concrete consequences in people's lives. New graduate students are making a huge transition from one social status to another, and they are now the sorts of people to whom scientific papers (or whatever sorts of scholarly papers they are learning to write) are addressed.
Because of this, graduate students need to think of the papers they read as turns in a conversation and as moves in a complicated social world that has its own politics and economics. Before entering graduate school, students are likely to read scientific papers (if at all) as the emanations of an authority -- the blank wall that too many institutions present to people at a distance -- and they are unlikely to think of those papers as having been written by real people that they might expect to meet. Having entered graduate school, however, part of their job is preparing themselves to build professional networks, and that means meeting the authors of the papers they have read and cited in their work. This transition, from treating scientific papers as dead things to treating them as embedded in a set of social relationships of which you yourself are also part, is very much what Marx was talking about, and it can go wrong unless the student is provided with a decent theory of it. That is why I wrote "Networking on the Network". With the advent of digital libraries, it will become more natural to understand texts in terms of their social embedding. This is partly because digital libraries simplify the same uses of texts that were always possible on paper, but it is also because the radical changes in technology will help us to see even the old, predigital world in a new light. Think, for example, of the author's contact information that is present in some kinds of texts. Scientific papers have long carried the author's paper mailing address, and more recently they have begun carrying e-mail addresses and home page URL's as well. This contact information makes it possible to "reach through" the text to the author, whether directly by sending an e-mail query or indirectly by making it easier to peruse the author's other papers and research projects. In fact, you can think of research libraries as directories that scientists and scholars use to identify potential professional friends. The institutions of scientific research make it relatively natural to drop a line to the author of a text, but other institutions may work differently. Every institution organizes its own set of relations among people, and these are reflected (among other ways) in the presence or absence of contact information and the custom of using it. The authors of mass-market fiction, for example, typically build mediating structures between themselves and their readers, and there is a paper to be written about how those structures are evolving. Popular authors have long had fan clubs, but now it is not unusual for them to post letters to their readers on fan Web sites. Less established authors may respond to their readers individually. Other texts are presented without authors' names at all, but rather as part of a structured communication campaign by an organization, meant either to position the organization in public consciousness or to affect the content of public agendas and the dynamics of public debate. That is another set of relationships around a text, embedded in another set of institutions. A simplistic hope would be that digital libraries, by eliminating all of the technological impediments that separate authors and readers, will dissolve texts altogether so that they can commune directly. This scenario is simplistic in part because authors couldn't possibly commune with all their readers, nor readers with all the authors whose works they read.
Authors in effect use texts to multiply themselves, providing low-grade simulacra that compensate for their inability to explain their ideas to everyone individually. But the scenario is also simplistic because it supposes that information technology dissolves institutions, when in fact it usually just changes the pace and dynamics of the relationships that are already in place, intensifying the logic that the institution has already created. The institution may end up changing, even collapsing or ceding ground to competitors, but when that happens it is a result of the institution's own dynamics and not because the changes have been dictated directly by the workings of the technology. Scientists in a world of digital libraries will still build professional networks; mass-market authors will still address themselves to mass audiences; organizations will still engage in strategic communication; and so on. What if anything will change qualitatively as a result remains to be seen. This does not end the analysis, though. To the contrary, it defines an interesting space of problems. Let us consider again the case of the scientific paper. Graduate students soon find themselves participating in what David Chapman called the "secret paper-passing network". This is the professional network through which scientists circulate drafts of their research papers. By the time a research paper appears in an archival journal, after a year or two of editorial delays, it has long ago been read by most of its core audience -- the scholars whose opinion the author's career most depends on. The author's professional friends will have had a chance to offer comments, and their names may appear in the acknowledgements section of the finished paper. The friends, in turn, may never see the finished paper until they need to cite it, whereupon they turn to the library to check that they have their quotations and page numbers right. In these ways, a scientific paper is already very much embedded in the social relations of science, and plainly so to everyone involved, and in ways that leave numerous marks on the "thing" -- the published text -- that might end up in an outsider's hands in the library stacks or a newcomer's hands in a first-year seminar. The format and conventions of the scientific paper are, as Chuck Bazerman has shown, very much the historical product of authors' strategies for dealing with the institutional embedding of their work. Digital libraries will not eliminate this embedding, but perhaps the very form of the published work will evolve as the social dynamics around it are intensified. The distinction between preprint servers and online archival journals, for example, is increasingly artificial, a product more of the outdated institutional arrangements of journal publishing than of the practical logic of science as the scientists experience it. It has also long been suggested that digitally stored papers might contain extended content such as complete data sets, working software that readers can run on their own data, much larger collections of images, appendices, and so on. At a deeper level, however, the process of circulating drafts itself changes the nature of the paper. In many realms, such as politics, collective writing exercises are frequently organized largely to compel the authors to agree on what they want to say and how. Perhaps scientific papers, which after all are increasingly written by teams of researchers rather than single individuals, will increasingly work the same way.
Drafts can be circulated more easily to progressively wider circles of readers, comments can be obtained more easily, iterated drafts can be circulated again, and so on, with the paper's official authors effectively turning into one circle of authorship among many. The authors are still claiming exclusive credit for the work, to be sure, but they are now more openly negotiating that credit, and their work has its effects on its readers in different, more interactive ways. In some cases, the collective discussion is actually published in the pages of journals in the form of "open peer commentary" made famous by Current Anthropology and Behavioral and Brain Sciences. It also happens less formally in the comments and responses that many journals publish. But these mechanisms, while usefully revealing the dialogical process that normally goes on behind the scenes, are not iterative. A dissertation actually has something of the same character; we make students write dissertations that will gather dust on library shelves because we want the students to go through the transformative experience of conducting a full-scale project, relating it to existing work, and imposing order on the whole sprawling mess in a form that we can judge. Many people who have written dissertations can testify that they are just as happy not to publish the result, given that it was their first time through a very unfamiliar process. The most important product of a dissertation-writing exercise is the dissertation's author, newly minted as a scholar woven into the community and capable of producing scholarly texts. Dissertations might be strengthened if there were better mechanisms, formal or informal, by which chapters could be reviewed by successively wider circles of the author's colleagues-to-be in the field. One approach would be to have students publish their "related work" chapters as stand-alone papers in peer-reviewed online journals specially designed for the purpose; these journals might even employ open commentary on drafts. The motivation to referee such papers should be great, given the political nature and consequences of any survey article. I dwell on research papers because they are the texts that I know best, and because theirs is the community whose infrastructure for connecting readers and writers is most developed. I realize that the utopia of freely available digital libraries of all research publications is far from inevitable, particularly the part about it being free. The larger theme, however, is the potential for innovation in both the form and process of publication that comes with the technology -- the greater ease, relatively speaking, with which the social relations around a text can be allowed to show through. We will still have texts in such a world, but it will be much clearer to everyone that texts inscribe the workings of the world around them. And the text itself will stop seeming like the natural unit of analysis. Whereas a paper book is, by its nature, a relatively fixed and detachable quantity whose complex embedding in the larger world cannot be understood without real thought, new electronic forms have different attributes -- neither completely unfixed nor completely mired in the details of relationships, to be sure, but intertwined with the social processes around them in different, perhaps yet-unimagined ways from the relatively simple models that have been available so far. All of this poses certain conceptual challenges for the design of digital libraries.
Libraries, as we all know, are not basically about paper. Rather, they are about managing the diversity of documents: forming coherent collections of them, representing them, imposing order on them, connecting people with them, computing their emergent properties, and so on. The diversity of documents is important: documents exist in countless formats, structures, languages, relationships, and so on, and they do so partly because they are embedded in so many different institutions and forms of social relationship. Libraries are meta-institutions: they support the work of institutions that work in very different ways. What I'm suggesting is that, as digital documents evolve into more complex embeddings in the institutions that create and use them, digital libraries will be challenged to relate to the institutions they support in even more complex and varied ways. Care should be taken to ensure that implicitly paper-centered attributes of existing library institutions, understandable though they have been, are not automatically inscribed into digital libraries whose potential range of interrelations between documents and users is much wider. Some scholars, for example, hold that the very idea of cataloguing the items in a collection is predicated on those items' permanence, when the digital world is full of items, such as drafts of papers, that are important but temporary. If so then cataloguing will need to be understood in a broader way. This does not mean that cataloguing and other traditional library practices are obsolete -- far from it. It means that the basic library way of looking at the world, starting from an assumption of diversity rather than the computer scientist's typical assumption of uniformity, will be central to the emerging digital world. Boundaries may blur or collapse between libraries and neighboring institutions such as publishing, collaborative authoring, enterprise computing, peer review, records management, informal paper-passing and commenting, data capture and archiving, tenure-and-promotion evaluation processes, personal libraries and filing systems, online conferencing, and so on. Design in such a world will need to begin with institutional analysis, and with a fully drawn understanding of what documents can be in a world where people can readily reach through digital representations to pursue the ends to which documents are a means. ** Parallel computing and the structure of the Internet. Peer-to-peer computing is graduating from its ideological period (centralization bad, decentralization good) and moving into a period of rational system design. You can get a snapshot of this process by looking at the slides from Nelson Minar's talk at the last P2P conference: http://www.nelson.monkey.org/nelson-talks/oreilly-centralization/ Nelson lays out a first rough taxonomy of peer-to-peer architectures for distributed computing. He distinguishes between (1) centralized architectures, in which one machine maintains relations with many other machines, none of which communicate with one another, (2) ring architectures, in which each machine maintains relations with two of the others, all in a row, so that the system as a whole forms a cycle, (3) hierarchical architectures, in which the machines are arranged in a tree structure, and (4) decentralized architectures, in which the machines are not arranged in any definite topology and may contact one another arbitrarily or evolve a connection topology as the task requires. 
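To make this taxonomy concrete, here is a minimal sketch, in Python, of the four topologies represented as adjacency structures, where each topology is simply a mapping from a machine to the machines it maintains relations with. The function names and the representation are my own illustration and are not taken from Minar's talk.

  import random

  def centralized(n):
      # One hub (machine 0) maintains relations with every other machine;
      # the spokes do not communicate with one another.
      return {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

  def ring(n):
      # Each machine maintains relations with exactly two neighbors,
      # so the system as a whole forms a cycle.
      return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

  def hierarchical(n):
      # Machines arranged in a binary tree; each one talks to its parent
      # and to its children.
      topology = {i: [] for i in range(n)}
      for i in range(1, n):
          parent = (i - 1) // 2
          topology[i].append(parent)
          topology[parent].append(i)
      return topology

  def decentralized(n, contacts=3):
      # No definite topology: each machine contacts a few arbitrary peers,
      # and the pattern could be rewired as the task requires.
      return {i: random.sample([j for j in range(n) if j != i],
                               min(contacts, n - 1))
              for i in range(n)}

Calling ring(5), for example, yields {0: [4, 1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}, and nothing in the representation itself favors one topology over another.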
Which of these architectures is best for a given task, Minar argues, should be regarded as a question for technical inquiry, not as an a priori question of ideology. In this sense the term "peer-to-peer" is a leftover from the ideological days, since only two of the four architectures (ring and decentralized) treat their constituent machines as peers. According to this more engineering-oriented approach to peer-to-peer computing, the purpose of a given system topology is not to avoid seizure by the copyright police but simply to use computing resources most efficiently. In that sense, peer-to-peer computing is starting to rediscover the world of research on parallel computing that has evolved independently of the Internet. The language of "topologies" for parallel computing goes way back, and I am struck by the analogy between the argument for peer-to-peer computing on the Internet and the argument that Danny Hillis provides in the opening pages of his book about the Connection Machine. The Connection Machine was a massively parallel architecture that was developed first at MIT and later at the Thinking Machines Corporation in the 1980's and 1990's. Hillis observed that the conventional von Neumann serial computer, though highly evolved, nonetheless makes extremely inefficient use of its circuitry. A modern computer might contain literally billions of circuits, and so it should be capable of billions of processing operations in each clock cycle. Unfortunately, nearly all of those billions of circuits are memory circuits, and nearly all of the memory circuits do nothing on a given clock cycle except perhaps refresh themselves. It follows that an efficient machine needs more of a balance between memory and processing, so that a larger proportion of the circuits can be doing useful work at any given time. Thus the Connection Machine, which consists of many thousands or millions of simple processing elements, all with their own local memory and all executing the same broadcast software instructions (single-instruction-multiple-data operation, or SIMD). The simple processing elements can exchange data by means of two communications grids, a single flat plane and a general-purpose packet-switching network. The Connection Machine is useful for computational problems whose inherent structure entails homogeneous processing across the entire data set and whose internal relationships match the Connection Machine's communication topologies. A much-reduced Thinking Machines is part of Oracle now, and it is hard to tell whether the Connection Machine architecture failed for technical or business reasons. In any case, the underlying argument is still partly valid. The analogy between the Connection Machine and peer-to-peer computing is this: the memory elements in a von Neumann serial machine are analogous to personal computers sitting on people's desktops, the centralized von Neumann processor is analogous to large Web servers, and the communications network of the Connection Machine is analogous to the Internet. So the good news is that the Internet is already the Connection Machine, without the high-speed planar communication grid but also without the limitation of SIMD operation. It is just a matter of rounding up spare computational cycles and programming them. Of course, the peer-to-peer community is not alone in viewing the Internet as a platform for distributed computing.
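To make the SIMD idea concrete before moving on, here is a toy sketch in Python of Hillis's picture: many simple processing elements, each with its own local memory, all applying one broadcast instruction to their own data. This is my own illustration, not the Connection Machine's actual programming model, and a real SIMD machine applies the instruction to every element in the same clock cycle rather than looping over them.

  class ProcessingElement:
      def __init__(self, value):
          self.memory = value          # each element holds its own local data

  # A toy SIMD machine: many elements, one broadcast instruction stream.
  elements = [ProcessingElement(v) for v in range(65536)]

  def broadcast(instruction):
      # The front end issues a single instruction and every element applies
      # it to its own local memory (sequentially here, simultaneously in
      # actual SIMD hardware).
      for pe in elements:
          pe.memory = instruction(pe.memory)

  broadcast(lambda x: x * x)           # every element squares its own value

Even in the toy, Hillis's point about balance is visible: every piece of memory has processing attached to it, so the circuitry is not sitting idle while one central processor grinds through the data a word at a time.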
The scientific community already works with massive data sets using algorithms, especially simulation, that involve very high levels of inherent parallelism, and the concept of grid computing is to build virtual machines that make the Internet look like a single expansible parallel architecture for these sorts of advanced computations. Despite its undoubted importance, though, scientific computing is in one sense the least interesting case of parallel distributed computing on the Internet. As I pointed out in "Computation and Human Experience", computations are massively parallelizable to the extent that their inherent structure maps onto the structure of the physical world in which the computational elements are arranged, simply because computing hardware is faster and easier to build when the wires are short. Our own physical world has three dimensions, and so the optimal case is a computation whose inherent structure has three dimensions or less. And that is precisely the case in simulations of the physical world, at least the ones whose causality travels at far less than the speed of light, for which it really is too bad that the Connection Machine's two-dimensional communications grid has no parallel on the Internet. The hardest cases are the ones whose structure derives not from the same physical world where the computations will be realized, but from other worlds, such as the social world, whose structures are quite different. This is why research on peer-to-peer computing emphasizes diversity and taxonomy rather than forcing maximum performance from a relatively narrow set of computational models. The computational worldview will always be on the lookout for mathematically simple structures underlying the real-world problems that peer-to-peer work deals with, but in many cases that search will be misguided. The social world does have some mathematically simple structures -- for example, when you get a large enough collection of anything, such as all the cities in a country or all the books in a library, things like Zipf's Law always seem to apply. For the most part, however, the social world simply has structures of a very different sort than those that computational methodology is accustomed to. In this sense, the Internet is curiously allied with its seeming opposite, the von Neumann serial machine. The serial machine may be limited to executing a single instruction at a time while leaving the vast majority of circuits spinning their wheels. But in exchange for this inefficiency, the programmer is freed from the structural analysis that massive parallelism requires. Of course, modern compilers and processors cooperate in discovering small amounts of parallelism, but this is all done automatically and needn't concern the programmer. And programmers do need to analyze the structure of their problems in some sense, for example for purposes of abstraction and modularity. It is just that those sorts of analysis are more routinely and uniformly rewarding than the kind of analysis that succeeds only by flattening a complex computation onto a three-dimensional universe. The von Neumann processor simply punts on that problem, exchanging a minimum of efficiency on the average problem for a maximum of generality across all of them. The Internet makes a similar compromise. By providing general-purpose switching capability, it does not force any a priori topology of communications onto the programmer or the user community. In exchange for this generality, the Internet runs the risk of congestion.
That is why the Internet is best-suited for applications that make low demands on latency, and why controversy rages about whether to provide quality-of-service guarantees for latency-critical applications by overprovisioning the network (simply attaching more and more routers to bigger and bigger pipes) or by complicating the Internet protocols with mechanisms specially suited to guaranteed-latency communications. Grid computing imposes an even more rigorous set of pressures on the Internet: minimizing the latency of a large number of data streams simultaneously, where the data streams have a definite, stable structure as opposed to the moment-to-moment reconfigurability of general packet switching. As these arguments suggest, there is a difference between building a distributed virtual computer on top of a million far-flung Internet hosts, which is easy, and building a distributed virtual computer that uses resources efficiently. Some computations, such as animation, can be decomposed into a large number of independent processes (one frame per processor), and in those cases the simple Internet-as-computer metaphor works very well. But the physical world and the social world are both highly connected places, so most computations are not like that. As it is, an abstraction barrier separates the generic Internet from the diverse computational structures that are built upon it. At some point that abstraction barrier is going to come under pressure, and when that happens we will have to decide whose Internet it really is. ** Toward a global campaign against organized irrationality. We ought to start a global campaign against organized irrationality, such as the assaults on rational thought that are conducted by the use of public relations methods in politics. We would start by naming various types of distortion -- projection, for example -- that are pervasive in professionalized public debate. By explaining in plain language the methods and motives that produce those distortions, we would help citizens to protect their minds against the irrationality that pelts them. The need is profound: democracy will be impossible until civil society can delegitimate organized irrationality. I know that organized irrationality may seem like too overwhelming an opponent. But people have defeated other moral outrages in the past, and they can defeat organized irrationality too. To think about how a campaign against organized irrationality might work, let us compare and contrast it with the human rights movement. Precursors aside, the human rights movement as such begins with the UN's Universal Declaration of Human Rights that Eleanor Roosevelt negotiated fifty years ago. Those sorts of international declarations are often misunderstood; after all, they have little or no legal force. For this reason they are often called soft law, and many people are surprised to hear that scholars, diplomats, and activists take soft law seriously. Soft law serves three purposes: it provides an occasion for building global networks of proponents, it requires the proponents to negotiate a common language, and it legitimizes that language as the basis for subsequent debates within and among countries. These purposes interact, with the common language facilitating networking and the growing legitimacy of the language creating a greater motivation for networking.
The resulting discourse has its effects indirectly: once the language of human rights becomes entrenched in the public discourse of a country, so that ordinary citizens use human rights language to reframe their own local issues and build new structures of civil society, initiatives to give the language legal force can get under way. Those initiatives will be strengthened both by the international legitimacy of the discourse and by the assistance that human rights activists in different countries can offer one another as the concrete political work begins. I will compare the proposed campaign against organized irrationality to the campaign against violations of human rights in several ways. (1) Naming. A phrase would be needed to provide a common banner for movement activists in different countries to march under. A phrase is not enough in itself, but it can focus the project of developing a more extensive discourse. The ultimate goal is to entrench this discourse in everyday public discussion, so the necessary intellectual work must proceed on both scholarly and vernacular levels. I suggest the phrase "organized irrationality" partly because it resembles an existing phrase, "organized violence". A drawback is that "organized irrationality" is negative, not positive; it names the problem, not the values that a solution would realize. Unfortunately the word "rationality" is burdened with all kinds of distracting and spurious connotations. The demagogues who make their living destroying public discourse often use anti-intellectual rhetoric to mock anyone who stands up against organized irrationality, with the result that many ordinary people believe that only trained specialists have the tools and authority to identify the distortions. Trained specialists who can't explain themselves in vernacular language don't help either. (2) Prior identification. Ordinary people can understand the idea of human rights straightaway: when someone tortures you, you are quite clear that something wrong has happened and that it shouldn't happen again to you or anyone else. And once the language of human rights finds a foothold in response to those gross violations, little effort is required to extend it to a wider range of issues, such as access to medical care. Achieving global consensus on that sort of extended usage might be hard, but comprehending in general terms what it would mean is easy. Whereas the goal of torture is to produce a certain mental clarity about the reality of the situation -- we can do this to you -- the practitioners of organized irrationality have the opposite goal -- to produce a confusion that persuades people to wander around in a state of fragmentation or, better, to waste their days screaming incoherent slogans ("victim!") at imaginary enemies. People who have been reduced to physical wrecks by torture may be afraid to stand up against the evil, but they understand its nature. People who have been reduced to intellectual wrecks by organized irrationality will have been immunized against all attempts to help them, and it would obviously be wrong as well as futile to compel them to accept help. Thus, whereas a campaign against violations of human rights logically begins with the most extreme victims, who are the most motivated to write letters and lobby international bureaucrats on behalf of others, a campaign against organized irrationality logically begins with the people who occupy a twilight zone: sensitized to the nature of the evil but not yet lost to it.
Fortunately, such people will always exist. People will be found along a spectrum, with the fully healthy at one end, the raving mad at the other end, and a wide range in-between of people who employ the rhetorical devices of organized irrationality simply because they have never heard anything else, but who have not reorganized their personalities in the disturbed way that the demagogues intend. (3) Problems of authority. Democracy requires a strong civil society, and civil society is founded on an extensive network of individuals who have stepped forward to articulate positions on particular issues and mediate between ordinary citizens who agree with those positions and the public authorities who would be responsible for implementing them. That much is a commonplace. It should be recognized, however, that authoritarianism also creates something of that same description, identical in all formal ways to the workings of a democracy except that the "positions" and "issues" are systematically distorted. A democratic society is impossible unless people are immune to organized irrationality, and that will not happen unless they can name the distortions. As a result, a movement for democracy is necessarily didactic in nature, at least when starting from the depths of organized irrationality that most "advanced" countries now suffer. The human rights movement must also teach its language, but the contrast between human rights and its opposite is much easier to explain than the contrast between rational thought and irrationality. The problem is that the demagogues of authoritarianism also pose as teachers, and they too "help" their victims to name various phenomena using phrases (e.g., "political correctness") that twist reality. The work of freeing people is thus superficially similar to the work of enslaving them, and it is hard to make the difference clear to people whose minds have been infected by the madness. The difference, very importantly, is not just one between opinions, but between irrationality and the wide range of rational opinions. In the case of the phrase "political correctness", for example, the irrationality consists in part in the systematic blurring of two ideas: (a) opinions other than the speaker's own, and (b) the practice of forcing one's opinion on others. In this way, heterodox opinion is portrayed as ipso facto repressive, a reprehensible suggestion in any democracy. (4) Cultural meaning. New political meanings are generally made by drawing on the reservoir of meanings that a culture inherits from its history. In that sense human rights means something different in every national context, even as its values of universality stand in tension with the particularism of nationalist cultures. Human rights campaigners in every country must search through their respective cultures to find language that fits the global human rights discourse with the language of their own society. This language may come from religion, or from historical leaders who are remembered as especially just. Campaigners against organized irrationality have the same task. They are looking for a language not of justice but of reason, or at least of opposition to unreason. This is hard because of the aforementioned strands of anti-intellectualism that afflict many cultures; campaigners against unreason are too easily labeled as pretentious authorities whose academic niceties are far removed from the robust plain-spokenness of the common people.
Campaigners against unreason probably have no alternative to positioning the demagogues as the corrupt authorities that they are. In practice this will usually mean drawing upon and elaborating in a positive way the inchoate populism that can be found in most cultures. The idea is to offer people an identity that extends traditional elements and draws out the forces of sanity that can presumably always be found within them. To be an American, for example, is surely on some level to laugh at doubletalk. The problem is that authoritarianism goes to enormous lengths to colonize popular identity -- it has no alternative, given that the beneficiaries of authoritarian culture are greatly outnumbered by its victims. This is why academic talk does not suffice. (5) Law. The goals of the human rights movement are clear: to make violations of human rights illegal, and to make laws against human rights violations enforceable. The goals of the campaign against organized irrationality are less straightforward. Much as organized irrationality is the enemy of democracy, making organized irrationality illegal would be antidemocratic, not to mention impractical. The campaign against organized irrationality, then, seeks to instill certain norms in each society. It does this not by writing new laws, but by publicizing, explaining, and entrenching certain patterns of thought that identify and reject the twisted rhetorical devices that characterize organized irrationality. The contrast with human rights, however, is not as sharp as it may appear. Laws do not write themselves, and they do not enforce themselves. A revolution in law such as the codification of human rights will not occur, or at least will not have consistent practical effect, unless the underlying principles of human rights are legitimated throughout the society -- not unanimously, perhaps, but beyond any possibility of overturning through an overt campaign or coup. A social campaign, then, whether for human rights or against organized irrationality, is fundamentally going to be won or lost on the battlefield of people's minds. And it is won only when the patterns of reasoning that it promotes are institutionalized in the society, meaning that they are woven into taken-for-granted daily discourse, written into textbooks, appealed to by all sides in public debates, and finally interpreted as synonymous with national identity. Organized irrationality, like all social pathologies, can never be eliminated entirely. It will always remain latent in the culture, waiting for an opportunity to return to the surface. But it can be driven into its cave and kept there so long as society continues to affirm democratic values. The consequences for law will surely be numerous, but literally outlawing irrationality will not be one of them. (6) Interests. The impossibility of outlawing organized irrationality has a nonobvious side-effect: campaigns against it do not provide lawyers with ways of making a living. Human rights campaigns are largely staffed by lawyers, and one of their motivations is that human rights law gives rise to controversies in which lawyers are employed. The point is not that lawyers are entirely mercenary; people in many professions seek ways to align their careers with their values. The problem is simply that it is unclear who can align their careers with the campaign against organized irrationality. 
This is one reason why such a campaign should include expanded norms of practice for journalists and editors: confronting and rejecting irrationality should no longer be viewed as a breach of the reporter's duty of objectivity -- treating all "sides" to a controversy equally whether they make sense or not -- but quite the contrary should be part of the reporter's job as an upholder of democratic values. Crusading against organized irrationality -- not just random mistakes by ordinary citizens, but the systematic practices by which legions of professional advocates twist language and subvert reason -- should be one way that a journalist can advance professionally. Principled journalists surely do not enjoy having to quote spokespeople who dissolve serious issues into blurry associations, and they should have legitimated grounds, and even material incentives, to refuse to do so. (7) Asymmetries. Human rights campaigners have the advantage that the most serious violations of human rights occur in the least powerful countries. The leading industrial countries, for all their failings, nonetheless uphold high standards of human rights domestically. Human rights campaigns generally do not threaten them, and they are happy to wield the language of human rights selectively in support of their foreign policies. While this does threaten to delegitimate that language, in practice its cynicism is clear enough in contrast to the principled stands of organizations that criticize all governments equally. In the case of organized irrationality, by contrast, the worst offenders are found in the leading industrial countries, and the worst offender by far, the society in which organized irrationality has been most intensively developed and professionalized, has been the United States. This is, of course, not the received understanding. Even though the term "propaganda" was once routinely used as a synonym for public relations in the United States, Cold War propaganda stuck the word "propaganda" exclusively on the communist governments that the United States opposed. While the propaganda of communism was surely reprehensible, we should understand how ineffective it was in comparison to the propaganda of the First World. Vaclav Havel wrote extensively of the emptiness of official language in communist Czechoslovakia, where it was repeated by everyone but believed by no one. The economic system of capitalism proved itself much more capable of manufacturing automobiles and computers; its superiority in the propaganda realm was just as great, and for the same reason. Private propaganda is of higher quality than public propaganda, and the export of American political technologies is one of the gravest dangers to democratic values globally -- even as the labelling of those technologies *as* American is the most straightforward way of building societal immunity against them. At the same time, the Cold War also provides considerable grounds for optimism. Human rights campaigners once confronted a world power, the Soviet Union, whose hostility to human rights was vehement and overwhelming. Yet the Soviet Union fell in large part because it was delegitimated through its signing of the Helsinki Accords. Now a global human rights campaign is accelerating in the Chinese diaspora as well. The evil of organized irrationality in the United States should not be equated with the evil of the Gulag Archipelago -- confusing people is not as bad as killing them -- but the magnitude of the challenge is comparable even so.
** Beyond scientism. Having disparaged irrationality, I also want to talk about what Hayek and others have called scientism. Scientism is not science; in fact it's the opposite of science parading as science for the benefit of people -- scientists and non-scientists alike -- who uncritically treat science more as symbol than substance. Here is an example. I know someone who regards science very highly. He holds some strong beliefs: that science is the only possible source of knowledge; that a modern society depends on such knowledge; that the scientific foundations of knowledge are constantly under mortal attack by forces of irrationality that include religion, mysticism, and bad philosophy; and that these onslaughts of irrationalism are so powerful that science -- and thus civilization -- are in grave danger. I was aware of these beliefs, but I hadn't realized their intensity until one evening when I happened to mention that I found plausible the widespread idea that a person's emotional state could have some effect on their physical health. When I said this, my friend looked at me in slack-jawed amazement. He then underwent a long bout of stammering, starting and stopping various bits and pieces of sentences, until at last he was finally able to explain what was happening to him. He told me that he regarded my suggestion about emotional states and physical health as so bizarre that it would be literally immoral even to discuss it, lest the bizarre belief be given a civilization-endangering respect that it does not deserve. He had been trying to explain this to me, except that he believed that explaining it would be immoral, thus the stammering. Finally, though, he persuaded himself that the risk to civilization of explaining the problem to me was less than the risk to civilization that I would pose by spreading my error to others. He was certain that I had been joking, or at best just irresponsibly spouting off without considering the consequences, and he tried to get me to recant. I was amused as heck by all this, and I carefully verified that he had understood me correctly. I then proceeded to torment him by walking through the argument why such a belief would be plausible. After all, I said, scientists regard emotions as electrical and biochemical states in the brain, and the brain is connected to the body. My friend looked as though I had sworn allegiance to Satan, went through another round of stammering, and finally explained to me that it was not possible for mental states to influence the physical world -- that would be magic, and magic is the opposite of science. It soon developed that my highly scientific friend believed in a radical version of mind-body dualism, so radical that the mind could not have the slightest causal dealings with the body. I found this belief nonsensical to the point of delusion, but he refused to discuss any further what he regarded as a gross assault on civilization. This really happened. I will give you another example. When I worked at UC San Diego, a professor in our sociology department named Steve Shapin published a book entitled "A Social History of Truth". It's an account of the social context in which science arose, and particularly the role of trust among the aristocratic class that founded the Royal Society. Steve was being playful with his title, which he intended as a provocative invitation to scholars of early modern science to read his book and consider the fine points of his argument. 
Little did he know that he would be subjected to a right-wing campaign portraying him as an example of liberal relativism. This campaign made an utter mockery of his argument, ignored it altogether to be honest, and proceeded purely from his title to spin all sorts of intellectual slander about him. This was shameful enough, but what was really shameful was the willingness of some actual scientists to join in. I had lunch with the most vocal of these, a biologist who lectured me at tremendous and very tedious length about the scientific method. It didn't seem to have occurred to him that I had attended eighth-grade science class, not to mention sophomore experimental physics class, and was well-informed on the subject. Indeed it did not seem to have occurred to him to take the slightest interest in anything that I had to say. Instead, he "knew" perfectly well what was going on: an attack on science. He ascribed to Shapin the kind of philosophical idealism that some people gloss by saying that "reality is just a social construction". Never mind that most theories that use the phrase "social construction" have nothing to do with idealism. This guy did not have the faintest idea of what Shapin had said; even though Shapin had written an entire book about the social origins of the scientific method, he carried on as if Shapin (in his words) "simply failed to understand" such-and-such basic facts, namely the scientific method. The irony of the situation was pretty serious: this scientist was preaching to me the central importance of empirical inquiry in the fight against dogma, while rejecting in the most dogmatic fashion a serious attempt at empirical inquiry into the practice of science. He was not alone, and I got the impression that a rumor had spread through the local scientific community that a sociologist on their own campus was one of "them"; their stereotyped expectations were so perfectly confirmed that empirical inquiry into the nature of their colleague's views was not required. I have seen this pattern on numerous occasions: when you burrow into the logic of society's most vocal defenders of science, you routinely encounter the most howling antiscience at its core. Having said this, experience shows that my life will now become much harder unless I hasten to add that I am not myself hostile toward science; indeed, it seems to me that the advocates of scientism are the ones who do not believe in science. And I have a theory about where the pattern comes from. New institutions rarely become established without a fight, and the institution's participants routinely keep the fight going, consciously or not, for years and centuries after the rest of the world has moved on. For example, traditional healing practices once represented serious competition to scientific medicine, and reasonably so, since medicine had not yet invented sterilization. The point is not just about the comparative efficacy of the two systems. As traditional social systems broke down, the cultural context of traditional healing practices broke down along with them, and scientific medicine was better fitted to the new social systems. In a social sense the most basic claim of science is not that it works better, but that its methods are public and defeasible. If a traditional healer makes a claim, it is basically a matter of reputation and authority, so traditional healers are regulated by the community's long-term experience and not by its ability to evaluate particular claims.
If a scientist makes a claim, on the other hand, the warrant for the claim is, it is said, out in public where anyone with sufficient training can evaluate it. In a modern society where knowledge-claims have consequences for the distribution of power, this kind of public defeasibility is crucial for social institutions to function at all. That is why my friend was so appalled at my seemingly unscientific views, and why he explained his objection in political terms. From his perspective, claims about the emotional basis of physical health -- which he regarded as nondefeasible by their nature -- were a short step away from social collapse and dictatorship.

This theory explains many otherwise strange phenomena. Consider, for example, the attempts by Congress and the National Institutes of Health to establish an Office of Alternative Medicine to perform scientific tests of various unconventional treatments that have been offered as medical therapies. They got a guy to run it, a trained scientist who had been raised in a traditional Native American society. This initiative seemed like perfect common sense to me. If you have large numbers of people believing that they can treat illnesses by strapping magnets to their bodies, it would seem like a valid function of government to support controlled scientific experiments to determine whether these procedures have any medical effect. After all, such experiments are the most routine, most straightforward kind of science in the world. This view, however, was not shared by a vocal group of scientists who were adamantly opposed even to performing the tests. They couched some of their arguments in terms of objections to the methodology or credentials of the people who were chosen to perform the experiments, but it was clear that their objection was a matter of principle. They were not interested in fine-tuning the experiments but in preventing them. Often their arguments were disturbingly circular: no scientific proof existed that these therapies had any benefit, and so it would be wrong to search for scientific proof. By those rules, of course, no science would ever be done. Alternatively, they argued that the therapies should not be tested because they had no theoretical basis -- the purest dogma, as if theory were the test of reality and not the other way around.

These thoughts come to mind, as you might have guessed, in response to the recent news report on scientists in the Netherlands (where else?) who are seeking scientific evidence for supernatural explanations of near-death experiences (NDEs). Now, I think it is perfectly proper to subject these phenomena to scientific tests. What bothers me is the assumption, apparently by all parties both pro and con, that the stakes are nothing less than the foundations of science. It seems to me that culture and science are engaged in a pointless battle here, with scientists -- meaning, now, adherents of scientism -- assuming that the phenomena are going to be easily explained away with a squirt of brain chemicals, and non-scientists -- meaning, now, adherents of a kind of simplistic anti-science -- gunning for the most basic foundations of the mechanistic worldview. This drama, it seems to me, gets in the way of any serious inquiry into the matter, which in turn ultimately strengthens the hand of the anti-scientists and undermines the place of science in culture.
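Whether the subject is magnet therapy or NDE reports, the experiments at issue really are of the most routine kind. As a purely illustrative sketch, with simulated numbers that have nothing to do with any actual study, here in Python is the sort of two-arm comparison involved:

  # A routine controlled comparison: simulated outcomes for a treatment arm
  # and a sham (placebo) arm, compared with an ordinary two-sample t-test.
  # All numbers are invented for illustration.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(seed=1)

  # Simulated improvement scores; in this sketch the "magnet" arm truly does
  # nothing, so both arms are drawn from the same distribution.
  magnet = rng.normal(loc=1.0, scale=2.0, size=120)
  sham = rng.normal(loc=1.0, scale=2.0, size=120)

  t_stat, p_value = stats.ttest_ind(magnet, sham)
  print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

A genuine effect would show up as consistently small p-values across well-powered replications; no effect shows up as p-values scattered evenly between zero and one. Either way, the question gets an answer.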
(I'm told that you can find the original article about the NDE work at <http://www.lancet.com/search>, free registration required; look for "near death" and it's the first article that comes up, with Pim van Lommel as the first author.)

Let us consider another case: the phenomenon of people who believe that they can talk to the dead. Talking to the dead is a widespread practice, indeed nearly universal, in shamanistic cultures throughout the world, and it is also an experience that many people in modern societies have had. The firefighters at the World Trade Center routinely say that they can hear their dead buddies directing them through the rubble. It will not suffice to accuse these people of pretending. Some people do pretend about such things, of course, just as some people pretend that they can fix plumbing. But it strains credulity to think that cultures the world round have built enormous ritual systems around pretending that they see images, hear voices, engage in conversations with those voices, and so on. People do have those experiences. The question is what's going on with them. The scientistic impulse is to blow off the phenomenon with some trite explanation: they're crazy, they're ripping people off, they've eaten moldy grain, etc. Followers of scientism routinely claim to know the truth about such matters, even though they believe that knowledge follows from experiments, and even though no such experiments have been performed. Scientism is not science.

I'm not going to take a position about the reality of talking to the dead. I do want to argue, though, that it is far from implausible on scientific grounds that people actually do have experiences that deserve to be called "talking to the dead". Science often starts from the metaphors of the age, so let us start from metaphors of software. Imagine that our minds are software architectures that happen to include fairly general provisions for mobile code. Imagine that modules of mobile code can move easily from one person's mind to another, transferred through a wide range of subliminal signalling mechanisms. (Sam Shepard's early plays are based explicitly on this premise.) Perhaps the modules resident in different people's minds can even communicate with one another through the outwardly innocuous arrangements of objects in a room. In such a world, it is entirely imaginable that our selves are distributed, and that each of us, far from being confined to our own heads, is actually spread out in several people's heads. This idea is not far distant from what Freud called "introjection", and it is even closer to what later Freudians called "projective identification". When we die, the theory suggests, we do not instantly depart. Rather, large parts of our psyches remain distributed throughout the minds of the people we knew and encoded in the physical arrangements we left behind. Someone who talks to the dead, on this view, is simply making contact with these remnants of the psyche. It's not a complicated idea. And anyone who knows how computers work could fill in the details.

Am I saying that this theory is true? No, of course not. I have no proof. What I'm trying to explain is the unfortunate cultural dynamic by which this entirely plausible, entirely thinkable theory nonetheless remains unthought -- or at least unsaid, even unhinted, in any respectable public discussion. Even to suggest it is to open the doors of the most dangerous conversations in Western society.
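Since anyone who knows how computers work could fill in the details, here is a minimal toy sketch of the mobile-code metaphor in Python. It makes no claim whatsoever about how minds actually work; every name in it (Module, Mind, transfer) is invented purely for the illustration.

  # A toy sketch of the mobile-code metaphor: "pieces of a self resident in
  # other hosts" is perfectly ordinary bookkeeping in software.
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class Module:
      """A fragment of someone's psyche, treated as a unit of mobile code."""
      origin: str        # the person in whom the module originated
      traits: List[str]  # stand-in for whatever the module encodes

  @dataclass
  class Mind:
      """A host that carries modules, its own and other people's."""
      name: str
      modules: List[Module] = field(default_factory=list)

  def transfer(module: Module, target: Mind) -> None:
      """Subliminal transfer, on the metaphor: copy a module into another mind."""
      target.modules.append(Module(module.origin, list(module.traits)))

  # Alice's characteristic habits end up resident in Bob.
  alice = Mind("Alice", [Module("Alice", ["turns of phrase", "ways of judging"])])
  bob = Mind("Bob")
  transfer(alice.modules[0], bob)

  # After Alice is gone, Bob can still "run" the Alice-derived module locally,
  # which is the metaphorical sense in which a psyche remains distributed.
  remnants = [m for m in bob.modules if m.origin == "Alice"]
  print(f"Alice-derived modules resident in Bob: {len(remnants)}")

On the metaphor, talking to the dead is just querying modules whose origin is no longer around; nothing in the sketch requires anything spookier than message passing and persistence.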
It is immediately evident, for example, that the distributed psyche theory could be used to explain telepathy, ghosts, and some kinds of clairvoyance. It could provide a scientific basis for the afterlife, metempsychosis, and many other religious beliefs, and especially for the ritual practices of shamans. It is hard to know who would find this theory more threatening: the scientists, who would have to admit that vast ranges of human experience lie entirely outside the artificial boundaries that they've set for their theories, or the anti-scientists, who are probably smart enough to see how many claimed phenomena the mobile-code theory *doesn't* explain. My real point is that science as we know it today is too caught up in cultural double-binds for its own good, or ours. end