[RRE]notes and recommendations

From: Phil Agre (pagreat_private)
Date: Thu Dec 13 2001 - 00:39:11 PST


    Some notes on how-to's, the true nature of computer science, the
    history of institutional analysis of computing, and the relation
    between network culture and public culture.
    
    **
    
    In defense of how-to's.
    
    I began writing how-to's partly for convenience.  Having answered
    the same question several times, I started writing the answers down,
    taking the trouble to write in a way that would be useful to people
    in other contexts.  At the same time, I had long been aware of the
    value that many people have found in well-done how-to's, and I've
    tried to understand what this value consists in.  A how-to promises to
    explain the procedures that happen all around us and yet that somehow
    remain semi-secret.  Of course, many how-to's make false promises:
    how to lose weight without dieting, how to make millions without
    working, and so on.  Many how-to's are destructive, such as the
    manuals that
    sell fantasies of conformity and manipulation leading magically to
    success.  Poorly-done how-to's fail to embody any useful ideas at
    all, offering instructions that seem detailed but are disconnected
    from the real point.  The history of bad how-to's has given how-to's
    a bad name.  What is worse, a certain authoritarian streak, present
    in many cultures, disparages how-to's with sophistry such as "reading
    a book doesn't substitute for doing it", the real agenda being a
    fear that ordinary people might discover in a book the inspiration
    to change their circumstances.
    
    Once these artificial difficulties are cleared away, the hard problem
    remains of specifying what is actually good about a well-done how-to.
    Perhaps there's no mystery at all: writing in a procedural style is
    just an expository device, and the real question is whether you have
    something to say.  From this perspective, how-to's are as diverse as
    texts in general, and if we stop treating "how-to" as a doctrine
    and treat it simply as a format, then we can step back and see the
    different threads
    in the evolution of the how-to genres of particular times and places.
    
    I want to take a different approach, which is to ask how a well-done
    how-to makes a difference, and in particular why a how-to should
    even be necessary.  After all, sociology is full of theories about
    how well-socialized we are.  Social order is said to consist in our
    socialization into cultural or disciplinary value systems, or our
    conformance to rules or our entrainment into habits.  Society, in
    short, is something that we are supposed to be *good at*.  Why should
    we need instruction?  Indeed, sociologists have in recent years headed
    strongly in the opposite direction, viewing social skills as the very
    embodiment of subjection.  "Power" is understood to enclose us and
    remake us in its image, and history is written as the low boil of
    skirmishes between "power" and "resistance".  From this perspective,
    a how-to is not only superfluous but actively harmful -- a baited
    hook, a laughable attempt to deepen the oppression from which society
    is fundamentally made.
    
    Although this orientation in sociological work has usefully directed
    our attention to many important phenomena, I believe that it is
    headed full-power in reverse.  The facts are quite different from
    what the sociologists tell us: most people are poorly
    socialized into the institutions they participate in, it is entirely
    possible for them to understand and navigate those institutions more
    effectively than they do (either by their own lights or anyone's),
    and individual and group agendas often escape the imperatives of
    the institution.  I have had many years' experience by now of writing
    how-to's and talking to people about them, and I have come to some
    intuitions that are nearly the opposite of those that are taught in
    sociology.  It seems to me that the sociologists conflate two ideas:
    (1) learning the skills of navigating within an institution by
    occupying roles that the institution defines, and (2) in so doing,
    becoming a particular kind of normatively or stereotypically specific
    person that the institution would have you be.
    
    You see this conflation, for example, in Foucault's talk about the
    "production of subjects".  The idea is that, by becoming a doctor
    or a citizen or a psychiatric patient, you become inserted into an
    all-encompassing social being -- a way of seeing, thinking, acting,
    interacting, talking, writing, feeling, and so on.  There is, to
    be sure, some truth in this: to become a doctor is certainly to be
    socialized to a significant degree into ways of thinking and acting
    and so on.  Much of this lies beyond individual consciousness: it
    happens so complicatedly, and in so many different ways, and with
    so much nonobvious structure, and with so many appeals to emotion
    and reason, and with so much seclusion from the outside world, that
    it is bound to change you.  In fact, the very difficulty of becoming
    a doctor is part of what causes it to change you so completely:
    the skills require effort to master, and demands rain down upon
    the emerging doctor from so many directions that great dedication is
    required to integrate them all by slow degrees into a smooth everyday
    performance.
    
    It does not follow, however, that the new doctor's entire social being
    is colonized.  Many doctors remember why they wanted to be doctors,
    and they choose their specialty or their place or manner of practice
    accordingly.  Many of them set about reforming the pathologies of
    the discipline that become evident to them in the course of their
    training.  Some quit.  Some write books.  What makes the difference,
    in large part, is the degree to which they consciously understand
    what's going on -- the extent to which they can identify the practical
    logic of the institution in which they are inserted, together with
    the extent to which they remain anchored in supports for their
    subjectivity that lie outside of medicine as an institution, whether
    in their religion or a political movement or what-have-you.  The
    same goes for people who occupy any role defined by any institution.
    Institutions do not colonize us completely.  It is an empirical
    question how our subjectivity evolves as we pursue our careers within
    various institutions, and it is precisely this question that the
    Foucauldian theory prejudges with its conflation between occupying
    an institutional role and taking up the subjectivity that the role
    prescribes.
    
    The role of a how-to, it seems to me, is to get into this gap between
    an institutional role and a form of subjectivity -- in other words,
    between your job title and the way you think.  In writing a how-to,
    I want to explain how the institution works so you can make it work
    for you.  This should be paradoxical from the sociological perspective:
    by explaining the very best procedures for burrowing your way
    most completely into the workings of the institution, aren't I also
    sentencing you to complete existential oblivion -- turning you into
    a kind of institutional robot, or what Harold Garfinkel would call
    a "cultural dope"?  In fact the opposite is the case.  By learning
    how to build a social network in the research community, for example,
    you are becoming more deeply embedded in the institution *and*
    becoming less subject to its imperatives.  This is a hard lesson
    for many people: The way to advance to more powerful positions in
    your profession is through networking.  The way to overcome obstacles
    of discrimination or material disadvantage is through networking.
    The way to beat the job market is through networking.  It's not
    just networking, of course, but networking is at the core of the
    process.  To network well, you need to understand what's happening:
    what people's motivations are, what expectations people bring into
    various stereotyped interactions and how this explains the otherwise
    perplexing things they do, how conflicts and traumatic situations
    arise and what's really going on in them, and so forth.  You can get
    "socialized" into all this by improvising as best you can in the face
    of whatever hits you, or you can get a theory and a plan.  I recommend
    the latter.
    
    At the most basic level, the problem with the sociological theories
    pertains to the concept of freedom.  It seems to me that Foucault
    and company share a deep assumption with the liberal theory of
    society that they oppose.  (I mean liberalism in the sense of Mill
    and political theory, not of Dewey and current American politics.)
    For both of them, freedom is a matter of being left alone.  They both
    view society as a maze of enmeshments, and they both posit a utopia
    in which people can relate to one another without their relationships
    laying any existential claim on them.  Freedom, however, is not
    something that you achieve by renouncing institutions, but quite the
    contrary something that you achieve by working the full possibilities
    of institutions of a certain sort.  The material conditions of
    individual freedom, like everything else in society, are located and
    embedded.  On a concrete level this means that I can write how-to's
    that help people get what they want from the research world (which
    is the institution I know best) by throwing themselves most thoroughly
    into mastering the ways and means of that particular institution.
    
    This helps explain a property of how-to's that I mentioned at the
    outset.  How-to's promise to reveal secrets, but the best how-to's
    reveal secrets of a certain sort: ones that are visible right in
    front of your eyes.  A how-to articulates -- puts language to -- a
    practical logic that you already inhabit, and that the people around
    you (especially the more successful ones) already visibly embody.
    There are plenty of reasons why you might need a how-to to articulate
    this visible logic.  Norms of public humility often prevent people
    from telling the "real reasons" behind their initiatives; asked "how
    did you decide to start this program?", it is normal to speak of the
    good of the organization or society rather than one's own individual
    goals.  As a result, institutions often lack occasions and genres
    for communicating this practical information.  People who participate
    in the practical logic of an institution are therefore often unaware
    of it, or unaware of it in a way that they can communicate.  As a
    result, the knowledge remains under the surface, even as it remains
    massively present in the everyday practices.  Yet precisely for
    that reason, individuals remain incompletely socialized into the
    institution's ways.  This interrupted socialization can be functional
    for the institution -- if everyone understood the game then perhaps
    they would revolt or walk away.  But just as often it is dysfunctional
    -- stalled careers, strange ideas, poor morale, and so on.
    
    Here, then, is the great insight of the how-to: articulating the
    practical logic of the institution can amplify that very logic.
    Put another way, a how-to explains what is already going on, and
    by doing so causes it to happen more intensively.  A how-to rarely
    changes the structure of incentives that the institution creates.
    How could it?  Rather, it helps individuals within the institution
    to clear away the cognitive rubbish of incomplete comprehension,
    and instead pursue their same goals with clearer vision and with
    strategies and tactics that are more aligned with the practices that
    the institution has been trying to inculcate in them all along.  Who
    wins in this deal?  Well, if the institution is designed correctly
    then everyone wins.  I realize that contemporary cultural hostility
    to institutions recoils at the idea of a well-designed institution,
    much less the mortifying slogan that everyone can win.  And it should
    be stated that few institutions are perfectly well-designed from
    this point of view.  Tensions always remain, and nobody who truly
    understands any human institution could retain any illusions about
    utopia.  Even so, a society is defined in large part by its forms of
    institutional imagination, and by its understandings of the relations
    that individuals and institutions could possibly have.
    
    Now, I do not mean to suggest that how-to's materialize from somewhere
    outside of institutions and are injected into them as lubricants.
    How-to's are produced by institutions to the same degree that anything
    is, and the long tradition of business how-to's is easily explicable
    in terms of the incentives and forms of imagination that business
    culture has created.  The books that management consultants write
    to advertise their services generally take just this form: identifying
    some of the fundamental forces that already operate on businesses,
    claiming to analyze them and their consequences more systematically
    than others have, and spelling out the decision frameworks that all
    businesses should use to align themselves more completely with those
    forces going forward.  And my own how-to's have something of the same
    character: they advise individuals to move back and forth between
    identifying emerging themes in their field and building professional
    networks around those themes.  Building a network deepens the epistemic
    conditions for identifying emerging themes; identifying those themes
    creates the conditions in turn for building networks of individuals
    who appreciate their significance and have something to say about
    them.  Someone who fully understands this advice and gets practice
    following it will achieve a profound and hopefully unsettling
    understanding of the nature of social forces and their manifestation
    in people's lives and careers.  In effect the how-to's, mine and
    others', advise people to become the wind, or in another metaphor
    to become channels for the messages that history is urging on us.
    
    Ethical issues arise at this point.  Do we really trust history so
    much that we are willing to transform ourselves into machines for
    doing what it says?  It depends how we understand history and our
    relation to it.  Those who believe in passive optimism will be happy.
    Those who believe that by analyzing social forces they are discerning
    the word of God will be even happier.  Those who believe that social
    changes should be resisted, either because uncontrolled change is
    dangerous or because the will of history lies precisely in society's
    ability to resist the mindlessness of social forces, will be unhappy
    indeed.  And all of these groups will wonder what happened to the
    idea of freedom.  If we are directed to conform ourselves to forces
    that operate on a terrain far larger than any individual career could
    encompass, then in what sense are we choosing our place in the world?
    One answer, simply enough, is that we have choice about what forces
    we want to amplify.  The forces conflict, and we express our ethical
    stances in the ones that we choose to align ourselves with.  That
    makes sense, because if we could change the world entirely to our own
    liking, regardless of the potentials for change that lie immanent in
    the world we happen to have been born in, then we would be back to the
    old bad understanding of freedom as a perfect uninvolved transcendence.
    
    But a better answer is available, and it is this: so long as the
    forces of history seem wrong to you, you have to believe that you
    don't understand them well enough.  A well-done how-to is founded
    on an accurate explanation of the practical logic of the institutions
    in which its readers participate.  The deeper that explanation
    cuts, the better.  Why do you have to believe that the best, deepest,
    most cutting explanation of the universe yields conclusions that
    you can live with?  Because if you don't then you are already dead.
    
    Along the way, this account of how-to's offers a theory of healthy
    institutions: a healthy institution is one that supports a certain
    kind of career, based on the alternation that I mentioned between
    articulating themes and building networks around them.  Professions
    are like this: professionals build their careers through a kind of
    intellectual entrepreneurship in which they identify issues that are
    likely to become important in the practice of the profession in the
    future and then reach out to build networks of the other members of
    the profession who also understand their importance.  Professionals
    become especially successful if they identify important issues
    early and exert the effort to build networks of the people who will
    become most identified with them.  Having done so, they benefit in
    several ways: they become publicly identified with issues that other
    professionals feel a need to catch up with, they become brokers
    who are alone in knowing the abilities and personalities of all
    the relevant players, they are seen to evolve public messages that
    integrate all the current information and perspectives that are
    dispersed throughout the network, and they publicly interpret new
    information in ways that support the projects and identities of their
    peers.
    
    Analogous forms of entrepreneurship are found in many institutions,
    for example in business and politics, and the question arises of the
    conditions under which an institution supports them.  In politics, for
    example, many issues are underdeveloped because their constituencies
    are too numerous, too poor, too disconnected, or too underinvested
    in the issue as individuals despite its importance for the community
    in aggregate.  Political scientists call this the "collective action"
    problem.  An institution can also frustrate issue-entrepreneurship
    by insisting on a stable or stratified social order or by disparaging
    individual initiative.  Individuals need to be able to make careers by
    developing networks around ideas, and institutions can fail to support
    those careers in many ways.  Perhaps there is no way to make money by
    gathering people around an issue, for example because there is nothing
    to sell -- after all, ideas are public goods, easily shared once they
    are revealed.  Perhaps there is no way to find the relevant people --
    the research community solves this problem with the library and norms
    of citation, but your average neighborhood publishes no directory of
    the issue-interests of the people who live in it.  Perhaps nothing is
    really changing, so there is nothing for issue-entrepreneurs to do.
    
    Often the problem is ideological.  Perhaps there is no consciousness
    in a given community that issue-entrepreneurship is the way to get
    ahead, because people are told, however well-intentioned the advice,
    that the way to get ahead is to work hard and follow the rules.  The
    world is full of people who follow that advice and then wonder why
    they aren't getting ahead.  Some of these people will maintain their
    state of dissonance until the day they die, and others will go right
    ahead and die in their guts by becoming disillusioned; these latter
    will then go around spreading their gospel of cynicism to others and
    the cycle will begin over.  And who knows?  If the institutions are
    poorly designed then perhaps their cynicism is justified in some local
    sense.  That's why it seems to me that we should audit all of our
    institutions for their ability to support the kinds of careers that
    I have described in my how-to's.  These audits will have many facets,
    including many that have not been identified yet.  It does not suffice
    to scoff and say that our world is so corrupt (in some dim way) that
    it's all a scam and nobody can really get ahead.  That's just false.
    But at the same time, every institution that I know about suffers from
    significant distortions that keep some or all of the legitimate issues
    from being mobilized through the careers and networks of individuals.
    So be it.  That's life.  And it's democracy as well: always imperfect
    by its nature, and always capable of being improved by reaching for a
    deeper understanding of the currents that run through it.
    
    **
    
    What is computer science?
    
    When I was going to graduate school at MIT, most of the professors
    around me were embarrassed to be called computer scientists.  It's
    a dorky name if you think about it: computer science.  Say it a
    few times.  Their complaint was this: why should there be a separate
    field of computer science, any more than there is a separate field
    of refrigerator science?  In their view, computers were just complex
    physical artifacts like any others.  They were engineers and proud
    of it, and they viewed computers through the prism of Herb Simon's
    lectures on "The Sciences of the Artificial".  Simon argued that
    principles of software engineering such as modularity were not at
    all specific to software, but were in fact properties of the universe
    in general.  The structures that evolve in nature are modular because
    those structures are more stable than others and more capable of
    giving rise to further productive evolution.  Computers were simply
    a special case of these universal laws.
    
    It is worth noting that this perspective on computer science differs
    radically from the received view in most textbooks of the subject.
    When the question is asked, "What is a computer?", the most common
    answer is mathematical: a computer is a device that is capable of
    computing a certain kind of mathematical function -- what's called
    a universal Turing machine, a machine that can be configured to
    compute any function that any particular Turing machine can compute.
    The professors at MIT would have none of this.  Of course the
    mathematics of computability was interesting, but it reflected only
    one corner of a much larger space of inquiry.  What they found most
    interesting was not the mapping from single inputs to single outputs
    but the relationship between the structure of a computational device
    and the organization of the computational process that was set in
    motion when that device interacted via inputs and outputs with the
    world around it.
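
    To make the textbook picture concrete, here is a rough sketch in
    Python (my choice of notation, nothing canonical) of what a universal
    machine amounts to: an interpreter that takes the transition table
    of any particular Turing machine as data and simulates it step by
    step.  The unary-incrementer machine at the bottom is invented purely
    for illustration, and real formulations encode the machine description
    on the tape itself rather than passing it in separately.

      # A rough sketch: the "universal" machine is an interpreter that runs
      # the transition table of any particular Turing machine.  The example
      # machine below (a unary incrementer) is invented for illustration.

      def run_turing_machine(rules, tape, state="start", blank="_",
                             max_steps=1000):
          """rules maps (state, symbol) -> (new_state, new_symbol, move),
          where move is -1 (left), 0 (stay), or +1 (right)."""
          cells = dict(enumerate(tape))   # sparse tape: position -> symbol
          head = 0
          for _ in range(max_steps):
              if state == "halt":
                  break
              symbol = cells.get(head, blank)
              state, cells[head], move = rules[(state, symbol)]
              head += move
          return "".join(cells.get(i, blank)
                         for i in range(min(cells), max(cells) + 1))

      # Example machine: scan right over a block of 1's and append one more.
      increment = {
          ("start", "1"): ("start", "1", +1),
          ("start", "_"): ("halt",  "1",  0),
      }

      print(run_turing_machine(increment, "111"))   # prints 1111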
    
    This analysis of the physical realization of computational processes
    was only half the story.  The other half lay in the analysis, also in
    the routine course of engineering work, of problem domains.  This is
    a profound aspect of computer work in particular -- and, the professors
    would argue, all engineering work -- that is almost invisible to
    outsiders.  Computers are general-purpose machines in that they
    can be applied to problems in any sphere.  A system designer might
    work on an accounting application in the morning and an astronomical
    simulation in the afternoon, a workflow system in the evening and
    a natural language interface in the middle of the night.  As the
    problems in these domains are translated into computational terms,
    certain patterns recur, and engineers abstract these patterns into
    the layers of settled technique.
    
    Let me give an example.  I once consulted with a company that was
    trying to automate the design of some moderately complex mechanical
    artifacts.  Each of these artifacts might have a few thousand parts,
    and the designer, in working from requirements to a finished design,
    might have to make several dozen design decisions.  The company
    at that point consisted of nothing but an engineer, a business guy,
    and a few rooms of windowless office space.  I spent several weeks
    sitting at a desk with the engineer and marching through a big stack
    of manuals for the design of this particular category of artifacts.
    I told the engineer that we needed to know the answers to a small
    number of questions, the most important of which is this: in working
    forward from requirements to design, does the designer ever need
    to backtrack?  In other words, is it ever necessary to make a design
    decision that might have to be retracted later, or can the decisions
    be made in such an order that each individual decision can always be
    made with certainty?  If backtracking was required, I told them, their
    lives would get much harder -- not impossible by any means, but a lot
    harder than otherwise.  After a few weeks spent working cases by hand
    by referring to the arcane design rules in the manuals, it became
    clear that backtracking was not only necessary but ubiquitous, and
    that they needed to hire someone who could build a general-purpose
    architecture for the backtracking of parameterized constraints.  It
    wasn't clear that such a person would be available, so I wrote a long
    design document in case they had to assign the task to an ordinary
    programmer.  Soon, however, I found them the employee they needed.
    Now all of them are rich.  I bought a car with my consulting fees.

    Backtracking is an example of a structure that recurs frequently in
    the analysis of problem domains, and we would not be surprised to find
    that outwardly dissimilar domains require similar kinds of backtracking.
    The resulting analogy could then be pursued, and might be illuminating
    all around.
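
    Since the notion of backtracking carries most of the weight in that
    story, a small sketch may help.  The parameters, options, and design
    rules below are invented stand-ins for the rules in those manuals;
    the structure is plain chronological backtracking over parameterized
    constraints, far simpler than the general-purpose architecture the
    company actually needed.

      # A hypothetical sketch of chronological backtracking over design
      # decisions.  The parameters, options, and constraints are invented
      # stand-ins for the design rules in the manuals described above.

      def backtrack(parameters, options, constraints, design=None):
          """Assign a value to each parameter in order, retracting any
          decision that leads to a dead end further down the line."""
          design = design or {}
          if len(design) == len(parameters):
              return design                        # every decision made
          param = parameters[len(design)]          # next decision to make
          for choice in options[param]:
              candidate = {**design, param: choice}
              if all(rule(candidate) for rule in constraints):
                  result = backtrack(parameters, options, constraints,
                                     candidate)
                  if result is not None:
                      return result                # this branch worked out
              # otherwise retract the choice and try the next option
          return None                              # dead end for the caller

      # Invented example: two interacting decisions about a mechanical part.
      parameters = ["material", "fastener"]
      options = {"material": ["aluminum", "steel"],
                 "fastener": ["weld", "rivet"]}
      constraints = [
          # in this toy rule set, aluminum cannot be welded ...
          lambda d: not (d.get("material") == "aluminum"
                         and d.get("fastener") == "weld"),
          # ... and riveting is ruled out by some other requirement, so
          # the earlier aluminum decision itself has to be retracted
          lambda d: d.get("fastener") != "rivet",
      ]

      print(backtrack(parameters, options, constraints))
      # {'material': 'steel', 'fastener': 'weld'}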
    
    For the professors at MIT, then, engineering consists of a dialectical
    engagement between two activities: analyzing the ontology of a domain
    and realizing that domain's decision-making processes in the physical
    world.  ("Realize" here means "make physically real" rather than
    "mentally understand".)  Ideas about computational structure exist for
    the purpose of translating back and forth between these two aspects of
    the engineer's work.  It is a profound conception of engineering.  And
    nothing about it, they argued, is specific to computers.  Computers
    are especially complex computational artifacts, but every engineering
    problem involves a domain and realizes its requirements in the physical
    world -- end of story.
    
    It's an appealing story, and its appeal lies in the way it dissolves
    the concept of the computer, which normally connotes a sharp break
    with the past, into the great historical tradition of engineering
    design.  I do agree that it's an improvement on the standard story in
    the textbooks, the one that's based on mathematics.  Still, I believe
    that both stories are wrong, and that the MIT professors overlook
    one area in which the design process they describe is different for
    computers than for anything else.  That area pertains to language.
    
    Let me start with a weak form of the argument.  Computers, whatever
    other virtues they might have, at least give people something to talk
    about.  People the world over can commiserate about Microsoft Windows,
    and Microsoft virus outbreaks have nearly superseded the weather as a
    topic of smalltalk.  Your friends in China may not be having the same
    weather, but they are having the same virus outbreaks.  Computers do
    not only transcend geographical boundaries; they also transcend the
    boundaries of disciplines.  Physicists and literary critics may not be
    able to discuss their research, but they can discuss their computers.
    Military people and media artists fight with the same software.  This
    kind of universality is just what the professors were talking about
    in their celebrations of the analytical phase of the design process.
    Computers, in this sense, provide a "trading zone" for discussions
    across different disciplinary languages.  Put another way, computers
    are a common layer in a wide variety of activities, and activities of
    many kinds have been reconstructed on top of the platform that the
    most widespread computer standards have provided.  But computers don't
    only give us something to talk about; they also give us something to
    talk *with*.  When we type words or speak into microphones, computers
    can grab our language, transport it, and store it in designated places
    around the world.  So computers are densely bound up with language.
    
    In saying this, however, we have not identified what is distinctive
    about computers.  Computers are distinctive in their relation
    to language, or more precisely in their relation to discourse.
    Discourses about the world -- that is, about people and their lives,
    about the natural world, about business processes, about social
    relationships, and so on -- are inscribed into the workings of
    computers.  To understand what is distinctive about the inscribing of
    discourses into computers in particular, it helps to distinguish among
    three progressively more specific meanings of the idea of inscription.
    
    (1) Shaping.  Sociologists refer to the "social shaping of technology".
    The idea is that newly invented technologies are somewhat random in
    their workings, and typically exist in competing variants, but that
    political conflicts and other social processes operate to single out
    the variants that become widely adopted and taken for granted in the
    world.  One prototype of social shaping is "how the refrigerator got
    its hum": early refrigerators came in both electric and gas varieties,
    but the electric variety (it is said) won the politics of infrastructure
    and regulation.  Social shaping is also found in the cultural forms
    that are given expression in the styling of artifacts, for example
    tailfins in cars, and elsewhere.  Social shaping analyses can be
    given for every kind of technology, and while language is certainly
    part of the process, the analysis of social shaping does not turn on
    any specific features of language.
    
    (2) Roles.  One particular type of social shaping is found in the
    presuppositions that a technology can make about the people who use
    it.  Madeleine Akrich gives examples of both successes and failures
    that turned on the correlation between the designers' understandings
    of users and their properties in the real world.  A French company
    designed a system consisting of a solar cell, a battery, and a lamp,
    intended for use in countries with underdeveloped infrastructures.
    The company's real goal was to ingratiate itself with the government
    of France, and so it gave no extended thought to the actual lives
    of people who would use the system.  Rather than understand the web
    of relationships in which those people lived, it sought to make the
    system foolproof, for example by making it hard to take apart and by
    employing nonstandard components that could not be mixed and matched
    with components available on the local market.  When ordinary
    problems arose, such as the wire between the battery and the lamp
    being too short, the users were unable to jury-rig a fix.  The
    system did not succeed.  By
    contrast, a videocassette player embodied elaborate ideas about the
    user, for example as a person subject to copyright laws, that it was
    largely successful in enforcing.  In each case, the designer, through
    a narrative that was implicit or explicit in the design process, tried
    in effect to enlist the user in a certain social role.  Language by
    its nature is geared to the specification of roles and relationships
    among roles, and to the construction of narratives about the people
    who occupy these roles.  In this sense, the analysis of roles depends
    on more features of language than the analysis of social shaping
    in general.  Even so, nothing here is specific to computers either.
    The analysis of roles applies very well to automobiles, for example.
    
    (3) Grammar.  Where computers are really distinctive is in their
    relationship to the detailed grammar of a language.  The process of
    systems analysis doesn't actually analyze a domain, as the professors
    would have it.  Rather, it analyzes a discourse for *talking about* a
    domain.  To be sure, much of the skill of systems analysis consists of
    the ability to translate this discourse into terms that can be aligned
    with known techniques for realizing a decision-making process in the
    physical world.  But the substance of the work is symbolic.  Computer
    people are ontologists, and their work consists of stretching whatever
    discourse they find upon the ontological grid that is provided by
    their particular design methodology.  Entity-relationship data models
    provide one ontology, object-oriented programming another, pattern
    languages another, the language-action perspective another, and so
    on.  In each case, the "processing" that the systems analyst performs
    upon the domain discourse is quite profound.  The discourse is taken
    apart down to its most primitive elements.  Nouns are gathered in
    one corner, verbs in another corner, and so on, and then the elements
    are cleaned up, transformed in various ways, and reassembled to make
    the elements of the code.  In this way, the structure of ideas in the
    original domain discourse is thoroughly mapped onto the workings of
    the artifact that results.  The artifact will not capture the entire
    meaning of the original discourse, and will systematically distort
    many aspects of the meaning that it does capture, but the point is
    precisely that the relationship between discourse and artifact *is*
    systematic.
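
    To make the point about grammar a little less abstract, here is a
    hypothetical sketch, in the object-oriented style, of the move from
    discourse to code: nouns from an invented "library" discourse become
    classes, verbs become methods, and the relationships asserted in the
    discourse become references among objects.  Real methodologies --
    entity-relationship modeling, object-oriented analysis, and the rest
    -- are far more elaborate, but the grammatical character of the work
    is the same.

      # A hypothetical sketch of the analyst's move from discourse to code.
      # The "library" discourse is invented; nouns become classes, verbs
      # become methods, and relationships become references among objects.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class Book:                                  # noun: "book"
          title: str
          on_loan: bool = False

      @dataclass
      class Borrower:                              # noun: "borrower"
          name: str
          borrowed: List[Book] = field(default_factory=list)

          def borrow(self, book: Book) -> None:    # verb: "to borrow"
              if book.on_loan:
                  raise ValueError(f"{book.title} is already on loan")
              book.on_loan = True
              self.borrowed.append(book)

          def return_book(self, book: Book) -> None:   # verb: "to return"
              book.on_loan = False
              self.borrowed.remove(book)

      # The artifact captures only what the discourse says about loans;
      # everything else that "book" and "borrower" mean is distorted or lost.
      alice = Borrower("Alice")
      alice.borrow(Book("The Sciences of the Artificial"))
      print([b.title for b in alice.borrowed])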
    
    Computers, then, are a distinctive category of engineered artifact
    because of their relationship to language, and computer science is a
    distinct category of engineering because of the work that it performs
    on the domain discourses that it encounters.  So it is striking that
    computer science pays so little explicit attention to discourse as
    such.  Of course, computer science rests heavily on formal language
    theory, which started life as a way of analyzing the structure of
    natural languages.  But there is much more to discourse than the
    formal structures of language.  Indeed, there is much more to grammar
    than the structures that are studied in formalist linguistics.  The
    discourses with which computer science wrestles are part of society.
    They are embedded within social processes, and they are both media
    and objects of social controversies.  Computers are routinely shaped
    through these controversies, and as a result design is conducted at
    least as much in the public sphere as in the design process that
    computer science imagines.  This is the great naivete
    of computer science: by imagining itself to operate on domains rather
    than discourses about domains, it renders itself incapable of seeing
    the social controversies that pull those discourses in contradictory
    directions.  The discourses that computer science seizes upon are
    usually riven by internal tensions, strategic ambiguities, honest
    confusions, philosophical incoherences, and other problematic features
    that come back to subvert both the designer and the user sooner or later.
    
    **
    
    Embedding the Internet.
    
    An institution, as I have said before, is a stable pattern of social
    relationships.  Every institution defines a categorical structure
    (a set of social roles, an ontology and classification system, etc),
    a terrain of action (stereotyped situations, actions, strategies,
    tactics, goals), values, incentives, a reservoir of skill and
    knowledge, and so on.  Information technology is deeply interrelated
    with institutions, not least because the main tradition of computer
    system design was invented for purposes of reifying and changing
    institutional structures.  Institutional study of computing, though
    it has many precursors, begins in earnest with research at UC Irvine
    in the late 1970's and with work by Ken Laudon and others.  These
    researchers were interested in the organizational dynamics of the
    adoption of computing at a time when computing was new and computer
    applications were largely developed in-house.  They worked in an
    intellectual climate that was far too credulous about the magical
    claims of rationalism, and their polemical goal was to reassert the
    political and symbolic dimensions of the social action that takes
    place around nearly any computer system in the real world.
    
    Since that time, the world has changed a great deal in some ways --
    and very little in others.  The Internet brought a significant change
    in the technology of computing, for example the practical possibility
    of interorganizational information systems that had hardly been
    dreamed of before, much less realized.  The Internet also brought major
    changes to the ideological climate around computing.  It was no longer
    necessary to shout to persuade anyone of the significance of computing
    as a topic of social inquiry, even if the social sciences proper have
    not exactly revolutionized themselves to make computing central to
    their work.  And the metaphors associated with computing changed as
    well.  Rationalism is out, and the central authorities that had long
    been associated with mainframe computing have been replaced by an
    ideal of decentralization.  Of course, ideology should be distinguished
    from reality, and in the real world things are not as different as
    they seem.  Computing is still heavily political and symbolic in
    its real, actual uses, and centralized sources of power are far more
    prevalent than the newly fashionable ideology makes out.  The research
    program of the institutionalists of 1979 is still alive and well,
    and twenty years of exponential technological improvements have done
    remarkably little to outdate it.
    
    To examine in detail how the world has changed, though, it is
    necessary to dig our way more completely out from under the landslide
    of ideological change that engulfed social studies of computing
    during the hoopla
    of the 1990's.  It will help to return to the starting-point of
    all serious social inquiry into computing: the evils of technological
    determinism.  Technological determinism -- which can be either
    an openly avowed doctrine or an inadvertent system of assumptions
    -- is usefully divided into two ideas: (1) that the directions of
    technical development are wholly immanent in the technology, and
    are not influenced by society; and (2) that the directions of social
    development are likewise entirely driven by the technology, so that
    you can predict where society is going simply by knowing how the
    technology works.  These two ideas are both wrong, in the sense that
    every serious, detailed case-study finds that they do not describe the
    facts.  Social forces shape technology all the time; in the case of
    the Internet the social shaping can be seen in the implicit assumption
    that the user-community was self-regulating due to the strong
    influence of ARPA, so that strong security measures were not built
    into the architecture of the network.  And the directions of social
    development are not driven by the technology itself, but rather by
    the incentives that particular institutions create to take hold of
    the technology in particular ways.
    
    Technological determinism, as I say, is often an unarticulated pattern
    of thought rather than an explicit doctrine, and so progress in the
    social study of computing requires us to discover and taxonomize the
    forms that technological determinism takes in received ways of thinking.
    I will describe two of these, which we might call discontinuity and
    disembedding.  Discontinuity is the idea that "new technology", often
    defined very vaguely, or "information technology", dated to sometime
    after World War II despite the existence of information technologies
    before that time, has brought about a discontinuous change in history.
    We supposedly live in an "information society" or a "network society"
    or a "new media age" whose rules are driven by the workings of these
    particular technologies.  The problem with these theories is that they
    are wrong.  Of course, new information technologies have participated
    in many significant changes in the world.  But many other things are
    happening at the same time, yet other things are relatively unchanged,
    and the changes that do occur are thoroughly mediated by the structures
    and meanings that were already in place.  It is easy to announce a
    discontinuity that allows us to focus all of our attention on a single
    appealing trend to the exclusion of all else, but doing so trivializes
    a complex reality.
    
    Disembedding supposes new technologies to be a realm of their own that
    is disconnected from the rest of the world.  The concept of cyberspace
    is an example.  In practice, "cyberspace" is used in two different
    ways: either to describe a separate realm within the abstractions
    of the machinery or to announce a discontinuous world-historical
    change along the lines that I described above.  But the metaphor of
    "cyberspace" gets its rhetorical kick from the first idea, that there
    is such a thing as the "online world", as if everything that happened
    online were all the same, and as if everything that happened online
    were unrelated to anything that happened offline.  The reality is
    quite different.  The things that people do "online" are in almost
    every case deeply bound up with the things that they do "offline".
    For example, people rarely adopt fictional identities online that
    disguise their offline identities.  It happens, but statistically
    it is almost imperceptible.  More often people use the "online world"
    to achieve more of the same purposes that they have already adopted
    in the offline world, for example by publicizing the same projects
    that they already publicize in face-to-face conversations and in print.
    The "online world" is not a single place, but is divided among the
    various institutional sites that were already defined in the offline
    world before the Internet came along.  You have your banking sites,
    your hobby sites, your extended family mailing lists, and so on,
    each of them starting from an existing institutional logic and simply
    annexing a corner of the Internet as one more forum to pursue that
    same logic, albeit with whatever amplifications or inflections might
    be implicit in the practicalities of the technology.  People may well
    talk about the Internet as a separate place from the real world, and
    that is an interesting phenomenon, but it is not something that we
    should import into serious social analysis.
    
    This latter analysis points toward one of the tremendous virtues
    of institutional analysis: it encourages us to make distinctions.
    If we were not in the habit of enumerating institutions (education,
    medicine, family, religion, politics, law), then the temptation to
    overgeneralize would be overwhelming.  We would start from the single
    case that interests us most, and we would pretend (1) that the whole
    world worked that way, and in particular (2) that the whole world
    works in one single way, namely that one.  Institutional analysis
    provides us with one of the finest known intellectual vaccines against
    shoddy thinking: try your ideas on several disparate cases and see
    what happens.  
    
    For an example of this effect, consider the famous New Yorker
    cartoon whose caption reads, "On the Internet, nobody knows you're
    a dog".  This cartoon has been reprinted so often that I'm surprised
    that nobody has pointed out a small fact about it: it's not true.
    When people think about anonymity on the Internet, they generally
    have in mind forums like fantasy games where people adopt fictional
    identities, or else loosely woven discussion forums where people
    never learn much about one another as individuals.  But these contexts
    represent a small proportion of the Internet.  For a corrective view,
    consider all of the institutions in which the Internet is embedded.
    In academia, for example, people are generally well-identified to
    one another.  They meet one another at conferences, read one another's
    research, and so on.  They are not interested in being anonymous in
    their online interactions; indeed, the institution creates tremendous
    incentives for promoting one's work, and thus one's identity, at
    every turn.  I find it very striking that academics whose own use
    of the Internet is organized in this way would place so much credence
    in a cartoon that proclaims the very opposite of their own experience.
    
    Institutional analysis also captures important commonalities.  These
    commonalities are analogies of a particular sort: they are not hard
    generalizations of the kind claimed by science, but simply the
    heuristic observation
    that certain analytical frameworks are often useful for describing
    particular cases and for mediating analogies between the cases.
    An investigator can apply a given framework to the analysis of a
    given case, extend the framework by making bits and pieces of novel
    observation that have no analog in the analyses that have been made by
    others, and then advertise that novel result as a heuristic resource
    for the analysis of other cases in the future.  Investigators can read
    one another's analyses, and in this way they can build up a toolkit of
    concepts and precedents, each of which stimulates research by posing
    questions to whatever case is studied next.  Sometimes the analogies
    will pan out and sometimes they will not, but perception will be
    deepened in either case.  That is how interpretive social science
    works, and when it is done well it is a good thing.
    
    So how, from the point of view of institutional theory, does the world
    change with the advent of the Internet?  The answer (or one answer)
    lies in the exact relationship between information technology and
    institutions.  Information technologies are designed, as I have said,
    by starting with language, and historically in the most important
    cases this language has originated with the categorical structures of
    an institution.  Information technology is supposed to be revolutionary
    -- that much is constant in the ideology from the late 1970's to the
    present -- but as the UC Irvine group pointed out, the actual practice
    of computing is anything but.  Indeed the whole purpose of computing
    in actual organizational practices, not just in the majority of cases
    studied but in the very logic of computer system design as it has
    evolved historically, is to conserve existing institutional patterns.
    Now, as studies of print (by Zaret) and the telephone (by Fischer)
    have subsequently shown, media are often taken up to good effect by
    people who are trying to be conservative.  History is not just the
    story of people's intentions.  And this is just the point: history
    is a story of local initiative and global emergence, with linkages
    between them that need to be investigated concretely in each case.
    
    To see the specific role that the Internet has played in this story,
    it helps to start with the concept of layering.  The Internet is a
    layered protocol, and the principle of keeping each layer simple is
    generally credited with the Internet's great success in competition
    with other networking technologies whose initial sponsors were at
    least as powerful in their ability to impose their intentions on the
    world.  But the Internet hardly invented the phenomenon of layering,
    and is best seen as one more expression of a pervasive principle
    of order.  If a complex functionality is broken into layers, then
    the bottom layers can be shared among many purposes.  That helps
    them to propagate, and to establish the economies of scale and network
    effects that make them work as businesses for the many individuals
    in the industrial ecosystem around them.  Information and communication
    technologies lend themselves to layering, but the institutional
    form of that layering is hardly foretold.  There is much to be said
    for universally adopted architectural layers that are nonproprietary
    as well as simple, but then there is also much to be said for central
    coordination in a complex world, especially when the obstacles to
    the propagation of new layers are considerable or the effort needed
    to change established layers is great.  The advantages of layering
    should not be confused with the inevitability of open standards.
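
    The layering principle itself is easy to sketch.  The code below is
    an invented illustration, not a model of any real protocol stack: a
    lower layer that only moves opaque byte strings, and two unrelated
    applications that share it.  The point is that the lower layer
    neither knows nor cares which institution is appropriating it.

      # An invented illustration of layering: a lower layer that only moves
      # opaque bytes, and two unrelated applications that share it.  Not a
      # model of any real protocol stack.

      class Transport:
          """Lower layer: delivers byte strings to named addresses and
          knows nothing about what they mean."""
          def __init__(self):
              self.inboxes = {}
          def send(self, address, payload):
              self.inboxes.setdefault(address, []).append(payload)
          def receive(self, address):
              return self.inboxes.pop(address, [])

      class Mail:
          """One application built on the shared lower layer."""
          def __init__(self, transport):
              self.transport = transport
          def post(self, to, text):
              self.transport.send(to, b"MAIL " + text.encode())

      class Catalog:
          """A different application appropriating the same layer."""
          def __init__(self, transport):
              self.transport = transport
          def publish(self, to, item, price):
              self.transport.send(to, f"ITEM {item} {price}".encode())

      net = Transport()
      Mail(net).post("alice", "conference next week?")
      Catalog(net).publish("alice", "widget", 9.99)
      print(net.receive("alice"))   # both uses travel over the same layer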
    
    The fact is, though, that the principle of layering is operating quite
    strongly in the development of a wide range of digital communication
    technologies, even if the bundling imperatives of companies like
    Microsoft prevent it from operating as forcefully in the area of
    computing technologies proper.  Layers have an important property:
    they are appropriable.  Layers of nonproprietary public networks are
    especially appropriable, and the appropriability of the Internet is
    a miracle in many ways and a curse in others.  Now, the traditional
    institutional studies of computing assumed that there was something
    to fight over, and that the questions being fought over were located
    in the immediate vicinity: in the architecture or configuration of
    computing
    in a given organization.  This analysis works best in the context of
    bespoke organizational applications, especially ones that affect the
    resource allocations and political positions of diverse organizational
    players.  It can also be generalized reasonably well to applications
    that are relatively specific to the workings of a given institutional
    field, for example in the work by Rob Kling (who was part of the UC
    Irvine group) on emerging architectures for scholarly publishing in
    medicine, whose politics involves the contending interests of various
    groups in much the same way as was seen in studies of individual
    organizations twenty years ago.  Standard political issues arise:
    collective action problems among the authors, agency problems that
    allow the authors' supposed representatives on professional society
    staffs to pursue their own interests first instead of those of their
    constituencies, copyright interests that are able to influence the
    legislative system through their greater ability to raise capital
    to invest in lobbying, cultural norms that stigmatize change, and
    the practical difficulties that would-be change-entrepreneurs encounter
    in herding cats toward agreement on new institutional arrangements.
    This is the institutional politics of technology-enabled institutional
    change.  Meanwhile, new technologies mediate the political process in
    new ways, and the cycle closes completely.
    
    In the publishing case, conflict arises because individual parties
    can't just go off and change the world on their own.  Powerful
    organizations still mediate collective decision-making processes,
    and before anything can change, those organizations have to sign off.
    Politics still has a central focus.  But some things are different.
    In the old days, conflict took place prior to a system being built.
    It was the design process that was controversial, and then of course
    controversy continued over configuration and everything else once
    the implementation began.  But the language that was inscribed in
    the technology was the outcome of arguments.  Things are different
    in a world of layers, because the more layers you have, the easier it
    is to create facts on the ground by going off and building something.
    So the history of scholarly publishing isn't just an argument about
    how to build some enormous, costly software edifice in the future,
    but about how to respond to the challenge posed by a system that
    someone went off and built, and that is being used on one basis or
    another outside the existing channels of the institution.
    
    The point is not exactly that the institution is an arbitrary
    authority that has been circumvented, because we should always keep
    in mind the distinction between an institution (which is a distributed
    system of categories and rules) and an organization (which takes its
    place as one region within the broad terrain that the institution
    defines).  Somebody's new back-room invention will only have half a
    chance of threatening anyone if it is aligned to some degree with real
    forces operating within the institution, such as the incentives that
    the researchers experience to publicize their research.  In a sense
    the institution splits: its forces are channeled partly through the
    established institutions and partly through the renegade initiatives
    that someone has built with the aid of the Internet.  Then the
    politics starts, itself structured and organized within the framework
    of the institution -- after all, institutions are best understood not
    as zones of mindless consensus but as agreed-upon mechanisms for the
    conduct of disputes over second-order issues.
    
    The Internet, from this perspective, does not so much replace the
    institutions as introduce new elements into the ongoing dynamics of
    controversy within them.  It affords certain kinds of decentralized
    initiative, but even with the Internet the world is rarely so
    compartmentalized that initiatives in one area fail to interact at
    all with existing initiatives in other areas.  Rather, the possibility
    of creating new technologies on top of open network layers simply
    represents a wider range of potential tactics within an existing and
    densely organized logic of controversy.  Creating new technologies
    does create new facts on the ground, but it does not create new
    institutions.  The gap between technologies and institutions is
    quite large, and much always needs to be settled aside from the system
    architecture.
    
    But this is where the impact of the Internet is greatest, in the
    nature of the gap between technologies and institutions.  In the old
    days before the Internet, the very nature of system design ensured
    that technologies and institutions mapped together quite closely.
    That was what systems analysts were taught to do: transcribe the
    institutional orders that they found directly onto the applications
    they built.  One reason for this was organizational: an application
    is going to fit much better into an organization if it is aligned
    with the existing ecology of practices.  Another reason is pure
    conservatism: organizational practices evolve more profoundly
    than anyone can explain or rationalize, and it is usually better to
    transcribe them with a certain willful superficiality than to follow
    the advice of the "reengineering" revolutionaries of the 1980's
    by trashing history and starting over.  And a final reason pertains
    to social relations: computer systems by their nature entrain their
    users to the categorical structures that they embody, for example by
    requiring the users to swipe their magstripe cards every time they
    undertake particular sorts of transactions, and a computer system
    whose categories map directly onto the categories of the institution
    will thereby, other things being equal, make people easier to control.
    And this last point is key: the Internet, by providing an extremely
    generic and appropriable layer that a huge variety of other systems
    can easily be built upon, disrupts this classic pattern of social
    control.  The gap between technology and institution yawns, and into
    the gap pour a wide variety of institutional entrepreneurs, all trying
    to create their own facts on the ground.
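    
    To make the point about appropriability concrete, here is a purely
    illustrative sketch (the service, its port number, and its one-line
    "protocol" are invented for this example, not anything described
    above) of how little it takes to stand up a brand-new service on
    top of the Internet's generic transport layer, with nobody's
    permission required:

        # Toy line-oriented service built directly on TCP (Python 3).
        # The transport layer neither knows nor cares what the
        # application-level lines mean.
        import socketserver

        class TinyService(socketserver.StreamRequestHandler):
            def handle(self):
                # Read one line from the client and echo it back with
                # a prefix, a stand-in for any new protocol.
                line = self.rfile.readline().decode("utf-8").strip()
                reply = "you said: " + line + "\n"
                self.wfile.write(reply.encode("utf-8"))

        if __name__ == "__main__":
            # 9099 is an arbitrary unprivileged port chosen for the sketch.
            with socketserver.TCPServer(("", 9099), TinyService) as server:
                server.serve_forever()

    Writing such a thing is the easy part; whether it threatens anyone
    depends, as argued above, on how well it aligns with forces already
    at work in the institution.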
    
    Fortunately for the powers that be, the great majority of these
    would-be entrepreneurs have little understanding of the opportunity
    that has been handed to them.  A common problem is to overestimate
    what can be accomplished by writing code.  Programmers build
    exciting new systems, and then they watch in dismay as the existing
    institutions stifle, short-circuit, or coopt them.  Perhaps we
    are going through that kind of transition now, as the generation
    that thought it was making a revolution by "sharing" musicians'
    work on Napster proves to have been incapable of imagining the
    fury with which the empire could strike back.  The understanding
    is perhaps now dawning
    that architecture, while certainly a variety of politics, is by
    no means a substitute for politics, and that institutions are made
    most fundamentally by organizing networks of people around new
    institutional models, and not by the sort of spontaneous explosion
    by which the Web was adopted as a new service layer by the millions.
    
    Precisely through its explosive success, the Internet has disserved
    us in a way, teaching us a false model of institutional change.  The
    appropriability of layered public networks does indeed mark a new
    day in the dynamics of institutions, but it is not a revolution.
    It does not
    convert established institutions into the objects of ridicule that
    the founders of Wired magazine had claimed them to be.  Now that the
    proclaimers of ridicule have become objects of ridicule themselves, it
    is time to regroup and reinvent social practices of all sorts in the
    wired world.  Here's what we've learned: things change by staying
    the same, only more so.  The organized social conflict that is endemic
    to democracy has not gone away, and the lessons of democracy are as
    clear as they ever were.  The technology offers new options, opens
    new spaces, and affords new dynamics.  But it doesn't change the world.
    We have to do that ourselves.
    
    **
    
    Public culture and network culture.
    
    Let us consider a mystery.  In the United States, it is customary for
    conference organizers to establish Web pages for their conferences.
    It makes sense, right?  You advertise the conference, attract people
    who might not know about your community already, inform prospective
    attendees, ease registration hassles, and so on.  Yet in Europe, this
    custom is not well-developed.  I try to be global on this list, and
    I want to include URL's for sites that advertise European conferences.
    I do get e-mail messages from Europeans advertising conferences,
    and these messages often include Word documents.  Yet rarely do the
    messages or attachments contain working URL's, and most of the URL's
    that they do contain are useless: either pages for the sponsoring
    organization with no substantive information about the conference, or
    PDF snapshots of print newsletters with no way to link to individual
    items, or pages for conference attendees that presuppose that the
    reader is already perfectly well-informed about the purpose of the
    conference.  Of course there are exceptions, with Americans in the
    humanities being less likely to advertise on the Web and Europeans in
    technical areas being more likely.  Still, the contrast is striking.
    
    What accounts for it?  One answer is infrastructural: Americans are
    accustomed to cheap telephone service, and especially to fixed-rate
    phone charges, whereas most Europeans still suffer with expensive
    phone service and usage-based charges that turn a Web site into
    an unpredictable and uncontrollable expense -- you pay a measurable
    amount for every page you serve.
    
    But I'd like to suggest another reason: a cultural difference between
    Europe and the United States.  To understand this difference, we need
    a conceptual distinction.  Let us distinguish two aspects of culture:
    public culture and network culture.  These are not two distinct types
    of cultures, but rather two dimensions of the same culture.  Public
    culture addresses a public.  It is consciously intended to be open
    to everyone.  It operates through newspapers, magazines, television,
    radio, public meetings, and the Web.  Network culture, by contrast,
    operates within a social network.  Individuals working on a certain
    topic might reach out to other individuals working on the same topic.
    They might exchange private messages among themselves as individuals,
    organize formal or informal meetings among small groups, or start
    an association, newsletter, or Internet mailing list.  Whereas public
    culture typically comes with an ideology of openness, whether from
    democratic values or the habits of publicity and marketing, network
    culture need not be driven by an ideology of exclusion.  It simply
    represents a lack of inclusion.  If you already know everybody that
    you need to know, why bother advertising?
    
    It is obvious that public culture and network culture are compatible.
    Both are always present to some degree, and most people switch
    back and forth between them for different purposes.  The question
    is about their relative proportions and the ways they fit together.
    My hypothesis is that, as a broad generalization, the United States
    places a greater emphasis on public culture and Europe places a
    greater emphasis on network culture.  There are many reasons for this.
    Europe sent all of its religious fanatics to America, whose culture
    consequently has more of an evangelical streak.  Americans have always
    thought of their country as a big place, whereas Europeans are only
    now expanding their historically local network culture to continental
    scale.  Europe is older, and in places like Paris social networks
    have been developed for many hundreds of years.  If you go to San
    Diego, social networks are relatively thin, for the simple reason
    that most of the population didn't even arrive in town until the
    last few decades.  Life is short and social relationships take work,
    so San Diego is less tightly woven than Paris.  Of course, ambitious
    people in Paris spend as much time building networks as ambitious
    people in San Diego.  The difference is that up-and-coming Parisians
    are breaking into existing networks, whereas go-getters in San Diego
    are more likely to introduce people who don't yet know one another.
    First, though, they have to find these people, and that is partly
    what public culture is for.  In Paris, by contrast, everyone you need
    to know is at most a couple of network links away.
    
    Another difference is institutional.  Consider one area of social
    activity: the constitution of research communities.  In the United
    States, research communities are more self-organizing than in Europe.
    European funding organizations, mainly in governments, take a great
    deal of initiative in organizing these communities themselves, or
    more precisely in creating the social structures through which these
    communities organize themselves.  Research communities have official
    status -- their workings and boundaries are defined by the agency
    that funds them.  In the United States, by contrast, research funding
    agencies tend to deal with researchers as individuals.  American
    research communities are more likely to get funding from several
    sources, and the funding agencies are less likely to constitute and
    bound the research communities whose work they support.  American
    research conferences are more likely to be defined as "people doing
    research on topic X"; European research conferences are more likely
    to be defined as "people being funded by agency X".  (DARPA workshops
    are an exception.)  In Europe, the people who actually do the research
    work are likely to be employees; in the United States they are likely
    to be graduate students.
    
    All of this has consequences for the roles of public culture and
    network culture in the two societies.  European research publications
    are more likely to take the form of reports addressed to funding
    agencies; American research publications are more likely to take the
    form of refereed journal articles.  For all these reasons, Europeans
    have less reason than Americans to publicize their meetings, and
    they have fewer incentives to publicize the results of their research.
    Europeans get ahead by the networking they do before the research
    begins; Americans get ahead by the networking they do continuously
    throughout the process.
    
    I happen to think that the American model is better, but that is not
    the present point.  The real point is more subtle.  The ruling myths
    of the Internet began life in the context of American culture, and
    as a result those myths tend to conflate public culture and network
    culture, as if they were the same.  The Internet is held to create a
    networked society, and it is also held to create openness.  But those
    are different outcomes, logically independent of one another.  Once
    we see the analytical distinction between them, we can see the reality
    of the Internet more clearly.  I have always been struck, for example,
    by how few American professors have Web sites.  Some professors make a
    personal point of building elaborate Web sites that make their papers
    and projects available to everyone.  But the great majority, in my
    estimation, are happy to let their department put up a generic Web
    site with names and phone numbers.  Many of those generic departmental
    Web sites are professionally designed, with graphics and typography
    and so on, but they are a completely different species of document
    than the geekly Web sites that tell you what you actually want to
    know.  In fact, these two types of Web site orient to two different
    conceptions of the public: the professional community that wants
    details of your publications and the general public that supposedly
    wants a blurry sense of prestige.
    
    I would argue that these public-relations types of personal Web sites
    simply fill a vacuum that is created by network culture.  The fact
    is that the institutions of research create tremendous incentives
    to build professional networks, as well as tremendous tools (such
    as libraries and footnotes) to support the work of building them.
    And once you have built a professional network, there is little
    reason to publicize
    your work.  My experience is that the most ambitious, insider-oriented
    types of researchers are unlikely to maintain complex personal Web
    sites.  They, like the Europeans I described above, already know
    everyone they need to know, and if they need to know more people
    then they quietly go out and track those people down as individuals.
    Often they operate completely below the public radar.
    
    My point here is not that the Internet is insignificant, but that
    it can be significant in different ways.  The Internet amplifies
    both public culture and network culture.  The Europeans who organize
    conferences without Web sites are not technophobes; they use the
    Internet as intensively as anyone, except that they transmit complex
    conference-planning documents as attachments rather than as Web
    pages.  I personally find this infuriating, given all of the technical
    hassles that attachments bring.  But it's the culture over there.
    
    Now, it used to be that democratic societies took for granted a strong
    connection between public culture and the overall health of a society.
    In recent years, however, fashions have shifted.  Sociologists have
    discovered the importance of social networks, to the point where
    network culture -- also known as social capital -- has become the
    main focus of reformers' attentions.  In part this is because public
    culture has proven so resistant to reform.  The inherent economics of
    the mass media tend overwhelmingly to concentrate the media industries
    and the social power that they confer.  Social networks are by nature
    relatively decentralized.  Individuals might be centrally networked in
    the sense that they know people who do not know one another, but we do
    not view the resulting advantages as unfair or irremediable.  What is
    more, we increasingly view network culture as a substitute for public
    culture.  After all, people who build networks routinely build small
    "publics", for example the internal forums of professions, movements,
    associations, and organizations.  One of the Internet's great strengths 
    is its ability to support medium-sized groups of people.  For purposes
    of gathering millions of people at a time, the Internet is just another
    mass medium.  But for purposes of gathering hundreds of people at a
    time, the Internet is a vast improvement over the technologies that
    existed before.  The Internet does not replace symposia, newsletters,
    bar-hopping, leafletting, and other medium-sized associational forms.
    In fact it is extremely good at both supporting and complementing them. 
    
    Even so, we do have value choices to make.  Network culture may not
    be intended to exclude people, at least not necessarily, but network
    culture unleavened by public culture can be exclusionary in effect.
    It renders institutions opaque and creates a sense of unwritten rules.
    We needn't identify public culture with the mass media any more than
    we identify it with the Web.  Indeed, the library might be the best
    metaphor for public culture of all.  Let's keep the distinctions
    in mind as we go along, and not assume that the Internet will deliver
    all good things automatically.
    
    end
    


