[RRE]Peer-to-Peer and the Promise of Internet Equality

From: Phil Agre (pagreat_private)
Date: Thu Nov 15 2001 - 11:24:18 PST


    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    This message was forwarded through the Red Rock Eater News Service (RRE).
    You are welcome to send the message along to others but please do not use
    the "redirect" option.  For information about RRE, including instructions
    for (un)subscribing, see http://dlis.gseis.ucla.edu/people/pagre/rre.html
    =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
    
    
      
      Peer-to-Peer and the Promise of Internet Equality
    
      Philip E. Agre
      Department of Information Studies
      University of California, Los Angeles
      Los Angeles, California  90095-1520
      USA
    
      pagreat_private
      http://dlis.gseis.ucla.edu/pagre/
    
    
      This is a draft.  You are welcome to forward it, but please do not
      quote from it.  Comments appreciated.
    
      Version of 15 November 2001.
      2500 words.
    
    
    Technologies often come wrapped in stories about politics.  In the
    case of peer-to-peer technologies on the Internet, the standard story
    goes like this:
    
      Once the world was centralized under the control of a top-down
      hierarchy.  Then came the Internet, a decentralized network
      architecture that lets everyone build their own services
      and prevents anyone from regulating them.  Peer-to-peer (P2P)
      technologies deliver on the Internet's promise, and they will
      lead us to a decentralized world of freedom and equality.
    
    I propose to analyze this story.
    
    An initial problem is the term "peer-to-peer".  As the story suggests,
    it can be hard to distinguish P2P computing from the Internet in
    general.  For example, why isn't e-mail peer-to-peer?  And if SETI@home
    (Anderson 2001) is an example of P2P computing despite its centralized
    structure (a central server that interacts with numerous personal
    computers that contribute their spare cycles), then why isn't the
    Web also P2P, given that its client-server structure is much less
    centralized?  Shirky (2001) proposes that P2P systems form a distinct
    category because they use mechanisms other than the domain name system
    (DNS) to identify and track their participants.  Yet this definition
    seems arbitrary.  Whatever the ills of the DNS, surely there exist
    other potential tracking mechanisms that are even worse.
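
    Shirky's criterion is easy to make concrete.  The following sketch (a
    toy illustration in Python; the handle, address, and helper functions
    are invented for this example) contrasts DNS-based identification,
    which resolves a stable global name, with a Napster-style registry
    that tracks peers under the application's own ephemeral identifiers.

      import socket

      # DNS-based identification: a global, hierarchical, centrally
      # rooted name is resolved by shared infrastructure.
      web_server_ip = socket.gethostbyname("example.com")

      # P2P-style identification: the application keeps its own registry,
      # updated whenever a peer connects from a (possibly dynamic) address.
      peer_registry = {}

      def peer_login(handle, current_ip, shared_files):
          """Record where a peer can be reached right now."""
          peer_registry[handle] = {"ip": current_ip,
                                   "files": set(shared_files)}

      def find_peer_with(filename):
          """Return some peer currently sharing the named file, if any."""
          for handle, info in peer_registry.items():
              if filename in info["files"]:
                  return handle, info["ip"]
          return None

      peer_login("musicfan42", "192.0.2.17", ["song.mp3"])
      print(find_peer_with("song.mp3"))   # ('musicfan42', '192.0.2.17')

    Nothing in the second half touches the DNS: the registry belongs to
    the application alone, which is Shirky's point.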
    
    In reality, I suggest, P2P does not name a distinct category of
    distributed computing systems.  Rather, it signifies a certain
    project: pushing the design of all computing systems toward a fully
    decentralized ideal.  Yet much remains unclear.  To begin with,
    observe that the standard story shifts between two kinds of "center",
    architectural and institutional.  By "architecture" I mean the
    concepts that are designed into a technology.  Examples include
    the von Neumann serial processor, the client-server distinction,
    and the relational database.  By "institution" I mean the concepts
    that organize language, rules, job titles, and other social categories
    in a given sector of society.  Examples include the concepts of
    "patient", "case", and "disease" in the medical system.  Architectures
    and institutions are often related, and systems analysts have
    developed sophisticated methods of translating institutional concepts
    into system architectures.  The standard story suggests, very simply,
    that decentralized architectures will bring about decentralized
    institutions (see, e.g., Gilder 1992).
    
    Yet this hardly follows.  Architectures and institutions are often
    shaped to fit one another, but they are still different sorts of
    things.  As a means of evaluating the prospects for P2P, therefore,
    I will briefly present four theories of the relation between
    architectures and institutions.  Each theory will be associated with
    a particular theorist of institutions.
    
    1. Veblen
    
    Thorstein Veblen wrote during the Progressive Era, when tremendous
    numbers of professional societies were being founded, and he foresaw
    a society that was organized rationally by engineers rather than
    through the speculative chaos of the market.  Veblen was impressed
    by a profession's ability to pool knowledge among its members, and
    he emphasized the collective learning process through which industry
    grows.  (On Veblen's theory, see Hodgson (1999).)
    
    Veblen's theory resembles some of the claims made for the Internet:
    open information, universally shared, giving rise to a group mind.
    In fact, the rise of professions was facilitated by the communications
    and transportation infrastructures of the 19th century.  As Chandler
    (1977) observes, the first modern professional associations were
    organized by railroad employees, who used the new telegraph and
    the railroad infrastructures, as well as printed newsletters and
    organized conferences, to build new institutions of knowledge-sharing.
    The infrastructures, in turn, had gone through a tumultuous history
    of decentralized innovation and increasing centralization under the
    control of large firms.
    
    2. Hayek
    
    Friedrich Hayek was an Austrian economist who withdrew from technical
    research to provide intellectual ammunition for the fight against
    communism.  His most famous argument is that no centralized authority
    could possibly synthesize all of the knowledge that the participants
    in a complex market use in making their allocation decisions (Hayek
    1963).  This emphasis on knowledge was outside the mainstream of
    economic thought at the time, and it remains largely so even today.
    But Hayek was not an anarchist.  He argued that a market society
    requires an institutional substrate that upholds principles such
    as the rule of law (Hayek 1960).  A productive tension is evident
    in Hayek's work: he is attracted to notions of self-organization
    that seem like the opposite of governmental control, but he is also
    aware that self-organization presupposes institutions generally and
    government institutions in particular.
    
    Hayek's work, like Veblen's, challenges us to understand what a
    "center" is.  In some cases, intuitions are clear.  French society
    is highly centralized.  Switzerland has been remarkably decentralized
    for centuries.  And the federal systems of the United States and
    Germany lie somewhere in the middle.  In each case, we reckon degrees
    of centralization by the constitutional distribution of political
    authority.
    
    But centers and centralization can be understood in other ways.
    Observe that institutions, like architectures, are typically organized
    in layers.  Legislatures and courts are institutions that create
    other institutions, namely laws.  Contract law is an institution,
    but then so are individual contracts.  Internet protocols, likewise,
    are organized in layers, each of which creates ground rules for the
    ones above it.  Do the more basic layers count as "centers"?  Yes,
    if they must be administered by a centralized authority.  Yes, if
    global coordination is required to change them.  No, if they arise
    in a locality and propagate throughout the population.  At least
    sometimes, then, centralization on one layer is a precondition for
    decentralization on the layers above it.  Complex market systems,
    for example, need their underlying infrastructures and institutions
    to be coordinated and standardized.  Yet this kind of uniformity has
    generally been imposed by powerful governments and monopolies.  The
    conditions under which decentralized systems can emerge, therefore,
    are complicated.
    
    Consider the case of the Internet.  Despite its reputation as the very
    model of decentralization, the institutions and architecture of the
    Internet nonetheless have many centralized aspects, including the DNS,
    the IETF, and Microsoft's control over the desktop software market.
    But let us consider one aspect of the Internet in particular: the
    end-to-end principle (Saltzer, Reed, and Clark 1984), which moves
    complexity out of the network itself and into the hosts that use it.
    In one sense, this principle is nothing but layering.  Each layer in
    the Internet protocol stack is kept simple, and new functionalities
    are assigned to newly created layers atop the old ones.  In another
    sense, however, the end-to-end principle shifts complexity away from
    the centralized expertise of network engineers, placing it instead
    on the desktops of end-users -- the very people who are least able
    to manage it.  Much of the Internet's history, consequently, has
    consisted of attempts to reshuffle this complexity, moving it away
    from end-users and into service providers, Web servers, network
    administrators, and so on.
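
    The principle can be illustrated in miniature.  In the following
    sketch (a toy model of my own, not any real protocol), the "network"
    does nothing but forward packets unreliably; the sequence numbers,
    acknowledgments, and retransmissions (that is, the complexity) live
    entirely in the endpoints.

      import random

      def lossy_network(packet):
          """The network's only job: forward the packet, or drop it."""
          return packet if random.random() > 0.3 else None

      def transfer(messages):
          """Endpoint logic: stop-and-wait retransmission over the
          lossy network."""
          received = []                    # the receiving endpoint's buffer
          for seq, msg in enumerate(messages):
              acked = False
              while not acked:
                  packet = lossy_network((seq, msg))    # data may be dropped
                  if packet is not None:
                      if len(received) == seq:          # deliver exactly once
                          received.append(packet[1])
                      acked = lossy_network(seq) == seq # ack may be dropped
          return received

      print(transfer(["end", "to", "end"]))   # ['end', 'to', 'end']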
    
    As this history makes clear, one layer of an architecture can
    have different properties from the layers above and below it.
    A decentralized network can support centralized services, or vice
    versa.  Thus, for example, the asymmetrical client-server architecture
    of the Web sits atop the symmetrical architecture of the Internet.
    The peer-to-peer movement promises to move the entire network toward
    a decentralized ideal.  In doing so, it must confront various types
    of centralization that are inherent in certain applications.  For
    example, if users contend for access to the same physical resource,
    some kind of global lock will be needed.  Most markets have this
    property.  Some mechanisms do exist for sharing a scarce resource
    without an architecturally centralized lock; examples include the
    backoff algorithms with which Ethernet resolves collisions and with
    which TCP controls congestion (a toy version is sketched below).
    Research on distributed services has long sought to
    replicate documents while avoiding the danger of divergent changes
    (e.g., Dewan 1999).  So the obvious solution of complete architectural
    centralization is hardly the only option.  Even so, it is a profound
    question how thoroughly the functionality of a market mechanism
    like NASDAQ, eBay, or SABRE can be distributed to buyer/seller peers.
    It is also unclear how useful such a radical decentralization would
    be to the market participants.
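
    To make the lock-free option concrete, here is a small simulation (my
    own simplification, in the spirit of Ethernet's binary exponential
    backoff rather than the actual IEEE 802.3 algorithm).  Stations share
    a medium with no central arbiter; after every collision, each station
    independently doubles the window from which it draws a random waiting
    time.

      import random

      def backoff_delay(collisions, max_exponent=10):
          """Pick a random wait after this many successive collisions."""
          window = 2 ** min(collisions, max_exponent)  # doubles per collision
          return random.randrange(window)              # slots to stay silent

      def contend(num_stations, max_slots=1000):
          """Simulate stations retrying until each sends alone in a slot."""
          waits = {s: 0 for s in range(num_stations)}       # slots to wait
          collisions = {s: 0 for s in range(num_stations)}  # per station
          done = set()
          for _ in range(max_slots):
              ready = [s for s in waits
                       if s not in done and waits[s] == 0]
              if len(ready) == 1:             # a sole sender succeeds
                  done.add(ready[0])
              elif len(ready) > 1:            # collision: everyone backs off
                  for s in ready:
                      collisions[s] += 1
                      waits[s] = backoff_delay(collisions[s])
              for s in waits:                 # a slot passes for the waiters
                  if waits[s] > 0 and s not in ready:
                      waits[s] -= 1
              if len(done) == num_stations:
                  break
          return done

      print(contend(4))   # typically {0, 1, 2, 3}: all stations get through

    The point is not the details but the absence of any centralized lock:
    coordination emerges from the stations' independent, randomized
    retreats.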
    
    3. North
    
    For Douglass North (1990: 3), an institution can be understood by
    analogy to the rules of a game.  The rules of baseball, for example,
    define such categories as "pitcher", "strike", and "infield fly".
    An institution, in this sense, is a categorical structure that allows
    people to coordinate their activities.  The institution defines
    social roles and creates a terrain upon which individuals navigate.
    In particular, the institution creates incentives, such as the profit
    motive, that tend to channel participants' actions.  Taking markets
    as his main example, North suggests that the rules of the game change
    only slowly and incrementally.  (For an application of North's theory
    to the DNS controversy, see Mueller (2000).)
    
    As an example, consider the institutional context in which the
    ARPANET and Internet arose (Abbate 1999).  In their attempt to create
    a decentralized computer network, the program managers at ARPA had
    an important advantage: they controlled the finances for a substantial
    research community.  ARPA made the rules, and they consciously created
    incentives that would promote their goals.  They compelled their
    contractors to use the ARPANET (1999: 46, 50, 55), and they drove the
    adoption of electronic mail by methods such as being accessible to
    their contractors only through that medium (1999: 107-110).  Later on,
    they funded implementation of TCP/IP on many vendors' machines (1999:
    143) and imposed a death march on their contractors for the transition
    to TCP (1999: 140-142).
    
    This centralized institutional environment had subtle consequences
    for the decentralized architecture it produced.  Because of ARPA's
    authority, everyone took for granted that the ARPANET's user community
    was self-regulating.  This feature of the institution is reflected in
    the poor security of the Internet's electronic mail standards.  When
    the Internet became a public network, the old assumptions no longer
    applied.  Lasting security problems were the result.
    
    Another example is found in Orlikowski's (1993) celebrated study of
    Lotus Notes in a large consulting firm.  Notes may not be a wholly
    peer-to-peer architecture, but its success was due largely to its
    replication strategy, which is crucial for distributed document-
    sharing.  The CIO of this firm assumed that he could ensure widespread
    adoption simply by making the software available on employees'
    computers.  That did not happen.  Orlikowski identified two problems:
    (1) most employees were familiar with traditional tools such as
    electronic mail, so they used Notes only for those familiar purposes,
    and (2) most of the consultants were promoted on the basis of the
    distinctive practices that they had built as individuals, so they had
    few incentives to share their knowledge.  Only when the company began
    evaluating employees on their use of Notes, therefore, did adoption
    become widespread.  Once again, centralized authority was required to
    institutionalize decentralized knowledge-sharing.
    
    4. Commons
    
    John Commons was a Progressive Era economist who eventually trained
    many of the leaders of the New Deal.  Guided by his union background
    and the democratic ideals of his time, Commons (1934) viewed every
    social institution as a set of working rules defined by collective
    bargaining.  After all, every institution defines a set of social
    roles (doctor-patient, teacher-student, landlord-tenant, and so on),
    and each social role defines a community (for example, the community
    of doctors and the community of patients).  Commons (1924) argues that
    each group develops its own culture and practices, which eventually
    become codified in law.
    
    Commons' theory helps to explain the development of technical
    architectures.  Consider, for example, the classic institutional
    analysis by Danziger, Dutton, Kling, and Kraemer (1982) of the
    development of computer systems in American local governments.
    These authors observed that computer architectures are rarely neutral.
    Whose functionalities should the system support?  Who should gain
    information about whom?  The design process, therefore, is inevitably
    political.  Based on survey and interview studies, the authors asked
    which factors determined the interests a new computer system would end
    up serving.  They did find that every affected group had some input
    to the decision-making process.  But they concluded that new systems
    ended up serving the interests of whichever group (for example, the
    mayor, the financial department, or the police) already held power.
    
    Commons would view this situation as pathological.  He disagreed with
    Marx's vision of history as the inevitable victory of one social class
    over all others, and preferred a vision of collective bargaining among
    evenly matched groups.  For this reason, he might have found more hope
    in the current war over music distribution.  The unexpected growth
    of Napster set off an institutional revolution in the music industry,
    and Napster's subsequent decline under legal attack should provoke
    reflection about that revolution's nature.  Napster had a fatal
    flaw: although it provided a viable architecture for music sharing,
    it did not provide a viable institution for allowing musicians to
    make a living.  Some bands can make money from live performance or
    merchandise, but most bands -- if they make money at all -- still rely
    on record sales.
    
    The collective bargaining process that Napster has set in motion,
    therefore, has at least three parties: musicians, fans, and record
    companies.  As in every negotiation, each party has its own political
    problems -- comprehending the situation, getting organized, adopting
    a common position, coordinating its actions, delegating authority
    to a trusted representative, and so on.  The negotiation takes
    the form of an "ecology of games" (Dutton 1992): conflicts in many
    venues, including legislatures, courts, and standards organizations.
    What is needed, clearly, is an alternative institutional model that
    connects musicians and fans in new ways, ideally without the market
    dysfunctions that lead to abusive record industry contracts.
    
    This alternative institution for music distribution will presumably
    depend on new technical architectures.  Yet, for the moment, most
    technical development is aimed at protecting a Napster-like model
    from the legal assaults of the record companies.  It remains to be
    seen whether a thoroughly peer-to-peer "sharing" architecture can
    avoid being shut down, particularly if monopolies such as Microsoft
    change their own architectures to suit the record companies' needs.
    A more important question, though, is whether the drive toward fully
    decentralized "sharing" bears any useful relationship to the real
    problem of connecting musicians and fans in an economically viable way.
    
    What has been learned?  Decentralized institutions do not imply
    decentralized architectures, or vice versa.  Indeed, the opposite
    is arguably just as often the case.  The drive toward decentralized
    architectures need not serve the political purpose of decentralizing
    society, and it can even be destructive.  Architectures and
    institutions will inevitably coevolve, and to the extent they can be
    designed, they should be designed together.  The peer-to-peer movement
    understands that architecture is politics, but it too often assumes
    that architecture is a substitute for politics.  Radically improved
    information and communication technologies do open new possibilities
    for institutional change.  To explore those possibilities, though,
    technologists will need better ideas about institutions.
    
    References
    
    Janet Abbate, Inventing the Internet, Cambridge: MIT Press, 1999.
    
    David Anderson, SETI@home, in Andy Oram, ed, Peer-to-Peer: Harnessing
    the Power of Disruptive Technologies, Sebastopol, CA: O'Reilly, 2001.
    
    Alfred D. Chandler, Jr., The Visible Hand: The Managerial Revolution
    in American Business, Cambridge: Harvard University Press, 1977.
    
    John R. Commons, Legal Foundations of Capitalism, New York: Macmillan,
    1924.
    
    John R. Commons, Institutional Economics: Its Place in Political
    Economy, Madison: University of Wisconsin Press, 1934.
    
    James N. Danziger, William H. Dutton, Rob Kling, and Kenneth
    L. Kraemer, Computers and Politics: High Technology in American Local
    Governments, New York: Columbia University Press, 1982.
    
    Prasun Dewan, Architectures for collaborative applications, in Michel
    Beaudouin-Lafon, ed, Computer Supported Co-Operative Work, Chichester,
    UK: Wiley, 1999.
    
    William H. Dutton, The ecology of games shaping communications policy,
    Communication Theory 2(4), 1992, pages 303-328.
    
    George Gilder, Life after Television, New York: Norton, 1992.
    
    Friedrich A. Hayek, The Constitution of Liberty, Chicago: University
    of Chicago Press, 1960.
    
    Friedrich A. Hayek, Individualism and Economic Order, Chicago:
    University of Chicago Press, 1963.
    
    Geoffrey M. Hodgson, Economics and Utopia: Why the Learning Economy Is
    Not the End of History, London: Routledge, 1999.
    
    Milton Mueller, Technology and institutional innovation: Internet
    domain names, International Journal of Communications Law and Policy
    5(1), 2000.  Available on the Web at
    <http://www.IJCLP.org/5_2000/ijclp_webdoc_1_5_2000.html>.
    
    Douglass C. North, Institutions, Institutional Change, and Economic
    Performance, Cambridge: Cambridge University Press, 1990.
    
    Wanda J. Orlikowski, Learning from Notes: Organizational issues in
    groupware implementation, The Information Society 9(3), 1993, pages
    237-250.
    
    Jerome H. Saltzer, David P. Reed, and David D. Clark, End-to-end
    arguments in system design, ACM Transactions on Computer Systems 2(4),
    1984, pages 277-288.
    
    Clay Shirky, Listening to Napster, in Andy Oram, ed, Peer-to-Peer:
    Harnessing the Power of Disruptive Technologies, Sebastopol, CA:
    O'Reilly, 2001.
    
    end
    


