[RRE]notes and recommendations

From: Phil Agre (pagreat_private)
Date: Sun Jan 27 2002 - 22:54:17 PST


    Some notes about distributed objects, technology-driven change, and
    the diversity of knowledge.
    
    **
    
    But first, a correction.  In my comments on Enron earlier today,
    I said that marketplaces are public goods.  That's usually not true,
    and having fouled up the point, let me take a moment to explain
    it.  According to economics, a public good is a good that has two
    properties: it is nonexcludable, meaning that once it exists it's
    impossible to keep people from using it, and nonrival, meaning that
    one person's use of it does not diminish anyone else's.  Pure
    public goods are rare, but
    many things come close.  National defense is a common example: once
    the country has been protected, everyone gets the benefit of the
    protection.  Ideas are nearly public goods: you can keep them secret,
    but once they're out there everyone can use them.  The best book
    about public goods for noneconomists is Inge Kaul, Isabelle Grunberg,
    Marc Stern, eds, Global Public Goods: International Cooperation in
    the 21st Century, Oxford: Oxford University Press, 1999.  The concept
    of a public good has several failings, but I'll ignore them for now.
    
    Public goods are important because markets can't be expected to
    provide them.  Markets only provide things because someone can make a
    profit doing so, and if you can't exclude people from using something
    then you have no way of making a profit from it.  For this reason, it
    is often proposed that public goods be provided from tax money.  In
    fact many people prejudge the issue by conflating the economic concept
    with the idea of something provided by the government.  This doesn't
    automatically follow, however, for all kinds of reasons.  Some public
    goods are actually bad things -- we're talking about "goods" in the
    economic sense, not the moral sense, so that ethnic stereotypes are
    public goods that the government shouldn't be providing.  Also, the
    government should only provide things that it is likely to do a
    better job of providing than the market.  Many public goods are
    provided
    through indirect market mechanisms, e.g., through advertising, as
    side-effects of the production of non-public goods, and so on.  So
    even though public provision of public goods is often a good idea,
    each case needs to be analyzed on its own terms.
    
    Marketplaces are generally not public goods, for the simple reason
    that if you can build a fence around something then it's not a public
    good.  So, for example, the flea market might charge each vendor a fee
    to set up a stand, or it might charge each shopper a fee to enter the
    fairgrounds where the market is being held.  When I used the phrase
    "public good", I did have some real issues in mind, just not the ones
    that correspond to that phrase.  I was thinking, first, about various
    laws and customs that require certain kinds of market-makers to
    deal with
    everyone on equal terms.  I was also thinking about the problematic
    nature of many, if not most, decisions about how marketplaces should
    be paid for.  Marketplaces, as I mentioned, tend to exhibit network
    effects, meaning that the more people participate in them, the more
    valuable they become.  Large marketplaces thus tend to crowd out
    smaller ones, other things being equal, so that competition among
    marketplaces is often unstable.  A monopoly marketplace can extract
    rents (i.e., above-competitive prices) from its participants.  This
    can happen through high prices imposed on buyers or sellers, but it
    can also happen through the marketplace's attempts to interfere with
    the goods being sold, the terms of sale, the nature of advertising,
    and so on.  When a marketplace does impose such restrictions, though,
    it can be hard to prove that they are anticompetitive.  On the other
    hand, even if a marketplace is a monopoly, it doesn't automatically
    follow that the market in marketplaces has failed.  It's complicated,
    is the point, and it's often very unclear what the right answer is.
    
    What's worse, the mechanisms for charging people to participate
    in a marketplace are often determined more by the practicalities of
    extracting money from them than by the economically optimal answer.
    Thus, for example, the flea market should ideally charge a "tax" on
    every transaction rather than charging all participants a flat fee.
    The flat fee distributes the burden unfairly and thus causes some
    potential participants to stay home.  But it's not practical for a
    flea market to keep track of everyone's transactions precisely enough
    to collect taxes.  So if we come across a marketplace that pays for
    itself through a peculiar mechanism that nobody can explain, it need
    not follow that anything is wrong, though it may well follow that
    an opportunity exists for someone who can invent a better mousetrap.
    
    The flea market makes a good example because it's harmless; we can
    follow the economic concepts without getting mad.  When we move along
    to the California electricity market, however, things get bad really
    fast.  The complexities of electric power trading are quite amazing.
    To start with, you've got the power grid, which is a vast, delicate,
    and extremely dangerous physical object.  Even on a good day the
    power grid only operates because of intense, highly-evolved real-time
    cooperation among parties spread across half the continent, any one
    of whom is capable of turning out the lights in Los Angeles on a few
    seconds' notice.  The problem is hard enough when those parties all
    work for the same organization, but when they work for competing firms
    then life gets very hard.
    
    Furthermore, the economics that you learned in economics classes
    -- you know, the economics where they assume that the commodity is
    uniform in every way except maybe the one little way that a particular
    model is concerned with -- doesn't have much to do with real markets,
    even in the case of electricity, where you would think that the
    commodity is just about as uniform as it could possibly be.  In fact,
    electricity markets have a diversity of different kinds of producers,
    each with its own attributes and thus its own agenda, and every last
    detail of the very complicated market mechanisms has huge implications
    for the strategic positions of each of these producers.  That's why
    it helps to buy politicians, and why you want to have the resources
    to set the agenda and work the negotiations from day one, and get your
    favorite economists into the loop, and so on.  It is far from clear
    that we know how to organize that kind of process so that it produces
    a rational outcome.  In economics dreamworld it's easy.  In this
    world it's not.  We read again last week about how energy trading
    is "inevitable", and I'm sure it seems inevitable to people who live
    in the dreamworld of economics.  It probably also seems inevitable
    to people who live in the world where privatization happens whether
    it makes sense or not.  That's a world much closer to our own.  But
    in the real world, "inevitable" is nothing but a refusal to think or
    choose.
    
    I don't know what's going to happen, and I don't even have much of
    an opinion about it.  But I do know that the Enron debacle exposes
    some predictable systemic difficulties with modern real-time markets.
    It's partly about the accounting industry, whose absolute perfidy
    appears to be the one area of indisputable consensus at the moment.
    But that's just the start.  For one thing, I haven't even heard
    the first breath of the right answer to the problem with accounting:
    establish shareholder cooperatives to organize audits.  We know a
    great deal about how to govern cooperatives, and even badly governed
    shareholder cooperatives will do a better job of choosing accounting
    firms than (for heaven's sake!) the firms that are being audited.
    And even when we solve that problem, we still haven't begun to think
    about the institutional demands of the world we're walking into.
    We can expect to hear a lot of fog in the next few months, because
    many of the same people who celebrate markets would be terrified
    to confront the rigors of real ones.  It's much better to maintain
    a bunch of dark corners where they can rip people off.
    
    **
    
    The rise of object world.
    
    One of those revolutions that keeps not quite happening is called
    "distributed objects".  Although object-oriented programming was
    invented with the intention of simulating real objects in the world,
    in practice objects are bundles of data that interact by sending
    "messages" to one another.  Objects are said to be "distributed" when
    they can reside on any Internet host, keeping track of one another and
    exchanging messages without any regard to their location.  Distributed
    object standards do exist, but they are not a big part of the average
    user's life.  There are some good reasons for this, but never mind
    about that.  Instead, I want to imagine what the world will be
    like once standards for distributed objects are widely accepted and
    implemented, and once the creation and use of distributed objects
    becomes a routine part of life.
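
    To make the idea a little more concrete, here is a minimal sketch
    in Python of what location-independent message-passing might look
    like.  Everything in it -- the registry, the host name, the ID --
    is invented for illustration; real standards such as CORBA put far
    more machinery behind naming, transport, security, and failure.

        # Toy illustration only: a registry maps global object IDs to
        # whatever host currently holds the object, and "sending a
        # message" means calling a method without knowing the location.

        class Registry:
            def __init__(self):
                self.locations = {}   # object ID -> (host, local object)

            def register(self, obj_id, host, obj):
                self.locations[obj_id] = (host, obj)

            def send(self, obj_id, message, *args):
                host, obj = self.locations[obj_id]
                # A real system would cross the network to reach `host`;
                # here we simply call the local object.
                return getattr(obj, message)(*args)

        class Book:
            def __init__(self, title):
                self.title = title

            def describe(self):
                return "Book: " + self.title

        registry = Registry()
        registry.register("book:12345", "host-a.example.org",
                          Book("Global Public Goods"))

        # Any party holding the ID can send a message, with no idea
        # where the object actually lives.
        print(registry.send("book:12345", "describe"))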
    
    Let's say we're having a conversation, and that I recommend a book to
    you.  You'd like to follow up, so I pick up an object that represents
    the book (technically, a pointer to an instance of the "book" class),
    and I hand it to you.  These actions of "picking up" and "handing"
    could be physically realized in several ways.  For example, I could
    pick up a small token, wave it over a copy of the book, and click on
    it with my finger to lock in the book's identifier.  (A Xerox PARC
    concept video shows something like this.)  Working out an interface
    for this idea is a lot harder than telling a story about it, but at
    least it's one idea about what "picking up" and "handing" might mean.
    
    Another possibility is that we could use software that monitors our
    conversation for objects it can recognize, either interpreting the
    fluent speech or waiting for us to address it with commands.  The
    software could show each of us a steady stream of candidate objects
    that it thinks we're referring to, and we could stop occasionally to
    poke one of those objects, indicating that we want to save it or share
    it.  It would be fascinating to watch people develop conversational
    conventions for interacting with this device.  It would stop being
    a novelty in short order, and people would conjure objects frequently
    without making a big deal of it.
    
    Software people think by classifying the world into object classes,
    some of them more abstract than others.  This way of thinking is
    hidden from normal people, but in object world we would have to
    democratize it.  People would become aware of object classes, and
    communities would have tools to come up with new object classes of
    their own.  (Paul Dourish is doing something like this.)  It would
    be useful to have an object class for any sort of "thing", abstract
    or concrete, that you want to maintain any kind of relationship with.
    When you make an airplane reservation, for example, an object could
    materialize in front of you, and you could drop it into your wallet.
    That object would include several attributes, all of them tracking
    the current state of the flight: whether it has been cancelled, when
    it is scheduled to leave, your seat assignment and meals, and so on.
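
    Purely as a sketch -- every field name here is invented, and a real
    reservation object would be fed by the airline's own systems -- the
    thing you drop into your wallet might look like this:

        # Hypothetical reservation object.  In object world the
        # attributes would be kept current automatically; here they are
        # plain fields with an update hook standing in for that.

        class FlightReservation:
            def __init__(self, flight, departs, seat, meal):
                self.flight = flight
                self.departs = departs      # scheduled departure time
                self.seat = seat
                self.meal = meal
                self.cancelled = False

            def update(self, **changes):
                # Imagined to be called whenever the real flight's
                # state changes.
                for name, value in changes.items():
                    setattr(self, name, value)

        reservation = FlightReservation("Flight 123", "2002-04-22 08:05",
                                        seat="14C", meal="vegetarian")
        reservation.update(departs="2002-04-22 09:40")   # delayed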
    
    Airplane reservations are an obvious example, already famous from the
    early-1990s vaporware of "agents", but object world really starts when
    people start noticing "hey, that's a sort of 'thing' that I would like
    to track".  Some of the "things" would be physical objects, and others
    would be more abstract.  For example, if you are traveling to Omaha
    on April 22nd and 23rd, you can go to a weather site on the Web and
    pick up an object for the weather in Omaha on those days.  Of course,
    there will be gnashing of teeth about whether you are going to pick
    up one object, or two objects, or two objects contained within an
    abstract "visit" object, or what, but that is the normal gnashing of
    teeth that programmers engage in all the time.  We're just going to
    democratize the problem.  As the date approaches, the weather objects
    will continually update themselves to the best available forecast.
    Once you reach Omaha, the objects will start tracking the weather in
    real time.  When your visit is done, you'll have objects that record
    the weather statistics for Omaha in fine detail.  If you care about
    these objects then you can include them in your diary or trip report.
    If not then you can toss them.
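
    A rough sketch of that lifecycle (the names and numbers are made up
    to show the shape of the thing, nothing more):

        # Invented illustration of a weather object for one place and
        # day: forecasts beforehand, live observations during the
        # visit, and a fixed record afterwards.

        class WeatherObject:
            def __init__(self, place, date):
                self.place = place
                self.date = date
                self.source = "forecast"   # forecast -> live -> record
                self.high_f = None
                self.conditions = None

            def update(self, source, high_f, conditions):
                if self.source != "record":   # records never change
                    self.source = source
                    self.high_f = high_f
                    self.conditions = conditions

        omaha = WeatherObject("Omaha", "2002-04-22")
        omaha.update("forecast", 61, "showers")      # days ahead
        omaha.update("live", 64, "partly cloudy")    # while you're there
        omaha.update("record", 66, "clear")          # after the visit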
    
    Objects can be complicatedly interlinked.  When you buy a car, you
    would also get a complex, interlinked pile of objects, one for every
    component of the car.  The components that include electronics would
    be connected in real time to their object representations, so that
    the objects reflect the current state of the components and some of
    the components can be controlled remotely by manipulating the objects.
    When a component is replaced at the repair shop, its object would
    be discarded and a new object would be linked in with the others.
    In fact, every auto part in the world would have a unique ID embedded
    in its corresponding object; when the part is manufactured, the object
    is manufactured along with it, and the object will then shadow the
    part forever.  When the part is discarded or destroyed, the object
    will hang around as a record of it.  This may sound complicated, but
    once the necessary standards and infrastructure are in place it will
    be perfectly routine.  Besides, it's already happening on a large
    scale, and the resulting numbers are data-mined for quality-control
    and insurance purposes.  Once this system goes public, children will
    grow up assuming that everything has a digital shadow.
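
    Here, again as nothing more than an invented sketch, is what a
    part's digital shadow might amount to: a unique ID assigned at
    manufacture, a running history, and an object that outlives the
    part itself.

        import uuid

        class PartShadow:
            def __init__(self, kind):
                self.part_id = uuid.uuid4().hex   # ID embedded in the part
                self.kind = kind
                self.status = "in service"
                self.history = ["manufactured"]

            def record(self, event):
                self.history.append(event)

            def retire(self, reason):
                self.status = "retired"
                self.record("retired: " + reason)
                # Nothing is deleted; the object remains as a record.

        alternator = PartShadow("alternator")
        alternator.record("installed in vehicle 000")
        alternator.retire("replaced at repair shop")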
    
    Objects can be designed to provide different kinds of access to
    different sorts of people.  You could have a "home page" or other
    public space (perhaps by then the whole idea of a home page will
    be quaint) where you post some objects for anyone to take, such as
    your latest publication or the upcoming performance of your theater
    company.  You could also distribute objects that make a wider range
    of attributes available to some people and a narrower range available
    to others.  Decent graphical tools will allow regular non-programmers
    to design such things.
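
    One simple way to picture that -- a sketch only, since a real
    system would tie the audiences to some form of authentication --
    is an object that hands out filtered views of itself:

        # Sketch of attribute-level access: the owner decides which
        # attributes each audience may see, and hands out a filtered
        # copy rather than the object itself.

        class SharedObject:
            def __init__(self, attributes, audiences):
                self._attributes = attributes   # name -> value
                self._audiences = audiences     # audience -> visible names

            def view_for(self, audience):
                visible = self._audiences.get(audience, [])
                return {name: self._attributes[name] for name in visible}

        performance = SharedObject(
            {"title": "Spring show", "date": "2002-05-04",
             "guest_list": ["..."], "budget": 1200},
            {"public":  ["title", "date"],
             "company": ["title", "date", "guest_list", "budget"]})

        print(performance.view_for("public"))   # only title and date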
    
    We are a long way from object world.  We could build demonstration
    versions of it today, and someone surely has.  (Do send URLs for
    papers about any demonstrations.)  To make the idea scale, though,
    real problems need to be solved on several levels.  One problem
    is middleware.  Lots of work has been done on distributed object
    databases, but now we're talking about making billions of objects
    available in billions of locations, exchanging them over wireless
    links, and all sorts of hard things.  If I have a link to my car's
    objects in my wallet, where are the objects really stored?  How widely
    are they replicated, and how is that replication managed?  Are they
    really going to be kept constantly up to date while the car is being
    driven?  Will they be updated ten times a second?  That's a lot of
    data.  Or will the updates wait until I use the object?  How does
    the object know it is being used?  And what if I associate policies
    with the object, saying that it should take such-and-such an action
    when such-and-such a predicate on the car's attributes starts to
    hold, for example to detect theft?  Once we ask these questions, we
    come
    face-to-face with the problem of middleware, the intermediate layers
    of software that are capable of supporting the full range of answers
    to these questions that might be reasonable in one context or another.
    To design middleware you need some idea of the range of applications
    and the practical problems that arise in them, and that in turn
    requires a fair amount of painful iteration between the applications
    level (which wants the middleware to stand still) and the middleware
    level (which wants to keep issuing new releases forever).
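
    The policy idea in particular is easy to sketch, and the sketch
    makes the middleware question vivid: the code below is purely
    illustrative, and deciding where such a rule actually runs -- on
    the car, on the copy in my wallet, on some server -- and how often
    it is evaluated is exactly the problem the middleware has to solve.

        # Illustrative policy attached to an object: a predicate over
        # the object's attributes plus an action to take when the
        # predicate starts to hold.

        class CarObject:
            def __init__(self):
                self.ignition_on = False
                self.owner_key_present = True
                self.policies = []   # list of (predicate, action) pairs

            def add_policy(self, predicate, action):
                self.policies.append((predicate, action))

            def update(self, **changes):
                for name, value in changes.items():
                    setattr(self, name, value)
                for predicate, action in self.policies:
                    if predicate(self):
                        action(self)

        car = CarObject()
        car.add_policy(
            lambda c: c.ignition_on and not c.owner_key_present,
            lambda c: print("possible theft: notify the owner"))
        car.update(owner_key_present=False)
        car.update(ignition_on=True)   # the policy fires here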
    
    Other issues arise at the interface level.  The idea is to insinuate
    objects into the finest corners of everyday life.  If we interact
    with objects only by mousing them on desktops then we have failed.
    The voice-recognition interface that I sketched above is half an idea
    of what it would be like to weave object-manipulation into the most
    ordinary activities.  No doubt other interface models will work best
    in particular settings, and these interfaces will have to work for
    a wide variety of people.  Interface design is relatively easy if
    you have the user's complete attention, which has been the case for
    nearly all interfaces so far, but interfaces for people who are mainly
    focused on something else are harder.  Likewise, interfaces are easy
    to design if you have immobilized the user's body, but they are harder
    to design if three-quarters of the user's bodily orientation is driven
    by a practical task in the real world, such as fixing a car.
    
    The hardest problems, perhaps, are semantic.  It is already a serious
    problem that people in different communities use words in different
    ways and thus recognize different categories of things.  An object
    like "the day's weather" can probably mean very different things to
    athletes, meteorologists, hikers, event planners, and disaster relief
    workers.  The negotiation of those semantic differences has heretofore
    happened behind the scenes, but in object world it will become more
    public.  The world is full of standards organizations dead-set on what
    they call "semantic interoperability", meaning that language gets its
    meaning standardized for the convenience of computers.  The whole idea
    makes my flesh crawl, but in object world we'll definitely have to
    decide if we want it.
    
    **
    
    Logarithmic change.
    
    One of the many unexamined assumptions of the cyberfuturists is that
    exponential growth in the power of computers (which nobody doubts)
    automatically implies exponential growth in the impacts on society.
    If you really believe this, and it's implicit in a great deal of
    public prognostication, for example in the work of Ray Kurzweil,
    then society is headed for a revolution of inconceivable proportions.
    But what if it's not true?  I have already suggested one reason why
    social impacts might not be proportional to computing power: there
    may simply be a limit to the amount of computing that society has any
    use for.  But that argument is no more convincing than the opposite;
    its purpose is really to balance one implausibility with another.
    
    Let me try another counterargument -- or really a counterintuition
    -- that might work better.  The counterintuition is that computing
    has its "impacts" on society logarithmically.  The technological
    changes might be exponential, but the consequences might be linear.
    If you list the potential applications of computing, they fall along
    a spectrum, and that spectrum has a logarithmic feeling to it.  One
    category of applications requires 100 units of computational zorch,
    the next category requires 1,000 units, the next category requires
    10,000 units, and so on, and each category of applications is equally
    easy for society to absorb.  Think, for example, of the computational
    differences between audio and video.  We can now do quite general
    audio processing in cheap, ubiquitous embedded processors, but
    the same is not yet true with video.  The result is a natural kind
    of pacing.  Society can digest the technological, entrepreneurial,
    cultural, criminal, and aesthetic possibilities of digital audio
    processing, and then it can move along to digest the possibilities of
    video processing.  Society will be digesting new digital technologies
    for a long time, no doubt about it, and we are always getting better
    at living in a world of continual change.  But the technologies will
    appear in manageable bunches, and we will learn to manage them.
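
    The arithmetic behind that intuition is simple.  If each successive
    category of applications needs roughly ten times the zorch of the
    one before it, then the number of categories within reach grows
    only logarithmically with capacity, and exponential growth in
    capacity delivers new categories at a roughly constant rate.  A toy
    calculation, with made-up numbers chosen only to show the shape of
    the curve:

        # Toy numbers: capacity doubles every 18 months (the usual
        # gloss on Moore's law), and each new category of applications
        # needs ten times the capacity of the one before it.  The count
        # of affordable categories then rises roughly linearly in time.

        import math

        capacity = 100.0                 # arbitrary starting "zorch"
        for year in range(0, 19, 3):
            categories = int(math.log10(capacity / 100.0)) + 1
            print(year, "years:", round(capacity), "units,",
                  categories, "categories within reach")
            capacity *= 2 ** (3 / 1.5)   # three more years of doubling

        # Capacity grows fourfold every three years, but only about one
        # new category of applications comes within reach every five.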
    
    It would be helpful to have a model of the space of forthcoming
    applications of computing.  One approach is to think about computing
    applications in terms of their input sources and output sinks.  As
    to inputs: you can't compute without data, and the data has to come
    from somewhere.  So the first question is how the data is produced.
    Sometimes this isn't a problem.  Nature provides an infinite supply
    of data, and natural sciences applications (the paradigm case is
    environmental monitoring) will be able to absorb fresh supplies of
    computational capacity forever.  But what are the output sinks for
    natural sciences computing?  That is, who uses the results?  In order
    to change the world, the outputs need to feed into the social system
    someplace where they create new opportunities or destabilize old ones.
    If scientists write research papers based on experiments that employ
    petabytes of data rather than terabytes, does the world change at all?
    The world does change somewhat because of the organization required
    to capture petabytes of data; someone has to install and maintain the
    sensor arrays.  But the institutions of science are already geared to
    maintaining substantial distributed infrastructures.  The picture of
    the lone scientist in the lab coat was old-fashioned a generation ago.
    
    Other examples can be multiplied.  Video games that employ gigaflops
    rather than megaflops are still video games.  They fit into the world
    in the same way, they appeal to the same audience, and they require
    the same amount of time and attention.  People likewise only have a
    certain number of hours in a day that they can spend talking on the
    telephone.  We can do a much better job of delivering print material
    to people -- there's no reason why the average citizen can't skim
    the equivalent of several hundred books a year -- but again people's
    capacity maxes out after a while.
    
    In my view, the applications of computing that most clearly change the
    world are the ones that involve the "object world" that I described
    above.  (David Gelernter refers to something similar as the "mirror
    world", but I've explained elsewhere the many problems with the
    mirror metaphor.)  Consider again the case of the digital car object.
    That case is perhaps a little frivolous, given that few people need
    to track the deep workings of their cars.  Even so, there's a quiet,
    profound revolution going on: everything in the world is growing a
    digital shadow, that is, a data object that reflects its attributes
    in real time, not to mention its entire history and its various
    simulated futures.  It's a quiet revolution because it's hard to do.
    It's not going to be an overnight success like the Web.  It requires
    whole layers of existing practices to be ploughed up and replanted,
    and that means trashing decades of tacit knowledge and innumerable
    small power bases.  In the long run, however, it really does change
    the world.  It allows various activities to unbundle themselves
    geographically and organizationally and then reassemble themselves in
    new combinations.  (It doesn't mean that the whole world disassembles
    irreversibly into fragments, however, no matter what you've heard.)
    It creates whole new power arrangements based on access to objects;
    it cuts people loose from old social arrangements while binding them
    into new ones.
    
    The question is how significant the social effects of the object
    world are as a function of the magnitude of the data they involve.
    Clearly, it will be a long time before everything in the world acquires
    a full-blown digital shadow.  The computing power required to make
    real-time data on every device in the world available to every other
    device in the world is several orders of magnitude greater than what's
    available now.  It will happen, given exponential improvements in
    computing, but it will happen slowly.  Assuming that organizational
    change processes can keep up with technological improvements (perhaps
    we'll have a lag of undigested technological capacity at some point),
    we can imagine the object world being built and taken for granted
    within a few decades.  The process has two kinds of terminus: when
    we've exhausted the input side by mirroring every object in the world,
    and when we've exhausted the output side by doing everything with the
    mirrored data that we can possibly (I mean, profitably) imagine doing.
    
    Be that as it may, it would still be helpful to have intuitions about
    the magnitude of the social impact as a function of the magnitude
    of the inputs.  If we track ten times as many parameters of the
    world, does that cause ten times the consequences?  It seems unlikely.
    For one thing, we can expect people to objectify the high-impact
    data first -- meaning, the data that produces the highest payoff
    relative to the effort of capturing it.  And some types of data are
    harder to capture than others, relative to the infrastructure and
    manufacturing techniques that are available at a given time.  It is
    relatively easy to capture the operating parameters of an auto engine;
    the engine is already full of computers, all of which will soon be on
    a general-purpose network.  Embedding sensors and processors in every
    bolt will take longer and result in orders-of-magnitude increases in
    the amount of available data, but it's hard to imagine that digital
    bolts will change the structure of the auto industry much more than
    the first round of objectifying that's under way now.  The first round
    really *will* change the industry, and not just the manufacturers but
    the insurers, repairers, hobbyists, regulators, and cops.  But the
    next round?  I suspect that much of the institutional framework to
    deal with those data flows will already be in place.  We will see.
    
    **
    
    Networks and problems.
    
    Different fields produce different kinds of knowledge.  The idea of
    a diversity of knowledge, however, intimidates many people; it sounds
    to them like relativism, as if *anything* can count as knowledge
    if someone simply says so.  That's silly; no such thing follows.
    Even so, it *is* a hard problem to understand how knowledge functions
    in society if knowledge is diverse, for example how to tell the
    difference between quality-control and censorship.  The scholars who
    have argued for the diversity of knowledge, despite the quality of
    their research, have often been unconcerned with the public-relations
    problem that their insights suffer.  They can win the argument
    about relativism when they are arguing with people as erudite
    as themselves, but they have historically not done a good job of
    translating the arguments into a rhetoric that wins public debates.
    That's partly because they are so concerned to defeat the mythology
    of unitary knowledge that they emphasize heterogeneity more than
    they emphasize the limits to heterogeneity.  That's too bad, because
    the diversity of knowledge actually turns out to be related to the
    Internet's place in society.
    
    Let me suggest an intuitive way to think about the differences between
    different kinds of knowledge.  To simplify, I'll stick with academic
    fields.  Every academic field, I will suggest, has two dimensions:
    problem and network.  By the "problem" dimension of knowledge I
    mean the ways in which research topics are framed as discrete and
    separable, so that researchers -- whether individuals or teams --
    can dig into them and produce publishable results without engaging
    in far-flung collaborations.  By the "network" dimension of knowledge
    I mean the ways in which researchers organize themselves across
    geographical and organizational boundaries to integrate experience
    from many different sites.  Every field has its own complexity in
    both of these dimensions, but often the emphasis is on one dimension
    or another.  As a result, we can roughly and provisionally categorize
    academic fields as "problem" fields and "network" fields.
    
    The prototype of a "problem" field is mathematics.  Think of Andrew
    Wiles, who disappeared into his study for several years to prove
    Fermat's Last Theorem.  The hallmark of "problem" fields is that
    a research topic has a great deal of internal depth and complexity.
    The math in Wiles' proof may seem like vast overkill for something
    so simple as the statement of Fermat's Last Theorem, but you can think
    of it as an engineering project that finished building a bridge over
    a conceptual canyon.  Publicity value aside, the mathematicians value
    the bridge because they hope that it's going to carry heavier traffic
    in the future.  Even so, it's not clear that Wiles' type of math
    represents the future.  Math papers are more likely to be coauthored
    than in the old days, as mathematicians work increasingly by bringing
    different skills together.  This is partly a legacy of the major math
    project of the 20th century, which aimed at the grand unification
    of fields rather than producing heavier theorems in a single area.
    That unification project opened up many seams of potential results
    along the edges between different areas of math.  The increasing
    practical applicability of even very abstruse areas of math (e.g.,
    in cryptography) didn't hurt either.
    
    Even so, math is still weighted toward the "problem" dimension.  Math
    people do form professional networks like anyone else, but the purpose
    of these networks is not so much to produce the knowledge as to ensure
    a market for it.  The same thing is true in computer science, where
    professional networks also help with funding.  And those are not
    the only problem fields.  Cultural anthropology is a good example.
    The anthropologist goes to a distant island, spends two years learning
    the culture, and writes a book that uses it as raw material to explore
    a particular theoretical problem in depth.  The "problem" nature
    of cultural anthropology is partially an artefact of technology;
    if long-distance communication is hard then it's easier to uphold
    the myth that humanity comes sorted into discrete cultures, and a
    fieldworker who travels great distances to study a culture has no
    choice but to define a large, solitary research project.  But that
    doesn't change the fact that the best anthropology (and there's a
    lot of good anthropology being written) has intellectual depth to
    rival anything being done in computer science, even if the conceptual
    and methodological foundations of the research could hardly be more
    different.
    
    Contrast these fields to some others: medicine, business, and library
    science.  Medicine, business, and library science may not seem similar
    on the surface, but they have something important in common: they are
    all network-oriented.  Because they study something that is complex
    and diverse (illnesses, businesses, and information), they build their
    knowledge largely by comparing and contrasting cases that arise in
    professional practice.  Physicians don't make their careers by solving
    deep problems or having profound ideas; they make their careers by
    building networks that allow them to gather in one central location
    the phenomenology of a syndrome that has not yet been systematically
    described.  Medical knowledge is all about experience-based patterns.
    It says, we've seen several hundred people with this problem, we've
    tried such-and-such treatments on them, and this is what happens.
    Business is the same way: we've investigated such-and-such an issue
    in the context of several businesses, and this is the pattern we've
    discerned.  Library science, likewise, is concerned to bring order
    to the diversity of information as it turns up in the collections of
    library institutions worldwide.
    
    When mathematicians look at business or computer scientists look at
    library science, they often scoff.  They have been taught to value
    "problems", and they are looking for the particular kind of "depth"
    that signifies "good work", "real results", and so on.  When they
    don't find what they are looking for, they often become disdainful.
    The problem is that they are looking in the wrong place.  They don't
    realize that the "problems" that they are familiar with are largely
    artificial constructions.  To fashion those kinds of problems, you
    need to take several steps back from reality.  You're abstracting
    and simplifying, or more accurately someone else is abstracting and
    simplifying for you.  Many job categories are devoted to suppressing
    the messy details that threaten to falsify the abstractions of
    computer science, starting with the clerks whose computer terminals
    demand that they classify things that refuse to be classified.
    The dividing-line between computer science and the business-school
    discipline of "MIS" is especially interesting from this point of view,
    since the MIS managers are much closer to the intrinsic complexity
    and diversity of day-to-day business.  Computer scientists, as a broad
    generalization, have little feeling for the complexity and diversity
    of the real world.  That's not to say that they are bad people or
    defective intellects, only that the field of computer science frames
    its knowledge in certain ways.  It takes all kinds to make a world,
    and that goes for knowledge as well.  We should encourage the creative
    tension between problem fields and network fields, rather than arguing
    over who is best.
    
    Medicine is an interesting case for another reason.  Even though
    problem fields are higher-status than network fields as a broad
    generalization, medicine is an exception to the rule.  If my theory
    is right, then, why doesn't medicine fall into the same undeservedly
    low-status bin as business and library science?  The reasons are
    obvious enough.  Medicine is a business unto itself -- at UCLA it's
    half the university's budget -- and it brings money in through patient
    fees, insurance reimbursements, and Medicare, as well as through
    research grants and student tuition.  Money brings respect, all
    things being equal, although the increasingly problematic finances
    of teaching hospitals will test this dynamic in the near future.
    Medicine is also very aggressive in the way it wields symbols --
    it's hard to beat life and death for symbolic value.  What's more,
    business and library schools have stronger competitors than medical
    schools, so they have a greater incentive to speak in plain English.
    Precisely because they rely so heavily on symbols, medical schools
    have never had to explain how their knowledge works in ways that
    normal people can understand.
    
    Professional schools in general tend to produce knowledge that is
    more network-like than problem-like, but historically they have very
    often responded to the disdain of the more problem-oriented fields
    by trying to become more problem-oriented themselves.  This strategy
    is very old; in fact Merton described it perhaps fifty years ago.
    Unfortunately, it doesn't always work.  You end up with professional
    schools whose faculties are trained in research methods that are
    disconnected from the needs of their students, or else you end up
    with factionalized schools that are divided between the scientists
    and the fieldworkers, or with people whose skills lie in network
    methods trying to solve problems because that's what the university
    wants.  I think this is all very unfortunate.  I'm not saying that
    every field should be homogenous, and even if everyone does the
    research they ought to be doing we'll still have the problem of how
    scholars with incommensurable outlooks can get along.  Still, the
    asymmetry of respect between network knowledge and problem knowledge
    is most unfortunate.
    
    I think the world would be better off if network knowledge were just
    as venerated as problem knowledge.  Before this can happen, we need
    better metaphors.  We are full of metaphors for talking about the
    wonders of problem knowledge, as we ought to be.  When Andrew Wiles
    can go off in his room and prove Fermat's Last Theorem, that's a good
    thing, and there's nothing wrong with using the metaphor of "depth"
    to describe it.  It's just that we need metaphors on the other side.
    
    So here's a metaphor.  I propose that we view the university as the
    beating heart of the knowledge society.  The heart, as we all know,
    pulls in blue blood from all over the body, sends it over to the lungs
    until it's nice and red with oxygen, and then pumps it back out into
    the body.  The university does something similar, and the predominant
    working method of business schools can serve as a good way to explain
    it.  If you read business journals, especially journals such as
    the Harvard Business Review that are largely aimed at a practitioner
    audience, you will often see two-by-two matrices with words written in
    them.  These sorts of simple conceptual frameworks (which I've talked
    about before) are a form of knowledge, but it's not widely understood
    what form of knowledge they are.  Once we understand it, we'll be able
    to see how the university is like a heart.
    
    So let's observe that there are at least two purposes that knowledge
    can serve: call them abstraction and mediation.  Abstraction is the
    type of knowledge that the West has always venerated from Plato's
    day forward.  It is something that rises above concrete particulars;
    in fact, it carries the implicit suggestion that concrete particulars
    are contaminants -- "accidents" is the medieval word -- compared to
    the fixed, permanent, perfect, essentially mathematical nature of the
    abstractions.  Abstractions generalize; they extract the essence from
    things.  They are an end in themselves.  In Plato's theory we were all
    born literally knowing all possible knowledge already, since access
    to the ideals (as he called them) was innate.  That made questions of
    epistemology (i.e., the study of the conditions of knowledge) not so
    urgent as they became subsequently, as the West began to recognize the
    absurdity of a conception of knowledge that is so completely detached
    from the material world.
    
    But if knowledge can abstract, it can also mediate.  The purpose
    of the two-by-two matrices in the business journals is not to embody
    any great depth in themselves, the way a theorem or an ethnography
    might.  Instead, their purpose is to facilitate the creation of new
    knowledge in situ.  Choose a simple conceptual framework (transaction
    costs, core competencies, structural holes, portfolio effects), and
    take it out into real cases -- two or more, preferably more.  Study
    what the conceptual framework "picks out" in each case; that is, use
    the conceptual framework to ask questions, and keep asking questions
    until you can construct a story that makes sense within the logic of
    that particular case.  That's important: each case has its details,
    and each case is filled with smart people who have a great deal of
    practical knowledge of how to make a particular enterprise more or
    less work.  So work up a story that makes sense to them, that fits
    with their understandings, yet that is framed in terms of the concepts
    you've brought in.  Of course, that might not be possible; your new
    concepts may not pick out anything real in a particular case, in
    which case you need to get new concepts.  But once you've found
    concepts that
    let you make sense of several cases, now you can compare and contrast.
    
    And that's where the real learning happens.  Even with the concepts
    held constant, each case will tend to foreground some issues while
    leaving others in the background.  Take the issues that are foregrounded
    in case A, and translate those issues over to cases B, C, D, and E,
    asking for each of them what's going on that might correspond to the
    issue from case A.  It doesn't matter whether the other cases are all
    directly analogous to case A; even if the issue sorts out differently
    in those other cases, the simple fact that you've thought to ask the
    question will provoke new thoughts that may never have occurred to
    anybody before.  That's what I mean by the mediating role of knowledge:
    it mediates the transfer of ideas back and forth between situations
    in the real world that might not seem at all comparable on the surface.
    
    And that's the beating heart: what the university does is fashion
    concepts that allow ideas to be transferred from one setting to
    another.  Each setting has its own language, so the university
    invents a lingua franca that gets conversation started among them.
    At first the ideas will pass through the doors of the university.
    A researcher will go out to several different sites, gather ideas,
    bring them home, think about them, and then scatter them in other
    sites.  Eventually the concepts themselves will be exported, so that
    students who graduate into companies or consulting firms will become
    beating hearts on their own account.  (That's a place where the
    analogy falters: maybe the university is more like a manufacturer of
    hearts.)  We in modern society take for granted something remarkable:
    that nearly every site of practice is on both the donating and the
    receiving end of these mediated transfers of ideas.  Often we don't
    realize it because the people who import ideas by mediation from
    other fields will often present them full-blown, without bothering to
    explain where they got them.  Other times, a kind of movement will get
    going whereby researchers and practitioners unite across disciplinary
    lines around a particular metaphor that they find useful for mediating
    transfers among themselves: self-organization is one of the fashionable
    metaphors of the moment.
    
    Mediating concepts can be used in various ways, but in general what
    you see is a mixture of two approaches: explicit comparing/contrasting
    of particular cases and something that looks more like abstraction.
    The resulting abstractions, however, usually have no great depth in
    themselves; their purpose is simply to summarize all of the issues and
    ideas and themes that have come up in the various cases, so that all
    of them can be transferred to new situations en masse.  This is what
    "best practices" research is.  It's also what physicians do when they
    codify the knowledge in a particular area of medicine; the human body
    is too complicated, variable, and inscrutable to really understand in
    any great depth, and so codified medical knowledge seeks to overwhelm
    it with a mass of experience loosely organized within some operational
    concepts and boiled down into procedures that can be taught, and whose
    results can be further monitored.  This is the important thing about
    network knowledge: it really does operate in networks -- meaning both
    social networks and infrastructures -- and networks are institutions
    that have to be built and maintained.  In a sense, network knowledge
    is about surveillance, and mediating concepts exist to render the
    results of surveillance useful in other places.
    
    The mediating role of concepts can help us to explain many things.
    It is a useful exercise, for example, to deliberately stretch the
    idea of mediation to situations where its relevance is not obvious.
    Philosophy, for example, has long been understood as the ultimate
    abstraction, something very distant from real practice.  This is
    partly a side-effect of the unfortunate professionalization of
    philosophy that led to the hegemony of analytical philosophy in
    the English-speaking world perhaps a century ago, but really it
    dates much further back, into the mythologies of the ancient Greeks.
    The popular conception of philosophy as the discipline of
    asking questions with profound personal meaning is almost completely
    unrelated to the real practice of philosophy at any time or place
    in history.  There are exceptions.  One of Heidegger's motivations,
    especially in his earliest days, was to reconstruct philosophy around
    the kinds of profound meanings that he knew from Catholic mysticism.
    Some political philosophers have tried to make themselves useful to
    actual concrete social movements.  But for the most part, philosophy
    has been terribly abstract from any real practice.
    
    Yet, if we take seriously the mediational role of concepts, then
    maybe the situation is more complicated.  One role of the university
    is precisely to create concepts that are so abstract that they
    can mediate transfers of ideas between fields that are very distant
    indeed.  Perhaps we could go back and write a history of the actual
    sources of scholars' ideas, and maybe we would find that the very
    abstract concepts that scholars learned in philosophy often helped
    them to notice analogies that inspire new theories.  Analogies
    have long been recognized as an important source of inspiration for
    new discoveries, especially in science but in other fields as well,
    and nothing facilitates the noticing of analogies so efficiently as
    an abstract idea that can be used to describe many disparate things.
    
    I would like to see the university take the mediating role of concepts
    more seriously.  I would like every student to be taught a good-sized
    repertoire of abstract concepts that have historically proven useful
    for talking about things in several disparate fields -- examples
    might include positive and negative feedback, hermeneutics, proof by
    contradiction, dialectical relationships, equilibrium concepts from
    physics, evolution by natural selection, and so on -- and teach them
    not as knowledge from particular fields, but as schemata that help
    in noticing analogies and mediating the transfer of ideas from one
    topic to another.  The students would be drilled on the use of these
    concepts to analyze diverse cases, and on comparing and contrasting
    whatever the analyses turn up, and then they would be sent off to take
    classes in their chosen majors.  After a while we could do some
    intellectual epidemiology to see which of the concepts actually prove
    useful to the students, and we could gradually evolve the curriculum
    until we've identified the most powerful concepts.  I do realize the
    problem with this proposal: it is bound to set off power struggles
    along political lines, and between the sciences and humanities, over
    the best repertoire of concepts to teach.  But that's life.
    
    The mediating role of concepts, and network knowledge generally, are
    also a useful way to re-understand fields that we normally understand
    mostly in terms of their problem knowledge.  (You'll recall that my
    classification of fields as "network fields" and "problem fields" is
    a heuristic simplification, and that every field has both dimensions.)
    What is the network-knowledge dimension of math or computer science?
    I've already described one role of professional networking in each
    field, which is to provide an audience for one's work.  All research
    depends on peer review, so it's in your interest to get out there and
    explain the importance of your research to everyone who might be asked
    to evaluate it.  Likewise, if you need funding for your research then
    you'll probably want to assemble a broad coalition of researchers who
    explain the significance of their proposed research in similar ways,
    so that you can approach NSF or the military with a proposition they
    can understand.
    
    But none of that speaks to the network nature of the knowledge itself.
    What is network-like about knowledge in math and computing?  It's true
    that neither field employs anything like the case method.  But they do
    have something else, which is the effort to build foundations.  Much
    of math during the 20th century, as I mentioned, was organized by the
    attempt to unify different fields, and that means building networks of
    people with deep knowledge in different areas.  Only then can proposed
    foundations be tested for their ability to reconstruct the existing
    knowledge in each area.  In computing, the search for foundations
    takes the form of layering: designing generic computer functionality
    that can support diverse applications.  In that kind of research,
    it's necessary to work on applications and platforms simultaneously,
    with the inevitable tensions that I also mentioned above.  So in that
    sense math and computer science have a network dimension, and I think
    that each field would profit by drawing out and formalizing its network
    aspects more systematically.
    
    Even though anthropology is built on deep case studies, the network
    nature of its knowledge becomes clearer as you speak with the more
    sophisticated of its practitioners.  Anyone who engages seriously
    with the depths of real societies is aware that theoretical categories
    apply differently to different societies, and that there's a limit to
    how much you can accomplish by spinning theories in abstraction from
    the particulars of an ethnographic case.  I am basically a theorist
    myself, but I realize that my research -- that is, the theoretical
    constructs I describe -- is only valuable for the sense it makes of
    particular cases.  So I read case studies, and I try to apply my
    half-formed concepts to those cases, or else I draw on concepts that
    have emerged from particular cases, and then I try to do some useful
    work with them.  My work is also influenced by personal experience,
    usually in ways that I don't write about.  But I can only go so
    far before it's time to start testing the concepts against real
    cases again, and that's why I often move from one topic to another,
    contributing what I can until I feel like I'm out on a limb, beyond
    what I can confidently say based on existing case studies and common
    knowledge.  It *is* possible to do useful things without being directly
    engaged with cases, for example pointing out internal inconsistencies
    in existing theories, sketching new areas of research that other
    people haven't gotten around to inventing concepts for, noticing
    patterns that have emerged in the cases so far, or comparing
    and contrasting theoretical notions that have arisen in different
    contexts.  But if you believe that theory can blast off into space
    without any mooring in real cases then you're likely to do the sort
    of pretentious big-T Theory that gives us all a bad name.
    
    Anthropologists are thoroughly infused with that understanding, and so
    the best ones really do refuse abstraction.  They see their theoretical
    constructs very much as ways of mediating between different sites.
    Their concern is not practical, so they are not interested in moving
    ideas from one site to another on a material level.  They are usually
    not trying to help the people they study.  Rather, they are interested
    in describing the fullness of the social reality they find in a given
    place, and like the business people they understand that the real
    test is the extent to which their story about a particular case makes
    internal sense.  Granted, they are less concerned than the business
    people to be understandable to the people they are studying, although
    that too is changing as the "natives" become more worldly themselves,
    and as it becomes more acceptable by slow degrees to study "us"
    as well as "them".  In any case, I think that the anthropologists'
    relationship to theory is healthy, and I wish I could teach it to
    people in other fields.  Anthropology is also becoming more network-
    like as reality becomes more network-like, and as the myth of discrete
    cultures becomes more and more of an anachronism, but that's a topic
    for another time.
    
    Knowledge is diverse because reality is diverse.  In fact, reality
    is diverse on two levels.  A field like medicine, business, or
    library science derives knowledge by working across the diversity
    of illnesses, businesses, and information, gathering more or less
    commensurable examples of each under relatively useful headings that
    can be used to codify and monitor practice.  And then the various
    fields themselves are diverse: they are diverse in diverse ways.
    Fields that pride themselves on abstraction operate by suppressing
    and ignoring diversity.  That can be okay as a heuristic means of
    producing one kind of knowledge -- knowledge that edits the world
    in one particular way, and that can be useful when recombined with
    knowledge that edits the world in other ways.  But it's harmful when
    abstraction is mistaken for truth, and when fields that refuse to
    abstract away crucial aspects of reality are disparaged as superficial
    compared to the artificial depth at the other end of campus.  Let's
    keep inventing metaphors that make network-oriented fields sound
    just as prestigious and heroic as problem-oriented fields.  The
    point, of course, is not just to mindlessly praise the work, since
    bad research can be done anywhere.  The point, rather, is to render
    intuitive the standards that can and should guide us in evaluating
    research of diverse types.  If we don't, then we will disserve
    ourselves by applying standards that don't fit, or else no standards
    at all.
    
    end
    


