J. Bradford De Long
University of California at Berkeley, National Bureau of Economic Research
A. Michael Froomkin
University of Miami School of Law
DRAFT April 6, 1997 Version B17.
© 1997 J. Bradford De Long & A. Michael Froomkin.
To be published in: Deborah Hurley, Brian Kahin, and Hal Varian, eds., Internet Publishing and Beyond: The Economics of Digital Information and Intellectual Property (Cambridge: MIT Press). It will be published late 1997/early 1998.
All rights reserved. Permission granted to make one copy for non-commercial use.
It is a commonplace today to talk about the efficiency of the market system: everybody can buy what they need from one of a large number of competitive sellers, everyone can sell what they make to one of a large number of competitive buyers (or, more plausibly, "sell" their time to one of a large number of competitive employers). Everybody can make whatever deals they are able regarding the prices at which they buy and sell.
Two hundred and twenty years ago the Scottish moral philosopher Adam Smith used a particular metaphor to describe this system, a metaphor which still resonates today:
...every individual... endeavours as much as he can... to direct... industry so that its produce may be of the greatest value.... neither intend[ing] to promote the public interest, nor know[ing] how much he is promoting it.... He intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end that was no part of his intention.... By pursuing his own interest he frequently promotes that of society more effectually than when he really intends to promote it...(1)
Adam Smith's advocacy in 1776 of the market system as a social mechanism for regulating the economy was the opening shot of a grand and successful campaign. The past two centuries have seen his doctrines woven into the fabric of our society, and have seen governments that followed his path achieve the most materially prosperous and technologically powerful economies that the world has ever known.(2) Belief in free trade, an aversion to all forms of price controls, freedom of occupation, freedom of domicile, freedom of enterprise, and all the other corollaries of belief in Smith's Invisible Hand are now more than ever the background assumptions for thinking about the relationship between the government and the economy. A free-market system, most economists claim and most participants in capitalism believe, generates a level of total economic product that is as high as possible--and is certainly higher than under any alternative system that we have known.(3) The implication many have drawn from this economic success story is that in the overwhelming majority of cases the best thing the government could do for the economy was to leave it alone: laissez-faire.
The main reason that it is generally believed that a free and competitive market system will lead to something worth calling "optimal" is the dual role played by prices in a well-functioning market system. On the one hand, prices serve to ration demand: anyone who is not willing to pay the market price of a particular commodity because he or she would rather do other things with his or her (not unlimited) money does not get to consume the commodity. On the other hand, price serves to elicit production: anyone or any organization that can produce a commodity for a cost that is less than its market price has a strong financial incentive to do so.
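The dual role of prices can be illustrated with a toy market. The demand and supply schedules below are invented for illustration; the point is only that a single clearing price simultaneously rations demand and elicits production.

```python
# A minimal sketch of the dual role of prices, using assumed linear
# demand and supply schedules (all numbers are hypothetical).

def demand(price):
    # Buyers willing to pay at least `price`: the rationing role.
    return max(0.0, 100 - 2 * price)

def supply(price):
    # Producers whose unit cost is below `price`: the eliciting role.
    return max(0.0, 3 * price - 25)

def clearing_price(lo=0.0, hi=100.0, tol=1e-6):
    # Bisect on excess demand to find the price that balances both roles.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid  # excess demand: price must rise
        else:
            hi = mid
    return (lo + hi) / 2

# At the clearing price (here, 25), quantity demanded equals quantity
# supplied: everyone willing to pay that price is served, and every
# producer who can profitably produce at that price does so.
```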
You can criticize the market system because it undermines the values of community and solidarity. You can criticize the market system because it is unfair--for it gives good things to those who have control over whatever resources turn out to be most scarce as society solves its production allocation problem, not to those who have any moral desert. But the market system has the powerful achievement of creating the maximum amount of social surplus, at least under the conditions that economists usually assume and that Adam Smith implicitly assumed. In other words, the Invisible Hand has produced a larger difference between the value to consumers of what is produced and the cost to the producers of making it than has any other system.
We assume readers are broadly familiar with Adam Smith's case for the Invisible Hand. The next section deconstructs that case by pointing out three assumptions about production and distribution technologies which are essential for the Invisible Hand to work. We suggest that these assumptions are increasingly unsuited to the information economy. In the following section we take a look at some of the things that are happening on the frontiers of electronic commerce and in the market for information. Our hope is that what is now going on at the frontiers of electronic commerce may contain some clues to processes that will be more general in the future.
Our final section does not answer all the questions we raise. Instead it raises still more questions. The most we can begin to do at this point is to organize our concerns in a coherent and comprehensible framework. By looking at the behavior of people in high-tech commerce--people for whom the abstract doctrines and theories that we present have immense concrete reality because they determine whether or not they get paid--we can also form some tentative conclusions about what the next economics, and the next set of sensible policies, might look like.
Modern technologies are beginning to undermine the features that make the Invisible Hand of the market system an effective and efficient system for organizing production and distribution. The case for the market system has always rested on three pillars: call the first excludability, the ability of sellers to force consumers to become buyers and thus to pay for what they use; call the second rivalry, a structure of costs in which two cannot partake as cheaply as one, and in which producing enough for two million people uses up at least twice as many resources as producing enough for one million people; call the third transparency, that individuals can see clearly what they need and what is for sale so that they truly know what they wish to buy.
All three of these pillars fit the economy of Adam Smith's day pretty well. They fit much of today's economy pretty well too--although the fit for the telecommunications and information processing industries is less satisfactory. They will fit tomorrow's economy less well than today's. And there is every indication that they will fit the twenty-first century economy relatively poorly.(4) As we look at developments along the leading technological edge of the economy, we can see what used to be second-order "externality" corrections to the Invisible Hand becoming first-order phenomena. And we can see the Invisible Hand of the competitive market working less and less well in an increasing number of areas. This result is particularly surprising when one considers that most economic theory suggests that things which make information cheaper and more accessible should generally reduce friction in competitive markets, not gum up the works.
In information-based sectors of the next economy the owner of a commodity will no longer be able to easily and cheaply exclude others from using or enjoying the commodity. Digital data is cheap and easy to copy. Methods exist to make copying more difficult, but they add expense and complexity. Without excludability the relationship between producer and consumer is much more a gift-exchange than a purchase-and-sale relationship.(5) The appropriate paradigm is then an NPR fund-raising drive. There is no reason to presume that non-excludable commodities will get into the hands of the consumers who value them the most. And--with compensation to the producer determined not by the purchase-and-sale exchange of money for value but by the user's feelings of moral obligation to the producer--there is no reason to expect that production will be as high as it should be to maximize social welfare.
If goods are not excludable then rather than sell things, people simply help themselves. If the taker feels like it, he or she may make a "pledge" to support the producer. Perhaps the average pledge will be large enough that producers cover their costs. Many people seem to feel a moral obligation to tip cabdrivers and waiters and to contribute to National Public Radio. But without excludability it is hard to believe that the (voluntary) payments as a matter of grace from consumers to producers will be large enough to encourage the optimal level of production. Indeed, most of what we call "rule of law" consists of a legal system that enforces excludability--such enforcement of excludability ("protection of my property rights," even when the commodity is simply sitting there unused and idle) is one of the few tasks that the theory of laissez-faire allows the government.
Excludability does not exist in a Hobbesian state of nature: the laws of physics do not prohibit people from sneaking in and taking your things. So the police and the judges exist to enforce it, through penalties for breach of contract and damage awards for theft of intellectual property. The importance of this "artificial" creation of excludability is rarely remarked on: fish are supposed to rarely remark upon the water in which they swim.
We can see how an absence of excludability can warp a market and an industry by taking a brief look at the history of network television. During its three-channel heyday in the 1960s and 1970s, North American network television was available to anyone with an antenna and a receiver because broadcasters lacked the means of preventing the public from getting the signals for free. Free access was, however, accompanied by scarce bandwidth, and by government allocation of the scarce bandwidth to producers.
The absence of excludability for broadcast television did not destroy the television broadcasting industry. Broadcasters couldn't charge for what they were truly producing, but broadcasters worked out that they could charge for something else: the attention of the program-watching consumers during commercials. Rather than paying money directly, the customers of the broadcast industry merely had to endure the commercials (or get up and leave the room; or channel-surf) if they wanted to see the show.
This solution prevented the market for broadcast programs from collapsing: it allowed broadcasters to charge someone for something. But it left its imprint on the industry. Charging-for-advertising does not lead to the same invisible hand guarantee of productive optimum as does charging for product. In the case of network television, audience attention to advertisements was more-or-less unconnected with audience involvement in the program.
This created a bias toward lowest-common-denominator programming. Consider two programs, one of which will fascinate 500,000 people, and the other of which 30 million people will watch as slightly preferable to watching their ceiling. The first might well be better for social welfare: the 500,000 with a high willingness-to-pay might well, if there was a way to charge them, collectively outbid the 30 million apathetic potential watchers. Thus a network able to collect revenues from interested viewers would broadcast the first program, seeking the applause (and the money) of the dedicated and forgoing the eye-glazed semi-attention of the mass.
But the process breaks down when the network obtains revenue by selling commercials to advertisers. The network can offer advertisers either 500,000 or 30 million viewers. How influenced the viewers will be by the commercials depends relatively little on how much they like the program. As a result, charging-for-advertising gives every incentive to broadcast what a mass audience would tolerate. It gives no incentive to broadcast what a niche audience would love.
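The arithmetic behind this argument can be made concrete. The dollar figures below are invented for illustration; what matters is only that total willingness-to-pay and total head count rank the two programs in opposite orders.

```python
# Hypothetical numbers illustrating the text's argument: a niche audience
# with high willingness-to-pay can collectively outbid a large apathetic
# one, yet advertising revenue tracks only head counts.

niche_viewers, niche_wtp = 500_000, 10.00   # dollars per viewer, assumed
mass_viewers, mass_wtp = 30_000_000, 0.10   # "slightly better than the ceiling"
ad_revenue_per_viewer = 0.02                # assumed, identical for both shows

niche_value = niche_viewers * niche_wtp     # total viewer value: $5,000,000
mass_value = mass_viewers * mass_wtp        # total viewer value: $3,000,000

niche_ad_revenue = niche_viewers * ad_revenue_per_viewer  # $10,000
mass_ad_revenue = mass_viewers * ad_revenue_per_viewer    # $600,000

# Direct charging would pick the niche program (higher total value);
# charging-for-advertising picks the mass program (more eyeballs).
```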
As bandwidth becomes cheaper, these problems become less important: one particular niche program may well be worth broadcasting when the mass audience has become sufficiently fragmented by the viewability of multiple clones of bland programming. Until then, however, expensive bandwidth combined with the absence of excludability meant that broadcasting revenues depended on the viewer numbers rather than the intensity of demand. Non-excludability helped ensure that broadcast programming would be "a vast wasteland."(6)
In the absence of excludability, there is no reason to presume that industries today and tomorrow will be able to avoid analogous distortions: the absence of excludability leaves potential users with no effective way to make the market system notice how strong their demand is and exactly what their demand is for.
In the information-based sectors of the next economy the use or enjoyment of the information-based commodity will no longer necessarily involve rivalry. With most tangible goods, if Alice is using a particular good, Bob cannot be. Charging the ultimate consumer the good's cost of production or the free market price provides the producer with an ample reward for his or her effort. It also leads to the appropriate level of production: social surplus (measured in money) is not maximized by providing the good to anyone whose final demand for a commodity is too weak to wish to pay the cost for it that a competitive market would require.
But if goods are non-rival--if two can consume as cheaply as one--then charging a per-unit price to users artificially restricts distribution: to truly maximize social welfare you need a system that supplies everyone whose willingness to pay for the good is greater than the marginal cost of producing another copy. And if the marginal cost of reproduction of a digital good is near-zero, that means almost everyone should have it for almost free.
However, charging price equal to marginal cost almost surely leaves the producer bankrupt, with little incentive to maintain the product except the hope of maintenance fees, and no incentive whatsoever to make another one except that warm fuzzy feeling one gets from impoverishing oneself for the general good.
Thus a dilemma: if the price of a digital good is above the marginal cost of making an extra copy, some people who truly ought--in the best of all possible worlds--to be using it do not get to have it, and the system of exchange that we have developed is getting in the way of a certain degree of economic prosperity. But if price is not above the marginal cost of making an extra copy of a non-rival good, the producer will not get paid enough to cover costs. Without non-financial incentives, all but the most masochistic producer will get out of the business of production.
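The dilemma can be put in numbers. The fixed cost, demand curve, and marginal cost below are all assumed for illustration; the qualitative result is what the text describes: price at marginal cost maximizes surplus but bankrupts the producer, while price above marginal cost rescues the producer at the expense of excluded users.

```python
# A toy calculation of the pricing dilemma for a non-rival good.
# All numbers (fixed cost, demand curve, marginal cost) are assumed.

FIXED_COST = 1_000_000   # cost of making the first copy
MARGINAL_COST = 0.0      # extra copies are (nearly) free
CHOKE_PRICE = 20.0       # assumed price at which demand falls to zero

def quantity_demanded(price):
    # Assumed linear demand: 2 million users at a price of zero.
    return max(0.0, 2_000_000 - 100_000 * price)

def consumer_surplus(price):
    # Area under the linear demand curve and above `price`.
    q = quantity_demanded(price)
    return 0.5 * q * (CHOKE_PRICE - price)

def producer_profit(price):
    return (price - MARGINAL_COST) * quantity_demanded(price) - FIXED_COST

# Price at marginal cost (0): surplus is maximal, producer loses $1,000,000.
# Price of $10: producer earns $9,000,000, but surplus falls from
# $20,000,000 to $5,000,000 and a million would-be users are excluded.
```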
More important, perhaps, is that the existence of large numbers of important and valuable goods that are non-rival casts the value of competition itself into doubt. Competition has been the standard way of keeping individual producers from exercising power over consumers: if you don't like the terms the producer is offering, then you can just go down the street. But this use of private economic power to check private power may come at a high cost if competitors spend their time duplicating one another's efforts and attempting to slow down technological development in the interest of obtaining a compatibility advantage, or creating a compatibility or usability disadvantage for the other guy.
One traditional answer to this problem, now in disfavor, was to set up a government regulatory commission to control the "natural monopoly". The commission would set prices, and do the best it could to simulate a socially optimum level of production. While it may have seemed like the perfect answer in the Progressive era, in this more cynical age commentators have come to believe that regulatory commissions of this sort almost inevitably become "captured" by the industries they are supposed to regulate. Often this is because the natural career path for analysts and commissioners involves someday going to work for the regulated industry in order to leverage expertise in the regulatory process; sometimes it is because no one outside the regulated industry has anywhere near the same financial interest in manipulating the rules, or lobbying to have them adjusted. The only effective way a regulatory agency has to gauge what is possible is to examine how other firms in other regions are doing. But such "yardstick" competition proposals--judge how this natural monopoly is doing by comparing it to other analogous organizations--are notoriously hard to implement.(7)
A good economic market is characterized by competition to limit the exercise of private economic power, by price equal to marginal cost, by the return to investors and workers corresponding to the social value added of the industry, and by appropriate incentives for innovation and new product development. These seem impossible to achieve all at once in markets for non-rival goods--and digital goods are certainly non-rival.
In the information-based sectors of the next economy -- indeed, in many sectors of the economy today -- the purchase of a good will no longer be transparent. The Invisible Hand assumed that purchasers know what it is that they want and what they are buying. If purchasers need first to figure out what they want and what they are buying, there is no good reason to presume that their willingness to pay corresponds to its true value to them.
Adam Smith's pinmakers sold a good that was small, fungible, low-maintenance and easily understood. Alice could buy her pins from Gerald today, and from Henry tomorrow. But today's purchaser of, say, a suite of software programs is faced with needs and constraints that a metric designed to explain the market for pins may leave us poorly prepared to understand. The market for software "goods" is almost never a market for today's tangible goods, but rather for a bundle of present goods, future goods, and future services. The initial purchase is not a complete transaction in itself, but rather a down payment on the establishment of a relationship.
Once the relationship is established, both buyer and seller find themselves in different positions. Adam Smith's images are less persuasive in the context of services--especially bespoke services which require deep knowledge of the customer's wants and situation (and of the maker's capabilities)--which are not, by their nature, fungible or easily comparable.
When Alice shops for a software suite, she not only wants to know about its current functionality--something notoriously difficult to figure out until one has had days or weeks of hands-on experience--but she also needs to have some idea of the likely support that the manufacturer will provide. Is the line busy at all hours? Is it a toll call? Do the operators have a clue? Will next year's corporate downsizing lead to longer holds on support calls?
Worse, what Alice really needs to know cannot be measured at all before she is committed: learning how to use a software package is an investment she would prefer not to repeat. Since operating systems change frequently, and interoperability needs changes even more often, Alice needs to have a prediction about the likely upgrade path for her suite. This, however, turns on unknowable and barely guessable factors: the health of the corporation, the creativity of the software team, the corporate relationships between the suite seller and other companies.
Some of the things Alice wants to know, such as whether the suite works quickly enough on her computer, are potentially measurable at least--although one rarely finds a consumer capable of measuring them before purchase, or a marketing system designed to accommodate such a need. You buy the shrink-wrapped box at a store, take it home, unwrap the box--and find that the program is incompatible with your hardware, your operating system, or one of the six programs you bought to cure defects in the hardware or the operating system...
The economics of information is frequently invoked to adjust the traditional neoclassical paradigm to model for the consumer's decision as to whether it pays to attempt to acquire these facts: for example, one can hypothesize that Alice's failure to acquire the necessary information is evidence of the high cost of the investigation compared to the expected value of what it might reveal. So adjusted the basic model retains descriptive power. But its explanatory power is limited.
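The adjusted model amounts to a simple expected-value comparison. The probabilities and dollar figures below are invented for illustration; the point is that the same search cost that is trivial relative to an expensive purchase can swamp the expected gain on a cheap one.

```python
# A hedged sketch of the search-cost hypothesis in the text: Alice
# investigates only when the expected gain from the information exceeds
# the cost of acquiring it. All figures below are invented.

def should_investigate(prob_bad_fit, loss_if_bad_fit, cost_of_search):
    # Expected value of the information = the expected loss it lets
    # Alice avoid by not buying an unsuitable product.
    expected_gain = prob_bad_fit * loss_if_bad_fit
    return expected_gain > cost_of_search

# For a $50 utility, even a 30% chance of a bad fit does not justify
# $200 worth of testing time; for a $50,000 enterprise suite, it does.
```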
That the absence of excludability, rivalry, or transparency is bad for the functioning of the Invisible Hand is not news to economists.(8) The analysis of failure of transparency now makes up an entire subfield of economics: "imperfect information." Non-rivalry has been the basis of the theory of government programs and public goods, as well as of natural monopolies: the solution has been to try to find a regulatory regime that will mimic the decisions that the market ought to make, or to accept that the "second-best" public provision of the good by the government is the best that can be done. Analysis of the impact of the lack of excludability is the core of the economic analysis of research and development. It has led to the conclusion that the best course is to try to work around non-excludability by mimicking what a well-functioning market system would have done by using the law to expand "property" or through tax-and-subsidy schemes to promote actions with broad benefits.
But the focus of analysis has always been on overcoming "frictions": how can we make this situation where the requirements of laissez faire fail to hold into a situation in which the Invisible Hand works tolerably well? And as long as the Invisible Hand works well throughout most of the economy this is a tolerable strategy: a finite number of government programs and legal doctrines to mimic what the Invisible Hand would do if it could function properly in a few distinct areas of the economy (like the natural monopoly implicit in the turn-of-the-twentieth-century railroad, or government subsidization of basic research). Such a strategy can achieve reasonable performance, as long as the industries and commodities that do not fit the Invisible Hand paradigm are an easily-identified, relatively-small subset of the economy.
Today, problems of non-excludability, of non-rivalry, of non-transparency apply to a large range of the economy. What do we do when the friction becomes the machine? Usually, economists proceed by a process analogous to perturbation theory: first they solve the simple problem, in which the market works well, and then they consider how the solution changes in response to small steps away from the conditions necessary for the simple solution to hold. In the natural sciences perturbation-theory approaches break down when the deviations of initial conditions from those necessary for the simple solution become large. Does something similar happen in political economy? Is examining how the market system handles a few small episodes of "market failure" a good guide toward how it will handle many large ones?
As a first step toward discerning what new theories of the new markets might look like if new visions do turn out to be necessary, or how old ones should be adjusted, it seems sensible to examine how enterprises and entrepreneurs are already reacting to the fact of widespread market failure. The hope is that experience along the frontiers of electronic commerce will serve as a good guide to what pieces of the theory are likely to be most important, and will suggest areas in which further development might have a high rate of return.
We noted above that the market for modern, complex products is anything but transparent. While one can think of services, such as medicine, which are particularly opaque to the buyer, today it is difficult to imagine a more opaque product than software. Indeed, when one considers the increasing opacity of products in the context of the growing importance of services to the economy, it suggests that transparency is and will become a particularly important issue in the next economy.
Consumers' failure to acquire full information about the software they buy certainly demonstrates that acquiring the information must be expensive. In response to this cost, social institutions have begun to spring up to get around the shrink-wrap dilemma. The first was so-called shareware: you download the program; if you like it, you send its author some money, and maybe in return you get a manual, access to support, or an upgraded version. Shareware dealt with the information acquisition problem by turning the purchase-and-sale of software into a gift-exchange relationship, more akin to an NPR fund-raising drive. ("Is this station worth just fifty cents a week to you? Then call...") The benefit of try-before-you-buy is precisely that it makes the process more transparent. The cost is that try-before-you-buy often turns out to be try-use-and-don't-pay.
The next stage beyond shareware has been the evolution of the institution of the "public beta." This public beta is a time-limited (or bug-ridden) version of the product: users can investigate the properties of the public beta version to figure out if the product is worthwhile. But to get the permanent (or the less-bug-ridden) version, they have to pay.
Yet a third is the "dual track" version: Eudora shareware versus Eudora Professional. Perhaps the hope is that users of the free low-power version will some day become richer, or less patient, or find they have greater needs. At that point they will find it least disruptive to switch to a product that looks and feels familiar, that is compatible with their habits, and their existing files. In effect (and if the sellers are lucky), the free version captures them as future customers before they are even aware they are future buyers.
The developing free-public-beta industry is a way of dealing with the problem of lack of transparency. It is a relatively benign development, in the sense that it involves competition through distribution of lesser versions of the ultimate product. An alternative would be (say) the strategy of constructing barriers to compatibility: the famous examples in the computer industry come from the 1970s when the Digital Equipment Corporation made non-standard cable connectors; from the mid-1980s when IBM attempted to appropriate the entire PC industry through the PS/2 line; and from the late 1980s when Apple Computer used a partly ROM-based operating system to exclude clones. Perhaps the main reason that the free-public-beta strategy is now dominant is the catastrophic failure of both IBM's PS/2 and Apple's ROM-based strategies of exclusion based on non-compatibility--even though they were both near-successes.
Predictions abound as to how software will use case and rule-based thinking to do your shopping for you, advise you on how to spend your leisure time, and in general organize your life for you. But that day is still far in the future.(9) So far we have only the crude prototype of the knowledge-scavenging virtual robot, the automated comparison shopper.
Yet, the online marketplace so far has had an ambiguous reaction to the advent of the automated comparison shopper. BargainFinder(10) is one of the first such--and highly experimental--Internet-based agents. BargainFinder does just one thing. Perhaps it does it too well.
The user enters the details of a music compact disk she might like to purchase. BargainFinder interrogates several online music stores that might offer it for sale. It then reports back the prices in a tidy table that makes comparison shopping easy. The system is not completely transparent: it is not always possible to discern the vendor's shipping charges without visiting the vendor's web site. But as BargainFinder's inventors say, "it is still only an experiment."
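The mechanics of such an agent are simple to sketch. The store list, URLs, and page format below are entirely hypothetical; a real agent in the BargainFinder mold would need a custom parser per merchant and some way of accounting for shipping charges.

```python
# A minimal comparison-shopping agent in the spirit of BargainFinder.
# Endpoints and the "price: $NN.NN" page format are assumed for
# illustration, not taken from any real service.

from urllib.request import urlopen
import re

STORES = {
    # Hypothetical merchant endpoints.
    "store-a": "http://store-a.example/search?artist={artist}&title={title}",
    "store-b": "http://store-b.example/cgi-bin/cd?a={artist}&t={title}",
}

PRICE_RE = re.compile(r"price:\s*\$(\d+\.\d{2})")

def find_prices(artist, title, fetch=None):
    """Query each store and tabulate quoted prices, cheapest first."""
    fetch = fetch or (lambda url: urlopen(url).read().decode())
    quotes = {}
    for store, template in STORES.items():
        try:
            page = fetch(template.format(artist=artist, title=title))
        except OSError:
            continue  # a store may block the agent or time out; skip it
        match = PRICE_RE.search(page)
        if match:
            quotes[store] = float(match.group(1))
    return sorted(quotes.items(), key=lambda kv: kv[1])
```

The `fetch` parameter lets the network call be stubbed out; the skip-on-error branch is where a merchant's decision to lock out the agent, discussed below, would show up in practice.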
Most economists, be they Adam Smithian classicists, neo-classical Austrians, or more modern economics of information mavens, would agree with the proposition that a vendor in a competitive market selling a standardized product--for one Tiger Lily CD is as good as another--would want customers to know as much as possible about what the vendor offers for sale, and the prices at which the goods are available.
The reason for this near-consensus is that in a competitive market every sale at the offer price should be welcome: all are made at a markup over marginal cost. Thus all on-line CD retailers ought to have wanted to be listed by BargainFinder, if only because every sale that went elsewhere when they had the lowest price was lost profit.
Not so. A significant fraction of the merchants regularly visited by BargainFinder were less than ecstatic. They retaliated by blocking the agent's access to their otherwise publicly-available data. Currently (March 1997), one third of the merchants targeted by BargainFinder (three out of nine) continue to lock out its queries.(11) One, CDNow, did so for frankly competitive reasons. The other two said that the costs of large numbers of "hobbyist" queries were too great for them. Meanwhile, seven additional merchants have asked to be listed.
One possible explanation for the divergence between the economic theorist's prediction that every seller should want to be listed by BargainFinder and the apparent outcome is the price gouging story. In this story, stores blocking BargainFinder tend to charge higher than normal prices because they are able to take advantage of consumer ignorance of cheaper alternatives. The stores are gouging buyers by taking advantage of relatively high costs of search.
In this case BargainFinder and its successors are indeed valuable developments. They will make markets more efficient and lower prices. This is entirely possible, although one need not go quite as far as one writer who suggested that the entire world economy is made up of such great pricing inefficiencies that the change to an information economy will drastically reduce inflation.(12)
Another possibility is the kindly service story. Stores blocking BargainFinder tend to charge higher than normal prices because they provide additional service or convenience. If commerce becomes increasingly electronic and impersonal (or if "personal" comes to mean "filtered through fiendishly clever software agents"), this sort of humanized attention will become increasingly expensive. To the extent that this additional service or convenience can be provided automatically, things are less clear.
In a sometimes forgotten classic, The Joyless Economy, Tibor Scitovsky noted that the advent of mass production of furniture seemed to cause the price of hand-carved chairs to increase, even as the demand for them shrank.(13) As consumers switched to less costly (and less carefully made, one size-fits-all) mass-produced furniture, carvers became scarce. Soon only the rich could engage their services. If the kindly service story is right, the rise of the commodity market in turn creates a risk that the economy will become yet more "joyless": mass tastes will be satisfied cheaply as specialty tastes become ever more a luxury.(14)
On the other hand the rise of ShopBots such as BargainFinder offers an opportunity for consumers to aggregate their preferences on a worldwide scale. As it becomes increasingly easy for consumers to communicate their individualized preferences to manufacturers and suppliers, and increasingly easy to tailor goods to individual tastes -- be it a CD that only has the tracks you like, customized blue jeans, or a car manufactured just in time to your specifications -- personalized goods may become the norm, putting the "joy" back into the economy. Some signs of this were visible even before the information revolution: lower costs of customization have already undermined one of Scitovsky's examples, as fresh-baked bread makes its comeback at many supermarkets and specialty stores.
In either the price-gouging or the kindly service story, the advent of BargainFinder presents CD retailers with a dilemma. If they join in, they contribute towards turning the market for CDs into a commodity market with competition only on price. If they act "selflessly" and stay out in order to degrade BargainFinder's utility (and preserve their higher average markups), they must hope that their competitors will understand their long-run self-interest in the same way.(15) But overt communication in which all sellers agreed to block BargainFinder would, of course, violate the Sherman Act. And without a means of retaliation to "punish" players who do not pursue the collusive long-run strategy of blocking BargainFinder, the collapse of the market into a commodity market with only price competition appears likely.
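The retailers' dilemma has the familiar structure of a prisoner's dilemma, which a few lines of code can make concrete. All payoff numbers below are invented for illustration; nothing about the actual CD market is implied:

```python
def store_profit(my_choice, others_join):
    """Illustrative per-store profit; all numbers are invented.

    A "block"ing store earns collusive markups only while everyone blocks.
    A "join"ing store briefly captures BargainFinder's price-sensitive
    shoppers, but once several stores join, margins fall to commodity levels.
    """
    if my_choice == "block":
        return 10.0 if others_join == 0 else 3.0
    return 12.0 if others_join == 0 else 4.0

# Unilateral deviation pays: if everyone else blocks, joining beats blocking.
assert store_profit("join", 0) > store_profit("block", 0)
# But when everyone joins, every store is worse off than under universal blocking.
assert store_profit("join", 4) < store_profit("block", 0)
```

With no way to punish defectors, each store's dominant move is to join, which is exactly the unraveling the text describes.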
Indeed, if the strategy was to undermine BargainFinder by staying away, it appears to be failing: CD retailers are unblocking BargainFinder, and more are clamoring to join. Whether this is an occasion for joy depends on which explanation above is closer to the truth. The growth of bookstore chains put local bookshops out of business, just as the growth of supermarkets killed the corner grocer. Not everyone considers this trend a victory, despite the lower prices. A human element has been lost, and a "personal service" element that may have led to a better fit between purchaser and purchase has been lost as well.
So far, the discussion has assumed that CD merchants would have an incentive to block BargainFinder only if they charged higher-than-normal prices. Strangely, some merchants may have had an incentive to block it if they charged lower-than-normal prices. As we all know, merchants sometimes advertise a "loss leader," offering to sell a particular good at an unprofitable price. Merchants do this to lure consumers into the store, where they may be attracted to more profitable versions of the same good (leading, in the extreme case, to "bait and switch"), or in the hope that the consumer will spy other, more profitable, goods to round out the market basket.
This merchant behavior can be explained in several ways: the economics of information, locational utility, myopic consumers generalizing incorrectly from a small number of real bargains, or the temporary monopoly created by the consumer's presence in this store rather than in another store far away. It may be that merchants blocking BargainFinder did not want consumers to be able to exploit their loss leaders without being exposed to the other goods offered simultaneously. Without this exposure the loss leaders would not lure buyers to other, higher-profit items, but would simply be losses. The merchant's ability to monopolize the consumer's attention for a period may be the essence of modern retailing; the reaction to BargainFinder, at least, suggests that this is what merchants believe. The growing popularity of frequent-buyer and other loyalty programs also suggests that getting and keeping customer attention is important to sellers.(16)
Interestingly, this attention-based explanation works about equally well under the kindly service and loss-leader stories, the two stories consistent with the assumption that the CD market was relatively efficient before BargainFinder came along.
More important, and perhaps more likely, is the possibility that BargainFinder and other ShopBots threaten merchants who are in the browsing-assistance business. Some online merchants are enthusiastically attempting to fill the role of personal shopping assistant. Indeed, some of the most successful online stores have adapted to the new marketplace by inverting the information equation.
Retail stores in meatspace ("meatspace" being the part of life that is not cyberspace) provide information about available products--browsing and information-acquisition services--that consumers find valuable and are willing to pay for, either in time and attention or in cash. Certainly meatspace shopping is conducive to unplanned impulse purchases, as any refrigerator groaning under kitchen magnets will attest. Retail stores in cyberspace may exacerbate this: what, after all, is a store in cyberspace but a collection of information?
CDNow, for example, tailors its virtual storefront to what it knows of a customer's tastes based upon her past purchases. In addition to dynamically altering the storefront on each successive visit, CDNow also offers to send shoppers occasional email information about new releases that fit their past buying patterns.
One can imagine stores tailoring what they present to what they presume to be the customer's desires, based on demographic information available about the customer even before the first purchase. Tailoring might extend beyond showcasing different wares: taken to its logical extreme, it would include some form of price discrimination based on facts known about the customer's preferences, or on demographic information thought to be correlated with preferences. (The U.S. and other legal systems impose constraints on the extent to which stores may generalize from demographic information: for example, stores that attempt race-based, sex-based, or other invidious price variation usually violate U.S. law.)
A critical microeconomic question in all this is how consumers and manufacturer/sellers exchange information in this market. Both consumers and sellers have an interest in encouraging the exchange of information: in order to provide what the consumer wants, the sellers need to know what is desired and how badly it is wanted; consumers need to know what is on offer where, and at what price. A ShopBot may solve the consumer's problem of price, but it will not tell him about an existing product of which he is unaware. Similarly, CDNow may reconfigure its store to fit customer profiles, but without some external source of information about customers this takes time, and requires repeat visitors.
Indeed, it requires that customers come in through the front door: all the reconfiguration in the world will not help CDNow if customers are either unaware that they would enjoy its products, or if the customers' only relationship with CDNow is via a ShopBot.
The retail outlet in the average mall can plausibly be described as a mechanism for informing consumers about product attributes. The merchant gives away product information in the hopes that consumers will make purchases. Physical stores, however, have fixed displays and thus must provide more or less identical information to every customer. Anyone who looks in the window, anyone who studies the product labels, or even the product catalogs, receives the same sales pitch.
It may be that the dynamic storefront story is the Internet's substitute for the kindly service story. If so, then the rise of agent-based shopping may well be surprisingly destructive. It raises the possibility of a world in which retail shops providing valuable services are destroyed by an economic process that funnels a large percentage of consumer sales into what become commodity markets without middlemen: people use the high-priced premium cyberstore to browse, but then use BargainFinder to purchase.
To appreciate the problem, consider Bob, an on-line merchant who has invested a substantial amount in a large and easy to use "browsing" database that combines your past purchases, current news, new releases, and other information to present you with a series of choices and possible purchases that greatly increase your chances of coming across something interesting. After all, a customer enters seeking not a particular title, and not a random title, but a product that he or she would like. What is being sold is the process of search and information acquisition that leads to the judgment that this is something to buy. A good on-line merchant would make the service of "browsing" easy--and would in large part be selling that browsing assistance.
In this "browsing is our business" story there is a potential problem: browsing assistance is not excludable. Unless Bob charges for his services (which is likely to discourage most customers, and especially the impulse purchaser), there is nothing to stop Alice from browsing merchant Bob's website to determine the product she wants, and then using BargainFinder to find the cheapest source of supply. BargainFinder will surely find a competitor with lower prices, because Bob's competitor will not have to pay for the database and the software underlying the browsing assistance.
If so, then projects like BargainFinder will have a potentially destructive application, for many online stores will be easy to turn into commodity markets once the process of information acquisition and browsing is complete. It would be straightforward to run a market in kitchen gadgets along the lines of BargainFinder, even if much of the market for gadgets involves finding solutions to problems one was not consciously aware one had: first free-ride off one of the sites that provides browsing assistance to discover what you want, then open another window and access BargainFinder to buy it. Students and other people with more time than money have been using this strategy to purchase stereo components for decades: draw on the sales expertise of the premium store, then buy from the warehouse. But the scope and ease of this strategy is about to become much greater.
To merchants providing helpful shopping advice, the end result will be as if they spent all their time in their competitors' discount warehouse, pointing out goods that the competitors' customers ought to buy. The only people who will pay premium prices for the physical good--and thus pay for browsing and information services--will be those who feel under a gift-exchange moral obligation to do so: the "sponsors" of NPR.
Collaborative filtering provides part of the answer to the information-exchange problem. It also provides another example of how information technology changes the way that consumer markets will operate. In their simplest form, collaborative filters such as Firefly(17) bring together consumers to exchange information about their preferences. The assumption is that if Alice finds that several other readers--each of whom, like her, enjoys Frisch, Kafka, Kundera, and Klima but gets impatient with James and Joyce--tend to like William Gass, the odds are good that Alice might enjoy Gass's On Being Blue too.
In the process of entering sufficient information about her tastes to prime the pump, Alice adds to the database of linked preference information. In helping herself, Alice helps others. In this simplest form, the collaborative filter helps Alice find out about new books she might like. The technology is applicable to finding potentially congenial CDs, news, web sites, software, travel, financial services and restaurants -- as well as to helping Alice find people who share her interests. Indeed, each of these is a service currently available or soon to be offered via Firefly.
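The mechanics of such a filter can be sketched in a few lines. The users, titles, and ratings below are invented for illustration; a real system like Firefly would use a far larger database and more sophisticated weighting:

```python
# A minimal user-based collaborative filter: score items a user has not yet
# rated by the similarity-weighted ratings of other users.
from math import sqrt

ratings = {
    "Alice": {"Kafka": 5, "Kundera": 5, "Joyce": 1},
    "Bob":   {"Kafka": 5, "Kundera": 4, "Joyce": 1, "Gass": 5},
    "Carol": {"Kafka": 1, "Joyce": 5, "Gass": 1},
}

def similarity(a, b):
    """Cosine similarity over the titles both users have rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    num = sum(ratings[a][x] * ratings[b][x] for x in common)
    den = sqrt(sum(ratings[a][x] ** 2 for x in common)) * \
          sqrt(sum(ratings[b][x] ** 2 for x in common))
    return num / den

def recommend(user):
    """Return the unseen title with the highest similarity-weighted score."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + w * r
    return max(scores, key=scores.get) if scores else None

print(recommend("Alice"))  # → Gass (Bob's tastes closely match Alice's)
```

Alice's ratings both prime her own recommendations and enrich the database for everyone else, exactly the two-sided exchange described above.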
At the next level of complexity, the collaborative filter can be linked to a ShopBot: once Alice has decided that she will try On Being Blue, she can find out who will sell it to her at the best price.
The really interesting development, however, comes when Alice's personal preference data is available to every merchant Alice visits. (Leave aside for a moment who owns this data, and the terms on which it becomes available.) A shop like CDNow becomes able to tailor its virtual storefront to a fairly good model of Alice's likely desires upon her first visit. CDNow may use this information to showcase its most enticing wares, or it may use it to fine tune its prices to charge Alice all that her purse will bear--or both.
Whichever is the case, shopping will not be the same.
Once shops acquire the ability to engage in price discrimination, consumers will of course seek ways of fighting back. One way will be to shop anonymously, and see what price is quoted when no consumer data is proffered. Consumers can either attempt to prevent merchants and others from acquiring the transactional data that could form the basis of a consumer profile, or they can avail themselves of anonymizing intermediaries who will protect the consumer against the merchant's attempt to practice perfect price discrimination by aggregating data about the seller's prices and practices. In this model, a significant fraction of cyber commerce will be conducted by software agents who will carry accreditation demonstrating their credit-worthiness, but will not be traceable back to their progenitors.
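The value of shopping anonymously can be illustrated with a deliberately crude pricing rule. The function, its markup schedule, and the profile fields are all invented; the point is only that withholding the profile recovers the baseline quote:

```python
def quote(base_price, profile):
    """Quote a price; a richer profile lets the merchant push the price up.

    The markup rule (5% per known fact, capped at 50%) is an arbitrary
    stand-in for whatever model a profiling merchant might actually use.
    """
    if profile is None:                  # anonymous shopper: baseline price
        return base_price
    markup = 0.05 * len(profile)
    return base_price * (1 + min(markup, 0.5))

alice_profile = {"past_cds": 40, "zip_code": "94720", "jazz_clicks": 120}
assert quote(13.99, None) < quote(13.99, alice_profile)
```

An anonymizing intermediary, in this picture, is simply a way of guaranteeing that every merchant sees `profile=None`.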
Thus potential legal constraints on online anonymity may have more far-reaching consequences than their obvious effect on unconstrained political speech.(18) In some cases, consumers may be able to use collaborative filtering techniques to form buying clubs and achieve quantity discounts.(19) Or consumers will construct shopping personas with false demographics and purchase patterns in the hope of getting access to discounts. Business flyers across America already routinely purchase back-to-back round trip tickets that bracket a Saturday night in an attempt to diminish airlines' abilities to charge business travelers premium prices. (Airlines, meanwhile, are investing in computer techniques to catch and invalidate the second ticket.)
Consumers will face difficult maximization problems. In the absence of price discrimination, and assuming privacy itself is only an intermediate good (i.e., that consumers do not value privacy in and of itself), the marginal value to the consumer of a given datum concerning her behavior is likely to be less than the average value to the merchant of each datum in an aggregated consumer profile. If markets are efficient, or if consumers suffer from a plausible myopia in which they value data at its short-run marginal value rather than its long-run average value, merchants will purchase this information, leading to a world with little transactional privacy.(20) The lost privacy is not without compensation, however: every time Alice visits a virtual storefront that has been customized to her preferences, her search time is reduced and she is more likely to find what she wants--even if she didn't know she wanted it--and this is all the more true the more information the merchant has about her.
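The wedge between the consumer's marginal valuation of a datum and the merchant's average valuation can be made concrete with invented numbers (all dollar figures are assumptions, chosen only to show the direction of trade):

```python
# Invented numbers: why transactional data tends to flow to merchants.
n = 100                                   # data points in an aggregated profile
profile_value = 50.0                      # assumed value of the whole profile to a merchant
avg_value_to_merchant = profile_value / n        # $0.50 per datum, on average
marginal_cost_to_consumer = 0.10                 # consumer's perceived loss per datum

# The merchant can profitably pay more than the consumer demands for each datum,
# so in an efficient market the data changes hands.
assert avg_value_to_merchant > marginal_cost_to_consumer
price = (avg_value_to_merchant + marginal_cost_to_consumer) / 2  # one possible split
print(f"per-datum price ${price:.2f}: both sides gain, and privacy erodes")
```

Any per-datum price between $0.10 and $0.50 leaves both sides better off in this toy accounting, which is why the no-privacy outcome is the market's default.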
The picture becomes even more complicated once one begins to treat privacy itself as a legitimate consumer preference rather than as merely an intermediate good. Once one accepts that consumers may have a taste for privacy, it is no longer obvious that the transparent consumer is an efficient solution to the management of customer data. Relaxing an economic assumption does not, however, change actual behavior: the same tendencies that push the market towards a world in which consumer data is a valuable and much-traded commodity persist. Indeed, basic ideas of privacy are under assault. Data miners and consumer profilers can produce detailed pictures of the tastes and habits of an increasing number of consumers. The spread of intelligent traffic-management systems, video security and recognition systems, and the gradual integration of information systems built into every appliance will eventually make it possible to track movement as well as purchases. Once one person has this information there is, of course, almost no cost to making it available to all.
Unfortunately, it is difficult to measure the demand for privacy. Furthermore, the structure of the legal system does not tend to allow consumers to express this preference. Today, most consumer transactions are governed by standard-form contracts. The default rules may be constrained by state consumer law, or by portions of the Uniform Commercial Code, but they generally derive most of their content from boilerplate language written by a merchant's lawyer.
If you are buying a dishwasher you do not get to haggle over the terms of the contract, which (in the rare case that they are even read) can be found in small print somewhere on the invoice. Although it is possible to imagine a world in which the Internet allows for negotiation of contractual terms even in consumer sales, we have yet to hear of a single example of this phenomenon, and see little reason to expect one. On the contrary, to the extent a trend can be discerned, it is in the other direction, towards the "web-wrap" or "click-wrap" contract (the neologism derives from "shrinkwrap" contracts, in which the buyer of software is informed that she has agreed to the terms by opening the wrapper), in which the consumer is asked to agree to the contract before being allowed to view the web site's content.
We have argued that the assumptions which underlie the microeconomics of the Invisible Hand fray badly when transported to the information economy. Commodities that take the form of single physical objects are rivalrous and excludable: there is only one of each, and if it is locked up in the seller's shop no one else can use it. The structure of the distribution network delivered marketplace transparency as a cheap byproduct of getting the goods to their purchasers. All of these assumptions failed at the margin, but the match of the real to the ideal was reasonably good.
Modest as they may be in comparison to the issues we have left for another day, our observations about the microfoundations of the Invisible Hand do have some implications for economic theory and for policy. At some point soon it will become practical to charge for everything on the world wide web. Whether it will become economically feasible, or common, remains to be seen. The outcome of this battle will have profound economic and social consequences.
Consider just two polar possibilities. On the one hand, one can envision a hybrid gift-exchange model, in which most people pay as much for access to web content as they pay for NPR, and in which the few who value an in-depth information edge or speed of information acquisition pay more. At the other extreme, one can as easily foresee a world in which great efforts have been made to reproduce familiar aspects of the traditional model of profit maximization.
Which vision will dominate absent government intervention depends in part on the human motivation of programmers, authors, and other content providers. The Invisible Hand assumed a "rational economic man," a greedy human. The cybernetic human could be very greedy, for greed can be automated. But much progress in basic science and technology has always been based on other motivations besides money. The thrill of the chase, the excitement of discovery, and the respect of one's peers have played a very large role as well, perhaps helped along by the occasional Nobel Prize, Knighthood for services to science, or a distinguished chair; in the future "Net.Fame" may join the list.
Government will surely be called upon to intervene. Adam Smith said that eighteenth-century "statesmen" could in most cases do the most good by sitting on their hands. The twenty-first century "stateswomen" will have to decide whether this advice still applies once the fit between the institutions of market exchange and production and distribution technologies has decayed.
We suspect that policies which ensure diversity of providers will continue to have their place. One of the biggest benefits of a market economy is that it provides for sunset. When faced with competition, relatively inefficient organizations fail to take in revenue to cover their costs, and then they die. Competition is thus still a virtue, whether the governing framework is one of gift-exchange or buy-and-sell--unless competition destroys the ability to capture significant economies of scale.
The challenge for policy makers is likely to be particularly acute in the face of technological attempts to recreate familiar market relationships. Just because markets characterized by the properties of rivalry, excludability, and transparency were efficient does not mean that any effort to reintroduce these properties to a market lacking them necessarily increases social welfare. Furthermore, in some cases the new, improved, versions may provide more excludability or transparency than did the old model; this raises new problems.
Holders of intellectual property rights in digital information, be they producers or sellers, have a strong incentive to re-introduce rivalry and excludability to the market for digital information. The extent to which they have an interest in re-introducing transparency regarding their wares is slightly less evident, although the innovations in the software market described above suggest this interest is substantial as well. Each of these possible developments creates a different challenge for policy makers.
It appears increasingly likely that technological advances such as "digital watermarks" will allow each copy of a digital data set, be it a program or a poem, to be uniquely identified;(21) coupled with appropriate legal sanctions for unlicensed copying, a large measure of excludability can be restored to the market.
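As a toy illustration of per-copy identification (not a real, imperceptible watermark, which would hide the mark inside the content itself), consider tagging each sold copy with a serial number and a keyed MAC, so that an unlicensed copy found in the wild can be traced to its purchaser. All names and the trailer format are invented:

```python
# Toy per-copy marking: append a serial number plus a keyed MAC to each copy,
# so a leaked copy can be traced and a forged mark can be detected.
import hashlib
import hmac

SELLER_KEY = b"seller-secret"        # assumed secret held only by the seller

def mark_copy(document, serial):
    """Return the document with an appended, tamper-evident copy tag."""
    tag = hmac.new(SELLER_KEY, document + str(serial).encode(),
                   hashlib.sha256).hexdigest()
    return document + f"\n%% copy {serial} {tag}".encode()

def trace_copy(marked):
    """Recover and verify the serial number from a marked copy."""
    body, _, trailer = marked.rpartition(b"\n%% copy ")
    serial_str, tag = trailer.split()
    expected = hmac.new(SELLER_KEY, body + serial_str,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag.decode()):
        raise ValueError("mark tampered with")
    return int(serial_str)

poem = b"Ode on a Grecian Urn..."
assert trace_copy(mark_copy(poem, 1742)) == 1742
```

A genuine watermark must survive reformatting and resist removal, which this trailer obviously does not; the sketch shows only the identification-plus-legal-sanction logic the text describes.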
Policy makers will need to be particularly alert to three dangers. First, technologies which permit excludability risk introducing socially unjustified costs if the methods of policing excludability are themselves costly. Second, as the example of broadcast television demonstrates, imperfect substitutes for excludability can themselves have bad consequences that are sometimes difficult to anticipate. Third, over-perfect forms of excludability(22) raise the specter that traditional limits on the excludability of information, such as fair use, might be changed by technical means without the political and social debate that should precede such a shift.
We counsel caution: In the absence of any clear indication of what the optimum would be, the burden of proof should be on those who argue that any level of excludability should be mandated.
It is somewhat harder to imagine how rivalry might be recreated for digitized data without using an access system that relied on a hardware token of some sort. Rivalry can be reintroduced if access to a program, a document or other digital data can be conditioned on access to a particular smartcard or other physical object. Perhaps a cryptographically based software analog might be developed that relied on some secret that the purchaser would have a strong incentive to keep to herself. In either case, a form of rivalry results, since multiple users can be prevented from using the data simultaneously.
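The token-based scheme can be sketched by reducing the smartcard to a software object that at most one user may hold at a time. The class and all names are invented for illustration:

```python
# A sketch of artificial rivalry: access to the data is conditioned on holding
# a unique token, so simultaneous use by multiple users is impossible.
class TokenLicense:
    def __init__(self, token_id):
        self.token_id = token_id
        self.in_use_by = None            # at most one active user at a time

    def acquire(self, user):
        """Grant access only if no one else currently holds the token."""
        if self.in_use_by is None:
            self.in_use_by = user
            return True
        return False

    def release(self, user):
        if self.in_use_by == user:
            self.in_use_by = None

lic = TokenLicense("smartcard-0001")
assert lic.acquire("alice") is True
assert lic.acquire("bob") is False       # rivalry: simultaneous use blocked
lic.release("alice")
assert lic.acquire("bob") is True        # the token can change hands serially
```

Note that the scarcity here is entirely manufactured: nothing in the data itself prevents Bob from using it while Alice does, which is precisely the social cost discussed next.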
Imposing rivalry where it is not naturally found means imposing a socially unnecessary cost on someone: the result may look and feel like a traditional market, but it cannot, by definition, be an "optimum" since this artificial rivalry ensures that many users whose willingness to pay for the good is greater than the (near-zero) marginal cost of producing another copy will not get one. Policy makers should therefore be very suspicious of any market-based arguments for artificial rivalry.
It might seem that anything which encourages transparency must be good. Indeed, all other things being equal, from the point of view of consumers, merchants selling products that can survive scrutiny, and the economy as a whole, increases in the transparency of product markets are always a good thing. The issue, however, is to what extent this logic justifies a transparent consumer.
The answer is fundamentally political. It depends on the extent to which one is willing to recognize privacy as an end in itself. If information about consumers is just a means to an economic end, then there is no reason for concern. If, on the other hand, citizens perceive maintaining control over facts about their economic actions as a good in itself, some sort of governmental intervention into the market may be needed to make it easier for this preference to express itself.
We have focused on the ways in which digital media undermine the assumptions of rivalry, excludability, and transparency--and have tried to suggest that the issue of transparency is at least as important, complicated, and interesting as the other two. In so doing we have been forced to slight important microeconomic features of the next economy. We have not, for example, discussed the extent to which the replacement of human salespeople, travel agents, and shop assistants will affect the labor market. The consequences may be earth-shaking. For about two centuries starting in 1800 technological progress was the friend of the unskilled worker: it provided a greater relative boost to the demand for workers who were relatively unskilled (who could tighten bolts on the assembly line, or move things about so that the automated production process could use them, or watch largely self-running machines and call for help when they stopped) than for those who were skilled. The spread of industrial civilization was associated with a great leveling in the distribution of income.
But there is nothing inscribed in the nature of reality that technological progress must always boost the relative wages of the unskilled. And there is good reason to fear that the enormous economies of scale found in the production of non-rivalrous commodities are pushing in the other direction: to a winner-take-all economy.(23)
Nor have we addressed potentially compensating factors such as the looming disaggregation of the university, a scenario in which distance learning bridges the gap between elite lecturers and mass audiences, turning many educational institutions into little more than degree-granting standards bodies, with financial aid and counseling functions, a gym, and (just maybe) a library. While this may benefit the mass audience, it has harmful effects on one elite: it risks creating an academic proletariat made up of those who would have been well-paid professors before the triumph of "distance learning" in the lecture hall. Consider that every Italian hill city used to provide a pretty good living for the tenors and sopranos in the local opera, back before the triumph of "distance listening" through the LP, the cassette tape, and the CD. There are fewer sopranos employed today.
Similarly, we have completely ignored a number of interesting macroeconomic issues. Arguably, traditional ideas of the "open" and "closed" economy will have to be rethought on sectoral grounds. Once a nation becomes part of the global information infrastructure, its ability to raise either tariff or non-tariff walls against certain foreign trade activities becomes vanishingly small. Nations will not, for example, be able to protect an "infant" browser industry, and it will be hard to enforce labor laws when the jobs telecommute away.
National accounts are already becoming increasingly inaccurate as it becomes harder and harder to track imports and exports. Governments are unable to distinguish a personal letter with a picture attached from the electronic delivery of a $100,000 software program.
Monetary economics will also have to adapt. Traditionally the field of economics has always had some difficulty explaining money: there has been something "sociological" about how tokens worthless in themselves get and keep their value that is hard to fit with economists' underlying explanatory strategies of rational agents playing games-of-exchange with one another. These problems will intensify as the nature and varieties of money change. The introduction of digital currencies suggests a possible return of private currencies; it threatens the end of seigniorage, and raises questions about who can control the national money supply.
The market system may well prove to be tougher than its traditional defenders have thought, and to have more subtle and powerful advantages than defenders of the Invisible Hand have usually listed. At the very least, however, defenders of the Invisible Hand will need new arguments to justify systems of competitive markets when the traditional prerequisites for market "optimality" are gone. Economic theorists have enormous powers of resilience: economists regularly try to spawn new theories--today evolutionary economics, the economics of organizations and bureaucracies, public choice theory, and the economics of networks. We will see how they do.
De Long would like to thank the Alfred P. Sloan Foundation and the Institute for Business and Economic Research of the University of California for financial support. We would like to thank Caroline Bradley, Joseph Froomkin, Ronald P. Loui and Lawrence H. Summers for helpful discussions.
(1). Adam Smith, The Wealth of Nations, p. 423.
(2). Angus Maddison, Monitoring the World Economy (Paris: OECD, 1994).
(3). Gerard Debreu, The Theory of Value (1957).
(4). See Jeffrey K. MacKie-Mason and Hal R. Varian, "Some Economic FAQs About the Internet," Journal of Electronic Publishing (June 1995) <http://www.press.umich.edu/jep/works/FAQs.html>; Lee McKnight and Joseph Bailey, "An Introduction to Internet Economics," Journal of Electronic Publishing (June 1995) <http://www.press.umich.edu/jep/works/McKniIntro.html>.
(5). See George Akerlof, "Labor Contracts as Partial Gift Exchange," in George Akerlof, An Economic Theorist's Book of Tales (Berkeley: University of California Press, 1985).
(6). Newton Minow, Equal Time 45, 52 (1964); Newton M. Minow and Craig L. LaMay, Abandoned in the Wasteland: Children, Television, and the First Amendment (New York: Hill & Wang, 1996).
(7). Andrei Shleifer, "Yardstick Competition," Rand Journal of Economics (1986).
(8). See Jean Tirole, The Theory of Industrial Organization (Cambridge, MA: MIT Press, 1988).
(9). A useful corrective to some of the hype is Kathryn Heilmann et al., Intelligent Agents: A Technology and Business Application Analysis, available online at <http://haas.berkeley.edu/~heilmann/agents/#I>.
(10). Available online at <http://bf.cstar.ac.com/bf/>.
(11). In January 1997, BargainFinder shopped at the following stores: CD Universe, CDnow, NetMarket, GEMM, IMM, Music Connection, Tower Records, CD Land, CDworld, and Emusic. A sample search for the Beatles' White Album in January 1997 produced this output:
I couldn't find it at Emusic. You may want to try browsing there yourself.
CD Universe is not responding. You may want to try browsing there yourself.
$24.98 (new) GEMM (Broker service for independent sellers; many used CDs, imports, etc.)
I couldn't find it at CDworld. You may want to try browsing there yourself.
$24.76 Music Connection (Shipping from $3.25, free for 9 or more. 20 day returns.)
CDnow is blocking out our agents. You may want to try browsing there yourself.
NetMarket is blocking out our agents. You may want to try browsing there yourself.
CDLand was blocking out our agents, but decided not to. You'll see their prices here soon.
IMM did not respond. You may want to try browsing there yourself.
(12). See Alan Cane, "Information age will curb inflation," Financial Times, Jan. 13, 1997, p. 8.
(13). Tibor Scitovsky, The Joyless Economy: An Inquiry Into Human Satisfaction and Consumer Dissatisfaction (Oxford: Oxford University Press, 1976).
(14). On the other hand, retailing, stock control, billing, auditing, and shipping are different from, e.g., manufacturing, so in principle there might still be a role for a middleperson. Amazon.com, a leading Internet bookseller, has almost no books in stock.
(15). See Robert Axelrod, The Evolution of Cooperation (New York: Basic Books, 1984).
(16). Otherwise, it would be slightly odd to find firms giving discounts to the customers who presumably most want the merchant's wares: perfect price discrimination would suggest that the customers who most want the goods should be charged the most. Of course, in a true commodity industry, loyalty programs can also be understood as nothing more than price competition--i.e., discounting.
(18). A. Michael Froomkin, "Flood Control on the Information Ocean: Living With Anonymity, Digital Cash, and Distributed Databases," 15 Pitt. J. L. & Com. 395 (1996).
(19). Nicholas Negroponte, "Electronic Word of Mouth," Wired 4.10 (Oct. 1996), <http://www.hotwired.com/wired/4.10/negroponte.html>.
(20). Judge Posner suggests this is the economically efficient result. See Richard A. Posner, "Privacy, Secrecy, and Reputation," 28 Buffalo L. Rev. 1 (1979); Richard A. Posner, "The Right of Privacy," 12 Ga. L. Rev. 393, 394 (1978). Compare, however, Kim Lane Scheppele, Legal Secrets 43-53, 111-126 (1988); see also James Boyle, "A Theory of Law and Information: Copyright, Spleens, Blackmail, and Insider Trading," 80 Cal. L. Rev. 1413 (1992) (arguing that most law-and-economics analyses of markets for information rest on fundamentally contradictory assumptions).
(21). For a sketch of how excludability might be constructed and maintained for digital documents, see Brad Cox, Superdistribution: Objects as Property on the Electronic Frontier (New York: Addison-Wesley, 1996).
(22). See, e.g., the forms advocated in Working Group on Intellectual Property Rights, Intellectual Property and the National Information Infrastructure (1995).
(23). See, e.g., Robert H. Frank and Philip J. Cook, The Winner-Take-All Society: Why the Few at the Top Get So Much More Than the Rest of Us (1995).