Standardization/Innovation Tradeoffs in Computing: 

Implications for High-Tech Antitrust Policy


Barry Fagin

Department of Computer Science

2354 Fairchild Drive, Suite 6K41

US Air Force Academy, CO 80840






Barry Fagin is a Professor of Computer Science at the US Air Force Academy in Colorado Springs.  His interests include computing and public policy, computer architecture, and computer science education. He received his A.B. summa cum laude from Brown University in 1982 and his PhD in Computer Science from the University of California at Berkeley in 1987.


Dr. Fagin is the co-founder of Families Against Internet Censorship and a recipient of the National Civil Liberties Award from the ACLU.  He has appeared on ABC News, Good Morning America, and MSNBC.  His columns on computing and public policy have appeared in numerous national papers, and he is a regular guest on talk radio.  Dr. Fagin is a Senior Fellow in Technology Policy at the Independence Institute in Golden, Colorado, and an Adjunct Scholar at the Competitive Enterprise Institute in Washington, DC.  He is a member of Sigma Xi, the ACM, the IEEE, and the IEEE Computer Society.  He currently serves as Information Director for the ACM Special Interest Group on Computers and Society.





Much of the history of computing can be seen as an ever-changing balance between standardization and innovation.  Fortunately, sophisticated social processes exist that help consumers make informed choices between the two.  A better understanding of these processes is essential for evaluating current attempts to regulate the computing industry.





Congratulations on your new home!  Your family loves it, and it’s everything you wanted.  But just this morning a new house came on the market, and it’s clearly a better deal.  Do you call your realtor?  What if you buy a new car, only to find out next year’s model performs better and costs less?  Do you buy it?

Chances are you answered no, because you made a quick calculation of a standardization/innovation tradeoff.  You had to make a decision: do the expected benefits from switching to something newer (the innovation) justify abandoning investments in something older (the standard)?  For most of us, a house and car are significant purchases.   We don’t walk away from them just because new knowledge revealed something better.

On the other hand, we replace computers and upgrade software much more often than we change houses. Clearly the same questions have different answers, depending on the circumstances.  When should we walk away from previous resource allocation decisions in light of newly discovered knowledge?  What factors influence this choice? When does the new outweigh the old? What information helps us make these decisions?

Society must answer questions like this every day: standardization/innovation (S/I) tradeoffs are an important part of life. Human beings must constantly reallocate resources in the presence of new knowledge.  Sometimes, the benefits of this knowledge don’t outweigh the costs of undoing older decisions.  Other times they do. Either way, participants in any economic system must make these kinds of decisions in ways that they believe best help them achieve their goals.

The history of humankind offers numerous examples of S/I tradeoffs.  Language evolves in response to new discoveries, while maintaining enough of its essential structure to permit continued communication among its speakers.  Successful political systems adapt to new realities in the context of their imperfect legacies from the past. On a more fundamental level, the evolutionary process in living systems builds on existing structures; it does not start from scratch every time. Complex examples like these have provided fertile ground for linguistics, sociology, and biology. Unfortunately, they take generations or even millennia to unfold.

Computing, by contrast, offers a unique opportunity to examine S/I tradeoffs on a much shorter time scale. We can observe them over the course of a few years, or even months: new ones present themselves almost daily.  In fact, the entire history of computing can be viewed as a struggle between old standards and new ideas.  This makes computing an ideal setting for studying the social processes and institutions that have evolved, both in computing and elsewhere, to help solve S/I tradeoffs.

Given recent widespread policy interest in computer science and the computing industry (for example, the Microsoft and Intel antitrust cases), a closer examination of S/I issues in computing is particularly timely.  Unfortunately, discussion of the standardization/innovation tradeoff is largely absent from the public policy community, as is an understanding of the institutions that help solve it.


S/I Tradeoffs:  An Overview


Humans act with goals in mind and limited resources to accomplish them.  Human action usually requires the use of these resources, and we are seldom certain which course of action is best. Compounding this uncertainty is the ongoing discovery of new knowledge that may later cause us to regret prior decisions.

Imagine a game where a player moves toward a goal along one of several paths.

Movement along each path has a cost, and new paths can appear at any time. Moving backward along an older path to take advantage of a newer one is allowed, but exacts a penalty. For example, in the figure below, a player has chosen a path with cost R and has moved down it towards a goal:

[Figure: a player partway along a path of cost R, moving toward a goal]
Now, however, suppose a new path appears, representing an entrepreneurial discovery:

[Figure: a new path of cost R' appears, branching from a point behind the player's current position]
The player now has a decision to make.  He could ignore the new path, in hopes that a better one will appear later.  Alternatively, he could move backward to his previous position (discarding his investment of R resources) and then invest R’ resources to take the new path.
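Stripped to essentials, the choice is a forward-looking cost comparison: the R already spent is sunk either way, so what matters is the remaining cost on the current path versus the backtracking penalty plus R'. The sketch below is mine rather than the article's; the function name and the sample numbers are purely illustrative.

```python
def should_switch(remaining_old: float, backtrack: float, new_path: float) -> bool:
    """Return True if abandoning the current path is the cheaper way to the goal.

    remaining_old: cost still required to reach the goal on the current path
    backtrack:     penalty for retreating to the junction (the R already spent
                   is sunk either way, so it does not enter the comparison)
    new_path:      cost R' of traversing the newly discovered path
    """
    return backtrack + new_path < remaining_old

# A player facing 10 units of remaining cost should switch if retreating
# costs 3 and the new path costs 5 (total 8), but not if the new path costs 9.
print(should_switch(10, 3, 5))  # True
print(should_switch(10, 3, 9))  # False
```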


Over time, several scenarios are possible.  The player might take the best path available initially, see a new path emerge, and decide the costs of backtracking and taking the new path are not worth it. This describes most of us shortly after buying a house or car: we’re not interested in newer paths so soon after a major purchase. Alternatively, for other cost functions and path lengths the costs of backtracking might be worthwhile, in which case we would take the new path:

[Figure: the player backtracks to the junction and proceeds down the new path]
This might correspond to a dramatic improvement in cost/performance for a relatively low-cost product purchased some time ago. Still another possibility would be to wait at the initial position until the new path appears, and then move down it without having to invest the R resources in the first place:

[Figure: the player waits at the starting position until the new path appears, then takes it directly]
This happens, for example, when you defer a computer purchase because you anticipate considerable price/performance improvement next year, or you decide not to buy a stock because you think prices are likely to fall.


This scenario requires the benefit of hindsight, or a correct guess that a new path will appear. If the guess is wrong, a new path might never appear and you would have been better off not waiting at all. This happens when the expected benefits of deferring a course of action never materialize.


Perhaps the best outcome would be the appearance of a transition path that takes you to the same place as the new one without significant loss of resources:

[Figure: a transition path connects the player's current position to the new path without backtracking]
We’ll have more to say about this possibility shortly.


The Role of Social Institutions


            Because S/I tradeoffs are complex problems, complex institutions have evolved to help solve them. We divide these institutions into four groups: academia, industry, voluntary associations, and government[1].


            1) Academia.  Academic researchers are a significant source of new knowledge, constantly providing new paths for human action.  While some of the new knowledge generated by researchers may eventually make it to market, the vast majority of it will fall by the wayside after publication in a journal or presentation at a conference.

This is socially beneficial. The potential new knowledge generated by academic research is effectively unlimited, while the resources available to consumers are not. Faced with the staggering amount of scholarship researchers generate, a consumer would have no way of knowing which courses of action were worth pursuing. The winnowing processes of peer review, competition for funding, publication, and the test of time are all essential in identifying the most promising paths for further study.


2) Industry.  For-profit firms and the competition between them are an essential part of making an informed S/I tradeoff.  We distinguish between two types of firms:  entrepreneurs and stakeholders. 

Entrepreneurs generate new knowledge by combining the results of academic research with information about cost.  They attempt to identify new paths of action for consumers to reach their goals, and then persuade them to expend resources along those paths. They provide the R’ path in the figures above.

Stakeholders, by contrast, have an economic interest in safeguarding existing investment of resources.  Successful stakeholders will constantly be sampling the market for evidence of new knowledge that might influence consumers to abandon their products, and then attempt to persuade consumers to remain committed to their previous decisions.  They correspond to the R path.

Entrepreneurs become stakeholders when their attempts to persuade consumers are so successful that their product becomes the new standard. Stakeholders become entrepreneurs when they stave off competition by generating new knowledge on their own. The attempts of both to sort through new knowledge, to inform about costs and benefits through prices, and to obtain the best of both worlds through compatibility and extension all provide valuable information to consumers.  Both stakeholders and entrepreneurs can provide transition paths.


3) Voluntary associations.  Market competition is not the only source of S/I-related information and transition paths.  A professional community may develop an awareness of the advantages of new discoveries if suitable ways can be found to make the transition from the old to the new.  Such a community may also arise in response to perceived needs to provide transition paths. Examples include the mechanisms for settling on a uniform railway track gauge in the early development of railroads, professional trade associations, patent-sharing arrangements between major automobile manufacturers, Underwriters Laboratories, and the International Organization for Standardization (ISO).

Communities of users may also share information of common interest to help in making tradeoffs.  Examples of this include various communities on the Internet, newsgroups, and trade publications.


4) Government. Political institutions can also solve S/I-related problems. Government agencies can use their unique position of authority to impose common standards by fiat, or to require entrepreneurs to provide transition paths. In the United States, for example, the Federal Communications Commission used its authority (after extensive deliberation) to require all American television broadcasters to adopt the ATSC DTV standard for digital broadcasting, and to phase out all analog transmission by 2007 (Booth, 1999, pp 39-46; FCC, 1998).  Other examples include FCC allocation of the electromagnetic spectrum, the role of the FAA in American aviation standards, and attempts by national governments to legislate “correct” language as a protection against cultural encroachment (Offord, 1994).


S/I Tradeoffs In Computing


S/I tradeoffs are notable in computing not only for how often they occur but for the speed at which they operate. Computing is a highly entrepreneurial and intellectually vigorous field, so new knowledge is constantly being generated and evaluated in the marketplace.

Computing is equally distinctive in that a significant portion of its new knowledge is predictable.  The computing power of future microprocessors can be projected through Moore’s Law, and various IC parameters can be predicted through industry “road maps” (Patterson & Hennessy, 1997; Sematech, 1998).  We can predict the cost per bit of magnetic storage at least a few years out, and say reasonably accurate things about the price/performance behavior of many future computer systems (Post, 1999, 17-21). This predictability, while far from perfect, is valuable in assisting entrepreneurs and consumers in making S/I tradeoffs.
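To make the point concrete, here is a minimal sketch (mine, not from the article) of the kind of projection such predictability permits. The 18-month doubling period is an illustrative assumption; published road maps refine such estimates considerably.

```python
# Projecting cost/performance under a Moore's-Law-style doubling assumption.
def projected_performance(current: float, years: float,
                          doubling_months: float = 18.0) -> float:
    """Performance expected after `years` if it doubles every `doubling_months`."""
    return current * 2 ** (years * 12.0 / doubling_months)

# A consumer deciding whether to buy now or wait can estimate what the
# same money might buy a year or two from now.
print(round(projected_performance(1000, 1)))  # ~1587
print(round(projected_performance(1000, 2)))  # ~2520
```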

Table 1 below gives some examples of well-known S/I computing tradeoffs.  In the sections that follow, we analyze these tradeoffs in more detail and discuss the roles of relevant social institutions.  The first two are small scale, typical of those faced by a single consumer.  The rest are large scale, faced by the computing discipline or society as a whole. 


TABLE 1:  Some S/I Tradeoffs in Computing



For each tradeoff, the table lists the consequences of staying with the older technology, the consequences of adopting the newer one, and the roles played by academia, industry stakeholders and entrepreneurs, voluntary associations, and government.

Personal computer purchase: upgrade to different CPU & OS
Staying with older technology: failure to capitalize on possible advantages of the new computer.
Adopting newer technology: loss of older resources, climbing a new learning curve, software compatibility.
Academia: identified basic processor and OS design issues.
Industry: Intel, Apple/Motorola/IBM, Microsoft, Linux vendors, BeOS.
Voluntary associations: Linux user and development community, newsgroups, industry publications.
Government: DOJ suing Microsoft.

Personal computer purchase: upgrade to similar CPU & OS
Staying with older technology: might miss significant cost/performance improvements made possible by HW & SW advances.
Adopting newer technology: loss of resources in existing system, but no learning curve or compatibility issues.
Industry: the same company can be both stakeholder and entrepreneur.
Voluntary associations: Internet community, industry publications.
Government: little impact on tradeoff.

RISC/CISC processor design
Staying with older technology: failure to capitalize on possibly significant gains in cost/performance.
Adopting newer technology: loss of use of existing programs due to binary incompatibility.
Academia: generated much of the new knowledge for RISC processors.
Industry: Intel, Motorola, DEC, Sun, MIPS/SGI, …
Voluntary associations: little impact on tradeoff.
Government: FTC sued Intel, reached out-of-court settlement.

Internet standards development
Staying with older technology: failure to address issues of Internet growth; failure to take advantage of new advances in networking research.
Adopting newer technology: large loss of resources if new standards are incompatible with old; new software development; learning curve.
Academia: significant role in advancing network research and developing new standards.
Industry: virtually every computer user and company is a stakeholder.
Voluntary associations: Internet working groups develop new but compatible standards.
Government: provided funding for early network development.

Programming language design and adoption
Staying with older technology: failure to take advantage of better expressibility, power, readability, maintainability.
Adopting newer technology: loss of existing resources: millions of lines of code, software development labor and expertise.
Academia: significant role in early programming language development and language theory.
Industry: developers with experience in older languages, large system maintainers, SW startups.
Voluntary associations: Internet community, professional societies, user groups.
Government: little impact on tradeoff.

The Y2K problem
Staying with older technology: increased risk of software malfunction, particularly for older systems.
Adopting newer technology: large resource expenditure required to correct.
Academia: little impact on tradeoff.
Industry: virtually no entrepreneurs several years ago, virtually no stakeholders now.
Voluntary associations: little impact on tradeoff.
Government: issues directives concerning Y2K compliance, provides assistance.

Windows vs. anything else
Staying with older technology: failure to take advantage of portability, possible new worldwide standard.
Adopting newer technology: loss of compatibility, learning curve for new look and feel.
Academia: little impact on tradeoff.
Industry: Intel, Microsoft, Sun, BeOS, Linux vendors, others.
Voluntary associations: industry trade publications.
Government: DOJ suing Microsoft, handling license dispute.


Personal Computer Purchase: upgrading to different CPU/OS


            When a computer buyer is considering changing to a different CPU and OS, all the relevant S/I factors come into play.   A newer technology may offer better cost/performance, but is it worth giving up resources already invested?  Such resources include not just the cost of the current system, but time spent in learning how to use it and software already purchased.  The example we’re probably most familiar with is the Macintosh/MacOS vs. PC/Windows decision.   Microsoft serves as the primary stakeholder for current PC owners, and predictably attempts to increase the primary benefit of its technology, compatibility, by increasing sales volume.  This improves the odds that a consumer’s software will run on another computer, a tremendous consumer benefit. 

PC/Windows users have massive amounts of time, labor, and other resources invested in a very popular and convenient computing environment. Any company seeking to persuade consumers to buy a different product must offer advantages strong enough to induce buyers to walk away from those investments. Given the huge compatibility benefit offered by a nearly ubiquitous Wintel standard, such advantages will require highly exceptional entrepreneurial discoveries.  A more modest strategy involves a transition path that supports some elements of the Wintel standard.  For example, the MacOS  includes the ability to read DOS disks and to open many Windows-formatted files.  This is an attempt by Apple as an entrepreneur to reduce the costs of its preferred path of action to consumers.

Space is too limited to discuss the battle between Microsoft and Apple, and in any case it has received ample treatment elsewhere.  We note only that, in the early days of personal computing, the two companies pursued different strategies: one focused on technical elegance and high profit margins, the other on low cost and building a user base.  Either strategy could have worked, and each presented different S/I choices for consumers.  There was no way of knowing which approach consumers would prefer until both were tried and market participants were allowed to express their preferences.


Personal Computer Purchase: upgrading to same CPU/OS


            Once we make the decision to upgrade our computer, we usually commit to the same CPU and operating system because the benefits of compatibility and preserving our investment of time and resources outweigh the costs.  But we still face S/I-related decisions when upgrading to a compatible computer.  How long ago did we buy our last model?  Have we really gotten everything we could out of it?  How have our computing needs changed since then? (Post, 1999, 17-21).

            In this case, the same companies can serve as both stakeholder and entrepreneur. To persuade you to upgrade, they must offer improvements compelling enough to make you abandon the resources you invested in their previous system.  Companies in this position are, in a very real sense, competing against themselves.


Processor Design:  the RISC/CISC debate


            The dramatic changes that swept computer architecture several years ago are a good example of large-scale S/I tradeoffs in computing.  Academic and industrial researchers first noted the deficiencies of the architectures of the 1970s.  New knowledge had revealed superfluous instructions, unnecessary layers of interpretation, and similar inefficiencies.

The discovery of technical flaws in an architecture, however, does not justify its abandonment. Computer designers interested in producing new architectures face both high development costs and high costs to consumers.  Large amounts of capital are required to bring a new chip design to market, and consumers can face high transition costs due to binary incompatibilities[2].  New architectures must therefore offer very compelling cost/performance advantages to be successful: engineering elegance and promising simulation studies are not enough.

            As the evidence mounted that RISC-based processor designs did in fact offer dramatic improvements in cost and performance, many entrepreneurial companies picked up on the RISC processor research and attempted to convince prospective buyers to adopt a new technology path.  In response, stakeholder companies like Intel leveraged their compatibility advantage to improve their processor designs, always offering compatibility with existing software.  They also adopted many of the ideas from RISC research in an attempt to negate much of the newer machines’ performance advantage, while scrupulously preserving the correct execution of older software (Dulong, 1998, pp 24-32). The end result is that the RISC entrepreneurs were unsuccessful at displacing the stakeholder Intel as the dominant supplier of CPU chips in the marketplace.  Under the present circumstances, it will require entrepreneurial discoveries of phenomenal significance to convince consumers to move to a different architecture.


Internet Standards Development


                The Internet is based on the original protocols of the ARPANET.  However, new knowledge in networking generated since that time has revealed problems with these standards.  For one, the Internet has become far more widespread than its designers ever envisioned. The inefficiencies of a class-based address space are now a cause for significant concern, and the present pool of 32-bit addresses could be depleted within the next several years.  Additionally, the unprecedented growth of the Internet is causing routing tables to grow exponentially, bringing with it dramatically increased memory and computational requirements.  Finally, we now know that security, authentication, and privacy are extremely important, and require support from the network itself.
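A back-of-the-envelope sketch (mine, not the article's) shows why a class-based 32-bit address space runs short. The organization size below is a hypothetical figure chosen for illustration.

```python
# Class-based allocation hands out fixed-size blocks regardless of need.
total_addresses = 2 ** 32          # the entire IPv4 address space
class_b_hosts   = 2 ** 16          # hosts addressable within one class B network
hosts_needed    = 3_000            # a hypothetical mid-sized organization's need

idle = class_b_hosts - hosts_needed
print(f"total IPv4 addresses: {total_addresses:,}")          # 4,294,967,296
print(f"idle addresses in one class B block: {idle:,} "
      f"({idle / class_b_hosts:.0%} of the block wasted)")   # 62,536 (95%)
```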

            Because of the enormous amount of resources invested in existing Internet standards, however, developing a new set of protocols from scratch is not worth considering. It would be like redesigning English because we found better rules for spelling and grammar.  Instead, the next set of Internet protocols (IPv6) is designed to interoperate with the current one (IPv4). A review of the IPv6 literature shows extensive concern with transition mechanisms and backward compatibility.  Voluntary associations like the Internet Engineering Task Force, the Internet Society, and the Internet Architecture Board all recognize that advocacy of an incompatible standard, whatever its technical merits, would never be taken seriously because of the tremendous costs it would impose on society.



Programming Language Design and Adoption


            Few issues provoke more fevered discussion in computing than programming languages.  Here again we find S/I tradeoffs that software developers and IT managers must face.

            Early computing languages, like Fortran and Cobol, were such significant improvements over existing alternatives that they became widespread very quickly.  New research into programming languages only later revealed their shortcomings, including lack of expressive power and unnecessarily difficult syntax.  Advances from academic research in software engineering also revealed powerful concepts that these languages could not support, such as data abstraction, encapsulation, and object-oriented design.

            Language designers frequently offer transition paths to reduce learning and maintenance costs.  C++, for example, was designed as a near-superset of C, and successive versions of Fortran and Ada have been released over the years that track advances in programming language design and software engineering.  Still, very few of the dozens of programming languages that have been proposed have been used on a large scale to develop software.  Fewer still have convinced large numbers of programmers to adopt them, despite their technical merits.  This is because of the huge resource loss that a large-scale programming language change requires.  The Department of Defense’s decision to repeal its “Ada only” requirement may reflect recognition of this fact (DOD, 1997).


The Y2K Problem


            The Y2K problem is unique in that the costs and benefits associated with standards and innovation are almost exclusively time-driven.  Twenty years ago, the possibility of software failing to function correctly due to the use of two-digit years was certainly known to the computing community, but the benefits of repairing most code in the face of two decades of uncertainty were far outweighed by the costs.  Entrepreneurs in the 1980s who boasted of Y2K-compliant products or tried to sell their services in the labor market as Y2K specialists would not have been successful.
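A minimal illustration of the underlying bug (mine, not the article's): legacy code often stored years as two digits to save memory, and arithmetic on those years breaks once the century rolls over.

```python
# The classic two-digit-year bug.
def age(birth_yy: int, current_yy: int) -> int:
    """Age computed from two-digit years, as much legacy code did."""
    return current_yy - birth_yy

print(age(65, 99))  # 34: correct for all of the 1900s
print(age(65, 0))   # -65: wrong once "00" means the year 2000
```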

            Over time, however, as the year 2000 approached, the benefits of proceeding down a new technological path became greater, as did the risks of staying with an old one. The Y2K industry is now booming. Y2K consultants currently command a premium in the marketplace, new code is routinely checked for Y2K compliance, and the inspection and repair of legacy code is now commonplace.  Because of the time sensitivity of the costs on both sides of this tradeoff, however, we should expect Y2K-related activity to tail off rapidly after the new year.  The benefits of investing resources to deal with any remaining Y2K compliance issues will no longer outweigh their cost.


Windows  vs Anything Else


            How many operating systems should there be for personal computers?  What level of compatibility should they offer?  What should their development environments look like?  What kind of “look and feel” should they have? These are all questions entrepreneurs, stakeholders, and governments are attempting to answer in what is probably today’s most widely discussed computing tradeoff:  Windows vs Anything Else.

            The largest stakeholder here is clearly Microsoft.  By concentrating its efforts on building market share, it has provided enormous compatibility and convenience advantages to consumers.  This fact confronts every operating system designer who might consider offering a competing product.  After all, many readers of this magazine could write an operating system for a personal computer:  it’s a complex but well-understood programming task.  None of us, however, would attempt to market such a product, because switching to a new operating system would impose opportunity costs that most consumers would not be willing to bear.  It would be a waste of resources.  Entrepreneurs who wish to persuade consumers to adopt a new model of computing must therefore offer some combination of extraordinary performance advantages, low cost, and minimal compatibility problems.

            The developers and promoters of Java clearly believed they had such a combination.  The possibility of platform-independent operating systems and applications was sufficiently revolutionary that consumers might indeed have considered a new computing paradigm.  Microsoft responded with its well-known “embrace and extend” strategy: faced with a new technology offering sufficiently compelling advantages, it preserves its typical stakeholder compatibility advantages while adopting many of the new technology’s features.

It is true that Microsoft may have violated contractual arrangements with Sun by attempting to make Java platform-specific, negating many potential benefits for consumers.  But this is an issue of contract law, not public policy.  For our part, we simply note that Microsoft as a stakeholder uses its Windows product to provide standardization and compatibility for its customers, a tremendous consumer benefit.  This means that entrepreneurial discoveries must offer equally tremendous benefits to justify switching.  In that sense, despite the widespread discussion of the Microsoft and Intel antitrust cases in the popular media, trade press, and public policy community, the issues involved are not fundamentally different from the thousands of other standardization/innovation tradeoffs faced throughout the history of computing.




            Hard decisions about previously allocated resources in the face of new knowledge have been a fundamental fact of life since the dawn of civilization.  They’re particularly worthwhile objects of study in computing, however, due to their frequency, their tendency to be observable over a relatively short span of time, and their widespread societal impact.  While every case has its own unique points, I believe there are common insights that have implications for academic computer scientists and policy makers alike.

            1) Technically superior is not the same as better.  There is a disconnect between academic computer science and economics that can frustrate researchers with interdisciplinary interests.  Because we tend to equate “technically superior” with “better”, we see the failure of technically more advanced products to catch on as evidence of some sort of inherent economic deficiency. Seen in the light of S/I tradeoffs, however, the failure of a technically superior product may simply reflect the judgment of consumers that the benefits they expect to receive do not justify abandoning older resource allocation decisions.  The C language, the Windows operating system, and the Internet’s IPv4 protocols are all known to have numerous technical deficiencies. But the failure of developers, consumers, and network designers to abandon them for technically superior alternatives reflects important reasoning about hard choices, not a problem that requires correction.

            2) Computing technology advances discretely, not continuously.  Unlike some social processes that evolve more or less continuously (for example, natural languages), computing technology advances in discrete jumps. Software patches and upgrades are not continually released as bugs are identified and improvements are made; months go by between releases that add multiple features and repair multiple errors.  New implementations of existing architectures appear every year or two, not weekly. At the other end of the spectrum, new programming languages and instruction set architectures appear rarely, often only after several years.

This is because of the costs and benefits associated with standardization and innovation.  Higher benefits of standardization and/or higher costs of innovation require larger benefits from any proposed entrepreneurial activity before an older technology is displaced.  This in turn requires more knowledge, so more time must elapse before one popular standard is displaced by another.  Lower standardization benefits or lower innovation costs require less new knowledge to improve upon, so less time is needed to persuade consumers to switch.
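This timing argument can be sketched with a toy model (mine, not the article's): suppose the advantage of a new technology grows at some rate as knowledge accumulates, and displacement occurs only when that advantage clears the hurdle set by the standard's benefits plus the cost of switching. All names and numbers below are hypothetical.

```python
# Time until an accumulating advantage clears the displacement hurdle.
def years_until_displacement(standard_benefit: float,
                             switching_cost: float,
                             knowledge_rate: float) -> float:
    """Time t at which knowledge_rate * t first exceeds the hurdle."""
    return (standard_benefit + switching_cost) / knowledge_rate

# An entrenched, costly-to-leave standard takes far longer to displace
# than a lightly held one, even when knowledge accumulates at the same rate.
print(years_until_displacement(10.0, 5.0, 3.0))  # 5.0 years
print(years_until_displacement(2.0, 1.0, 3.0))   # 1.0 year
```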

            3) There is no reason to assume a priori that one point in the S/I tradeoff is more desirable than any other.  Should there be three operating systems that are more or less equally popular, or one with a large market share?  Should there be five browsers, or just two?  Should the operating system and the browser be separate, or combined?  Should we have one Internet protocol set, or several? How many programming languages should there be?

            The answers to all these questions represent different points on the standardization/innovation continuum.  Without knowledge of consumer preferences and the resources consumers have available, many of these points must be seen as equally likely to occur and equally desirable in practice.  Multiple competing operating systems might give us more innovation, but could also present compatibility problems.  Fewer programming languages would help standardize software, but what about the conversion costs and the loss of future innovations?

Without the social processes of discovery outlined here, we would have no idea how to answer these questions beyond political fiat or mere guessing.  In fact, if we consider the personal nature of economic decisions, it is difficult to find reasons to prefer one point in the S/I continuum to another.  Since the answers involve complex and subjective questions about the ends people desire and the means they have available, they may be forever beyond the reach of quantitative methods or conventional analysis.  Just as we can qualitatively analyze chaotic systems without being able to make precise predictions of their behavior, so perhaps can we study the complex social processes by which computing tradeoffs are made without ever being able to predict their outcome.  We need a little humility.


            Unfortunately, a thorough analysis of these issues seems missing from current discussions of computing and public policy.  Academics, myself included, spend a lot of time finding inefficiencies in existing computing paradigms, proposing ideas based solely on intellectual novelty, and taking passionate interest in debates over which programming language, instruction set architecture, or operating system is technically superior.  This is not a criticism of academia.  Such activity is enjoyable, personally rewarding, and socially beneficial. 

It means, though, that when academics turn to public policy we tend to overvalue technical issues.  When a technically superior product or standard fails to catch on, we bewail the stupidity of the world and seek something to blame for an obviously deficient state of affairs.  If we could see such things in the light of the S/I tradeoffs in our field, we might reach a different conclusion.

            The same holds true in the policy arena.  We are currently seeing high-profile attempts to dictate the behavior of major stakeholders in the computing field, and thereby to influence how S/I tradeoffs are made.  The Federal Trade Commission recently reached a settlement with Intel, and the Department of Justice’s antitrust trial against Microsoft is currently in mediation.  A close reading of the public documents and press coverage in these cases reveals excruciatingly detailed analyses of relevant markets, struggles to define the terms “operating system” and “browser”, and an incorrect assumption that innovation is more important than standardization[3].  Additionally, regulatory remedies are sought that would replace some points in an S/I tradeoff with others[4].

Seen in the light of other tradeoffs in computing, this kind of activity appears to be an attempt to use law and the political process to answer the kind of extremely complex questions that other social processes may be more likely to answer correctly.

This is not a criticism of government: just as academics may overvalue intellectual novelty and technical merit in promoting human welfare, policymakers may overvalue the ability of politics and regulatory action to achieve the same objective.

            Anyone concerned with the future of computing needs, I believe, a deeper appreciation for the tradeoffs that have shaped it.  We need more interdisciplinary research that combines computer science and economics. We need to understand that consumers have limited resources to allocate,  that both innovation and standardization are important, and that the actions of stakeholders and entrepreneurs in the marketplace provide valuable information to help consumers decide between them.  Under these conditions, technically superior products that don’t succeed and technically inelegant products that do are both possible and desirable.  Such successes and failures provide valuable information that helps people make S/I tradeoffs.


            Until we have cultivated this kind of appreciation, our policy efforts will be misplaced.  The past two decades have seen a tremendous increase in the importance and societal impact of computing, and with it the influence and social status of its practitioners. But with such increased prominence comes the responsibility to step beyond our own intellectual comfort zones and grapple with the hard policy questions that affect computer science, even if it pits us against the prevailing wisdom.


Until we can learn to do this, our questions may be right, but our answers will be wrong.



Booth, S. (1999). Digital TV in the US. IEEE Spectrum, 36(3), 39-46.


DOD (1997). Ada and Beyond: Software Policies for the Department of Defense. Committee on the Past and Present Contexts for the Use of Ada in the Department of Defense, National Research Council, National Academy Press.


Dulong, C. (1998). The IA-64 Architecture At Work. IEEE Computer, July, 24-32.


FCC (1998). Digital Television Consumer Information Bulletin. FCC Office of Engineering and Technology, November 1998.


Offord, M. (1994). Protecting the French Language. In The Changing Voices of Europe, University of Wales Press.


Patterson, D. & Hennessy, J. (1997). Computer Organization and Design: The Hardware/Software Interface, Second Edition. Morgan Kaufmann Publishers.


Post, G. (1999). How Often Should a Firm Buy New PCs? Communications of the ACM, May, 17-21.


Sematech (1998). International Technology Roadmap for Semiconductors, Appendix B.




[1] These categories are approximate: we recognize that many researchers work in government labs, many for-profit firms are also members of voluntary associations, and so on.


[2] Platform independence can remove this problem; see below.


[3] The US Attorney General’s public statement accompanying the filing of the Microsoft case claimed that Microsoft has “stifled competition in operating system and browser markets”, and “restricted the choices available”.  The lawsuit was specifically “designed to promote innovation in the computer software industry”.  The chief of antitrust at the Department of Justice’s statement was similar:  Microsoft’s practices were “deterring innovation” and “restricting consumer choice”.  The word “standardization” was never mentioned.  In the legal complaint itself, standardization is treated exclusively as a barrier to entry and evidence of monopoly power, never as a consumer benefit.


The DOJ statements are available online at   The Microsoft complaint, expert witness testimony, and related legal documents are available at


[4] For example, legal remedies sought by the DOJ include requiring Windows to ship with either two browsers or none at all, and requiring it to be installable with more than one startup screen.  Robert Bork, a former Supreme Court nominee and now an advisor to Netscape, has suggested that Microsoft be broken into three distinct companies, each of which would receive its own copy of the Windows source code.  See the New York Times Magazine, February 14, 1999.