Computer Security:
Fact Forum Framework


>>Prevention>>

This document structure is intended to seed a fact forum about computer security. It has been written to be viewed under the Crit Mediator. If you do not see the Mediator banner at the beginning of the document, please follow this link. If that worked, you should now be seeing this document under the Mediator banner. Through the Mediator you can read commentary others have posted on this document, and post your own commentary for others to read.

The pages in this document are arranged, outline-like, into a tree. At the top of each page, you will see links of the form

    <<Previous Item<< Up to Parent Item >>Next Item>>
enabling you to traverse the tree sequentially, or to return to this page's parent. Each page also has a table, such as the one below, providing a map of the child pages. Box elements that are links take you to a discussion of that subject, as well as to a more detailed box structure.

Other contributions to this Fact Forum can be found here.


Understanding the Limits of the Possible

In many fields of engineering, productive work only takes off once the shape of the boundary between the possible and the impossible is understood. In mechanical engineering, people kept building perpetual motion machines long after such machines were generally understood to be impossible. People kept trying because the only known direction toward the good was toward perpetual motion. Then Carnot stated what we now call Carnot efficiency--a precise statement of the shape of this boundary for heat engines. With this, thermodynamics was born, and sensible people stopped trying to build perpetual motion machines.

The computer security field needs to stop building perpetual motion machines. Currently, possible and impossible goals are mixed together without distinction. This leads to frustration, as one cannot succeed at impossible goals. However, unlike perpetual motion, a computer security architecture may become widely deployed, and many people may come to depend on it, before it is discovered that it can never meet its promises. This history of frustration then leads to the common wisdom that "true computer security is impossible." That which is thought to be impossible is not demanded, and that which is not demanded is not commercially produced. With no visible counter-examples, misunderstandings persist.

As more and more of what we value in the world comes to be managed by software, we must do better even to retain the security properties we're used to in the physical world. Fortunately, we can do so, and more. This fact forum is dedicated to developing the understandings needed for us to succeed.


Cooperation Without Vulnerability

For many, the intuitive purpose of computer security is to keep bad things from happening, i.e., to avoid eavesdropping, damage, or attack. This fact forum refers to this goal as safety, and it is a necessary part of any computer security system. However, by itself safety is a trivial goal: nothing bad can happen in a computer that is turned off. Obviously then, the goal of computer security must be to achieve safety while still allowing some kinds of good things to happen. But what kinds of good things?

The most obvious answer is general-purpose computation. This is also necessary in any computer security system, and corresponds to perimeter security. However, by itself, it still doesn't do us much good. A Universal Turing Machine in a sealed vault under Cheyenne Mountain can safely engage in any computation, and if I'm in the vault with it I can obtain the benefits. The benefits I'm missing come from interacting with others, so the security problem is how to safely obtain the benefits of interacting with entities you do not trust.

A pattern of interaction undertaken in the expectation of mutual benefit we call a pattern of cooperation. As the Cheyenne Mountain example shows, complete safety is achievable when no cooperation is needed. Similarly, imagine a timesharing system in which all programs execute in system mode inside one giant address space. All patterns of cooperation possible in computation are clearly possible in this system, but the participants have no safety from each other. Were this timesharing system and its users all in that vault under Cheyenne Mountain, they would still be safe from outsiders, but again, this is uninteresting.

Commerce raises many examples of real security issues. In commerce one often needs to cooperate safely with those one doesn't trust. A shop is willing to be open to outsiders only because its cash register isn't. The shop is neither fully open nor fully closed. In anticipated electronic commerce the interactions take place electronically, the agents are running computer programs, and many transactions happen with no human awareness. This environment is sufficiently different that simple analogies are misleading. Confusion about computer security is widely recognized as a major inhibitor of the emergence of electronic commerce.

There are many kinds of safety, and many patterns of cooperation. Some desirable combinations--shown in green--are known to be possible within computation. These represent lower bounds on the limits of the possible. Other desirable combinations--shown in red--are known to be impossible. These represent upper bounds. The actual boundary between possible and impossible lies within the gray of our ignorance. As this fact forum proceeds, we hope to shrink the gray area by discovering and establishing both new possibilities and new impossibilities. Should we discover that G is possible, the lower bound moves up from (A,B) to (A,G). Should we instead discover that G is impossible, the upper bound moves down from (D,E) to (G,E).

There are many security models and architectures--such as capabilities, access control lists (ACLs), ring security (MLS), and more. This fact forum is for discussing these as well. A given security model will enable certain security relationships to be expressed, such as certain prohibitions, while failing to express others. Armed with some understanding of the possible--independent of model--one can make two criticisms of a model's expressiveness. A model can fail to provide for the expression of security that is both possible and desirable, and a model can enable the expression of security that is impossible. The former is a weakness; the latter is a danger. By expressing prohibitions they cannot prevent, such models foster a false sense of security and put everyone at risk. Publicly exposing such false promises makes the world a safer place.
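
To make the expressiveness question concrete, consider revocation. Capabilities have no built-in revoke operation, yet revocable authority is expressible by interposing a forwarder--the well-known caretaker pattern. Below is a minimal sketch in Java; the Ledger interface and all class names are hypothetical illustrations, not any real API.

    interface Ledger {
        void record(String entry);
    }

    // Caretaker pattern: Alice holds a Ledger and wants Bob to have
    // access she can later withdraw. She gives Bob only the forwarder
    // facet and keeps the revoke facet to herself.
    final class Caretaker {
        private Ledger target;  // becomes null once revoked

        Caretaker(Ledger target) { this.target = target; }

        // The facet Alice hands to Bob.
        Ledger forwarder() {
            return new Ledger() {
                public void record(String entry) {
                    if (target == null)
                        throw new SecurityException("authority revoked");
                    target.record(entry);
                }
            };
        }

        // The facet Alice keeps.
        void revoke() { target = null; }
    }

Bob sees only a Ledger; nothing he can do with the forwarder recovers the underlying target or the revoke facet. A model unable to express such interposition would fall to the first criticism above; a model promising, say, to undo authority already exercised would fall to the second.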

The intersection between the expressiveness of a given model and the possible is a limit on what is possible within that model. By establishing which points are possible within one model but not another we discover their relative advantages. Plausibly, different models will be found better for different purposes.


The Difference Between Theory and Practice

In theory, there's no difference between theory and practice. In practice there is.
            --Chip Morningstar

Of course, the above is an oversimplified view of security engineering. The varieties of safety have a more complex relationship than simply more safe and less safe. Similarly with cooperation. The question of theoretical possibility, in a formal computer science sense, does not have as much relationship to practicality as one might like. An arrangement that is theoretically possible may be impractical. Worse, an arrangement that is theoretically impossible may be practical anyway. For example, though a theoretically possible attack prevents a given arrangement from being theoretically safe, if the attack itself is impractical the arrangement may be practically safe. This fact forum is relevant to real world engineering only if it helps us understand the boundary between practical and impractical. Why not focus on that instead?

Our technology changes quickly, but theory is timeless. Questions of practicality derive from current processor speeds, relative market share of different products, consumer perceptions, and more. Many of us--myself included--are in businesses for which these questions are pressing. However, a fact forum would be less informative on these, and such issues would distract from the progress we can make. Nevertheless, certain issues of engineering practicality are plausibly fairly timeless (such as the difficulty of preventing wall banging), and such issues are welcome in this forum.


Taxonomies of Issues

Each section includes a table of sub-topics. Each table cell is, or will be, a link to a child page expanding on that sub-topic, often with a table of links to further children.

Levels of Risk

With the above framework, one can ask "for a given pattern of cooperation, how safe can we be?" Broadly speaking, in decreasing order of safety, three levels are:
Prevention
Deterrence
Admonition
 
  • Prevention provides safety by actually making the danger impossible--given assumptions and caveats that must be made explicit. Ideally, prevention systems provide the analog of physical law for computation. For example, given a correct realization of the Java architecture, a Java applet could no more write to an arbitrary memory location than you or I can go faster than the speed of light. (A small sketch of this flavor of prevention appears just after this list.)

  • Deterrence is more like the world of human military, legal, or commercial arrangements. These systems seek to discourage attack by arranging for it not to be in anyone's interest to attack, or better, for it to be against the interests of those with the opportunity. On this topic, we can be badly misled by our intuitions from the real world. One cannot punish an object by jailing it.

  • Much of the world works by polite request, or admonition, and the decent willingness of others to often abide by these admonitions, even when there are no consequences for violation. Often, you can get someone to avoid endangering you just by asking them. Software can help.
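
As a concrete taste of prevention, here is a minimal Java sketch (all names hypothetical). A client holding only the ReadFacet cannot increment the counter--not because a runtime check forbids it, but because no such operation exists from where the client stands.

    interface ReadFacet {
        int value();
    }

    final class Counter {
        private int count = 0;

        void increment() { count++; }   // retained by the owner alone

        // Safe to hand to anyone: reading is all this facet can do.
        ReadFacet readFacet() {
            return new ReadFacet() {
                public int value() { return count; }
            };
        }
    }

The prohibition needs no monitoring and cannot be argued with; it has the flavor of physical law rather than of a rule backed by a penalty.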

Deterrence and admonition blend into each other. If someone repeatedly violates my admonitions, I may eventually find out and stop dealing with them. This prospect can deter violation. However, the engineering issues of supporting deterrence vs. admonition are quite different, and they are usefully distinguished. Admonition systems seek only to catch accidental violation of requests one does intend to abide by. Deterrence systems seek to change intentions by altering payoffs, but generally assume entities act in accordance with their interests, or at least their intentions. Admonition systems may improve the accuracy of this assumption.
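
The engineering difference can be sketched in code. The hypothetical wrapper below supports admonition, not prevention: it complies with every request, but flags calls that violate a stated wish, so a well-intentioned caller discovers an accident. Logging the same events for later review by others would begin to support deterrence.

    interface Files {
        void delete(String path);
    }

    // Admonition sketch (hypothetical names): warn, then comply anyway.
    final class AdmonishingFiles implements Files {
        private final Files underlying;

        AdmonishingFiles(Files underlying) { this.underlying = underlying; }

        public void delete(String path) {
            if (path.startsWith("/precious/"))
                System.err.println("admonition: you asked to leave "
                                   + path + " alone");
            underlying.delete(path);  // the request is honored regardless
        }
    }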

Classically, computer security has been about systems of prevention. This is the most desirable form of safety, the one specific to computation, and the one that most admits progress by formal reasoning. Accordingly, I expect most activity in this fact forum to occur in this area, so it contains the bulk of the seeding framework.


The Classic Saltzer and Schroeder criteria

We have received permission from Saltzer and Schroeder to upload their article to the web.  This is the OCRed result.

These criteria were also used in Extensible Security Architectures for Java, by Wallach et al. We are seeking permission to upload this paper as well. Fortunately, it is already in electronic form; the PostScript is available at this link.
 

                             Capabilities   Access Control   Ring Security   Stack           Type
                                            Lists (ACLs)     (MLS)           Introspection   Hiding
Economy of mechanism         eomCap         eomAcl           eomMls          eomSi           eomTh
Fail-safe defaults           fsdCap         fsdAcl           fsdMls          fsdSi           fsdTh
Complete mediation           cmCap          cmAcl            cmMls           cmSi            cmTh
Open design                  odCap          odAcl            odMls           odSi            odTh
Separation of privilege      sopCap         sopAcl           sopMls          sopSi           sopTh
Least privilege              lpCap          lpAcl            lpMls           lpSi            lpTh
Least common mechanism       lcmCap         lcmAcl           lcmMls          lcmSi           lcmTh
Psychological acceptability  paCap          paAcl            paMls           paSi            paTh
Accountability               acctCap        acctAcl          acctMls         acctSi          acctTh
Performance                  perfCap        perfAcl          perfMls         perfSi          perfTh
Compatibility                compCap        compAcl          compMls         compSi          compTh
Remote Calls                 rpcCap         rpcAcl           rpcMls          rpcSi           rpcTh

The columns above are the security models to be examined. Saltzer and Schroeder examined Capabilities and Access Control Lists. Wallach et al examined Capabilities, Stack Introspection (what Netscape uses), and Type Hiding. Though Ring Security is examined by neither, it is the major paradigm of both Multics and the Orange Book, and so bears examination by these criteria. The first eight rows above are the evaluation criteria used by both papers. The last four are further criteria introduced by Wallach et al. When we have permission to upload the Wallach paper as HTML, these row and column headings will be linked appropriately.
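
For readers unfamiliar with stack introspection, here is a deliberately toy Java model of the check Wallach et al describe. All names are hypothetical and the real mechanisms are considerably richer; this only conveys the shape of the idea: each stack frame carries the privileges of the code that pushed it, and a sensitive operation succeeds only if some frame explicitly enabled the privilege before any untrusted frame is encountered.

    import java.util.*;

    // Toy model of stack introspection (hypothetical; not a real API).
    final class ToyStackInspection {
        static class Frame {
            final Set trusted;                  // targets this code may use
            final Set enabled = new HashSet();  // targets enabled here
            Frame(Set trusted) { this.trusted = trusted; }
        }

        private final List stack = new ArrayList();  // innermost frame last

        void push(Frame f) { stack.add(f); }
        void pop()         { stack.remove(stack.size() - 1); }

        // Trusted code turns its latent privilege on before using it.
        void enablePrivilege(String target) {
            Frame top = (Frame) stack.get(stack.size() - 1);
            if (!top.trusted.contains(target))
                throw new SecurityException("cannot enable " + target);
            top.enabled.add(target);
        }

        // Called by a sensitive primitive, e.g. the file system.
        void checkPrivilege(String target) {
            for (int i = stack.size() - 1; i >= 0; i--) {
                Frame f = (Frame) stack.get(i);
                if (!f.trusted.contains(target))
                    throw new SecurityException(target + " denied");
                if (f.enabled.contains(target))
                    return;  // enabled before any untrusted frame
            }
            throw new SecurityException(target + " never enabled");
        }
    }

An untrusted applet calling through a trusted library thus gains only the privileges the library deliberately enables, and only for the duration of that call.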

The strange identifiers within the table are simply unique anchor points for you to attach commentary about how a given security model relates to a given criterion. In this context, use the "support" link type for commentary primarily claiming the model does well by a criterion, "issue" for claimed problems with respect to a criterion, and the self-explanatory "query" and "comment". These types display as differently colored backlink markers to help guide other readers.


Methodology

There are no disinterested experts. You don't become an expert unless you're interested.
            --Arthur Kantrowitz
The objectivity of science does not depend on the objectivity of scientists.
            --Karl Popper

I admit it, I have an ax to grind. I passionately believe in a particular security model--pure capabilities--and feel it has been maligned largely by two widespread misunderstandings: capabilities were wrongly thought incapable of providing certain security arrangements, and some security arrangements promised by other models were wrongly thought to be possible. My first hope for this forum is to repair both errors. No doubt I have such misunderstandings of other models, and I hope to repair these as well. Now you know where I'm coming from.

However, I have created two separate document structures. This tree of pages, rooted at this page, is my attempt at a neutral framework--a semi-fair playing field for starting the discussion. Separately, I am writing editorial material as commentary linked to the framework. If you browse the framework through the Foresight Mediator you will see little bracketing icons indicating text to which commentary is attached. These icons take you back to the commentary, whether made by me or others. My ability to visibly comment on the framework is no different from yours. You are invited to write your own commentary and make it similarly accessible. We can comment on the framework, as well as on each other's commentary, ad infinitum. We'll have a few fits and starts though--this fact forum on computer security is the Mediator's maiden voyage.

Semi-fair is the most I could achieve. My points of view give rise to both my conclusions about computer security and my notion of the problem space. Nevertheless, I tried to construct a framework which a diversity of views would recognize as fairly posing problems any security paradigm should be willing to address. Computer security is too big a topic for one big unstructured discussion. Framework documents present rendezvous points for discussing the sub-topics. Note that neither Saltzer and Schroeder nor Wallach et al advocate capabilities, yet I propose that the discussion incorporate their taxonomy.

If you feel this framework is only good enough to criticize, please do so. With the Mediator making commentary navigable, such criticism may help us form a mutually acceptable framework. If you don't like the framework enough to help improve it, create your own framework and invite us to play in yours. Since a given comment can be linked to multiple frameworks, it's much less than twice the work to comment in two frameworks. There is nothing special about this framework other than being first. Any set of documents indexed by the Mediator can serve the same function. Enjoy.


Let us argue to learn, not to win.