<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<!--last modified on Saturday, October 03, 1998 04:19 PM -->
<HTML><!-- #BeginTemplate "/Templates/caplet.dwt" -->

<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html;CHARSET=iso-8859-1">
<!-- #BeginEditable "doctitle" --> 
<TITLE>Computer Security: Fact Forum Framework</TITLE>
<!-- #EndEditable --> 
<meta name="Author" content="Mark S. Miller">
<link rel=author rev=made href="mailto:markm@caplet.com" title="Mark S. Miller">
<META NAME="description" CONTENT="Caplet(tm) Security: A Consulting Company">
<META NAME="keywords" CONTENT="Capability Security, Capabilities, Cryptography, Distributed Objects, Distributed
  Language, Distributed Capabilities, Lambda Calculus, Scripting Language, Distributed Language, Persistent
  Language, Persistent Capabilities, Persistent Objects, Java Shell, Capability Shell, Scripting Java, Smart
  Contracting, Agoric E-Commerce, Open Source ">
</HEAD>

<BODY TEXT="#000000" BGCOLOR="#FFFFFF" LINK="#0000FF" VLINK="#800080" background="../../images/back.jpg">
<P> 
<TABLE BORDER="0" width="100%">
  <TR VALIGN="TOP"> 
    <TD WIDTH="10%">&nbsp;</TD>
    <TD> 
      <P> 
      <TABLE BORDER="0" WIDTH="100%">
        <TR> 
          <TD ALIGN="LEFT"><a href="../../index.html"><img src="../../images/lgmarb3.gif" width="26" height="26" align="absmiddle" border="0"></a></TD>
          <TD ALIGN="RIGHT"> 
            <!-- #BeginEditable "BigTitle" --> 
            <center>
              <h1 align="right"><font size="7">Computer Security:<br>
                </font> <font size="5">Fact Forum Framework</font></h1>
            </center>
            <!-- #EndEditable -->
          </TD>
        </TR>
      </TABLE>
      <hr>
      <!-- #BeginEditable "LongBody" --> 
      &gt;&gt;<a href="prevention.html">Prevention</a>&gt;&gt; 
      <p>This document-structure is intended to seed a <a
href="http://www.foresight.org/WebEnhance/HPEK4.html#anchor345654">fact forum</a> 
        about computer security. It has been written to be viewed under the <a href="http://crit.org/">Crit 
        Mediator</a>. If you do not see the Mediator banner at the beginning of 
        the document, please follow <a
href="http://crit.org/http://www.caplet.com/security/taxonomy/index.html">this 
        link</a>. If that worked, you should now be seeing this document under 
        the Mediator banner. Through the Mediator you can read commentary others 
        have posted on this document, and post your own commentary for others 
        to read. 
      <p>The pages in this document are arranged, outline-like, into a tree. At 
        the top of each page, you will see links of the form 
      <ul>
        &lt;&lt;<i><font color="#000080">Previous Item&lt;&lt;</font></i> Up to 
        <i><font color="#000080">Parent Item</font></i> &gt;&gt;<i><font
color="#000080">Next Item&gt;&gt;</font></i> 
      </ul>
      enabling you to traverse the tree sequentially, or to return to this page's 
      parent. Each page also has a table, such as the one <a
href="#RiskLevels">below</a>, providing a map of the child pages. Box elements 
      that are links take you to a discussion of that subject, as well as a more 
      detailed box structure. 
      <p>Other contributions to this Fact Forum can be found <a
href="http://discuss.foresight.org/~foresight/CSFactForum.html">here</a>. 
       
        <hr width="100%">
        <h2>Understanding the Limits of the Possible</h2>
      <p>In many fields of engineering, productive work only takes off once the 
        shape of the boundary between the possible and impossible is understood. 
        In mechanical engineering, people kept building perpetual motion machines 
        for a long time after they were generally understood to be impossible. 
        People kept trying because the only known direction toward the good was 
        toward perpetual motion. Then <a href=
"http://physics.hallym.ac.kr/reference/physicist/Carnot_Sadi.html">Carnot</a> 
        invented what we now call <a
href="http://www.combustion.me.vt.edu/ME3105/lect17.htm">Carnot efficiency</a>--a 
        precise statement of the shape of this boundary for mechanical efficiency. 
        With this, thermodynamics was invented and sensible people stopped trying 
        to create perpetual motion machines. </p>
      <p>The computer security field needs to stop building perpetual motion machines. 
        Currently, possible and impossible goals are mixed together without distinction. 
        This leads to frustration, as one cannot succeed at impossible goals. 
        However, unlike perpetual motion, a computer security architecture may 
        become widely deployed, and many people may come to depend on it, before 
        it is discovered that it can never meet its promises. This history of 
        frustration then leads to the common wisdom that "true computer security 
        is impossible." That which is thought to be impossible is not demanded, 
        and that which is not demanded is not commercially produced. With no visible 
        counter-examples, misunderstandings persist. 
      <p>As more and more of what we value in the world comes to be managed by 
        software, we must do better even to retain the security properties we're 
        used to in the physical world. Fortunately, we can do so, and more. This 
        fact forum is dedicated to developing the understandings needed for us 
        to succeed. 
       
        <hr width="100%">
        <h2>Cooperation Without Vulnerability</h2>
      <p><img src="boundary.gif" height=335 width=416 align=RIGHT>For many, the 
        intuitive purpose of computer security is to keep bad things from happening, 
        i.e., to avoid eavesdropping, damage, or attack. This fact forum refers 
        to this goal as <i>safety</i>, and it is a necessary part of any computer 
        security system. However, by itself safety is a trivial goal: nothing 
        bad can happen in a computer that is turned off. Obviously then, the goal 
        of computer security must be to achieve safety while still allowing some 
        kinds of good things to happen. But what kinds of good things? </p>
      <p>The most obvious answer is <i>general purpose computation</i>. This is 
        also necessary in any computer security system, and corresponds to <a href="perimeter.html">perimeter 
        security</a>. However, by itself, it still doesn't do us much good. A 
        Universal Turing Machine in a sealed vault under Cheyenne Mountain can 
        safely engage in any computation, and if I'm in the vault with it I can 
        obtain the benefits. The benefits I'm missing come from interacting with 
        others, so the security problem is <i>how to safely obtain the benefits 
        of interacting with entities you do not trust</i>. 
      <p>A pattern of interaction undertaken in the expectation of mutual benefit 
        we call a <i>pattern of cooperation</i>. As the Cheyenne Mountain example 
        shows, complete safety is achievable when no cooperation is needed. Similarly, 
        imagine a timesharing system in which all programs execute in system-mode 
        inside one giant address space. All patterns of cooperation possible in 
        computation are clearly possible in this system, but the participants 
        have no safety <i>from each other</i>. Were this timesharing system and 
        its users all in that vault under Cheyenne, they would still be safe from 
        outsiders, but again, this is uninteresting. 
      <p>Commerce raises many examples of real security issues. In commerce one 
        often needs to cooperate safely with those one doesn't trust. A shop is 
        willing to be open to outsiders only because its cash register isn't. 
        The shop is neither fully open nor fully closed. In anticipated <i>electronic 
        commerce</i>, the interactions take place electronically, the agents are 
        running computer programs, and many transactions happen with no human 
        awareness. This environment is sufficiently different that simple analogies 
        are misleading. Confusion about computer security is widely recognized 
        as a major inhibitor of the emergence of electronic commerce. 
      <p><img src="pareto.gif" height=196 width=258 align=RIGHT>There are many 
        kinds of safety, and many patterns of cooperation. Some desirable combinations--shown 
        in green--are known to be possible within computation. These represent 
        <i>lower bounds</i> on the limits of the possible. Other desirable combinations--shown 
        in red--are known to be impossible. These represent <i>upper bounds</i>. 
        The actual boundary between possible and impossible lies within the gray 
        of our ignorance. As this fact forum proceeds, we hope to shrink the gray 
        area by discovering and establishing both new possibilities and new impossibilities. 
        Should we discover that G is possible, the lower bound moves up from (A,B) 
        to (A,G). Should we instead discover that G is impossible, the upper bound 
        moves down from (D,E) to (G,E). 
      <p>There are many security models and architectures--such as <i>capabilities</i>, 
        <i>access control lists</i> (ACLs)<i>, ring security</i> (MLS), and more. 
        This fact forum is for discussing these as well. A given security model 
        will enable certain security relationships to be expressed, such as certain 
        prohibitions, and fail to express others. Armed with some understanding 
        of the possible--independent of model--one can make two criticisms of 
        a model's expressiveness. A model can fail to provide for the expression 
        of security that is both possible and desirable, and a model can enable 
        the expression of security that is impossible. The former is a weakness, 
        the latter is a danger. By expressing prohibitions they cannot prevent, 
        these models foster a false sense of security and put everyone at risk. 
        Publicly exposing such false promises makes the world a safer place. 
      <p>The intersection between the expressiveness of a given model and the 
        possible is a limit on what is possible <i>within</i> that model. By establishing 
        which points are possible within one model but not another we discover 
        their relative advantages. Plausibly, different models will be found better 
        for different purposes. 
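      <p>To make the contrast concrete, here is a small sketch, in Python rather 
        than any particular secure language, of the object-capability style of 
        expression. All names here (<tt>Purse</tt>, <tt>ReadOnlyFacet</tt>) are 
        invented for this illustration and are not drawn from the models above; 
        the point is only that in a capability model, holding a reference <i>is</i> 
        the authority, and attenuated authority is expressed by handing out a 
        narrower object rather than by editing an access list.</p>

```python
# Illustrative object-capability sketch. Names are invented for this
# example; this is not code from any system discussed in this document.

class Purse:
    """Holding a reference to a Purse *is* the authority to use it."""
    def __init__(self, balance=0):
        self._balance = balance

    def balance(self):
        return self._balance

    def deposit(self, amount, source):
        # The authority check is structural: the caller must already
        # hold references to both purses to move money between them.
        if amount < 0 or source._balance < amount:
            raise ValueError("invalid transfer")
        source._balance -= amount
        self._balance += amount

class ReadOnlyFacet:
    """An attenuated capability: grants balance queries, not transfers."""
    def __init__(self, purse):
        self._purse = purse

    def balance(self):
        return self._purse.balance()

alice = Purse(100)
bob = Purse(0)
bob.deposit(30, alice)          # possible: caller holds both references
auditor = ReadOnlyFacet(alice)  # the auditor can observe but not move money
print(auditor.balance())        # prints 70
```

      <p>An ACL-style system would instead express the auditor's read-only access 
        as an entry in a list consulted at each operation; whether each style 
        can express the other's prohibitions is exactly the kind of question this 
        forum poses.</p>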
       
        <hr width="100%">
        <h2>The Difference Between Theory and Practice</h2>
      <blockquote><i><font size=+1>In theory, there's no difference between theory 
        and practice. In practice there is.</font></i></blockquote>
      <ul>
        <ul>
          <ul>
            <ul>
              <ul>
                <blockquote>--Chip Morningstar</blockquote>
              </ul>
            </ul>
          </ul>
        </ul>
      </ul>
      <p>Of course, the above is an oversimplified view of security engineering. 
        The varieties of safety have a more complex relationship than simply <i>more 
        safe</i> and <i>less safe</i>. Similarly with cooperation. The question 
        of theoretical possibility, in a formal computer science sense, does not 
        have as much relationship to <i>practicality </i>as one might like. An 
        arrangement that is theoretically possible may be impractical. Worse, 
        an arrangement that is theoretically impossible may be practical anyway. 
        For example, though a theoretically possible attack prevents a given arrangement 
        from being theoretically safe, if the attack itself is impractical the 
        arrangement may be practically safe. This fact forum is relevant to real 
        world engineering only if it helps us understand the boundary between 
        practical and impractical. Why not focus on that instead? </p>
      <p>Our technology changes quickly, but theory is timeless. Questions of 
        practicality derive from current processor speeds, relative market share 
        of different products, consumer perceptions, and more. Many of us--myself 
        included--are in businesses for which these questions are pressing. However, 
        a fact forum would be less informative on these, and such issues would 
        distract from the progress we <i>can </i>make. Nevertheless, certain issues 
        of engineering practicality are plausibly fairly timeless (such as the 
        difficulty of preventing <a href="confinement.html">wall banging</a>), 
        and such issues are welcome in this forum. 
       
        <hr width="100%">
        <h2>Taxonomies of Issues</h2>
      <p>Each section includes a table of sub-topics. Each table cell is or will 
        be a link to a child page expanding on that sub-topic, and often with 
        a table of links to further children. </p>
      <h3> <a name="RiskLevels"></a>Levels of Risk</h3>
      With the above framework, one can ask "for a given pattern of cooperation, 
      how safe can we be?" Broadly speaking, in decreasing order of safety, the 
      three levels are: 
      <table align=RIGHT border=4 >
        <tr> 
          <td><a href="prevention.html">Prevention</a></td>
        </tr>
        <tr> 
          <td><a href="deterence.html">Deterrence</a></td>
        </tr>
        <tr> 
          <td><a href="admonition.html">Admonition</a></td>
        </tr>
      </table>
      &nbsp; 
      <ul>
        <li> 
          <p><i>Prevention</i> provides safety by actually making the danger impossible--given 
            assumptions and caveats that must be made explicit. Ideally, prevention 
            systems provide the analog of physical law for computation. For example, 
            given a correct realization of the Java architecture, a Java applet 
            could no more write to an arbitrary memory location than you or I 
            can go faster than the speed of light.</p>
        </li>
        <li> 
          <p><i>Deterrence</i> is more like the world of human military, legal, 
            or commercial arrangements. These systems seek to discourage attack 
            by arranging for it not to be in anyone's interest to attack, or better, 
            for it to be against the interests of those with the opportunity. 
            On this topic, we can be badly misled by our intuitions of the real 
            world. One cannot punish an object by jailing it.</p>
        </li>
        <li>
          <p> Much of the world works by polite request, or <i>admonition</i>, 
            and the decent willingness of others to often abide by these admonitions, 
            even when there are no consequences for violation. Often, you can 
            get someone to avoid endangering you just by asking them. Software 
            can help.</p>
        </li>
      </ul>
      Deterrence and admonition blend into each other. If someone repeatedly violates 
      my admonitions, I may eventually find out and stop dealing with them. This 
      prospect can deter violation. However, the engineering issues of supporting 
      deterrence <i>vs</i> admonition are quite different, and they are usefully 
      distinguished. <i>Admonition systems</i> seek only to catch accidental 
      violation of requests one does intend to abide by. <i>Deterrence systems 
      </i>seek to change intentions by affecting payoffs, but generally assume 
      entities act in accordance with their interests, or at least their intentions. 
      Admonition systems may improve the accuracy of this assumption. 
      <p>Classically, computer security has been about systems of prevention. 
        This is the most desirable form of safety, the one specific to computation, 
        and the one that most admits progress by formal reasoning. Accordingly, 
        I expect most activity in this fact forum to occur in this area, so it 
        contains the bulk of seeding framework. 
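      <p>As a concrete, if simplified, illustration of prevention (again a Python 
        sketch with invented names, not code from any system discussed here): 
        a <i>revocable forwarder</i> makes continued access impossible after revocation, 
        rather than merely detectable or punishable after the fact.</p>

```python
# Prevention-by-construction sketch: a revocable forwarder. After
# revocation, further use is impossible, not merely deterred.
# All names here are invented for this illustration.

def make_caretaker(target):
    holder = {"target": target}

    class Forwarder:
        def __getattr__(self, name):
            t = holder["target"]
            if t is None:
                raise PermissionError("capability revoked")
            return getattr(t, name)

    def revoke():
        holder["target"] = None

    return Forwarder(), revoke

class Logger:
    def __init__(self):
        self.lines = []
    def log(self, msg):
        self.lines.append(msg)

log = Logger()
proxy, revoke = make_caretaker(log)
proxy.log("before revocation")   # forwarded to the real logger
revoke()
try:
    proxy.log("after revocation")
except PermissionError:
    print("access prevented")    # the danger is now impossible
```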
       
        <hr width="100%">
        <h3>The Classic Saltzer and Schroeder criteria</h3>
      <p>We have received permission from Saltzer and Schroeder to upload their 
        article to the web.&nbsp; <a
href="http://cap-lore.com/CapTheory/ProtInf/">This</a> is the OCRed result. </p>
      <p>These criteria were also used in <a
href="http://www.cs.princeton.edu/sip/pub/extensible.html">Extensible Security 
        Architectures for Java</a>, by Wallach et al. We are seeking permission 
        to upload this paper as well. Fortunately, it is already in electronic 
        form; PostScript is available at that link. <br>
        &nbsp; 
      <center>
        <table border=4 cellpadding=4 >
          <tr> 
            <td></td>
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Descriptors.html#list-system-B-(The-Capability-System)"
>Capabilities</a></td>
            <td> 
              <center>
                <a href=
"http://cap-lore.com/CapTheory/ProtInf/Descriptors.html#C-(The-Access-Control-List-System)"
>Access Control&nbsp;<br>
                Lists (ACLs)</a> 
              </center>
            </td>
            <td> 
              <center>
                Ring Security 
              </center>
              <center>
                (MLS) 
              </center>
            </td>
            <td> 
              <center>
                Stack 
              </center>
              <center>
                Introspection 
              </center>
            </td>
            <td>Type Hiding</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#a-(Economy-of-mechanism)-Keep-the-design"
>Economy of mechanism</a></td>
            <td>eomCap</td>
            <td>eomAcl</td>
            <td>eomMls</td>
            <td>eomSi</td>
            <td>eomTh</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#b-(Fail-safe-defaults)-Base-access-decisions"
>Fail-safe defaults</a></td>
            <td>fsdCap</td>
            <td>fsdAcl</td>
            <td>fsdMls</td>
            <td>fsdSi</td>
            <td>fsdTh</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#c-(Complete-mediation)-Every-access-to-every"
>Complete mediation</a></td>
            <td>cmCap</td>
            <td>cmAcl</td>
            <td>cmMls</td>
            <td>cmSi</td>
            <td>cmTh</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#d-(Open-design)-The-design-should-not-be-secret"
>Open design</a></td>
            <td>odCap</td>
            <td>odAcl</td>
            <td>odMls</td>
            <td>odSi</td>
            <td>odTh</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#e-(Separation-of-privilege)-Where-feasible"
>Separation of privilege</a></td>
            <td>sopCap</td>
            <td>sopAcl</td>
            <td>sopMls</td>
            <td>sopSi</td>
            <td>sopTh</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#f-(Least-privilege)-Every-program"
>Least privilege</a></td>
            <td>lpCap</td>
            <td>lpAcl</td>
            <td>lpMls</td>
            <td>lpSi</td>
            <td>lpTh</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#g-(Least-common-mechanism)-Minimize-the-amount"
>Least common mechanism</a></td>
            <td>lcmCap</td>
            <td>lcmAcl</td>
            <td>lcmMls</td>
            <td>lcmSi</td>
            <td>lcmTh</td>
          </tr>
          <tr> 
            <td><a href=
"http://cap-lore.com/CapTheory/ProtInf/Basic.html#h-(Psychological-acceptability)-It-is-essential"
>Psychological acceptability</a></td>
            <td>paCap</td>
            <td>paAcl</td>
            <td>paMls</td>
            <td>paSi</td>
            <td>paTh</td>
          </tr>
          <tr> 
            <td>Accountability</td>
            <td>acctCap</td>
            <td>acctAcl</td>
            <td>acctMls</td>
            <td>acctSi</td>
            <td>acctTh</td>
          </tr>
          <tr> 
            <td>Performance</td>
            <td>perfCap</td>
            <td>perfAcl</td>
            <td>perfMls</td>
            <td>perfSi</td>
            <td>perfTh</td>
          </tr>
          <tr> 
            <td>Compatibility</td>
            <td>compCap</td>
            <td>compAcl</td>
            <td>compMls</td>
            <td>compSi</td>
            <td>compTh</td>
          </tr>
          <tr> 
            <td>Remote Calls</td>
            <td>rpcCap</td>
            <td>rpcAcl</td>
            <td>rpcMls</td>
            <td>rpcSi</td>
            <td>rpcTh</td>
          </tr>
        </table>
      </center>
      <p>The columns above are security models to be examined. Saltzer and Schroeder 
        examined Capabilities and Access Control Lists. Wallach <i>et al</i> examined 
        Capabilities, Stack Introspection (what Netscape uses), and Type Hiding. 
        Though Ring Security is examined by neither, it is the major paradigm 
        of both Multics and <a href="http://www.disa.mil/MLS/info/orange/">The 
        Orange Book</a>, and so bears examination by these criteria. The first 
        eight rows above represent the evaluation criteria used by both these 
        papers.&nbsp; The last four are further criteria introduced by Wallach 
        <i>et al</i>. When we have permission to upload an HTML version of the 
        Wallach paper, these row and column headings will be linked appropriately. </p>
      <p>The strange identifiers within the table are simply unique anchor points 
        for you to attach commentary about how a given security model relates 
        to a given criterion. In this context, use the "<tt><font
color="#009900"><font size=+1>support</font></font></tt>" link type for commentary 
        primarily claiming the model does well by this criterion, "<tt><font color="#CC0000"><font
size=+1>issue</font></font></tt>" for claimed problems by this criterion, and the 
        self-explanatory "<tt><font color="#FF6600"><font
size=+1>query</font></font></tt>" and "<tt><font color="#000099"><font
size=+1>comment</font></font></tt>". These types display as differently colored 
        backlink markers to help guide other readers. 
      <p> 
      <hr width="100%">
      <h2> Methodology</h2>
      <blockquote><i><font size=+1>There are no disinterested experts. You don't 
        become an expert unless you're interested.</font></i></blockquote>
      <blockquote> 
        <ul>
          <ul>
            <ul>
              <ul>
                <ul>
                  --Arthur Kantrowitz 
                </ul>
              </ul>
            </ul>
          </ul>
        </ul>
        <i><font size=+1>The objectivity of science does not depend on the objectivity 
        of scientists.</font></i> 
        <ul>
          <ul>
            <ul>
              <ul>
                <ul>
                  --Karl Popper 
                </ul>
              </ul>
            </ul>
          </ul>
        </ul>
      </blockquote>
      <p>I admit it, I have an ax to grind. I passionately believe in a particular 
        security model--pure capabilities--and feel it has been maligned largely 
        by two widespread misunderstandings: capabilities were wrongly thought 
        incapable of providing certain security arrangements, and some security 
        arrangements promised by other models were wrongly thought to be possible. 
        My first hope for this forum is to repair both errors. No doubt I have 
        such misunderstandings of other models, and I hope to repair these as 
        well. Now you know where I'm coming from. </p>
      <p>However, I have created two separate document structures. This tree of 
        pages, rooted at this page, is my attempt at neutral framework--a semi-fair 
        playing field for starting the discussion. Separately, I am writing <a href="../editorial/index.html">editorial</a> 
        material as commentary linked to the framework. If you browse the framework 
        through the Foresight Mediator you will see little bracketing icons indicating 
        text to which commentary is attached. These icons take you back to the 
        commentary, whether made by me or others. My ability to visibly comment 
        on the framework is no different from yours. You are invited to write 
        your own commentary and make it similarly accessible. We can comment on 
        the framework, as well as on each other's commentary, <i>ad infinitum</i>. 
        We'll have a few fits and starts though--this fact forum on computer security 
        is the Mediator's maiden voyage. 
      <p><i>Semi-fair</i> is the most I could achieve. My points of view give 
        rise to both my conclusions about computer security and my notion of the 
        problem space. Nevertheless, I tried to construct a framework which a 
        diversity of views would recognize as fairly posing problems any security 
        paradigm should be willing to address. Computer security is too big a 
        topic for one big unstructured discussion. Framework documents present 
        rendezvous points for discussing the sub-topics. Note that neither Saltzer 
        and Schroeder nor Wallach <i>et al</i> advocate capabilities, yet I do 
        propose that the discussion incorporate their taxonomy. 
      <p>If you feel this framework is only good enough to criticize, please do so. 
        With the Mediator making commentary navigable, such criticism may help 
        us form a mutually acceptable framework. If you don't like the framework 
        enough to help improve it, create your own framework and invite us to 
        play in yours. Since a given comment can be linked to multiple frameworks, 
        it's much less than twice the work to comment in two frameworks. There 
        is nothing special about this framework other than being first. Any set 
        of documents indexed by the Mediator can serve the same function. Enjoy. 
      <p> 
      <hr width="100%">
      <blockquote> 
        <center>
          <i><font size=+1>Let us argue to learn, not to win.</font></i> 
        </center>
      </blockquote>
      <!-- #EndEditable --></TD>
    <TD WIDTH="10%">&nbsp;</TD>
  </TR>
  <TR VALIGN="TOP"> 
    <TD WIDTH="10%">&nbsp;</TD>
    <TD> 
      <hr>
      <div align="center"> 
        <p><a href="../../index.html"><img src="../../images/lgmarb3.gif" width="26" height="26" align="bottom" border="0"></a> 
          <i><b><font size="5">H</font>ome</b></i></p>
        <table width="100%" border="0" cellspacing="0" cellpadding="4">
          <tr> 
            <td> 
              <div align="left"><i><a href="mailto:webmaster@caplet.com">email 
                MarkM</a></i><br>
                or <a href="http://www.blindpay.com/crit-me-now.cgi"><img src="../../images/cmn.gif" width="98" height="21" border="0"></a> 
              </div>
            </td>
            <td> 
              <div align="right"><a href="http://www.epic.org/crypto/"><img src="../../images/key.gif" width="37" height="19" alt="Golden Key Campaign" border="0"></a>&nbsp;<a href="http://www.eff.org/br/"><img src="../../images/ribbon.gif" width="18" height="30"
alt="Blue Ribbon Campaign" border="0"></a><br>
                <a href="http://www.freesklyarov.org/"><i>Free Dmitry!</i></a> 
              </div>
            </td>
          </tr>
        </table>
      </div>
    </TD>
    <td width="10%" valign="bottom">&nbsp;</td>
  </TR>
</TABLE>
</BODY>

<!-- #EndTemplate --></HTML>
