A Taste of Computer Security

Defining Computer Security

There is no precise definition of the "security" of computer systems. Although we may all have an intuitive feel for what it means to us, computer security is extremely hard to describe formally. It is a fuzzy, open-ended, all-encompassing concept, almost a human one. In fact, it might be argued that computer security is primarily a human issue. Our sense of computer security is often similar to our sense of personal security, and surely there are situations in which a "threat" to computer security would be tantamount to a threat to human life.


An often-used word in the context of computer security is "trust", which is also a human quality: you cannot trust (or mistrust, for that matter) a computer! In a way, when you trust a system, you are really trusting those who designed it, those who implemented it, those who configured it, those who maintain it, and so on. Trust is a complicated concept. Note that in mathematical terms, trust is neither a symmetric nor a transitive relation.
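The claim that trust is neither symmetric nor transitive can be made concrete by modeling a trust relation as a set of ordered pairs. The following sketch is purely illustrative; the parties and the relation are invented:

```python
# Illustrative sketch: trust modeled as a relation (a set of ordered pairs).
# The parties here are hypothetical; the point is that the relation need be
# neither symmetric nor transitive.
trusts = {("alice", "bob"), ("bob", "carol")}

def is_symmetric(rel):
    """Symmetric: whenever a trusts b, b also trusts a."""
    return all((b, a) in rel for (a, b) in rel)

def is_transitive(rel):
    """Transitive: whenever a trusts b and b trusts c, a also trusts c."""
    return all((a, c) in rel
               for (a, b) in rel
               for (b2, c) in rel
               if b == b2)

print(is_symmetric(trusts))   # False: alice trusts bob, but not vice versa
print(is_transitive(trusts))  # False: alice trusts bob, bob trusts carol,
                              # yet alice does not trust carol
```

In other words, your trusting a system does not oblige the system's builders to trust you, and trusting a vendor who trusts a supplier does not mean you trust that supplier.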

In Trust or Bust: Communicating Trustworthiness in Web Design, Jakob Nielsen says that trust is "hard to build and easy to lose: a single violation of trust can destroy years of slowly accumulated credibility."

Trust in a system could be defined as the level of confidence in its integrity. This connotation is part of the philosophy behind Trusted Computing, a concept that has emerged in recent years as an evolutionary approach to providing computing platforms with stronger integrity. These "trusted platforms" aim not to replace secure platforms, but to make their security stronger and simpler.

Nevertheless, just as it is hard to provide computer security with reasonable guarantees, it is hard to build a system that can be trusted with a high level of confidence under all circumstances.


We looked at a seat-of-the-pants definition of computer security in Popular Notions About Security. Although I would not hazard a strict or formal definition, let us refine our existing understanding.

Most general-purpose operating systems are shared systems. This sharing of resources may happen between multiple users on one system, and between multiple networked computers. Even if a system has only one user, multiple entities within the system (such as various applications) share its resources.

A typical system has hardware and software resources, or objects: processors, memory, storage, network bandwidth, and so on. Subjects (users, or entities executing on behalf of users, such as processes and threads — depending on the granularity at which subjects are defined) access and use these objects usually through well-defined interfaces. Thus, generically speaking, "Subjects (processes) perform operations on objects (resources)." Now, processes must not perform operations that they are not "supposed to". They must be protected from each other, and the overall system must be protected from processes. Protection in operating systems is the foundation for both security and reliability. Protection mechanisms ("how") allow for enforcement of policies ("what").
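The subject/object model above, with a mechanism enforcing a policy, can be sketched in a few lines. The subjects, objects, and policy below are entirely hypothetical and not drawn from any real operating system:

```python
# Hypothetical sketch: subjects perform operations on objects, and a
# protection mechanism ("how") enforces a policy ("what").
# The policy is expressed here as an access matrix: for each
# (subject, object) pair, the set of permitted operations.
policy = {
    ("alice",   "/etc/passwd"):       {"read"},
    ("alice",   "/home/alice/notes"): {"read", "write"},
    ("backupd", "/etc/passwd"):       {"read"},
}

def check_access(subject, obj, operation):
    """The mechanism: a reference monitor consulted on every access.
    Grants the operation only if the policy explicitly permits it."""
    return operation in policy.get((subject, obj), set())

print(check_access("alice", "/etc/passwd", "read"))    # True
print(check_access("alice", "/etc/passwd", "write"))   # False: not permitted
print(check_access("mallory", "/etc/passwd", "read"))  # False: unknown subject
```

Note the separation of concerns: the policy can change (say, granting "write" to a subject) without touching the enforcement mechanism, which is exactly why protection in operating systems is described as mechanisms enforcing policies.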


We could therefore understand computer security as a condition wherein all resources are always used as intended. Again, "as intended" is subjective, and it would be impossible to enumerate all of one's intentions exhaustively. Surely, there could be a situation that neither the designers of the system nor its users have thought of yet.

In slightly more concrete terms, security is something that allows a system and its users to:


A definition of security could be reinforced by describing the absence of security, which we shall informally call "insecurity". A computer system's resources (including external, shared resources such as the network) are all vulnerable to attacks: from outside, and in many cases, especially from within. We could understand a vulnerability as the potential for unintended use, resulting from a software bug, a design oversight or error, a misconfiguration, and so on. A vulnerability, when exploited via an attack, could lead to tangible or intangible damage. Common types of potential damage include:

Note that a system's resources could be misused without denying service to legitimate users, and without causing any apparent damage to the system itself. For example, if a system's resources are all lying idle, and it is misused only as a stepping stone to infiltrate another system, both systems suffer damage, albeit of varying tangibility.

Finally, damage could also be incidental, without a proactive attack. Such damage, which happens "by itself" during legitimate use, could be a result of human error, hardware or software bugs, power failure, hardware failure, and even natural phenomena such as earthquakes, floods, hurricanes, rain, snow, storms, and tornadoes.

With rapidly increasing deployments of security-related software, it becomes important for security companies and their customers to quantify the effectiveness of such software (to choose what software to use, to calculate return on investment, to advertise, and so on). In this context, a rule-of-thumb definition of security is often cited: a system is considered secure if its "secure time" is greater than its "insecure time". Secure time is simply the time during which a system is protected, that is, free of "incidents". Insecure time is the sum of the time it takes to detect an incident and the time it takes to react to the incident, summed over all incidents in a given interval:

A system is secure in a given time interval t if Tsecure(t) > Σi (Tdetect,i + Treact,i), where the sum is taken over all incidents i occurring in the same interval t.

Note that if some incident goes undetected, its detection time is effectively unbounded, and the system is trivially insecure.
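The rule of thumb above is easy to compute. The following sketch uses invented incident data (times in hours) purely for illustration:

```python
# Hypothetical sketch of the secure-time rule of thumb.
# Each incident is a (detect_time, react_time) pair; all numbers are
# invented for illustration and measured in hours.

def is_secure(t_secure, incidents):
    """A system is 'secure' over an interval if its secure time exceeds
    the total time spent detecting and reacting to incidents."""
    t_insecure = sum(t_detect + t_react for (t_detect, t_react) in incidents)
    return t_secure > t_insecure

# A 720-hour month with two incidents: 2+6 hours and 1+3 hours of
# detection and reaction, i.e., 12 insecure hours in total.
incidents = [(2.0, 6.0), (1.0, 3.0)]
print(is_secure(720 - 12, incidents))  # True: 708 > 12
```

An undetected incident would correspond to an unbounded detection time, which makes the sum unbounded and the system trivially insecure under this metric.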

Quantifying Security

Even with all the subjectivity surrounding security, it is useful, and often required, to officially rate the security of a system (or a system component). Such ratings are assigned using standardized evaluation criteria.

The Orange Book

The U.S. Department of Defense Trusted Computer System Evaluation Criteria (TCSEC) classifies systems into four broad hierarchical divisions of enhanced security protection: D, C, B, and A, with division A providing the most comprehensive security. These security ratings, popularly known as the Orange Book, are as follows (note that they apply to specific components as well as entire operating systems):

For more details, refer to Department Of Defense Trusted Computer System Evaluation Criteria.

Common Criteria for IT Security Evaluation

In June 1993, the U.S., Canadian, and European organizations behind various security criteria started the Common Criteria (CC) project, to evolve their separate criteria into a single, internationally accepted set of IT security criteria. Refer to the official web site of the CC project for details. The CC rating scheme consists of the following evaluation assurance levels, or EALs (approximate Orange Book equivalents are in parentheses):

Regarding backwards compatibility, the CC objective states that: "The CC EALs have been developed with the goal of preserving the concepts of assurance drawn from the source criteria so that results of previous evaluations remain relevant. [Using the approximate equivalents] general equivalency statements are possible, but should be made with caution as the levels do not drive assurance in the same manner, and exact mappings do not exist."

Examples of some CC ratings are as follows:

A '+' indicates that the system meets some, but not all, requirements of a higher rating.
