A Taste of Computer Security

© Amit Singh. All Rights Reserved. Written in August 2004


A common, although not inevitable, side effect of enhancing a system's security is that the system becomes harder to program and harder to use. Security-related steps that are exposed to end users, that is, steps the users are required to perform, must be easy to understand and easy to use. If not, users may bypass, perhaps altogether, the steps they find especially frustrating.

This section contains the following subsections:


Passwords are one of the weakest links in the security chain, and expectedly so — passwords involve humans. Consider several problematic aspects of passwords:

Operating systems have been adopting single sign-on mechanisms (usually leaving only one password to remember) that ensure the security and integrity of sensitive information. The most popular of these is Kerberos, a key distribution and authentication system. The Kerberos protocol is meant for use on insecure networks, and is based on a protocol published in 1978 by Roger Needham and Michael Schroeder, researchers at the Xerox Palo Alto Research Center.
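The core idea shared by Kerberos and the Needham-Schroeder protocol is that a trusted key distribution center (KDC) shares a long-term secret with every principal and hands out short-lived session keys inside sealed "tickets", so long-term keys never cross the network. The following is a minimal sketch of that flow, not the Kerberos wire format; the XOR "cipher" and all names are invented for illustration only:

```python
import hashlib, hmac, json, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy cipher for illustration only: XOR with a SHA-256-derived keystream.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# The KDC shares a long-term key with every principal.
keys = {"alice": os.urandom(32), "fileserver": os.urandom(32)}

def kdc_issue(client: str, service: str):
    session = os.urandom(32)
    # Ticket: client name + session key, sealed under the *service's* key.
    ticket = keystream_xor(keys[service], json.dumps(
        {"client": client, "session": session.hex()}).encode())
    # The same session key, sealed under the *client's* key.
    for_client = keystream_xor(keys[client], session)
    return for_client, ticket

# Client obtains credentials, recovers the session key, builds an authenticator.
for_client, ticket = kdc_issue("alice", "fileserver")
session = keystream_xor(keys["alice"], for_client)
authenticator = hmac.new(session, b"alice", hashlib.sha256).digest()

# Service unseals the ticket with its own key and checks the authenticator.
info = json.loads(keystream_xor(keys["fileserver"], ticket))
ok = hmac.compare_digest(
    hmac.new(bytes.fromhex(info["session"]), info["client"].encode(),
             hashlib.sha256).digest(),
    authenticator)
print(ok)  # True: the client proved knowledge of the session key
```

Note that the service never talks to the KDC online: the ticket itself, being sealed under the service's long-term key, vouches for the session key inside it.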

Kerberos has several open source and commercial implementations. In particular, Microsoft Windows and Mac OS X use Kerberos.

What to put a password on?

It can be difficult to establish the "best" (for the end user, the administrator, the manufacturer, or a particular situation) subsystem to protect using a password. A personal computer could have a power-on firmware password, a boot password, a disk-drive password, and several operating-system- or application-level passwords. Sometimes, external hardware (for example, a smart card or a fingerprint reader) is used for authentication.

Often, seemingly strong password mechanisms may have backdoors, such as a special password that is always accepted, or a special trigger that bypasses the mechanism. For example, the password mechanism in Apple's Open Firmware can be rendered ineffective by altering the machine's memory configuration.

An unrecoverable and backdoor-free password on a critical component (like a disk-drive) offers strong privacy, especially in case of theft. However, users do forget their passwords!


A popular (perhaps due to the movies), though not yet widely used, form of authentication is biometrics. An alternative to password-based identity, biometrics has two primary pieces of information: the biometric data (BD), and the biometric code (BC). BD is biometric information as captured or measured, say, in real-time, by a biometric reader. BC is a template of a user's individual biometric information, registered and stored in the system. The software in the biometric system tries to match a BC given a BD, or verify a BD against a particular BC. A successful authentication would then lead to appropriate authorization.
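Because two readings of the same trait are never bit-identical, matching a BD against a BC is a threshold decision rather than an equality test. The sketch below models templates as fixed-length bit vectors compared by Hamming distance (in the spirit of iris codes); the names, values, and threshold are illustrative only:

```python
# Toy sketch: biometric codes (BC) as 64-bit vectors, matched against a
# live reading (BD) by Hamming distance. Real systems use far richer
# features; everything here is invented for illustration.

def hamming(a: int, b: int, bits: int = 64) -> int:
    return bin((a ^ b) & ((1 << bits) - 1)).count("1")

# Enrolled biometric codes, one per user.
enrolled = {"alice": 0xDEADBEEFCAFEF00D, "bob": 0x0123456789ABCDEF}

THRESHOLD = 8  # maximum differing bits tolerated between a BD and a BC

def verify(user: str, bd: int) -> bool:
    """1:1 match -- does this live sample (BD) match this user's BC?"""
    return hamming(enrolled[user], bd) <= THRESHOLD

def identify(bd: int):
    """1:N match -- whose BC, if anyone's, does this BD resemble?"""
    best = min(enrolled, key=lambda u: hamming(enrolled[u], bd))
    return best if verify(best, bd) else None

# A fresh reading is noisy: flip a few bits of alice's enrolled template.
noisy_bd = enrolled["alice"] ^ 0b10100000001
print(verify("alice", noisy_bd))  # True: within the error tolerance
print(identify(noisy_bd))         # alice
```

The threshold trades off the two error rates mentioned below: raising it admits more impostors (false accepts), lowering it rejects more legitimate users (false rejects).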

An important benefit of biometrics is convenience — while you may forget a password, you do not need to "remember", say, your fingerprint or retina information. However, the concept is not without serious drawbacks. If somebody learns your password, you could always change it. If somebody "knows" your biometric information (for example, somebody creates a replica of your fingerprints), you cannot change that. Moreover, present-day biometric readers often have rather high error rates.

Plan 9

Plan 9 from Outer Space, the movie (1959), is often dubbed the worst movie ever made. Still, many consider it to be extremely watchable, perhaps partly because of its amazingly poor qualities.

Plan 9, the operating system, has been rather unsuccessful, particularly when compared with UNIX, even though the inventors of UNIX were involved in Plan 9's creation. Still, Plan 9 has been a source of several good ideas, some of which made their way into other operating systems (for example, the /proc filesystem).

Plan 9 is a departure from Unix in many ways. In a particular sense, Plan 9 might be considered an extreme case of Unix: while many objects are files on Unix, everything is a file on Plan 9.

Plan 9's current security architecture is based around a per-user self-contained agent called factotum. The word comes from Latin fac (meaning "do") and totum (meaning "everything"). Thus, a factotum is something that has many diverse activities or responsibilities — a general servant, say. Factotum is implemented as a Plan 9 file server (mounted on /mnt/factotum). It manages a user's keys, and negotiates all security interactions with applications and services. It is helpful to think of factotum as being similar to ssh-agent, except that it is an agent for all applications.

The idea is to limit security-related code to a single program, resulting in uniformity, easy update and fault isolation, better sandboxing, and so on. For example, if a security-related algorithm or protocol is compiled into various applications, an update would require, at the very least, that each application be restarted, and perhaps relinked or even recompiled. With factotum, a program that would otherwise have been compiled with cryptographic code simply communicates with the factotum agent.
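The agent pattern at work here can be sketched briefly: secrets live in one process, and applications forward challenges to it, receiving proofs back without ever seeing key material. On Plan 9 the agent is a file server mounted on /mnt/factotum; the Python class below is merely a stand-in for that boundary, with invented names and a generic HMAC challenge-response in place of Plan 9's actual protocols:

```python
import hashlib, hmac, os

class Factotum:
    """Stand-in for an agent like Plan 9's factotum: it holds keys and
    performs operations on behalf of applications, but never releases
    the key material itself."""
    def __init__(self):
        self._keys = {}          # secrets live only inside the agent

    def add_key(self, key_spec: str, secret: bytes):
        self._keys[key_spec] = secret

    def respond(self, key_spec: str, challenge: bytes) -> bytes:
        # The application hands in a challenge and gets back a proof;
        # the secret never crosses the agent boundary.
        return hmac.new(self._keys[key_spec], challenge,
                        hashlib.sha256).digest()

# One agent per user, shared by all of that user's applications.
agent = Factotum()
agent.add_key("proto=p9sk1 dom=example.org", os.urandom(32))

# A server issues a challenge; the application merely relays it.
challenge = os.urandom(16)
proof = agent.respond("proto=p9sk1 dom=example.org", challenge)
print(len(proof))  # 32: an HMAC-SHA256 proof, with the key never exposed
```

Updating a cryptographic algorithm then means restarting one agent process, not relinking every application, which is precisely the maintenance benefit described above.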

The rationale of this approach is that since secure protocols and algorithms are well-understood and tested, they are usually not the weakest link, and thus factoring them out in a single code base is a good thing. Intuitively, it might seem that factotum would be a single point of failure — an attacker can now just focus on "breaking" factotum. However, the designers of Plan 9 argue that it is better to have "... a simple security architecture built upon a small trusted code base that is easy to verify (whether by manual or automatic means), easy to understand, and easy to use."

Moreover, if a security service or protocol needs a certain privilege level, it is factotum, and not the application, that needs to run with that privilege level. In other words, privileged execution is limited to a single process, with a small code base. The scheme allows a factotum process to prevent anybody (including the owner of the process) from accessing its memory. This is achieved by writing the word private to the /proc/pid/ctl file. Note that the only way to examine a process's memory in Plan 9 is through the /proc interface.

A similar concept is privilege separation, as used in OpenBSD.

Trusted Computing

It is useful to have computing platforms in which we can have a greater level of confidence than is possible today. Intuitively, you could "trust" a system more if it is "known" not to be easily modifiable: an embedded system, say. Still, an embedded system is not guaranteed to be secure, and there have been demonstrable security flaws in embedded systems.

The Trusted Computing Platform Alliance (TCPA) was a collaborative initiative involving major industry players (IBM, Intel, Microsoft, and some others). The successor to TCPA is the Trusted Computing Group (TCG), whose goal is to develop vendor-neutral standard specifications for trusted computing.

Trusted Computing is a technology that exists today, but it is not yet mature. The factors retarding its success are not so much technical as they are related to deployment and adoption. Creating a trusted computing environment requires changes in both hardware and software; hence the inertia. The fundamental idea is that users need to have greater (but still not 100%) confidence that the platform in front of them has not been subverted. The "trust" relates to questions such as:

A Trusted Platform is a computing platform that has a trusted component — tamper-resistant built-in hardware, say. Consider one example, that of the TCPA/TCG platform, in which the trusted hardware is called the Trusted Platform Module (TPM). The TPM is the core of the trusted subsystem, with features such as:

Generic benefits of Trusted Computing include more trustworthy systems (better platform integrity), greater and stronger personal privacy, and better data security. More importantly, Trusted Computing has the potential to make security simpler.

Examples of Use

Note that a TPM is not meant to be 100% tamper-proof. It is only tamper-resistant, and might not stand up to an attack from the owner himself.

There are certain aspects of Trusted Computing that are often misunderstood, such as its relationship to the controversial Digital Rights Management (DRM). Consider a few noteworthy aspects:

Microsoft's NGSCB

Microsoft's "Next-Generation Secure Computing Base for Windows" (NGSCB, formerly called "Palladium") is an effort to create a Windows-based trusted platform. Essentially a trusted execution subsystem, NGSCB requires architectural changes to both hardware and software, including the Windows kernel. At this time, the technology seems to be several years away from deployment and potential mainstream adoption. More importantly, while partly similar in concept to TCG's efforts, NGSCB is not an implementation of the existing specifications developed by TCPA or TCG. NGSCB has a wider scope, and requires changes to more hardware components (such as the CPU and memory controllers).


"Security through obscurity" has historically been controversial. It refers to deliberately and strategically hiding information about the implementation or design of a component. Perhaps there was a time when such an approach could be justified to a greater degree than today. Despite being outwitted time and again, the industry never seems to lose its faith in obfuscation as a defense mechanism. While clever hiding of certain aspects of a program might be worthwhile, if only to mislead an attacker, it could be a colossal error to rely on such approaches for security. In fact, it could be argued that since an attacker is likely to thrive on challenge, using deception to make something harder to break might actually motivate him more.


We saw earlier that randomization is a rather popular approach currently being used to make it difficult, or practically impossible, for memory-based attacks to succeed. This is a good example of obscurity being put to intellectually defensible use.

In addition to objects in the address space of a process (or the kernel), other aspects of a system could be randomized to create hardened systems. For example, you could create a system where each instance has a universally unique ABI (say, with system call numbers randomized at installation time).

Many seem to agree today that full disclosure is the better approach. In particular, making the design or implementation of something public results in better testing of its security: more people, with diverse experience and expertise, are able to review it.

Sometimes an attacker may not even have to de-obfuscate (or reverse-engineer) an obfuscated component to achieve his goal. Consider the example of a license-management framework that uses public-key cryptography and extensive obfuscation to thwart attacks on itself. Such a system might be trivial to break, say, via minor instruction manipulation, without the attacker ever having to understand how the overall scheme works.

That said, information hiding is an interesting field in itself. Steganography, hiding secret information in otherwise publicly available information, has attracted great attention in the past few years.

Cryptography protects secret information by transforming it in a way such that only intended parties can use it. Steganography tries to hide the very existence of such information. Thus, the two are potentially complementary.
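A classic illustration of this difference is least-significant-bit (LSB) embedding: the secret rides in the low-order bits of cover data, where small perturbations are imperceptible. The sketch below hides a message in a byte buffer standing in for pixel data; it is a bare-bones illustration, with no encryption of the payload (which a real system would combine with it, per the complementarity noted above):

```python
# Minimal LSB steganography sketch: hide a message in the least-significant
# bits of a byte "image". Illustrative only; the cover here is random noise
# standing in for pixel data.
import os

def embed(cover: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover too small"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # rewrite only the low bit
    return out

def extract(stego: bytearray, length: int) -> bytes:
    bits = [b & 1 for b in stego[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(length))

cover = bytearray(os.urandom(256))
stego = embed(cover, b"attack at dawn")
print(extract(stego, 14))  # b'attack at dawn'
```

An observer comparing the cover and stego buffers sees at most a one-bit change per byte, which is why the existence of the message, not just its content, stays hidden.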


In the past few years, the issue of quantifying Return On Security Investment (ROSI) has been raised. In Finally, a real return on security spending [CIO Magazine, February 15, 2002], Berinato claims that fear, uncertainty, and doubt (FUD) has been used to sell security ("if you scare them, they will spend"). The concept could be loosely compared to situations consumers deal with in various purchasing scenarios: "You really need that paint-protection and rust-proofing", or "You really ought to buy the 3-year extended warranty on this floppy-diskette: it's only $39.95 + tax!" CIOs are interested in determining if spending a certain amount of money would give them their money's worth of security.

While it is a tough problem to define the worth of a security investment, models for doing so exist, and new ones are emerging.

Computers and Law Enforcement

High-tech crime is getting more cutting-edge, while low-tech crime is getting high-tech. With the rising criticality of electronic evidence in criminal cases, it is becoming increasingly important for upholders of the law — the police, prosecutors, judges — to have computer skills, and in particular, to be well-versed in digital forensics. Tackling this problem will be a multi-dimensional challenge for law enforcement.

Grab Bag

There are numerous other technologies or mechanisms that either use, support, enhance, or otherwise involve security and privacy. While we shall not discuss any of these, some examples are:
