A Taste of Computer Security

© Amit Singh. All Rights Reserved. Written in June 2004

Dealing with Intrusion

As we saw earlier, viruses and worms have been part of computing for several decades — since before the advent of the Internet, and before the advent of Windows.

Since the early 1960s, it has been possible to connect to remote computers — via modems.

Looking Back

In early 1980, James P. Anderson published a report on a study whose purpose was to improve the computer security auditing and surveillance capability of customers' computer systems. In this report, titled Computer Security Threat Monitoring and Surveillance, Anderson introduced the concept of intrusion detection and concluded that "The discussion does not suggest the elimination of any existing security audit data collection and distribution. Rather it suggests augmenting any such schemes with information for the security personnel directly involved."

In the early 1980s, a group of "hackers" known as the "414s" (after their area code, 414) used PCs and modems to dial into remote computers. Many of these computers were not really protected (with passwords, for example), and once you had the dial-in phone number, you could gain access. The hackers simply wrote a program to keep dialing numbers until a modem answered. Moreover, it was not as easy to trace callers as it is today.

In September 1986, several computers in the San Francisco Bay Area were broken into, apparently as a result of somebody first breaking into one Stanford computer that had a default username and password pair (such as { guest, guest }).


Today, computer system intrusion is almost taken for granted. Although it is of paramount importance to prevent intrusions from happening, a perfect solution is perhaps impossible. Consequently, in addition to combating intrusion by preventing it, there is emphasis on detecting it, whether in real time or after the fact. More specifically, one would like to detect an intrusion attempt as soon as possible, and take necessary and appropriate actions to minimize (preferably avoid) any actual damage. There has been plenty of research activity in this area, particularly in the last decade. Today, an Intrusion Detection System (IDS) is an important component of any security infrastructure.

Historically, IDSs were not proactive: once an intrusion was detected, they took no action themselves to stop it. Recent systems have given importance to coupling detection events with preventive mechanisms. For example, IDSs exist that respond to threats automatically by reconfiguring firewalls, terminating network connections, killing processes, and blocking traffic from suspected malicious networks.


The intrusion detection problem could be addressed using two broad approaches: detecting misuse, and detecting anomalies.

Misuse Detection

Misuse detection is essentially pattern matching: patterns of activity are compared with known signatures of intrusive attacks. As with virus scanning, this approach requires a database of signatures of known attacks, and the database requires frequent updates as new exploits are developed and discovered. While some signatures can be simple, requiring little or no state (for example, a specific number of failed login attempts, connection attempts from a reserved IP address range, network packets with illegal protocol flags, or email with a virus as an attachment), others may require more complex analysis, and more state. An example of the latter kind would be an attempt to exploit a remote race-condition vulnerability.
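As a sketch of the stateless end of this spectrum, the following Python fragment matches log lines against a small signature database, firing an alert when a per-signature threshold is crossed. The signature names, patterns, and thresholds are invented for illustration; real signature databases are far larger and more precise.

```python
import re

# Hypothetical signature database: each entry maps a rule name to a
# regular expression matched against log lines, plus a threshold of
# occurrences before an alert fires.
SIGNATURES = {
    "failed-logins": (re.compile(r"authentication failure"), 5),
    "reserved-source": (re.compile(r"SRC=10\.\d+\.\d+\.\d+"), 1),
}

def scan(log_lines):
    """Return the names of signatures whose thresholds were reached."""
    counts = {name: 0 for name in SIGNATURES}
    alerts = []
    for line in log_lines:
        for name, (pattern, threshold) in SIGNATURES.items():
            if pattern.search(line):
                counts[name] += 1
                if counts[name] == threshold:
                    alerts.append(name)
    return alerts
```

Even this toy example carries state (the occurrence counts); a signature for a remote race-condition exploit would need to track far more, such as the relative timing and ordering of requests.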

Anomaly Detection

Anomaly detection looks for activity patterns that deviate (with a degree of fuzziness) from "normal" behavior. The implicit assumption is that intrusive activity will be anomalous. Note that it is extremely difficult to define what is normal: legitimate system use may produce anomalous activity, leading to false alarms. Moreover, an intruder may deliberately avoid changing his behavior patterns abruptly so as to defeat detection. It is both expensive and difficult to build a profile of normal activity, and to match against it.

There have been attempts to build the execution profiles of applications statistically, that is, by tracking aspects such as the names and frequencies of system calls made, sockets of various types used, files accessed, processor time used, and so on. You could also take into account the probability of an event happening in an application given the events that have happened so far (prediction). Researchers have also considered using neural networks to predict such future events.
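A minimal sketch of such statistical profiling follows, assuming system-call traces are available as lists of call names. The distance metric used here (an L1 distance over call frequencies) is a deliberate simplification; real systems use more sophisticated measures, such as matching short call sequences against a learned profile.

```python
from collections import Counter

def build_profile(traces):
    """Aggregate system-call frequencies across training traces
    into a baseline profile of relative frequencies."""
    total = Counter()
    for trace in traces:
        total.update(trace)
    n = sum(total.values())
    return {call: count / n for call, count in total.items()}

def anomaly_score(profile, trace):
    """L1 distance between observed and baseline call frequencies;
    larger scores indicate behavior further from the profile."""
    observed = Counter(trace)
    n = len(trace)
    calls = set(profile) | set(observed)
    return sum(abs(profile.get(c, 0.0) - observed[c] / n) for c in calls)
```

A monitor would flag a process whose score exceeds some tuned threshold; choosing that threshold is exactly the false-alarm trade-off described above.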


From the point of view of where they reside, IDSs can be host-based or network-based.

Host-based IDSs (HIDS) that analyze audit trails have existed since the 1980s, although they have evolved into more sophisticated (automated, relatively real-time) systems. Host-based IDSs are system-specific as they operate on the host. In many cases, it may be impossible, or at least difficult, to have an IDS outside of the host: consider an encrypted network or a switched environment. While a host-based IDS may include a firewall (which in turn may be built into the network stack), it can also monitor specific system components.

Note that a firewall reinforces overall security, but by itself does not constitute an IDS. While an IDS is essentially a burglar alarm, a firewall could be likened to a barbed-wire fence.

Network-based IDSs (NIDS) are not part of one particular host. They are meant to operate on a network (multiple hosts), and thus are usually operating system independent. They often work by analyzing raw network packets in real-time. An important benefit of network-based IDSs is that an intruder, even if he is successful, would not be able to remove or tamper with evidence in most cases. In contrast, a successful intruder would find it easier to modify audit or other logs on a host.

Some NIDSs are capable of packet scrubbing. For example, if a NIDS detects shellcode then, rather than dropping the relevant packets, it rewrites them to contain a modified, harmless version of the shellcode. One benefit of doing so is that the attacker does not get a clear indication that his attack was detected.

Protocol scrubbing is also used in the context of traffic normalization, a technique for removing potential ambiguities in incoming network traffic. This is done before a NIDS gets to see the traffic. Thus, the NIDS's job is easier (fewer resources used, fewer false positives, and so on) as it does not have to deal with ambiguities, many of which may be part of benign traffic.

Example: SNARE

SNARE (System Intrusion Analysis and Reporting Environment) is a host-based IDS for collection, analysis, reporting, and archival of audit events. There are SNARE security "agents" for multiple operating systems and applications, and a proprietary SNARE server.

The SNARE system consists of changes to an operating system kernel (for adding auditing support), the SNARE audit daemon (which acts as an interface between the kernel and the security administrator), and the SNARE audit GUI (user interface for managing the daemon). The administrator can turn on events, filter output, and potentially push audit log information to a central location.

SNARE is available for Linux, Solaris, and Windows.

A few examples of IDSs are:

HIDS: Basic Security Module (Solaris), Entercept, RealSecure, SNARE, Squire, Unix syslog facility, Event Logs on NT-based Windows, etc.

NIDS: Dragon, NFR, NetProwler, Shadow, Snort, etc.

Example: Snort

Snort is a popular open source intrusion detection system. In its simplest form of operation, Snort can work as a packet logger or network sniffer. As a Network IDS (NIDS), it works with a user-defined set of rules specified in a simple rule description language. Snort is capable of stateful protocol analysis and content matching. The detection engine uses a plug-in architecture, allowing for easy extension of its detection capabilities, which include detecting buffer overflow attacks (by matching shellcode, say), SMB probes, OS fingerprinting, stealth port scans, CGI NUL-byte attacks, and several more.
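For illustration, a rule in Snort's rule description language might flag a NUL byte sent to a web server along the lines of the following; the message text and SID are made up, and real rulesets qualify their matches far more precisely.

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"hypothetical CGI NUL byte"; flow:to_server,established; content:"|00|"; sid:1000001; rev:1;)
```

The rule header names the action, protocol, source, direction, and destination; the parenthesized options describe what to match (here, the hexadecimal byte 0x00 in established client-to-server traffic) and how to report it.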


There have been several efforts, in academia or otherwise, to build effective and efficient intrusion detection systems. Bro [Paxson, 1998] consisted of a packet capture library (libpcap) over the link layer; an Event Engine for performing integrity checks, processing packets, and potentially generating events; and a Policy Script Interpreter that executes scripts (specifying event handlers) written in the Bro language. Bro's initial implementation did application-specific processing for a small set of applications: Finger, FTP, Portmapper, and Telnet.

Wu, Malkin, and Boneh [Building Intrusion-Tolerant Applications, 1999] argue for building applications that suffer very little harm even in the face of a successful intrusion. The main design principle is that long-term security information should never be located in a single location (no single point of attack). They use the example of a web server's private key, which is split among a number of share servers. Thereafter, the key is never reconstructed at a single location; instead, threshold cryptography allows operations with the key to be performed without ever reconstructing it.

Zhang and Paxson [Detecting Backdoors, 1999] have looked at the specific problem of detecting backdoors that have been implanted to facilitate unauthorized access to a system. They present a general algorithm for detecting interactive backdoors, as well as several application specific algorithms (SSH, Rlogin, Telnet, FTP, Root Shell, Napster, and Gnutella).

Somayaji and Forrest [Automated Response Using System-Call Delays, 2000] use the approach of delaying system calls as an automated intrusion response. Their system, called pH (process homeostasis), monitors every executing process at the system-call level, and responds to anomalies by either slowing down or aborting system calls.
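The core idea of pH's graduated response can be sketched in a few lines of Python. The exponential scaling mirrors pH's escalating delays, but the base delay, the doubling policy, and especially the anomaly detection itself (matching system-call sequences against a per-program normal profile) are simplified assumptions here, not pH's actual parameters.

```python
import time

def delay_factor(recent_anomalies, scale=0.01):
    """Delay (in seconds) imposed before an anomalous system call is
    allowed to proceed: zero when behavior looks normal, doubling
    with each recently observed anomalous call."""
    if recent_anomalies == 0:
        return 0.0
    return scale * (2 ** recent_anomalies)

# A monitor would apply the delay before letting the call through:
#   time.sleep(delay_factor(anomaly_count))
```

The appeal of this design is that occasional false alarms merely slow a process slightly, while a burst of anomalies (as in an exploited process) quickly makes it too slow to do damage.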

Researchers from Network Associates, Inc. used a combination of intrusion detection and software wrapping [Detecting and Countering System Intrusions Using Software Wrappers, 2000] to better defend against intrusions. Using a generic software wrapper toolkit, they implemented all or part of an intrusion detection system as ID wrappers, software layers dynamically introduced into the kernel. ID wrappers are meant to selectively intercept system calls, analyze the interceptions, and respond to potentially harmful events. Their emphasis is on moving most of the IDS logic inside the kernel, for performance reasons, for security (of the IDS itself) reasons, and for finer-grained control. Moreover, their system allows composition of multiple ID techniques (such as those based on specification of valid program behavior, signature-based, and sequence-based) where each technique is implemented in a separate ID wrapper.

Host Integrity

Whether you regard it as a sub-component of a HIDS, complementary to a HIDS, or supplementary to a HIDS, an integrity checking mechanism is a useful security tool. Systems implementing such mechanisms have evolved from rudimentary checksum-based systems to comprehensive, perhaps distributed, change monitoring and reporting frameworks. For example, such a host integrity monitoring agent could monitor the state of (and changes to) entities such as files and directories (their contents, permissions, ownership, and timestamps), user and group databases, and kernel and network configuration.

The most appropriate time to install such a system on a host is before the host is deployed. Specifically, the "baseline", that is, a reference point against which future states of the system will be compared, must be created before deployment. Moreover, the baseline must be stored outside the host, or on read-only media (whose writability cannot be toggled in software). Similarly, the agent that scans the current state of the host and compares it to the baseline would ideally not run on the host itself, and perhaps not even use the filesystem or disk drivers of the host.
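The baseline-and-compare cycle can be sketched with standard-library hashing in Python. This is a minimal illustration: it hashes file contents only, ignoring permissions and ownership, and it says nothing about the crucial step of storing the baseline off-host.

```python
import hashlib
import os

def build_baseline(root):
    """Record a SHA-256 digest for every regular file under root."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def compare(baseline, current):
    """Report paths added, removed, or modified since the baseline."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    modified = {p for p in set(baseline) & set(current)
                if baseline[p] != current[p]}
    return added, removed, modified
```

A later scan simply calls build_baseline again and diffs the result against the stored baseline; the output of compare is the raw material for a change report.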

Examples of host-based integrity tools include Tripwire, AIDE, Osiris, and Samhain.

Countering Intrusion Detection

Good malware proactively attempts to counteract IDSs, a logical extension of the virus/anti-virus battle. Unfortunately, IDSs are worse off in this regard, and it can require a great deal of creative thinking, system resources, and expert administration to come up with an effective IDS. Let us consider some of the issues involved.

Intrusion Prevention System (IPS)

A cutting-edge IDS might include enough features for preventing an attack in real time to qualify as an intrusion prevention system (IPS).

Full-fledged efforts for intrusion prevention would require enough host-specific actions that an effective IPS would probably be host-based, and could be a very useful complement to a regular IDS.

Such systems have emerged rapidly in recent times: systems that monitor system calls, library calls, memory use, overall program behavior, and so on. On the proactive side, such a system could guard against host-based attacks, and perhaps even prevent arbitrary untrusted applications from executing at all (requiring proper deployment, an initiation of sorts, for all new applications). Several companies offer such systems, calling them application firewalls, memory firewalls, host armors, and so on.
