Security principles and models for operating systems

Because operating systems and hypervisors lay the groundwork for security across much of computing, discussions of their design often refer to security principles and models. The principles were covered in considerable detail at the beginning of the material; the concept map presented there is referred to below, both in general terms and in relation to the different types of operating systems. After that, a brief introduction to security models is given. A more comprehensive treatment of these belongs (in later courses) to the domain of access control; they are discussed here because access control often ultimately reduces to what the operating system does.

One central concept should be defined at this point. If we consider a computing system from the perspective of the owner of the data it contains and the security policy they have defined, the TCB (trusted computing base) describes the area within which the policy can be trusted to be enforced “automatically”. It is thus the base on which trusted computing rests. In addition to hardware, it includes parts of the operating system, and in this context the TCB refers specifically to code. The concept map (the latter of the two) in the chapter on trust presents the characteristics of trusted computing. It is also useful to note that what this chapter calls a security domain corresponds to what the concept map calls a trust domain.

Security principles (advanced)

From a security perspective, there should be as complete isolation as possible between different security domains (in the concept map Use structures → Isolation). All interaction across security domain boundaries must be fully mediated, that is, checked (in the map: Allow less → Verify before trusting). Security domains should share as few mechanisms as possible—especially mechanisms that maintain shared state (Make smaller → Number of shared facilities). For example, if a given procedure can be implemented either inside the operating system kernel using global variables or as a user-space library that operates in an isolated manner within each process, the latter should be chosen, provided this does not increase code size excessively or violate other principles or constraints.

In addition, mediation must adhere to the principle of secure defaults (Allow less → Deny by default). For instance, a policy that allows security domains to use each other’s resources must not be of the form “yes, except if” but strictly “no, except if”.

Beyond minimizing shared mechanisms, the principle of economy of mechanism (Make smaller → Complexity) implies that the amount of TCB code requiring trust should be minimized (Make smaller → Security elements (trusted base, code size)). Studies have shown that even skilled programmers produce 1–6 defects per 1,000 lines of code. If code complexity remains unchanged, a smaller TCB means fewer bugs, a smaller attack surface, and better opportunities to verify the correctness of the TCB—automatically or manually—against a formal specification.
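
As a concrete illustration of complete mediation with deny-by-default, the following Python sketch routes every cross-domain operation through a single check against an allowlist. All names here (Policy, mediate, the example subjects and resources) are hypothetical and do not correspond to any real operating system API.

```python
# A minimal sketch of mediated access across a security-domain boundary.
# The names are illustrative, not drawn from any real OS interface.

class Policy:
    """An allowlist policy: everything is denied unless explicitly permitted."""
    def __init__(self):
        self._allowed = set()   # (subject, operation, resource) triples

    def permit(self, subject, operation, resource):
        self._allowed.add((subject, operation, resource))

    def is_allowed(self, subject, operation, resource):
        # "No, except if": the absence of a rule means denial.
        return (subject, operation, resource) in self._allowed

def mediate(policy, subject, operation, resource, action):
    """Every cross-domain interaction must pass through this single check."""
    if not policy.is_allowed(subject, operation, resource):
        raise PermissionError(f"{subject} may not {operation} {resource}")
    return action()

policy = Policy()
policy.permit("editor", "read", "/tmp/draft")
mediate(policy, "editor", "read", "/tmp/draft", lambda: "file contents")  # allowed
# mediate(policy, "editor", "write", "/tmp/draft", ...) would raise PermissionError
```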

In a previous section, four types of operating system structures were presented:

  1. the single security domain case
  2. the monolithic system (… Windows, Linux …)
  3. the multi-server system (… microkernel)
  4. the library operating system (… Unikernel, Exokernel)

Case 1 means that the TCB includes all software in the system, applications included. All mechanisms are “shared”, and there are practically no secure defaults or checks. In a monolithic operating system (case 2) the situation is somewhat better, since at least the operating system is protected from applications and applications are protected from each other. The operating system itself, however, remains a single security domain and thus inherits the disadvantages of case 1.

The extreme compartmentalization (← Isolation ← Use structures) of a multi-server operating system (case 3) is better suited than case 2 for ensuring security: checks between components can be enforced inside a small microkernel that follows secure defaults. Much of the code that in other models resides within the operating system’s security domain—such as driver code—is no longer part of the TCB.

Unikernels (case 4) are an interesting alternative: in principle, the operating system code and the application operate in the same security domain, but the former is kept as small as possible, and mechanisms shared across applications are minimized (→ Security elements). Resource compartmentalization can also be implemented entirely at the Unikernel level. In a Unikernel application, the TCB consists only of the underlying hypervisor or Exokernel and the operating system components that the application chooses to use. Moreover, a library implementing an operating system component is part of the TCB of that application only, since other applications do not share it.

The principle of open design may be more controversial than those discussed above. (Even in the concept map, Organize your work → Open design is not among the highest-level principles.) Endless debates have been held, especially about open-source software as one way of adhering to the principle. It has been compared with closed-source software, and the advantages and disadvantages have been argued from a security perspective. The benefit of open design is that anyone can inspect it, increasing the likelihood of finding defects in general and vulnerabilities in particular. Auguste Kerckhoffs made a similar observation about cryptographic systems in the 19th century, often summarized as the idea that one should not rely on obscurity. Eventually, obscurity fails, and if malicious actors discover a vulnerability before defenders do, problems arise. The counterargument is that with open design, malicious actors also have a higher probability of finding flaws.

By contrast, there is little doubt that a strictly compartmentalized structure realizes both the principles of least privilege and separation of privilege (or duty) (← Allow less) better than an architecture in which most code executes within a single security domain. In particular, a monolithic system lacks real separation of privilege among the various operating system components, and the operating system always runs with full rights. In other words, OS code responsible for tracking the identifier (PID) of the currently running process can modify page tables, create root accounts, alter any files on disk, read and write arbitrary network packets, and crash the entire system at will. Multi-server systems are very different. They can restrict for each OS component the allowed calls to only those it requires for its task (Least privilege). Separation of privilege thus also holds between components. Unikernels—that is, library operating systems—offer a different and interesting way of addressing this problem. Although most components operate within a single domain, without isolation or privilege minimization, the operating system has been stripped down to only those parts required to execute the application, and the Unikernel itself can operate with only the privileges needed for that purpose.

No matter how important security is, the principle of psychological acceptability (Apply constantly in parallel in your mind → User experience view → Acceptance) states that the system must still remain usable. Given the complexity of operating system security, this is not trivial. Although hardened solutions such as SELinux (Security-Enhanced Linux) and Qubes OS provide clear security advantages over many other operating systems, few ordinary users adopt them, and even fewer feel confident configuring their security settings themselves.

Security models (advanced)

An important question in operating systems concerns information flow: who is allowed to read and write which data? Traditionally, system-wide practices are defined using security models. These models are based on differing security requirements of information and the structures that arise from them. Here we consider only a structure with multiple levels (multilevel security, MLS). One could also consider multiple domains (multilateral security). In military tradition, the level hierarchy may be, for example, “unclassified”, “classified”, “secret”, “top secret”, etc. In business contexts, the levels might be “public”, “confidential”, and “secret”.

The application of a grouping structure for access control—especially in military applications—starts from the need-to-know principle, which aims to grant each actor access only to information required to perform their tasks (in the concept map: Allow less → … → Need‑to‑know). Both actors and objects are equipped with security labels, which in the case of actors are called clearance levels and in the case of objects classification levels. In general, these are more or less permanent attributes of the entities involved.
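
To make the label machinery tangible, the following sketch encodes a linear level hierarchy as an ordered enumeration and attaches labels to subjects and objects. The level names follow the business example above; everything else (the names and dictionaries) is illustrative.

```python
# Hypothetical encoding of a linear MLS hierarchy; IntEnum ordering
# stands in for the dominance relation between levels.
from enum import IntEnum

class Level(IntEnum):
    PUBLIC = 0
    CONFIDENTIAL = 1
    SECRET = 2

# A subject's clearance and an object's classification are labels
# drawn from the same ordered set.
clearance = {"alice": Level.SECRET, "bob": Level.PUBLIC}
classification = {"report.txt": Level.CONFIDENTIAL}

# Dominance: alice's clearance dominates the report's classification.
assert clearance["alice"] >= classification["report.txt"]
assert not clearance["bob"] >= classification["report.txt"]
```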

The purpose of security labels that classify information resources into different levels is fairly obvious. To implement a security model, however, the access-enforcing reference monitor needs more precise conditions, so that unauthorized actors can be prevented from viewing the information or “contaminating” it, that is, corrupting its integrity. Basic models usually address only one perspective—either confidentiality or integrity—although combinations are, of course, needed. Availability is a more challenging objective, and relatively few models address it.

Confidentiality

The rules are that one must not read above one’s own level (NRU, No Read Up) and must not write below one’s own level (NWD, No Write Down). These form the core of the model developed by Bell and LaPadula in 1973. The model successfully captured how the use of security labels works in practice. In particular, the NWD rule (the so‑called *-property, or confinement rule) was an innovation, because earlier models did not prevent Trojan horses from leaking information downward. The more obvious NRU rule is also known as the simple security rule. In addition to these rules, the BLP model assumes that object classifications do not change during an operation. This requirement, the so‑called tranquility property (a kind of “stabilization”), can be realized to varying degrees, but without it the security objective would not be achieved.
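
Expressed as code, the two BLP rules reduce to simple comparisons between a subject’s clearance and an object’s classification. This is only a sketch of what a reference monitor would evaluate; the function names and level values are illustrative.

```python
# Reference-monitor checks for the two Bell–LaPadula rules.
from enum import IntEnum

class Level(IntEnum):
    PUBLIC = 0
    CONFIDENTIAL = 1
    SECRET = 2

def blp_can_read(clearance: Level, classification: Level) -> bool:
    # Simple security rule (NRU): read only at or below one's own level.
    return clearance >= classification

def blp_can_write(clearance: Level, classification: Level) -> bool:
    # *-property (NWD): write only at or above one's own level, so a
    # Trojan horse running at SECRET cannot leak into a PUBLIC file.
    return clearance <= classification

assert blp_can_read(Level.SECRET, Level.PUBLIC)       # read down: allowed
assert not blp_can_read(Level.PUBLIC, Level.SECRET)   # read up: denied
assert blp_can_write(Level.PUBLIC, Level.SECRET)      # write up: allowed
assert not blp_can_write(Level.SECRET, Level.PUBLIC)  # write down: denied
```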

Declassification or lowering of security levels (for example, copying information from a top‑secret document into a secret one) may be performed only by special, “trusted” subjects. Strict adherence to this model prevents sensitive data from leaking to unauthorized users.

Integrity

If integrity is the target, a different classification scheme than that used for confidentiality may be required. Let us assume here simply that some form of level hierarchy has been established. It is then easy to observe that the integrity rules are exactly the opposite of those in the BLP model: level‑based integrity is preserved if the rules NWU (No Write Up) and NRD (No Read Down) are followed. This integrity model is a few years younger than BLP and is named after its developer, Biba. It comes in several variations: if the NWU and NRD rules are applied strictly, the result is referred to as the mandatory Biba model. If it is used together with the BLP model and no separate integrity classification is defined, the outcome is that reading and writing are permitted only at the subject’s own level.

Another option is to allow both Write Up and Read Down, but then the integrity level of the higher‑level party (the object being written to, or the subject doing the reading) must be downgraded to the lower level. The Biba model has not, by itself, been particularly practical, apparently because it did not model prevailing real‑world practices.
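
The following sketch contrasts the strict (mandatory) Biba checks with the downgrading variant just described, often called a low‑watermark policy; here the subject’s level drops when it reads down. The names and levels are illustrative.

```python
# Strict Biba checks plus a low-watermark variant for reads.
from enum import IntEnum

class Integrity(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

def biba_can_write(subject: Integrity, obj: Integrity) -> bool:
    # NWU: never write to an object of higher integrity than oneself.
    return subject >= obj

def biba_can_read(subject: Integrity, obj: Integrity) -> bool:
    # NRD: never read an object of lower integrity than oneself.
    return subject <= obj

def low_watermark_read(subject: Integrity, obj: Integrity) -> Integrity:
    # Variant: reading down is allowed, but the subject's integrity
    # level is downgraded to that of the object it read.
    return min(subject, obj)

assert not biba_can_write(Integrity.LOW, Integrity.HIGH)   # write up: denied
assert not biba_can_read(Integrity.HIGH, Integrity.LOW)    # strict read down: denied
assert low_watermark_read(Integrity.HIGH, Integrity.LOW) == Integrity.LOW
```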

About ten years younger than the Biba model is the so‑called commercial integrity model developed by Clark and Wilson. It introduces two central concepts: well‑formed transactions and separation of privilege (or duty). The latter is a standard practice in business for preventing fraud and is also known in military contexts, for example in the handling of nuclear weapons.

Transactions preserve integrity if they are composed of integrity‑preserving transformation procedures. The application of these transformations is subject to access control and, in particular, separation of privilege. The model does not, however, provide a means for determining which transformations actually preserve integrity; in this respect, it is no more complete than the Biba model.
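
One common way to present the Clark–Wilson rules is as access triples binding a user, a certified transformation procedure (TP), and a constrained data item (CDI). The sketch below is a loose illustration of that idea with hypothetical users and procedures; separation of duty appears as the fact that no single user holds both steps of the sensitive transaction.

```python
# Hypothetical Clark–Wilson-style access triples.
certified_tps = {"post_invoice", "approve_invoice"}   # assumed integrity-preserving
triples = {
    ("clerk",   "post_invoice",    "ledger"),
    ("manager", "approve_invoice", "ledger"),
}

def can_execute(user: str, tp: str, cdi: str) -> bool:
    # Only certified procedures may touch a CDI, and only via an
    # explicitly authorized (user, TP, CDI) triple.
    return tp in certified_tps and (user, tp, cdi) in triples

assert can_execute("clerk", "post_invoice", "ledger")
assert not can_execute("clerk", "approve_invoice", "ledger")  # separation of duty
```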

MAC, DAC, RBAC, and others

Bell–LaPadula and Biba are access control models that the operating system applies when mediating access to resources such as memory or files on disk. More precisely, they are MAC (Mandatory Access Control) models, in which a system‑wide policy determines which users are allowed to read or write which objects, and users cannot make information available to others without the appropriate clearance, no matter how convenient it might be. A less strict model is known as DAC (Discretionary Access Control), where users or processes that have access to an object are allowed some discretion over who else may access it. In DAC systems, access decisions are typically based on identities or group membership, and a user or process with sufficient rights may delegate those rights to others. This flexibility makes DAC easier to use, but also makes it more difficult to control information flow in a systematic way than with MAC. Operating systems therefore often combine DAC and MAC models: MAC policies impose global constraints on information flow, while DAC allows users and programs to manage access rights within those constraints.
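
The discretionary aspect of DAC can be shown with a toy object whose current holders of access may pass the right onward, which is precisely what a MAC policy forbids. The class and names below are hypothetical.

```python
# A toy DAC-protected object: access can be delegated at the holder's discretion.
class DacObject:
    def __init__(self, owner: str):
        self.owner = owner
        self.readers = {owner}

    def grant_read(self, grantor: str, grantee: str):
        # Discretion: anyone who already has access may pass it on.
        if grantor not in self.readers:
            raise PermissionError(f"{grantor} cannot delegate access")
        self.readers.add(grantee)

doc = DacObject("alice")
doc.grant_read("alice", "bob")    # alice shares at her own discretion
doc.grant_read("bob", "carol")    # bob may pass the right on further
# Under a MAC policy the same flows would additionally have to satisfy
# the system-wide label checks, regardless of alice's wishes.
```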

Access control is discussed in depth elsewhere. In this context, it is also worth mentioning role‑based access control (RBAC), which restricts access to objects based on roles that may correspond to job functions. Although RBAC is intuitively simple, it can be used to implement both DAC and MAC access control policies. There are also other models. For example, the Chinese Wall model (Brewer and Nash, 1989) addresses access control rules in situations where a consulting firm has multiple client companies, some of which are competitors. The goal is to define classifications and access rights in such a way that no information flow can occur that would create a conflict of interest.
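
Finally, a minimal RBAC sketch with hypothetical roles and permissions: permissions attach to roles rather than to users, and the access check is an indirection through the user’s role set.

```python
# Hypothetical role and permission assignments.
role_permissions = {
    "auditor":   {("read", "ledger")},
    "treasurer": {("read", "ledger"), ("write", "ledger")},
}
user_roles = {"dana": {"auditor"}, "erin": {"treasurer"}}

def rbac_allowed(user: str, operation: str, resource: str) -> bool:
    # A user may act only through the permissions of their roles.
    return any((operation, resource) in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

assert rbac_allowed("dana", "read", "ledger")
assert not rbac_allowed("dana", "write", "ledger")
```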
