Special Publication 800-12: An Introduction to Computer Security - The NIST Handbook
Computer security assurance is the degree of confidence one has that the security measures, both technical and operational, work as intended to protect the system and the information it processes. Assurance is not, however, an absolute guarantee that the measures work as intended. Like the closely related areas of reliability and quality, assurance can be difficult to analyze; however, it is something people expect and obtain (though often without realizing it). For example, people may routinely get product recommendations from colleagues but may not consider such recommendations as providing assurance.
|Security assurance is the degree of confidence one has that the security controls operate correctly and protect the system as intended.|
Assurance is a degree of confidence, not a true measure of how secure the system actually is. This distinction is necessary because it is extremely difficult -- and in many cases virtually impossible -- to know exactly how secure a system is.
Assurance is a challenging subject because it is difficult to describe and even more difficult to quantify. Because of this, many people refer to assurance as a "warm fuzzy feeling" that controls work as intended. However, it is possible to apply a more rigorous approach by knowing two things: (1) who needs to be assured and (2) what types of assurance can be obtained. The person who needs to be assured is the management official who is ultimately responsible for the security of the system. Within the federal government, this person is the authorizing or accrediting official.71
There are many methods and tools for obtaining assurance. For discussion purposes, this chapter categorizes assurance in terms of a general system life cycle. The chapter first discusses planning for assurance and then presents the two categories of assurance methods and tools: (1) design and implementation assurance and (2) operational assurance. Operational assurance is further categorized into audits and monitoring.
The division between design and implementation assurance and operational assurance can be fuzzy. While such issues as configuration management or audits are discussed under operational assurance, they may also be vital during a system's development. The discussion tends to focus more on technical issues during design and implementation assurance and to be a mixture of management, operational, and technical issues under operational assurance. The reader should keep in mind that the division is somewhat artificial and that there is substantial overlap.
Accreditation is a management official's formal acceptance of the adequacy of a system's security. The best way to view computer security accreditation is as a form of quality control. It forces managers and technical staff to work together to find workable, cost-effective solutions given security needs, technical constraints, operational constraints, and mission or business requirements. The accreditation process obliges managers to make the critical decision regarding the adequacy of security safeguards and, therefore, to recognize and perform their role in securing their systems. In order for the decisions to be sound, they need to be based on reliable information about the implementation of both technical and nontechnical safeguards, including whether technical features operate as intended and whether required procedures are being followed.
A computer system should be accredited before the system becomes operational with periodic reaccreditation after major system changes or when significant time has elapsed.72 Even if a system was not initially accredited, the accreditation process can be initiated at any time. Chapter 8 further discusses accreditation.
Assurance is an extremely important -- but not the only -- element in accreditation. As shown in the diagram, assurance addresses whether the technical measures and procedures operate either (1) according to a set of security requirements and specifications or (2) according to general quality principles. Accreditation also addresses whether the system's security requirements are correct and well implemented and whether the level of quality is sufficiently high. These activities are discussed in Chapters 7 and 8.
The accrediting official makes the final decision about how much and what types of assurance are needed for a system. For this decision to be informed, it should be derived from a review of security, such as a risk assessment or other study (e.g., certification), as deemed appropriate by the accrediting official.73 The accrediting official needs to be in a position to analyze the pros and cons of the cost of assurance, the cost of controls, and the risks to the organization. At the end of the accreditation process, the accrediting official will be the one to accept the remaining risk. Thus, the selection of assurance methods should be coordinated with the accrediting official.
In selecting assurance methods, the need for assurance should be weighed against its cost. Assurance can be quite expensive, especially if extensive testing is done. Each method has strengths and weaknesses in terms of cost and what kind of assurance is actually being delivered. A combination of methods can often provide greater assurance, since no method is foolproof, and can be less costly than extensive testing.
The accrediting official is not the only arbiter of assurance. Other officials who use the system should also be consulted. (For example, a Production Manager who relies on a Supply System should provide input to the Supply Manager.) In addition, there may be constraints outside the accrediting official's control that also affect the selection of methods. For instance, some of the methods may unduly restrict competition in acquisitions of federal information processing resources or may be contrary to the organization's privacy policies. Certain assurance methods may be required by organizational policy or directive.
Assurance planning should begin during the planning phase of the system life cycle, either for new systems or system upgrades. It makes sense to plan for assurance at the same time as other system requirements. If a system is going to need extensive testing, it should be built to facilitate such testing.
Planning for assurance helps a manager make decisions about what kind of assurance will be cost-effective. If a manager waits until a system is built or bought to consider assurance, the number of ways to obtain assurance may be much smaller than if the manager had planned for it earlier, and the remaining assurance options may be more expensive.
|Design and implementation assurance should be examined from two points of view: the component and the system. Component assurance looks at the security of a specific product or system component, such as an operating system, application, security add-on, or telecommunications module. System assurance looks at the security of the entire system, including the interaction between products and modules.|
Design and implementation assurance addresses whether the features of a system, application, or component meet security requirements and specifications and whether they are well designed and well built. Chapter 8 discusses the sources for security requirements and specifications. Design and implementation assurance examines system design, development, and installation. It is usually associated with the development/acquisition and implementation phase of the system life cycle; however, it should also be considered throughout the life cycle as the system is modified.
As stated earlier, assurance can address whether the product or system meets a set of security specifications, or it can provide other evidence of quality. This section outlines the major methods for obtaining design and implementation assurance.
Testing can address the quality of the system as built, as implemented, or as operated. Thus, it can be performed throughout the development cycle, after system installation, and throughout its operational phase. Some common testing techniques include functional testing (to see if a given function works according to its requirements) or penetration testing (to see if security can be bypassed). These techniques can range from trying several test cases to in-depth studies using metrics, automated tools, or multiple detailed test cases.
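The functional testing mentioned above can be illustrated with a small sketch in Python. The password-policy function and its stated requirement are illustrative assumptions, not examples from the Handbook; the point is simply that each test case compares a stated requirement against observed behavior.

```python
# Sketch of functional testing: each case checks an assumed requirement
# ("accept only passwords of 8+ characters containing a digit") against
# the function's observed behavior.

def meets_password_policy(password: str) -> bool:
    """Requirement under test (an illustrative assumption)."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

def run_functional_tests() -> list:
    """Run each test case; return the names of any cases that fail."""
    cases = [
        ("short password rejected",  "abc1",     False),
        ("no digit rejected",        "abcdefgh", False),
        ("valid password accepted",  "abcdefg1", True),
        ("boundary length accepted", "1234567a", True),
    ]
    return [name for name, candidate, expected in cases
            if meets_password_policy(candidate) is not expected]

# An empty failure list means the function meets its stated requirement.
```

In-depth testing of the kind the Handbook describes would use many more cases, metrics, and automated tools, but the structure is the same: requirements first, then tests derived from them.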
Certification is a formal process for testing components or systems against a specified set of security requirements. Certification is normally performed by an independent reviewer, rather than one involved in building the system. Certification is more often cost-effective for complex or high-risk systems. Less formal security testing can be used for lower-risk systems. Certification can be performed at many stages of the system design and implementation process and can take place in a laboratory, operating environment, or both.
NIST produces validation suites and conformance testing to determine if a product (software, hardware, firmware) meets specified standards. These test suites are developed for specific standards and use many methods. Conformance to standards can be important for many reasons, including interoperability or strength of security provided. NIST publishes a list of validated products quarterly.
In the development of both commercial off-the-shelf products and more customized systems, the use of advanced or trusted system architectures, development methodologies, or software engineering techniques can provide assurance. Examples include security design and development reviews, formal modeling, mathematical proofs, ISO 9000 quality techniques, or use of security architecture concepts, such as a trusted computing base (TCB) or reference monitor.
Some system architectures are intrinsically more reliable, such as systems that use fault tolerance, redundancy, shadowing, or redundant array of inexpensive disks (RAID) features. These examples are primarily associated with system availability.
One factor in reliable security is the concept of ease of safe use, which postulates that a system that is easier to secure will be more likely to be secure. Security features may be more likely to be used when the initial system defaults to the "most secure" option. In addition, a system's security may be deemed more reliable if it does not use very new technology that has not been tested in the "real" world (often called "bleeding-edge" technology). Conversely, a system that uses older, well-tested software may be less likely to contain bugs.
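The "ease of safe use" idea can be sketched in code: a system whose configuration defaults to the most secure option remains secure even when left unconfigured, and an administrator must act deliberately to weaken it. The setting names below are illustrative assumptions, not drawn from any particular product.

```python
# Sketch of "ease of safe use": configuration defaults are the most
# secure options, so security must be weakened deliberately, never
# strengthened, to depart from the defaults.

from dataclasses import dataclass

@dataclass
class ServiceConfig:
    require_authentication: bool = True   # most secure option by default
    allow_anonymous_access: bool = False  # risky features off by default
    session_timeout_minutes: int = 15     # short timeout unless relaxed
    log_failed_logins: bool = True        # auditing on by default

secure_by_default = ServiceConfig()                  # out of the box
relaxed = ServiceConfig(session_timeout_minutes=60)  # an explicit choice
```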
A product evaluation normally includes testing. Evaluations can be performed by many types of organizations, including government agencies, both domestic and foreign; independent organizations, such as trade and professional organizations; other vendors or commercial groups; or individual users or user consortia. Product reviews in trade literature are a form of evaluation, as are more formal reviews made against specific criteria. Important factors for using evaluations are the degree of independence of the evaluating group, whether the evaluation criteria reflect needed security features, the rigor of the testing, the testing environment, the age of the evaluation, the competence of the evaluating organization, and the limitations placed on the evaluations by the evaluating group (e.g., assumptions about the threat or operating environment).
The ability to describe security requirements and how they were met can reflect the degree to which a system or product designer understands applicable security issues. Without a good understanding of the requirements, it is not likely that the designer will be able to meet them.
Assurance documentation can address the security either for a system or for specific components. System-level documentation should describe the system's security requirements and how they have been implemented, including interrelationships among applications, the operating system, or networks. System-level documentation addresses more than just the operating system, the security system, and applications; it describes the system as integrated and implemented in a particular environment. Component documentation will generally be an off-the-shelf product, whereas the system designer or implementer will generally develop system documentation.
The accreditation of a product or system to operate in a similar situation can be used to provide some assurance. However, it is important to realize that an accreditation is environment- and system-specific. Since accreditation balances risk against advantages, the same product may be appropriately accredited for one environment but not for another, even by the same accrediting official.
A vendor's, integrator's, or system developer's self-certification does not rely on an impartial or independent agent to perform a technical evaluation of a system to see how well it meets a stated security requirement. Even though it is not impartial, it can still provide assurance. The self-certifier's reputation is on the line, and a resulting certification report can be read to determine whether the security requirement was defined and whether a meaningful review was performed.
A hybrid certification is possible where the work is performed under the auspices or review of an independent organization by having that organization analyze the resulting report, perform spot checks, or perform other oversight. This method may be able to combine the lower cost and greater speed of a self-certification with the impartiality of an independent review. The review, however, may not be as thorough as independent evaluation or testing.
Warranties are another source of assurance. If a manufacturer, producer, system developer, or integrator is willing to correct errors within certain time frames or by the next release, this should give the system manager a sense of commitment to the product and of the product's quality. An integrity statement is a formal declaration or certification of the product. It can be backed up by a promise to (a) fix the item (warranty) or (b) pay for losses (liability) if the product does not conform to the integrity statement.
A manufacturer's or developer's published assertion or formal declaration provides a limited amount of assurance based exclusively on reputation.
It is often important to know that software has arrived unmodified, especially if it is distributed electronically. In such cases, checkbits or digital signatures can provide high assurance that code has not been modified. Anti-virus software can be used to check software that comes from sources with unknown reliability (such as a bulletin board).
Design and implementation assurance addresses the quality of security features built into systems. Operational assurance addresses whether the system's technical features are being bypassed or have vulnerabilities and whether required procedures are being followed. It does not address changes in the system's security requirements, which could be caused by changes to the system and its operating or threat environment. (These changes are addressed in Chapter 8.)
Security tends to degrade during the operational phase of the system life cycle. System users and operators discover new ways to intentionally or unintentionally bypass or subvert security (especially if there is a perception that bypassing security improves functionality). Users and administrators often think that nothing will happen to them or their system, so they shortcut security. Strict adherence to procedures is rare; procedures become outdated, and errors in the system's administration commonly occur.
Organizations use two basic methods to maintain operational assurance: a system audit, a one-time or periodic event that evaluates security, and monitoring, an ongoing activity that examines either the system or its users.
In general, the more "real-time" an activity is, the more it falls into the category of monitoring. This distinction can create some unnecessary linguistic hairsplitting, especially concerning system-generated audit trails. Daily or weekly reviewing of the audit trail (for unauthorized access attempts) is generally monitoring, while an historical review of several months' worth of the trail (tracing the actions of a specific user) is probably an audit.
An audit conducted to support operational assurance examines whether the system is meeting stated or implied security requirements including system and organization policies. Some audits also examine whether security requirements are appropriate, but this is outside the scope of operational assurance. (See Chapter 8.) Less formal audits are often called security reviews.
|A person who performs an independent audit should be free from personal and external constraints that may impair their independence and should be organizationally independent.|
Audits can be self-administered or independent (either internal or external).74 Both types can provide excellent information about technical, procedural, managerial, or other aspects of security. The essential difference between a self-audit and an independent audit is objectivity. Reviews done by system management staff, often called self-audits/assessments, have an inherent conflict of interest. The system management staff may have little incentive to say that the computer system was poorly designed or is sloppily operated. On the other hand, they may be motivated by a strong desire to improve the security of the system. In addition, they are knowledgeable about the system and may be able to find hidden problems.
The independent auditor, by contrast, should have no professional stake in the system. Independent audit may be performed by a professional audit staff in accordance with generally accepted auditing standards.
There are many methods and tools, some of which are described here, that can be used to audit a system. Several of them overlap.
Even for small multiuser computer systems, it is a big job to manually review security features. Automated tools make it feasible to review even large computer systems for a variety of security flaws.
There are two types of automated tools: (1) active tools, which find vulnerabilities by trying to exploit them, and (2) passive tools, which only examine the system and infer the existence of problems from the state of the system.
Automated tools can be used to help find a variety of threats and vulnerabilities, such as improper access controls or access control configurations, weak passwords, lack of integrity of the system software, or not using all relevant software updates and patches. These tools are often very successful at finding vulnerabilities and are sometimes used by hackers to break into systems. Not taking advantage of these tools puts system administrators at a disadvantage. Many of the tools are simple to use; however, some programs (such as access-control auditing tools for large mainframe systems) require specialized skill to use and interpret.
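As a sketch of the passive variety of tool, the checks below only examine system state (file permission bits, a password list) and infer problems from it, without attempting any exploit. The weak-password list and the checks themselves are illustrative assumptions, far simpler than a real auditing tool.

```python
# Sketch of passive automated checks: examine system state and infer
# problems from it, rather than attempting an exploit.

import os
import stat

# Illustrative list; real tools use large dictionaries.
COMMON_WEAK_PASSWORDS = {"password", "letmein", "12345678", "admin"}

def world_writable(path: str) -> bool:
    """Flag a file whose permission bits let any user write to it."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

def weak_passwords(accounts: dict) -> list:
    """Flag accounts whose password appears on a common-password list."""
    return [user for user, pw in accounts.items()
            if pw.lower() in COMMON_WEAK_PASSWORDS]
```

An active tool would instead attempt the misuse itself, for example by trying to log in with each weak password.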
|The General Accounting Office provides standards and guidance for internal controls audits of federal agencies.|
An auditor can review controls in place and determine whether they are effective. The auditor will often analyze both computer and noncomputer-based controls. Techniques used include inquiry, observation, and testing (of both the controls themselves and the data). The audit can also detect illegal acts, errors, irregularities, or a lack of compliance with laws and regulations. Security checklists and penetration testing, discussed below, may be used.
|Warning: Security checklists that are passed (e.g., with a B+ or better score) are often used mistakenly as proof (instead of an indication) that security is sufficient. Also, managers of systems which "fail" a checklist often focus too much attention on "getting the points," rather than on whether the security measures make sense in the particular environment and are correctly implemented.|
Within the government, the computer security plan provides a checklist against which the system can be audited. This plan, discussed in Chapter 8, outlines the major security considerations for a system, including management, operational, and technical issues. One advantage of using a computer security plan is that it reflects the unique security environment of the system, rather than a generic list of controls. Other checklists can be developed, which include national or organizational security policies and practices (often referred to as baselines). Lists of "generally accepted security practices" (GSSPs) can also be used. Care needs to be taken so that deviations from the list are not automatically considered wrong, since they may be appropriate for the system's particular environment or technical constraints.
Checklists can also be used to verify that changes to the system have been reviewed from a security point of view. A common audit examines the system's configuration to see if major changes (such as connecting to the Internet) have occurred that have not yet been analyzed from a security point of view.
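A configuration audit of the kind just described might be sketched as a comparison of the current configuration against the last security-reviewed baseline, flagging anything changed or added since the review. The setting names are illustrative assumptions.

```python
# Sketch of a checklist-style configuration audit: report settings that
# differ from, or were added since, the last security-reviewed baseline.

def unreviewed_changes(baseline: dict, current: dict) -> dict:
    """Return settings whose values changed or appeared since review."""
    return {key: value for key, value in current.items()
            if key not in baseline or baseline[key] != value}

# Example: an Internet gateway was enabled after the last review.
baseline = {"internet_gateway": "disabled", "modem_pool": "disabled"}
current = {"internet_gateway": "enabled", "modem_pool": "disabled"}
```

Any flagged setting is not necessarily wrong; it simply has not yet been analyzed from a security point of view.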
Penetration testing can use many methods to attempt a system break-in. In addition to using active automated tools as described above, penetration testing can be done "manually." The most useful type of penetration testing is to use methods that might really be used against the system. For hosts on the Internet, this would certainly include automated tools. For many systems, lax procedures or a lack of internal controls on applications are common vulnerabilities that penetration testing can target. Another method is "social engineering," which involves getting users or administrators to divulge information about systems, including their passwords.75
Security monitoring is an ongoing activity that looks for vulnerabilities and security problems. Many of the methods are similar to those used for audits, but are done more regularly or, for some automated tools, in real time.
As discussed in Chapter 8, a periodic review of system-generated logs can detect security problems, including attempts to exceed access authority or gain system access during unusual hours.
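A periodic review of this kind can be sketched as a small log-scanning routine that flags failed access attempts and successful logins outside business hours. The record format and the business-hours policy are illustrative assumptions, not a standard log layout.

```python
# Sketch of a periodic log review: flag failed access attempts and
# successful logins outside assumed business hours.

from datetime import datetime

BUSINESS_HOURS = range(8, 18)  # 08:00-17:59, an assumed policy

def suspicious_entries(log: list) -> list:
    """Return records that merit follow-up by the reviewer."""
    flagged = []
    for timestamp, user, outcome in log:
        hour = datetime.fromisoformat(timestamp).hour
        if outcome == "FAIL" or hour not in BUSINESS_HOURS:
            flagged.append((timestamp, user, outcome))
    return flagged

login_log = [
    ("2024-03-04T09:15:00", "alice",   "OK"),    # normal hours
    ("2024-03-04T02:40:00", "bob",     "OK"),    # unusual hour
    ("2024-03-04T10:01:00", "mallory", "FAIL"),  # failed attempt
]
```

Run daily or weekly over the audit trail, this is monitoring; run once over several months of records to trace a specific user, the same routine supports an audit.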
Several types of automated tools monitor a system for security problems; examples include virus scanners, checksumming and other integrity verification programs, password crackers, and intrusion detectors.
From a security point of view, configuration management provides assurance that the system in operation is the correct version (configuration) of the system and that any changes to be made are reviewed for security implications. Configuration management can be used to help ensure that changes take place in an identifiable and controlled environment and that they do not unintentionally harm any of the system's properties, including its security. Some organizations, particularly those with very large systems (such as the federal government), use a configuration control board for configuration management. When such a board exists, it is helpful to have a computer security expert participate. In any case, it is useful to have computer security officers participate in system management decision-making.
Changes to the system can have security implications because they may introduce or remove vulnerabilities and because significant changes may require updating the contingency plan, risk analysis, or accreditation.
In addition to monitoring the system, it is useful to monitor external sources for information. Such sources as trade literature, both printed and electronic, have information about security vulnerabilities, patches, and other areas that impact security. The Forum of Incident Response Teams (FIRST) has an electronic mailing list that receives information on threats, vulnerabilities, and patches.76
Assurance is an issue for every control and safeguard discussed in this Handbook. Are user ID and access privileges kept up to date? Has the contingency plan been tested? Can the audit trail be tampered with? One important point to be reemphasized here is that assurance is not only for technical controls, but for operational controls as well. Although the chapter focused on information systems assurance, it is also important to have assurance that management controls are working well. Is the security program effective? Are policies understood and followed? As noted in the introduction to this chapter, the need for assurance is more widespread than people often realize.
Life Cycle. Assurance is closely linked to the planning for security in the system life cycle. Systems can be designed to facilitate various kinds of testing against specified security requirements. By planning for such testing early in the process, costs can be reduced; in some cases, some kinds of assurance cannot be obtained at all without proper planning.
There are many methods of obtaining assurance that security features work as anticipated. Since assurance methods tend to be qualitative rather than quantitative, they will need to be evaluated. Assurance can also be quite expensive, especially if extensive testing is done. It is useful to evaluate the amount of assurance received for the cost to make a best-value decision. In general, personnel costs drive up the cost of assurance. Automated tools are generally limited to addressing specific problems, but they tend to be less expensive.
Borsook, P. "Seeking Security." Byte. 18(6), 1993. pp. 119-128.
Dykman, Charlene A. ed., and Charles K. Davis, asc. ed. Control Objectives -- Controls in an Information Systems Environment: Objectives, Guidelines, and Audit Procedures. (Fourth edition). Carol Stream, IL: The EDP Auditors Foundation, Inc., April 1992.
Farmer, Dan and Wietse Venema. "Improving the Security of Your Site by Breaking Into It." Available from FTP.WIN.TUE.NL. 1993.
Guttman, Barbara. Computer Security Considerations in Federal Procurements: A Guide for Procurement Initiators, Contracting Officers, and Computer Security Officials. Special Publication 800-4. Gaithersburg, MD: National Institute of Standards and Technology, March 1992.
Howe, D. "Information System Security Engineering: Cornerstone to the Future." Proceedings of the 15th National Computer Security Conference, Vol 1. (Baltimore, MD) Gaithersburg, MD: National Institute of Standards and Technology, 1992. pp. 244-251.
Levine, M. "Audit Serve Security Evaluation Criteria." Audit Vision. 2(2). 1992, pp. 29-40.
National Institute of Standards and Technology. Guideline for Computer Security Certification and Accreditation. Federal Information Processing Standard Publication 102. September 1983.
National Institute of Standards and Technology. Guideline for Lifecycle Validation, Verification, and Testing of Computer Software. Federal Information Processing Standard Publication 101. June 1983.
National Institute of Standards and Technology. Guideline for Software Verification and Validation Plans. Federal Information Processing Standard Publication 132. November 1987.
Neugent, W., J. Gilligan, L. Hoffman, and Z. Ruthberg. Technology Assessment: Methods for Measuring the Level of Computer Security. Special Publication 500-133. Gaithersburg, MD: National Institute of Standards and Technology, 1985.
Peng, Wendy W., and Dolores R. Wallace. Software Error Analysis. Special Publication 500-209. Gaithersburg, MD: National Institute of Standards and Technology, 1993.
Peterson, P. "Infosecurity and Shrinking Media." ISSA Access. 5(2), 1992. pp. 19-22.
Pfleeger, C., S. Pfleeger, and M. Theofanos. "A Methodology for Penetration Testing." Computers and Security. 8(7), 1989. pp. 613-620.
Polk, W. Timothy, and Lawrence Bassham. A Guide to the Selection of Anti-Virus Tools and Techniques. Special Publication 800-5. Gaithersburg, MD: National Institute of Standards and Technology, December 1992.
Polk, W. Timothy. Automated Tools for Testing Computer System Vulnerability. Special Publication 800-6. Gaithersburg, MD: National Institute of Standards and Technology, December 1992.
President's Council on Integrity and Efficiency. Review of General Controls in Federal Computer Systems. Washington, DC: President's Council on Integrity and Efficiency, October 1988.
President's Council on Management Improvement and the President's Council on Integrity and Efficiency. Model Framework for Management Control Over Automated Information System. Washington, DC: President's Council on Management Improvement, January 1988.
Ruthberg, Zella G., Bonnie T. Fisher, and John W. Lainhart IV. System Development Auditor. Oxford, England: Elsevier Advanced Technology, 1991.
Ruthberg, Zella, et al. Guide to Auditing for Controls and Security: A System Development Life Cycle Approach. Special Publication 500-153. Gaithersburg, MD: National Institute of Standards and Technology, April 1988.
Strategic Defense Initiative Organization. Trusted Software Methodology. Vols. I and II. SDI-S-SD-91-000007. June 17, 1992.
Wallace, Dolores, and J.C. Cherniavsky. Guide to Software Acceptance. Special Publication 500-180. Gaithersburg, MD: National Institute of Standards and Technology, April 1990.
Wallace, Dolores, and Roger Fujii. Software Verification and Validation: Its Role in Computer Assurance and Its Relationship with Software Product Management Standards. Special Publication 500-165. Gaithersburg, MD: National Institute of Standards and Technology, September 1989.
Wallace, Dolores R., Laura M. Ippolito, and D. Richard Kuhn. High Integrity Software Standards and Guidelines. Special Publication 500-204. Gaithersburg, MD: National Institute of Standards and Technology, 1992.
Wood, C., et al. Computer Security: A Comprehensive Controls Checklist. New York, NY: John Wiley & Sons, 1987.
71. Accreditation is a process used primarily within the federal government. It is the process of managerial authorization for processing. Different agencies may use other terms for this approval function. The terms used here are consistent with Federal Information Processing Standard 102, Guideline for Computer Security Certification and Accreditation. (See reference section of this chapter.)
72. OMB Circular A-130 requires management security authorization of operation for federal systems.
73. In the past, accreditation has been defined to require a certification, which is an in-depth testing of technical controls. It is now recognized within the federal government that other analyses (e.g., a risk analysis or audit) can also provide sufficient assurance for accreditation.
74. An example of an internal auditor in the federal government is the Inspector General. The General Accounting Office can perform the role of external auditor in the federal government. In the private sector, the corporate audit staff serves the role of internal auditor, while a public accounting firm would be an external auditor.
75. While penetration testing is a very powerful technique, it should preferably be conducted with the knowledge and consent of system management. Unknown penetration attempts can cause a lot of stress among operations personnel, and may create unnecessary disturbances.
76. For information on FIRST, send e-mail to FIRST-SEC@FIRST.ORG.