
Whitepaper Proposal

Summary

Enterprises have been focused on developing the most feature-rich applications to gain a competitive advantage in the marketplace. This approach has produced vulnerable code that can fall prey to the threats of cyberterrorism and hacking, as well as the financial risk posed by privacy legislation and litigation. Development processes must change to improve the security and privacy of code. Recommendations for public policy as well as software practices will be introduced.

Section 1: Factors Driving the Need for Changes to Modern Software Practices

Daryl Sterling: The first section will discuss the factors that have emerged and are driving the need for changing software practices. First among these is the emergence of the Internet and the attacks it enables. Analysis of attacks will include, but not be limited to: hacking through buffer/integer overruns, cross-site scripting, and SQL injection, as well as problems with the patching lifecycle. Charistel Ticong: Also, government regulation through legislation such as Sarbanes-Oxley and SB1386 is punishing companies with massive fines and criminal prosecution.

Section 2: Broken Software Practices

Eric Leonard: Section two will review the history of current software practices to show how we have arrived at the current security and privacy crises. It will analyze the shortcomings and biases of current practices and how incentives need to change. Among the problems to be addressed are: the lack of security and software engineering training in universities, the hacking mindset, marketing- and feature-driven development, and software processes that allow little time for review and testing.

Section 3: Improved Software Processes

Chad Parry: Improvements to software processes will be addressed in section three. Various proposed changes will be discussed, including the tenets found in the Microsoft Security Development Lifecycle (SDL). The proposed improvements will address each of the shortcomings posed in the previous sections.

Design Processes

Simpler, more testable designs
More reliable languages
Threat modeling
Defense in depth

Implementation Processes

Security Reviews
Static Analysis Tools

Maintenance Processes

Root Cause Analysis
Sharing Knowledge with Industry

Section 4: The Role of Public Policy Governing the Changes

Charistel Ticong: Section 4 will discuss public policy for computer protection and security dating back to 1973, beginning with the privacy and security concerns of storing personal medical records on computers. It will discuss how the benefits of computerization were weighed against concerns for the safety and reliability of storing personal information on computers (Privacy Rights Clearinghouse). Public policy must be updated in light of today's more complicated security landscape and more sophisticated hackers and cyberterrorists. Discussion will include the Code of Fair Information Practices, which consists of five principles: openness, disclosure, secondary use, correction, and security. It will also discuss legislation that affects corporate security and privacy policy (Sarbanes-Oxley, SB1386, etc.).

First Draft

Introduction

Factors Driving the Need for Changes to Modern Software Practices

Broken Software Practices

Software Engineering Biases

Most software engineering approaches perform design and testing based on a set of requirements specifications or use case analysis. Testing is supposed to demonstrate traceability back to the original requirements and to verify and validate the functionality of the system. By testing only against what is specified in the analysis phase of a project, any segment of the application outside that specification goes untested. A large portion of security vulnerabilities can be found in that extra-specification portion of the code. A fundamental shift is required: rather than testing only against feature requirements, testing must seek out vulnerabilities by considering threat scenarios from adversaries and malicious inputs. See the sections on threat modeling and misuse case analysis.

Tools and programming languages are also a source of security problems. Applications are written as if all users and the environment were cooperative. Operating systems such as Windows were created with security as an afterthought. The Internet was developed within DARPA and the US military as a system designed not around central control, but around the ability to survive node loss during a nuclear war.

C and C++ (as well as other languages) and their runtimes were created to allow the full creativity of users who know what they are doing. Because the C programming language evolved from a typeless language, it carries idiosyncrasies that make it more vulnerable to security problems. Specifically, the syntactic similarity between pointer and array types, and the way strings and buffers are manipulated, have made it more vulnerable to buffer overruns. If array bounds were checked, many hacks could be avoided. The standard practices that have been taught for using the C runtime library's string functions don't take into account the fact that inputs can be manipulated in order to execute arbitrary code.
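A minimal sketch of the problem follows (the function names and buffer size are illustrative only): strcpy trusts its input completely, while a bounded copy tracks the destination's size and guarantees termination.

 #include <stdio.h>
 #include <string.h>
 
 /* Unsafe: strcpy copies until it finds a terminating NUL, with no
    regard for the destination's size. An attacker who controls
    "input" can overflow "buf" and overwrite adjacent memory. */
 void vulnerable(const char *input) {
     char buf[16];
     strcpy(buf, input);                    /* no bounds check */
     printf("%s\n", buf);
 }
 
 /* Safer: track the destination size explicitly and truncate input
    that does not fit, always leaving room for the terminator. */
 void safer(const char *input) {
     char buf[16];
     strncpy(buf, input, sizeof(buf) - 1);  /* bounded copy */
     buf[sizeof(buf) - 1] = '\0';           /* guarantee termination */
     printf("%s\n", buf);
 }
 
 int main(void) {
     safer("this string is longer than sixteen bytes");
     return 0;
 }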

The expressivity of these languages often makes it difficult to detect simple mistakes. It is hard to spot errors in code when valid statements in these languages are so close together. A better-designed language would have greater separation between valid statements of code to prevent users from easily confusing one statement with another.
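A classic illustration of this closeness (a hedged sketch, not drawn from the proposal itself) is C's assignment operator appearing where a comparison was intended; both statements are valid, so the mistake compiles silently.

 #include <stdio.h>
 
 int main(void) {
     int authorized = 0;
 
     /* Intended: "authorized == 1". Typed: "authorized = 1".
        Both are valid C, so many compilers accept this silently.
        The condition assigns 1 and then tests the result, so the
        check always succeeds and access is always granted. */
     if (authorized = 1) {
         printf("access granted\n");
     }
     return 0;
 }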

Conflicting interests of decision makers

The current software development marketplace is driven by demands for new features. Security is generally taken for granted by the user and is difficult, if not impossible, to take into consideration when a software package is being considered for purchase. Because of this, most product releases are planned around the next set of features the market demands. While product developers understand that reputation can be lost if a product has poor security, the business rarely puts security priority over features.

Training in Software Development Processes that Emphasize Security

Most computer science and computer engineering students have not been adequately trained in security practices for software development. University computer science programs require some software engineering coursework, but these programs generally emphasize how to achieve quality (of which security is only one aspect) and rarely address security explicitly. Classes may cover project management, requirements gathering, design, and testing, but they don't address the specific issues of software security. And even when graduating computer science students do receive training in software engineering, not all people entering the information technology workforce have degrees in CS.

About one-third of people working as programmers have degrees in computer science, and about one-quarter of those employed in IT jobs hold computer and information science degrees. Most candidates hold degrees in fields ranging from engineering and mathematics to psychology and education.

Developer mindset

Developers have traditionally been given the wrong incentives for producing secure code. Generally, developer productivity is measured in lines of code, function points, or features delivered. Quality is measured by the number of bugs (not specifically security bugs). Once software is shipped, little accountability for mistakes is tied back to the people who created them. Developers are also quick to code, slow to design, and even slower to devise threat models.


References

Ritchie, Dennis M. "The Development of the C Language." Presented at the Second History of Programming Languages Conference, Cambridge, Mass., April 1993. http://cm.bell-labs.com/cm/cs/who/dmr/chist.html

Bates, Rodney. "Buffer Overrun Madness." Open Source, Vol. 2, No. 3, May 2004.

LeBlanc, Rich; Sobel, Ann; Joint Task Force on Computing Curricula. "Software Engineering 2004: Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering." August 23, 2004.

Office of Technology Policy, US Department of Commerce. "Update: America's New Deficit." January 1998.

Improved Software Processes

There are some software processes that can prevent critical security bugs from ever being introduced into an application. Companies that are willing to make improvements to their processes without waiting for government intervention can adopt these techniques. Some of the most fundamental recommended changes are described in this section. They are organized according to the phase of the software lifecycle that they are relevant to: the design phase, the implementation phase or the maintenance phase.

Design Processes

Testable Designs

It has been mentioned that a hacker mentality can cause software developers to produce applications of low quality. A paradigm shift by developers is required to alleviate this problem. Traditionally, software developers view themselves as having complete freedom in the way they design and implement software. Hubris leads to a cowboy mentality, which leads to shortcuts, which lead to expensive defects. New design paradigms emphasize that this problem can be solved with designs that are simple and testable. A simple design ought to be favored over a complex one, even if the complex one is believed to be slightly more efficient. A design ought to take into consideration the ways it can be tested. An attempt should be made to verify the correctness of the design, and this is only a tractable problem if the design was kept simple.

Security-sensitive components of applications can benefit even more from this principle. A simply designed security component can be verified to be correct, but a complex design could hide vulnerabilities for a long time.

Programming Languages

One tool for writing secure software is a reliable programming language. Most software could be written using any of several modern programming languages. The decision on which language to use might be based on efficiency or developer familiarity. Some languages are markedly different in their reliability for secure applications, however. In some traditional languages like C, it is hard to avoid certain mistakes (such as a double-free) that can lead to a disastrous exploit. In newer languages like C#, it is easy to avoid that class of bugs. Mistakes can still be made that will lead to an exploit using a language like C#, but it is materially easier to write safe code. A desirable language trait is that security-sensitive operations are easy to spot and easy to review in software. Another desirable trait is that inexperienced developers will be able to write secure software. A corollary to this is that only experienced developers should know how to expose the features that could lead to security vulnerabilities, and since they are experienced developers they will know how to safeguard themselves. In contrast, many fundamental and commonly-used C functions (such as “malloc”) are needed by beginners but are hard to use safely.
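A minimal sketch of the double-free class of bug mentioned above (the scenario is invented for illustration): nothing in C stops the programmer from writing the second free, whereas a garbage-collected language like C# removes this entire class of bug.

 #include <stdlib.h>
 #include <string.h>
 
 int main(void) {
     char *data = malloc(32);
     if (data == NULL)
         return 1;
     strcpy(data, "session token");
 
     free(data);
     /* ... later, on a different code path ... */
     free(data);   /* double free: undefined behavior, historically
                      exploitable via corrupted allocator metadata */
     return 0;
 }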

Companies that wish to ensure that their software security improves can move towards languages that were designed with security in mind. This move does not prevent security defects, but it can turn large classes of bugs into a rare occurrence. This move requires a large upfront cost for developer education and it also requires the discipline to make a language decision based on the security implications rather than other factors.

Threat Modeling

It is accepted by security experts that software has adversaries and that it has to be designed accordingly. The process of threat modeling assimilates that outlook into the design review process. Designs can be evaluated based on a threat model analysis. Using a threat model analysis, design flaws can be corrected early in the software development process before they ever become security holes. Another advantage to threat modeling is that it encourages architects and developers to change designs so that vulnerabilities are removed entirely. Once the software is designed and implemented, it is usually impossible to remove vulnerabilities, and there is only time to mitigate them. An example of mitigating a vulnerability is adding input validation tests to ensure that a function does not operate on unexpected data. An example of removing a vulnerability is to remove the function entirely and alter the data flow so that the work performed by the function is no longer necessary. Sometimes this can be done by reducing feature sets or by discovering more elegant ways to implement the required functionality.
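Below is a hedged sketch of the mitigation style described above, adding input validation so that a function never operates on unexpected data (the function name and length limit are illustrative):

 #include <stdio.h>
 #include <stddef.h>
 #include <ctype.h>
 
 /* Mitigation: validate untrusted input before operating on it.
    This check rejects anything that is not a short, purely
    alphanumeric identifier, so downstream code never sees
    unexpected data. */
 int is_valid_identifier(const char *s, size_t max_len) {
     size_t i;
     if (s == NULL)
         return 0;
     for (i = 0; s[i] != '\0'; i++) {
         if (i >= max_len || !isalnum((unsigned char)s[i]))
             return 0;
     }
     return i > 0;
 }
 
 int main(void) {
     printf("%d\n", is_valid_identifier("user42", 16));   /* 1: accepted */
     printf("%d\n", is_valid_identifier("x'; --", 16));   /* 0: rejected */
     return 0;
 }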

Threat modeling follows a series of steps to ensure that the damage an attacker could cause is mitigated. First, the assets or resources at risk are identified. Then the means by which an attacker could compromise those assets are identified. The damage that each possible attack would cause is rated according to its severity. Note that vulnerabilities are examined, not exploits: a vulnerability means there is a possible attack vector, whether or not a working exploit is known, and each vulnerability is treated as if an exploit will eventually be discovered. Then possible mitigation techniques are identified. All vulnerabilities should be mitigated somehow, and the most severe vulnerabilities should have strong mitigations, such that multiple successful attacks would be required to compromise the assets.
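As a toy illustration of the enumeration this process produces (the assets, attack vectors, and severity ratings below are invented for the example), each identified vulnerability can be recorded with its severity and planned mitigation:

 #include <stdio.h>
 
 /* One row of a toy threat model: an asset at risk, a means of
    attack, a severity rating, and the planned mitigation. */
 struct threat {
     const char *asset;
     const char *attack_vector;
     int severity;                /* 1 (low) .. 10 (critical) */
     const char *mitigation;
 };
 
 int main(void) {
     struct threat model[] = {
         { "user passwords", "SQL injection via login form", 9,
           "parameterized queries plus input validation" },
         { "session cookies", "cross-site scripting in comments", 7,
           "output encoding plus HttpOnly cookie flag" },
     };
     size_t i;
     for (i = 0; i < sizeof(model) / sizeof(model[0]); i++)
         printf("[severity %d] %s <- %s; mitigation: %s\n",
                model[i].severity, model[i].asset,
                model[i].attack_vector, model[i].mitigation);
     return 0;
 }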

Defense in Depth

The mantra of “defense in depth” is another paradigm shift that can be adopted by companies and developers that would like to reduce security exploits. With this mindset, it is assumed that there could be an as-yet undiscovered security hole in any part of an application. In order to prevent damage, security checks are placed in multiple places. Sensitive security assets should be protected by multiple layers of security. Even though each layer individually should be designed to be sufficient to guard against attacks, multiple such layers offer the strongest safeguards. An application that exhibits defense in depth can sometimes protect all its sensitive assets even if attackers are successful at compromising individual components or layers of the system.
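A minimal sketch of this mindset in code (the layer structure and names are illustrative): the inner routine does not assume the outer check ran, so the failure of any single layer does not expose the asset.

 #include <stdio.h>
 #include <string.h>
 
 #define MAX_NAME 32
 
 /* Layer 1: boundary validation. Reject bad input as early
    as possible. */
 static int boundary_check(const char *name) {
     return name != NULL && strlen(name) < MAX_NAME;
 }
 
 /* Layer 2: the inner routine does not trust its callers and
    re-checks the same invariant before using the data. If the
    boundary check is ever bypassed or broken, this layer holds. */
 static void store_name(const char *name) {
     char buf[MAX_NAME];
     if (name == NULL || strlen(name) >= sizeof(buf))
         return;                        /* fail closed */
     strcpy(buf, name);                 /* safe: length verified above */
     printf("stored: %s\n", buf);
 }
 
 int main(void) {
     const char *input = "alice";
     if (boundary_check(input))
         store_name(input);
     return 0;
 }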

Implementation Processes

Static Analysis Tools

Static analysis is the process of examining source code directly to find defects without executing the program. In practice it requires sophisticated tools. These tools scan source code for the signatures of known defects; in this sense they are similar to virus scanners, which scan files for patterns that identify known viruses. The first step is for security experts to identify common source code idioms that are unsafe. For example, most buffer overrun bugs are caused by code that does not track a buffer's size scrupulously. These bugs can be found by scanning source code for buffer operations that do not take the correct size into account. The signature for each type of bug is programmed into the static analysis tool so that it can always flag instances of the bug. The tool must then be run regularly and the flagged source code fixed.
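The sketch below is a deliberately toy version of this idea, assuming none of the specifics of any real tool: it flags every call to the unbounded strcpy by simple pattern matching, whereas production analyzers parse the code and reason about data flow.

 #include <stdio.h>
 #include <string.h>
 
 /* Toy static analysis pass: scan a C source file line by line for
    the signature of a known-unsafe idiom (any call to strcpy, which
    performs no bounds checking) and report each occurrence. */
 int main(int argc, char **argv) {
     FILE *f;
     char line[1024];
     int lineno = 0;
 
     if (argc != 2) {
         fprintf(stderr, "usage: %s <file.c>\n", argv[0]);
         return 2;
     }
     f = fopen(argv[1], "r");
     if (f == NULL) {
         perror("fopen");
         return 2;
     }
     while (fgets(line, sizeof(line), f) != NULL) {
         lineno++;
         if (strstr(line, "strcpy(") != NULL)
             printf("%s:%d: warning: unbounded strcpy\n",
                    argv[1], lineno);
     }
     fclose(f);
     return 0;
 }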

Static analysis is more reliable than code reviews because the tools can scan millions of lines of source code without getting tired or making a mistake. The tools can only be used to find relatively simple signatures, however. Many common security bugs, such as those related to buffer overruns, can be prevented like this, but more exotic bugs are beyond the reach of current static analysis tools.

Security Reviews

The practice of regular security reviews is known to increase the quality of software. Security reviews are expensive, but they are still less expensive than deploying the fix for a security defect. Some companies already institute security reviews as part of their standard development practice. In a security review, an expert (but not the expert who designed or implemented the software) examines an application to find potential security holes. Security reviews focus only on security-sensitive issues and ignore the thousands of other issues that affect software quality. Reviewers sometimes follow a checklist in order to make sure the developer remembered every item on it.

Security reviews institutionalize a desire for security that most application development teams already have. The security review gives developers a chance to concentrate only on finding security defects, and so the quality of the security analysis will be improved. The security review also gives developers the goal of writing software well enough that it passes the security review the first time. These factors serve to remind developers of the importance of security, which can help teams make better decisions when tradeoffs arise between developing more features or developing more secure features.

Maintenance Processes

Root Cause Analysis

Root Cause Analysis (RCA) is the process of searching for underlying reasons for failures. A root cause analysis is more in-depth and more expensive than a traditional analysis.

A traditional analysis usually includes the minimum amount of work that needs to be done in order to return the software to a trusted working condition. The first step is that unexpected behavior in the application gets reported. Customers and maintenance engineers try to reproduce the problem and to record the steps that are necessary to trigger the bug. Then regression tests can be written that isolate the problem by executing a small number of steps that will cause the bug to recur. A failure in an application typically can be traced back to a single code defect. The code defect is fixed by developers. Then the fix is packaged and shipped to the customers. The customers verify that the modified application works correctly in their environment and they deploy the new version.
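A hedged sketch of such a regression test follows; the function under test, the original defect, and the expected values are all hypothetical. The test replays the minimal steps that triggered the bug, so any regression causes an immediate failure.

 #include <assert.h>
 #include <string.h>
 
 /* Hypothetical function under test: it once returned a garbage
    greeting for empty input (the reported bug); the fix substitutes
    a default name. */
 static const char *format_greeting(const char *name) {
     static char buf[64];
     if (name == NULL || name[0] == '\0')
         name = "guest";                      /* the fix */
     strcpy(buf, "hello, ");
     strncat(buf, name, sizeof(buf) - strlen(buf) - 1);
     return buf;
 }
 
 /* Regression test: the small number of steps that make the bug
    recur, plus a check that normal behavior is preserved. */
 int main(void) {
     assert(strcmp(format_greeting(""), "hello, guest") == 0);
     assert(strcmp(format_greeting("alice"), "hello, alice") == 0);
     return 0;
 }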

A root cause analysis adds several steps to the traditional process. Before performing a root cause analysis, all the traditional work of isolating and fixing the bug must occur. In addition, the analysis includes steps to prevent bugs of a similar nature from happening again. A typical technique is known as the "5 Whys." Using this technique, an investigator asks why the bug occurred, then takes the answer and asks why that was the case, repeating the question about five times. (For example: the server crashed; why? a buffer overran; why? an input length was never checked; why? the coding standard did not require such checks; why? no one was responsible for keeping the standard current.) Some bugs require more or fewer questions to arrive at a useful answer. The goal is to use the information gleaned from one failure to prevent future failures. Used this way, each discovered failure can make the software more reliable rather than less reliable.

Sharing Knowledge

The current state of the art in software development is that developers learn good practices little by little over the course of an entire career. There is no central repository of knowledge for good software practices and procedures. In fact, there is not much agreement on which practices are desirable, nor on the standards that determine what makes a practice good. In this sense, software development is more of a craft than an engineering discipline. Because of its craft-like nature, software engineers all over the world continually relearn the same lessons from the same mistakes, even though other engineers have already discovered that knowledge.

One of the distinguishing features of other engineering disciplines, such as civil engineering, is that there are codes that prescribe and govern correct practices. The knowledge necessary for creating these codes still needs to be accumulated in the computer science field. To this end, cooperation is needed between corporations and universities to share knowledge and best practices. These organizations ought to continue to work together with the goal of removing the craft and superstition from software development.

The Role of Public Policy Governing the Changes

Conclusion