Team 7 Main


The assignment mentions four bullets:

  • "A plain English, no jargon description of attack techniques attempted..." etc.
  • "Estimated dollar value of the damage..."
  • "Estimated feasibility and strategic value of the attack technique to a terrorist organization."
  • "Feasibility and cost of defending against such attacks."

These map onto the sections below as follows:

  • The "executive summary" is just a summary of the report; we should leave it until the rest is written.
  • The "Attack Methodology" is the first bullet.
  • The first subsection of the "Vulnerability Assessment" is the second bullet.
  • The second subsection of the "Vulnerability Assessment" is the third bullet.
  • The "Details: Defenses" is the fourth bullet.

Executive Summary

Attack Analysis

Methodology

Results

Summary of Defenses

Details: Vulnerability Assessment

Estimated Potential Damage

Estimated Attack Value for Terrorist Aims

The attack actually performed in this experiment, by itself, has relatively low value. We study this attack because it represents a wider class of attacks --- attacks on buffer overflow vulnerabilities --- which may be of value to a terrorist organization.

When considering the value of this attack to a terrorist organization, several questions arise:

  1. Feasibility: How feasible is this attack for a terrorist organization?
  2. Scalability: To what extent do the results in this experiment generalize to more serious attacks?
  3. Utility: How could the exploits based on these attacks be used to accomplish terrorist aims?

In this section, we address these questions in turn: the next two subsections discuss the first two of these questions (feasibility and scalability), and the final subsection discusses the third (strategic value).

Feasibility

Constructing attacks like the ones we conducted requires only a basic background in computer science: an elementary understanding of assembly-language programming, and familiarity with programming tools such as compilers, debuggers, and scripts. An advanced undergraduate in computer science, or a bright self-taught programmer, would possess sufficient knowledge.

To execute a working exploit, all that's required is access to a computer and basic computer literacy skills, such as the ability to use a search engine and to download and install a program.

Resources sufficient for both of the above are available to many terrorist organizations. Walter Laqueur notes that contemporary terrorist organizations often successfully recruit middle-class, educated members. The best-funded organizations have even been known to fund "scholarships" for aspiring members to learn chemistry (to manufacture explosives) or other sciences; such organizations could easily fund scholarships in computer science. And, of course, computer literacy and computer access are widely diffused in wealthy regions such as Europe, and even among the emerging middle classes of developing nations.

Therefore, we conclude the following. First, all but the poorest and least sophisticated terrorist organizations possess the ability to execute exploits discovered by others. Second, better-funded terrorist organizations have the additional ability to discover new exploits.

Scalability

The program we attacked was particularly amenable to study; to what extent do these results extend to realistic targets? A successful buffer overflow exploit requires answers to three questions:

  1. Where does the program accept input into a buffer that may allow overflow?
  2. What is the size of the buffer to be overflowed?
  3. What is the address of this buffer?

In this experiment, the answer to the first of these questions was given to us. In a realistic target program, it might take a great deal of human-guided trial and error to discover where, and how, to craft input that reveals a possible buffer overflow. Typically, an input contains many subcomponents, each of which is copied into a different buffer when the program processes the input, and only one or a few of these buffers may allow overflow. Our experiment does not tell us how long this discovery would take for a realistic target program.
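
To make the three questions concrete, the following is a minimal sketch of the kind of C code that gives rise to such a vulnerability. It is a hypothetical illustration, not our actual target program; the function and buffer names are invented for exposition.

  /* Hypothetical example of an overflowable buffer (not our actual target). */
  #include <stdio.h>
  #include <string.h>

  void handle_request(const char *input)
  {
      char name[64];           /* question 2: the buffer is 64 bytes */
      strcpy(name, input);     /* question 1: attacker-supplied input copied with no bounds check */
      printf("hello, %s\n", name);
  }                            /* question 3: the address of name depends on where
                                  this stack frame happens to sit at run time */

  int main(int argc, char **argv)
  {
      if (argc > 1)
          handle_request(argv[1]);   /* the input arrives from outside the program */
      return 0;
  }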

We successfully answered the second and third questions using two attacks: a source examination attack, and a brute-force attack. We can more easily quantify the scalability issues of these attacks.

  • Scaling the source examination attack: Examining, modifying, and rebuilding the source code of a large program may require several times more effort than for our test program. However, because the attacker can use automated code-scanning tools, this effort scales less than in direct proportion to the code size. Furthermore, the source examination attack took one programmer-hour or less for both attackers; even if doing the equivalent for a large program took 100 times as long, it would only take 100 programmer-hours. Three programmers working for a week would suffice.
  • Scaling the brute-force attack: The only additional obstacle to performing this brute-force attack on a realistic target might be processing time --- a realistic program may do a significant amount of computation, making each brute-force trial take longer. However, this attack can trivially be spread out over many machines working in parallel. Today, one can buy a dozen cheap commodity machines at $500 apiece for $6,000. Running a full brute-force attack on our target would take on the order of tens of hours. Even if executing the brute-force attack were 100 times slower than for our target, the brute-force scan would take only thousands of machine-hours, or on the order of weeks with a dozen computers. (A rough sketch of such a trial driver appears below.)
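
The sketch below gives a feel for how mechanical such a brute-force trial is. It repeatedly runs a target program with ever longer inputs and watches for a crash, which is one way to estimate the size of the vulnerable buffer (question 2) without access to source code. The path "./target", the length range, and the step size are placeholders, not our actual setup; the same loop structure extends to guessing addresses, and each machine in a cluster can simply be handed a different range of guesses.

  /* Sketch of a brute-force trial driver (illustrative; "./target" is a placeholder). */
  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      for (size_t len = 16; len <= 4096; len += 16) {
          char *payload = malloc(len + 1);
          memset(payload, 'A', len);          /* filler bytes only, no exploit content */
          payload[len] = '\0';

          pid_t pid = fork();
          if (pid < 0) {
              perror("fork");
              return 1;
          }
          if (pid == 0) {
              execl("./target", "target", payload, (char *)NULL);
              _exit(127);                     /* exec failed */
          }

          int status = 0;
          waitpid(pid, &status, 0);
          free(payload);

          if (WIFSIGNALED(status) && WTERMSIG(status) == SIGSEGV) {
              printf("crash at input length %zu\n", len);
              return 0;
          }
      }
      printf("no crash observed up to 4096 bytes\n");
      return 0;
  }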

In short, these attacks easily scale to realistic target programs, with generous estimates of the additional cost being under $10,000 or a few weeks of human and machine time --- resources that are easily at the disposal of many terrorist groups. The most likely costs are somewhat lower than these generous figures, which assume a hundredfold increase in difficulty.

Furthermore, all of the above estimates apply only to attackers who wish to craft their own exploits, without any assistance from others. In reality, there exists a large worldwide community of attackers who are continually developing exploits and sharing information about them. A terrorist group could invest essentially no money or labor, and simply wait for this community to discover vulnerabilities and construct exploits.

Therefore, we conclude that this attack easily scales to realistic targets, including web servers, ssh, or other network-connected programs in common use.

Strategic value

This attack grants the attacker total control over some machine. There are exactly two ways that a machine can be valuable:

  1. Direct value: A resource connected to, and directly accessible from, that machine may be valuable.
  2. Indirect value: The machine may be useful as a platform for mounting attacks on other machines, which in turn have direct or indirect value.

Both of these kinds of value vary widely in seriousness, depending on the machine that is compromised. We have been asked to consider financial-sector targets, so we limit our analysis to attacks on private-sector companies. Buffer overflows may also present serious problems for other targets, e.g. systems that control critical infrastructure.

XXX-TODO: finish this section.

Details: Defenses

[this is a first pass to help other people work off of and also comment. I'm going to bed, I'll do another pass tomorrow] [See some of my comments under discussion... Keunwoo Lee 00:53, 24 October 2005 (PDT)]

Software is very easy to distribute once it's created. Software engineers and computer scientists are also passionate about improving the state of the art, and a market of rapidly improving hardware and voracious consumer appetite has supported this rapid advancement. These same forces have contributed to the large-scale insecurity of our software infrastructure, but they are also the primary reasons why it is cost effective to make software secure.

It's true that the best protection against attacks is simply good administration and policy in the companies that distribute and use software. Having quality control, and not allowing people to get hold of things like passwords, is important quite aside from technical issues. I argue that these costs will largely be incurred anyway, above and beyond the cost of avoiding software vulnerabilities, so here we focus on the technical costs.

Creation of software has a "pay once, benefit many people" or "pay once, benefit for a long time" dynamic. Software that's created today can be pushed over the internet to millions of computers. In addition, software that's created today provides a basis for future, more complex software, and its design informs future designs, so the state of the art improves. The market is thus willing to invest in enhanced security, because there is a feeling that we can drastically improve our security, that it's not just an arms race.

There are three ways to avoid being damaged by exploits such as buffer-overrun attacks. The first is to lower the attack surface: don't expose your program to input from the user in unnecessary ways; for example, don't install potentially risky programs on a server by default. The second is to separate the attack points from critical systems; for example, our target executable for this exercise should not have run under administrative privileges, and an internet browser could operate within a restricted shell, unable to affect the larger computer system. The third is to make the software itself free of vulnerabilities, which can be done by hiring talented developers, having an excellent process of quality control, and using good meta-software tools that make your software inherently resistant to exploits or able to recover if a vulnerability is indeed exploited. It can also be achieved by having an efficient way to provide fixes for broken software.
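
As an illustration of the second item, the following is a minimal sketch of how a network service can drop its privileges before it ever touches untrusted input, so that a successful buffer-overflow exploit yields only an unprivileged account rather than the whole machine. The account name "nobody" and the overall structure are assumptions made for exposition, not a prescription for any particular server.

  /* Sketch: drop privileges before handling untrusted input (illustrative only). */
  #include <pwd.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>
  #include <unistd.h>

  static void drop_privileges(const char *user)
  {
      struct passwd *pw = getpwnam(user);
      if (pw == NULL) {
          fprintf(stderr, "unknown user %s\n", user);
          exit(1);
      }
      /* Order matters: give up the group id first, then the user id. */
      if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
          perror("failed to drop privileges");
          exit(1);
      }
      if (setuid(0) == 0) {       /* regaining root must now fail */
          fprintf(stderr, "privileges were not actually dropped\n");
          exit(1);
      }
  }

  int main(void)
  {
      /* Acquire any privileged resources (e.g. a low-numbered port) here, while still root. */
      drop_privileges("nobody");  /* "nobody" is a conventional unprivileged account */
      /* Only now read and parse untrusted network input. */
      return 0;
  }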

Successfully pursuing all three of these items involves hiring talented and experienced developers and managers, which incurs perhaps a 10-20% overhead over hiring their less talented peers. Such developers can create good, robust designs, and creating the better software is not more expensive beyond the initial cost of these people, who can do more in less time. Also, good design becomes cheaper over time, as the state of the art improves and as people learn from the mistakes of the past.

The third item is particularly interesting, because its effectiveness doesn't presuppose that people are flawless. So I will focus here on a few such mechanisms as fundamental and cost-effective ways to achieve security.

In the short term, many companies, most notably Microsoft and Apple, have developed ways to push patches for broken software out to customers. Millions of customers regularly (perhaps monthly) receive updates that fix vulnerabilities. For some fixes, it is possible to "hotfix" the client software so that its host need not even reboot when it receives the patch, and the technology behind hotfixing is improving. The cost of protecting customers from vulnerabilities discovered after shipment has thus gone from prohibitive to relatively cheap in just a few years: the fixed cost to develop these mechanisms was large, but the recurring cost is small. A service can also avoid relying on hotfixes by having multiple servers redundantly provide the same functionality; such redundancy is often necessary for transaction volume anyway, and need not be considered an additional cost.

The runtime environment software uses, or the construction of the software itself, can be made inherently more secure. For example, Microsoft's latest compilers provide buffer-overrun checking that causes the program to crash rather than execute nefarious code when an overrun is detected. Even better, when such a crash happens, an error report can be sent to Microsoft, which can then find and fix the underlying issue. Recent versions of Linux provide address space layout randomization (ASLR), which makes it difficult for attackers to rely on hardcoded addresses in their attacks, and much more difficult to write viruses and worms. These mechanisms make newer software more secure almost for free, and either one would have protected against our attack for this project.
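
A quick way to see the effect of address space layout randomization is to print the addresses of a stack buffer and a heap buffer and run the program several times: on a system with ASLR enabled, the addresses typically change on every run, which is exactly what defeats attacks that depend on hardcoded addresses, such as our brute-force attack. This is a small illustrative sketch, not tied to any particular operating system or distribution.

  /* Sketch: observing ASLR (illustrative). Run it a few times and compare the output. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      char stack_buffer[64];
      char *heap_buffer = malloc(64);

      printf("stack buffer at %p\n", (void *)stack_buffer);
      printf("heap  buffer at %p\n", (void *)heap_buffer);

      free(heap_buffer);
      return 0;
  }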

Also, software can be made type-safe by writing it in languages like Java, C++/CLI, Ada, and C#. Type safety prevents user input from being written into parts of the running program that were never designated to receive it. Rewriting old software in type-safe languages is quite expensive, but this cost can often be amortized because the software frequently needs to be rewritten for other reasons anyway. Writing type-safe code has many other merits; it can actually improve developer productivity, because software bugs tend to be discovered at compile time rather than at run time. However, not all software can be written in a type-safe language, either because it is impossible (some system code) or because the runtime performance impact of type safety is often significant for time-critical applications.

All of the approaches listed here are cost effective because (a) it is easy to distribute software, (b) it is easy to use protection mechanisms that benefit all or most software, and (c) the costs of increasing software security are often the same as the costs of increasing software quality.