Team 7 Main

The assignment mentions four bullets:

  • "A plain English, no jargon description of attack techniques attempted..." etc.
  • "Estimated dollar value of the damage..."
  • "Estimated feasibility and strategic value of the attack technique to a terrorist organization."
  • "Feasibility and cost of defending against such attacks."

These map onto the sections below as follows:

  • The "executive summary" is just a summary of the report; we should leave until the rest is written.
  • The "Attack Methodology" is the first bullet.
  • The first subsection of the "Vulnerability Assessment" is the second bullet.
  • The second subsection of the "Vulnerability Assessment" is the third bullet.
  • The "Details: Defenses" is the fourth bullet.

Executive Summary

Attack Analysis

Attack Description

Red Team 7 carried out a cyber attack using a “buffer overflow exploit.” This exploit allowed the team to send a specially crafted sequence of bytes to a program’s input, causing the program to perform actions on the team’s behalf.

In order to better understand this action and result, below is a brief list of terminology used:
 Exploit: a program which permits the user to perform an action which (s)he is not authorized to perform
 Exploiter: the user who uses an exploit
 Attack: an attempt to discover how to build an exploit
 Attacker: the person who performs attacks
 Buffer: region of memory in a program
 Stack: a data structure which stores data on a last-in, first-out principle, similar to a stack of plates
 Pointer: a piece of data that refers to another piece of data or code

A buffer overflow exploit is made possible by the fact that programs commonly write data into a buffer without checking whether the data fits into that buffer. To better understand this concept, imagine an individual copying a long sentence from a book onto one of the dozens of crumpled pieces of paper surrounding the book. If the writer’s attention wanders, he or she may fail to notice that the sentence has run from the end of one piece of paper onto the next.

Eventually, the writer will catch the error. A computer program, however, is completely oblivious to such a mistake. After a buffer overflow, potentially any information in a special region of memory called the stack can be overwritten with information provided by the exploiter; the stack can thus be shifted and altered by the exploiter.

Red Team 7 attacked a program whose stack contained a pointer for a future program operation. The team was able to overwrite this pointer and, in addition, to write its own operations into memory, pointing the pointer at those new operations and causing them to execute.
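
To make this concrete, the sketch below shows a minimal, hypothetical C program containing the kind of unchecked copy described above. It is not the program the team attacked; the file name, buffer size, and use of a command line argument are illustrative assumptions.

  /* greet.c -- a hypothetical program with the vulnerable pattern described
     above; it is NOT the actual target, and all names are illustrative. */
  #include <stdio.h>
  #include <string.h>

  int main(int argc, char *argv[])
  {
      char buffer[64];                  /* fixed-size buffer on the stack */

      if (argc > 1) {
          /* strcpy() copies until it finds a terminating zero byte and never
             checks whether the input fits in the 64-byte buffer; input longer
             than the buffer overwrites adjacent stack memory, including the
             saved return pointer. */
          strcpy(buffer, argv[1]);
          printf("Hello, %s\n", buffer);
      }
      return 0;
  }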

The following three pieces of information were required to craft the exploit:
 Where and when a program expects input
 The size of the buffer to be overflowed
 The address of the buffer in memory

Finding this information was the challenge; once it was known, executing the exploit was simple.

Red Team 7 determined where and when the program expects input by observing the program’s behavior. For example:
 As in this case, a program might accept input through “command line arguments” when it starts up
 It might prompt the user for input (like Pine, when the user composes email)
 It might accept bytes off the network (like a web server, when it accepts a request to serve a page)
 It might read a file on disk (like Acrobat Reader, when it loads a PDF file)

Any of these are possible attack opportunities if the application copies input onto a buffer without ensuring that it fits. Attackers, such as Red Team 7, choose to attack all available input points until something gives.

The size of the buffer to be overflowed and its address in memory can be determined by examining the source code, automated brute-force trial-and-error, or manual, controlled inspection of the program execution.

Red Team 7 attempted all three of these methods. Examining the source code and brute-force trial-and-error each worked on their own; the third, manual inspection, did not suffice by itself.

Because the program that was exploited had "root", or administrative-level, access, the team could use that program to give itself root access, allowing arbitrary privileges on the target machine.


Methodology

Estimated Difficulty

Red Team 7 successfully performed two independent attacks on the target: an open-source attack and a brute-force attack. These included about 1-2 hours of effort common to both attacks, and 1-2 hours specific to each attack. Provided with information from the course instructors (which can be found on the internet), either or both of these attacks could be devised by a moderately skillful programmer in about 3-6 hours.

The open-source attack relied on access to the source code of the target. After examining the code, the team modified it to output a piece of key information and used this modified executable to learn the necessary information for attacking the main target. By inspecting the source code, the team found both fixed-size buffers and operations that copy input into these buffers. By modifying the source code, the team produced a version of the program that prints out the buffer address. This gave the team the size of the buffer and its address.
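
As an illustration, the sketch below shows the kind of one-line instrumentation involved, assuming the target resembles the hypothetical program sketched earlier; the fprintf call is the only addition.

  /* instrumented.c -- a hypothetical rebuild of the target with one added
     line that reveals the buffer's size and address; illustrative only. */
  #include <stdio.h>
  #include <string.h>

  int main(int argc, char *argv[])
  {
      char buffer[64];

      /* The attacker's only change: report where the buffer lives. */
      fprintf(stderr, "buffer: %zu bytes at %p\n",
              sizeof(buffer), (void *)buffer);

      if (argc > 1) {
          strcpy(buffer, argv[1]);
          printf("Hello, %s\n", buffer);
      }
      return 0;
  }

Because the target in this exercise did not randomize its address space (see the Defenses discussion), an address learned from an instrumented build could also be used against the unmodified program.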

The brute-force attack relied heavily on having the ability to run the target program repeatedly – on the order of tens of thousands of times – in an attempt to find the weakness. The team wrote a script that tried many variations of the general attack, and ran it. When the script concluded, the team had enough information to complete the attack.
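
A schematic outline of such a script is sketched below in C. The target path, payload range, and crash test are illustrative assumptions; a real script of this kind would typically also vary the guessed address embedded in the payload, which is omitted here.

  /* bruteforce.c -- schematic driver that runs a target many times with
     ever-longer inputs and reports which lengths make it crash.  The path
     "./victim" and the length range are hypothetical. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      const char *target = "./victim";
      static char payload[4097];

      for (size_t len = 1; len <= 4096; len++) {
          memset(payload, 'A', len);        /* filler bytes */
          payload[len] = '\0';

          pid_t pid = fork();
          if (pid < 0)
              return 1;                     /* fork failed */
          if (pid == 0) {
              execl(target, target, payload, (char *)NULL);
              _exit(127);                   /* exec failed */
          }

          int status = 0;
          waitpid(pid, &status, 0);
          if (WIFSIGNALED(status))          /* the input reached something it should not have */
              printf("length %zu crashed the target (signal %d)\n",
                     len, WTERMSIG(status));
      }
      return 0;
  }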

The team also tried using a debugging program to discover the buffer size. This manual, controlled inspection of execution, which relied on knowledge of debugging programs, did not suffice; trial and error was necessary to augment this procedure.

Results

Vulnerabilities exposed

The attack gave the team access to the machine's “root” superuser account, which has privileges to execute any program. This allowed the team to read, change or observe any activity or programs on the machine.

This level of access would allow the team to read files on local disks, files on any shared network disks accessible to users on the machine, all traffic on the local network connected to the machine, and anything that can be picked up by input devices (e.g., microphones, keyboards, cameras) connected to the machine. If the machine were used for e-mail, the team would be able to view those messages. If the user were to use this machine to log onto another resource, the attacker could log onto that resource too.

The attacker can use this machine to anonymously launch further attacks, or wipe and destroy the hard disk. Depending on the authentication subsystem used, the team may also have been able to read the password file; this file is encrypted, but the attacker can perform a brute-force cryptographic attack on passwords of all the users that have accounts on the machine. If a different authentication scheme like Kerberos is used (or if, by some magic, all the users on the machine have selected very long passwords) then the attacker can simply capture the passwords of users who actually log onto the machine as they are typed. Once the attacker possesses login names and passwords, of course, any resource on that network that uses the same username and password for authentication is also vulnerable, so this attack can be used to bootstrap a larger attack on other machines in the network or accounts with service providers, such as banks.

Red Team 7 would most likely refrain from destroying the hardware in practice because once the team has access to the resource, it would be more useful to maintain the system and use it surreptitiously. Furthermore, most valuable targets will have backups and redundancy built into the larger system, so destroying data only causes transient harm.

Summary of Defenses

Defenses

This attack could have been prevented or made less likely in several ways.
 The source code could have been made unavailable. However, as Red Team 7 demonstrated with its brute-force attack, a skilled hacker can overcome this limitation.
 The buffer overrun vulnerability could have been found and removed in the program (a sketch of such a fix follows this list). The programmer could have relied on a more secure library to do the work (i.e. the insecure library could have been deprecated, leaving the programmer no choice), or the programmer could have been more careful. The programmer could also have used another, safer programming language.
 The program could have executed with the user's privileges rather than root privileges. Thus, even with a vulnerability, hacking the program could do no real damage.
 Address-space randomization in the installed operating system would have caused characteristics of the program to change with every execution, thus making it more difficult to prepare and launch an attack.
 The compiler could have inserted buffer overrun checking into the program, such that once an overrun was detected, it would crash rather than execute user code.
 The program could have made the "stack" memory space non-executable, preventing the user from executing the data he inserted into the program.
 The program or underlying system could have prevented the user from executing the program multiple times, preventing the brute-force attack. Think of password acceptors which only let users try their password three times.
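
As referenced in the second item above, removing the overrun itself is often a one-line change. The sketch below shows the earlier hypothetical program with the unchecked copy replaced by a bounded one; it illustrates the idea and is not the actual fix to the target.

  /* greet_fixed.c -- the earlier hypothetical program with the copy bounded
     by the buffer's size; illustrative only. */
  #include <stdio.h>
  #include <string.h>

  int main(int argc, char *argv[])
  {
      char buffer[64];

      if (argc > 1) {
          /* snprintf() writes at most sizeof(buffer) bytes, including the
             terminating zero, so oversized input is truncated instead of
             overwriting the stack. */
          snprintf(buffer, sizeof(buffer), "%s", argv[1]);
          printf("Hello, %s\n", buffer);
      }
      return 0;
  }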

The broader impact and cost of some of these defenses is examined later.

Details: Vulnerability Assessment

Estimated Potential Damage

Estimated Dollar Value of Such An Attack: Specific Examples

Intro

Putting a dollar value on cyber terrorist attacks is akin to taking a shot in the dark. There is no way to accurately judge how significant an impact an attack or exploit similar to the one conducted by Red Team 7 can have. “This is very very very hard to do,” said Nicholas Weaver, computer security researcher at UC Berkeley’s International Computer Science Institute. “Some computers really have value $0; e.g. a home machine, where it gets reformatted/reinstated instead of mowing the lawn this weekend. Yet if that same machine is used to access a bank account, and the account got compromised by the attacker, then the loss may be many thousands.” Weaver adds that the time of impact is critical for determining the financial impact. “Even trying to compute average damage figures is nigh impossible: How do you account for lost time?” he asks. “There is a huge nonlinearity, even in a relatively important business system: Down for 5 minutes? You go to the break room. Down for an hour? So what. Down for a day? That may be a problem. Down for a week? Chapter 11 (bankruptcy).” The answer ultimately lies in the details and specifics of the type of attack, the location of the attack, the number of people directly affected, and so on. “I think you are going to find coming up with an answer a lot harder than you think: it really comes down to ‘It depends,’ for both damage and response/prevention strategies,” Weaver said.

With this context in mind, below Red Team 7 provides estimates to the best of its abilities on the impacts of a buffer overflow type exploit on three specific test locations.

Private Home

It is important to keep in mind that the damage done to any location is not only financial, but also psychological. And psychological impacts can result in financial impacts in the long run, e.g. through personal damage lawsuits, the implementation of brand new security systems for appeasement, new regulations, etc.

Damage to a private home computer can range anywhere from the theft of personal identity and account information to the erasure of hundreds of download-hours of music and videos. A student could lose his or her term paper or thesis, or may be prevented from completing an assignment on time and be forced to explain the situation to a professor. This may result in academic failure, which of course adds an extra semester – at the very least – of attendance costs. At UC Berkeley, taking living arrangements into account, this could be as high as $20,000. People may also lose items of sentimental value, such as old photographs, which are priceless. More important from a financial perspective, however, is the attacker’s ability to access personal account information, credit files, tax reports, bank statements, legal documents and passwords. This information can give access to multiple other locations, as most users tend to use one password in multiple scenarios. It will also allow an individual to make massive purchases on the Web. Individual consumers lost $3.8 billion in 2003 due to identity theft, according to Gregory Gerard, William Hillison and Carl Pacini in “What Your Firm Should Know About Identity Theft.” (The Journal of Corporate Accounting and Finance, May/June 2004, p. 1)

E-mail could also be accessed, giving the exploiter the ability to destroy an individual’s reputation.

If items are deleted, users will be forced to recreate the lost work, which will take many man-hours and slow the user down. Time, of course, is money. As Weaver explains above, recreating files and the time spent calling credit companies to cancel accounts might prevent the user from mowing the lawn or going to his or her child’s soccer match.

Furthermore, the individual will be incredibly distressed, and likely afraid of similar outcomes in the future.

If the harm results in the purchase of a new computer, the individual may end up having to spend close to $1,000.

Corporate VP of Ordering Stuff from China

According to Wal-Mart, the company “procures high volume of merchandise from China and exports to the rest of the world through its Global Procurement Center located in Shenzhen.” Wal-Mart’s direct and indirect procurements have increased annually by $2 billion to $3 billion since 2001, totaling $18 billion in 2004. (http://www.wal-martchina.com/english/walmart/index.htm). The company works with 20,000 suppliers in the country, has a total of 5,311 store units globally in ten different countries, employs more than 1.6 million associates worldwide, and had $285.2 billion in sales for the fiscal year ended January 31, 2005.

The company has three primary segments: The Wal-Mart stores segment, the Sam’s Club segment, and the International segment. According to the company’s annual report filing on EDGAR, the company ships items directly from supplier to store and that certain distribution centers serve more than one segment.

Any impact to the company’s distribution would be significant, especially depending on the time necessary to address it. Distribution management is a very exact science: any error or delay has major economic reverberations in the market, especially for the host company, and shipping centers work on a precise schedule. A hacker’s ability to disrupt trade with China for the nation’s top retailer should not be taken lightly. The VP of Trade with China handles close to $20 billion annually. His or her computer contains information on exactly whom the company trades with and in what quantities, and is likely connected to the entire company network. If the exploiter is interested in selling confidential data, he or she can. If the exploiter is interested in contacting Chinese suppliers, especially as a representative of another retailer, he or she can. If the exploiter wants to slow the system down, he or she can. As Weaver explains, a five-minute slowdown is insignificant; a weeklong slowdown might result in company failure. Sandra Thompson, deputy director of the division of supervision and consumer protection at the Federal Deposit Insurance Corporation, adds in her May statement to a House subcommittee that “controlling the exposure is dependent upon the time it takes to verify the fraud and discover the source and extent of the compromise.” (05/18/05 “Enhancing Data Security” before the Subcommittee on Financial Institutions and Consumer Credit) A delay could also bring down the company’s share price very quickly, lead to negative press reports, and destroy consumer trust in the company. A loss of shoppers leads to a loss of revenue, and if the problem lasts long enough, it may raise prices at the “always low price” stores. Trust would also be an issue with the suppliers the company deals with in China; losing a supplier could lead to millions in losses. Overall impact could easily reach the billions for a firm, said Gaurav Jain of Bryant University in “Cyber Terrorism: A Clear and Present Danger to Civilized Society?”

Charles Schwab computer used to place buy/sell orders on NYSE

This computer has a more direct tie to the market than does the Wal-Mart machine. A Schwab computer trading on the New York Stock Exchange, if hacked into, could be the host for a market turnaround.

The company, according to its annual report filings on EDGAR, had $1.081 trillion in 7.3 million active client accounts as of December 31, 2004. These accounts fall into three main categories: Individual Investor, Institutional Investor, and U.S. Trust. In the short term, the impact depends on which type of accounts the machine handles; in the long term, the machine should be able to access multiple other machines. Regardless, damage would be at least in the billions. The stock market is very time sensitive, and any delay or slowdown results in huge losses on a per-minute basis. The company’s trading revenue, according to its SEC filings, totaled $1.0 billion in 2004. Non-trading revenue, based on interest and client assets, totaled $3.2 billion in 2004. Trading revenue would be impacted directly through the trading machine; in the long run, so would non-trading revenue, as a lower principal results in lower returns. On a daily basis, the company has an average revenue trade of $156,400. This amount would likely become a negative with any slowdown, resulting in millions in losses in the long run.

A security breach would also cause clients to lose trust. Net new client assets in 2004 were $50.3 billion, close to 5 percent of overall assets. If the number of new clients plunges, so will the company revenue.

Internally, the company also trades for its own employees through their 401(k) plans. These individuals would also lose trust in their employer.

The market will also react strongly if all Schwab accounts started to pull out of the exchange randomly. Although it sounds dramatic, it could happen.

Conclusion

In each scenario, the damage depends on how long the impact lasts.

According to the 2002 Computer Crime and Security Survey conducted by the FBI and the Computer Security Institute, close to half of respondents reported a combined $455 million in financial losses due to security breaches. (http://www.gocsi.com/press/20020407.jhtml;jsessionid=STDWQUDGLSDSUQSNDBCSKH0CJUMEKJVN?_requestid=32615)

Estimated Attack Value for Terrorist Aims

The attack actually performed in this experiment, by itself, has relatively low value. We study this attack because it represents a wider class of attacks --- attacks on buffer overflow vulnerabilities --- which may be of value to a terrorist organization.

When considering the value of this attack to a terrorist organization, several questions arise:

  1. Feasibility: How feasible is this attack for a terrorist organization?
  2. Scalability: To what extent do the results in this experiment generalize to more serious attacks?
  3. Utility: How could the exploits based on these attacks be used to accomplish terrorist aims?

In the rest of this section, we address these questions in turn.

Feasibility

To construct the attacks we conducted, a basic background in computer science would be required. This includes an elementary understanding of assembly language programming, and familiarity with programming tools (such as compilers, debuggers, and scripts). An advanced undergraduate in computer science, or a bright self-taught programmer, would possess sufficient knowledge.

To execute a working exploit, all that's required is access to a computer and basic computer literacy skills, such as the ability to use a search engine and to download and install a program.

Resources sufficient for both of the above are available to many terrorist organizations. Walter Laqueur notes that contemporary terrorist organizations often successfully recruit middle-class, educated members. The best-funded organizations have even been known to fund "scholarships" for aspiring members to learn chemistry (to manufacture explosives) or other sciences; such organizations could easily fund scholarships in computer science. And, of course, computer literacy and access are widely diffused in wealthy First World nations, or even the emerging middle class of developing nations.

Therefore, we conclude the following. First, all but the poorest and least sophisticated terrorist organizations possess the ability to execute exploits discovered by others. Second, better-funded terrorist organizations have the additional ability to discover new exploits.

Scalability

The program we attacked was particularly amenable to study; to what extent do these results extend to realistic targets? A successful buffer overflow exploit requires answers to three questions:

  1. Where does the program copy input into a buffer that may allow overflow?
  2. What is the size of the buffer to be overflowed?
  3. What is the address of this buffer?

In this experiment, the answer to the first of these questions was given to us. In a realistic target program, it might take a great deal of human-guided trial and error to discover where and how to craft input to a program which reveals a possible buffer overflow. Typically, an input will contain many subcomponents, which are each copied into different buffers when the program processes an input, and only one or a few of these buffers may allow overflow. Our experiment does not give us sufficient information as to how long this would take with a realistic target program. However, such overflows are discovered by the worldwide hacking community at a rate on the order of hundreds per year, suggesting that it is not difficult.

We successfully answered the second and third questions using two attacks: a source examination attack, and a brute-force attack. We can more easily quantify the scalability issues of these attacks.

  • Scaling the source examination attack: Examining, modifying, and rebuilding the source code of a large program may require several times more effort than it did for our test program. However, because the attacker can use automated code-scanning tools, this effort does not scale in direct proportion to the code size, but somewhat less. Furthermore, the source examination attack took one programmer-hour or less for both attackers; even if the equivalent work on a large program took 100 times as long, it would only take 100 programmer-hours. Three programmers working for a week would suffice.
  • Scaling the brute-force attack: The only additional obstacle to performing this brute-force attack on a realistic target might be processing time --- a realistic program may do a significant amount of computation, making the brute-force trial take longer. However, this attack can trivially be spread out over many machines working in parallel. Today, one can buy a dozen cheap commodity machines at $500 apiece for $6,000. Running a full brute-force attack on our target would take on the order of tens of hours. Even if executing the brute-force attack were 100 times slower than for our target, brute-force scan would only take thousands of machine-hours, or on the order of weeks with a dozen computers.

In short, these attacks easily scale to realistic target programs, with generous estimates of the additional cost being on the order of less than $10,000, or a few weeks of human and machine time --- resources that are easily at the disposal of many terrorist groups. The most likely costs are somewhat lower than these generous figures, which assume a hundredfold increase in difficulty.

Furthermore, all of the above estimates apply only to attackers who wish to craft their own exploits, without any assistance from others. In reality, there exists a large worldwide community of attackers who are continually developing exploits and sharing information about them. A terrorist group could invest essentially no money or labor, and simply wait for this community to discover vulnerabilities and construct exploits.

Therefore, we conclude that this attack easily scales to realistic targets, including web servers, ssh, or other network-connected programs in common use.

Utility

This attack grants the attacker total control over some machine. There are exactly two ways that a machine can be valuable:

  1. Direct value: A resource connected to, and directly accessible from, that machine may be valuable.
  2. Indirect value: The machine may be useful as a platform for mounting attacks on other machines, which in turn have direct or indirect value.

Both of these kinds of value vary widely in seriousness, depending on the machine that is compromised. The direct value of a computing resource includes access to information, the use of computational power or network connectivity, and the ability to destroy that resource. The indirect value includes the ability to use the network connectivity to either exploit another machine, or send traffic over the network in a denial of service attack.

The aim of a terrorist organization is to cause violence against one group of people for the purpose of psychological effect on another group. With this in mind, we can envision several uses of machine resources for terrorist ends. We can group these uses into two general categories: tactical uses, and organizational uses. By "tactical uses", we mean the use of computing resources to execute a specific attack. By "organizational uses", we mean the use of computing resources to build or to grow the terrorist organization itself, rather than to execute a specific attack.

Tactical uses

As Gary Ackerman has noted, violence has the most psychological effect when it is spectacular and acts upon primal fears, such as the fear of infection (biological attacks) or radiation (radiological attacks) or annihilation (nuclear attacks). Currently, most conceivable tactical uses of compromised computing resources seem relatively ineffective for this purpose.

Although much of our nation's economic infrastructure relies on computing technology, it is difficult to see how temporarily disabling commercial transactions, or even damaging the infrastructure like the electric power grid, would cause dramatic psychological effects. Abuse by energy traders caused massive brownouts throughout California in 2000, yet this did not cause widespread panic.

Therefore, we believe that, at present, compromised IT resources have relatively little direct tactical value for terrorists.

One might wonder, then, whether there are indirect tactical uses. Do compromised computing resources offer significant additional value for the specific purpose of acquiring conventional, chemical, biological, radiological, or nuclear weapons? We believe the answer is no. Consider each of these forms of weapons in turn:

  • Conventional weapons are already widely available to even the least sophisticated terrorist groups. For example, consider the instructions, given to Jose Padilla, to turn on the gas in an empty apartment building and rig up a timed device to set it off with a spark: no technology was required to devise this plan, only ingenuity. Compromised computers offer little added value.
  • The limiting factor in acquiring chemical weapons is chemistry expertise and availability of materials. Both of these are already accessible to many terrorist organizations; compromised computers offer little added value.
  • The limiting factor in acquiring biological weapons is either physical access to a biological weapons lab, or the technology to weaponize biological agents. Access to labs is not purely computer-mediated, and weaponization technology is a problem of physical engineering. Compromised computers offer little added value.
  • The limiting factor in acquiring radiological weapons is, again, access to already-weaponized materials or the ability to weaponize acquired materials. And again, compromised computers offer little added value.
  • Finally, the limiting factor in acquiring nuclear weapons is either access to refined uranium (for fission bombs) or the ability to acquire a prefabricated weapon (for fission or fusion bombs). Both of these rely on either state sponsorship, or the ability to subvert human agents of a state, neither of which depends on computing technology.

Therefore, we conclude that even the indirect tactical uses of compromised IT resources, for a terrorist organization, are minimal.

Organizational uses

We envision two possible organizational uses of compromised resources.

First, modern terrorist organizations need information technology for the same reasons that any other organization does: they must communicate with each other, and they must publish information about themselves in order to communicate with the broader world. However, terrorist organizations must face an adversary (law enforcement and intelligence agencies) that attacks legitimately owned resources. Any computing or networking resource which a terrorist organization acquires legally presents an opportunity for investigating that organization. Therefore, remotely attacked machines may be valuable as proxies to disguise the origin of email and other traffic, or as servers to host content such as propaganda videos.

The value of untraceable IT resources is difficult to measure in dollar or hour terms, since it enables terrorists to elude capture, or force law enforcement to use more expensive investigative techniques. The value of eluding capture varies tremendously depending on the individuals who do so --- a fresh recruit with no knowledge or skills may be essentially worthless, whereas a well-trained long-term operative may be critical to performing an attack or maintaining the cohesion of the group.

Second, because compromised resources may contain data with financial value, terrorist organizations may use the information gathered on IT resources for financial gain. This information may be of direct financial value --- for example, a credit card number can generally be converted directly into cash. It may enable control over material resources --- for example, a vice-president of Wal-Mart purchasing may possess the authority to cause valuable goods to be delivered to a drop-off point; by impersonating that vice-president electronically, the terrorist may gain access to these goods. Finally, information may be of blackmail value --- personal computers especially contain a wealth of private data. Blackmail victims have an incentive to conceal their problem from the world, including law enforcement, and therefore represent a possible covert revenue source.

Again, the value of all these avenues for financial gain varies widely. However, Kirk Bailey has noted that existing cybercriminal networks have been able to reap on the order of tens of millions of dollars annually from compromised credit card numbers. If this attack were used to compromise a machine which had access to a commercial e-commerce database --- for example, a server at an online retailer --- then we could expect the attacker to be able to reap a large number of credit card numbers, enabling fundraising on a similar scale.

Details: Defenses

Software is very easy to distribute once it is created. Also, software engineers and computer scientists are passionate about improving the state of the art, and the market of rapidly improving hardware and voracious consumer appetite has supported this rapid advancement. These very things have contributed to the large-scale insecurity present in our software infrastructure, but are also the primary reasons why there is considerable momentum to make software secure against hacking.
The software market has become increasingly concerned about security, and people are increasingly willing to pay, as the internet gets larger and more fundamental to profit. Meanwhile, software providers are recognizing how lucrative it is to be able to boast good security, and how damaging it can be when your vulnerabilities are exposed. The effect of a hacking attack is often far more damaging than the specific impact of the attack, as it undermines confidence in the software and inspires copycat attacks until the problem is fixed.
It’s true that the best protection against attacks is simply good administration and policy in the companies that distribute and use software. Having quality control and not allowing people to get hold of things like passwords is important, quite aside from technical issues. These costs will be largely incurred above and beyond simply avoiding software vulnerabilities, so here, we focus on technical costs.
Creation of software has a “pay once, benefit many people” or “pay once, benefit for a long time” dynamic. Software that’s created today can be pushed over the internet to millions of computers. In addition, today's software provides a basis for future, more complex software, and its design aspects inform future designs, and thus the state of the art improves. The market is thus willing to invest in enhanced security, because there is a feeling that we can drastically improve our security, that it’s not just an arms race.
So we have two contributing factors: a problem companies feel is soluble, and a market which demands a solution. These factors will induce greatly increased computer security in the coming years. Next we'll discuss some ways in which this is happening, and at the end we will discuss options should market-driven progress fail to be satisfactory.
There are three ways to avoid being damaged by exploits such as buffer overrun attacks. One is to lower the attack surface: don’t expose your program to input from the user in unnecessary ways; for example, don’t install potentially risky programs on a server by default. The second is to separate the attack points from critical systems; for example, our target executable for this exercise should not have run under administrative privileges, or an internet browser could operate within a shell and not be able to affect the larger computer system. The third is to make the software itself free of vulnerabilities; this can be done by hiring talented developers, having an excellent process of quality control, and using good meta-software tools which make software inherently resistant to exploits or able to recover if a vulnerability is indeed exploited. It can also be achieved to a lesser extent by having an efficient way to provide fixes for broken software.
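As a brief illustration of the second approach, the sketch below shows a program that needs root only at startup dropping to an unprivileged account before it ever touches user input. The numeric user and group IDs are illustrative assumptions; a real program would look up a dedicated service account.

  /* dropprivs.c -- a sketch of separating attack points from critical
     systems: privileged work happens first, then privileges are dropped
     before any untrusted input is handled.  The uid/gid 65534 ("nobody" on
     many systems) is illustrative. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(void)
  {
      /* ... perform the few operations that genuinely require root ... */

      /* Drop the group first, then the user, and check both: a failed drop
         would silently leave the process running as root. */
      if (setgid(65534) != 0 || setuid(65534) != 0) {
          perror("failed to drop privileges");
          return EXIT_FAILURE;
      }

      /* From here on, a buffer overflow gains the attacker only what an
         ordinary, unprivileged user could already do. */
      /* ... read and process untrusted input ... */
      return EXIT_SUCCESS;
  }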
Successfully pursuing all three of these items involves hiring talented and experienced developers and managers, which incurs perhaps a 10-20% overhead over hiring their less talented peers (assuming roughly $300,000 of annual cost per developer). Such developers can create good, robust designs, and creating the better software costs no more beyond the initial cost of the people, who can do more in less time. Also, good design becomes cheaper over time, as the state of the art improves and as people learn from the mistakes of the past.
The third item is particularly interesting, because its effectiveness doesn’t presuppose that people are flawless. So we focus on some items here as fundamental and cost effective ways to achieve security.
In the short term, many companies, most visibly Microsoft and Apple, have developed ways to push patches for broken software out to customers. Millions of customers regularly (perhaps monthly) receive updates that fix vulnerabilities. For some fixes, it is possible to “hotfix” the client software so that its host need not even reboot when it gets the patch. The technology behind hotfixing is improving. The cost of protecting customers from discovered vulnerabilities post-shipment has thus gone from prohibitive to relatively cheap in just a few years. The up-front cost to develop these mechanisms was large, but the recurring cost is small. A server can avoid being reliant on hotfixes by having multiple servers redundantly provide the same service. This is often necessary due to transaction volume anyway, and need not always be considered an additional cost. In addition, using patch-related best practices can reduce patching costs: one study showed that centralizing IT operations can reduce patching costs by up to 55 percent, and that adopting an end-to-end patch management system can reduce costs by up to 44 percent.
The runtime environment software uses, or the construction of the software itself, can be inherently more secure. For example, Microsoft’s latest compilers provide buffer-overrun checking which causes the program to crash rather than execute nefarious code when a vulnerability is found. Moreover, when such a thing happens, an error report can be sent to Microsoft, and Microsoft can then find and fix the issue. Recent versions of Linux provide Address Space Layout Randomization, which makes it difficult for hackers to use hardcoded values when making their attacks, and much more difficult to make viruses and worms. New Linux modifications also involve putting more restrictive permissions on segments of executable code, so that it is more difficult for an attacker to write to or execute them. These mechanisms make newer software more secure very cheaply, and any of them would have protected against our attack for this project.
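Conceptually, the compiler-inserted check works roughly as sketched below. Real compilers (Microsoft's /GS flag, or GCC's -fstack-protector) generate this automatically and place a randomized "canary" value next to the buffer; the hand-written version here only illustrates the idea and is not the actual generated code.

  /* canary.c -- a conceptual, hand-written illustration of compiler-inserted
     buffer-overrun checking; real compilers emit the equivalent automatically
     and control exactly where the guard value is placed. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static unsigned long stack_canary;       /* set to a random value at startup */

  static void handle_input(const char *input)
  {
      unsigned long canary = stack_canary; /* guard value placed near the buffer */
      char buffer[64];

      strcpy(buffer, input);               /* the unchecked copy */

      /* An overflow long enough to reach the return pointer must trample the
         guard value first, so the program aborts instead of executing
         attacker-supplied code. */
      if (canary != stack_canary) {
          fprintf(stderr, "stack smashing detected\n");
          abort();
      }
  }

  int main(int argc, char *argv[])
  {
      stack_canary = 0xdeadbeefUL;         /* a real implementation randomizes this */
      if (argc > 1)
          handle_input(argv[1]);
      return 0;
  }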
Also, software can be made type-safe using languages like Java, C++/CLI, Ada, and C#. This prevents users from writing input to segments of the executing program which aren’t designated as accepting user input. More recent versions of the C Runtime Library (CRT) contain enhanced security features which make it easy for software developers to eliminate potential points of vulnerability and safeguard against stack-based exploits possible in previous versions of the library. Rewriting old software in type-safe languages is quite expensive, but this cost can often be amortized by the fact that the software often needs to be rewritten for other reasons. Writing type-safe code has many other merits; it can actually improve developer productivity, as software bugs tend to be discovered at compile-time rather than run-time. However, not all software can be written in a type-safe language, either because it is impossible (some system code), or because the runtime performance impact of type-safety is often significant for time-critical applications.
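For example, the secure CRT functions mentioned above take the destination size as an explicit argument, so an oversized input is rejected (by an error code or the runtime's constraint handling) instead of silently overflowing. The sketch below uses strcpy_s, which appears in Microsoft's secure CRT and in the optional C11 Annex K interfaces; availability varies by toolchain, so treat this as an illustration rather than portable code.

  /* secure_crt.c -- a sketch of the bounds-checked CRT style: the destination
     size is passed explicitly, and a too-long input is rejected rather than
     silently overflowing the buffer. */
  #define __STDC_WANT_LIB_EXT1__ 1
  #include <stdio.h>
  #include <string.h>

  int main(int argc, char *argv[])
  {
      char buffer[64];

      if (argc > 1) {
          /* Depending on the implementation, an oversized source either
             returns a nonzero error code or invokes the constraint handler. */
          if (strcpy_s(buffer, sizeof(buffer), argv[1]) != 0) {
              fprintf(stderr, "input too long for buffer\n");
              return 1;
          }
          printf("Hello, %s\n", buffer);
      }
      return 0;
  }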
All of the things listed here are cost-effective because (a) it is easy to distribute software, (b) it is easy to use protection mechanisms which can benefit all or most software, or (c) because costs of increasing software security are often the same as the costs of increasing software quality.
But what is the result of these trends? Is it enough? It's clear that the turnaround hasn't been and will not be immediate; there is too much complexity and a good deal of inertia due to currently installed software. Yet the trend toward greatly improved security will continue, and certain simple methods like buffer overrun attacks will become a thing of the past within a few more years. Hackers will have to resort to other means.
But keep in mind that most of the market does not demand absolute perfection: people back up their data, they secure data cryptographically, and they can reinstall systems that are compromised. In some cases, the market simply does not understand how much financial damage actually results from an attack.
Perfect security will be reserved for those institutions who are willing to pay dearly, such as banks and governments. Such institutions pay hackers to try to break their systems, and develop in-house software with intense quality controls. Unfortunately, because these organizations develop customized solutions, it's unlikely that total security will be consistently achieved across these critical organizations unless some forces other than the market come into play.
The government has begun investing in education programs as well as mechanisms to make sure that its own systems are secure. This is not enough. In particular, Berkowitz and Hahn think the U.S. government lacks a "mechanism in the policy that holds public officials, business executives, or managers responsible for their performance in ensuring cyber security; nor is there a mechanism that ensures failure will result in significant costs to the responsible parties." A bank that promises its customers that their data is secure must be held accountable if there is a breach.
It is important that the government patch up the legal issues regarding cybersecurity, and protect its own resources. It must be ready to fill the holes in whatever rules the market enforces. However, the market of the 1990s is not the same as the market of today, and it is crucial to recognize that there is considerable momentum in this area now.


Works Cited

“An Introduction to Wal-Mart.” Wal-Mart Stores, Inc. [Online]. http://www.wal-martchina.com/english/walmart/index.htm.

“The Charles Schwab Corporation, Form 10-K.” U.S. Securities and Exchange Commission. [Online] http://yahoo.brand.edgar-online.com/doctrans/finSys_main.asp?formfilename=0000316709-05-000006&nad=. (12 Dec 2004).

“Cyber crime bleeds U.S. corporations, survey shows.” Computer Security Institute. [Online]. http://www.gocsi.com/press/20020407.jhtml;jsessionid=STDWQUDGLSDSUQSNDBCSKH0CJUMEKJVN?_requestid=32615. (07 April 2002).

Berkowitz, Bruce and Robert W. Hahn. “Cybersecurity: Who's Watching the Store?” March 2003.

Gerard, Gregory, William Hillison and Carl Pacini. “What Your Firm Should Know About Identity Theft.” The Journal of Corporate Accounting and Finance. May/June 2004. p. 1.

Jain, Gaurav. “Cyber Terrorism: A Clear and Present Danger to Civilized Society?” Information Systems Education Journal. [Online]. http://isedj.org/3/44/ (12 August 2005).

Thompson, Sandra L. “Enhancing Data Security.” House Subcommittee on Financial Institutions and Consumer Credit. 18 May 2005.

“Wal-Mart Stores, Inc., Form 10-K.” U.S. Securities and Exchange Commission. [Online]. http://yahoo.brand.edgar-online.com/doctrans/finSys_main.asp?formfilename=0001193125-05-066992&nad= (31 January 2005)