Team 8 Main

From CyberSecurity
Revision as of 17:33, 24 October 2005 by Elijah Esquibel (talk | contribs)


Looking at the two different outlines that have been posted, I suggest we somehow combine the two. Elijah's outline has more of a public policy structure to it, but it lacks the criteria for the red team project. Eric's outline is to the point, but I'm pretty sure the professors don't want a paper structured in a simple question-and-answer format. So here is a suggestion:


I. Problem Identification

1. Security breach compromised information
2. What kinds of attacks could have been attempted, and what would be their effort and cost to implement:

 a. Buffer Overrun of Custom Built Applications (we succeeded rooting the machine through the target1 application)
 b. Password Cracking Attack (we knew all of the other groups’ usernames and their default passwords.  We could have compromised their data)
 c. Denial of Service Attack (not attempted)
 d. Virus Attack (not attempted)
 e. Worm Attack (not attempted, no vulnerabilities found)
 f. Phishing attack (not attempted)
 g. Remotely Executed Exploit (no vulnerabilities found)

II. Evidence of Problem



1. What we did to probe the machine: (I'll take this -- Eric Leonard)

 a. Determined the operating system (through nmap and redhat-version file) 
 b. Determined applications installed on the system and their versions (nmap) and potentially exploitable applications (findsuid.sh)
 c. Determined open ports on a system (nmap)
 d. Investigated password file and passwords of other user accounts (/etc/passwd)

Morris Worm (1988), Code Red (2001), Slammer Worm (2003)

III. Construct alternative

One simple way to avoid attacks altogether is to work offline. Not exposing yourself in the first place is the only sure way to prevent such attacks. But one must ask whether the benefits of online use outweigh full prevention through working offline. Working offline may allow a user to view a downloaded or copied web page, but they will not have access to an updated page and may be missing images, video, or audio files that are fundamental to it. And although this seems like a simple solution, it is economically unfeasible because of all the benefits we enjoy as a society. Take this wiki, for instance: it is a website whose whole point is having updated information at the click of a button. So although working offline may look like a quick and easy solution, it is a painful and slow-moving process that leads to lower productivity.

The next option would be to use programming languages that actually check the data being written into an array. Many such languages can warn the user, or raise an error, when a write would exceed the bounds of a buffer. In a time when processors are becoming faster and faster, it may be worth the trade-off to sacrifice some speed for much more safety. The reason C and C++ are generally used is that they are quicker, in part because of their lack of bounds checking.

5. Feasibility of defending against such attacks.

a. home
b. corporate
c. financial
d. is protection cost-effective?
e. what is the lowest-cost provider for protection
f. list of possible policy levers for government

IV. Criteria

What is the dollar value of damage through our successful exploit to a:

a. Home computer
   1. denial-of-service
   2. code executed on the system by the attacker
b. Corporate computer used by a VP of Walmart
   1. denial-of-service
   2. code executed on the system by the attacker
c. Charles Schwab computer used to place buy/sell orders on the NYSE
   1. denial-of-service
   2. code executed on the system by the attacker

V. Outcomes

4. What is the feasibility and strategic value of the technique(s) to a terrorist organization?

a. scalability
b. feasibility
c. potential value for terrorist aims

VI. Confront Tradeoffs


--Leonarde 21:50, 23 October 2005 (PDT) How about this outline?:

1. What we did to probe the machine: (I'll take this -- Eric Leonard)

 a. Determined the operating system (through nmap and redhat-version file) 
 b. Determined applications installed on the system and their versions (nmap) and potentially exploitable applications (findsuid.sh)
 c. Determined open ports on a system (nmap)
 d. Investigated password file and passwords of other user accounts (/etc/passwd)

2. What kinds of attacks could have been attempted, and what would be their effort and cost to implement:

 a. Buffer Overrun of Custom Built Applications (we succeeded rooting the machine through the target1 application)
 b. Password Cracking Attack (we knew all of the other groups’ usernames and their default passwords.  We could have compromised their data)
 c. Denial of Service Attack (not attempted)
 d. Virus Attack (not attempted)
 e. Worm Attack (not attempted, no vulnerabilities found)
 f. Remotely Executed Exploit (no vulnerabilities found)

3. What is the dollar value of damage through our successful exploit to a

a. Home computer
b. Corporate computer used by a corporate VP of Walmart
c. Charles Schwab computer used to place buy/sell orders on the NYSE

4. What is the feasibility and strategic value of the technique(s) to a terrorist organization?

a. scalability
b. feasibility
c. potential value for terrorist aims

5. Feasibility of defending against such attacks.

a. home
b. corporate
c. financial
d. is protection cost-effective?
e. what is the lowest-cost provider for protection
f. list of possible policy levers for government

--Leonarde 11:42, 22 October 2005 (PDT) I don't exactly know how this will fit into our paper, but here is a step-by-step summary of what we did to mount the exploit -

To hack into the target server CSE291B, we began by investigating the possibility of performing a buffer overrun attack on an executable file (target1) that we discovered was installed to run as the root user. By causing the executable to overrun a stack-allocated buffer with a carefully crafted input string, we could force it to execute code that started a command shell running as root. We were told beforehand where this file was located and were also given its source code. If we hadn't received this information, we could have located all files installed to run as root by scanning the file system with the findsuid.sh script (see the appendix of Phrack 49: Smashing the Stack for Fun and Profit by Aleph One).
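The scan that findsuid.sh performs can be approximated with a single find invocation (an assumed equivalent — the script itself is not reproduced here):

```shell
# Roughly what findsuid.sh automates: list root-owned files with the
# setuid bit set (mode 4000), i.e. programs that run with root
# privileges no matter who invokes them. Errors from unreadable
# directories are discarded.
find / -user root -perm -4000 -type f 2>/dev/null
```

Any custom setuid-root binary in this listing, such as target1, is a candidate for the kind of attack described below.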

After looking at the source code for target1, we quickly noticed that a buffer overrun attack could be conducted by modifying the inputs to the foo() function. This function performed a string copy with strcpy() without first verifying its inputs. The source string was passed into the application through the first command-line parameter, whose length and contents were never verified. The target buffer was a fixed-length char array of 64 bytes defined within the main() function.

I started by creating my own copy of target1 on the development machine (CSE291A). That way, I could trace through disassembled code within the debugger. I could also step through the code at the instant the overrun occurred and inspect the memory. I found that the address of the beginning of the buffer was 0xBFFFF930 and the space between the beginning of the buffer and the return address was 80 bytes. Both of these values were used in crafting the exploit code.

I modified the file sploit1.c so that it would generate an environment variable "EGG" that contained the specially crafted buffer. The code to generate the buffer was taken from the example of exploit 3 in the Phrack article. The beginning portion of the 84-character buffer was padded with NOP instructions. Next in the buffer came the shell code. The end of the buffer was filled with a series of addresses that pointed to the estimated beginning of the buffer, 0xBFFFF930. I built and executed sploit1 to construct the EGG variable to be used in the next part of the exploit.

I executed the target1 app, passed in the EGG variable, and a segmentation fault occurred. When I traced through the code in the debugger, I discovered that the beginning of the buffer wasn't located where I had thought; it was no longer at 0xBFFFF930 but at 0xBFFFF990, a difference of 96 bytes. It seems that some of the times I loaded the target1 app, it was executed with a slightly different stack location. This was not, however, caused by Address Space Layout Randomization, since that feature of the operating system was turned off. Since the margin of error for the estimated location of my buffer was greater than the size of the buffer, it would be impossible to consistently point the return address at the shell code, even if the beginning of the buffer were padded with NOPs.

For that reason, I decided to change the buffer overrun approach. I moved all of the shell code into a separate, much larger buffer (2048 bytes) that was placed in the environment variables. The beginning of this buffer was padded with NOP instructions. I passed a string of return addresses (still 84 bytes) to target1 on the command line in order to overrun the buffer and point the executable to the shell code in its new location. I found that the environment variable containing my shell code was loaded in memory from 0xBFFFF3D1 to 0xBFFFFBCB. The value that I chose for my return address (0xBFFFF950) was well within the previously observed margin of error. After I re-executed the code, the shell code ran successfully: sploit1 worked!

I then took sploit1 and changed it to point to /tmp/target1 and re-ran it against the test server. Sploit1 worked on the first try, and I gained root access.