Santtu

From CSEP590TU

An Attack in 2014: A Walk Through

The date is October 23, 2014. Rep. Sandra Hill is running for U.S. Senate. The race is tight and could turn on the tiniest misstep. Rep. Hill has a secret: she was diagnosed with the early signs of Alzheimer’s disease the previous summer. Based on her doctor’s prognosis of slow progression that could also be treated with experimental drugs, Rep. Hill had decided to stay in the race.

Joe Cracker has an agenda: he does not want Rep. Hill elected to the Senate. He strongly opposes most issues Rep. Hill has promoted, especially increased control over rogue elements on the internet. Joe Cracker knows a little about software security and vulnerabilities, and has plenty of time on his hands. Months earlier, Joe had decided to find embarrassing information about Rep. Hill and publicize it in order to derail her campaign.

On a visit to his doctor a week earlier, Joe pocketed a small computer memory device that someone had left on the counter. Joe did not expect it to be of much use other than as more portable memory, but upon examining it at home he soon realized that it contained access codes for the Central Medical Record Service (CMERECS). CMERECS was created 8 years earlier to allow authorized individuals access to the records of any patient under their care. Joe knew that although his doctor’s office used the latest and greatest patient management application, the application vendor had not updated its encryption mechanism in nearly 10 years. The encryption algorithm was still on the list of approved algorithms for storing CMERECS access codes, but Joe knew it could be cracked. Joe set his two brand new high-end computers to work cracking the encryption by doing nothing more clever than attempting every possible combination. Four days later Joe had the access codes. These codes allowed him to connect to CMERECS and request the medical records of anyone in the nation in a way that made the requests appear to originate from his doctor’s office. Under normal conditions the patient’s approval would be required before records could be released to a doctor, but the system had a loophole: for emergency services, no approval was required. Joe requested the records for Rep. Hill using the emergency request procedure.

With the medical records in hand, Joe embarks on a plan he has spent the last several months preparing: he will distribute Rep. Hill’s medical records using a worm that attacks cell phones. Joe has built a network of vulnerable cell phones with the help of a “surreptitious worm” [XVIII] that spreads slowly from cell phone to cell phone by keeping a low profile and piggybacking on each user’s regular phone usage. Thanks to this behavior, the worm has thus far been able to avoid discovery, but that will change once Joe triggers it into high gear.

This initial worm, which Joe named PrepBoy, takes advantage of an unpublicized vulnerability in the operating system used in several data-capable cell phones. As the worm spreads, it sends other cell phones a message saying an update for the cell phone is available. Included in this message is a malformed value for the expected size of the update. This malformed value causes the cell phone software to attempt to contact a backup site listed in the original message. Joe included the address of an anonymous website on which he had previously placed his payload (a copy of PrepBoy). When the cell phone downloads PrepBoy, assuming it is the official update, a second vulnerability, a buffer overflow, is triggered and allows PrepBoy to take over the cell phone. Once on the phone, PrepBoy attempts to spread itself by sending a copy of the update message to other cell phone numbers gathered from the phone’s address book or from the same telephone exchange. In order to avoid detection, it sends only a few of these messages at a time, and only when the cell phone is connected to the data network as part of the user’s regular activities.

With this network in place, Joe sends a message exposing Rep. Hill’s Alzheimer’s diagnosis, along with a copy of the medical records, to the set of infected cell phones to which he initially sent PrepBoy. Upon receiving this message, PrepBoy recognizes a keyword and kicks into high gear. It begins to blast the received message to all the other cell phones it has infected, which in turn forward it to all the phones they have infected. This blast quickly grows into a tidal wave reaching all infected cell phones within minutes. The message also triggers another change in PrepBoy: it starts to forward the message not only to other cell phones, but to any email address it finds on the cell phone.

Within a couple of hours, the most vulnerable cell phones not only in the state but across the whole country have received the message. There is public outcry, not only about the massive traffic generated by the worm, but also about Rep. Hill’s decision to run for office even after she was diagnosed with Alzheimer’s. With less than a week left before the elections, Rep. Hill’s support drops dramatically and her opponent wins by a landslide. Joe’s plan has succeeded; he has effectively ended Rep. Hill’s career.

Software System Vulnerabilities

Theft, bombings, power outages, sweet talkers, worms, and bugs – these are just a few of the threats faced by computer systems. Although computer systems are vulnerable to traditional physical and social engineering attacks, the vulnerability most closely associated with software is software defects. The potential damage from software defects is arguably the greatest, especially when software is integrated and interconnected. Many of these defects are benign, but some can expose the system, or the data stored in it, to the ill will of malicious parties. There is no lack of software vulnerabilities, as a quick glance at any of the software security websites confirms. The state of software vulnerabilities is such that there have been calls for action in the computer science community to devise new methods for preventing attacks. One such call comes from Professor Jeannette Wing, Computer Science Department Head at Carnegie Mellon University. In “Beyond the Horizon: A Call to Arms” she calls for the computer science community to look beyond current flaws and examine the flaws of the future, while acknowledging that today’s attacks and flaws are likely to remain [XV]. Before examining processes and incentives to improve software quality, it is useful to examine current attack vectors and speculate about future ones. We’ll discuss the past and future evolution of software defects, and also examine how integration and interconnectedness can increase the damage caused by attacks, including transferring them from one medium to another.

Software Coding Vulnerabilities

It is commonly accepted that all software has defects, or ‘bugs,’ as they are known within the software development community. Many defects can be considered benign or merely irritating – a spellchecker failing to identify a misspelled word, or suggesting an inappropriate word as the correct spelling – and do not damage the user’s work or system. Other defects have far more severe consequences, ranging from minimal loss of the current user’s work, to the clearing of storage devices, to – arguably the worst case – enabling attackers to gain complete, uncontrolled access to the system, potentially using it for their own purposes, such as mounting further attacks. Examples of these most serious defects, which we’ll call exploitable defects, are numerous: the Morris worm, which paralyzed the internet in 1988, and the Code Red and Sapphire/Slammer worms. What types of defects cause these most serious vulnerabilities in software, and how do they occur?

Exploitable software defects first became widely known with the Morris worm in 1988, but it was not until a 1995 posting to the BugTraq mailing list [II] that a wave of reports of similar vulnerabilities followed. This type of defect is known as a buffer overflow, and its exploitation as a stack smashing attack. In its simplest form, the defect is caused by the software’s failure to check the length of the data it receives, followed by the blind overwriting of its own control structures with data from the user/attacker. The root cause is that the software engineer trusts users of the software to pass only valid input and therefore does not check it for validity – if the engineer has decided that the maximum password length is 8 characters, she or he assumes that no one will provide a password of 9 characters or more.
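The missing length check can be illustrated with a toy model. In the sketch below, the 8-byte buffer and adjacent “saved return address” are purely illustrative – Python itself bounds-checks its containers, so this only simulates the raw memory layout a C program manages by hand:

```python
# Toy model of a stack buffer overflow. A 16-byte "stack" region holds an
# 8-byte password buffer immediately followed by an 8-byte saved return
# address, mimicking the layout a C program would have on its call stack.

def make_stack():
    """8-byte password buffer followed by an 8-byte 'saved return address'."""
    return bytearray(b"\x00" * 8 + b"RET_ADDR")

def unchecked_copy(stack, data):
    # The flaw: no check that len(data) <= 8 before writing into the buffer.
    stack[0:len(data)] = data

stack = make_stack()
unchecked_copy(stack, b"secret")            # 6 bytes: fits, control data intact
assert bytes(stack[8:]) == b"RET_ADDR"

stack = make_stack()
unchecked_copy(stack, b"AAAAAAAAEVILADDR")  # 16 bytes: spills into control data
print(bytes(stack[8:]))  # b'EVILADDR' -- the 'return address' now points where the attacker chose
```

A bounds-checked copy (refusing any `data` longer than 8 bytes) would leave the second call with no way to reach the control data, which is exactly the validity check the trusting engineer omitted.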

As companies and engineers have become more aware of these specific defects, more variations on the original defect have been shown to be exploitable. In most cases it had been widely believed that, although these variations existed, they could not be exploited for one reason or another. The original defect overwrote data in a software program’s scratch area where it tracks, among other things, what code it should execute next. Overflows in other areas were thought harmless, but were eventually shown to be as exploitable as the original defect [III]. It has been shown that an overflow of even a single character can lead to an exploit [IV].

One of the more recent variations is the integer arithmetic defect. In these defects an attacker is able to cause the software to perform an error in integer arithmetic. This may seem outrageous, as computers are computing machines and, except for faults in processors, should not be subject to arithmetic errors. However, computer systems are limited in the range of numbers they can store accurately. An integer in software does not have a range from negative infinity to positive infinity, but rather a considerably smaller range that varies between different versions of ‘integers’ and even processor designs. An integer arithmetic defect occurs when the result of a calculation does not fit into the fixed space reserved for the value; the value is truncated at the front. A quick example illustrates this: a form allows two digits for the minimum height of a security fence. The height is set to 72 inches in 1999, because no guard dog can jump that high (in order to get loose and attack outsiders). In 2003 a new breed is introduced that can jump 40 inches higher than the previous best jumper. A clerk is instructed to add 48 inches to the minimum value. The clerk uses long addition, changing the ones digit to a 0 and the tens digit to a 2. When he attempts to write the 1 in the hundreds place, he finds there is no room for it and simply leaves it out. The next day a builder comes to check the minimum height for the fence he’s building and reads 20 inches. Once the fence is finished the dogs are placed inside and immediately escape, because the fence is so low that most dogs can jump over it. Until recently it was widely believed that arithmetic errors could not lead to exploits [V, VI], but they have now been shown to be exploitable [VII].
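The clerk’s long addition is exactly fixed-width modular arithmetic. A minimal sketch, using a two-digit decimal “form” and an 8-bit unsigned integer as stand-ins for real machine integers:

```python
def store_two_digits(value):
    """Like the clerk's form: only the last two decimal digits fit."""
    return value % 100

fence = store_two_digits(72)          # 1999: minimum fence height
fence = store_two_digits(fence + 48)  # 2003: 72 + 48 = 120, but only '20' fits
print(fence)  # 20

# The same effect with binary integers: an 8-bit unsigned value wraps at 256,
# silently dropping the high-order bits just as the clerk dropped the 1.
def store_uint8(value):
    return value % 256  # equivalently: value & 0xFF

print(store_uint8(250 + 10))  # 4, not 260
```

The danger is that the truncated result is then used as if it were correct – for example, as the size of a buffer to allocate, turning an arithmetic defect into a buffer overflow.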

Although buffer overflow defects have been widely known since 1988, and certainly since 1995, they are still very common in modern software systems. A search of the Secunia.com security advisory database lists 22 new buffer overflow security advisories for the first 23 days of November 2004 [VIII]. Some of these vulnerabilities are in popular software such as Winamp [IX] and Microsoft Internet Explorer [X]. It is unlikely that these defects will disappear anytime soon; rather, they will be joined by new types of defects.

New defects in the future may be variations on or evolutions of current vulnerabilities, as the above are variations on the original buffer overflow defect, or they may take completely new approaches. Although the future is impossible to predict, there are some indications that can serve as hints of what is to come.

An example of a new attack vector is the algorithmic defect. Algorithmic defects are weaknesses in algorithms, especially security algorithms, that can cause the algorithm to perform incorrectly or reveal the secrets it was intended to protect. Although their elusiveness may account for the relatively low number of algorithmic exploits, algorithmic flaws nevertheless cannot be ignored. Once found and abused, they can cause a great deal of harm, precisely because software engineers trust algorithms to operate safely. Examples include flaws in the Needham-Schroeder authentication protocol and in several digital watermarking algorithms. The Needham-Schroeder protocol was designed to allow two parties to reliably prove their identities to each other – such as a customer and a bank during phone banking. The bank needs a guarantee that the customer is who he claims to be. Similarly, the customer wants a guarantee that the entity with whom he makes a transaction is not a fraudster who will turn around and use information given in confidence to empty the customer’s account. The Needham-Schroeder protocol contained a flaw that escaped detection for 17 years and allowed an intruder to impersonate one of the participants in the protocol [XI].
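The impersonation attack Lowe found [XI] can be sketched in a few lines. “Encryption” below is modeled as a tuple only its addressee can open, and the participants A (Alice), B (Bob), and I (the intruder) and the message structure are an illustrative simplification, not a full protocol implementation:

```python
# Toy model of Lowe's man-in-the-middle attack on the Needham-Schroeder
# public-key protocol. "Encryption" is a tuple tagged with the recipient
# whose private key is needed to open it.

def encrypt(recipient, payload):
    return ("enc", recipient, payload)

def decrypt(me, box):
    tag, recipient, payload = box
    assert recipient == me, f"{me} cannot open a message encrypted for {recipient}"
    return payload

# Alice starts a legitimate session with the intruder I...
msg1 = encrypt("I", ("Na", "A"))              # 1.  A -> I : {Na, A}_pkI
na, claimed_sender = decrypt("I", msg1)

# ...who replays her nonce to Bob while pretending to be Alice.
msg1b = encrypt("B", (na, "A"))               # 1'. I(A) -> B : {Na, A}_pkB
msg2 = encrypt("A", (decrypt("B", msg1b)[0], "Nb"))  # 2'. B -> A : {Na, Nb}_pkA

# The intruder cannot open msg2, but forwards it to Alice unchanged.
na_echo, nb = decrypt("A", msg2)              # Alice assumes the nonce is I's
msg3 = encrypt("I", (nb,))                    # 3.  A -> I : {Nb}_pkI

# The intruder opens msg3, learns Nb, and completes the run with Bob.
(nb_leaked,) = decrypt("I", msg3)
msg3b = encrypt("B", (nb_leaked,))            # 3'. I(A) -> B : {Nb}_pkB
assert decrypt("B", msg3b) == ("Nb",)         # Bob now believes he talks to Alice
print("intruder learned Bob's nonce:", nb_leaked)
```

The fix Lowe proposed is small – message 2 must also name its sender, `{Na, Nb, B}_pkA`, so Alice notices that the reply came from B rather than from I – which illustrates how subtle algorithmic defects can be.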

In addition to paying attention to buffer overflow defects and algorithmic defects, software engineers will also need to prepare for advances in computing power and even power consumption in order to fully protect their software from exploits.

The Data Encryption Standard (DES) was selected as the data encryption standard by the United States government in 1976. Although controversial from the start, it was reaffirmed as the standard as late as 1998 [XII]. Also in 1998, the Electronic Frontier Foundation built a $250,000 machine that could crack DES encryption in a little over two days [XIII]. This was a brute force attack – the equivalent of trying every combination on a combination lock until it opens. As computing power increases over time – Moore’s law is popularly stated as computing power doubling every 18 months – brute force attacks become ever more viable, especially against older, weaker standards such as DES, and software systems using these standards will be vulnerable to eavesdropping on communications or data that should be secure.
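The principle of a brute force attack can be shown against a toy cipher. The sketch below uses an illustrative 16-bit repeating-XOR “cipher” rather than real DES; DES’s 56-bit keyspace works the same way, only vastly larger:

```python
# Brute-force key search against a toy cipher: every byte of the message is
# XORed with a repeating 2-byte key. Real DES has a 56-bit key (~7.2e16
# combinations); the principle -- try every key until readable plaintext
# appears -- is identical, only the keyspace is far larger.

def toy_encrypt(key, plaintext):
    k = key.to_bytes(2, "big")
    return bytes(b ^ k[i % 2] for i, b in enumerate(plaintext))

toy_decrypt = toy_encrypt  # XOR is its own inverse

secret = toy_encrypt(0xBEEF, b"attack at dawn")

# The attacker simply tries all 65,536 possible keys.
for key in range(2 ** 16):
    guess = toy_decrypt(key, secret)
    if guess.startswith(b"attack"):   # crude "does this look like English?" test
        print(f"key found: {key:#06x} -> {guess.decode()}")
        break
```

Doubling the key length from 16 to 17 bits doubles the attacker’s work; this exponential scaling is why key sizes that were adequate in 1976 became brute-forceable by 1998.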

An attack vector currently at a mostly theoretical stage is Differential Power Analysis (DPA). DPA can break encryption keys by monitoring the power usage of a processor as it encrypts or decrypts a message [XIV]. It is possible that DPA will become broadly feasible in the future. Although it is possible to defend against DPA using mechanical and electronic measures, it is also possible to defend against it using software measures that mask the actual power consumption levels. Failure to protect a software system against DPA, especially a secure system, should be considered a failure similar to any other software defect that can lead to an exploit [XIV].
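The intuition behind power analysis can be simulated in software. The sketch below assumes a hypothetical device whose power draw at one instant is proportional to the Hamming weight (number of 1 bits) of the value it processes, plus noise – a common textbook leakage model, not a real measurement:

```python
import random

# Toy power analysis under an assumed Hamming-weight leakage model: the
# "device" leaks the number of 1 bits in (secret_key XOR input) plus noise.
random.seed(1)
SECRET_KEY = 0x5A   # the key byte the attacker wants to recover

def hamming_weight(x):
    return bin(x).count("1")

def measure_power(plaintext_byte):
    leak = hamming_weight(SECRET_KEY ^ plaintext_byte)
    return leak + random.gauss(0, 0.5)   # leakage plus measurement noise

# Attacker: collect power "traces" for many random known inputs...
inputs = [random.randrange(256) for _ in range(2000)]
traces = [measure_power(p) for p in inputs]

# ...then score every key guess by how well predicted Hamming weights
# track the measured power (a simple covariance-style score).
def score(guess):
    preds = [hamming_weight(guess ^ p) for p in inputs]
    mp, mt = sum(preds) / len(preds), sum(traces) / len(traces)
    return sum((a - mp) * (b - mt) for a, b in zip(preds, traces))

best = max(range(256), key=score)
print(f"recovered key byte: {best:#04x}")  # should match SECRET_KEY (0x5a)
```

A software countermeasure of the kind mentioned above would break the assumed correlation, for example by masking intermediate values with random bits so the measured power no longer tracks the key.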

From buffer overflows to DPA, software defects are of limited use against isolated systems, as the effects of an attack are generally limited to that individual machine. Similarly, the effort required to mount an attack on isolated machines is far greater than the benefit to an attacker, except perhaps in cases of espionage. Just as the potential benefits to users can dramatically increase with integration and interconnectedness, so can the negative effects of vulnerabilities be greatly amplified, and the effort required reduced, by that same integration and interconnectedness.

Integration and Interconnectedness Vulnerabilities

Common wisdom says that only a fully isolated system can be truly secure [XVI]. With integration and interconnectedness, systems become harder to secure, especially when previously benign failures can, at worst, become life threatening. Software systems are already integrated and interconnected in multiple ways, and are likely to become ever more so. They are currently integrated into digital video recorders, car environmental systems, heart pacemakers, cell phones, medical equipment, and military systems, to name a few. In many cases these software systems are not specialized software but are based on off-the-shelf software such as versions of Microsoft Windows or Linux. Systems are also more interconnected: cell phones can connect to data networks, more government and commercial systems are linked to the internet, and even cash machines can be reached from the internet. This integration and interconnectedness exposes these systems to more attacks by making them more readily accessible.

Integration of software systems into other products has already proven problematic and even life threatening. The Thai Finance Minister nearly baked in his bulletproof car when the car’s environmental control software failed and he could not open the doors or windows. This case also shows how a previously benign failure can become life threatening in an integrated system. The software in the car was based on software also used in Personal Digital Assistants (PDAs). Had the failure occurred in a PDA, the user could have easily restarted it and lost, at most, some unsaved work. In an integrated system such as the car’s environmental controls, a fault that in a PDA is an annoyance becomes critical. The requirements on the software have dramatically changed with the application.

Other examples of integration and interconnectedness abound. Software malfunctions partially contributed to the East Coast power outage of August 14, 2003, when alarm software was disabled by erroneous input [XVII]. In January 2003, the Bellevue, WA 911 response center was rendered unavailable by the outbreak of the Slammer/Sapphire internet worm. The same worm disabled many Bank of America ATMs, which were interconnected in a manner that allowed an internet-based worm to reach them. The phone system is no longer connected only to the traditional phone network but also to data networks. Cell phones are becoming data devices, and Voice-over-Internet-Protocol (VoIP) phone systems are starting to compete with traditional land-line systems. Phone traffic in these systems travels over the internet until converted to a phone line at the customer’s location.

What does the future of integration and interconnectedness look like? Integration will continue, with software systems becoming smaller and embedded in an ever increasing number of appliances. Cable TV boxes are starting to have more extensive capabilities, such as integrated network connections and built-in web browsers. Medical professionals can already observe and diagnose patients remotely using video conferencing systems, and prototype systems allow for remote operations – doctors already benefit from locally “remote” operations when operating with the help of miniature cameras and tools inserted through small incisions. A vulnerability in these remote operating systems would be life threatening: could a script kiddie in the future cause the death of a patient by sending malicious instructions to the operating system? RFID technology will ease inventory tasks but will also open the door to remote snooping. It is conceivable that a malicious RFID tag could exploit vulnerabilities in software such as those discussed in the first section. Refrigerators will have integrated software systems enabling applications such as tracking food usage via the aforementioned RFID tags and automatically ordering food as current supplies are exhausted. If these systems are tied to the refrigerator’s temperature controls, could someone cause the food in the refrigerator to spoil?

The integration and interconnectedness of software systems increase the number of factors to consider when designing them, as the software may operate in environments for which it was not originally designed. Interconnectedness also makes software systems more tempting targets, as they can now be reached more easily, with readily available and usable tools, for little more than the cost of an internet connection. This level of requirements has traditionally been limited to very few systems, such as medical devices in disconnected environments and space flight systems. With the potential exception of space flight systems, even these systems could not be guaranteed to work outside their intended environments. As off-the-shelf commercial systems enter more of these specialized areas, they must be secured against these threats.

Conclusion

Under the status quo, existing vulnerabilities are likely to persist, just as buffer overflows have not been eradicated in the 16 years since they first came to widespread attention. And just as integer overflows have evolved from buffer overflows, new threats such as DPA will continue to be discovered. The effects and reach of these vulnerabilities will also grow as software systems become ever more integrated and interconnected. What can be done to break out of this status quo and eliminate at least some of the vulnerabilities?

End Notes

I Kopytoff, Verne. “Q&A Kevin Mitnick Ex-hacker shares secrets of deception.” San Francisco Chronicle. 28 Oct. 2002. Retrieved 17 Nov. 2004 <http://sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2002/10/28/BU162351.DTL&type=tech>

II Lopatic, Thomas. “Vulnerability in NCSA HTTPD 1.3.” Online posting. 13 Feb. 1995. BugTraq. Retrieved 17 Nov. 2004 <http://www.securityfocus.com/archive/1/2154>

III Conover, Matt. “w00w00 on Heap Overflows.” w00w00 Security Development. Retrieved 20 Nov. 2004 <http://www.w00w00.org/files/articles/heaptut.txt>

IV “Buffer overflow”. InformationBlast.com. Retrieved 20 Nov. 2004 <http://www.informationblast.com/Buffer_overflow.html>

V Howard, Michael. “Reviewing Code for Integer Manipulation Vulnerabilities” Microsoft Corp. 28 April 2003. Retrieved 17 Nov 2004 <http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure04102003.asp>.

VI Seltzer, Larry. “Integer Overflows Add Up to Real Security Problems”. eWeek.com. 11 Mar. 2004 Retrieved 17 Nov 2004 <http://www.eweek.com/article2/0,1759,1545382,00.asp>

VII “FreeBSD fetch utility Integer Overflow Vulnerability”. Secunia.com 18 Nov. 2004. Retrieved 23. Nov. 2004 <http://secunia.com/advisories/13226>

VIII Secunia.com. Search Advisory, Vulnerability, and Virus Database. Retrieved 23 Nov. 2004. < http://secunia.com/search/?search=buffer+overflow>

IX “Winamp ‘IN_CDDA.dll’ Buffer Overflow Vulnerability”. Secunia.com 23 Nov. 2004. <http://secunia.com/advisories/13269/>

X “Internet Explorer IFRAME Buffer Overflow Vulnerability”. Secunia.com 23 Nov. 2004. <http://secunia.com/advisories/12959/>

XI Lowe, Gavin. “Breaking and Fixing the Needham-Schroeder Public-Key Protocol Using FDR”. Tools and Algorithms for the Construction and Analysis of Systems (TACAS) 1055 (1996): 147-166.

XII “Data Encryption Standard.” Wikipedia, The Free Encyclopedia. 19 Nov. 2004. Retrieved 22 Nov. 2004 <http://en.wikipedia.org/wiki/Data_Encryption_Standard>

XIII EFF: DES Cracker Project. Electronic Frontier Foundation. Retrieved 19 Nov. 2004. <http://www.eff.org/Privacy/Crypto/Crypto_misc/DESCracker/>

XIV Kocher, Paul, Joshua Jaffe, and Benjamin Jun. “Differential Power Analysis” Lecture Notes in Computer Science. 1666 (1999): 388-397.

XV Wing, Jeannette. “Beyond the Horizon: A Call to Arms” IEEE Security and Privacy. November/December 2003, pp. 62-67.

XVI Although DPA shows that it is hard to make a computer truly isolated.

XVII U.S.-Canada Power System Outage Task Force. “Final Report on the August 14, 2003 Blackout in the United States and Canada: Causes and Recommendations”. Natural Resources Canada. Retrieved 23 Nov. 2004. <http://www.nrcan.gc.ca/media/docs/final/finalrep_e.htm>