Talk:Lecture 10


Responsible Disclosure of Security Vulnerabilities

Pravin Mittal

What is the responsible way for an "ethical hacker" to disclose a security vulnerability? I am a little torn between full disclosure to the public and limited disclosure to the software vendor, publishing only once the patch is out, as I can see the pros and cons of both.

Limited disclosure helps the vendor release patches for the flaws before the bad guys decide to use them for nefarious activities.

But what if vendors are not responsive, and "black hat" hackers are capable of finding the flaws on their own? Full disclosure may also allow the community, especially the open-source community, to react quickly and fix the problem; BugTraq is a good example.

Also, to quote Elias Levy, who was named "one of the 10 most important people of the decade" by Network Computing: "Back in 1993, the Internet was actually far less secure than it is today because there was little or no dissemination of information to the public about how to keep malicious users or hackers from taking advantage of vulnerabilities."

Also, I would like to hear from the public policy students: are there stated guidelines/laws/policies from the U.S. government?

I did find a comment by Richard Clarke, President Bush's special advisor for cyberspace security, who said security professionals have an obligation to be responsible with the disclosure of security vulnerabilities. They should first report vulnerabilities to the vendor that makes the software in which the vulnerability is found, and then tell the government if the vendor doesn't take action.



SMM: As usual, the criterion is easy to state but hard to operationalize. We should follow whichever practice minimizes the number of malicious exploits. The twist is that Dr. Lackey can find potentially disastrous problems that, apparently, the black hat community never gets around to noticing and/or turning into actual disasters. Dr. Pustilnik's comment that his group is much more fiendish than the guys at black hat conventions is pretty much the same point. If we believe that Microsoft has gotten out ahead of the (quasi-open source) black hat community, then total openness might not be the best policy.

The idea that companies will ignore defects unless you do exploits is really interesting. In fact, it screams "institutional failure" or "market imperfection." More precisely, the intuition should be that there is some discrete thing wrong that, if you could only find and fix it, would make things globally better. The current hacker/exploit system looks like a bizarre historical accident that would never evolve the same way twice. Are there other mechanisms that would work better? For example, if companies were required to carry insurance, would the insurance company start jumping up and down to get their attention faster? Maybe that won't work, but it's worth thinking about what would.

BTW, the idea that security can now push back on marketing and say "that can't be done" is a very positive sign. In the old days, companies that shipped insecure products got the revenues and shifted the losses onto consumers. Now they are seeing something closer to the net cost. Telling marketing "no" says we're getting the incentives right, that projects with net negative value for society also have net negative value for the vendor.

The question about having prizes for bugs is really interesting. You could imagine that the expected payout per bug would be smaller than the $2K/person/day that MS pays out for finding bugs Lackey's way. I was particularly puzzled by the idea that paying people prizes for finding bugs is contrary to public policy. When you get past the "giving in to terrorists" rhetoric, what you're really doing is either outsourcing your bug-hunting or persuading people that it is more profitable to turn their discoveries into money than to be malicious. (In the latter case the semi-ethical hackers get subsidized to discover more bugs than they otherwise would, so some of the bugs you buy might never have been discovered without a prize and are therefore wasted. But the same thing happens when you hire Lackey, so you can't really count it as a fundamental objection.) I suppose that there are also mechanical problems with prizes -- you don't want to pay prizes to guys who then turn around and do exploits anyway -- but one imagines that there are fixes for that. For example, you could make payment conditional on nobody exploiting the same bug for 90 days.

Finally, the continued prevalence of parsing and buffer overflow errors suggests that the industry isn't anywhere near the limits of what computer science knows how to do. Presumably, simple errors are about social institutions and incentives. The question is, what percent of the existing problem is amenable to those kinds of reforms? Using Microsoft bulletins to figure out what percent of problems are social vs. frontier CS problems would make an interesting term paper.

Geoff Voelker I found the attitude about buffer overflows from the first two speakers somewhat worrisome. Basically, they were not interested in exploits that a "trained monkey" could perform, like buffer overflows. Instead, they wanted to focus their attention on the tough, challenging cryptanalysis problems, such as finding flaws in security protocols (e.g., Kerberos or 802.11). This worries me because it falls into the classic pattern of focusing not on the easiest way to break a system (practical security), but on what is technically challenging or creative. (The perfect is the enemy of the good.)

Keunwoo Lee: I also found the speakers' attitudes towards automatic programming tools to be overly glib and pessimistic. For example, yes, a buffer overrun in a type-safe language can lead to a denial-of-service attack (index out of bounds or out-of-memory exception) rather than arbitrary code execution, but I'd claim that this is an obvious improvement. If your code crashes with an exception, then you shut down the server and patch it, whereas if you give the attacker a root shell, you may not even notice, plus the attacker can bring down the server and additionally do whatever else (s)he desires. Some programming languages and automated tools can virtually eliminate certain classes of bugs. Yes, there will always be security problems that automated tools do not catch, but that argument could be used against penetration testing or any other security measure that's not 100% complete. Eliminating root exploits via buffer overruns, format string attacks, and race conditions would make a big quantitative difference. These may not be the "cool, hard cracks" that security geeks love, but they're still an important and solvable problem.
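
To make the contrast concrete, here is a minimal C++ sketch (my own illustration, not from the lecture): an unchecked access is undefined behavior and historically the raw material for code-execution exploits, while a bounds-checked access fails loudly with an exception.

 // Sketch: unchecked vs. bounds-checked access in C++.
 #include <iostream>
 #include <stdexcept>
 #include <vector>
 
 int main() {
     std::vector<char> buf(8, 'A');
     std::size_t evil_index = 1000;  // pretend this came from untrusted input
 
     // Unchecked: operator[] does no bounds check, so an out-of-range write
     // is undefined behavior -- the classic route to memory corruption and
     // arbitrary code execution.
     // buf[evil_index] = 'X';  // don't: silent corruption
 
     // Checked: at() throws std::out_of_range instead. The worst case is a
     // crash/denial of service that you notice, log, and patch.
     try {
         buf.at(evil_index) = 'X';
     } catch (const std::out_of_range& e) {
         std::cerr << "bounds violation caught: " << e.what() << '\n';
     }
     return 0;
 }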

Jeff Bigham

  • Prizes for Bugs. A complication with awarding prizes for finding bugs is coming up with a pricing structure that makes sense and doesn't end up making matters worse. A problem exists when hackers could get more on the open market for their exploits than they could from the prize, so, to make prizes work, businesses would always have to offer more than the open market would. Given the presumed high profitability of botnets and the like, it seems that a good exploit could potentially be worth millions. Unfortunately, the potential payoff may be much higher than what the hacker could actually get, and it seems incredibly difficult to gauge that accurately. It's also not clear to me how a hacker could prove to Microsoft that they've found a legitimate exploit without giving away a huge hint as to how it was accomplished (if they break into a Microsoft computer, they've probably left breadcrumbs that the smart people at Microsoft could piece together). Given the uncertainty about these new pseudo-ethical-maybe hackers at so many levels, it seems like MS might be out a whole lot more money paying off a lot of threats of dubious seriousness, with the side effect of enticing people into targeting their software instead of that of their competitors. Without prizes, however, MS can hire people to do the same sorts of things while under their control and without nearly as much risk of them going bad.

Geoff Voelker There are a couple of assumptions here that may not hold. First, I suspect that there are many more people willing to submit exploits for prizes than to put them on the botnet market. Prizes make the effort legitimate and turn it into a programming activity like any other (e.g., Google giving prizes for programming projects). Second, you do not necessarily have to exploit a computer at Microsoft -- you just need to demonstrate an exploit in MS software.

BotNets and Windows Operating Systems

Chris Fleizach - We haven't heard a great deal about botnets in this class, but I think their threat potential deserves attention. A few reports have come out from industries, like online gambling (usually operating outside the US and its legal protections), that have detailed how they either succumbed to or defeated a targeted DDoS attack from a botnet. Although I couldn't find a reference, there was a Wired article about an off-shore gambling company which was the target of an attack that at its peak had 3Gb/sec directed at it. I'm assuming 100% of those hosts were compromised Windows machines (otherwise separate botnet software would have to be written for each type of OS for little gain). I was curious whether Microsoft thought they should take an active role in stopping the botnet problem, or whether they are responsible at all. In one sense, the software controlling the computer for the botnet was allowed to be installed by the user, and it must do what it needs to do. The system assumes the user installed it, and it runs with the user's privileges. Many times, social engineering can fool a user into installing such software without exploiting vulnerabilities inherent in the OS. The first speaker mentioned that they would continue to improve their spyware and adware detectors, frequently sending updates that can detect new engines. The most obvious problem with this approach is that an enterprising hacker can write his own botnet controller that Microsoft won't know about.

The next obvious solution is to turn on the firewall and disallow incoming connections, which would stop a botnet controller from accessing the computer. But currently, when software is installed as a user, there is nothing stopping that software from disabling the firewall entirely, or just for the specific port it needs. Linux and MacOS both require a password to enter the control panel and change settings like the firewall, but Windows has never done this. Access is granted based on the first login. It seems, just from a cursory examination, that preventing botnets might start by gating access to critical system configuration behind passwords. Does anyone from MS know if Vista will do more to protect against these problems? Do they have better ideas about stopping botnets before they start? Is it their problem at all?

Geoff Voelker

We will hear more about botnets later in the quarter.

As to whether MS feels responsible, I would put it in terms of whether MS feels market pressure. And the answer is yes -- they incur a huge support cost as a result of spyware, adware, etc. Malicious code tends to cause stability problems on people's computers. The first place people blame and call is MS. Since support is costly, MS would like to eliminate the source of those calls.

I'm not entirely clear on the proposed scenario, but I don't think that firewalls are a solution here. Once malicious code is able to run on a machine, it can control the firewalls.

--Imran Ali 12:42, 4 November 2005 (PST) It looks like the FBI apprehended the 'Botmaster'; see the article: FBI agents bust Botmaster. His sentence carries a maximum term of 50 years. It looks like the government is cracking down hard on this type of criminal activity. The article also indicates that the Botmaster was 'lured' to the FBI offices. I wonder how they actually tracked him down in the first place?

Jack Menzel I'm not sure how reputable a source "pcpro.com" is, but they claim that the luring entailed being "lured to FBI offices in order to pick up equipment seized in an earlier raid". There was a more in-depth report from the Washington Post yesterday.


Asad Jawahar From what I know, MS is hardening security in Vista along the lines you stated, i.e., Vista will probably require a password for security-sensitive operations even if you are logged on as admin.

Examples of Vulnerabilities

Jeff Davis Last night people were asking for examples of security vulnerabilities that go beyond the basic buffer overflow, things that tools like Prefix and Prefast cannot detect. Internet Explorer has had plenty of these. We had an interesting vulnerability in the download dialog where an evil web site could repeatedly send an executable, causing IE to display the download prompt asking the user to Run, Save, or Cancel. If the web site did it quickly enough, it could create enough dialogs to run the system out of memory. When this happened, the dialog would fail to display. The developer who wrote the code never considered what would happen if DialogBoxParam() failed due to OOM, and due to a quirk in the way he initialized his variables, the switch statement afterward would fall into the Run case, allowing the bad guys to run arbitrary code on the machine.
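
For readers who want to see the shape of that bug, here is a hypothetical reconstruction in C++ (a sketch of the pattern described, not IE's actual code; ShowDownloadDialog is a made-up stand-in for the DialogBoxParam() call):

 #include <iostream>
 
 // Possible user choices; DLG_FAILED models the dialog call failing
 // (e.g., under out-of-memory pressure).
 enum DialogResult { RUN, SAVE, CANCEL, DLG_FAILED };
 
 // Stand-in for the real dialog call, which fails when the system is OOM.
 DialogResult ShowDownloadDialog(bool oom) {
     return oom ? DLG_FAILED : CANCEL;
 }
 
 void HandleDownload(bool oom) {
     // BUG: the quirky initialization defaults 'choice' to RUN, so if the
     // dialog fails to display, the code behaves as if the user clicked Run.
     DialogResult choice = RUN;
     DialogResult r = ShowDownloadDialog(oom);
     if (r != DLG_FAILED) {
         choice = r;  // never reached on OOM; 'choice' keeps its RUN default
     }
     switch (choice) {
         case RUN:    std::cout << "launching attacker-supplied file!\n"; break;
         case SAVE:   std::cout << "saving file\n"; break;
         case CANCEL: std::cout << "download cancelled\n"; break;
         default:     break;
     }
     // FIX: initialize 'choice' to CANCEL (fail closed), or handle
     // DLG_FAILED explicitly as CANCEL.
 }
 
 int main() {
     HandleDownload(/*oom=*/true);  // demonstrates the fall-into-Run path
     return 0;
 }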

In Writing Secure Code, page 54, when listing security principles to live by, Howard states clearly: "Don't mix code and data." Unfortunately for us, web pages by their nature are mixtures of code and data. People have done all sorts of creative things with jscript. Prior to Windows XP Service Pack 2, web sites could call window.createPopup() (not the same as window.open()), creating a borderless, chromeless, topmost window with HTML content of their choosing. The function was originally intended to make scripting menus on web sites easier for developers. However, it led to all sorts of spoofing attacks, like overlaying the Address bar with phony text, covering up the Cancel button on the ActiveX install dialog, etc. Another fun vulnerability was web sites that would wait for the mousedown event and then move the IE window out from under the mouse, creating a drag-and-drop event. If they were clever about how they did this, evil web sites could put links on your desktop, in your favorites folder, etc.

Obviously, switching languages from C++ to something "safer" is not a good solution, nor is relying on tools like Prefix (not to in any way disparage the work the Prefix team does). I saw Theo de Raadt speak at an ACM conference once about the OpenBSD three-year, line-by-line code audit. He said 90% of the vulnerabilities stemmed from people not fully understanding the APIs they called (or the documentation being broken, etc.), and he is right.

David Dorwin Searching Microsoft's website, it appears PREFast will be in Visual Studio 2005 ("Whidbey"). Is PREFix going to remain internal?

False Positives in On-line Crash Analysis

Jeff Davis What the first speaker said about false positives is true. I debugged a lot of dumps where the stack cookie (/GS switch stuff) was clearly visible on the stack in the place it should be, only one bit had been flipped. Or it was one word away from where it should be on the stack. These are indicative of faulty hardware (or overclocked hardware). Or the stack was just complete random garbage, which indicates a serious flaw somewhere, but it is pretty difficult to debug if the stack is blown away.
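
As a concrete illustration of that triage rule, here is a small C++ sketch (my own reconstruction, not the actual OCA tooling): if the cookie found in the dump differs from the expected /GS cookie by exactly one bit, the crash is more plausibly a hardware fault than a real overflow.

 #include <bitset>
 #include <cstdint>
 #include <iostream>
 
 // Heuristic: a single flipped bit between the expected and found cookie
 // values suggests faulty or overclocked hardware, not a buffer overflow.
 bool LooksLikeHardwareBitFlip(std::uint64_t expected, std::uint64_t found) {
     // XOR is 1 exactly where the two values differ; count the differing bits.
     return std::bitset<64>(expected ^ found).count() == 1;
 }
 
 int main() {
     std::uint64_t expected = 0x2B992DDFA232ULL;       // hypothetical cookie value
     std::uint64_t found    = expected ^ (1ULL << 17); // same cookie, one bit flipped
 
     std::cout << (LooksLikeHardwareBitFlip(expected, found)
                       ? "likely hardware fault (false positive)\n"
                       : "possible real overflow\n");
     return 0;
 }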

In all the OCA buckets I have looked at, I only found one buffer overflow from a GS violation, and it was a false positive -- the cookie was there on the stack, just off by a bit. But while I was in there looking at the code, I noticed some string manipulation in a different function that looked suspect.

Small Size Big Impact

Tolba: The first speaker just glossed over this topic, but I think it has a very profound impact. Handheld devices and consumer electronics based on special versions of popular software like Windows are becoming more and more popular. Such devices are growing to play a central role in many people's lives. With high-speed, always-on connection plans becoming more affordable on devices like smart-phones, they are becoming more vulnerable to attacks traditionally targeted at desktop and server systems. The impact of this is especially scary given that system software on such devices is usually embedded in some form of read-only memory (ROM), which raises the update barrier for such devices. In time, handhelds and consumer electronics could become very attractive targets for attackers (if they are not already) and turn into fertile ground for information theft and more.

Manish Mittal: With reference to this article http://www.securitydocs.com/library/3188

It's interesting to look at the stats on loss of handheld devices containing sensitive information. Two major threats to this information are:

  • Loss of the handheld device
  • Malware attack

Rightfully so, the author suggests that unless end-users are fully engaged in the company's security efforts, they can be an organization's greatest vulnerability. Awareness training and related activities are the best way to obtain end-user participation in a security program.

Single Point of Failure

--Chris DuPuis 15:15, 3 November 2005 (PST) In Brian Lopez's presentation, he described a wide variety of ways in which the details of a company's security infrastructure can become public, and the measures that he recommended to mitigate such information leakage.

However, he also mentioned the crates of security-related documents that organizations send to his team for the purpose of the intrusion test. In addition, he indicated that the clients sometimes (not always) call his team back to see how they have progressed. These two facts imply that, somewhere at Lawrence Livermore, there is a storeroom full of juicy security details for some of the most sensitive sites in the country. Also, based on the description of some number of crates per client, and the large number of clients, it seems likely that a great many of these crates are stored in some kind of offsite document storage facility.

I wonder if the clients are apprised of the risk of the storage area for these documents being compromised.

Chris Fleizach - In a similar vein, ChoicePoint, a company that sells reports to industries like insurance, collects, stores, and analyzes sensitive consumer information. Last February there was an incident in which they sold over 150K credit reports to identity thieves because they didn't have a policy of doing background checks on their customers. More disturbing is that ChoicePoint has no financial incentive to keep the information secret, since they bear none of the costs of fraudulent transactions and identity theft. This seems exactly like the kind of scenario Brian Lopez's team would walk through when analyzing a company. I suspect most top-secret government institutions have policies that will usually make it too difficult for most attackers to get through; companies that hold data for consumers are more plentiful and less regulated. The information they hold is usually valuable to only one individual (like a credit report), unlike a state secret. The result is a burgeoning atmosphere for criminals to exploit.

David Dorwin There is some financial incentive for ChoicePoint as long as there are viable alternatives for their clients. One would think that clients won't want to be affiliated with companies that the public knows have lost consumer data. This summer, Visa and American Express announced they would end their relationships with the credit card processing company CardSystems after it exposed nearly 40 million credit card accounts. (Visa has since said it would continue its relationship with CardSystems.)

The ChoicePoint scenario described above sounds like a giant loophole. Is anyone aware of pending legislation to close such loopholes?

Andrew Cencini After reading the Spyware paper for this upcoming week's lectures, I find it interesting that the authors of the paper describe in (some) detail the layout of the UW campus network. Probably most of that information is public anyway but I found it to be interesting and well-timed to read a paper that would help an attacker reduce some of the legwork of dissecting a target network.

Passports and RFID

-Jessica Miller 16:10, 4 November 2005 (PST) I came across a very relevant article in Wired [http://www.wired.com/news/privacy/0,1848,69453,00.html?tw=wn_tophead_2 Fatal Flaw Weakens RFID Passports] and thought it would be of interest to many of us. The article is written by Bruce Schneier, who wrote the pop-security book "Secrets and Lies: Digital Security in a Networked World". In the Wired article, Schneier discusses RFID technology and the privacy issues that come along with deploying RFID in passports. Schneier also explains how the State Department has tried to mitigate these security concerns with various "fixes". In the end, though, Schneier is still dissatisfied and argues that this RFID passport technology should not have been designed behind closed doors and should be open to public scrutiny. In any case, I thought this was another excellent example of the question of whether technology should be made available to the public as a security measure. (It is also very relevant, as the State Department plans to issue these RFID passports in October 2006.)

Not sure what part of the lecture discussion this fits in, but I highly recommend Brian Lopez's talk. He did a great job of breaking down what his group does in terms of pre-assessment, assessment, and post-assessment. The fact that his group examines everything from an organization's press releases, annual reports, and firewalls to its physical compound was amazing. I wondered how long this process takes. Do you think the assessment program lasts a few months, a year, or longer? I don't know how fast his staff can accomplish certain tasks or how many people he has working for him.... Katie

Chemical weapons dumped at sea off the U.S. coast

Marty Lyons, UW -- This article from the Daily Press (Newport News, VA) details chemical weapons that were dumped at sea. They are now decaying and beginning to show up in fishing nets, marine life, and more. The effects of this short-sightedness could be long-term and influence all kinds of human use of waterfront areas, seafood consumption, and further exploration of sea areas for other uses such as oil production. There is as yet no major public or political response, but one could imagine this will be a major news event once there is an actual public safety impact; hopefully this situation is noted and acted on by the media and Congress.

Special report from the Daily Press at:

http://www.dailypress.com/news/local/dp-chemdumping-stories,0,7934774.storygallery

Part 1 (excerpt below): http://www.dailypress.com/news/local/dp-02761sy0oct30,0,3545637.story

Part 2: http://www.dailypress.com/news/local/dp-02774sy0oct31,0,6036010.story

Decades of dumping chemical arms leave a risky legacy

BY JOHN M.R. BULL Daily Press (Newport News, Va.)

NEWPORT NEWS, Va. - In the summer of 2004, a clam-dredging operation off New Jersey pulled up an old artillery shell.

The long-submerged World War I-era explosive was filled with a black tarlike substance.

Bomb disposal technicians from Dover Air Force Base, Del., were brought in to dismantle it. Three of them were injured - one hospitalized with large pus-filled blisters on an arm and hand.

The shell was filled with mustard gas in solid form.

What was long feared by the few military officials in the know had come to pass: Chemical weapons that the Army dumped at sea decades ago finally ended up on shore in the United States.

[...]

Hashing Passwords and Encrypting Credit Card Information - Making It Easier for Fraudsters?

--Hema 20:31, 6 November 2005 (PST) For e-commerce websites, it has become a SOX requirement to hash users' passwords before storing them. The advantage of hashing is that an unauthorized person accessing the database won't see the users' passwords. But seeing a password is only useful if the user uses the same password on other websites. If someone can break into a company database, users will hate the company even if the passwords are hashed.

Moreover, fraudsters tend to reuse the same passwords on a site. Companies have fraud monitoring groups who monitor website transactions every day, and knowledge of fraudster passwords over time helps them identify new fraud accounts created on the website. One-way hashing of passwords means it takes them much longer to identify fraudsters. Encrypting credit cards leads to a similar problem: fraudsters use credit card generators to attack sites, and fraud prevention groups used to find it easy to catch such people because they could spot them generating card numbers with the same BIN prefixes. But with encryption, they are no longer able to use the same tactics.
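
A small C++ sketch of the hashing point (my own illustration; std::hash stands in for a real cryptographic digest such as SHA-256): without per-account salts, identical passwords still produce identical digests, so reuse can be spotted by grouping accounts on the digest value; add per-account salts and that grouping trick disappears, which is exactly the fraud team's complaint.

 #include <functional>
 #include <iostream>
 #include <string>
 #include <unordered_map>
 
 int main() {
     // Stand-in for a real one-way digest such as SHA-256.
     std::hash<std::string> digest;
 
     // Unsalted: identical passwords hash to identical digests, so a fraud
     // team can still spot reuse by grouping accounts on the digest value.
     std::unordered_map<std::size_t, int> seen;
     for (const std::string pw : {"letmein1", "hunter2", "letmein1"}) {
         if (++seen[digest(pw)] > 1) {
             std::cout << "digest seen before -> possible linked fraud accounts\n";
         }
     }
 
     // Salted: a per-account salt makes digests differ even for identical
     // passwords, so the grouping trick (and easy fraud linking) stops working.
     std::cout << digest("saltA" "letmein1") << " != "
               << digest("saltB" "letmein1") << '\n';
     return 0;
 }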

Could someone give examples of how internet-based companies have adapted to identify fraudulent transactions?