Talk:Lecture 10


Responsible Disclosure of Security Vulnerabilities

Pravin Mittal

What is the responsible way for an "ethical hacker" to disclose a security vulnerability? I am a little torn between full disclosure to the public and limited disclosure to the software vendor, with public disclosure only once the patch is out, as I can see the pros and cons of both.

Limited disclosure helps the vendor release patches for the flaws before the bad guys decide to use them for nefarious activities.

But what if vendors are not responsive, and "black hat" hackers are capable of finding the flaws on their own? Full disclosure may also allow the community, especially the open-source community, to react quickly and fix the problem; BugTraq is a good example.

Also, to quote Elias Levy, who was named one of the "10 most important people of the decade" by Network Computing: "Back in 1993, the Internet was actually far less secure than it is today because there was little or no dissemination of information to the public about how to keep malicious users or hackers from taking advantage of vulnerabilities."

Also, I would like to hear from the public policy students: are there stated guidelines, laws, or policies from the U.S. government?

I did find a comment from Richard Clarke, President Bush's special advisor for cyberspace security, who said that security professionals have an obligation to be responsible with the disclosure of security vulnerabilities. They should first report vulnerabilities to the vendor who makes the software in which the vulnerability is found, and then tell the government if the vendor doesn't take action.

Pravin Mittal


SMM: As usual, the criterion is easy to state but hard to operationalize. We should follow whichever practice minimizes the number of malicious exploits. The twist is that Dr. Lackey can find potentially disastrous problems that, apparently, the bad hat community never gets around to noticing and/or turning into actual disasters. Dr. Pustilnik's comment that his group is much more fiendish than the guys at bad hat conventions is pretty much the same point. If we believe that Microsoft has gotten out ahead of the (quasi-open source) bad hat community, then total openness might not be the best policy.

The idea that companies will ignore defects unless you do exploits is really interesting. In fact, it screams "institutional failure" or "market imperfection." More precisely, the intuition should be that there is some discrete thing wrong that, if you could only find and fix it, would make things globally better. The current hacker/exploit system looks like a bizarre historical accident that would never evolve the same way twice. Are there other mechanisms that would work better? For example, if companies were required to carry insurance, would the insurance company start jumping up and down to get their attention faster? Maybe that won't work, but it's worth thinking about what would.

BTW, the idea that security can now push back on marketing and say "that can't be done" is a very positive sign. In the old days, companies that shipped insecure products got the revenues and shifted the losses onto consumers. Now they are seeing something closer to the net cost. Telling marketing "no" says we're getting the incentives right, that projects with net negative value for society also have net negative value for the vendor.

The question about having prizes for bugs is really interesting. You could imagine that the expected payout per bug would be smaller than the $2K/person/day that MS pays out for finding bugs Lackey's way. I was particularly puzzled by the idea that paying people prizes for finding bugs is contrary to public policy. When you get past the "giving in to terrorists" rhetoric, what you're really doing is either outsourcing your bug-hunting or persuading people that it is more profitable to turn their discoveries into money than to be malicious. (In the latter case the semi-ethical hackers get subsidized to discover more bugs than they otherwise would, so some of the bugs you buy might never have been discovered without a prize and are therefore wasted. But the same thing happens when you hire Lackey, so you can't really count it as a fundamental objection.) I suppose that there are also mechanical problems with prizes -- you don't want to pay prizes to guys who then turn around and do exploits anyway -- but one imagines that there are fixes for that. For example, you could make payment conditional on nobody exploiting the same bug for 90 days.

Finally, the continued prevalence of parsing and buffer overflow errors suggests that the industry isn't anywhere near the limits of what computer science knows how to do. Presumably, simple errors are about social institutions and incentives. The question is, what percent of the existing problem is amenable to those kinds of reforms? Using Microsoft bulletins to figure out what percent of problems are social vs. frontier CS problems would make an interesting term paper.


Jeff Bigham

  • Prizes for Bugs. A complication with awarding prizes for finding bugs is coming up with a pricing structure that makes sense and doesn't end up making matters worse. A problem exists when hackers could get more on the open market for their exploits than they could from the prize, so, to make prizes work, businesses would always have to offer more than would potentially be offered on the open market. Given the presumed high profitability of botnets and the like, it seems that a good exploit could potentially be worth millions. Unfortunately, the theoretical payoff may be much higher than what the hacker could actually get, and it seems incredibly difficult to gauge that accurately (a rough back-of-the-envelope sketch of the pricing problem follows below). It's also not clear to me how a hacker could prove to Microsoft that they've found a legitimate exploit without giving away a huge hint about how it was accomplished (if they break into a Microsoft computer, they've probably left breadcrumbs that the smart people at Microsoft could piece together). Given the uncertainty around these new pseudo-ethical-maybe hackers at so many levels, it seems like MS might be out a whole lot more money paying off a lot of threats of dubious seriousness, with the side effect of enticing people into targeting their software instead of that of their competitors. Without prizes, however, MS can hire people to do the same sorts of things while under their control and without nearly as much risk of them going bad.
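A back-of-the-envelope way to see the pricing problem mentioned above (every number here is invented for illustration, not real market data): a bounty only has to beat the exploit's risk-adjusted value to the seller, not its sticker price, but that risk discount is exactly the part nobody can measure well.

 // Hypothetical break-even bounty calculation. All values are assumptions
 // made up for illustration -- none of this reflects real market data.
 #include <cstdio>
 
 int main() {
     double black_market_value = 50000.0;  // assumed sale price of the exploit (USD)
     double risk_discount      = 0.40;     // assumed haircut the seller applies for the
                                           // chance of being caught, cheated, or prosecuted
     double risk_adjusted_value = black_market_value * (1.0 - risk_discount);
 
     // A bounty only has to beat the risk-adjusted value, not the sticker price...
     std::printf("Risk-adjusted value to the seller: $%.0f\n", risk_adjusted_value);
     std::printf("A bounty above that should, in theory, win the bug.\n");
 
     // ...but if the vendor misjudges risk_discount (the unmeasurable part),
     // the bounty is either wasted money or too low to change anyone's behavior.
     return 0;
 }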

BotNets and Windows Operating Systems

Chris Fleizach - We haven't heard a great deal about BotNets in this class, but I think their threat potential deserves attention. A few reports have come out from industries, like online gambling (usually operating outside the US and its legal protections), that have detailed either how they succumbed to or defeated a targeted DDoS attack from a BotNet. Although I couldn't find a reference, there was a Wired article about an off-shore gambling company that was the target of an attack which at its peak had 3Gb/sec directed at it. I'm assuming 100% of those hosts were compromised Windows machines (otherwise separate botnet software would have to be written for each type of OS, for little gain). I was curious whether Microsoft thought they should take an active role in stopping the BotNet problem, or whether they are responsible at all. In one sense, the software controlling the computer for the BotNet was allowed to be installed by the user, and it must do what it needs to do. The system assumes the user installed it, and it runs with the user's privileges. Many times, social engineering can fool a user into installing such software without exploiting vulnerabilities inherent in the OS. The first speaker mentioned that they would continue to improve their spyware and adware detectors, frequently sending updates that can detect new engines. The most obvious problem with this approach is that an enterprising hacker can write his own BotNet controller that Microsoft won't know about.

The next obvious solution is to turn on the firewall and disallow incoming connections, which would stop a BotNet controller from accessing the computer. But currently, when software is installed as a user, there is nothing stopping that software from disabling the firewall entirely, or opening just the specific port it needs (a sketch of how little stands in the way appears below). Linux and MacOS both require a password to enter the control panel and change settings like the firewall, but Windows has never done this; access is granted based on the first login. It seems, just from a cursory examination, that preventing BotNets might start by gating access to critical system configuration behind passwords. Does anyone from MS know if Vista will do more to protect against these problems? Do they have better ideas about stopping BotNets before they start? Is it their problem at all?
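To make the "nothing stopping that software" point concrete, here is a minimal sketch, assuming the Windows XP SP2 netsh firewall command syntax; the program itself only queries the current state, and the dangerous operations are left as comments.

 // Illustration of the point above: on Windows XP, a program running as the
 // logged-in user (typically an administrator) can reconfigure the built-in
 // firewall with no password prompt at all. This sketch only *queries* the
 // firewall state; the commented-out lines show how trivially it could change it.
 #include <cstdlib>
 
 int main() {
     // Show the current firewall mode (XP SP2 "netsh firewall" context).
     std::system("netsh firewall show opmode");
 
     // The very same unprivileged channel would let user-level code disable the
     // firewall outright, or punch a hole for the one port a bot controller needs:
     //   std::system("netsh firewall set opmode disable");
     //   std::system("netsh firewall add portopening TCP 6667 bot_backdoor");
     // Neither command asks for a password, which is exactly the gap a
     // password-protected control panel (as on Linux/MacOS) would close.
     return 0;
 }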


Examples of Vulnerabilities

Jeff Davis Last night people were asking for examples of security vulnerabilities that go beyond the basic buffer overflow, things that tools like Prefix and Prefast cannot detect. Internet Explorer has had plenty of these. We had an interesting vulnerability in the download dialog where an evil web site could repeatedly send an executable, causing IE to display the download prompt asking the user to Run, Save, or Cancel. If the web site did it quickly enough, it could create enough dialogs to run the system out of memory. When this happened, the dialog would fail to display. The developer who wrote the code never considered what would happen if DialogBoxParam() failed due to OOM, and due to a quirk in the way he initialized his variables, the switch statement afterward would fall into the Run case, allowing the bad guys to run arbitrary code on the machine. (A sketch of the pattern appears below.)
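A hypothetical reconstruction of that bug pattern, for readers who want to see it spelled out: the names, resource IDs, and constants below are invented, and this is not the actual IE source, but the shape of the mistake is the one described above.

 // Hypothetical reconstruction of the bug pattern. All names, IDs, and constants
 // are invented for illustration -- this is not the actual IE code.
 #include <windows.h>
 #include <cstdio>
 
 enum DownloadChoice { CHOICE_RUN = 1, CHOICE_SAVE = 2, CHOICE_CANCEL = 3 };
 
 // Stub dialog procedure; a real one would call EndDialog() with the user's choice.
 static INT_PTR CALLBACK DownloadDlgProc(HWND, UINT, WPARAM, LPARAM) { return FALSE; }
 
 void OnFileDownload(HINSTANCE hInst, HWND hwndParent, LPARAM downloadContext)
 {
     // BUG: the result variable starts out holding the most dangerous value.
     INT_PTR choice = CHOICE_RUN;
 
     INT_PTR ret = DialogBoxParamW(hInst, MAKEINTRESOURCEW(100), // 100 = made-up dialog ID
                                   hwndParent, DownloadDlgProc, downloadContext);
     if (ret != -1)      // -1 means the dialog could not be created (e.g. out of memory)
         choice = ret;   // on failure, 'choice' silently keeps CHOICE_RUN
 
     switch (choice) {
     case CHOICE_RUN:    std::puts("Running the downloaded executable (attacker wins)"); break;
     case CHOICE_SAVE:   std::puts("Saving the file to disk");                           break;
     case CHOICE_CANCEL: std::puts("Doing nothing");                                     break;
     }
     // The boring fix: initialize to CHOICE_CANCEL and treat any
     // DialogBoxParam() failure (-1 or 0) as a cancel.
 }
 
 int main()
 {
     // No dialog resource 100 exists in this module, so DialogBoxParamW() fails and
     // returns -1 -- the same failure path the OOM attack forced in IE.
     OnFileDownload(GetModuleHandleW(nullptr), nullptr, 0);
     return 0;
 }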

In Writing Secure Code, page 54, when listing security principles to live by, Howard states clearly: "Don't mix code and data." Unfortunately for us, web pages by their nature are mixtures of code and data. People have done all sorts of creative things with jscript. Prior to Windows XP Service Pack 2, web sites could call window.createPopup() (not the same as window.open()), creating a borderless, chromeless, topmost window with HTML content of their choosing. The function was originally intended to make scripting menus on web sites easier for developers. However, it led to all sorts of spoofing attacks, like overlaying the Address bar with phony text, covering up the Cancel button on the ActiveX install dialog, etc. Another fun vulnerability was web sites that would wait for the mousedown event and then move the IE window out from under the mouse, creating a drag-and-drop event. If they were clever about how they did this, evil web sites could put links on your desktop, in your favorites folder, etc.

Obviously, switching languages from C++ to something "safer" is not a good solution, nor is relying on tools like Prefix (not in any way to disparage the work the Prefix team does). I saw Theo de Raadt speak at an ACM conference once, where he talked about the OpenBSD three-year, line-by-line code audit. He said 90% of the vulnerabilities stemmed from people not fully understanding the APIs they called (or the documentation being broken, etc.), and he is right. A classic example of that kind of API misunderstanding is sketched below.
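As one illustration of the "not fully understanding the API" failure mode (my own example, not one taken from the OpenBSD audit): strncpy() looks like the safe, bounded version of strcpy(), but it does not guarantee a terminating NUL.

 // A common C API misunderstanding of the kind de Raadt describes (my example,
 // not one from the OpenBSD audit): strncpy() does NOT guarantee a terminating
 // NUL when the source is as long as or longer than the destination buffer.
 #include <cstdio>
 #include <cstring>
 
 int main() {
     char buf[8];
     const char* input = "AAAAAAAAAAAA";   // longer than buf
 
     // Looks safe because the length is bounded, but after this call buf holds
     // eight 'A's and no terminator; any later strlen/printf would run off the end.
     std::strncpy(buf, input, sizeof(buf));
 
     // The extra step the API actually requires:
     buf[sizeof(buf) - 1] = '\0';
     std::printf("%s\n", buf);             // prints "AAAAAAA"
     return 0;
 }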

False Positives in Online Crash Analysis

Jeff Davis What the first speaker said about false positives is true. I debugged a lot of dumps where the stack cookie (the /GS switch stuff) was clearly visible on the stack in the place it should be, only one bit had been flipped. Or it was one word away from where it should be on the stack. These are indicative of faulty hardware (or overclocked hardware). Or the stack was just completely random garbage, which indicates a serious flaw somewhere, but it is pretty difficult to debug if the stack is blown away.

In all the OCA buckets I have looked at, I only found one buffer overflow from a GS violation, and it was a false positive -- the cookie was there on the stack, just off by a bit. But while I was in there looking at the code, I noticed some string manipulation in a different function that looked suspect.
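A rough sketch of the kind of triage being described, for the curious (this is my own illustration, not the actual logic behind Microsoft's Online Crash Analysis bucketing): compare the observed cookie against the expected one and count how far apart they are.

 // Illustrative triage heuristic for /GS cookie mismatches, along the lines of
 // the debugging pattern described above. A made-up sketch, not MS's OCA logic.
 #include <cstdint>
 #include <cstdio>
 
 // Count differing bits between the expected and observed cookie values.
 static int BitsFlipped(uint32_t expected, uint32_t observed) {
     uint32_t diff = expected ^ observed;
     int count = 0;
     while (diff) { count += diff & 1u; diff >>= 1; }
     return count;
 }
 
 int main() {
     uint32_t expected = 0xBB40E64Eu;            // example expected cookie value
     uint32_t observed = expected ^ (1u << 9);   // same cookie with one bit flipped
 
     int flipped = BitsFlipped(expected, observed);
     if (flipped == 0)
         std::puts("Cookie intact: the violation fired for some other reason.");
     else if (flipped == 1)
         std::puts("One bit off: smells like faulty or overclocked hardware, "
                   "i.e. a false positive, not a buffer overflow.");
     else
         std::puts("Cookie thoroughly clobbered: treat as a real overrun "
                   "(or a blown-away stack) and go read the code.");
     return 0;
 }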

Single Point of Failure

--Chris DuPuis 15:15, 3 November 2005 (PST) In Brian Lopez's presentation, he described a wide variety of ways in which the details of a company's security infrastructure can become public, and the measures that he recommended to mitigate such information leakage.

However, he also mentioned the crates of security-related documents that organizations send to his team for the purpose of the intrusion test. In addition, he indicated that the clients sometimes (not always) call his team back to see how they have progressed. These two facts imply that, somewhere at Lawrence Livermore, there is a storeroom full of juicy security details for some of the most sensitive sites in the country. Also, based on the description of some number of crates per client, and the large number of clients, it seems likely that a great many of these crates are stored in some kind of offsite document storage facility.

I wonder if the clients are apprised of the risk associated with the storage area for these documents being compromised.