Talk:Lecture 10

Responsible Disclosure of Security Vulnerabilities

Pravin Mittal

What is the responsible way for an "ethical hacker" to disclose a security vulnerability? I am a little torn between full disclosure to the public and limited disclosure to the software vendor, going public only once the patch is out, as I can see pros and cons for both.

Limited disclosure helps the vendor release patches for the flaws before the bad guys decide to use them for nefarious activities.

But what if vendors are not responsive, and "black hat" hackers are capable of finding the flaws on their own? Full disclosure may also allow the community, especially the open-source community, to react quickly and fix the problem; BugTraq is a good example.

Also, to quote Elias Levy, who was named one of the "10 most important people of the decade" by Network Computing: "Back in 1993, the Internet was actually far less secure than it is today because there was little or no dissemination of information to the public about how to keep malicious users or hackers from taking advantage of vulnerabilities."

Also, I would like to hear from the public policy students: are there stated guidelines/laws/policies from the U.S. government?

I did find a comment by Richard Clarke, President Bush's special advisor for cyberspace security, who said that security professionals have an obligation to be responsible with the disclosure of security vulnerabilities. They should first report vulnerabilities to the vendor who makes the software in which the vulnerability is found, and then tell the government if the vendor doesn't take action.

SMM: As usual, the criterion is easy to state but hard to operationalize. We should follow whichever practice minimizes the number of malicious exploits. The twist is that Dr. Lackey can find potentially disastrous problems that, apparently, the bad hat community never gets around to noticing and/or turning into actual disasters. Dr. Pustilnik's comment that his group is much more fiendish than the guys at bad hat conventions is pretty much the same point. If we believe that Microsoft has gotten out ahead of the (quasi-open source) bad hat community, then total openness might not be the best policy.

The idea that companies will ignore defects unless you do exploits is really interesting. In fact, it screams "institutional failure" or "market imperfection." More precisely, the intuition should be that there is some discrete thing wrong that, if you could only find and fix it, would make things globally better. The current hacker/exploit system looks like a bizarre historical accident that would never evolve the same way twice. Are there other mechanisms that would work better? For example, if companies were required to carry insurance, would the insurance company start jumping up and down to get their attention faster? Maybe that won't work, but it's worth thinking about what would.

BTW, the idea that security can now push back on marketing and say "that can't be done" is a very positive sign. In the old days, companies that shipped insecure products got the revenues and shifted the losses onto consumers. Now they are seeing something closer to the net cost. Telling marketing "no" says we're getting the incentives right, that projects with net negative value for society also have net negative value for the vendor.

The question about having prizes for bugs is really interesting. You could imagine that the expected payout per bug would be smaller than the $2K/person/day that MS pays out for finding bugs Lackey's way. I was particularly puzzled by the idea that paying people prizes for finding bugs is contrary to public policy. When you get past the "giving in to terrorists" rhetoric, what you're really doing is either outsourcing your bug-hunting or persuading people that it is more profitable to turn their discoveries into money than to be malicious. (In the latter case the semi-ethical hackers get subsidized to discover more bugs than they otherwise would, so some of the bugs you buy might never have been discovered without a prize and are therefore wasted. But the same thing happens when you hire Lackey, so you can't really count it as a fundamental objection.) I suppose that there are also mechanical problems with prizes -- you don't want to pay prizes to guys who then turn around and do exploits anyway -- but one imagines that there are fixes for that. For example, you could make payment conditional on nobody exploiting the same bug for 90 days.
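To make the 90-day idea concrete, here is a minimal sketch of such a conditional-payout rule in C. It is purely illustrative: the struct, the field names, and the bounty_due function are invented for this example, not anything a vendor actually runs.

<pre>
#include <stdbool.h>
#include <time.h>

/* Hypothetical record for one purchased bug report (illustrative only). */
struct bug_report {
    time_t reported_at;   /* when the finder turned the bug over to the vendor */
    bool   exploited;     /* has this bug been seen exploited in the wild? */
};

/* The 90-day holdback window suggested above, in seconds. */
#define HOLDBACK_SECONDS (90.0 * 24 * 60 * 60)

/* Pay the prize only if the holdback window has elapsed with no exploit observed. */
bool bounty_due(const struct bug_report *r, time_t now)
{
    return !r->exploited && difftime(now, r->reported_at) >= HOLDBACK_SECONDS;
}
</pre>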

Finally, the continued prevalence of parsing and buffer overflow errors suggests that the industry isn't anywhere near the limits of what computer science knows how to do. Presumably, simple errors are about social institutions and incentives. The question is, what percent of the existing problem is amenable to those kinds of reforms? Using Microsoft bulletins to figure out what percent of problems are social vs. frontier CS problems would make an interesting term paper.
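For readers outside CS, here is a deliberately simplified illustration of the kind of "simple error" at issue; the function names and buffer size are made up for the example, but the pattern is the textbook buffer overflow, and the fix is not frontier computer science.

<pre>
#include <stdio.h>
#include <string.h>

/* Vulnerable: strcpy() copies until it hits a '\0', so any input longer than
 * 63 bytes overruns 'buf' on the stack and can overwrite the return address. */
void parse_request_unsafe(const char *input)
{
    char buf[64];
    strcpy(buf, input);              /* no bounds check: classic buffer overflow */
    printf("parsed: %s\n", buf);
}

/* The fix: copy at most sizeof(buf)-1 bytes and always terminate the string. */
void parse_request_safe(const char *input)
{
    char buf[64];
    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("parsed: %s\n", buf);
}
</pre>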


BotNets and Windows Operating Systems

Chris Fleizach - We haven't heard a great deal about BotNets in this class, but I think their threat potential deserves attention. A few reports have come out from industries, like online gambling (usually operating outside the US and its legal protections), that have detailed how they either succumbed to or defeated a targeted DDoS attack from a BotNet. Although I couldn't find a reference, there was a Wired article about an off-shore gambling company which was the target of an attack that at its peak had 3Gb/sec directed at it. I'm assuming 100% of those hosts were compromised Windows machines (otherwise separate botnet software would have to be written for each type of OS for little gain). I was curious whether Microsoft thought they should take an active role in stopping the BotNet problem, or whether they were responsible at all. In one sense, the software controlling the computer for the BotNet was allowed to be installed by the user, and it must do what it needs to do; the system assumes the user installed it and runs it with the user's privileges. Many times, social engineering can fool a user into installing such software without exploiting any vulnerabilities inherent in the OS. The first speaker mentioned that they would continue to improve their spyware and adware detectors, frequently sending updates that can find new engines. The most obvious problem with this approach is that an enterprising hacker can write his own BotNet controller that Microsoft won't know about.
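That limitation is easy to see in a toy sketch of signature-based scanning. This is not Microsoft's actual detection engine; the signature strings and samples are invented. The point is only that a scanner can flag byte patterns it has already been told about, so a freshly written controller matches nothing.

<pre>
#include <stdio.h>
#include <string.h>

/* Toy signature database: patterns from bot controllers the vendor already knows. */
static const char *known_signatures[] = {
    "rbot-command-loop",
    "agobot-irc-join",
};

/* Flag a sample only if it contains a known pattern. */
int looks_like_known_bot(const char *sample)
{
    for (size_t i = 0; i < sizeof(known_signatures) / sizeof(known_signatures[0]); i++) {
        if (strstr(sample, known_signatures[i]) != NULL)
            return 1;
    }
    return 0;
}

int main(void)
{
    printf("%d\n", looks_like_known_bot("...rbot-command-loop..."));  /* 1: caught */
    printf("%d\n", looks_like_known_bot("my-homebrew-controller"));   /* 0: missed */
    return 0;
}
</pre>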

The next obvious solution is to turn on the firewall and disallow incoming connections, which would stop a BotNet controller from accessing the computer. But currently, when software is installed as a user, there is nothing stopping that software from disabling the firewall entirely, or just opening the specific port it needs. Linux and Mac OS both require a password to enter the control panel and change settings like the firewall, but Windows has never done this; access is granted based on the first login. It seems, just from a cursory examination, that preventing BotNets might start by gating access to critical system configuration behind a password. Does anyone from MS know if Vista will do more to protect against these problems? Do they have better ideas about stopping BotNets before they start? Is it their problem at all?
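To make the firewall point concrete: on Windows XP SP2 the firewall can be reconfigured from the command line with netsh, and since no password gates that configuration, any program running as the logged-in (often administrator) user can simply shell out and change it. The sketch below assumes the XP-era "netsh firewall" syntax; the port number and rule name are invented for illustration.

<pre>
#include <stdlib.h>

int main(void)
{
    /* Option 1: turn the firewall off entirely. */
    system("netsh firewall set opmode disable");

    /* Option 2: quieter -- open only the port the controller listens on. */
    system("netsh firewall add portopening TCP 6667 \"Windows Update Helper\"");

    return 0;
}
</pre>

Requiring a password or an explicit prompt before such a change, as Linux and Mac OS do, is exactly the gap described above.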