Talk:Lecture 11

From CyberSecurity

Revision as of 01:18, 12 November 2005

--Gorchard 09:47, 10 November 2005 (PST) For the first time, I came out of the lectures last night with a sense of optimism. I thought the first two speakers especially (Dave Aucsmith and Steve Gribble) painted a picture that managing nefarious internet activity is possible and already well under way. We seem to have a pretty good understanding of how these guys operate, and it's comforting to know that the people making the attacks are not actually clever enough to discover the vulnerabilities themselves. They also make mistakes like URL typos and allowing themselves to be tracked down through Watson reports. It seems we're not fighting a losing battle. I also found it reassuring to hear Steve Gribble's spyware statistics - that only a small percentage of spyware programs actually do really bad things like keystroke monitoring and calling expensive toll numbers, while most spyware busies itself with 'harmless' activities such as pop-up ads or browser hijacking.

--Liebling 17:07, 10 November 2005 (PST) Agree with that ... the concept of developing software with an adversary in mind is something that seemed novel even though I've been through hours of security training at Microsoft. Sure, we do threat modeling, but it's really at the design and implementation stages where changes have to be made. Using the historical perspective allows us to project at least into the near future; i.e. what's after the application services layer? The data itself?

Drew Hoskins Here is an interesting take on the "10 worst bugs in history". Naturally, the first three aren't security-related, but the security-related ones start to pick up in 1988. It's interesting that they chose some of the older internet worms rather than newer ones like Sasser and Blaster; they are putting the emphasis on how seminal an exploit was.
The "AT&T Network Outage" is an interesting example of exponential growth that we keep encountering with nuclear, biological, and cybercrime attacks.
The other interesting one is the "Kerberos Random Number Generator" which illustrates how far the hacking community has come; there's no way this type of exploit would be left untouched now.
http://wired.com/news/technology/bugs/0,2924,69355,00.html?tw=wn_tophead_1

--Gmusick 21:05, 10 November 2005 (PST) That's funny. I had the opposite reaction and felt the problem was even worse and more intractable than I had believed. The fact that our systems cannot even stop adware from being deployed means we are effectively helpless against a highly motivated, expert hacker. Look at it this way: most criminals in the physical world get caught because they did something stupid, like driving with a busted tail light. They get pulled over and the cops run a criminal background check on them for priors or warrants.

Apparently things are no better in cyberspace where we have to wait for them to do something stupid before we can start back-tracking them. And we, the white hats, can't get ahead of them with technology because we don't have the same motivation they do (in general). The only good news was that cybercriminals seem to do as many stupid things to give themselves away as regular criminals.

In theory we control the platform and that should give us an advantage. But the reality is that with companies afraid to alienate customers by cutting off insecure legacy programs, we will always have backdoors that are one or two generations behind the state-of-the-art cracking tools.


Re: our discussion on Sony's DRM Rootkit

Eiman Zolfaghari There's a Slashdot article saying that someone has already written a trojan using Sony's DRM rootkit. I believe Dave Aucsmith predicted this in his lecture, and yep, he was right. It's only a matter of time. Good thing this DRM software is not widely installed.
Here's the link:
http://it.slashdot.org/it/05/11/10/1615239.shtml?tid=172&tid=233

Disassembled Code and Trusted Platforms

Chris Fleizach - The timeline of events that Dave Aucsmith presented showed it can be a matter of hours after a patch is deployed before it's reverse engineered and exploited. This is only possible because of the quality of the reverse-engineering tools the various groups possess, since wading through miles of assembly code by hand is something no one wants to do anymore. A lot of software security, especially registration numbers, has relied on the inability to turn binary code back into human-readable, high-level language code. But from the talk, I gathered that there are now sophisticated tools which can come close enough that the valuable information can be recovered from the binary. The reason these groups can exploit the problems is that they have access to the underlying code that exhibits the problem. The week before, one of the Microsoft presenters mentioned that separating the owner of the computer platform from the administrator was a field that deserved attention. This seems like a far-fetched idea for PCs, but a recent trend has been the move to web services, where you don't own the processor that is running "your" application. Under the same exploit timeline, a vulnerability might be discovered and reported, but when it's patched, it's patched in the only place where the software is actually running. There's no time to create exploits; indeed, there is only one copy, in one location, running that software. For example, if you can only use Outlook Express through the web, and another email virus like Melissa comes out, Microsoft fixes their copy and the problem is solved. This removes the whole "application" layer from attackers. To extend the idea further, what if your whole OS ran on the web?
A group would now have to find a new vulnerability, create an exploit themselves, and then contend with all the extra security measures Microsoft/Google/Yahoo employ, which was shown to be a very unlikely scenario. This model wouldn't solve every security problem, but it would close the door on a lot of them.

Lecture 11: Thoughts & Questions ...

Microsoft presenters, particularly Mr. Aucsmith: is there any profile of the "typical/average" adversary -- age, education, country of origin, etc.?

Mr. Aucsmith, how prevalent are state-sponsored cyber attacks? Do most countries -- those that have the ability -- engage in them? Is there some sort of cyber war going on that the public doesn't know about -- countries constantly building security around their own networks while attacking other countries' important networks?

Mr. Aucsmith, do you have an opinion on the former Sandia Nat'l Labs employee who was fired for tracking cyber attackers back to the Chinese mainland?

Mr. Aucsmith, you briefly mentioned decision theory and the "OODA" loop with respect to the importance of relative speed vis-à-vis belligerents; could you discuss that a bit more? Could you provide a useful reference on the subject -- decision theory, that is?

Microsoft presenters, do you take affirmative steps to counter the adversaries you track? It seems like you have quite a bit of info on several of your more capable adversaries; do you actively track them and look for points where you can pester them or bombard their systems with attacks, or not?

Mr. Aucsmith, sort of going back to the profile question: where do the authors and ring leaders involved with organized crime rings that engage in cyber activity come from? Are these guys mostly Eastern European, or are they from elsewhere and operating out of Eastern Europe because it is easier to find refuge there? Are they from the US, Western Europe, Asia, etc. and simply operating from Eastern Europe, and/or are the rings usually international in nature (an author from Latin America, a leader from France, etc.)?

Mr. Aucsmith, it seems like the Mutual Legal Assistance Treaty is woefully inadequate. Are alternatives being floated? Are people/committees/the UN/countries working to improve it, and if so, what do those efforts look like? What sort of regime would you envision, ideally?

Mr. Aucsmith, is there any push to place sanctions on states like Chad that do not have cyber crimes legislation in place? Beyond Chad, are there any other primary culprits?

Mr. Aucsmith, Professor Gribble, and/or Professor Maurer, do you believe that organizations like Comedy Central, which attach spyware-type items to your system when you visit or download something from their site, face potential liability for doing so? If not statutorily, could negligence or trespass from tort come into play?


EULAs and Spyware

Chris Fleizach - After hearing from the presenters how deviously spyware can get onto your computer, we may overlook the fact that a lot of malware gets onto a user's computer because they agree to let it on. They are blindly clicking Proceed, but they are probably doing so because they have no desire to read a EULA. How many people have ever waded through pages and pages of legalese when they just wanted to download a Comedy Central joke-of-the-day program? Again, it comes down to what Butler Lampson called the biggest problem: trust. We usually trust large organizations, i.e. Comedy Central, but more generally we trust that when we download Kazaa we're going to get a file-sharing program. I doubt many people consider that the main reason to create the file-sharing program might be to push malware onto your computer, with your agreement, such as it is, when you click OK on the EULA. I think the first step might be to require companies to provide a brief summary of what their software does and what it will install. People will be more inclined to read two coherent sentences than a EULA. The Sony DRM rootkit has been mentioned already, but apparently the Mac version asks for your permission to alter your kernel, which you would know if you read everything in the EULA. Instead, I imagine, most people trust Sony that they're getting music, not insidious software.