Talk:Lecture 11


--Gorchard 09:47, 10 November 2005 (PST) For the first time, I came out of the lectures last night with a sense of optimism. I thought the first two speakers especially (Dave Aucsmith and Steve Gribble) painted a picture that managing nefarious internet activity is possible and already well under way. We seem to have a pretty good understanding of how these guys operate, and it's comforting to know that the people making the attacks are not actually clever enough to discover the vulnerabilities themselves. They also make mistakes like URL typos and allowing themselves to be tracked down through Watson reports. It seems we're not fighting a losing battle. I also found it reassuring to hear Steve Gribble's spyware statistics - that only a small percentage of spyware programs actually do really bad things like keystroke monitoring and calling expensive toll numbers, while most spyware busies itself with 'harmless' activities such as pop-up ads or browser hijacking.

--Liebling 17:07, 10 November 2005 (PST) Agree with that ... the concept of developing software with an adversary in mind is something that seemed novel even though I've been through hours of security training at Microsoft. Sure, we do threat modeling, but it's really at the design and implementation stages where changes have to be made. Using the historical perspective allows us to project at least into the near future: what's after the application services layer? The data itself?

Drew Hoskins Here is an interesting take on the "10 worst bugs in history". Naturally, the first three aren't security-related, but the security bugs start to pick up in 1988. It's interesting that they chose some of the older internet worms rather than newer ones like Sasser and Blaster; they are putting the emphasis on how seminal an exploit was.
The "AT&T Network Outage" is an interesting example of the exponential growth that we keep encountering with nuclear, biological, and cybercrime attacks (a quick sketch of that growth pattern follows the link below).
The other interesting one is the "Kerberos Random Number Generator", which illustrates how far the hacking community has come; there's no way this type of exploit would be left untouched now.
http://wired.com/news/technology/bugs/0,2924,69355,00.html?tw=wn_tophead_1
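
As a back-of-the-envelope illustration of that exponential-growth point (the numbers here are hypothetical, not taken from the Wired article): if every compromised host goes on to compromise a couple of new hosts per round, the count explodes within a handful of rounds.

<pre>
# Minimal sketch with hypothetical numbers: how a self-propagating worm's
# footprint grows when each infected host compromises a few new hosts per
# round -- the "exponential growth" pattern mentioned above.

def infected_after(rounds, initial=1, spread_rate=2):
    """Number of infected hosts after `rounds` rounds, assuming every
    infected host compromises `spread_rate` new hosts per round and
    nothing ever gets cleaned up."""
    infected = initial
    for _ in range(rounds):
        infected += infected * spread_rate
    return infected

for r in range(0, 11, 2):
    print(f"after {r:2d} rounds: {infected_after(r):>8,} hosts")
</pre>

Even this toy model is near 60,000 hosts after ten rounds, which is why reaction time matters so much for worms - and arguably for the other attack classes as well.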


== Re: our discussion on Sony's DRM Rootkit ==

Eiman Zolfaghari There's a Slashdot article saying that someone has already written a trojan using Sony's DRM rootkit. I believe Dave Aucsmith predicted this in his lecture, and yep, he was right - it was only a matter of time. Good thing this DRM software is not widely installed.
Here's the link:
http://it.slashdot.org/it/05/11/10/1615239.shtml?tid=172&tid=233
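
For context on why the rootkit makes this so easy: the Sony/XCP cloaking reportedly hides any file, process, or registry key whose name starts with "$sys$", so any other program can piggyback simply by choosing a name with that prefix. A toy illustration of the idea (not the actual rootkit code, and the file names here are made up):

<pre>
# Toy model of name-prefix cloaking: simulate a directory listing as seen
# through a filter that silently drops anything starting with "$sys$".
# Any malware that names its file with the same prefix gets hidden for free.

CLOAK_PREFIX = "$sys$"

def visible_listing(real_files):
    """Return the listing a user would see through the cloaking filter."""
    return [name for name in real_files if not name.startswith(CLOAK_PREFIX)]

real_files = ["report.doc", "$sys$DRMServer.exe", "$sys$evil_trojan.exe"]
print(visible_listing(real_files))   # -> ['report.doc']; both cloaked files vanish
</pre>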

== Disassembled Code and Trusted Platforms ==

Chris Fleizach - The timeline of events that Dave Aucsmith presented showed that it can be a matter of hours after a patch is deployed before it's reverse engineered and exploited. This is only possible because of the quality of the reverse-engineering tools the various groups possess, since wading through miles of assembly code by hand is something no one wants to do anymore. A lot of software security, especially registration numbers, has relied on the inability to turn binary code back into human-readable, high-level language code. But from the talk, I gathered that there are now sophisticated tools which come close enough that the valuable information can be recovered from the binary. The reason these groups can exploit the problems is that they have access to the underlying code that describes the problem. The week before, one of the Microsoft presenters mentioned that separating the owner of the computer platform from its administrator was a field that deserved attention. This seems like a far-fetched idea for PCs, but a recent trend has been the move to web services, where you don't own the processor that is running "your" application. Under the same exploit timeline, a vulnerability might be discovered and reported, but when it's patched, it's patched in the only place where the software is actually running. There's no time to create exploits; indeed, there is only one copy, in one location, running that software. For example, if you can only use Outlook Express through the web, and another email virus like Melissa comes out, Microsoft fixes their copy and the problem is solved. This removes the whole "application" layer from attackers. To extend the idea further, what if your whole OS ran on the web? A group would then have to find a new vulnerability and create an exploit themselves, and then contend with all the extra security measures Microsoft/Google/Yahoo employ - which was shown in the lecture to be a very unlikely situation. This model wouldn't solve every security problem, but it would close the door to a lot of them.
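
A rough sketch of the patch-diffing idea behind that exploit timeline: compare the unpatched and patched builds function by function and flag whatever changed, since the changed code usually points straight at the fixed vulnerability. Real reverse-engineering tools work on disassembly and control-flow graphs rather than raw bytes; the function table and byte layout below are made up purely for illustration.

<pre>
import hashlib

def function_hashes(image, function_table):
    """Map each named function to a hash of its byte range in the binary image."""
    return {
        name: hashlib.sha256(image[start:end]).hexdigest()
        for name, (start, end) in function_table.items()
    }

def changed_functions(old_image, new_image, function_table):
    """Return the functions whose bytes differ between the two builds --
    the first places an attacker would look for the fixed vulnerability."""
    old = function_hashes(old_image, function_table)
    new = function_hashes(new_image, function_table)
    return [name for name in function_table if old[name] != new[name]]

# Hypothetical layout: two tiny "binaries" in which one function was patched.
table = {"parse_header": (0, 8), "copy_payload": (8, 16)}
unpatched = b"\x90" * 16
patched   = b"\x90" * 8 + b"\xCC" * 8   # the fix modified copy_payload
print(changed_functions(unpatched, patched, table))   # -> ['copy_payload']
</pre>

The interesting point in the web-services argument above is that this whole diffing step disappears when there is no patched binary to hand out in the first place.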