Talk:Lecture 11

Revision as of 23:15, 13 November 2005

Overall sentiments about the Lecture

--Gorchard 09:47, 10 November 2005 (PST) For the first time, I came out of the lectures last night with a sense of optimism. I thought the first two speakers especially (Dave Aucsmith and Steve Gribble) painted a picture that managing nefarious internet activity is possible and already well under way. We seem to have a pretty good understanding of how these guys operate, and it's comforting to know that the people making the attacks are not actually clever enough to discover the vulnerabilities themselves. They also make mistakes like URL typos and allowing themselves to be tracked down through Watson reports. It seems we're not fighting a losing battle. I also found it reassuring to hear Steve Gribble's spyware statistics - that only a small percentage of spyware programs actually do really bad things like keystroke monitoring and calling expensive toll numbers, while most spyware busies itself with 'harmless' activities such as pop-up ads or browser hijacking.

--Liebling 17:07, 10 November 2005 (PST) Agree with that ... the concept of developing software with an adversary in mind is something that seemed novel even though I've been through hours of security training at Microsoft. Sure, we do threat modeling, but it's really at the design and implementation stages where changes have to be made. Using the historical perspective allows us to project at least into the near future; i.e. what's after the application services layer? The data itself?

Drew Hoskins Here is an interesting take on the "10 worst bugs in history". Naturally, the first three aren't security-related, but security-related bugs start to pick up in 1988. It's interesting that they chose some of the older internet worms rather than newer ones like Sasser and Blaster; they are putting emphasis on how seminal an exploit is.
The "AT&T Network Outage" is an interesting example of exponential growth that we keep encountering with nuclear, biological, and cybercrime attacks.
The other interesting one is the "Kerberos Random Number Generator" which illustrates how far the hacking community has come; there's no way this type of exploit would be left untouched now.
http://wired.com/news/technology/bugs/0,2924,69355,00.html?tw=wn_tophead_1

--Gmusick 21:05, 10 November 2005 (PST) That's funny. I had the opposite reaction and felt the problem was even worse and more intractable than I had believed. The fact that our systems cannot even stop adware from being deployed means we are effectively helpless against a highly motivated, expert hacker. Look at it this way: most criminals in the physical world get caught because they did something stupid, like driving with a busted tail light. They get pulled over and the cops run a criminal background check on them for priors or warrants.

Apparently things are no better in cyberspace where we have to wait for them to do something stupid before we can start back-tracking them. And we, the white hats, can't get ahead of them with technology because we don't have the same motivation they do (in general). The only good news was that cybercriminals seem to do as many stupid things to give themselves away as regular criminals.

In theory we control the platform and that should give us an advantage. But the reality is that with companies afraid to alienate customers by cutting off insecure legacy programs, we will always have backdoors that are one or two generations behind the state-of-the-art cracking tools.


Re: our discussion on Sony's DRM Rootkit

Eiman Zolfaghari There's a Slashdot article saying that someone has already written a trojan using Sony's DRM rootkit. I believe Dave Aucsmith predicted this in his lecture, and yep, he was right. It's only a matter of time. Good thing this DRM software is not widely installed.
Here's the link:
http://it.slashdot.org/it/05/11/10/1615239.shtml?tid=172&tid=233

Avichal 09:51, 13 November 2005 (PST) What struck me the most was the part where David Aucsmith highlighted the fact that attacks have simply moved up the stack as the base layers have been strengthened. That trend is likely to continue. As Microsoft and other companies get ready to deploy a new web services paradigm, that is probably where most of the new attacks will be crafted, while the underlying OSes are expected to keep getting stronger. In my opinion it will take some disruptive development in the security arena to break this cycle.

I found the Red/Green idea presented by Butler Lampson to be the most fascinating. A couple of folks mentioned how they already use that approach in their computer usage (usually by having two separate computers), but I started thinking about other (non-IT) scenarios where we already use the same idea. I could only think of a few loose ones:

  • Credit cards: use a separate credit card for transactions we may not fully trust. Some companies even let you generate temporary credit card numbers.
  • TV: kids' TV has certain channels blocked, whereas all channels are available to adults.

Given the simplicity of the idea, I had expected to find some 'eureka' examples we already use in real life and go "Duh, sure, we've been doing that for ages." But I suppose that even though the idea is simple, the approach is still novel, probably because the problems the internet presents have never really been faced in another setting. Can anyone else think of a good example of a non-IT red/green approach?

Disassembled Code and Trusted Platforms

Chris Fleizach - The timeline of events that Dave Aucsmith presented showed it can be a matter of hours after a patch is deployed before it's reverse-engineered and exploited. This is only possible because of the quality of the reverse-engineering tools the various groups possess, since wading through miles of assembly code by hand is something no one wants to do anymore. A lot of software security, especially registration numbers, has relied on the inability to turn binary code back into human-readable, high-level language code. But from the talk I gathered that there are now sophisticated tools which can come close enough that the valuable information can be recovered from the binary. The reason these groups can exploit the problems is that they have access to the underlying code that contains the problem.

The week before, one of the Microsoft presenters mentioned that separating the owner of the computer platform from its administrator was a field that deserved attention. This seems like a far-fetched idea for PCs, but a recent trend has been the move to web services, where you don't own the processor that is running "your" application. Under the same exploit timeline, a vulnerability might be discovered and reported, but when it is patched, it is patched only in the one place where the code actually runs. There is no time to create exploits; indeed, there is only one copy, in one location, running that software. For example, if you can only use Outlook Express through the web and another email virus like Melissa comes out, Microsoft fixes their copy and the problem is solved. This removes the whole "application" layer from attackers. To extend the idea further, what if your whole OS ran on the web? A group would then have to find a new vulnerability, create an exploit themselves, and contend with all the extra security measures Microsoft/Google/Yahoo employed, which was shown to be a very unlikely situation. This model wouldn't solve every security problem, but it would close the door on a lot of them.

Lecture 11: Thoughts & Questions ...

Microsoft presenters, particularly Mr. Aucsmith, is there any profile of the "typical/average" adversary -- age, education, country of origin, etc.?

Mr. Aucsmith, how prevalent are state-sponsored cyber attacks? Do most countries -- those that have the ability -- engage in them? Is there some sort of cyber war going on that the public doesn't know about -- countries constantly building up security around their own important networks while attacking others'?

Mr. Aucsmith, do you have an opinion on the former Sandia Nat'l Labs employee who was fired for tracking cyber attackers back to the Chinese mainland?

Mr. Aucsmith, you briefly mentioned decision theory and the "OODA" loop with respect to the importance of relative speed vis-a-vis belligerents; could you discuss that a bit more? Could you provide a useful reference on the subject -- decision theory, that is?

Microsoft presenters, do you take affirmative steps to counter the adversaries you track? It seems like you have quite a bit of info on several of your more capable adversaries; do you actively track them and look for points where you can pester them or hit their systems with attacks of your own, or not?

Mr. Aucsmith, sort of going back to the profile question, where do the authors and ring leaders involved with organized crime rings that engage in cyber activity come from -- are these guys mostly Eastern European, or are they from elsewhere and operating from Eastern Europe because it is easier to find refuge there? Are they from the US, Western Europe, Asia, etc. and simply operating out of Eastern Europe, and/or are they usually international in nature (an author from Latin America, a leader from France, etc.)?

Mr. Aucsmith, it seems like the Mutual Legal Assistance Treaty is woefully inadequate. Are alternatives being floated? Are people/committees/the UN/countries working to improve it, and if so, what do those efforts look like? What sort of regime would you envision, ideally?

Mr. Aucsmith, is there any push to place sanctions on states like Chad that do not have cyber crimes legislation in place? Beyond Chad, are there any other primary culprits?

Mr. Aucsmith, Professor Gribble, and/or Professor Maurer, do you believe that organizations like Comedy Central, which attach spyware-type items to your system when you visit or download something from their site, face potential liability for doing so? If not statutorily, could negligence or trespass claims from tort law come into play?


EULAs and Spyware

Chris Fleizach - After hearing from the presenters how deviously spyware can get onto your computer, we may overlook the fact that a lot of malware gets onto a user's computer because the user agrees to let it on. People blindly click Proceed, but they probably do so because they have no desire to read a EULA. How many people have ever waded through pages and pages of legalese just to download a Comedy Central joke-of-the-day program? Again, it comes down to what Butler Lampson mentioned was the biggest problem: trust. We usually trust large organizations, e.g. Comedy Central, but more generally we trust that when we download Kazaa we are going to get a file-sharing program. Few people suspect that the main reason for creating the file-sharing program is to push malware onto their computer, even though they technically agree to it when they click OK on the EULA. I think the first step might be to require companies to provide a brief summary of what their software does and what it will install; people are more inclined to read two coherent sentences than a EULA. The Sony DRM rootkit has been mentioned already, but apparently the Mac version asks for your permission to alter your kernel, which you would know if you read everything in the EULA. Instead, I imagine, most people trust Sony that they're getting music, not insidious software.
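
To picture what that requirement might look like, here is a purely hypothetical Python sketch of a plain-language disclosure an installer could be made to show before the legalese (the program name and every field are invented for illustration; this is not any real installer's format):

# Hypothetical "plain-language disclosure" an installer could be required to
# show before the full EULA. Program name and field names are invented.
import json

DISCLOSURE = {
    "program": "Joke-of-the-Day Toolbar",   # made-up example program
    "what_it_does": "Shows one joke per day in a browser toolbar.",
    "also_installs": ["advertising pop-up component", "usage tracker"],
    "phones_home": True,
    "removable_via_add_remove_programs": False,
}

def show_summary(d: dict) -> None:
    """Print the two coherent sentences people might actually read."""
    print(f"{d['program']}: {d['what_it_does']}")
    extras = ", ".join(d["also_installs"]) or "nothing else"
    print(f"It will also install: {extras}.")

if __name__ == "__main__":
    show_summary(DISCLOSURE)
    print(json.dumps(DISCLOSURE, indent=2))  # machine-readable form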


Red and Green e-mail

Jack Menzel In Butler Lampson's talk about partitioning Windows into red and green zones, he briefly and somewhat jokingly mentioned that for e-mail to work in this system you would need two separate Outlook windows running, one for each zone. I actually didn't think this was a strange idea at all; after all, I do this already. I have three levels of e-mail addresses that I use: one for garbage website registrations and communication with people I don't trust, one for personal use, and one for work. Do other folks in the class not do the same?


Pravin Mittal

I agree with you and actually do the same. I have two computers at home: one where I keep all my private data, which I keep fully up to date and use only with a few trusted internet sites, and another that I use for random surfing, where I don't care what happens to the machine.

In my personal opinion, Lampson's solution is the most viable way to create a safe internet. I also agree that, whether out of heart or ego, as software engineers we think we will get all the bugs out of our software one day. In my experience it is next to impossible to ship a product without bugs; they will always be there.

I am also really struck by how simple yet elegant and practical a solution it is. Does anyone share my sentiment? (Botnets are one loose thread that still needs to be tied up!)

Asad Jawahar I also liked the idea, and in fact I already use multiple email addresses and computers for trusted vs. untrusted uses. However, the user experience leaves much to be desired when I want to share data or programs between these machines; I think this point was also made in the lecture. For this technique to get wide adoption we need to improve the user experience, for example people want to copy/paste across machines. The implementation of this solution also comes at a cost, for example having to manage multiple accounts or computers, hence we need to look at solutions like virtual machines.

Dennis Galvin 13:25, 13 November 2005 (PST)
A pesky detail with Butler Lampson's red-green divide is how to implement it in such a way that it is simpler for the user to engage in safe computing. We have all suffered from click-through fatigue at some point or another, clicking OK on baffling questions about ActiveX controls, scripts, etc. The average user just clicks through the questions without reading them. The challenge: make safe computing absolutely intuitive and require special intervention to use unsafe (red) computing. Given recent 'advances' courtesy of Sony's DRM, perhaps playing music CDs belongs on the red side.

The other issue with the red-green divide is how to allot activities between the red and green partitions. Clearly the OS could (should?) have a large role here, prescribing for instance that during web surfing any activity which formerly required the user to click OK to a question about doing something potentially dangerous be placed on the red side, with no option. There is still an immense gray area to be negotiated between red and green.
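
To make that allotment concrete, here is a minimal, purely hypothetical Python sketch of such an OS policy (the activity names and the rule itself are invented for illustration): anything that used to trigger an "are you sure?" prompt simply lands in the red partition, with no prompt and no override.

# Hypothetical sketch of an OS-level red/green allotment policy.
# Activity names and the rule are illustrative, not a real API.

RED = "red"
GREEN = "green"

# Activities that today make the browser ask the user to click OK.
FORMERLY_PROMPTED = {
    "run_activex_control",
    "run_unsigned_script",
    "install_browser_plugin",
    "open_email_attachment",
}

def assign_zone(activity: str) -> str:
    """Anything that used to need an OK click goes to red, no questions asked."""
    return RED if activity in FORMERLY_PROMPTED else GREEN

if __name__ == "__main__":
    for act in ("read_static_webpage", "run_activex_control", "edit_local_document"):
        print(f"{act:25} -> {assign_zone(act)}")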

Avichal 15:15, 13 November 2005 (PST) Talking about implementation, yes, that's a challenge. People have talked about doing it with multiple physical computers, and the problems with sharing are obvious in that case. If you permit sharing over the network, it may open a security hole and could still be cumbersome (file synchronization hell!). If you lock it down to, say, sharing by physical media, that could still be insecure (viruses carried on a floppy from red to green) while being overly inconvenient.

In an analogous approach, Windows XP has layered user accounts - Administrator, Standard, Limited (http://www.microsoft.com/windowsxp/using/setup/getstarted/configaccount.mspx) - which is fine if the only segregation you want is something like separate accounts for yourself, your kids, and guests. But the problem is that most users would want to run logged in under an account with administrative privileges themselves, and thus would remain vulnerable.
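
As a small aside on that last point, a minimal Python sketch (assuming a Windows machine; this is just an illustration, not part of the XP account setup) can report whether the current session is running with the administrative privileges most users default to:

# Minimal sketch: detect whether the current Windows session is running
# with administrative privileges (the risky default most users choose).
import ctypes

def running_as_admin() -> bool:
    try:
        # shell32.IsUserAnAdmin() returns nonzero for elevated/admin sessions.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except (AttributeError, OSError):
        # Not on Windows (or the API is unavailable); assume non-admin.
        return False

if __name__ == "__main__":
    if running_as_admin():
        print("Running with administrative privileges - everyday tasks are exposed.")
    else:
        print("Running as a limited user - closer to the 'green' side.")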

Some other approaches we see in real life are people multi-booting their machines; usually the Windows OS is the vulnerable playbox and Linux is the stronghold :-)
A variation on that is using VMware (or similar) to run multiple virtual OS environments. The problem with that is again sharing between these environments (files, data, etc.).

A unique approach, presented by Dan Simon (Microsoft) in his paper Windowbox (http://research.microsoft.com/crypto/papers/windowbox.pdf), basically simulates multiple desktops on a single PC while allowing simple drag-and-drop sharing (with the user prompted for confirmation). This goes beyond the binary Red/Green approach by allowing multiple desktops to be defined, each with a different set of policies (see the sketch after the list below). Some examples Dan Simon suggests are:

  • Personal desktop
  • Enterprise desktop
  • Play desktop
  • Communication desktop - single point to get all your email
  • Console Desktop - for administrative access
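
Here is the rough sketch promised above, in Python (my own illustration, not Dan Simon's actual design; the desktop names and policy fields are made up): each desktop carries its own policy, and data crosses desktop boundaries only through a confirmed drag-and-drop.

# Illustrative sketch of Windowbox-style desktops, each with its own policy.
# Desktop names and policy fields are invented for this example.
from dataclasses import dataclass

@dataclass
class DesktopPolicy:
    name: str
    can_browse_web: bool
    can_run_untrusted_code: bool
    can_reach_corporate_net: bool

DESKTOPS = {
    "personal":      DesktopPolicy("personal",      True,  False, False),
    "enterprise":    DesktopPolicy("enterprise",    True,  False, True),
    "play":          DesktopPolicy("play",          True,  True,  False),
    "communication": DesktopPolicy("communication", False, False, False),
    "console":       DesktopPolicy("console",       False, False, True),
}

def drag_and_drop(item: str, src: str, dst: str, confirm=input) -> bool:
    """Move an item between desktops only after the user explicitly confirms."""
    answer = confirm(f"Move '{item}' from the {src} desktop to the {dst} desktop? [y/N] ")
    if answer.strip().lower() == "y":
        print(f"'{item}' copied into the {dst} desktop.")
        return True
    print("Transfer cancelled.")
    return False

if __name__ == "__main__":
    drag_and_drop("budget.xls", "enterprise", "personal")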

This Windowbox idea is cool, and I like it even more than a two-pronged Red/Green approach because it gives me more flexibility in defining my desktops. However, it also adds complexity. Such a solution could ship with default Red/Green desktops that any user could understand, and advanced users could then customize those into their own set of desktops.

In my earlier post in the section Overall_sentiments_about_the_Lecture, I mention how I am concerned about attacks moving up the stack and the possibility that this would continue endlessly. Dan Simon discusses this (only indirectly) and points out that Windowboxing pushes the problem domain to the OS level: any problems with applications/services and the layers above are mitigated (but not solved) by the Windowbox desktop environment. This could be the disruptive technology that I mentioned we need to stop the endless escalation of attacks up the stack. And my approach of System Restore using disk images seems embarrassingly difficult compared to Windowboxing.

"System Restore points" to tackle malware

Avichal 10:08, 13 November 2005 (PST) Although Windows provides some functionality to restore the system to particular checkpoints, I use disk imaging tools instead; in my opinion they provide more robust restore functionality. I usually surf the net with relative impunity, and if things go wrong I restore from my last disk checkpoint.
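
As a toy illustration of that checkpoint-and-restore workflow, here is a Python sketch that archives a directory tree and rolls it back later (a tarball of one directory standing in for a full disk image; the paths are made up):

# Toy sketch: snapshot a directory tree and roll it back later.
# A real setup would image the whole disk, not just one directory.
import shutil
import tarfile
from pathlib import Path

def take_checkpoint(target: Path, checkpoint: Path) -> None:
    """Archive the target directory into a compressed checkpoint file."""
    with tarfile.open(checkpoint, "w:gz") as tar:
        tar.add(str(target), arcname=target.name)

def restore_checkpoint(checkpoint: Path, restore_parent: Path) -> None:
    """Wipe the current state and unpack the checkpoint in its place."""
    with tarfile.open(checkpoint, "r:gz") as tar:
        top_level = tar.getnames()[0].split("/")[0]
        current = restore_parent / top_level
        if current.exists():
            shutil.rmtree(current)          # discard possibly infected state
        tar.extractall(restore_parent)      # put the clean checkpoint back

if __name__ == "__main__":
    docs = Path("/home/example_user/documents")            # hypothetical path
    snap = Path("/backups/documents-checkpoint.tar.gz")    # hypothetical path
    take_checkpoint(docs, snap)
    # ... surf with relative impunity, then if something looks wrong:
    restore_checkpoint(snap, docs.parent)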

There are some issues with this approach:

  • It protects my system from malware but doesn't protect my personal data in case someone does break in
  • It assumes that I will be able to detect any malware on my PC

However, I do think this approach works well for me, and it should for most other users too. It would probably also be cheaper and easier than a Red/Green approach (which I think can only really be done using two separate computers). Right now I feel limited in this approach by constraints on disk space and the time it takes to image my disk. However, given the rate at which things are changing, we are sure to get cheap and abundant disk space and possibly smarter/faster system checkpoint mechanisms.

The killer of this approach, however, will be malware designed to survive a disk re-image :-(
It could be something that resides in other circuitry, such as flash RAM.

Sony DRM

--Dennis Galvin 13:02, 13 November 2005 (PST):
As David Aucsmith predicted, it looks like Microsoft is taking action against the Sony DRM rootkit with both the Malicious Software Removal Tool (MSRT) and the Anti-Spyware beta (http://blogs.technet.com/antimalware/), according to Jason Garms' blog entry for the Anti-Malware Engineering Team. Given that most users are willing to give up any shred of secure computing (see the previous discussion about candy bars), the rootkit will keep embedding itself on users' computers between monthly MSRT scans, only to be removed again each Patch Tuesday. Sadly, Sony's response to date has been largely evasive (eweek article).