Talk:Lecture 11


Overall sentiments about the Lecture

--Gorchard 09:47, 10 November 2005 (PST) For the first time, I came out of the lectures last night with a sense of optimism. I thought the first two speakers especially (Dave Aucsmith and Steve Gribble) painted a picture that managing nefarious internet activity is possible and already well under way. We seem to have a pretty good understanding of how these guys operate, and it's comforting to know that the people making the attacks are not actually clever enough to discover the vulnerabilities themselves. They also make mistakes like URL typos and allowing themselves to be tracked down through Watson reports. It seems we're not fighting a losing battle. I also found it reassuring to hear Steve Gribble's spyware statistics - that only a small percentage of spyware programs actually do really bad things like keystroke monitoring and calling expensive toll numbers, while most spyware busies itself with 'harmless' activities such as pop-up ads or browser hijacking.

--Liebling 17:07, 10 November 2005 (PST) Agree with that ... the concept of developing software with an adversary in mind is something that seemed novel even though I've been through hours of security training at Microsoft. Sure, we do threat modeling, but it's really at the design and implementation stages where changes have to be made. Using the historical perspective allows us to project at least into the near future; i.e. what's after the application services layer? The data itself?

Drew Hoskins Here is an interesting take on the "10 worst bugs in history". Naturally, the first three aren't security-related, but the security-related ones start to pick up in 1988. It's interesting that they chose some of the older internet worms rather than newer ones like Sasser and Blaster; they are putting the emphasis on how seminal an exploit was.
The "AT&T Network Outage" is an interesting example of exponential growth that we keep encountering with nuclear, biological, and cybercrime attacks.
The other interesting one is the "Kerberos Random Number Generator" which illustrates how far the hacking community has come; there's no way this type of exploit would be left untouched now.
http://wired.com/news/technology/bugs/0,2924,69355,00.html?tw=wn_tophead_1

--Gmusick 21:05, 10 November 2005 (PST) That's funny. I had the opposite reaction and felt the problem was even worse and more intractable than I had believed. The fact that our systems cannot even stop adware from being deployed means we are effectively helpless against a highly motivated, expert hacker. Look at it this way: most criminals in the physical world get caught because they did something stupid, like driving with a busted tail light. They get pulled over and the cops run a criminal background check on them for priors or warrants.

Apparently things are no better in cyberspace where we have to wait for them to do something stupid before we can start back-tracking them. And we, the white hats, can't get ahead of them with technology because we don't have the same motivation they do (in general). The only good news was that cybercriminals seem to do as many stupid things to give themselves away as regular criminals.

In theory we control the platform and that should give us an advantage. But the reality is that with companies afraid to alienate customers by cutting off insecure legacy programs, we will always have backdoors that are one or two generations behind the state-of-the-art cracking tools.

Katie Veazey: I came out of the lectures feeling as though it would be very difficult for a mass terrorist attack on computers to occur over a long period of time. I definitely think terrorists could do some immediate damage on a computer system, but people at places like Microsoft seem to be very skilled at figuring out bugs and could quickly shut down the attack. One of the most interesting points of the evening though was the quote by Mr. Lampson that "real world security is about punishments and deterrence, not locks." So although we may be developing software and security measures, something in the policy realm needs to be done soon to deter people from committing internet crimes. Laws and punishments cannot be created, though, without definitions of what exactly spyware, bots, etc. are (I'm not that informed on how things differ, since I'm a policy student), but without a group sitting down and attempting to distinguish legal from illegal, it looks as though attacks will continue to occur with little accountability.

Jeff Davis I agree with musick-- this is not a problem you solve and then it goes away. The very nature of computing makes it a continuously escalating war. We can make things more difficult for the bad guys, we can plug the holes, but in the end we just raise the bar a little higher. The only way to stop it is to build a system that costs more to compromise than can be gained by compromising it, but then nobody would probably want to buy it for legitimate use either.

Jeff Bilger 3:00 PM, 11/16/05 I have to agree with Gmusick as well. We've heard a lot from researchers and security experts at Microsoft regarding what they are doing to help secure their Operating Systems, but that is only one small piece of a much larger puzzle. I'm more concerned about all the other companies that write software for the Windows platform. Do these companies have the money to spend on security like MS does? I seriously doubt it. Moreover, with the trend of black hats attacking "up the stack" and now at the applications themselves, it is only a matter of time before these hackers switch from attacking Microsoft OS vulnerabilities to something easier.

Also, let's not dismiss the intelligence and drive of these black hats. Just because they are not discovering the vulnerabilities themselves doesn't mean they can't. It's just much easier to reverse engineer the current patch and exploit that security vulnerability. They know that some percentage of people can't or don't install security updates in a timely manner.

Jim Xiao I agree that spyware is a really severe problem. It's not that spyware is always too hard to detect and remove; most of the time it's just that lots of users are not taking basic protection steps to fight it. I was fighting with spyware myself on one of my home desktops several months ago. I constantly got a security warning (probably 10 times a day) from Norton antivirus saying that a file infected with a virus had been detected and isolated. But when I ran a full scan on the computer, nothing got caught. I installed several new types of antivirus/anti-spyware software like Microsoft AntiSpyware and McAfee, but the warnings kept happening. Worried about my computer's safety, I was almost at the point of reinstalling the whole computer until I downloaded the latest Windows service pack, and the issue seems to be gone. Lots of Internet users (I believe I am one of them) either don't understand the importance of installing the latest service pack, can't remember to update regularly, or are simply too lazy to install it until they have a serious issue.

Liebling Although Vista is supposed to be really secure, I think the work on Next Generation Secure Computing Base and things like secure microkernels, the trust-no-one attitude, etc, are probably going to affect the security situation more than anything else. We've seen how miserable a failure it is to put people in charge of keeping their computers up to date. Having an entire platform that assumes things will FUBAR is probably the best way to go.

Re: our discussion on Sony's DRM Rootkit

Eiman Zolfaghari There's a Slashdot article saying that someone has already written a trojan using Sony's DRM rootkit. I believe Dave Aucsmith predicted this in his lecture, and yep, he was right. It's only a matter of time. Good thing this DRM software is not widely installed.
Here's the link:
http://it.slashdot.org/it/05/11/10/1615239.shtml?tid=172&tid=233

Avichal 09:51, 13 November 2005 (PST) What struck me the most was the part where David Aucsmith highlighted the fact that the attacks have simply moved up the stack as the base layers have been strengthened. That trend is likely to continue. As Microsoft and other companies are getting ready to deploy a new web services paradigm, that is probably where most of the new attacks will be crafted, while the underlying OSes are expected to be strengthened. In my opinion it will take some disruptive development in the security arena to break this cycle.

I found the Red/Green idea presented by Butler Lampson to be most fascinating. A couple of folks mentioned how they already use that approach in their computer usage (usually by having two separate computers), but I started thinking about other (non-IT) scenarios where we already use the same idea. I could only think of some loose ones:

  • Credit Cards: Use a separate credit card where we may not trust the transaction. Some companies even let you generate temporary credit card numbers.
  • TV: Kids' TV has channels blocked, whereas all channels are available to adults

Given the simplicity of the idea, I had expected to find some 'eureka' examples which we use in real-life, and go "Duh, sure we've been doing that for ages". But I suppose even though the idea is simple, the approach is certainly novel. Probably since the problems presented in the internet world have never been faced before in another setting. Can anyone else think of a good example of non-IT red/green approach?

Disassembled Code and Trusted Platforms

Chris Fleizach - The timeline of events that Dave Aucsmith presented showed it can be a matter of hours after a patch is deployed before it's reverse engineered and exploited. This is only possible due to the quality of reverse-engineering tools that the various groups possess, since wading through miles of assembly code by hand is something no one wants to do anymore. A lot of software security, especially registration numbers, has relied on the inability to change binary code back into human-readable, high-level language code. But from the talk, I gathered that there are now sophisticated tools which can come close enough that the valuable information can be transcribed from binary code. The reason these groups can exploit the problems is because they have access to the underlying code that describes the problem. The week before, one of the Microsoft presenters mentioned that the idea of separating the owner of the computer platform from the administrator was a field that deserved attention. This seems like a far-fetched idea for PCs, but a recent trend has been a move to web services, where you don't own the processor that is running "your" application. If we used the same exploit timeline, a vulnerability might be discovered and reported, but when it's patched, it's patched in only the place where it is actually running. There's no time to create exploits. Indeed, there is only one copy, in one location, running that software. For example, if you can only use Outlook Express through the web, and another email virus like Melissa comes out, Microsoft fixes their copy and the problem is solved. This removes the whole "application" layer from attackers. To extend the idea further, what if your whole OS ran on the web? A group would now have to find a new vulnerability, create an exploit themselves, and then contend with all the extra security measures Microsoft/Google/Yahoo employed, which was shown to be a very unlikely situation. This model wouldn't solve every security problem, but it would close the door on a lot of them.

Brian McGuire If an exploit were discovered in the service-type application, the number of people impacted would be far greater. To some extent, having a huge number of potential targets, as we do in the current model, protects a user with an above-average understanding of security issues. On the other hand, companies that publish applications as services will have to worry a lot more about security just because of the number of users that would be impacted.

Lecture 11: Thoughts & Questions ...

Microsoft presenters, particularly Mr. Aucsmith: is there any profile of the "typical/average" adversary -- age, education, country of origin, etc.?

Mr. Aucsmith, how prevalent are state-sponsored cyber attacks? Do most countries -- those that have the ability -- engage in them? Is there some sort of cyber war going on that the public doesn't know about -- countries constantly building security around their own networks while attacking other countries' important networks?

Mr. Aucsmith, do you have an opinion on the former Sandia Nat'l Labs employee who was fired for tracking cyber attackers back to the Chinese mainland?

Mr. Aucsmith, you briefly mentioned decision theory and "OODA" with respect to the importance of relative speed vis-à-vis belligerents; could you discuss that a bit more? Could you provide a useful reference on the subject, decision theory that is?

Microsoft presenters, do you take affirmative steps to counter those adversaries you track? It seems like you have quite a bit of info on several of your more capable adversaries; do you actively track them and look for points where you can pester them or bombard their systems with attacks, or not?

Mr. Aucsmith, sort of going back to the profile question, where do the authors and ring leaders involved with organized crime rings that engage in cyber activity come from -- are these guys mostly E. European, or are they from elsewhere and operating from E. Europe because it is easier to find refuge there? Are they from the US, W. Europe, Asia, etc. and simply operating from E. Europe, and/or are they usually international in nature (an author from Latin America, a leader from France, etc.)?

Mr. Aucsmith, it seems like the Mutual Legal Assistance Treaty is woefully inadequate. Are alternatives being floated? Are people/committees/UN/countries working to improve it, and if so, what do they look like? What sort of regime would you envision, ideally?

Mr. Aucsmith, is there any push to place sanctions on states like Chad that do not have cyber crimes legislation in place? Beyond Chad, are there any other primary culprits?

Mr. Aucsmith, Professor Gribble, and/or Professor Maurer, do you believe that organizations like Comedy Central, which attach spyware-type items to your system when you visit or download something from their site, face potential liability for doing so? If not statutorily, could negligence or trespass from tort come into play?


EULAs and Spyware

Chris Fleizach - After hearing from the presenters how deviously spyware can get onto your computer, we may overlook the fact that a lot of malware gets onto a user's computer because they agree to let it on. They are blindly clicking Proceed, but they are probably doing so because they have no desire to read a EULA. How many people have ever waded through pages and pages of legalese if they just wanted to download a Comedy Central joke-of-the-day program? Again, it comes down to what Butler Lampson mentioned was the biggest problem, that of trust. We usually trust large organizations, e.g. Comedy Central, but more generally we trust that when we download Kazaa we're going to get a file-sharing program. The thought that the main reason for creating the file-sharing program is to push malware onto your computer probably never crosses most people's minds, even though they agree to exactly that when they click OK on the EULA. I think the first step might be to require companies to provide a brief summary of what their software does and what it will install. People will be more inclined to read two coherent sentences than a EULA. The Sony DRM rootkit has been mentioned already, but apparently the Mac version asks for your permission to alter your kernel, which you would know if you read everything in the EULA. Instead, I imagine, most people trust Sony that they're getting music, not insidious software.

Jeff Davis This is not unreasonable, but look at P3P. Have you ever looked at the P3P summary for a website you visit or submit data to? Not many people do, but it's there. Furthermore, who is going to enforce the accuracy of the summary? Just like with P3P summaries, I can simply lie; in fact, if I am a bad guy, it is already established that I have no problem lying. The EULA problem is huge because EULAs are generally illegible by design.

A better solution is to (a) only install programs you trust and (b) only install from sources you trust. For (a) my list is short: Adobe, Microsoft, Putty and Games I Buy At The Mall. For (b), download.com after reading what other people had to say and doing a google search.

If you must install random 3rd party software, *EXPECT TO GET HORKED*. If you want to install Kazaa, have a separate computer outside your firewall in a DMZ and let it get pwn3d. Those who would object that [random computer illiterate family member] can't possibly be expected to do this need to have (a) and (b) enforced upon them and be told they can have their private data or they can have cute little mouse-pointer programs, but they cannot have both.

Avichal 19:37, 14 November 2005 (PST) I generally agree with the above-mentioned points. While appropriate disclosure to users would be welcome (accurate and brief rather than 20 pages of lawyer-speak), would users care anyway (those trying to get the cool mouse cursors)? Well, I think it'll at least help some of the more aware users. Then there is the problem of how you make sure the information is indeed accurate.

However, Jeff, the part where you mention that users should only install programs they "trust": well, that's a whole can of worms in itself. Having worked with computers for a while, maybe I can make those decisions (or maybe not), but what about novice users? On a side note, I think Sony must have dropped off a lot of users' trusted lists recently :-)

Marty Lyons, UW -- One of the challenges we as computing and interface designers face is presenting sensible options to the user community. Demanding (and expecting) that someone will read a legal contract prior to installing software is almost laughable in an industry where faster, easier, cheaper to use is the standard expectation. Maybe we need to have IEEE or ISO standardize a "10 bullet line items" list of terms of agreement. At the same time, people have a reasonable expectation that you won't corrupt their environment -- so installing anything outside the user environment (kernel code, extensions, etc.) should be impossible by operating system policy [I know you can do this with separation of roles today with a separate administrator account, but who does this?]. Some will make the argument that this is the user's fault, but there are some things you need to protect people from in the interests of "safety". My electrical service panel is screwed shut, and while I have access to turn breakers on and off, there's been a great deal of thought and agreement in the industry about how to allow that safely. Should I be able to take off the cover and start poking at things with a screwdriver? Probably not - and if we use that analogy for user software, maybe that's where you make people agree to a long technical EULA, the same way we put big warning labels on electrical devices: "No user serviceable parts inside - danger of electrocution...". It's probably time that the software industry starts putting lawyers in with development teams, not so they can write contracts, but to have some rationalization of what's safe for the user, the company, and the industry.

Red and Green e-mail

Jack Menzel In Butler Lampson's talk about partitioning Windows into red and green zones he briefly and somewhat jokingly mentioned that for e-mail to work in this system you would need two separate Outlook windows running, one for each zone. I actually didn't think this was at all a strange idea. After all, I do this already. I actually have 3 levels of e-mail addresses that I use: one for garbage website registration and communication with people I don't trust, one for personal, and one for work. Do other folks in the class not do the same?


Pravin Mittal

I agree with you and actually do the same. I have two computers at home: one where I keep all my private data, which is always kept up to date and used only with a very few trusted internet sites, and another that I use for random surfing, where I don't care what happens to the machine.

In my personal opinion, I find Lampson's solution to be the most viable way to create a safe internet. I also agree that, deep down in our hearts (or egos), as software engineers we think we will get all the bugs out of our software one day. From my experience, it is next to impossible to ship a product without bugs; they will always be there.

I am also really taken aback by how simple yet elegant and practical this solution to the problem is. Does anyone share my sentiment? (Botnets are one loose thread that needs to be tied!!!)

Asad Jawahar I also liked the idea, and in fact I also use multiple email addresses and computers for trusted vs untrusted uses. However, the user experience leaves much to be desired when I want to share data or programs between these machines. I think this point was also made in the lecture. In order for this technique to get wide adoption we need to improve the user experience; for example, people want to do copy/paste across machines, etc. The implementation of this solution also comes at a cost, for example you have to manage multiple accounts or computers, hence we need to look at solutions like virtual machines.

-- Rob Anderson I think Butler's idea will never fly. Sure you have three email accounts, but you also move email between them effortlessly. The word 'effortless' doesn't even appear in Butler's vocabulary; the whole idea is to put an air barrier between the red and green zones, or the closest thing possible. As awful as it is, users demand features, and they demand interoperability. There's a reason drag-and-drop took off. The most popular applications today are email and web browsing, and both of them involve promiscuous interactions between your computer and the internet. All the interesting stuff will end up happening in the red zone, and the green zone will remain unused.

--Imran Ali 20:57, 16 November 2005 (PST) I agree that the user experience is a complete nightmare. However, a solution involving virtual machines may be a preferable one. The context switch using virtual machine implementations available today is not very expensive. However, the user experience may still not be optimal, but it's well on the way to becoming easier to use and will hopefully become a core security component of commercial operating systems.

Dennis Galvin 13:25, 13 November 2005 (PST)
A pesky detail with Butler Lampson's red-green divide is how to implement it in such a way as to make it simpler for the user to engage in safe computing. We have all suffered from click-through fatigue at some point or another, clicking OK to baffling questions about ActiveX controls, scripts, etc. The average user just clicks through the questions without reading them. The challenge: make safe computing absolutely intuitive and require special intervention to utilize unsafe (red) computing. Given recent 'advances' courtesy of Sony in DRM, perhaps playing music CDs belongs on the red side.

The other issue with the red-green divide is how to allot activities between red and green partitions. Clearly the OS could (should?) have a large role here, prescribing, for instance, that during web surfing any activity which formerly required the user to click OK to questions about doing something bad be placed on the red side, with no option. There is still an immense gray area to be negotiated between red and green.

Avichal 15:15, 13 November 2005 (PST) Talking about implementation, yes, that's a challenge. Users have talked about doing it using multiple physical computers; the problems with sharing are obvious in such a case. If you permit sharing over the network, it may open a security hole and might still be cumbersome (file synchronization hell!). If you lock it down to, say, sharing by physical media, that could still be insecure (viruses carried on a floppy from red to green) while being overly inconvenient.

In an analogous approach, Windows XP has layered user accounts - Administrator, Standard, Limited [1]. That is fine if the only segregation you want is something like an account for yourself, your kids, and guests. But the problem is that most users would themselves want to run logged in to an account with administrative privileges and thus would remain vulnerable.

Some other approaches we see in real life are people multi-booting their machines; usually the Windows OS is the vulnerable playbox and Linux is the stronghold :-)
A variation on that is using VMWare (or the like) to simulate multiple virtual OS environments. The problem with that is again sharing between these environments (files, data, etc.).

A unique approach presented by Dan Simon (Microsoft) in his paper is Windowbox. It basically simulates multiple desktops on a single PC, while allowing simple drag-and-drop functionality for sharing (and the user is prompted for confirmation). This goes beyond the binary Red/Green approach by allowing multiple desktops to be defined with different sets of policies. Some examples that Dan Simon suggests are:

  • Personal desktop
  • Enterprise desktop
  • Play desktop
  • Communication desktop - single point to get all your email
  • Console Desktop - for administrative access

This is cool, and I like this even more than a two-pronged Red/Green approach as it gives me more flexibility in defining my desktops. However, it also adds to the complexity. But such a solution could come with default Red/Green desktops to begin with, which any user could understand, and advanced users could then set up customized desktops.
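
To make the idea concrete, here is a minimal sketch of what per-desktop policies and the confirmation-gated sharing might look like. This is purely illustrative: the class names, fields, and policy values are my own assumptions, not Windowbox's actual design or API.

  from dataclasses import dataclass

  @dataclass
  class DesktopPolicy:
      name: str
      can_install_software: bool          # may the user add new programs here?
      network_access: str                 # "full", "enterprise-only", or "none"
      share_requires_confirmation: bool   # prompt before data leaves this desktop

  # Desktops loosely mirroring Dan Simon's examples above (policy values are guesses)
  DESKTOPS = {
      "personal":      DesktopPolicy("Personal", False, "full", True),
      "enterprise":    DesktopPolicy("Enterprise", False, "enterprise-only", True),
      "play":          DesktopPolicy("Play", True, "full", True),
      "communication": DesktopPolicy("Communication", False, "full", True),
      "console":       DesktopPolicy("Console", True, "none", True),
  }

  def drag_and_drop(src, dst, item, user_confirms):
      """Allow a cross-desktop copy only after the user confirms it,
      mirroring the prompt Windowbox is described as showing."""
      policy = DESKTOPS[src]
      if policy.share_requires_confirmation and not user_confirms(
              "Copy '%s' from %s to %s?" % (item, policy.name, DESKTOPS[dst].name)):
          return False
      return True

  # Example: a paste from the Play desktop into Personal triggers a prompt
  allowed = drag_and_drop("play", "personal", "cool_cursor.exe",
                          user_confirms=lambda msg: input(msg + " [y/N] ") == "y")
  print("transfer allowed" if allowed else "transfer blocked")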

In my earlier post in section Overall_sentiments_about_the_Lecture, I mention how I am concerned about the attacks moving up the stack and the possibility that it would continue endlessly that way. Dan Simon discusses this (only indirectly) and points out that Windowboxing pushes the problem domain down to the OS level. Any problem with applications/services and the layers above is mitigated (but not solved) by the Windowbox desktop environment. This could be the disruptive technology that I mentioned we need to stop the endless escalation of attacks up the stack. And my approach of System Restore using disk images seems embarrassingly difficult compared to Windowboxing.

Dennis Galvin 22:24, 13 November 2005 (PST):
Responding to your point about layering of user privilege levels: I fully agree the tools are there and provided by the OS, but in the real world most Windows users operate with Administrative or Power User privilege. So much available software requires administrative privilege to run in Windows XP that the layered account levels have coalesced into one 'Administrator' layer. Microsoft has identified this as a significant problem to be addressed with Vista (formerly Longhorn): MSDN article. In the article it states "... but 70 percent of all software won't run properly unless the user is an administrator and that's an optimistic number."

Let's for the moment assume that the user privilege problem above is resolved and 100 percent of users are signed on with normal user accounts. Now we can get back to letting users make decisions again about what is safe and not safe (remember, these individuals will give away their password for a candy bar). Clearly, when the user inserts their Van Zant CD and it requests administrative privilege to install the software to listen to the music they bought, they are going to either enter the Administrator password or switch to the administrator account to facilitate Sony BMG surreptitiously installing a rootkit... after all, "I bought the CD, and I have a right to listen to it."

Butler Lampson implied that, in the future, John Q. Public might buy computer administration much the same way we buy anti-virus products today. The third-party administrative software could maintain a list of safe applications and ONLY allow those designated "safe" to run on the green side, with everything else operating on the red side. There are trust issues to resolve to implement this.
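
As a rough illustration of that "safe list" idea, here is a minimal sketch of how such third-party administrative software might decide which side a program runs on. The names and the empty hash list are made up for illustration; a real product would need a signed, regularly updated list and OS enforcement behind it.

  import hashlib

  def sha256_of(path):
      """Fingerprint a program binary by hashing its contents."""
      with open(path, "rb") as f:
          return hashlib.sha256(f.read()).hexdigest()

  # Hypothetical allow-list the administrator ships and updates; in this
  # sketch it starts empty, so everything defaults to the red side.
  SAFE_HASHES = set()

  def zone_for(program_path):
      """Return 'green' only for binaries whose hash is on the safe list;
      everything else is relegated to the red side."""
      return "green" if sha256_of(program_path) in SAFE_HASHES else "red"

  # Example (hypothetical file name): an unvetted download ends up on the red side
  # print(zone_for("cool_cursor_installer.exe"))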

Jeff Davis Heheheh, you guys are going to love IE7. Personally I find having "red windows" and "green windows" annoying as hell. Nobody is going to use security that annoys them. Yes, the UI needs to be better, defaults need to be more secure, etc, etc, but whatever happened to educating the user? I have an idea!! Joe Schmoe can't install a new turbo in his engine, why should he be expected to be able to install software? Let's force everyone to go to a professional to get new software. We could all get rich.

Avichal 19:43, 14 November 2005 (PST) Dennis, I know exactly what you are talking about. The company that I work for has made recent changes where most users do not have administrative privileges, and are unable to install new programs or change most settings. Programs are installed via a centralized Gatekeeper module. Well, to spell it in four letters, it's been HELL. I personally have spent hours trying to troubleshoot issues with applications that won't run without Admin privileges. The most frustrating part is that if you contact the vendors, that is their final answer: "Well, you really need to run it with Admin privileges".

Mark Ihimoyan 23:28, 15 November 2005 (PST) I agree that implementing the red/green solution in an enterprise setting will be an IT nightmare. Another scenario/use case to be considered is that of the simple novice users who have no real expertise with computers - take for instance my grandmum, who simply wants to check email and stay in touch. I think it will be quite difficult for her to be able to classify what should be considered red and green. The effectiveness of this solution could potentially be undermined by the reduction in usability and ease of use. A question I think worth asking would be the cost of an attack to a user versus the reduction in usability of the system. I think it would be very interesting to find a sweet spot between ease of use and security. With security, it does not seem possible to eat your cake and have it... there has to be some sort of compromise between a system that is easy to use but less secure and one that is less usable but more secure and less prone to attacks.

Marty Lyons, UW -- The Red/Green concept has been used in designing secure network environments for years. See Bill Cheswick and Steve Bellovin's classic text "Firewalls and Internet Security" [ISBN: 0-201-63357-4 , Amazon: http://www.amazon.com/exec/obidos/tg/detail/-/020163466X] for some useful background in layered defenses. As far as I can recall, the first folks who started using this technique widely in the open Internet community were Brent Chapman (http://www.greatcircle.com/gca/staff/brent.html) and Marcus Ranum (http://www.ranum.com). Brent co-authored "Building Internet Firewalls" [ISBN: 1565928717 , Amazon: http://www.amazon.com/exec/obidos/tg/detail/-/1565928717] which also talks about these principles.

Eric Leonard I agree with Butler Lampson that a strategy of running applications under a least-privilege user account isn't sufficient to protect a system. Once packaged software vendors finally enable their applications to run under a least-privilege user account, the hacker community will quickly follow suit. Although hackers won't be able to root a machine, they still will be able to exploit resources or compromise data that is accessible to the current user. By creating malware disguised as an innocuous app such as a game, a hacker could still lure naïve users into downloading and running an EXE that needs no administrator access. Once run, it could do anything that the least-privilege user could do: delete documents, access email, or even replicate the code to other systems. To offer the best protection, users need to access sites that offer trust and accountability. Content downloaded from unaccountable sites should be sandboxed in some way. I don't think that the red/green system is the most user-friendly, but Butler has the concept right in general.

"System Restore points" to tackle malware

Avichal 10:08, 13 November 2005 (PST) Although Windows provides some functionality to restore the system to particular checkpoints, I use disk imaging tools instead. They, in my opinion, provide more robust restore functionality. I usually surf the net with relative impunity, and if things go wrong I restore from my last disk checkpoint.

There are some issues with this:

  • It protects my system from malware but doesn't protect my personal data in case someone does break in
  • It assumes that I will be able to detect any malware on my PC

However, I do think this approach works well for me, and should for most other users too. And it will probably be cheaper and easier than a Red/Green approach (which I think can only be done using two separate computers). Right now, I feel limited in that approach due to constraints on disk space and the time taken to image my disk. However, given the rate at which things are changing, we are sure to get cheap and abundant disk space and possibly smarter/faster system checkpoint mechanisms.

The killer of this approach, however, will be malware designed to survive a disk re-image :-(
It could be something that resides in other circuits, like flash RAM.

Manish Mittal This seems like a good idea, although you will have to keep re-imaging the OS drive whenever you download new software. It's usually advisable to run services under a least-privileged account, even logging in with non-administrative access. This does protect against some vulnerabilities.

Given these threats, how can one tell if there are spies on your system? By far the most common issue I have faced is a general system slowdown, especially when online, due to numerous spy programs running behind the scenes. Other indications include visiting a site and then receiving junk email or pop-ups about a seemingly related subject shortly afterwards, or the browser's home page suddenly changing.

One way to minimise the risk is to familiarise yourself with the security settings in your browser, to make sure that only cookies for sites you trust are let onto your system. Of course, downloading spyware killers works as well, but then spyware has gotten smarter too. Recently, my machine was hit by Winfixer and I haven't been able to get rid of it despite numerous tries. (Virtumondo is another name for this particular spyware.)

Sony DRM

--Dennis Galvin 13:02, 13 November 2005 (PST):
As David Aucsmith predicted, it looks like Microsoft is taking action against the Sony DRM rootkit with both the Malicious Software Removal Tool (MSRT) and the Anti-Spyware beta (http://blogs.technet.com/antimalware/), according to Jason Garms' blog entry for the Anti-Malware Engineering Team. Given that most users are willing to give up any shred of secure computing (previous discussions about candy bars), this will continue to embed itself on users' computers between monthly MSRT scans, only to be removed again on Patch Tuesday. Sadly, Sony's response to date has been largely evasive (eweek article).

Avichal 15:27, 13 November 2005 (PST) Well, at least "Sony BMG" has announced that it'll suspend this program [2]. However outrageous this may seem to others, in my opinion the recording industry is simply trying to find the right ground in the piracy wars. Not to say that what they did wasn't wrong, but it must be frustrating to come up with copy protection schemes which are in turn broken. I remember an interesting episode (can't recall the details or the company involved) where they added non-recognizable bits in the outer sector of a CD, which would be skipped by audio CD players but would render the CD unreadable on computers (thus blocking users from ripping it). The technique discovered to thwart it was to use a black marker to mark the outer track :-)

Avichal 07:02, 14 November 2005 (PST) Ok, I take that back. I had read up a bit on the Sony DRM issue, but just read this article [3] which details the egregiousness of the matter. I think Sony did step too far, in spite of whatever their motivations may have been.

Avichal 15:26, 19 November 2005 (PST) I looked into the case I mentioned of a copy-protection mechanism on CDs which was subverted by using a marker. Turns out that was none other than Sony! Ha, oh the capacity for dunceness :-) The linked article mentions this [4].

User:NAseef It would be interesting to see how Microsoft plans on zapping the DRM rootkit, as XCP technology manipulates the Windows kernel to make it virtually undetectable on Windows systems and nearly impossible to remove without possibly damaging the Windows operating system.

Handling Large-Scale Internet criminal activities

--Noor-E-Gagan Singh 20:33, 13 November 2005 (PST):
- For handling large-scale Internet criminal activities, do we need Internet-focused law enforcement?

We have a feedback email link on a popular MSN page. Every day I see at least 20-30 mails in it which are obviously phishing emails sent by cybercriminals. The mails talk about our non-existent PayPal, eBay or various bank accounts. I can see that the URL text in the mail looks correct, but the actual link points to a fake site. The mails use social engineering to coax your personal details out of you. Just glancing at the mail for 5 seconds makes it clear that it is the handiwork of a criminal.
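
That display-text/link mismatch is mechanical enough that it can be checked automatically. Below is a rough sketch of such a check, assuming the mail body is HTML; it is purely illustrative and not how any real reporting pipeline actually works.

  from html.parser import HTMLParser
  from urllib.parse import urlparse

  class LinkMismatchFinder(HTMLParser):
      """Flag anchors whose visible text looks like one domain while the
      actual href points somewhere else - a classic phishing tell."""
      def __init__(self):
          super().__init__()
          self._href = None
          self.suspicious = []

      def handle_starttag(self, tag, attrs):
          if tag == "a":
              self._href = dict(attrs).get("href")

      def handle_data(self, data):
          text = data.strip()
          if self._href and text.startswith(("http://", "https://", "www.")):
              shown = urlparse(text if "://" in text else "http://" + text).hostname
              actual = urlparse(self._href).hostname
              if shown and actual and shown.lower() != actual.lower():
                  self.suspicious.append((text, self._href))

      def handle_endtag(self, tag):
          if tag == "a":
              self._href = None

  # Example: the visible text claims paypal.com but the link goes to a look-alike host
  finder = LinkMismatchFinder()
  finder.feed('<a href="http://paypa1-secure.example.net/login">http://www.paypal.com</a>')
  print(finder.suspicious)  # [('http://www.paypal.com', 'http://paypa1-secure.example.net/login')]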

If I see crime on the street then I know where to report it, but where do I report the rampant crime on the Internet? I believe that we need a central registry where phishers can be reported and then prosecuted. Currently individual companies are carrying out the task of law enforcement, which makes the process inefficient and confusing for consumers.

Jeff Davis Yes... if only it were built in to the operating system... ... ...

Marty Lyons, UW -- One of the highest profile cases in this realm involved two Russians who penetrated systems and offered to fix them for ransom, and also set up a complicated scheme to defraud eBay and PayPal. They were ultimately lured to the U.S. by the FBI, and were arrested here after using FBI keylogger-equipped computers to access stolen data. While I was working toward a Master of Software Engineering at Seattle University, Barbara Endicott-Popovsky (now here at the UW Information School: http://www.ischool.washington.edu/people/personnel.aspx?id=6528&mode=pics) taught a Computer Forensics class, which went through the process of analyzing break-ins, preserving evidence, and presenting at trial. The class was co-taught by Steve Schroeder, the Assistant U.S. Attorney who prosecuted the Gorshkov/Ivanov case (excerpt below). I can ask Barbara and Steve if they wouldn't mind if I posted some of the material from that class, since it was a textbook example of a very involved and complicated crime, enabled wholly by badly secured systems.

Also of possible interest, Barbara is now teaching: (at the UW) LIS 498 Foundations of Organizational Information Assurance (IA) (http://www.ischool.washington.edu/ciac/index.php?page=LIS_498) which deals with these topics. And there is a very extensive reading list of resources at: http://www.ischool.washington.edu/ciac/index.php?page=Additional_Reading.

(excerpt, full story available for a fee from the Washington Post)

By Ariana Eunjung Cha of Washington Post (October 15, 2004)

CHELYABINSK, Russia -- Vasiliy Gorshkov did not set out to be a thief.

Relatives and friends say he had wanted to build a dot-com like those he had read about on the other side of the world -- the Amazon.coms, eBays and Yahoos that were becoming household names even in this industrial expanse of dilapidated tenements and factories.

But in the spring of 2000, just three months after he sank his inheritance into a quixotic start-up to build Web sites for corporations, Gorshkov was getting squeezed. Few merchants here wanted to hear about the Internet, much less invest in it. What's worse, Gorshkov told several associates, local crime bosses had started to demand that he hand over a percentage of his earnings to avoid smashed windows, theft of merchandise and broken bones.

Gorshkov, then 24, didn't have the cash. Business associates recalled that he didn't even have enough money to keep paying his four programmers. But one of those programmers, 19-year-old Alexey Ivanov, said he knew how to raise the protection money, according to lawyers familiar with the conversation. Gorshkov could offer a protection service of his own. To online businesses. Six thousand miles away in the United States.

Soon, U.S. prosecutors said, Gorshkov and Ivanov were scouring the Internet looking for security vulnerabilities in the computer networks of American corporations. When they found a way in, they would steal credit card numbers or other valuable information. They would then contact the site's operator and offer to "fix" the breach and return the stolen data -- for a price.

Within a few months, banking, e-commerce and Internet service providers across the country, including Central National Bank of Waco, Tex.; Nara Bank NA of Los Angeles; and Internet service provider Speakeasy Inc. of Seattle, became victims. The hackers also used online payment service PayPal Inc. to turn pilfered credit card numbers into cash by setting up phony accounts. The men would eventually expose American businesses to perhaps tens of millions of dollars in losses, the prosecutors said.

(end excerpt)

abc The federal government (in the US) should take the lead in providing a central point of contact for reporting large-scale Internet crime. While some of this is already implemented by the FBI, there needs to be an organization with clear accountability and authority (and resources) to investigate and prosecute these crimes. The big problem, as has been pointed out in one or two lectures, is that many of these crimes (intentionally) cross international boundaries, which makes enforcement and investigation more difficult. One thing that would be interesting would be an international accord/treaty to help speed phishing investigations, since the scope of affected parties is clearly global.


Daryl Sterling Jr About that article that was sent out about cybercrime revenue overtaking the drug trade: how in the world do they know the real numbers for such activities? I _highly_ doubt dealers & distributors of code and narcotics publish quarterly financial statements.

Group backs program to certify downloads

Jim Xiao WASHINGTON--A group of Internet companies plans to announce on Wednesday a new program to certify downloads so consumers can get friendly and noninvasive software. (http://news.com.com/Group+backs+program+to+certify+downloads/2100-1029_3-5954668.html) I just hope one of these certification programs can become really successful and popular, so that software manufacturers will be proud and eager to get certified under it to attract more users. Otherwise, most consumers will just end up not knowing whom to trust, since there are so many such programs which claim to certify or qualify downloads against some standard of transparency.