Talk:Lecture 6


SHA

--Jeff Davis: Bruce Schneier, author of the authoritative work on cryptography, Applied Cryptography, has a blog where he posts about many things we will most likely be discussing in this class. I bring him up because he recently posted about a team of Chinese researchers who have broken SHA, here and here.

--Noor-E-Gagan Singh: What if the hash is altered to match whatever changes were made to the message? How do you know that the hash itself has not been modified? I got my answer at An Illustrated Guide to Cryptographic Hashes: one signs (encrypts with one's private key) the hash of the document, and the result is a digital signature. This ensures that only the correct public key can decrypt the hash itself.

--Anil Relkuntwar: Here is why I think a collision, or the breaking of a hash function, is significant:

1. Initially, the secure and non-repudiable way for Alice to send messages to Bob was:

    Slow: Alice sends KB{kA{msg}} to Bob

2. But encrypting long messages with long keys is slow. To make things faster, message digests are used. Here Alice needs to encrypt only the message digest (which is much smaller than the entire message) with her private key. This is in fact the signature for this message, signed by Alice. So the new way of sending the message is:

    Fast: Alice sends KB{(msg, kA{hash(msg)})} to Bob

This faster way is secure and non-repudiable as long as the message digest is free of collisions. If the message digest has collisions, Bob can generate another message with the same digest. Since the message digest is the same, the digital signature produced with Alice’s private key is the same for the original message and for the tampered/generated message.

3. Now that collisions can be produced in such a predictable manner, the signature is no longer unique to a message. For non-repudiation to be unambiguous, we are back to:

    Slow: Alice sends KB{kA{msg}} to Bob
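
To make the collision concern concrete, here is a minimal Python sketch of the "Fast" scheme, assuming the third-party cryptography package; the message, key size, and padding choice are illustrative, not prescriptive. Because Alice signs only hash(msg), any second message that collides with msg under the hash would verify against the very same signature.

    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

    alice_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    msg = b"Pay Bob $10"
    digest = hashlib.sha256(msg).digest()

    # Alice signs only the digest -- this is kA{hash(msg)} from the "Fast" scheme above.
    signature = alice_key.sign(digest, padding.PKCS1v15(),
                               utils.Prehashed(hashes.SHA256()))

    # Bob recomputes the digest and verifies. Any msg2 with
    # sha256(msg2) == sha256(msg) would pass this exact same check.
    alice_key.public_key().verify(signature, hashlib.sha256(msg).digest(),
                                  padding.PKCS1v15(), utils.Prehashed(hashes.SHA256()))
    print("signature verified")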

Authenticode Dialog in Microsoft Internet Explorer

--Jeff Davis: I work on the Browser UI team for Internet Explorer and for Windows XP Service Pack 2 I actually did some work on the Authenticode Dialog that was the topic of a few slides tonight. I wanted to share a few things:

  • The screen shots in the slides are of the old pre-XPSP2 dialog.
  • The old dialog says something like "Do you want to install and run '%s'..." where the %s is replaced with the name of their program. The dialog did not impose strict enough limits on the name of the control that was being downloaded, resulting in unscrupulous companies titling their programs in clever ways, e.g. "Click YES to automatically get blah blah blah..." This social engineering attack was ridiculously successful even though to us CS-types it was obvious that it was sketchy.
    • For XPSP2 we moved the dialog elements around and started imposing strict limits.
    • This goes back to trusting the root authority to do the right thing. Verisign could have required reasonable text for this field, but they did not. This is actually quite understandable because the companies that author these questionable controls tend to be a fairly litigious bunch.
  • There are various levels of warnings in the authenticode dialogs now. For example, if everything seems kosher you get a fairly limited warning dialog. If there are just a few things wrong, like the cert expired, you get a slightly more alarming (visually and textually) warning. If someone is trying to install version 3 of a control that has version 2 already installed on your machine and the certificates are signed by different entities, we block it outright.
  • And everyone knows nobody reads dialogs anyway. Most people will say 'yes' to anything while browsing the web if they think they will get something free out of it.
  • I have been on customer visits and seen usability studies where we tell people they are installing spyware and they don't care. They don't care that someone might be able to use their computer to relay spam or DoS attacks, as long as they can play the game they want, or get the cool theming effect.

So we see there are a lot of factors here beyond just the computer science. The human factor and the threat-of-litigation factor are huge.

--David Coleman: This was the topic of a major slashdot (http://www.slashdot.org) rant not too long ago. The conversation was in response to something from Bruce Schneier saying essentially that code signing was useless for computer security. What every single responder forgot was the threat of a virus being distributed as a program (driver, update, etc.) from a given vendor (that you would trust). That was a major concern several years ago. So it's true that this is not a solution to the overall security problem (as all the anti-Microsoft folks seem to like to bash it for), but it does mitigate a very real threat. It has all the problems listed above, but it is certainly better than nothing for ensuring that the bits you are about to run / install really are from the source you think they are (allowing the trust relationship to take place) and haven't been modified en route.

--Rob Anderson: - Authenticode does improve the problem of malicious users distributing dangerous mobile code that is falsely purported to be a patch from a vendor (like Microsoft, Logitech, etc). I think the biggest weakness of the approach, however, is that it makes users think they're running safe code when they're not. This actually makes it worse than if no signing had been done at all. I prefer the approach used by Java, which actually inspects the code before it is executed, and only allows particular instructions to be run, according to the security policy provided by the user. Of course, there is some risk that the JVM itself may be compromised, but it's much easier to secure a single set of binaries that come pre-installed on a computer than it is to come up with a security model for every random piece of code floating around on the Internet.

Jared Smelser- (In response to the idea that users will accept almost anything, even if it causes their computer to relay spam or DoS attacks.) What are the implications of a system that lacks any true responsibility in its usage? As I understand it, viruses propagate through compromised computers; what responsibility do we as users have in safeguarding the integrity of our computers? Should users who fail to protect against viruses, and whose computers are subsequently used as vehicles for spreading them, be held accountable? In the last class lecture we learned that a single virus that disrupts the internet can cost billions in lost commerce, as well as untold damage to infrastructure (e.g. 911 emergency, flight control, etc.). My question then becomes: should everyone have the right to use the internet? This might sound dramatic, but how many systems of commerce and infrastructure this important to the world economy are not regulated at some level for use? I don’t think I would be a proponent of restricting usage of the internet, but I certainly feel the topic should be discussed.

--Rob Anderson: - Even more complicated in terms of liability: It was recently reported that the latest CD from the band "Van Zant" installs code on the user's machine that prevents duplication of the CD and hides itself from view, like a rootkit. The code does not go through any kind of Authenticode-style checking (although according to some sources, if you read the fine print in Sony's EULA, you'll see that code will be installed). Worse, the code is difficult to uninstall, and can later be exploited by an attacker running code on the same box. How much liability does each party have in this case, if a secondary attacker manages an exploit through code that Sony installed on your box? See [1].

Eric Leonard: - Developer laziness and apathy toward security is also contributing to the problem of users ignoring Authenticode security dialogs within Internet Explorer. I’ve reviewed several corporate applications that required users to install unsigned Web code. Rather than being inconvenienced with figuring out how to sign the code and eliminate the warnings, the developers end up telling the users to ignore the dialogs. I’ve even seen installation instructions—complete with screen shots—showing what warning dialogs to click through. This ends up reinforcing bad behavior and misinforming users of the true problems of ignoring the dialogs.

Yi-Kai - Certainly we should try to clarify what are the responsibilities of the various actors involved: users, internet service providers, and the corporations that develop these technologies. Right now most people feel that the corporations should be responsible for security. I tend to think that ISP's should also bear responsibility for monitoring their networks -- and that might provide a sufficient level of protection so that we wouldn't have to regulate the individual users.

Santtu 18:26, 9 October 2005 (PDT): There has been talk about licensing or certification with respect to users and software at several levels. A "computer users license" has already been mentioned above (also see Wired News and SearchSecurity.com articles on the topic). The idea of licensing software engineers, similar to civil engineers, was discussed in last year's IT & Public Policy course, including a chapter on the topic in one of the final papers. I don't think that either of these approaches (warning dialogs or licensing) is going to be a total solution -- they'll work for some users but not for all. What might work better is associating some cost with unwanted behavior. This cost need not be immediate or even financial. For example, a lot of the malware/spyware that gets installed when users just click "OK" can lead to degraded system performance. If this degradation could be attributed to the offending software, users might become more aware of the issues and be more careful, since although they gain the ability to play a game or get a cool theme, they pay with slower performance. The trick, of course, is being able to attribute performance or stability degradation to specific programs and then inform the user in a clear manner (while avoiding the appearance of "blame the other guy").

Drew Hoskins This discussion of personal security has quite a parallel in biological vulnerability to viruses like HIV.
HIV spreads quickest where nobody knows about it, and where the living environment is most conducive to its spread. Security researchers and UI designers can certainly try to make the internet environment more friendly, but ultimately uneducated people will find a way to spread or become infected. Having a dialog pop up when you try to navigate to a website is a bit like a hotel placing condoms on the bedstands: it's somewhat effective, but annoying, and ultimately limited by how intrusive and preventative people will allow it to be, and by how much effort people will be willing to spend on saving others from themselves.
To answer the above question with another, do you hold someone accountable if he/she spreads HIV to somebody else? I suppose you must consider whether their action was neglectful or simply uninformed.
Ultimately, the security world will converge to the same idea that STD prevention has in the US: education is the most important thing, and it is up to you to protect yourself. Researchers are still trying to cure/treat diseases, but we are also being taught all about the hazards as we grow up and into adulthood. I think the biggest obstacle to good pervasive personal computer security today is simply the lack of established infrastructure for teaching technology in schools; thus, there's nowhere really to fit in education about security. I'd be curious to hear if anyone knows about strides that have been made in this area.

David Dorwin Jeff is right about people not caring about spyware, etc. I don't need a usability study to tell me that, though; all I have to do is look at the computers of my non-techy family members. They have all kinds of junk from the web installed on their computers. When I ask them about it, they often don't know where it came from, or they tell me I can't uninstall it because they really like the desktop backgrounds or whatever benefit they get. Security and privacy do not seem to be a concern for them - they aren't even fazed by all the pop-ups that these programs generate.

There have been several mentions of unscrupulous companies or programs, but we all (except the really paranoid, conspiracy theorists, opposition, and security buffs) assume that "trusted" companies won't violate the trust we place in them. (David Coleman alluded to this above.) We assume this for a variety of reasons (we know them and feel comfortable with them, we don't have a choice - I need an OS, they have a lot to lose, etc.). In 2000, Steve Gibson found that the downloader software from a couple of "trusted" companies was retrieving personal information, by examining the actual packets sent from his computer. Unless you go to these lengths, it's hard to know exactly what is leaving your computer. And with the number of net-enabled applications and the volume of information transferred, it'd be easy to hide information. Even if you have a software firewall that limits program access to the Internet, it's difficult to know what's okay. Why exactly does "Generic Host Process for Win32 Services" need access to the Internet? What I do know is that if I disallow access, I can't do much.

Yet another reason why passwords are on the way out...

--Gmusick 10:01, 6 Oct 2005 (PDT) I ran across this article on slashdot a while back about some experiments out of Berkeley on recovering typed characters by monitoring the sounds emanating from the keyboard. In the article [2] Zhuang, Zhou and Tygar claim they can get 96% accuracy on keystrokes and break 80% of 10-character passwords in less than 75 attempts.

Now a three-attempt lockout will mostly foil this technique, but these attacks are probably going to become more refined and more tolerant of random noise. So eventually you could imagine gathering your co-workers' passwords with a run-of-the-mill tape recorder sitting on your desk.

--Jeff Davis: I can't find the paper, but last year I went to a talk at MS given by Adi Shamir where he presented his work on breaking RSA by simply listening to the sounds the machine made during the encryption process. They used a karaoke microphone one of his children had. When they analyzed the sound spectrum they saw distinct changes at different phases of the process. They traced it back to a capacitor in the power supply that emitted a different 'whine' as the power consumption of the CPU changed. He said he was confident that with a very good microphone they could recover the private key eventually...

--Joe Xavier [MSFT]: Here's their proof-of-concept presentation for the talk that Jeff refers to. [3]

Sean West (UCB): As a policy student with no CS background, I wonder how to factor all this insecurity into policy. Would it be fair to assume, then, that all passwords, no matter how strong, could be broken? If this procedure works, must we now worry, beyond keyloggers, about programs that turn on microphones to capture the sounds coming from a user's keyboard? It seems to me that the safest policy assumption is that nothing is secure, but there are ways to reduce one's risk of being exploited. Is this a fair assumption? Is there a discussion in the literature about when one should consider a system secure, or about creating systems that are impossible to hack, whether physically or virtually?

--Rob Anderson: - I think it's safe to say that password systems aren't terribly secure, but they work "well enough" for most users, and they're convenient. I think it's a matter of economics. If users perceive that their data isn't worth a lot, then they're willing to bet their data on relatively cheap systems, like passwords and keyboards. For that reason, I don't think keyboards are going away any time soon. So I think that Sean's point about "nothing being secure", and about "harm reduction", is an accurate model. I suspect that systems that listen to keyboard sounds are difficult to build, and probably require significant tuning. So, I think that people won't easily be able to get their hands on a system that decodes keyboard sounds into characters, and thus people will continue to think that keyboards/passwords are relatively secure. One of the things that Joshua Lackey said in Lecture_10 is that you can, for a relatively low cost, build a software radio that listens to Bluetooth signals. If the price of those systems gets very, very low, then it's easy to imagine that I'll be able to buy a tiny device that collects passwords within an entire room or lab with 100% reliability (assuming people are using wireless keyboards). Of course, if those systems are cheap and easily available, that might push people toward biometric authentication like fingerprint or retina identification.

Manish Mittal More secure passwords: Here’s an interesting paper on passwords found via Slashdot (http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/tr500.pdf). The study from the Cambridge University Computer Laboratory debunks some of the conventional wisdom about passwords. For example, it found that random passwords are no better than those based on mnemonic phrases; each appeared to be as strong as the other. It also offered suggestions on creating passwords that are harder to crack. One idea: write a simple sentence of eight words and then use the first letter or last letter from each word as the password. Making some of the letters uppercase helps too.

Traditional hacking is no longer as big a problem as in the past, due mainly to current technology which can counter it. A different type of hacking, however, in the form of “social engineering” (which includes attacks such as phishing), is on the rise. This is the view of Naveed Moeed, RSA Security technical consultant for the Middle East and Africa region, who spoke to ITWeb while on a recent visit to SA. Banks and financial institutions, some of which have recently been the targets of phishing attacks, are constantly facing the threat of fraud, he said. Phishing, he argued, is difficult to combat due to its innocuous nature, and takes advantage of users' perceptions that they are acting correctly by submitting information to an “authentic” online banking site, for instance. “It is becoming widely accepted that [banks and insurance companies] have to take responsibility for security – it cannot be left up to the users,” Moeed stated. With only four or five personal details, hackers are able to create a limited set of passwords, Moeed maintains, highlighting that about 80% of a Jordanian bank's online clients were affected after completing a bogus survey.
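
A minimal Python sketch of the mnemonic-phrase idea mentioned above; the sentence and the capitalization rule are arbitrary examples, not a recommendation of any particular scheme.

    # Build a password from the first letter of each word of a memorable sentence.
    sentence = "my dog barks twice at every mail truck"   # eight words, easy to recall
    letters = [w[0] for w in sentence.split()]
    # Uppercase every third letter to add some variety.
    password = "".join(c.upper() if i % 3 == 0 else c for i, c in enumerate(letters))
    print(password)   # -> "MdbTaeMt"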

Two-factor authentication counteracts such situations and creates a secure online environment by validating the identity of a user by requiring a personal identification number, as well as a token code before granting access to a network, he said. This limits vulnerabilities associated with easily compromised passwords.
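
For reference, the token codes such systems generate are typically time-based one-time passwords. Below is a minimal Python sketch of the generic HOTP/TOTP construction (RFC 4226/6238), not any particular vendor's token; the shared secret is an illustrative placeholder.

    import hashlib, hmac, struct, time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # HMAC-SHA1 over the counter, then dynamic truncation to a short numeric code.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    def totp(secret: bytes, period: int = 30) -> str:
        # The counter is just the current 30-second time step.
        return hotp(secret, int(time.time()) // period)

    shared_secret = b"example-shared-secret"   # provisioned on both token and server
    print(totp(shared_secret))                  # e.g. "492039", changes every 30 s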

Changing Paradigms

Chris Fleizach - The good thing about technology is that whatever people can dream, they can build (well, almost). A main problem of conventional communication is dealing with someone who can intercept messages (often called the man-in-the-middle attack). Recent advances in quantum manipulation have led to commercial systems that guarantee no one else looks at your data: if they try, the quantum nature of some of the data being sent is altered, alerting both parties immediately that there is a problem. This field is called Quantum Cryptography, and MagiQ is one of the first providers of such a system. The lecture tonight reiterated the commonly held belief that no system can ever be completely secure, but can we build systems that exploit fundamental properties of nature to actually achieve this? Attackers work on the belief that the targets have made some assumption that is false. What if our assumption is that the laws of physics hold true all of the time (presumably, they do)? Will there be ways around this (besides exploiting human fallibility)?

Yi-Kai - Quantum cryptography does provide some fundamental new capabilities, but it still suffers from "implementation bugs." For instance, its security depends on the ability to emit and receive single photons; but in real implementations, the emitter sometimes produces multiple photons (allowing information to leak), and the detector has limited efficiency (introducing errors which can hide an eavesdropper). A lot of work goes into building better hardware to close these loopholes. Even so, this only protects against eavesdropping on the optical fiber; there are always side-channels to worry about, like having information leak via fluctuations in the power supply, though you could surely fix these problems through heroic effort.

I think it is possible to achieve security based on the laws of physics, in principle, but probably not in practice.

SMM: NATURE covers this regularly. Today, there are experiments that provide physics guarantees against eavesdropping over 20 km or so in open air. That might not sound like much, but the uplink and downlink to a satellite won't be much harder. So global guarantees against eavesdropping ought to be workable, modulo man-in-the-middle kinds of issues where the bad guys own their own rogue satellite, etc.

Quantum computing is harder, but simple factoring (e.g. finding the prime factors of 15) does work, and academic physicists are devoting enormous effort to it. It only solves special problems, but RSA happens to be one of them. What's needed is to go from 5-6 ions as bits to several thousand. A big scale-up, but reasonable over several decades. And, of course, one imagines that NSA is ahead of the rest of us by a modest but non-zero amount.

This would be a good project topic for people!

Cmckenzie: There is a flipside to this too. It is a nice idea that we could have perfect encryption to protect our data, but there are times when this is undesirable. Intelligence agencies have always been uncomfortable with encryption, and there are times (say, with a court order) when I think it's appropriate for the government to be able to requisition your private communication. My understanding is that quantum encryption provides an absolute guarantee against undetected interception. In the context of homeland security this could be a terrible thing, especially if it became affordable. On some level it's nice if encryption isn't unbreakable, because it allows us (or the government, at least) to listen in on the bad guys.

A different paradigm shift

Jameel Alsalam While it seems like some types of attacks would be foiled by new high-tech solutions like quantum encryption, those solutions require significant new investment in hardware and pose many technical challenges (I'm sure a list about this could get long). The bigger paradigm shift seems to be recognizing that there are a lot of weak points in any security system before the important data ever gets encrypted, and these are more often overlooked (or simply assumed away).

Reading reflections off of walls is one example, but I think that by analyzing particular systems, a lot of other weaknesses would emerge. Solutions probably need to be somewhat tailored to the system they will be used for. If we are talking about keeping login passwords for online websites secure (only one application, but probably a major gateway into identity theft), then SSL seems like by far the strongest link in the chain. It is much more likely that a criminal could accomplish a number of different attacks on this sort of system than on the encryption - be it phishing, becoming the employee of an organization that holds personal information, putting recording mechanisms on public workstations (a little camera in a coffee shop would probably capture several people's important passwords per day), or any number of other high-tech or low-tech ways of getting around encryption. The encryption itself, at this point, is an esoteric problem for mathematicians to work on, while we know of many other weaknesses in average systems.

Probably the one area where there is the most room for improvement and the highest potential to dissuade attacks is improving enforcement and forensics. For security professionals, the risk function has to do with the vulnerability of the system and the penalty for a security breach. For attackers, the risk assessment has to do with the probability of success vs. the benefit of success and the penalty for failure. So we could just as profitably spend time trying to limit the benefit of success for a criminal (lower daily withdrawal limits on ATMs, or such things) or increasing the penalty for failure. It seems like the probability-of-success bar is already fairly high.

Yi-Kai - Yes, it does seem like crypto is the strongest link in the chain, so we should focus on practical non-cryptographic attacks.

On the other hand, there are some applications where you should never assume that crypto (or any other part of your security) is "good enough." In World War II, the Germans were convinced that the Enigma cipher was unbreakable (and the Allies did their best to hide the break). Even nowadays, surprises do happen in cryptography, like the recent attacks on MD5 and SHA-1, and some possible weaknesses in AES. These kinds of attacks will never be common, but for high-value commercial applications, they cannot be ruled out.


Cryptosystem secure to "a high probability"

--Parvez Anandam 01:34, 6 Oct 2005 (PDT) The RSA cryptosystem, on which e-commerce is based, relies on the fact that it is difficult to quickly factor a large number. No one has proved that that's impossible to do; we just know it hasn't been done yet. The RSA cryptosystem is secure to a high probability, the thinking goes.

This raises the question: what is the probability that some very clever person will come up with an algorithm to factor numbers in polynomial time? (I don't mean quantum computing fuzziness but something you could implement today on a good old fashioned digital computer.)

That probability is close to zero, you say. At the beginning of the previous decade, those were likely the odds given to proving Fermat's Last Theorem. The probability of the latter is now 1: it has been proved.

It doesn't take an evil-doer to try and solve the factoring problem for selfish motives. A University Professor is the most likely candidate for such a discovery, made to advance human knowledge. The moment she publishes that result, however, our world changes.

We have no way of quantifying the likelihood of scientific progress in a certain direction. It therefore seems imprudent to rely on a cryptosystem that is based not on a solid mathematical proof but merely on a conjecture that it is hard to crack.
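
To see concretely why a fast factoring algorithm matters, here is a toy RSA example in Python with deliberately tiny, illustrative primes: whoever can factor the public modulus can rebuild the private key.

    p, q = 61, 53                # the secret primes (absurdly small, for illustration only)
    n = p * q                    # 3233: the public modulus
    e = 17                       # public exponent
    phi = (p - 1) * (q - 1)      # 3120
    d = pow(e, -1, phi)          # private exponent (2753); computing it requires p and q

    m = 65                       # a message
    c = pow(m, e, n)             # encrypt with the public key (e, n)
    assert pow(c, d, n) == m     # decrypt with the private key d

    # An attacker who factors n = 3233 back into 61 * 53 can repeat the computation
    # of d above -- which is why a polynomial-time factoring algorithm would break RSA.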


--Gmusick 10:01, 6 Oct 2005 (PDT) Not really. As the prof noted, you should always try to do the best you can with what you've got. The real problem in this case is not that somebody might crack the system by doing polynomial time factoring, but that we might not detect they had cracked the system. If we know they have cracked the system, then we can guard against it while we switch to a different encryption scheme that doesn't rely on factoring. And when they crack that, we'll switch again. And it will cost money, but the question is how much?

I think one of the overriding lessons from this course is going to be that we must start to think like insurance adjusters and accept that we are never going to stop bad things from happening. Instead we are going to have to analyze how much using imperfect technology in a given application will cost, in terms of some value unit (lives, money, time). Not doing that analysis is the imprudent part.

--Chris DuPuis 13:51, 6 Oct 2005 (PDT) Unfortunately, the idea that we'll "switch to a different encryption scheme" is based on the assumption that the data you encrypt today will be worthless tomorrow. If an attacker just snoops on all of your network traffic and records it, he will be able to decrypt it at some point in the future when algorithms, processors, and networked machines give him sufficient computing power to do so. As long as your data becomes worthless in a reasonably short time frame (a few years, maybe), encryption is a good protection. If the disclosure of your data would be harmful even many years into the future, encryption of any kind is probably insufficient protection.


SMM: This is why NSA keeps encoded intercepts forever. The US break-in to Soviet wartime traffic ("Venona") came five years after the fact, but that was plenty good enough to uncover spy networks. Query, however, what the shelf life is for the vast majority of data. If you use a random key as long as the message, you are formally secure. It's expensive, but governments have done it for 100s of years. If the number of secrets with a long shelf life is small, that might be the way to go.
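
A minimal Python sketch of that "key as long as the message" idea, i.e. a one-time pad; the message is illustrative, and the guarantee holds only if the key is truly random, kept secret, and never reused.

    import os

    msg = b"ATTACK AT DAWN"
    key = os.urandom(len(msg))                            # random key, same length as msg
    ciphertext = bytes(m ^ k for m, k in zip(msg, key))   # encrypt: XOR with the key
    recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))
    assert recovered == msg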


--Gmusick 14:46, 6 Oct 2005 (PDT) Touché, sort of. Although I'm not assuming that current data will be "worthless" so much as I'm assuming it will be "worth less". I'm of the opinion that most of our "personal" data loses value over time. I may be disabused of that notion as the course progresses, though.

We are in agreement that encryption by itself is insufficient. Intrusion detection is at least as important as the choice of encryption algorithm. Having personal data snooped for a few people before we cut off the bad guy is a tragedy for those people, but having it snooped for hundreds of thousands is a calamity.

--Jeff Davis 17:30, 6 Oct 2005 (PDT) Well, there are other choices. Take the Amazon example: if you record my SSL transaction, eventually you will be able to brute-force it. However, my credit card will probably have expired by then. Furthermore, I can generate a one-time-use credit card number and use that, and only Amex knows the mapping back to my real account number.

Encryption is like a bike lock. Will it stop someone who is sufficiently motivated? No. But it does stop 99% of the people walking by with just a little bit of larceny in their hearts. Perfect security is impossible so we raise the bar as high as possible and reasonable, and then we rely on law enforcement to clean up the scum. There are other uses for encryption besides the amazon case, but I believe that if we think them through none of them are good reasons to worry:

  • If I use encryption as part of illegal activities, I never want it to become public. Of course, we do not care about protecting criminals.
  • If I use encryption to encrypt business secrets, I never want my competitors to discover it. Eventually they will discover it, and probably without breaking the encryption. Employees leave, patents get filed; no trade secrets last forever.
  • If I use encryption as part of my activities to overthrow an illegitimate or repressive government, I never want the government to know. I suggest that if by the time they break the encryption you have not succeeded in overthrowing them, you are probably doomed anyway.

--Noor-E-Gagan Singh What is the significance of this paper? It presents an algorithm that determines whether a number is prime in polynomial time. There is an interesting discussion on Slashdot. [4]

Keunwoo Lee 16:42, 10 October 2005 (PDT): It doesn't have any relevance to cryptography. Probabilistic polynomial-time primality tests (which are, for all practical purposes, just as good as deterministic tests) have been around for decades. No cryptosystem depends on the hardness of primality testing, which is quite distinct from factoring.
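
For the curious, here is a Python sketch of Miller-Rabin, the standard probabilistic polynomial-time primality test Keunwoo is referring to; the round count of 20 is an arbitrary illustrative choice.

    import random

    def is_probable_prime(n: int, k: int = 20) -> bool:
        # Each round with a random base wrongly passes a composite with probability
        # at most 1/4, so k rounds give an error probability of at most 4**-k.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d, r = d // 2, r + 1
        for _ in range(k):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False          # witness found: definitely composite
        return True                   # probably prime

    print(is_probable_prime(2**61 - 1))   # True (a Mersenne prime)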

PKI Security Dialog from IE

--Dennis Galvin 01:35, 6 Oct 2005 (PDT) The article Why Johnny Can't Encrypt (even if very dated) brought up many germane points about usability. In that vein, the Internet Explorer dialog box on slide 41 from the lecture is certainly less than clear with its use of graphics to represent the threat:

  • Yellow caution sign by the line saying "The security certificate is from a trusted certifying authority,"
  • Green checkmark by the line indicating the error "The name on the security certificate is invalid...."

OK, software is not perfect, but this is an excellent example of the confusing use of graphics. It also does not inspire confidence that the software is correct, nor does it encourage the user to contact the webmaster of the site with the invalid security certificate. For the record, the Firefox browser has confusing security dialogs as well, though this may have been corrected in the latest security release. "jeffdav" made an earlier comment about there being a lot of factors beyond just the computer science. Most users, when confronted with such a dialog, will click through it anyway, as the earlier post pointed out, probably muttering something under their breath about not understanding computers. Usability may be one of those things beyond computer science, but it needs to be factored heavily into GUI design.

Eavesdropping on computers

--Dennis Galvin 08:08, 6 Oct 2005 (PDT)

The lecture showed two slides demonstrating that sensed reflected light can be used to reconstruct, with a high degree of accuracy, what is on the screen. I wonder how well it would do at reading 10 pt or smaller type. News articles in the past few weeks have played up an upcoming paper by three UC Berkeley researchers on using a $10 microphone and signal processing to grab keystrokes with 96% accuracy (http://www.cnn.com/2005/TECH/internet/09/21/keyboard.sniffing.ap/). There was an earlier paper from IBM in Almaden, CA, along the same lines, in which Asonov and Agrawal used a microphone and a neural network to suss out keystrokes from keyboard sound emissions with somewhat less accuracy (http://www.almaden.ibm.com/software/quest/Publications/papers/ssp04.pdf). What a difference a bit of time can make. The upshot here is that even if sensitive information is immediately encrypted and only stored in encrypted form, with the combination of these two technical advances the encryption may now be immaterial.

--Liebling 11:10, 6 Oct 2005 (PDT)

Some forms of biometrics can get around the "peeking" problem of recording passwords from visual and aural cues. Putting your thumb on a fingerprint reader doesn't make any noise. It doesn't eliminate eavesdropping such as capacitor whine, however. Furthermore, biometrics can only solve the "authentication" problem rather than the authorization and encryption problems. Even then, if the price is high enough, the necessary thumb could be obtained ...

Another advantage of biometric authentication is that the user isn't able to "give up" his "password" for a bar of chocolate (as some 70% of people in the recent Liverpool study did). Each system using biometric authentication can add its own hashing method (discussed last night) to the fingerprint/retinal/etc. input, so that even having the print doesn't necessarily guarantee entry.
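
A minimal Python sketch of that per-system hashing idea; the "template" bytes and the HMAC construction here are illustrative assumptions, and real biometric template protection is harder because two scans of the same finger are never bit-identical.

    import hashlib, hmac, os

    system_salt = os.urandom(16)          # unique to this system, never shared

    def enroll(template: bytes) -> bytes:
        # Store only a keyed hash of the template, not the raw print.
        return hmac.new(system_salt, template, hashlib.sha256).digest()

    def check(template: bytes, stored: bytes) -> bool:
        return hmac.compare_digest(enroll(template), stored)

    stored = enroll(b"canonical-template-bytes")          # hypothetical template
    print(check(b"canonical-template-bytes", stored))     # True
    print(check(b"someone-else-entirely", stored))        # False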

--Chris DuPuis 14:06, 6 Oct 2005 (PDT) Biometrics would be much more secure if your fingerprints and retinal patterns were protected from snoopers. Unfortunately, it's easy to get fingerprints off anything you touch. For retinal patterns, how sure are you that someone doesn't have a close-up, high resolution photo of your face? On an insecure network, there is no guarantee that the authentication data that is being sent over the wire originated in an "approved" thumbprint or retinal scanner, and not in a program that reads in its biometric data from JPEG files.

Bruce Schneier (gee, his name sure pops up a lot, doesn't it) has written a good bit about biometrics, including this piece: Biometrics: Truths and Fictions

Keunwoo Lee: In other words, with biometrics, you might not even need to give people a bar of chocolate. Nevertheless, it seems that the important points here are:

  1. don't let "perfect" be the enemy of "better"
  2. do not confuse "better" with "perfect".

Biometrics, when used correctly as part of a multi-factor authentication system, could improve matters for some applications (for the reasons Schneier describes, it seems like a bad idea to use them for every application). However, for the foreseeable future we should expect authentication to be breakable by determined adversaries.

This leads into one of the points that Geoff brought up in his lecture, which is that the vast bulk of computer security practice and research focuses on "protecting the perimeter". But the perimeter will inevitably be broken, so we need to figure out how to cope with that.

Santtu 17:27, 9 October 2005 (PDT): Another interesting, and even scary, eavesdropping method is Differential Power Analysis, where it is theoretically possible to extract an encryption key by measuring a CPU's power usage. There are ways to defend against this type of attack, but it goes to show that if you're sufficiently paranoid then you really can't trust anything. :)

--Jeff Bilger 22:20, 6 Oct 2005 (PDT) During yesterday's class, it was mentioned that Certificate Authorities (CAs) are authenticated by their public keys. A necessary precondition is that these public keys are distributed to a broad audience, and this is achieved by preloading them into web browsers such as Internet Explorer. If this is the case, how does a new CA get its public key distributed to the world? I would argue that if a user had to manually import that public key into their browser, the barrier to entry into the CA marketplace would be way too high. I just checked my IE settings and there are 37 distinct Trusted Root Certification Authorities, while there are only 29 defined in my version of Firefox. If I wanted to start my own CA tomorrow, how could I guarantee that when people purchased SSL certificates from me, visitors to SSL-protected areas of their sites would not see warning messages regarding trust?

Andrew Cencini - I would think that you would work with Microsoft to get your public key included in the next windows update / service pack that goes out. This is just a guess but I'd imagine at the minimum there'd be a little lead-time involved with all of that. I think someone in this class worked on IE so perhaps they may know better!

Jeff Davis I checked with some people on the team. If you want to get your stuff added there is an alias (cert@microsoft.com?) you can e-mail and you just have to sign some sort of agreement and they push it out via windowsupdate.com.

Manish Mittal I think it also depends on whether you want to add a new root CA or a subordinate CA in a chain. CAs are organized into hierarchies with the fundamentally trusted root CA at the top. All other CAs in the hierarchy are subordinate CAs, and are trusted only because the root is trusted.

Russell Clarke I looked up the different CAs available, and it seems like VeriSign is the only major CA around. They bought Thawte a few years ago, although Thawte still operates under its own name. There is also a free CA called CAcert (www.cacert.org).

Can computers operate more transparently?

Yi-Kai - I'm fairly sure this is not an original idea, but I'm curious what ideas people might have along these lines.

One thing about modern operating systems is that they're not very "transparent." It's hard to see what the computer is really doing -- for instance, if you open the Task Manager in Windows, you see about 20 processes, but for most of them, you can't tell what they are doing or why they are there. This lack of transparency isn't a security problem by itself, but it is a contributing factor. When a worm infects a computer, the user often does not notice it, because even though there's a lot of activity as the worm probes the network for vulnerable hosts, none of this is visible in the user interface. And as long as the user is unaware of the problem, he/she doesn't have any chance of fixing it, or at least limiting the damage (by disconnecting from the network, for instance).

We know we will never achieve perfect security. Transparency is helpful, then, because it makes problems easier to detect and fix. It's sort of like first aid training, in that it gives users the ability to handle some problems on their own, even if they cannot fix them completely.
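
As a small illustration of the kind of visibility being asked for here, this Python sketch lists which processes currently hold network connections; it assumes the third-party psutil package and may need elevated privileges on some systems.

    import collections
    import psutil

    # Count open internet connections per owning process.
    conns_per_pid = collections.Counter(
        c.pid for c in psutil.net_connections(kind="inet") if c.pid
    )
    for pid, count in conns_per_pid.most_common():
        try:
            name = psutil.Process(pid).name()
        except psutil.NoSuchProcess:
            continue                      # process exited since we sampled
        print(f"{name} (pid {pid}): {count} connection(s)")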

Jameel Alsalam - I think that transparency would be helpful, but I also think that there are several factors in the computer industry that make it hard to achieve: 1) technology and software change so quickly that hardware has to be very flexible to accommodate all the new advances that will happen over the (short) life of a computer system, and it is therefore probably not worth it for users to put a lot of time into understanding the subtleties of their computer, since those subtleties might soon change; 2) people buy based on capabilities, not transparency: quite often it is a desired feature to not have to understand what is going on; and 3) different levels of expertise would probably want different levels of detail (so how do you design software that can accommodate the needs of all skill levels, is easy enough for the novice, and is also not buggy and on the cutting edge of capabilities? It's a lot to ask for).

Maybe what we need is to just wait for the relentless improvement of computers to slow so that computers could be bought more like cars are (where all the software could be pre-installed "options" like sunroofs and power windows). I bet first aid would become a lot harder if people were always attaching new appendages to themselves.

Keunwoo Lee 14:47, 9 October 2005 (PDT): One further issue with "transparency" is that with current operating system architectures, a "root exploit" gives the attacker the ability to compromise everything on the system, even the auditing tools. Standard "root kits" for Unix-based systems, for example, replace the standard ps command (which lists running processes, roughly analogous to the Windows task manager) with a fake one that conceals the presence of the exploit.

Partial solutions to this problem have been proposed, but it's currently an open problem in OS research to devise an auditing system that's robust to root compromises and also scales to realistic applications and operating systems.

For example, you can run the entire OS inside a virtual machine, and observe it from the "host" system. Think of a virtual machine as an "isolation chamber" inside the computer, within which there's another "virtual computer", which is running the end-user OS. Compromises to the end-user OS wouldn't compromise the entire machine. However the problem with this is that virtual machines --- at least as they are built today --- have a very low-level interface to the inner OS, and so it becomes difficult to observe what's going on at a semantic level. In other words, the VM sees things like "the OS requested memory location 0x29F08290" or "the OS read disk block 283282", but the event we're really interested in occurs at a higher level: "process X was activated, and it is not on the whitelist". For the auditing program to understand this sort of thing, it would have to possess an intimate knowledge of the inner OS's workings. I'm not aware of any projects that have managed to pull this off.

Asad Jawahar (1) The idea of running an entire OS in a virtual machine on a secure host is interesting, but it will only mitigate some threats. I think you can prevent denial of service with this approach, but you are still vulnerable. For example, what is the guarantee of security of data in the 'sandboxed' process? If the virtual machine is compromised then you may cause information disclosure, which is probably worse than denial of service.

(2)A counter argument to transparency could be privacy. Any thoughts on that?

Keunwoo Lee 17:13, 12 October 2005 (PDT): I was suggesting that VMs could be used to guarantee the integrity of audit data in the presence of root compromises, and that's all. The "boxed" OS would still be vulnerable to all the compromises that it would ordinarily be subject to, and all the data inside the sandbox could still get hosed. However, since the audit process would run outside the VM, the integrity of the audit data would remain uncompromised, permitting monitoring and forensics.

(I can't remember whether this proposal was published somewhere, or it just came up in some UW-CSE seminar in years past.)

It's true that compromises of the VM could cause information disclosure. However, root compromises cause information disclosure anyway. VM compromises won't be a major problem unless/until the VM gets as easy to break as the inner OS, and that's unlikely for a variety of reasons. (BTW, I'm talking here about VMs like Denali, not VMs like the Java Virtual Machine.)

Re: (2), privacy and transparency are indeed in conflict. For example, fundamentally, you cannot both:

  • provide every user with a listing of all the processes running on a machine, and
  • permit some user to run a program in secret.

Usually, however, there's some entity --- for example, the owner/operator of the physical machine --- whom the users must trust. One could imagine building some sort of scheme whereby audit information is only made visible to that entity.

Cmckenzie: Transparency as discussed here may only be helpful to those already well versed in at least the basics of computing and networks. The users who always click yes are probably unfazed by unauthorised activity on their machines. They won't even realise the implications of it. As far as privacy, I can't see it being an issue. What right to privacy (as far as not being identified, detected, etc.) do you have on someone else's property?

Lecture 6

Professor Voelker, you mentioned the “Black Unicorn.” Can you provide some more background on this individual?

Stefan Savage (UCSD) A.S.L. von Bernhardi's bio at USENIX Security: Black Unicorn has served as a "Big 5" consultant, an entrepreneur, an intelligence professional, a banker, a lobbyist, and a sometime cypherpunk. A survey of his recent work includes modeling narcotics smuggling and money laundering dynamics, a study of concepts of money throughout history, and research into the behavioral economics of black markets. He is currently at work developing political risk-hedging methodologies for foreign exchange markets. 2003 marks the 10-year anniversary of the pseudonym "Black Unicorn."


Professor Voelker, can you distinguish “integrity” from “authentication” for me a bit more – does authentication allow us to ensure integrity or …? Is it that “confidentiality” and “integrity” are what we are striving for and that “authentication” and “non-repudiation” allow us to achieve those goals?

Stefan Savage (UCSD) Integrity is the property that data has not been modified in transit (i.e. that you have strong evidence that the message you receive is the same as the one which was sent). Authenticity is quite different. It reflects the property that a particular principal (e.g. user) sent you this data. Usually you'd want to combine both properties in a protocol.

Professor Voelker, during your presentation you stated several times that the knowledge needed for carrying out various computer/cyber attacks is relatively low. How low – that is, is this something we’d expect non-university-educated persons living in the Middle East or the remote regions of Afghanistan and Pakistan to be able to do (assuming they had the equipment and some basic instruction/manuals)? Do we know of any incidents of al Qaeda, or organizations loosely affiliated with it, attempting such cyber attacks? Do advanced states, such as the US and the EU countries, have the ability to easily crash another state’s internet if they choose to do so – is that a weapon the US can and would likely employ if it were to engage in inter-state conflict in the future?

Stefan Savage (UCSD) Simple cyberattacks require minimal knowledge... basically enough to surf the Web and use a compiler. More advanced attacks may require some CS education. I am not aware of any published instances of terrorist organizations using cyberattacks against the US. Many countries have active offensive cyber attack capabilities (including the US) and it would be safe to assume they are available where appropriate.

Could someone with a computer science background please roughly label and order the types of cyber attacks according to how difficult they are to develop and how devastating they are likely to be with respect to one another (e.g. Trojan horse, worm, shell, etc.)?

Stefan Savage (UCSD) I don't think there is an easy labelling here... moreover, I think any such labelling would be both out-of-date and illusory, since all of these attacks can be and are combined to create different capabilities.

Professor Voelker, you mentioned that forensic tracing on the internet is extremely difficult. When al Zarwaqawi posts warnings and/or takes credit for attacks that have been carried out (such as he did with respect to attacks on the Kearsarge and Ashland), is there any way to approximate or glean his location – how much and what kinds of knowledge, if any, can we get from such postings?

Stefan Savage (UCSD) Generally speaking, no. First, understand that for many networks there are no transactional logs available (even assuming legal access could be obtained quickly). Second, understand that there are lots of services for anonymizing transactions on the Internet (effectively laundering them through one or more third parties). Finally, one can compromise a machine, or set of machines, and launder a transaction through those machines. Unlike the physical world, the digital world is very bad with respect to evidence, because digital objects are non-unique and eminently forgeable.

Professor Maurer and/or Dean Nacht, in the last couple of days there has been quite a bit of talk regarding a letter from al Zawahiri to al Zarwaqawi. It seems that the former believes that the latter’s brutality has gotten a little out of hand and has asked him to halt extremely violent acts such as beheadings, lest they lose the support of many of the Muslims they are trying to court or at least drum up sympathy from. This is the first I have heard of al Qaeda being concerned with such things. Maybe al Qaeda’s ideology does not put it beyond the “invisible man” scenario after all. Do you have any thoughts on the letter?


SMM: Well, this is obviously evidence.


Professor Maurer and/or Dean Nacht, what do you make of the recent “specific” but “non-credible” threats with respect to the NY subway system? Is there any indication of which agency in the US federal government or which foreign government came across the info (or is it really likely to have come from a raid in Iraq, as has been reported by some news organizations)? Also, do either of you make anything of the fact that the Homeland Security Dept didn’t make too much of it but that the NY City authorities did? Is that a lack-of-communication problem, or are the local authorities responding as they should and the federal government leaving the initial response to the local authorities as one might expect? Should the situation there make us worry, as the post-Katrina fiasco did? It has also been reported that three suspects have been arrested but the arresting authorities have not been identified – any thoughts?

Professor Maurer and/or Dean Nacht, do either of you have a take on the McCain push to limit and clearly establish what is permitted with respect to interrogation? Would that be a good thing, a bad thing, …?


SMM: I haven't seen McCain's stuff, but will comment anyhow. Dershowitz's book on terrorism -- there's a citation in the Lecture 2 "Readings" slides -- tells how the Israelis looked at this same issue in the late 1980s and said there were three choices: establish a "twilight zone" for security agencies outside the law, pretend to obey the law but turn a blind eye (the overwhelming choice of world governments), or establish a legal framework. The first two are pretty clearly hypocritical, and in the American system lead to Congressional howls of protest at twenty-year intervals (remember the police chief in Casablanca? "I'm shocked, shocked to find gambling going on here ..."). This is grossly unfair to the people we send out to defend us, who are later punished either because they acted or because they failed to act, and are second-guessed in either scenario. It's also unfair to the country, because it breeds inaction.

The political problem here is that if somebody acts responsibly by suggesting specific guidelines, every professional politician who is not actually in power at the moment will take cheap shots. This would be o.k. if those same politicians were prepared to follow their own advice -- but we suspect that this is untrue and that the conversation is in fact sliding toward the hypocrisy that the Israelis criticized. Orwell said that politicians should have "the habit of responsibility," i.e. that they should never suggest rules or programs that they are not prepared to implement. In this case, at least, the War on Terrorism depends much less on physical assets than on the quality of our political discourse. Ulp.

There is also a deeper problem. The instinct to take cheap shots is reinforced by anxiety and avoidance -- i.e., it lets us dodge subjects that are confused, distasteful, or ill-defined. Our feelings about torture are deeply felt and surely well-founded, but it is also true that we have no very coherent legal or logical framework for discussing them. In domestic life, the state is not allowed to coerce defendants, but warfare is all about coercion. It seems strange to say that it's OK to shoot people, but you mustn't put them into the "stress positions" that the US Army uses to punish its own people. But if you decide that you accept some degree of coercion, then you must draw new lines. Yet there are no good candidate principles available.

Hopefully, we will find a way to explore intelligence policy later in the course. In the meantime, you can learn a great deal by looking at Dershowitz, Chris Mackey's The Interrogator's War, and the fifth chapter of Alexander Solzhenitsyn's Gulag Archipelago. Various US Army and CIA interrogation manuals are also available on the Web. This would also be an excellent term project for people who want to pursue this.


Professor Maurer made the point often, during his initial lectures on terrorism, that democracy and development should not be viewed as the cure all for terrorism. There is an interesting and well-written article that supports that view in the current edition of Foreign Affairs – F. Gregory Gause III, Can Democracy Stop Terrorism?, FOREIGN AFFAIRS (Sept./Oct. 2005).


MODELING CYBERSECURITY

In order to do policy, we need to find the salient facts about a situation and build models. All disciplines do this, including for example physics. Naive people sometimes argue that the world is more complicated than any model and then advance whatever opinion feels right to them. This is wrong. The only valid objection is that we have not picked the most salient facts, or that our facts are wrong, or that they have no relevance for policy.

The point here is to take Geoff's lecture and think about what facts we can abstract. I do not claim to be an expert, folks, so please say so if the following are not generally true or especially if you can add to the list. But here's something to get started.

1. HOW MUCH SECURITY IS TOO MUCH? For any given level of technology, consumers must choose between security and power/convenience. In a normal world, we let consumers make the choice: they know their preferences and are often best informed about options. This model seems reasonable for, say, corporate IT departments that must decide whether to prevent problems in advance or clean them up afterward. Certainly, it would be perfectly logical for society to decide that after-the-fact cleanup was sometimes the lowest-cost way to proceed.

On the other hand, markets can be imperfect. Consumers may be ignorant or face high transaction costs that make meaningful choice impossible. Alternatively, there may be externalities: for example, consumers may not care if their machines become zombies at 3:00 am, although the rest of us do. For now, these are only hypotheses. Consumer sovereignty could be the best model; any objection should invoke evidence.

2. IS MORE SECURITY DESIRABLE? In the current world, the Wall Street Journal finds it relatively easy to break into Al Qaida laptops, Steve Maurer has no chance of breaking into Ed Lazowska's laptop, and the FBI has to ask a judge's permission to break into Karl Rove's laptop. All of these things sound like good results. Privacy advocates always want to maximize privacy, and privacy engineers understandably take privacy as a working goal. But would we really all be happier if, for example, on-the-fly encryption became a reality?


Chris Fleizach - a turn on this idea is, "Are more security disclosures desirable?" Hackers, researchers, and professors have all based their reputations on churning out new ideas on how to break protection mechanisms. Others have done so by posting exploit code, absolving themselves of responsibility through weasel wording. No one can argue very long anymore about the use of "security through obscurity," but when Al Qaeda operatives only need to compile and run a published program to mount an attack or break into websites, the bar for maintaining homeland security has just been lowered. In one of the previous lectures, I believe Maurer mentioned that US nuclear physicists developing the A-Bomb stopped publishing when they realized the Russians were just using their information to forward their own program. Could we argue the same mentality should be observed today? This has similar ties to what Maurer, again, pointed out earlier: that in the past when terrorists attacked, civil liberties were often curtailed severely. It made people angry, but in essence it worked. Personally, I find this to go against the maxim Benjamin Franklin held, "They who would give up an essential liberty for temporary security, deserve neither liberty nor security." But these may be issues that have to be examined, especially if the "President's Information Technology Advisory Committee" is to be believed about the severity of the problem.
Geoff Voelker -- A related question is whether exploiting software vulnerabilities is a subject that should be taught. Describing how to exploit software is one step, but having students do it as an exercise in a class can be construed as training them to exploit. You'll find strong opinions on both sides. (You should be able to guess where I stand.)

3. DIMINISHING RETURNS. "Trust" without "due diligence" has limited power. But if we tie it to physical security at a few points, things get better. For example, I believe that I have Verisign's correct URL because it came inside a machine in a shrinkwrapped container from Dell. Similarly, I believe Verisign did due diligence because they need reputation to compete.

Of course, I might believe that the current system of security was inadequate. In that case, I should find the lowest cost provider to improve matters. [TAKE NOTE - GENERAL PRINCIPLE!] That provider might be Verisign, in which case I could change their behavior by enacting new liability statutes or making them buy insurance.

I could also decide that Verisign was suffering from market failures. For example, they could have "agency problems" -- i.e., lie to me about the amount of due diligence they've done. This would be analogous to the bank that lies to me about the size of their assets and would have the same solution (regulation). Alternatively, Verisign could be a natural monopolist -- the costs of due diligence get averaged over the number of users, which means that the company with the most users also has the lowest costs. If so, I can't depend on competition. Back to regulation or insurance...

4. RETURN OF THE INVISIBLE MAN. The fact that Microsoft can catch hackers at the bargain rate of $50K per perp has obvious parallels to the use of rewards in terrorism. But is the analogy deeper? Hackers, like terrorists, depend on a broader community of sympathisers to stay invisible. Are hackers more or less invisible than the average terrorist?

5. NAIVE USERS. I am struck by how often the problem is user error. Have we pushed on this as hard as other areas? The natural suspicion is that NSF pays people to develop security code and that this area is now deep into diminishing returns. It might be cheaper and easier to spend some money on general education.

I recently went to a talk where the speaker said he had written a simpler e-mail security system than SSL. He reported that users made 10% fewer security mistakes than with SSL, and 50% fewer if they were briefed on how to use it. My question, of course, is whether they would have done even better if they had been briefed on SSL itself. The point, I think, is that consumers know almost nothing about, say, phishing. So why don't we have public service spots warning them about phishing techniques or announcing golden rules to follow? A single "phishing" episode of CSI might be more valuable than $10m worth of new security code.

Geoff Voelker -- The extent to which consumers know about phishing is an interesting question. I could only hazard a guess that those who have used email extensively for a month learn to delete those emails. One issue with phishing, though, is that the cost is distributed across many people. Lots of money may be stolen in total, but each person may only be hit for $20-$100. How much complaining are people going to do at that small granularity?

6) WHY EUROPE? Geoff says that Europe has more crime and more money spent on defense. If both these facts are true, then the natural assumption is that European criminals have better social support networks. As the US crime writer Raymond Chandler once wrote, "We're a big, rough, wild people. Crime is the price we pay for that. Organized crime is the price we pay for being organized."

Even if this speculation is true, however, it would be necessary but not sufficient. The further question arises why European criminals do not prey on the US, given that the cyber-Atlantic is a negligible barrier. Answering this question would probably place interesting constraints on the nature of cyber crime in general.

Geoff Voelker -- Keep in mind that US credit card companies place restrictions on credit card use outside the US. By default, you usually cannot use your credit card outside the US unless you call ahead of time, tell them when and where you will be, and ask to have the card enabled.

7) INDUSTRIAL SCALE HACKING? Economics is all about scarcity. The main reason that terrorists don't get ordinary nuclear/chemical/bio WMD is that these technologies require huge investments. The question arises, therefore, what terrorists/states can do with 1) ordinary script-kiddie stuff, 2) one or two smart people, and 3) hundreds of smart people. For example, which category does "taking down the Internet" belong to?

Geoff Voelker -- It belongs to category (1). With a relatively unsophisticated background, you can take control of many machines. Alternatively, these days with only a modest amount of money you can buy a large-scale platform of exploited machines -- the work has been done for you. These days people are using the platforms to make money through scams. The risk is there that it could be used for terrorist purposes, but to my knowledge it has not yet been. Which leads to the question why...presumably because terrorist groups do not see it as an effective mechanism for achieving the goals that they want to achieve.

8) VULNERABILITIES AND INCENTIVES. Geoff's observation that 50% of hacking involves basic programming errors (e.g., buffer overflows) suggests that incentives are powerful. In this course, we distinguish between engineering viewpoints ("what can I do if everyone follows instructions, which may be lengthy, burdensome, or complex?") and social science viewpoints ("how can I get people to behave as they ought?"). Geoff's observation suggests that incentive issues are at least co-equal with basic technical challenges. The fact that the financial industry has been able to define threats better than the CS community suggests that different threats do, in fact, receive different priorities.

9) ARE SOME VULNERABILITIES WORSE THAN OTHERS? CERT says that 85% of hacking involves weaknesses other than encryption. But how does this map onto the vulnerabilities we worry about on the Web? For example, you might imagine that encryption takes care of credit card numbers but is more or less irrelevant to protecting web pages. The mapping matters, but what do we know about it?

10) DEALING WITH TRANSACTION COSTS. Geoff notes that Internet Explorer has 100 possible security settings. Presumably, the average consumer is not equipped to pick any of them, so that "none" ends up being the effective default choice. On the other hand, society could change the default to something closer to what we believe consumers would actually pick with zero transaction costs and full information. I imagine that Microsoft would be reluctant to make that decision for the rest of us. But suppose that the country's CS professors announced the "right" judgment and let people follow it if they wanted to? Suppose Congress set a default choice?

11) LESS THAN PERFECT SECURITY. There is presumably a calculus on how good security should be. For example, most secrets have a relatively short shelf life, beyond which it doesn't matter if you expect somebody to crack them five years from now. A few -- how to make H-Bombs -- can last fifty years or more.

12) MARKET TESTS FOR SECURITY. The idea of setting prizes for security is interesting, since it allows you to dial up ordinary "tested by time" confidence for special systems. You would also imagine that this was a good way for computer companies to reassure consumers.

Yi-Kai - There is one social aspect of cybersecurity that we haven't touched on: the hacker culture. For better or for worse, there will always be people who invent hacks, purely out of curiosity, without any criminal intent (this was the original meaning of the word "hack"). A good example is Ken Thompson's exploit involving the C compiler and the login program. It illustrates how a hack can be both an elegant piece of technical know-how, and a potentially destructive criminal act. In this sense, computer hacking is different from other forms of tinkering; it is driven more by curiosity, because software is malleable and computers are inherently multipurpose tools; and it can be beneficial (open-source software) or harmful (worms). So it's very different from tinkering with high explosives, which is almost never a good thing. This makes it a peculiar challenge for cybersecurity.

[Imran Ali] - This is an interesting point, as hackers sometimes get carried away and break the law without actually intending to do so. This has more to do with ignorance of the law than criminal intent. On the other hand, there are also hackers who will devise a 'hack' and then publicize it without actually performing the act itself (thus avoiding breaking the law, if applicable). These types of hackers serve society, as they expose flaws in systems so that we can correct our mistakes and hopefully avoid future attacks.

Grand Challenge in Information Security and Assurance.

--Mark Ihimoyan: I was recently doing some additional reading on the Internet about challenges in computer security today, and I came across this interesting site that I thought I should share and see what other people have to say about it.

It is from the Computing Research Association's Grand Challenge meetings. Grand Challenges meetings seek "out-of-the-box" thinking to expose some of the exciting, deep challenges yet to be met in computing research.

At the conclusion of the conference, the participants identified four challenges worthy of sustained commitments of resources and effort:

1. Eliminate epidemic-style attacks (viruses, worms, email spam) within 10 years;

2. Develop tools and principles that allow construction of large-scale systems for important societal applications -- such as medical records systems -- that are highly trustworthy despite being attractive targets;

3. Develop quantitative information-systems risk management to be at least as good as quantitative financial risk management within the next decade;

4. Give end-users security controls they can understand and privacy they can control for the dynamic, pervasive computing environments of the future.

The slides from the conference can be seen here.

I would like to know if you guys think these challenges, especially the first one on the elimination of epidemic-style attacks (viruses, worms, email spam) within 10 years, are actually achievable.

Geoff Voelker -- I'll point out that Stefan was at this meeting and was instrumental in having this challenge be in the bullet list, and be bullet 1. As a research agenda, we would argue potentially yes. For details on our perspective on how, take a look at our NSF CyberTrust proposal: http://www.cs.ucsd.edu/~savage/papers/CIEDProposal.pdf

Also quite interesting was a point made in one of the slides (slide 9) about the role of security: "Security is like adding brakes to cars. The purpose of brakes is not to stop you: it's to enable you to go fast! Brakes help avoid accidents caused by mechanical failures in other cars, rude drivers and road hazards. Better security should be an enabler for greater freedom and confidence in the Cyber world." I do not think this has been the case in the application of computer security today.

Keunwoo Lee 17:00, 10 October 2005 (PDT): Just to respond to your last point, actually, security technology today does enable new usage models. For example, in theory, running Java and JavaScript in your web browser cannot compromise your machine, because both of these technologies run with limited privileges. This is a design decision motivated by security. Because of this design, you can run random code that you download from strangers off the Internet, which would ordinarily be pretty scary --- you wouldn't run some random binary executable or C program downloaded off the Internet. (Well, people do, but that's another matter.) Now, Java and JavaScript both suffer from design and implementation flaws, but in principle it is precisely those security features which enable this usage model. Other examples:

  • The security features of Unix (or other multi-user operating systems) enable you to share a machine between multiple users.
  • The code signing feature of Windows Update enables you to get Microsoft patches without, say, mail-ordering a physical CD.

Fundamentally, it is the desire for greater functionality that motivates more sophisticated, fine-grained security features. A computer locked in a safe deposit vault and unplugged from the network is pretty secure, but it's also useless. When you take it out of the vault and start using it, you require less blunt security measures.

Do hash collisions imply that digitally signed code/documents are invalid?

--Joe Xavier [MSFT] 10/10 4:37 PM PST: From all the reading I've been doing on this following the lecture, hash collisions make most forms of signing 'untrusted'. Code signing and S/MIME are mostly untrustable if they use a hash algorithm that's not collision-resistant.
From what I understood, code signing for a dll mostly works like this: The dll is run through a hashing function. This hash is then encrypted using a secret key (that you hope isn't compromised). This cipher (the encrypted version of the hash of the dll) now becomes the signature for the dll. You then ship the dll along with the signature. The client machine downloads the dll and the signature. It then uses the public key (for the organization that shipped the dll) to decrypt the signature and recover the hash. It then hashes the dll (using the same hashing algorithm) and compares the two hashes. If they match, the signature is deemed to be 'verified'. Code runs and stuff happens.
If the hash algorithm isn't collision-resistant, then I could potentially devote machine cycles to coming up with a dll that produces the same hash (this is computationally hard but within the realm of possibility) and ship it with a signature that turns out to be valid (since the hashes will match).
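
Here is a minimal sketch of the verification step described above, written in Python with the third-party 'cryptography' package. The file names and the choice of RSA with SHA-256 are illustrative assumptions, not how any particular code-signing format actually packages things.

    # Hedged sketch: verify a detached signature over a dll, assuming RSA/SHA-256.
    # File names (publisher_pubkey.pem, plugin.dll, plugin.sig) are hypothetical.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    public_key = load_pem_public_key(open("publisher_pubkey.pem", "rb").read())
    dll_bytes = open("plugin.dll", "rb").read()
    signature = open("plugin.sig", "rb").read()

    try:
        # verify() re-hashes dll_bytes and checks the signature against it.
        # If the hash in use were not collision-resistant, a second, malicious
        # dll with the same digest would pass this check with the same signature.
        public_key.verify(signature, dll_bytes, padding.PKCS1v15(), hashes.SHA256())
        print("Signature verified -- hashes match.")
    except InvalidSignature:
        print("Signature check failed -- do not run this code.")
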
- Does this imply that every piece of signed code (or digitally signed documents) that was signed using a compromised hashing algorithm is now 'untrusted'?
- If the above is true (even in part) then it appears that the only way to fix it is to abandon the existing hashing function and use one that hasn't been broken as yet. If my understanding of how code signing works is correct, then this requires the client to update its signature verification code.
- Have there been any legal cases where digitally signed documents (S/MIME e-mail for example) have been deemed inadmissible because the hashing algorithm was open to collisions?

Keunwoo Lee 17:07, 10 October 2005 (PDT): A court in NSW, Australia, dismissed a speeding ticket case because the defense was going to argue that the camera used the broken MD5 hash function to verify integrity of the image: [5]. This seems more like an instance of the prosecution being unprepared for this argument, however. I'm not really sure how practical the MD5 attack is, in the sense of enabling the attacker to produce a collision that's also a plausible false document for some real purpose (rather than a random-looking string of bits).

Jim Xiao I guess digitally signed documents using SHA-1 will continue to be trusted for a while (at least another several years), since the break of the hash algorithm is not so fundamental and finding a collision in SHA-1 still needs about 2^69 calculations (about 2,000 times faster than the 2^80 brute-force bound). However, it's time for everybody to begin moving to better hash standards (longer digests, harder to break) like SHA-256, SHA-384 and SHA-512, as Schneier pointed out.
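
To illustrate Jim's point about the migration on the signing side, here is a small sketch using Python's standard hashlib; the message is a made-up placeholder, and deployed verifiers would of course also need updating to expect the longer digest.

    import hashlib

    msg = b"document text to be signed"   # hypothetical content
    # 160-bit SHA-1 digest: collisions findable in roughly 2^69 operations
    print(hashlib.sha1(msg).hexdigest())
    # 256-bit SHA-256 digest: no practical collision attack known
    print(hashlib.sha256(msg).hexdigest())
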

Are security dialog boxes mostly CYA?

--Joe Xavier [MSFT] 10/10 6:31 PM PST: I'm by no stretch a UI expert, but I work on safety-related features on the Outlook e-mail client (spam, phishing, S/MIME, etc.), so I've seen a few hard-to-design security-related features. Designing a security feature that's not hidden deep in the OS is made harder by the fact that most software users work in "just allow me to get my stuff done" mode. Take a simple example like SSL. The technology part is easy: download the cert, check the cert, and determine if it's valid. The hard part is getting the User Experience right.

Fact: The average user doesn't read a dialog box that has more than 2 lines. This is not easily studied in usability tests because users are really getting paid to pay attention.
Fact: If the dialog has a "Don't show me this message again" option, bet on the user selecting this option and hitting 'OK'.
Fact: The average user has been led to believe that systems themselves are secure. It takes a great amount of user education to get users to pay attention to subtle hints and/or dialog boxes that show up repeatedly.
Fact: The best way to get a user's attention is to disable some functionality and force them to take an action each time to enable it. This can also incredibly annoy some users, so provide a not-easily-discoverable way to turn the security feature off.

We worked through all of these issues when designing the anti-phishing feature [6] that shipped as part of Office 2003 SP2 (Service Pack 2). Funnily enough, the term 'phishing', while easily recognized and understood by most techies, isn't commonly understood. This report [7] by the Pew Internet and American Life Project revealed that a whopping 70% of the users surveyed responded they weren't familiar with the term 'phishing'. So we went with 'suspicious' and some explanatory text in the Infobar and used only a single user-dismissable dialog box. The lesson here - it takes a long time for terms and (an understanding of) technology to percolate down even if you have the press shouting from the rooftops.
To me, most informational dialog boxes in a security feature appear to say : "Here's a complicated message you most probably can't understand. Clicking yes may (or may not) cause you to shoot yourself in the foot. Proceed?"

Oh well, I guess something's better than nothing. Security on the average user's machine is just about as secure as the user. This paper [8] by Steve Gribble et al made me laugh out loud (and then delete Kazaa and finally re-image my machine!).

Geoff Voelker -- Note that Steve will be speaking on this very topic on 11/9.

Certificate authority due diligence?

David Dorwin The due diligence of certificate authorities was mentioned in class and in the discussion above. As you might suspect, Verisign has failed at least once. This Microsoft Security Bulletin from 2001 (expand the Technical details section) documents an incident in which Verisign gave two certificates for "Microsoft Corporation" to someone who fraudulently claimed to be a Microsoft employee.

The bulletin also mentions Certificate Revocation Lists (CRLs) and CRL Distribution Points (CDPs), but implies that there's no standard way for distributing these. Does anyone have newer information about such measures being standardized or deployed?

That incident notwithstanding, you would think that Verisign would be extra careful when issuing certificates for large American and well-known international corporations. It's relatively straightforward to create such a list, and if humans are involved in the process, names like Microsoft should set off an alarm for the employees. However, I wonder what type of due diligence is in place for lesser-known companies, especially international ones. Even if they have offices in various countries, could one avoid local checks by requesting a certificate for My Foreign Company at the US site? The domain name system probably makes things even more difficult for certificate authorities to perform due diligence. If the certificate authorities do research on the web, do they go to mycompany.com, mycompany.net, mycompany.co.uk, mycompany.co.jp, etc.? And could one get a fraudulent certificate by creating a fake website at one of these domains?

I would guess that it would be relatively easy for someone to get a fraudulent certificate for a small business or small software house. The impact and potential for deceiving a lot of users would be less, but the damage would not seem insignificant for the business or affected users.


Mark Ihimoyan In responding to your question about standardizing the distribution of CRLs and CDPs, the preferred and standardized approach I know of for determining the revocation status of certificates is the Online Certificate Status Protocol (OCSP). According to the wiki, OCSP was created to overcome the deficiencies of CRLs. Some of the reasons highlighted for the preference of OCSP over CRLs are listed on the wiki. Basically, the implementation is as follows (a rough code sketch follows the steps):

- Alice and Bob have public key certificates issued by Ivan.

- Alice wishes to perform a transaction with Bob and sends him her public key certificate.

- Bob, concerned that Alice's private key may have been compromised, creates an 'OCSP request' that contains a fingerprint of Alice's public key and sends it to Ivan, the Certificate authority (CA).

- Ivan's OCSP responder looks up the revocation status of Alice's certificate (using the fingerprint Bob created as a key) in his own CA database. If Alice's private key had been compromised, this is the only trusted location at which the fact would be recorded.

- Ivan's OCSP responder confirms that Alice's certificate is still OK, and returns a signed, successful 'OCSP response' to Bob.

- Bob cryptographically verifies the signed response (He has Ivan's public key on-hand -- Ivan is a trusted responder) and ensures that it was produced recently.

- Bob completes the transaction with Alice.
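
For the curious, here is a hedged sketch of what Bob's side of this exchange might look like in Python, using the third-party 'cryptography' and 'requests' packages. The certificate file names and the responder URL are hypothetical, and a real client would also verify the responder's signature and the response's freshness.

    import requests
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    # Alice's certificate and Ivan's (the CA's) certificate -- hypothetical files.
    alice_cert = x509.load_pem_x509_certificate(open("alice.pem", "rb").read())
    issuer_cert = x509.load_pem_x509_certificate(open("ivan_ca.pem", "rb").read())

    # Bob builds an OCSP request containing a fingerprint of Alice's certificate.
    request = ocsp.OCSPRequestBuilder().add_certificate(
        alice_cert, issuer_cert, hashes.SHA1()).build()

    # ...sends it to Ivan's OCSP responder (hypothetical URL)...
    raw = requests.post(
        "http://ocsp.ivan-ca.example/",
        data=request.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"}).content

    # ...and reads back the signed status: GOOD, REVOKED, or UNKNOWN.
    # (A real client must also check the responder's signature and timestamps.)
    response = ocsp.load_der_ocsp_response(raw)
    print(response.certificate_status)
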


DOS Attacks

--Manish Mittal: How can cryptography help to guard against DOS attacks? It's surprisingly easy to launch a denial-of-service attack on any web service. Each time a client requests a connection to a network service or application, a corresponding server sets aside a portion of its resources to handle the connection. By flooding a target server with connection requests, the finite resources allocated to a specific service can be overwhelmed. The result is a denial of service to valid connection requests, system errors, and possible system crashes. Such attacks are commonly referred to as flooding attacks.

If the client requests are invalid, they can be discarded by putting proper checks in the system; however, if the requests are valid, it becomes a tough problem.

I know that the FBI enforces laws against this kind of attack. There are also ways to filter the IPs from which flood requests are coming (a rough sketch appears at the end of this post), but attackers can get around that by launching distributed DOS attacks from many sources.

In the Passport authentication system, we have started using HIPs (Human Interactive Proofs). That has reduced the fake Hotmail accounts and DOS attacks substantially. Are there other ways to protect against these attacks?
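
As a concrete illustration of the IP filtering mentioned above, here is a minimal sketch of a per-IP token-bucket rate limiter in Python. The thresholds are made-up, and, as noted, this does little against a distributed attack in which each source stays under the limit.

    import time
    from collections import defaultdict

    RATE = 5.0    # tokens replenished per second, per IP (illustrative)
    BURST = 20.0  # maximum bucket size (illustrative)

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(ip: str) -> bool:
        """Return True if this connection attempt should be accepted."""
        b = buckets[ip]
        now = time.monotonic()
        # Refill the bucket in proportion to elapsed time, capped at BURST.
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True   # accept and allocate server resources
        return False      # drop: this IP is flooding us
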

--Joe Xavier [MSFT]: There are a number of efforts to develop systems to detect and blacklist IPs. Where I've seen this mostly discussed is around spam. For example, large email service providers like Hotmail have an endemic problem in trying to stem the flow of spam. Given the volume of mail, it's smarter to detect 'bad' IPs and reject connections instead of bearing the brunt of receiving the mail, scanning it, and then junking it. For example, Hotmail receives e-mail on the order of a few billion messages a day. Similar systems are applicable to DOS attacks.
About the usefulness of HIPs - this is merely a band-aid approach to automated signups. Numerous attempts at defeating HIPs are showing very promising results. This paper [9], "Computers beat Humans at Single Character Recognition in Reading based Human Interaction Proofs (HIPs)", presented at CEAS this year, is an interesting read. I also happened to talk to someone at the conference about outfits that hire people in China to solve HIPs.