Talk:Lecture 6


SHA

--Jeff Davis: Bruce Schneier, author of the authoritative work on cryptography, Applied Cryptography, has a blog where he posts about many things we will most likely be discussing in this class. I bring him up because he recently posted about a team of Chinese researchers who have broken SHA-1, here and here.

Authenticode Dialog in Microsoft Internet Explorer

--Jeff Davis: I work on the Browser UI team for Internet Explorer, and for Windows XP Service Pack 2 I actually did some work on the Authenticode dialog that was the topic of a few slides tonight. I wanted to share a few things:

  • The screen shots in the slides are of the old pre-XPSP2 dialog.
  • The old dialog says something like "Do you want to install and run '%s'..." where the %s is replaced with the name of the program. The dialog did not impose strict enough limits on the name of the control being downloaded, so unscrupulous companies titled their programs in clever ways, e.g. "Click YES to automatically get blah blah blah..." (see the sketch after this list). This social engineering attack was ridiculously successful, even though to us CS-types it was obviously sketchy.
    • For XPSP2 we moved the dialog elements around and started imposing strict limits.
    • This goes back to trusting the root authority to do the right thing. Verisign could have required reasonable text for this field, but they did not. This is actually quite understandable because the companies that author these questionable controls tend to be a fairly litigious bunch.
  • There are various levels of warnings in the authenticode dialogs now. For example, if everything seems kosher you get a fairly limited warning dialog. If there are just a few things wrong, like the cert expired, you get a slightly more alarming (visually and textually) warning. If someone is trying to install version 3 of a control that has version 2 already installed on your machine and the certificates are signed by different entities, we block it outright.
  • And everyone knows nobody reads dialogs anyway. Most people will say 'yes' to anything while browsing the web if they think they will get something free out of it.
  • I have been on customer visits and seen usability studies where we tell people they are installing spyware and they don't care. They don't care that someone might be able to use their computer to relay spam or DoS attacks, as long as they can play the game they want, or get the cool theming effect.
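Here is a minimal sketch in Python (made-up strings and function names, obviously not the real IE code) of why the unconstrained name was so dangerous:

    # Sketch of the pre-XPSP2 behavior: the control's name was substituted
    # into the prompt with no meaningful limits on length or content.
    def authenticode_prompt(program_name):
        return "Do you want to install and run '%s'?" % program_name

    # An honest publisher:
    print(authenticode_prompt("Contoso Photo Viewer 2.0"))

    # An unscrupulous one injects text that reads like part of the dialog:
    print(authenticode_prompt(
        "FREE Smileys! Click YES to automatically get cool new themes"))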

So we see there are a lot of factors here beyond just the computer science. The human factor and the threat-of-litigation factor are huge.

--David Coleman: This was the topic of a major slashdot (http://www.slashdot.org) rant not too long ago. The conversation was in response to something from Bruce Schneier saying essentially that code signing was useless for computer security. What every single responder forgot was the threat of a virus being distributed as a program (driver, update, etc.) from a given vendor that you would trust. That was a major concern several years ago. So it's true that this is not a solution to the overall security problem (which all the anti-Microsoft folks seem to like to bash it for), but it does mitigate a very real threat. It has all the problems listed above, but it is certainly better than nothing for ensuring that the bits you are about to run or install really are from the source you think they are (allowing the trust relationship to take place) and haven't been modified en route.
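A minimal sketch of that mechanism, using the third-party Python cryptography package (RSA with PKCS#1 v1.5 and SHA-256 is an illustrative choice here, not the actual Authenticode spec):

    # Sketch: sign a binary's bytes, then verify them before installing.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    program_bytes = b"...the installer the vendor actually shipped..."

    # The vendor signs the exact bits it ships.
    signature = vendor_key.sign(program_bytes, padding.PKCS1v15(), hashes.SHA256())

    # The client verifies with the vendor's public key (vouched for by a CA).
    downloaded = program_bytes  # imagine these bytes arrived over the network
    try:
        vendor_key.public_key().verify(
            signature, downloaded, padding.PKCS1v15(), hashes.SHA256())
        print("Signature OK: bits came from the vendor and were not modified")
    except InvalidSignature:
        print("Signature check failed: do not install")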

Yet another reason why passwords are on the way out...

--Gmusick 10:01, 6 Oct 2005 (PDT) I ran across this article on slashdot a while back about some experiments out of Berkeley on recovering typed characters by monitoring the sounds emanating from the keyboard. In the article [1], Zhuang, Zhou, and Tygar claim they can get 96% accuracy on keystrokes and break 80% of 10-character passwords in fewer than 75 attempts.

Now a three-attempt lockout will mostly foil this technique, but these attacks are probably going to get more refined and more tolerant of random noise. So eventually you could imagine gathering your co-workers' passwords with a run-of-the-mill tape recorder sitting on your desk.
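For intuition, here is a toy sketch of the general idea in Python (far simpler than the paper's actual method; all names and data here are hypothetical): each key has a slightly different acoustic signature, so spectral features of a recorded keystroke can be matched against labeled examples.

    # Toy illustration only: match the spectrum of a recorded keystroke
    # against labeled example recordings of known keys.
    import numpy as np

    def spectral_features(samples):
        # Normalized magnitude spectrum of one keystroke's audio clip.
        spectrum = np.abs(np.fft.rfft(samples))
        return spectrum / (np.linalg.norm(spectrum) + 1e-12)

    def classify_keystroke(stroke, labeled_examples):
        # labeled_examples maps a key label to an equal-length audio clip;
        # the closest spectral match (by cosine similarity) wins.
        feats = spectral_features(stroke)
        return max(labeled_examples, key=lambda k: float(
            np.dot(feats, spectral_features(labeled_examples[k]))))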

--Jeff Davis: I can't find the paper, but last year I went to a talk at MS given by Adi Shamir where he presented his work on breaking RSA by simply listening to the sounds the machine made during the encryption process. They used a karaoke microphone one of his children had. When they analyzed the sound spectrum they saw distinct changes at different phases of the process. They traced it back to a capacitor in the power supply that emitted a different 'whine' as the power consumption of the CPU changed. He said he was confident that with a very good microphone they could recover the private key eventually...

Changing Paradigms

Chris Fleizach - The good thing about technology is that whatever people can dream, they can build (well, almost). A main problem of conventional communication is dealing with someone who can intercept messages (often called the man-in-the-middle attack). Recent advances in quantum manipulation have led to commercial systems that guarantee no one else looks at your data. If they try, the quantum nature of some of the data being sent is altered, alerting both parties to the problem immediately. This field is called Quantum Cryptography, and MagiQ is one of the first providers of such a system. The lecture tonight reiterated the commonly held belief that no system can ever be completely secure, but can we build systems that exploit fundamental properties of nature to actually achieve this? Attackers work on the belief that the targets have relied on some assumption that is false. What if our assumption is that the laws of physics hold true all of the time (presumably, they do)? Will there be ways around this (besides exploiting human fallibility)?

Yi-Kai - Quantum cryptography does provide some fundamental new capabilities, but it still suffers from "implementation bugs." For instance, its security depends on the ability to emit and receive single photons; but in real implementations, the emitter sometimes produces multiple photons (allowing information to leak), and the detector has limited efficiency (introducing errors which can hide an eavesdropper). A lot of work goes into building better hardware to close these loopholes. Even so, this only protects against eavesdropping on the optical fiber; there are always side-channels to worry about, like having information leak via fluctuations in the power supply, though you could surely fix these problems through heroic effort.

I think it is possible to achieve security based on the laws of physics, in principle, but probably not in practice.

Cryptosystem secure to "a high probability"

--Parvez Anandam 01:34, 6 Oct 2005 (PDT) The RSA cryptosystem, on which e-commerce is based, relies on the fact that it is difficult to factor a large number quickly. No one has proved that fast factoring is impossible; we just know it hasn't been done yet. The RSA cryptosystem is secure to a high probability, the thinking goes.
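A toy illustration of that dependence, with absurdly small primes (real moduli are hundreds of digits long): anyone who can factor n can recompute the private key.

    # Textbook RSA with tiny primes, to show why fast factoring would be fatal.
    p, q = 61, 53            # the secret primes; real keys use ~1024-bit primes
    n = p * q                # public modulus (3233)
    e = 17                   # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

    ciphertext = pow(42, e, n)           # encrypt the message 42
    assert pow(ciphertext, d, n) == 42   # decrypt with the private key

    # The attacker's entire problem is this loop, at 300+ digits instead of 4:
    def factor(m):
        f = 2
        while m % f:
            f += 1
        return f, m // f

    p2, q2 = factor(n)
    d2 = pow(e, -1, (p2 - 1) * (q2 - 1))   # private key recovered
    assert pow(ciphertext, d2, n) == 42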

This raises the question: what is the probability that some very clever person will come up with an algorithm to factor numbers in polynomial time? (I don't mean quantum computing fuzziness, but something you could implement today on a good old-fashioned digital computer.)

That probability is close to zero, you say. At the beginning of the previous decade, those were likely the odds given for proving Fermat's Last Theorem. The probability of the latter is now 1: it has been proved.

It doesn't take an evil-doer with selfish motives to solve the factoring problem. A university professor, working to advance human knowledge, is the most likely candidate for such a discovery. The moment she publishes that result, however, our world changes.

We have no way of quantifying the likelihood of scientific progress in a certain direction. It therefore seems imprudent to rely on a cryptosystem that is based not on a solid mathematical proof but merely on a conjecture that it is hard to crack.


--Gmusick 10:01, 6 Oct 2005 (PDT) Not really. As the prof noted, you should always try to do the best you can with what you've got. The real problem in this case is not that somebody might crack the system by doing polynomial time factoring, but that we might not detect they had cracked the system. If we know they have cracked the system, then we can guard against it while we switch to a different encryption scheme that doesn't rely on factoring. And when they crack that, we'll switch again. And it will cost money, but the question is how much?

I think one of the overriding lessons from this course is going to be that we must start to think like insurance adjusters and accept that we are never going to stop bad things from happening. Instead we are going to have to analyze what using imperfect technology in a given application will cost, in terms of some value unit (lives, money, time). Not doing that analysis is the imprudent part.

--Chris DuPuis 13:51, 6 Oct 2005 (PDT) Unfortunately, the idea that we'll "switch to a different encryption scheme" is based on the assumption that the data you encrypt today will be worthless tomorrow. If an attacker just snoops on all of your network traffic and records it, he will be able to decrypt it at some point in the future when algorithms, processors, and networked machines give him sufficient computing power to do so. As long as your data becomes worthless in a reasonably short time frame (a few years, maybe), encryption is a good protection. If the disclosure of your data would be harmful even many years into the future, encryption of any kind is probably insufficient protection.


--Gmusick 14:46, 6 Oct 2005 (PDT) Touché, sort of. Although I'm not assuming that current data will be "worthless" so much as I'm assuming it will be "worth less". I'm of the opinion that most of our "personal" data loses value over time. I may be disabused of that notion as the course progresses, though.

We are in agreement that encryption by itself is insufficient. Intrusion detection is at least as important as the choice of encryption algorithm. Having personal data snooped for a few people before we cut off the bad guy is a tragedy for those people, but having it snooped for hundreds of thousands is a calamity.

--Jeff Davis 17:30, 6 Oct 2005 (PDT) Well, there are other choices. Take the Amazon example: if you record my SSL transaction, eventually you will be able to brute-force it. However, my credit card will probably have expired by then. Furthermore, I can generate a one-time-use credit card number and use that, and only Amex knows the mapping back to my real account number.

Encryption is like a bike lock. Will it stop someone who is sufficiently motivated? No. But it does stop 99% of the people walking by with just a little bit of larceny in their hearts. Perfect security is impossible, so we raise the bar as high as is possible and reasonable, and then we rely on law enforcement to clean up the scum. There are other uses for encryption besides the Amazon case, but I believe that if we think them through, none of them are good reasons to worry:

  • If I use encryption as part of illegal activities, I never want it to become public. Of course, we do not care about protecting criminals.
  • If I use encryption to protect business secrets, I never want my competitors to discover them. Eventually they will, and probably without breaking the encryption. Employees leave, patents get filed; no trade secret lasts forever.
  • If I use encryption as part of my activities to overthrow an illegitimate or repressive government, I never want the government to know. I suggest that if by the time they break the encryption you have not succeeded in overthrowing them, you are probably doomed anyway.

PKI Security Dialog from IE

--Dennis Galvin 01:35, 6 Oct 2005 (PDT) The article Why Johnny Can't Encrypt (even if very dated) brought up many germane points about usability. In that vein, the Internet Explorer dialog box on slide 41 from the lecture is certainly less than clear with its use of graphics to represent the threat:

  • Yellow caution sign by the line saying "The security certificate is from a trusted certifying authority,"
  • Green checkmark by the line indicating the error "The name on the security certificate is invalid...."

OK, software is not perfect, but this is an excellent example of the confusing use of graphics. It does not inspire confidence that the software is correct, nor does it encourage the user to contact the webmaster of the site with the invalid security certificate. For the record, the Firefox browser has confusing security dialogs as well, though this may have been corrected in the latest security release. "jeffdav" made an earlier comment about there being a lot of factors beyond just the computer science. Most users confronted with such a dialog will click through it anyway, as the earlier post pointed out, probably muttering something under their breath about not understanding computers. Usability may be one of those things beyond computer science, but it needs to be factored heavily into GUI design.

Eavesdropping on computers

--Dennis Galvin 08:08, 6 Oct 2005 (PDT)

The lecture showed the two slides demonstrating that sensed reflected light could be used to reconstruct, with a high degree of accuracy, what is on the screen. I wonder how well it might do at reading 10-point or smaller type. News articles in the past few weeks have played up an upcoming paper by three UC Berkeley researchers on using a $10 microphone and signal processing to grab keystrokes with 96% accuracy (http://www.cnn.com/2005/TECH/internet/09/21/keyboard.sniffing.ap/). There was an earlier paper along the same lines from IBM Almaden, in which Asonov and Agrawal used a microphone and a neural network to suss out keystrokes from keyboard sound emissions with somewhat less accuracy (http://www.almaden.ibm.com/software/quest/Publications/papers/ssp04.pdf). What a difference a bit of time can make. The upshot here is that even if sensitive information is immediately encrypted and only stored in encrypted form, the combination of these two technical advances may make the encryption immaterial.

--Liebling 11:10, 6 Oct 2005 (PDT)

Some forms of biometrics can get around the "peeking" problem of recording passwords from visual and aural cues. Putting your thumb on a print reader doesn't make any noise. It doesn't eliminate eavesdropping channels such as capacitor whine, however. Furthermore, biometrics only solve the "authentication" problem, not the authorization and encryption problems. Even then, if the price is high enough, the necessary thumb could be obtained ...

Another advantage of biometric authentication is that the user isn't able to "give up" his "password" for a bar of chocolate (as some 70% of people in the recent Liverpool study did). Each system using biometric authentication can add its own hashing method (discussed last night) to the fingerprint/retinal/etc. input, so that even having the print doesn't necessarily guarantee entry.
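A minimal sketch of that salting idea in Python (illustrative only: it assumes an exact-match template, whereas real biometric readings are fuzzy and need techniques like fuzzy extractors; the secret's name and value are hypothetical):

    # Sketch only: hash the biometric template together with a per-system
    # secret, so a stolen raw fingerprint image is not by itself a credential.
    import hashlib
    import hmac

    SYSTEM_SECRET = b"unique-per-deployment secret"  # hypothetical value

    def enroll(template: bytes) -> bytes:
        # Store only the keyed hash, never the raw template.
        return hmac.new(SYSTEM_SECRET, template, hashlib.sha256).digest()

    def authenticate(template: bytes, stored: bytes) -> bool:
        # Constant-time comparison of the freshly computed hash.
        return hmac.compare_digest(enroll(template), stored)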

--Chris DuPuis 14:06, 6 Oct 2005 (PDT) Biometrics would be much more secure if your fingerprints and retinal patterns were protected from snoopers. Unfortunately, it's easy to get fingerprints off anything you touch. For retinal patterns, how sure are you that someone doesn't have a close-up, high resolution photo of your face? On an insecure network, there is no guarantee that the authentication data that is being sent over the wire originated in an "approved" thumbprint or retinal scanner, and not in a program that reads in its biometric data from JPEG files.

Bruce Schneier (gee, his name sure pops up a lot, doesn't it) has written a good bit about biometrics, including this piece: Biometrics: Truths and Fictions


Certificate Authorities

--Jeff Bilger 22:20, 6 Oct 2005 (PDT) During yesterday's class, it was mentioned that Certificate Authorities (CAs) are authenticated by their public keys. A necessary precondition is that these public keys are distributed to a broad audience, and this is achieved by preloading them into web browsers such as Internet Explorer. If this is the case, how does a new CA get its public key distributed to the world? I would argue that if a user had to manually import that public key into their browser, the barrier to entry into the CA marketplace would be way too high. I just checked my IE settings and there are 37 distinct Trusted Root Certification Authorities, while there are only 29 defined in my version of Firefox. If I wanted to start my own CA tomorrow, how could I guarantee that when people purchased SSL certificates from me, visitors to SSL-protected areas of their sites would not see warning messages regarding trust?
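Incidentally, you can poke at a trust store programmatically. A sketch using Python's ssl module, which reads the platform's default CA store (not IE's or Firefox's own store, so the count won't match the browser numbers above exactly):

    # Sketch: enumerate the root CAs the local TLS stack trusts.
    import ssl

    ctx = ssl.create_default_context()  # loads the platform's default CA certs
    roots = ctx.get_ca_certs()
    print(len(roots), "trusted root certificates")
    for cert in roots[:5]:
        subject = dict(pair[0] for pair in cert["subject"])
        print(" -", subject.get("organizationName", subject.get("commonName")))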

Andrew Cencini - I would think that you would work with Microsoft to get your public key included in the next Windows update / service pack that goes out. This is just a guess, but I'd imagine at the minimum there'd be a little lead time involved with all of that. I think someone in this class worked on IE, so perhaps they may know better!

Jeff Davis: I checked with some people on the team. If you want to get your stuff added, there is an alias (cert@microsoft.com?) you can e-mail, and you just have to sign some sort of agreement; they push it out via windowsupdate.com.

Can computers operate more transparently?

Yi-Kai - I'm fairly sure this is not an original idea, but I'm curious what ideas people might have along these lines.

One thing about modern operating systems is that they're not very "transparent." It's hard to see what the computer is really doing -- for instance, if you open the Task Manager in Windows, you see about 20 processes, but for most of them, you can't tell what they are doing or why they are there. This lack of transparency isn't a security problem by itself, but it is a contributing factor. When a worm infects a computer, the user often does not notice it, because even though there's a lot of activity as the worm probes the network for vulnerable hosts, none of this is visible in the user interface. And as long as the user is unaware of the problem, he/she doesn't have any chance of fixing it, or at least limiting the damage (by disconnecting from the network, for instance).
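As a rough sketch of the kind of visibility I mean, here are a few lines of Python using the third-party psutil package (illustrative, not a real monitoring tool): they list which processes hold network connections, so worm-like scanning activity would at least be visible.

    # Sketch: show which processes have open network connections.
    import psutil  # third-party package

    for proc in psutil.process_iter(["pid", "name"]):
        try:
            conns = proc.connections(kind="inet")
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
        if conns:
            print(proc.info["pid"], proc.info["name"],
                  "-", len(conns), "open connection(s)")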

We know we will never achieve perfect security. Transparency is helpful, then, because it makes problems easier to detect and fix. It's sort of like first aid training, in that it gives users the ability to handle some problems on their own, even if they cannot fix them completely.

Lecture 6

Professor Voelker, you mentioned the "Black Unicorn"; can you provide some more background on this individual?

Professor Voelker, can you distinguish “integrity” from “authentication” for me a bit more – does authentication allow us to ensure integrity or …? Is it that “confidentiality” and “integrity” are what we are striving for and that “authentication” and “non-repudiation” allow us to achieve those goals?

Professor Voelker, during your presentation you stated several times that the knowledge needed for carrying out various computer/cyber attacks is relatively low. How low – that is, is this something we'd expect non-university-educated persons living in the Middle East or the remote regions of Afghanistan and Pakistan to be able to do (assuming they had the equipment and some basic instruction/manuals)? Do we know of any incidents of al Qaeda, or organizations loosely affiliated with it, attempting such cyber attacks? Do advanced states, such as the US and the EU countries, have the ability to easily crash another state's internet if they choose to do so – and is that a weapon the US can and would likely employ if it were to engage in inter-state conflict in the future?

Could someone with a computer science background please roughly label and order the types of cyber attacks according to how difficult they are to develop and how devastating they are likely to be with respect to one another (e.g. Trojan horse, worm, shell, etc.)?

Professor Voelker, you mentioned that forensic tracing on the internet is extremely difficult. When al-Zarqawi posts warnings and/or takes credit for attacks that have been carried out (as he did with respect to the attacks on the Kearsarge and Ashland), is there any way to approximate or glean his location – how much and what kinds of knowledge, if any, can we get from such postings?

Professor Maurer and/or Dean Nacht, in the last couple of days there has been quite a bit of talk regarding a letter from al-Zawahiri to al-Zarqawi. It seems that the former believes that the latter's brutality has gotten a little out of hand and has asked him to halt extremely violent acts such as beheadings, lest they lose the support of many of the Muslims they are trying to court, or at least drum up sympathy from. This is the first I have heard of al Qaeda being concerned with such things. Maybe al Qaeda's ideology does not put it beyond the "invisible man" scenario after all. Do you have any thoughts on the letter?

Professor Maurer and/or Dean Nacht, what do you make of the recent "specific" but "non-credible" threats with respect to the NY subway system? Is there any indication of which agency in the US federal government or which foreign government came across the info (or is it really likely to have come from a raid in Iraq, as has been reported by some news organizations)? Also, do either of you make anything of the fact that the Homeland Security Dept didn't make too much of it but the NY City authorities did? Is that a lack-of-communication problem, or are the local authorities responding as they should, with the federal government leaving the initial response to the local authorities as one might expect? Should the situation there worry us, as the post-Katrina fiasco did? It has also been reported that three suspects have been arrested, but the arresting authorities have not been identified – any thoughts?

Professor Maurer and/or Dean Nacht, do either of you have a take on the McCain push to limit and clearly establish what is permitted with respect to interrogation? Would that be a good thing, a bad thing, …?

Professor Maurer made the point often, during his initial lectures on terrorism, that democracy and development should not be viewed as the cure-all for terrorism. There is an interesting and well-written article supporting that view in the current edition of Foreign Affairs: F. Gregory Gause III, Can Democracy Stop Terrorism?, FOREIGN AFFAIRS (Sept./Oct. 2005).