Talk:Lecture 6
Revision as of 00:26, 7 October 2005

SHA

--Jeff Davis: Bruce Schneier (http://www.schneier.com/), author of the authoritative work on cryptography, Applied Cryptography (http://www.schneier.com/book-applied.html), has a blog (http://www.schneier.com/blog/) where he posts about many things we will most likely be discussing in this class. I bring him up because he recently posted about a team of Chinese researchers who have broken SHA-1, here (http://www.schneier.com/blog/archives/2005/08/new_cryptanalyt.html) and here (http://www.schneier.com/blog/archives/2005/08/chinese_cryptog.html).
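
To make concrete what "broken" means here, the sketch below uses Python's hashlib. The messages are invented; the point is that anything which signs or verifies only a SHA-1 digest is only as strong as SHA-1's collision resistance, and the research Schneier describes lowers the cost of finding a colliding pair well below the roughly 2^80 operations a brute-force birthday search would need.

  import hashlib

  def sha1_hex(data: bytes) -> str:
      """Return the SHA-1 digest of data as a hex string."""
      return hashlib.sha1(data).hexdigest()

  # These two messages hash differently, as expected...
  print(sha1_hex(b"wire $100 to Alice"))
  print(sha1_hex(b"wire $999 to Mallory"))
  # ...but a collision attack finds SOME pair of distinct inputs with
  # equal digests, which breaks signatures made over the digest alone.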

Authenticode Dialog in Microsoft Internet Explorer

jeffdav: I work on the Browser UI team for Internet Explorer, and for Windows XP Service Pack 2 I did some work on the Authenticode dialog that was the topic of a few slides tonight. I wanted to share a few things:

  • The screen shots in the slides are of the old pre-XPSP2 dialog.
  • The old dialog says something like "Do you want to install and run '%s'..." where the %s is replaced with the name of their program. The dialog did not impose strict enough limits on the name of the control being downloaded, so unscrupulous companies titled their programs in clever ways, e.g. "Click YES to automatically get blah blah blah..." This social-engineering attack was ridiculously successful, even though to us CS types it was obviously sketchy.
    • For XPSP2 we moved the dialog elements around and started imposing strict limits (a sketch of that kind of limit follows this list).
    • This goes back to trusting the root authority to do the right thing. Verisign could have required reasonable text for this field, but they did not. This is actually quite understandable because the companies that author these questionable controls tend to be a fairly litigious bunch.
  • There are various levels of warnings in the authenticode dialogs now. For example, if everything seems kosher you get a fairly limited warning dialog. If there are just a few things wrong, like the cert expired, you get a slightly more alarming (visually and textually) warning. If someone is trying to install version 3 of a control that has version 2 already installed on your machine and the certificates are signed by different entities, we block it outright.
  • And everyone knows nobody reads dialogs anyway. Most people will say 'yes' to anything while browsing the web if they think they will get something free out of it.
  • I have been on customer visits and seen usability studies where we tell people they are installing spyware and they don't care. They don't care that someone might be able to use their computer to relay spam or DoS attacks, as long as they can play the game they want, or get the cool theming effect.
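
Here is a rough sketch, in Python, of the kind of limit described above. The IE implementation is not public here, so the length cap, the pattern list, and the function name are all invented for illustration:

  import re

  MAX_NAME_LEN = 64  # hypothetical cap on the publisher-supplied name
  BAIT = re.compile(r"click\s+(yes|ok)|install\s+now", re.IGNORECASE)

  def display_name(publisher_name: str) -> str:
      """Clamp and vet a control's name before showing it in a consent dialog."""
      name = publisher_name.strip()[:MAX_NAME_LEN]
      if BAIT.search(name):
          # Don't echo bait text; substitute a neutral label instead.
          return "(name withheld: disallowed content)"
      return name

  print(display_name("Click YES to automatically get free smileys!!!"))
  print(display_name("Contoso Media Player"))

The real fix, as noted above, also involved moving dialog elements so the name can no longer masquerade as the dialog's own instructions.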

So we see there are a lot of factors here beyond just the computer science. The human factor and the threat-of-litigation factor are huge.

Yet another reason why passwords are on the way out...

--Gmusick 10:01, 6 Oct 2005 (PDT) I ran across this article on Slashdot a while back about some experiments out of Berkeley on recovering typed characters by monitoring the sounds emanating from the keyboard. In the article [1], Zhuang, Zhou, and Tygar claim they can recover keystrokes with 96% accuracy and break 80% of 10-character passwords in fewer than 75 attempts.

Now, a three-attempt lockout will mostly foil this technique, but these attacks will probably keep getting more refined and more tolerant of random noise. So eventually you could imagine gathering your co-workers' passwords with a run-of-the-mill tape recorder sitting on your desk.
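
For concreteness, a minimal sketch of the three-attempt lockout idea, with invented names and thresholds; a real system would persist this state and also throttle by source address, not just by account:

  import time

  MAX_ATTEMPTS = 3
  LOCKOUT_SECONDS = 15 * 60

  failed = {}  # account -> (consecutive failures, time of last failure)

  def check_login(account: str, password_ok: bool) -> bool:
      count, last = failed.get(account, (0, 0.0))
      if count >= MAX_ATTEMPTS and time.time() - last < LOCKOUT_SECONDS:
          return False  # locked out: even a correct guess is refused
      if password_ok:
          failed.pop(account, None)  # success resets the counter
          return True
      failed[account] = (count + 1, time.time())
      return False

Against the numbers quoted above (80% of passwords within 75 tries), capping an attacker at three tries per lockout window blunts the attack considerably, though it does nothing if the recording simply captures the correct password outright.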

jeffdav: I can't find the paper, but last year I went to a talk at MS given by Adi Shamir where he presented his work on breaking RSA by simply listening to the sounds the machine made during the encryption process. They used a karaoke microphone one of his children had. When they analyzed the sound spectrum they saw distinct changes at different phases of the process. They traced it back to a capacitor in the power supply that emitted a different 'whine' as the power consumption of the CPU changed. He said he was confident that with a very good microphone they could recover the private key eventually...

Changing Paradigms

Chris Fleizach - The good thing about technology is that whatever people can dream, they can build (well, almost). A main problem of conventional communication is dealing with someone who can intercept messages (often called the man-in-the-middle attack). Recent advances in quantum manipulation have led to commercial systems designed to guarantee that no one can look at your data undetected: if someone tries, the quantum nature of the data being sent is altered, immediately alerting both parties that there is a problem. This field is called quantum cryptography, and MagiQ is one of the first providers of such a system. The lecture tonight reiterated the commonly held belief that no system can ever be completely secure, but can we build systems that exploit fundamental properties of nature to actually achieve this? Attackers work on the belief that their targets have relied on some assumption that is false. What if our assumption is that the laws of physics hold true all of the time (presumably, they do)? Will there be ways around this (besides exploiting human fallibility)?
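
For the curious, the detection guarantee can be illustrated with a toy, purely classical simulation in the spirit of the BB84 protocol (no real quantum states here, and the parameters are invented): an intercept-and-resend eavesdropper unavoidably corrupts about a quarter of the bits the two parties keep, so comparing a sample of them exposes her.

  import random

  def measure(bit: int, prep_basis: int, meas_basis: int) -> int:
      # The one "quantum" rule modeled: measuring in the preparation
      # basis returns the bit; the wrong basis returns a coin flip.
      return bit if meas_basis == prep_basis else random.randint(0, 1)

  def error_rate(n: int, eve_present: bool) -> float:
      errors, kept = 0, 0
      for _ in range(n):
          bit, a_basis = random.randint(0, 1), random.randint(0, 1)
          sent_bit, sent_basis = bit, a_basis
          if eve_present:  # intercept-and-resend attack
              e_basis = random.randint(0, 1)
              sent_bit = measure(sent_bit, sent_basis, e_basis)
              sent_basis = e_basis
          b_basis = random.randint(0, 1)
          result = measure(sent_bit, sent_basis, b_basis)
          if b_basis == a_basis:  # sifting: keep matching-basis rounds
              kept += 1
              errors += result != bit
      return errors / kept

  print(error_rate(100_000, eve_present=False))  # ~0.00: clean channel
  print(error_rate(100_000, eve_present=True))   # ~0.25: Eve is exposed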

Cryptosystem secure to "a high probability"

--Parvez Anandam 01:34, 6 Oct 2005 (PDT) The RSA cryptosystem, on which e-commerce is based, relies on the fact that it is difficult to quickly factor a large number. No one has proved that fast factoring is impossible; we just know it hasn't been done yet. The RSA cryptosystem is therefore secure to a high probability, the thinking goes.
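
To see how directly RSA's security hangs on factoring, here is a textbook-sized toy example in Python (3.8+ for the modular-inverse form of pow). Once p and q are known, deriving the private key is plain arithmetic:

  p, q = 61, 53
  n = p * q                  # public modulus (3233)
  e = 17                     # public exponent
  phi = (p - 1) * (q - 1)    # computable only if you can factor n
  d = pow(e, -1, phi)        # private exponent (2753)

  m = 42                     # a message, encoded as a number < n
  c = pow(m, e, n)           # encrypt with the public key (n, e)
  assert pow(c, d, n) == m   # whoever factors n decrypts at will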

This raises the question: what is the probability that some very clever person will come up with an algorithm to factor numbers in polynomial time? (I don't mean quantum computing fuzziness but something you could implement today on a good old fashioned digital computer.)

That probability is close to zero, you say. At the beginning of the 1990s, the same might have been said of the odds of proving Fermat's Last Theorem. That probability is now 1: it has been proved.

It doesn't take an evildoer with selfish motives to solve the factoring problem. A university professor is the most likely candidate for such a discovery, made to advance human knowledge. The moment she publishes that result, however, our world changes.

We have no way of quantifying the likelihood of scientific progress in a given direction. It therefore seems imprudent to rely on a cryptosystem that is based not on a solid mathematical proof but merely on the conjecture that it is hard to crack.


--Gmusick 10:01, 6 Oct 2005 (PDT) Not really. As the prof noted, you should always try to do the best you can with what you've got. The real problem in this case is not that somebody might crack the system by doing polynomial-time factoring, but that we might not detect that they had cracked it. If we know they have cracked the system, we can guard against it while we switch to a different encryption scheme that doesn't rely on factoring. And when they crack that, we'll switch again. It will cost money, but the question is how much?

I think one of the overriding lessons of this course is going to be that we must start to think like insurance adjusters and accept that we are never going to stop bad things from happening. Instead we will have to analyze what using imperfect technology in a given application will cost, in terms of some value unit (lives, money, time). Not doing that analysis is the imprudent part.
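
In that insurance-adjuster spirit, the usual back-of-the-envelope tool is annualized loss expectancy: cost per incident times expected incidents per year. A sketch with entirely invented figures:

  def ale(single_loss: float, incidents_per_year: float) -> float:
      """Annualized loss expectancy."""
      return single_loss * incidents_per_year

  keep_old = ale(single_loss=250_000, incidents_per_year=0.05)   # risk with current crypto
  migrated = ale(single_loss=250_000, incidents_per_year=0.005)  # risk after switching
  migration_cost_per_year = 8_000

  # Switching is justified if the expected savings exceed its cost.
  print(keep_old - migrated > migration_cost_per_year)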

--Chris DuPuis 13:51, 6 Oct 2005 (PDT) Unfortunately, the idea that we'll "switch to a different encryption scheme" is based on the assumption that the data you encrypt today will be worthless tomorrow. If an attacker just snoops on all of your network traffic and records it, he will be able to decrypt it at some point in the future when algorithms, processors, and networked machines give him sufficient computing power to do so. As long as your data becomes worthless in a reasonably short time frame (a few years, maybe), encryption is a good protection. If the disclosure of your data would be harmful even many years into the future, encryption of any kind is probably insufficient protection.


--Gmusick 14:46, 6 Oct 2005 (PDT) Touché, sort of. I'm not assuming that current data will be "worthless" so much as that it will be "worth less". I'm of the opinion that most of our "personal" data loses value over time. I may be disabused of that notion as the course progresses, though.

We are in agreement that encryption by itself is insufficient. Intrusion detection is at least as important as the choice of encryption algorithm. Having personal data snooped for a few people before we cut off the bad guy is a tragedy for those people, but having it snooped for hundreds of thousands is a calamity.

PKI Security Dialog from IE

--Dennis Galvin 01:35, 6 Oct 2005 (PDT) The article Why Johnny Can't Encrypt (even if very dated) brought up many germane points about usability. In that vein, the Internet Explorer dialog box on slide 41 of the lecture is certainly less than clear in its use of graphics to represent the threat:

  • A yellow caution sign next to the line saying "The security certificate is from a trusted certifying authority."
  • A green checkmark next to the line indicating the error "The name on the security certificate is invalid...."

OK, software is not perfect, but this is an excellent example of the confusing use of graphics. It neither inspires confidence that the software is correct, nor encourages the user to contact the webmaster of the site with the invalid security certificate. For the record, the Firefox browser has confusing security dialogs as well, though this may have been corrected in the latest security release. "jeffdav" made an earlier comment about there being a lot of factors beyond just the computer science. Most users confronted with such a dialog will click through it anyway, as the earlier post pointed out, probably muttering something under their breath about not understanding computers. Usability may be one of those things beyond computer science, but it needs to be factored heavily into GUI design.
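
For comparison, here is roughly what the underlying name check looks like from code today, using Python's standard ssl module (the hostname is just an example). The library treats a name mismatch as a hard failure, which is arguably what the dialog's green checkmark should have conveyed:

  import socket
  import ssl

  ctx = ssl.create_default_context()  # verifies the chain AND the hostname
  try:
      with socket.create_connection(("example.com", 443), timeout=5) as sock:
          with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
              print("certificate OK for", tls.getpeercert()["subject"])
  except ssl.SSLCertVerificationError as err:
      # e.g. name mismatch or untrusted issuer: refused, not just warned about
      print("TLS certificate rejected:", err)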

Eavesdropping on computers

--Dennis Galvin 08:08, 6 Oct 2005 (PDT)

The lecture showed two slides demonstrating that sensed reflected light can be used to reconstruct, with a high degree of accuracy, what is on a screen. I wonder how well it would do at reading 10 pt or smaller type. News articles in the past few weeks have played up an upcoming paper by three UC Berkeley researchers on using a $10 microphone and signal processing to grab keystrokes with 96% accuracy (http://www.cnn.com/2005/TECH/internet/09/21/keyboard.sniffing.ap/). There was an earlier paper along the same lines from IBM Almaden, in which Asonov and Agrawal used a microphone and a neural network to suss out keystrokes from keyboard sound emissions with somewhat less accuracy (http://www.almaden.ibm.com/software/quest/Publications/papers/ssp04.pdf). What a difference a bit of time can make. The upshot is that even if sensitive information is immediately encrypted and stored only in encrypted form, the combination of these two technical advances may make the encryption immaterial.
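
A rough sketch of the first stage such attacks need: carve individual keystroke events out of a recording and reduce each to a spectral feature vector. The papers follow this with trained classifiers and language-model correction; the threshold and window size below are invented:

  import numpy as np

  RATE = 44_100               # samples per second (assumed)
  WINDOW = int(0.010 * RATE)  # ~10 ms of audio per key press

  def keystroke_features(audio: np.ndarray, threshold: float) -> list:
      """Return one FFT-magnitude feature vector per detected key press."""
      peaks = np.flatnonzero(np.abs(audio) > threshold)
      features, last = [], -WINDOW
      for p in peaks:
          if p - last >= WINDOW:  # skip samples inside the same press
              chunk = audio[p:p + WINDOW]
              features.append(np.abs(np.fft.rfft(chunk, n=WINDOW)))
              last = p
      return features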

--Liebling 11:10, 6 Oct 2005 (PDT)

Some forms of biometrics can get around the "peeking" problem of recording passwords from visual and aural cues: putting your thumb on a print reader doesn't make any noise. That doesn't eliminate eavesdropping like capacitor whine, however. Furthermore, biometrics only solve the authentication problem, not the authorization and encryption problems. And even then, if the price is high enough, the necessary thumb could be obtained ...

Another advantage of biometric authentication is that the user can't "give up" his "password" for a bar of chocolate (as some 70% of people in the recent Liverpool study did). Each system using biometric authentication can also add its own hashing method (discussed last night) to the fingerprint/retinal/etc. input, so that even having the print doesn't necessarily guarantee entry.
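
A minimal sketch of that per-system hashing idea, with invented inputs: store a salted hash of the enrolled template rather than the raw print, so a copy of one system's database (or the print alone, without the salt) doesn't open every other system. Real biometric matching is fuzzy, so production schemes need fuzzy extractors rather than the exact-match hash shown here:

  import hashlib
  import os

  def enroll(template: bytes) -> tuple:
      """Return (salt, salted hash) for an enrolled biometric template."""
      salt = os.urandom(16)
      return salt, hashlib.sha256(salt + template).digest()

  def verify(template: bytes, salt: bytes, stored: bytes) -> bool:
      return hashlib.sha256(salt + template).digest() == stored

  salt, stored = enroll(b"...fingerprint feature vector...")
  print(verify(b"...fingerprint feature vector...", salt, stored))  # True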

--Chris DuPuis 14:06, 6 Oct 2005 (PDT) Biometrics would be much more secure if your fingerprints and retinal patterns were protected from snoopers. Unfortunately, it's easy to get fingerprints off anything you touch. For retinal patterns, how sure are you that someone doesn't have a close-up, high resolution photo of your face? On an insecure network, there is no guarantee that the authentication data that is being sent over the wire originated in an "approved" thumbprint or retinal scanner, and not in a program that reads in its biometric data from JPEG files.

Bruce Schneier (gee, his name sure pops up a lot, doesn't it?) has written a good bit about biometrics, including this piece: Biometrics: Truths and Fictions.