Talk:Lecture 6

SHA

--Jeff Davis: Bruce Schneier, author of the authoritative work on cryptography, Applied Cryptography, has a blog where he posts about many things we will most likely be discussing in this class. I bring him up because he recently posted about a team of Chinese researchers who have broken SHA, here and here.
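
For anyone unsure what "breaking" a hash means: a cryptographic hash is supposed to make it infeasible to find two different inputs with the same digest, and the result above is a way of finding such collisions far faster than brute force should allow. A minimal illustration of the normal guarantee, using Python's standard hashlib (this is only an illustration, not anything from the linked papers):

  import hashlib

  # Any change to the input is supposed to yield a completely unrelated digest.
  print(hashlib.sha1(b"pay Alice $10").hexdigest())
  print(hashlib.sha1(b"pay Alice $90").hexdigest())

  # A collision attack finds two *different* inputs that hash to the *same*
  # digest much faster than the ~2^80 work SHA-1's design was supposed to require.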

--Noor-E-Gagan Singh: What if the hash is altered to match whatever changes were made to the message? How do you know that the hash itself has not been modified? I got my answer at An Illustrated Guide to Cryptographic Hashes. One signs (encrypts with one's private key) the hash of the document, and the result is a digital signature. This ensures that only the corresponding public key can decrypt the hash itself.
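
To make that concrete, here is a minimal sketch of sign-the-hash in Python, assuming the third-party cryptography package; the key size and messages are made up for illustration:

  from cryptography.hazmat.primitives.asymmetric import rsa, padding
  from cryptography.hazmat.primitives import hashes
  from cryptography.exceptions import InvalidSignature

  private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  public_key = private_key.public_key()

  message = b"transfer $100 to account 42"

  # Signing: the message is hashed and that hash is encrypted with the
  # private key; the result is the digital signature.
  signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

  # Verification: anyone holding the public key recomputes the hash and checks
  # it against the signature.  Tampering with the message makes this fail.
  try:
      public_key.verify(signature, b"transfer $900 to account 42",
                        padding.PKCS1v15(), hashes.SHA256())
  except InvalidSignature:
      print("message or signature was altered")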

Authenticode Dialog in Microsoft Internet Explorer

--Jeff Davis: I work on the Browser UI team for Internet Explorer and for Windows XP Service Pack 2 I actually did some work on the Authenticode Dialog that was the topic of a few slides tonight. I wanted to share a few things:

  • The screen shots in the slides are of the old pre-XPSP2 dialog.
  • The old dialog says something like "Do you want to install and run '%s'..." where the %s is replaced with the name of their program. The dialog did not impose strict enough limits on the name of the control that was being downloaded, resulting in unscrupulous companies titling their programs in clever ways, e.g. "Click YES to automatically get blah blah blah..." This social engineering attack was ridiculously successful even though to us CS-types it was obvious that it was sketchy.
    • For XPSP2 we moved the dialog elements around and started imposing strict limits.
    • This goes back to trusting the root authority to do the right thing. Verisign could have required reasonable text for this field, but they did not. This is actually quite understandable because the companies that author these questionable controls tend to be a fairly litigious bunch.
  • There are various levels of warnings in the authenticode dialogs now. For example, if everything seems kosher you get a fairly limited warning dialog. If there are just a few things wrong, like the cert expired, you get a slightly more alarming (visually and textually) warning. If someone is trying to install version 3 of a control that has version 2 already installed on your machine and the certificates are signed by different entities, we block it outright.
  • And everyone knows nobody reads dialogs anyway. Most people will say 'yes' to anything while browsing the web if they think they will get something free out of it.
  • I have been on customer visits and seen usability studies where we tell people they are installing spyware and they don't care. They don't care that someone might be able to use their computer to relay spam or DoS attacks, as long as they can play the game they want, or get the cool theming effect.

So we see there are a lot of factors here beyond just the computer science. The human factor and the threat-of-litigation factor are huge.

--David Coleman: This was the topic of a major slashdot (http://www.slashdot.org) rant not too long ago. The conversation was in response to something from Bruce Schneier saying essentially that code signing was useless for computer security. What every single responder forgot was the threat of a virus being distributed as a program (driver, update, etc.) from a given vendor that you would trust. That was a major concern several years ago. So it's true that this is not a solution to the overall security problem (as all the anti-Microsoft folks seem to like to bash it for), but it does mitigate a very real threat. It has all the problems listed above, but it is certainly better than nothing for ensuring that the bits you are about to run or install really are from the source you think they are (allowing the trust relationship to take place) and haven't been modified en route.

Jared Smelser- (In response to the idea that users will accept almost anything, even if it causes their computer to relay spam or DoS attacks) What are the implications of a system that lacks any true responsibility in its usage? As I understand it, viruses propagate through compromised computers, so what responsibility do we as users have in safeguarding the integrity of our computers? Should users who fail to protect against viruses be held accountable when their computers become vehicles for spreading those viruses? In the last class lecture we learned that a single virus that can disrupt the internet can cost billions in lost commerce, as well as untold damage to infrastructure (e.g., 911 emergency, flight control). My question then becomes: should everyone have the right to use the internet? This might sound dramatic, but how many systems of commerce and infrastructure this important to the world economy are not regulated on some level? I don't think I would be a proponent of restricting who may use the internet, but I certainly feel the topic should be discussed.

Yet another reason why passwords are on the way out...

--Gmusick 10:01, 6 Oct 2005 (PDT) I ran across this article on slashdot a while back about some experiments out of Berkeley on recovering typed characters by monitoring the sounds emanating from the keyboard. In the article [1] Zhuang, Zhou and Tygar claim they can get 96% accuracy on keystrokes and break 80% of 10-character passwords in less than 75 attempts.

Now a three-attempt lockout will mostly foil this technique, but these attacks are probably going to get more refined and more tolerant of random noise. So eventually you could imagine gathering your co-workers' passwords with a run-of-the-mill tape recorder sitting on your desk.

--Jeff Davis: I can't find the paper, but last year I went to a talk at MS given by Adi Shamir where he presented his work on breaking RSA by simply listening to the sounds the machine made during the encryption process. They used a karaoke microphone one of his children had. When they analyzed the sound spectrum they saw distinct changes at different phases of the process. They traced it back to a capacitor in the power supply that emitted a different 'whine' as the power consumption of the CPU changed. He said he was confident that with a very good microphone they could recover the private key eventually...

Changing Paradigms

Chris Fleizach - The good thing about technology is that whatever people can dream, they can build (well, almost). A main problem of conventional communication is dealing with someone who can intercept messages (often called the man-in-the-middle attack). Recent advances in quantum manipulation have led to commercial systems that guarantee no one else looks at your data. If they try, the quantum nature of some data being sent is altered, thus alerting both parties immediately that there is a problem. This field is called Quantum Cryptography, and MagiQ is one of the first providers of such a system. The lecture tonight reiterated the commonly held belief that no system can ever be completely secure, but can we build systems that exploit fundamental properties of nature to actually achieve this? Attackers work on the belief that the targets have relied on some assumption that is false. What if our assumption is that the laws of physics hold true all of the time (presumably, they do)? Will there be ways around this (besides exploiting human fallibility)?

Yi-Kai - Quantum cryptography does provide some fundamental new capabilities, but it still suffers from "implementation bugs." For instance, its security depends on the ability to emit and receive single photons; but in real implementations, the emitter sometimes produces multiple photons (allowing information to leak), and the detector has limited efficiency (introducing errors which can hide an eavesdropper). A lot of work goes into building better hardware to close these loopholes. Even so, this only protects against eavesdropping on the optical fiber; there are always side-channels to worry about, like having information leak via fluctuations in the power supply, though you could surely fix these problems through heroic effort.

I think it is possible to achieve security based on the laws of physics, in principle, but probably not in practice.

A different paradigm shift

Jameel Alsalam While some types of attacks might be foiled by new high-tech solutions like quantum encryption, such solutions require significant new investment in hardware and pose a lot of technical challenges (I'm sure a list about this could get long). The bigger paradigm shift seems to be recognizing that any security system has a lot of weak points before the important data ever gets encrypted, and those are more often overlooked (or simply assumed away).

The example of reading reflections off of walls is one example, but I think that by analyzing particular systems, a lot of other weaknesses would emerge. Probably solutions need to be somewhat tailored to the system that they will be used for. If we are talking about keeping login passwords for online websites secure (which are only one application, but probably a major gateway into identity theft), then SSL seems like it is probably by far the strongest link in the chain. It is much more likely that a criminal could accomplish a number of different attacks on this sort of system than on the encryption - be it phishing, becoming the employee of an organization that holds personal information, putting recording mechanisms on public workstations (a little camera in a coffee shop would probably get several peoples' important passwords per day), or any number of other high-tech or low-tech ways of getting around encryption, which at this point is an esoteric problem for mathematicians to work on while we know of many other weaknesses to average systems.

Probably the one area where there is the most room for improvement and the highest potential to dissuade attacks is improving enforcement and forensics. As security professionals, our risk function has to do with the vulnerability of the system and the penalty for a security breach. For attackers, the risk assessment has to do with the probability of success versus the benefit of success and the penalty for failure. So we could just as profitably spend time trying to limit the benefit of success for a criminal (lower daily withdrawal limits on ATMs, or such things) or increasing the penalty for failure. It seems like the probability-of-success bar is already fairly high.

Yi-Kai - Yes, it does seem like crypto is the strongest link in the chain, so we should focus on practical non-cryptographic attacks.

On the other hand, there are some applications where you should never assume that crypto (or any other part of your security) is "good enough." In World War II, the Germans were convinced that the Enigma cipher was unbreakable (and the Allies did their best to hide the break). Even nowadays, surprises do happen in cryptography, like the recent attacks on MD5 and SHA-1, and some possible weaknesses in AES. These kinds of attacks will never be common, but for high-value commercial applications, they cannot be ruled out.


Cryptosystem secure to "a high probability"

--Parvez Anandam 01:34, 6 Oct 2005 (PDT) The RSA cryptosystem, on which e-commerce is based, relies on the fact that it is difficult to quickly factor a large number. No one has proved that that's impossible to do; we just know it hasn't been done yet. The RSA cryptosystem is secure to a high probability, the thinking goes.

This raises the question: what is the probability that some very clever person will come up with an algorithm to factor numbers in polynomial time? (I don't mean quantum computing fuzziness but something you could implement today on a good old fashioned digital computer.)

That probability is close to zero, you say. At the beginning of the previous decade, those were likely also the odds of proving Fermat's Last Theorem. That probability is now 1: it has been proved.

It wouldn't take an evil-doer solving the factoring problem for selfish motives. A university professor, working to advance human knowledge, is the most likely candidate for such a discovery. The moment she publishes that result, however, our world changes.

We have no way of quantifying the likelihood of scientific progress in a certain direction. It therefore seems imprudent to rely on a cryptosystem that is based not on a solid mathematical proof but merely on a conjecture that it is hard to crack.
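
To see concretely why a fast factoring algorithm would matter so much, here is a toy illustration in Python (the primes are absurdly small, so the "attack" is trivial; real RSA moduli run to roughly 2048 bits, and Python 3.8+ is assumed for the modular-inverse form of pow):

  p, q = 61, 53                      # secret primes
  n = p * q                          # public modulus
  e = 17                             # public exponent
  d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, derivable only via p and q

  cipher = pow(42, e, n)             # encrypt the message m = 42
  assert pow(cipher, d, n) == 42     # decrypt with the private key

  # An attacker who can factor n recovers p and q and hence the private key.
  # Trial division works here only because n is tiny; the whole security
  # argument is that no known algorithm does this quickly for large n.
  f = 2
  while n % f:
      f += 1
  d_recovered = pow(e, -1, (f - 1) * (n // f - 1))
  assert pow(cipher, d_recovered, n) == 42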


--Gmusick 10:01, 6 Oct 2005 (PDT) Not really. As the prof noted, you should always try to do the best you can with what you've got. The real problem in this case is not that somebody might crack the system by doing polynomial time factoring, but that we might not detect they had cracked the system. If we know they have cracked the system, then we can guard against it while we switch to a different encryption scheme that doesn't rely on factoring. And when they crack that, we'll switch again. And it will cost money, but the question is how much?

I think one of the overriding lessons from this course is going to be that we must start to think like insurance adjusters and accept that we are never going to stop bad things from happening. Instead we are going to have to analyze how much, in terms of some value unit (lives, money, time), the risk of using imperfect technology in a given application will cost. Not doing that analysis is the imprudent part.

--Chris DuPuis 13:51, 6 Oct 2005 (PDT) Unfortunately, the idea that we'll "switch to a different encryption scheme" is based on the assumption that the data you encrypt today will be worthless tomorrow. If an attacker just snoops on all of your network traffic and records it, he will be able to decrypt it at some point in the future when algorithms, processors, and networked machines give him sufficient computing power to do so. As long as your data becomes worthless in a reasonably short time frame (a few years, maybe), encryption is a good protection. If the disclosure of your data would be harmful even many years into the future, encryption of any kind is probably insufficient protection.


--Gmusick 14:46, 6 Oct 2005 (PDT) Touché, sort of. Although I'm not assuming that current data will be "worthless" so much as that it will be "worth less". I'm of the opinion that most of our "personal" data loses value over time. I may be disabused of that notion as the course progresses, though.

We are in agreement that encryption by itself is insufficient. Intrusion detection is at least as important as the choice of encryption algorithm. Having personal data snooped for a few people before we cut off the bad guy is a tragedy for those people, but having it snooped for hundreds of thousands is a calamity.

--Jeff Davis 17:30, 6 Oct 2005 (PDT) Well, there are other choices. Take the Amazon example: if you record my SSL transaction, eventually you will be able to brute-force it. However, my credit card will probably have expired by then. Furthermore, I can generate a one-time-use credit card number and use that, and only Amex knows the mapping back to my real account number.

Encryption is like a bike lock. Will it stop someone who is sufficiently motivated? No. But it does stop 99% of the people walking by with just a little bit of larceny in their hearts. Perfect security is impossible, so we raise the bar as high as possible and reasonable, and then we rely on law enforcement to clean up the scum. There are other uses for encryption besides the Amazon case, but I believe that if we think them through, none of them are good reasons to worry:

  • If I use encryption as part of illegal activities, I never want it to become public. Of course, we do not care about protecting criminals.
  • If I use encryption to encrypt business secrets, I never want my competitors to discover it. Eventually they will discover it, and probably without breaking the encryption. Employees leave, patents get filed; no trade secrets last forever.
  • If I use encryption as part of my activities to overthrow an illegitimate or repressive government, I never want the government to know. I suggest that if by the time they break the encryption you have not succeeded in overthrowing them, you are probably doomed anyway.

PKI Security Dialog from IE

--Dennis Galvin 01:35, 6 Oct 2005 (PDT) The article Why Johnny Can't Encrypt (even if very dated) brought up many germane points about usability. In that vein, the Internet Explorer dialog box on slide 41 from the lecture is certainly less than clear with its use of graphics to represent the threat:

  • Yellow caution sign by the line saying "The security certificate is from a trusted certifying authority,"
  • Green checkmark by the line indicating the error "The name on the security certificate is invalid...."

OK, software is not perfect, but this is an excellent example of the confusing use of graphics. It does not inspire confidence that the software is correct, nor does it encourage the user to contact the webmaster of the site with the invalid security certificate. For the record, the Firefox browser has confusing dialogs with respect to security as well, and this may have been corrected in the latest security release. "jeffdav" made an earlier comment about there being a lot of factors beyond just the computer science. Most users, when confronted with such a dialog, will click through it anyway, as the earlier post pointed out, probably muttering something under their breath about not understanding computers. Usability may be one of those things beyond computer science, but it needs to be factored heavily into GUI design.

Eavesdropping on computers

--Dennis Galvin 08:08, 6 Oct 2005 (PDT)

The lecture showed the two slides demonstrating that sensed reflected light could be used to reconstruct, with a high degree of accuracy, what is on the screen. I wonder how well it would do at reading 10-point or smaller type. News articles in the past few weeks have played up an upcoming paper by three UC Berkeley researchers on using a $10 microphone and signal processing to grab keystrokes with 96% accuracy (http://www.cnn.com/2005/TECH/internet/09/21/keyboard.sniffing.ap/). There was an earlier paper from IBM in Almaden, CA, along the same lines, in which Asonov and Agrawal used a microphone and a neural network to suss out keystrokes from keyboard sound emissions with somewhat less accuracy (http://www.almaden.ibm.com/software/quest/Publications/papers/ssp04.pdf). What a difference a bit of time can make. The upshot here is that even if sensitive information is immediately encrypted and only stored in encrypted form, with the combination of these two technical advances the encryption may now be immaterial.

--Liebling 11:10, 6 Oct 2005 (PDT)

Some forms of biometrics can get around the "peeking" problem of recording passwords from visual and aural cues. Putting your thumb on a fingerprint reader doesn't make any noise. It doesn't eliminate such eavesdropping as capacitor whine, however. Furthermore, biometrics can only solve the "authentication" problem rather than the authorization and encryption problems. Even then, if the price is high enough, the necessary thumb could be obtained ...

Another advantage to biometric authentication is that the user isn't able to "give up" his "password" for a bar of chocolate (as some 70% of people in the recent Liverpool study did). Each system using biometric authentication can add its own method of hashing (discussed last night) to the fingerprint/retinal/etc. input so that even having the print doesn't necessarily guarantee entry.

--Chris DuPuis 14:06, 6 Oct 2005 (PDT) Biometrics would be much more secure if your fingerprints and retinal patterns were protected from snoopers. Unfortunately, it's easy to get fingerprints off anything you touch. For retinal patterns, how sure are you that someone doesn't have a close-up, high resolution photo of your face? On an insecure network, there is no guarantee that the authentication data that is being sent over the wire originated in an "approved" thumbprint or retinal scanner, and not in a program that reads in its biometric data from JPEG files.

Bruce Schneier (gee, his name sure pops up a lot, doesn't it) has written a good bit about biometrics, including this piece: Biometrics: Truths and Fictions


Certificate Authorities

--Jeff Bilger 22:20, 6 Oct 2005 (PDT) During yesterday's class, it was mentioned that Certificate Authorities (CAs) are authenticated by their public keys. A necessary precondition is that these public keys are distributed to a broad audience, and this is achieved by preloading them into web browsers such as Internet Explorer. If this is the case, how does a new CA get its public key distributed to the world? I would argue that if a user had to manually import that public key into their browser, then the barrier to entry into the CA marketplace would be way too high. I just checked my IE settings and there are 37 distinct Trusted Root Certification Authorities, while there are only 29 distinct Trusted Root Certification Authorities defined in my version of Firefox. If I wanted to start my own CA tomorrow, how could I guarantee that when people purchased SSL certificates from me, visitors to SSL-protected areas of their site would not see warning messages regarding trust?

Andrew Cencini - I would think that you would work with Microsoft to get your public key included in the next windows update / service pack that goes out. This is just a guess but I'd imagine at the minimum there'd be a little lead-time involved with all of that. I think someone in this class worked on IE so perhaps they may know better!

Jeff Davis I checked with some people on the team. If you want to get your stuff added there is an alias (cert@microsoft.com?) you can e-mail and you just have to sign some sort of agreement and they push it out via windowsupdate.com.

Manish Mittal I think it also depends on whether you want to add a new root CA or a subordinate CA in a chain. CAs are organized into hierarchies with the root CA, the fundamental anchor of trust, at the top. All other CAs in the hierarchy are subordinate CAs, and are trusted only because the root is trusted.
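
To make the chain-of-trust idea concrete, here is a rough sketch of the check a browser performs, written in Python with the third-party cryptography package. The file names are hypothetical, it assumes RSA-signed certificates, and it ignores expiry dates, name matching, and revocation, all of which real verification also requires:

  from cryptography import x509
  from cryptography.hazmat.backends import default_backend
  from cryptography.hazmat.primitives.asymmetric import padding

  def load(path):
      with open(path, "rb") as f:
          return x509.load_pem_x509_certificate(f.read(), default_backend())

  site = load("site.pem")      # leaf certificate presented by the web server
  sub_ca = load("sub_ca.pem")  # intermediate (subordinate) CA certificate
  root = load("root.pem")      # root CA certificate preloaded in the browser

  # Each link in the chain: the issuer's public key must verify the signature
  # on the certificate below it.  Raises an exception if any link fails.
  for cert, issuer in [(site, sub_ca), (sub_ca, root)]:
      issuer.public_key().verify(
          cert.signature,
          cert.tbs_certificate_bytes,
          padding.PKCS1v15(),            # assumes RSA signatures throughout
          cert.signature_hash_algorithm,
      )
  print("chain verifies up to the preloaded root")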

Can computers operate more transparently?

Yi-Kai - I'm fairly sure this is not an original idea, but I'm curious what ideas people might have along these lines.

One thing about modern operating systems is that they're not very "transparent." It's hard to see what the computer is really doing -- for instance, if you open the Task Manager in Windows, you see about 20 processes, but for most of them, you can't tell what they are doing or why they are there. This lack of transparency isn't a security problem by itself, but it is a contributing factor. When a worm infects a computer, the user often does not notice it, because even though there's a lot of activity as the worm probes the network for vulnerable hosts, none of this is visible in the user interface. And as long as the user is unaware of the problem, he/she doesn't have any chance of fixing it, or at least limiting the damage (by disconnecting from the network, for instance).

We know we will never achieve perfect security. Transparency is helpful, then, because it makes problems easier to detect and fix. It's sort of like first aid training, in that it gives users the ability to handle some problems on their own, even if they cannot fix them completely.
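
As a tiny illustration of the sort of tool this suggests (purely a sketch, using the third-party psutil package; nothing here is from the lecture): a script that lists which processes currently have network connections, so that a worm's probing would at least show up somewhere a user could see it.

  import psutil

  # For each process, show the remote addresses it is talking to, so that
  # unexpected network activity is visible rather than hidden.
  for proc in psutil.process_iter(["pid", "name"]):
      try:
          conns = proc.connections(kind="inet")
      except (psutil.AccessDenied, psutil.NoSuchProcess):
          continue
      if conns:
          remotes = {c.raddr.ip for c in conns if c.raddr}
          print(proc.info["pid"], proc.info["name"], remotes or "listening only")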

Jameel Alsalam - I think that transparency would be helpful, but I also think that there are several factors in the computer industry which make it hard to achieve: 1) technology and software change so quickly that hardware has to be very flexible to accommodate all the new advances that will happen over the (short) life of a computer system, and it is therefore probably not worth it for users to put a lot of time into understanding the subtleties of their computer, since those subtleties might soon change; 2) people buy based on capabilities, not transparency - quite often it is a desired feature to not have to understand what is going on; and 3) users with different levels of expertise would probably want different levels of detail (so how do you design software that can accommodate the needs of all skill levels, is easy enough for the novice, and yet is also not buggy and is on the cutting edge of capabilities?). It's a lot to ask for.

Maybe what we need is to just wait for the relentless improvement of computers to slow so that computers could be bought more like cars are (where all the software could be pre-installed "options" like sunroofs and power windows). I bet first aid would become a lot harder if people were always attaching new appendages to themselves.

Lecture 6

Professor Voelker, you mentioned the “Black Unicorn,” can you provide some more background on this individual?

Professor Voelker, can you distinguish “integrity” from “authentication” for me a bit more – does authentication allow us to ensure integrity or …? Is it that “confidentiality” and “integrity” are what we are striving for and that “authentication” and “non-repudiation” allow us to achieve those goals?

Professor Voelker, during your presentation, you stated several times that the knowledge needed for carrying out various computer/cyber attacks is relatively low. How low - that is, is this something we'd expect non-university-educated persons living in the Middle East or the remote regions of Afghanistan and Pakistan to be able to do (assuming they had the equipment and some basic instructions/manuals)? Do we know of any incidents of al Qaeda, or organizations loosely affiliated with it, attempting such cyber attacks? Do advanced states, such as the US and the EU countries, have the ability to easily crash another state's internet if they choose to do so - is that a weapon the US can and would likely employ if it were to engage in inter-state conflict in the future?

Could someone with a computer science background please roughly label and order the types of cyber attacks according to how difficult they are to develop and how devastating they are likely to be with respect to one another (i.e. Trojan horse, worm, shell, etc.)?

Professor Voelker, you mentioned that forensic tracing on the internet is extremely difficult. When al Zarwaqawi posts warnings and/or takes credit for attacks that have been carried out (such as he did with respect to the attacks on the Kearsarge and Ashland), is there any way to approximate or glean his location - how much and what kinds of knowledge, if any, can we get from such postings?

Professor Maurer and/or Dean Nacht, in the last couple of days there has been quite a bit of talk regarding a letter from al Zawahiri to al Zarwaqawi. It seems that the former believes that the latter’s brutality has gotten a little out of hand and has asked him to halt extremely violent acts such as beheadings, lest they lose the support of many of the Muslims they are trying to court or at least drum up sympathy from. This is the first I have heard of al Qaeda being concerned with such things. Maybe al Qaeda’s ideology does not put it beyond the “invisible man” scenario after all. Do you have any thoughts on the letter?


SMM: Well, this is obviously evidence.


Professor Maurer and/or Dean Nacht, what do you make of the recent "specific" but "non-credible" threats with respect to the NY subway system? Is there any indication of which agency in the US federal government, or which foreign government, came across the info (or is it really likely to have come from a raid in Iraq, as has been reported by some news organizations)? Also, do either of you make anything of the fact that the Homeland Security Dept didn't make too much of it but that the NY City authorities did? Is that a lack-of-communication problem, or are the local authorities responding as they should while the federal government leaves the initial response to the local authorities, as one might expect? Should the situation there make us worry, as the post-Katrina fiasco did? It has also been reported that three suspects have been arrested but the arresting authorities have not been identified - any thoughts?

Professor Maurer and/or Dean Nacht, do either of you have a take on the McCain push to limit and clearly establish what is permitted with respect to interrogation? Would that be a good thing, a bad thing, …?


SMM: I haven't seen McCain's stuff, but will comment anyhow. Dershowitz's book on terrorism -- there's a citation in the Lecture 2 "Readings" slides -- tells how the Israelis looked at this same issue in the late 1980s and said there were three choices: establish a "twilight zone" for security agencies outside the law, pretend to obey the law but turn a blind eye (the overwhelming choice of world governments), or establish a legal framework. The first two are pretty clearly hypocritical, and in the American system lead to Congressional howls of protest (remember the police chief in Casablanca? "I'm shocked, shocked to find gambling going on here ...") at twenty-year intervals. This is grossly unfair to the people we send out to defend us, who are later punished either because they acted or because they failed to act, but are second-guessed in either scenario. It's also unfair to the country, because it breeds inaction.

The political problem here is that if somebody acts responsibly by suggesting specific guidelines, every professional politician who is not actually in power at the moment will take cheap shots. This would be o.k. if those same politicians were prepared to follow their own advice -- but we suspect that this is untrue and that the conversation is in fact sliding toward the hypocrisy that the Israelis criticized. Orwell said that politicians should have "the habit of responsibility," i.e. that they should never suggest rules or programs that they are not prepared to implement. In this case, at least, the War on Terrorism depends much less on physical assets than on the quality of our political discourse. Ulp.

There is also a deeper problem. The instinct to take cheap shots is reinforced by anxiety and avoidance -- i.e., it lets us dodge subjects that are confused, distasteful, or ill-defined. Our feelings about torture are deeply felt and surely well-founded, but it is also true that we have no very coherent legal or logical framework for discussing them. In domestic life, the state is not allowed to coerce defendants, but warfare is all about coercion. It seems strange to say that it's ok to shoot people, but you mustn't put them into the "stress positions" that the US Army uses to punish its own people. But if you decide that you accept some degree of coercion, then you must draw new lines. Yet there are no good candidate principles available.

Hopefully, we will find a way to explore intelligence policy later in the course. In the meantime, you can learn a great deal by looking at Dershowitz, Chris Mackey's The Interrogator's War, and the fifth chapter of Aleksandr Solzhenitsyn's Gulag Archipelago. Various US Army and CIA interrogation manuals are also available on the Web. This would also be an excellent term project for people who want to pursue it.


Professor Maurer made the point often, during his initial lectures on terrorism, that democracy and development should not be viewed as the cure all for terrorism. There is an interesting and well-written article that supports that view in the current edition of Foreign Affairs – F. Gregory Gause III, Can Democracy Stop Terrorism?, FOREIGN AFFAIRS (Sept./Oct. 2005).


MODELING CYBERSECURITY

In order to do policy, we need to find the salient facts about a situation and build models. All disciplines do this, including for example physics. Naive people sometimes argue that the world is more complicated than any model and then advance whatever opinion feels right to them. This is wrong. The only valid objection is that we have not picked the most salient facts, or that our facts are wrong, or that they have no relevance for policy.

The point here is to take Geoff's lecture and think about what facts we can abstract. I do not claim to be an expert, folks, so please say so if the following are not generally true or especially if you can add to the list. But here's something to get started.

1. HOW MUCH SECURITY IS TOO MUCH? For any given level of technology, consumers must choose between security and power/convenience. In a normal world, we let consumers make the choice: they know their preferences and are often best informed about the options. This model seems reasonable for, say, corporate IT departments that must decide whether to prevent problems in advance or clean them up afterward. Certainly, it would be perfectly logical for society to decide that after-the-fact cleanup was sometimes the lowest-cost way to proceed.

On the other hand, markets can be imperfect. Consumers may be ignorant or face high transaction costs that make meaningful choice impossible. Alternatively, there may be externalities: for example, consumers may not care if their machines become zombies at 3:00 am, although the rest of us do. For now, these are only hypotheses. Consumer sovereignty could be the best model; any objections should invoke evidence.

2. IS MORE SECURITY DESIRABLE? In the current world, the Wall Street Journal finds it relatively easy to break into Al Qaida laptops, Steve Maurer has no chance of breaking into Ed Lazowska's laptop, and the FBI has to ask a judge's permission to break into Karl Rove's laptop. All of these things sound like good results. Privacy advocates always want to maximize privacy, and privacy engineers understandably take privacy as a working goal. But would we really all be happier if, for example, on-the-fly encryption became a reality?

3. DIMINISHING RETURNS. "Trust" without "due diligence" has limited power. But if we tie it to physical security at a few points, things get better. For example, I believe that I have Verisign's correct URL because it came inside a machine in a shrinkwrapped container from Dell. Similarly, I believe Verisign did due diligence because they need reputation to compete.

Of course, I might believe that the current system of security was inadequate. In that case, I should find the lowest cost provider to improve matters. [TAKE NOTE - GENERAL PRINCIPLE!] That provider might be Verisign, in which case I could change their behavior by enacting new liability statutes or making them buy insurance.

I could also decide that Verisign was suffering from market failures. For example, they could have "agency problems" -- i.e., lie to me about the amount of due diligence they've done. This would be analogous to the bank that lies to me about the size of their assets and would have the same solution (regulation). Alternatively, Verisign could be a natural monopolist -- the costs of due diligence get averaged over the number of users, which means that the company with the most users also has the lowest costs. If so, I can't depend on competition. Back to regulation or insurance...

4. RETURN OF THE INVISIBLE MAN. The fact that Microsoft can catch hackers at the bargain rate of $50K per perp has obvious parallels to the use of rewards in terrorism. But is the analogy deeper? Hackers, like terrorists, depend on a broader community of sympathisers to stay invisible. Are hackers more or less invisible than the average terrorist?

5. NAIVE USERS. I am struck by how often the problem is user error. Have we pushed on this as hard as other areas? The natural suspicion is that NSF pays people to develop security code and that this area is now deep into diminishing returns. It might be cheaper and easier to spend some money on general education.

I recently went to a talk where the speaker said he had written a simpler e-mail security system than SSL. He reported users made 10% fewer security mistakes than SSL, but if they were briefed on how to use it then they made 50% fewer. My question, of course, is whether they would have done even better if they had been briefed on SSL itself. The point, I think, is that consumers know almost nothing about, say, phishing. So why don't we have public service spots warning them about phishing techniques or announcing golden rules to follow? A single "phishing" episode of CSI might be more valuable than $10m worth of new security code.

6) WHY EUROPE? Geoff says that Europe has more crime and more money spent on defense. If both these facts are true, then the natural assumption is that European criminals have better social support networks. As the US crime writer Raymond Chandler once wrote, "We're a big, rough, wild people. Crime is the price we pay for that. Organized crime is the price we pay for being organized."

Even if this speculation is true, however, it would be necessary but not sufficient. The further question arises why European criminals do not prey on the US, given that the cyber-Atlantic is a negligible barrier. Answering this question would probably place interesting constraints on the nature of cyber crime in general.

7) INDUSTRIAL SCALE HACKING? Economics is all about scarcity. The main reason that terrorists don't get ordinary nuclear/chemical/bio WMD is that these technologies require huge investments. The question arises, therefore, what terrorists or states can do with 1) ordinary script-kiddie stuff, 2) one or two smart people, and 3) hundreds of smart people. For example, which category does "taking down the Internet" belong to?

8) VULNERABILITIES AND INCENTIVES. Geoff's observation that 50% of hacking involves basic programming errors (e.g., buffer overflows) suggests that incentives are powerful. In this course, we distinguish between engineering viewpoints -- what can I do if everyone follows instructions (which may be lengthy, burdensome, or complex) -- and social science viewpoints ("how can I get people to behave as they ought"). Geoff's observation suggests that incentive issues are at least co-equal with basic technical challenges. The fact that the financial industry has been able to define threats better than the CS community suggests that different threats do, in fact, receive different priorities.

9) ARE SOME VULNERABILITIES WORSE THAN OTHERS? CERT says that 85% of hacking involves weaknesses other than encryption. But how does this map onto the vulnerabilities we worry about on the Web? For example, you might imagine that encryption takes care of credit card numbers but is more or less irrelevant to protecting web pages. The mapping matters, but what do we know about it?

10) DEALING WITH TRANSACTION COSTS. Geoff notes that Internet Explorer has 100 possible security settings. Presumably, the average consumer is not equipped to pick any of them, so "none" ends up being the effective default choice. On the other hand, society could change the default to something closer to what we believe consumers would actually pick with zero transaction costs and full information. I imagine that Microsoft would be reluctant to make that decision for the rest of us. But suppose that the country's CS professors announced the "right" judgment and let people follow it if they wanted to? Suppose Congress set a default choice?

11) LESS THAN PERFECT SECURITY. There is presumably a calculus on how good security should be. For example, most secrets have a relatively short shelf life, beyond which it doesn't matter whether somebody is expected to crack them five years from now. A few -- how to make H-Bombs -- can last fifty years or more.
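
A back-of-the-envelope version of that calculus, where every number is an assumption chosen purely for illustration rather than a claim about real attackers:

  # Illustrative arithmetic only: how long until brute force catches up with a
  # key, if attacker capability doubles every 18 months (a Moore's-law-style
  # assumption, not a prediction).  Simplification: we ask when one year of
  # work at the future rate exceeds the total brute-force work.
  key_bits = 80                 # hypothetical key size
  ops_per_second = 1e12         # assumed attacker speed today
  doubling_years = 1.5
  seconds_per_year = 3600 * 24 * 365

  work = 2.0 ** key_bits        # rough brute-force work factor
  years = 0.0
  while ops_per_second * 2 ** (years / doubling_years) * seconds_per_year < work:
      years += 0.25
  print(f"under these assumptions, a {key_bits}-bit key resists brute force for "
        f"roughly {years:.1f} years; a secret with a shorter shelf life is safe enough")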

12) MARKET TESTS FOR SECURITY. The idea of setting prizes for security is interesting, since it allows you to dial up ordinary "tested by time" confidence for special systems. You would also imagine that this was a good way for computer companies to reassure consumers.