Lessons Learned

Below are comments from Professor Maurer, reposted here to spark conversation.

In order to do policy, we need to find the salient facts about a situation and build models. All disciplines do this, including, for example, physics. Naive people sometimes argue that the world is more complicated than any model and then advance whatever opinion feels right to them. This is wrong. The only valid objections are that we have not picked the most salient facts, that our facts are wrong, or that they have no relevance for policy.

1. HOW MUCH SECURITY IS TOO MUCH? For any given level of technology, consumers must choose between security and power/convenience. In a normal world, we let consumers make the choice: they know their preferences and are often best informed about options. This model seems reasonable for, say, corporate IT departments that must decide whether to prevent problems in advance or clean them up afterward. Certainly, it would be perfectly logical for society to decide that after-the-fact cleanup was sometimes the lowest-cost way to proceed.

On the other hand, markets can be imperfect. Consumers may be ignorant or face transaction costs so high that meaningful choice is impossible. Alternatively, there may be externalities: for example, consumers may not care if their machines become zombies at 3:00 am, although the rest of us do. For now, these are only hypotheses. Consumer sovereignty could be the best model; any objections should invoke evidence.

2. IS MORE SECURITY DESIRABLE? In the current world, the Wall Street Journal finds it relatively easy to break into Al Qaida laptops, Steve Maurer has no chance of breaking into Ed Lazowska's laptop, and the FBI has to ask a judge's permission to break into Karl Rove's laptop. All of these sound like good results. Privacy advocates always want to maximize privacy, and privacy engineers understandably take privacy as a working goal. But would we really all be happier if, for example, on-the-fly encryption became a reality?

3. DIMINISHING RETURNS. "Trust" without "due diligence" has limited power. But if we tie it to physical security at a few points, things get better. For example, I believe that I have Verisign's correct URL because it came inside a machine in a shrink-wrapped container from Dell. Similarly, I believe Verisign did due diligence because they need reputation to compete.
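
To make the chain concrete, here is a structural sketch in C (not real cryptography): trust bottoms out in a short list of roots distributed out-of-band, with the hard-coded trusted_roots array standing in for the store that ships with the machine and signature_ok() standing in for the actual cryptographic check.

    /* Structural sketch: trust bottoms out in anchors delivered
     * out-of-band. trusted_roots stands in for the store shipped
     * with the machine; signature_ok() is a stub. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct cert {
        const char *subject;
        const struct cert *issuer_cert;  /* NULL once we reach a root */
    };

    /* Roots we accept because they arrived with the hardware/OS. */
    static const char *trusted_roots[] = { "ExampleRootCA" };

    static bool signature_ok(const struct cert *c) {
        (void)c;
        return true;  /* stand-in for an actual signature verification */
    }

    static bool chain_trusted(const struct cert *c) {
        for (; c; c = c->issuer_cert) {
            if (!signature_ok(c))
                return false;
            if (!c->issuer_cert) {  /* end of chain: must be a known root */
                size_t n = sizeof trusted_roots / sizeof trusted_roots[0];
                for (size_t i = 0; i < n; i++)
                    if (strcmp(c->subject, trusted_roots[i]) == 0)
                        return true;
                return false;
            }
        }
        return false;
    }

    int main(void) {
        struct cert root = { "ExampleRootCA", NULL };
        struct cert site = { "www.verisign.com", &root };
        printf("chain trusted: %s\n", chain_trusted(&site) ? "yes" : "no");
        return 0;
    }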

Of course, I might believe that the current system of security was inadequate. In that case, I should find the lowest-cost provider to improve matters. [TAKE NOTE - GENERAL PRINCIPLE!] That provider might be Verisign, in which case I could change their behavior by enacting new liability statutes or making them buy insurance.

I could also decide that Verisign was suffering from market failures. For example, they could have "agency problems" -- i.e., lie to me about the amount of due diligence they've done. This would be analogous to a bank that lies to me about the size of its assets, and would have the same solution (regulation). Alternatively, Verisign could be a natural monopolist -- the costs of due diligence get averaged over the number of users, which means that the company with the most users also has the lowest costs. If so, I can't depend on competition. Back to regulation or insurance...
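
The averaging argument is simple enough to compute directly. In the C sketch below, the $10M fixed due-diligence cost is an invented figure; the point is only that per-user cost falls as the user base grows, so the biggest provider always wins on cost.

    /* Per-user cost of due diligence when the cost is (mostly) fixed.
     * The $10M figure is invented; only the shape of the curve matters. */
    #include <stdio.h>

    int main(void) {
        double fixed_cost = 10e6;  /* assumed fixed due-diligence cost, $ */
        long users[] = { 100000L, 1000000L, 10000000L };
        for (int i = 0; i < 3; i++)
            printf("%8ld users -> $%6.2f per user\n",
                   users[i], fixed_cost / (double)users[i]);
        return 0;
    }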

4. RETURN OF THE INVISIBLE MAN. The fact that Microsoft can catch hackers at the bargain rate of $50K per perp has obvious parallels to the use of rewards in terrorism. But is the analogy deeper? Hackers, like terrorists, depend on a broader community of sympathisers to stay invisible. Are hackers more or less invisible than the average terrorist?

5. NAIVE USERS. I am struck by how often the problem is user error. Have we pushed on this as hard as on other areas? The natural suspicion is that NSF pays people to develop security code and that this area is now deep into diminishing returns. It might be cheaper and easier to spend some money on general education.

6. WHY EUROPE? Geoff says that Europe has more crime and more money spent on defense. If both these facts are true, then the natural assumption is that European criminals have better social support networks. As the US crime writer Raymond Chandler once wrote, "We're a big, rough, wild people. Crime is the price we pay for that. Organized crime is the price we pay for being organized."

7. INDUSTRIAL-SCALE HACKING? Economics is all about scarcity. The main reason that terrorists don't get ordinary nuclear/chemical/bio WMD is that these technologies require huge investments. The question arises, therefore, what terrorists/states can do with 1) ordinary script-kiddie stuff, 2) one or two smart people, and 3) hundreds of smart people. For example, which category does "taking down the Internet" belong to?

8. VULNERABILITIES AND INCENTIVES. Geoff's observation that 50% of hacking involves basic programming errors (e.g., buffer overflows) suggests that incentives are powerful. In this course, we distinguish between engineering viewpoints ("what can I do if everyone follows instructions," which may be lengthy, burdensome, or complex) and social science viewpoints ("how can I get people to behave as they ought"). Geoff's observation suggests that incentive issues are at least co-equal with basic technical challenges. The fact that the financial industry has been able to define threats better than the CS community suggests that different threats do, in fact, receive different priorities.
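
For concreteness, here is a minimal C sketch of the error class Geoff has in mind -- copying input into a fixed-size buffer without a bounds check -- next to the one-line bounded alternative. The function names are invented for illustration.

    /* Minimal sketch of the error class: an unbounded copy into a
     * fixed-size buffer, next to the bounded alternative. */
    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: strcpy writes past buf when input exceeds 15 chars. */
    void greet_unsafe(const char *input) {
        char buf[16];
        strcpy(buf, input);  /* no length check: classic buffer overflow */
        printf("hello, %s\n", buf);
    }

    /* Safer: the copy is truncated to the buffer's capacity. */
    void greet_safe(const char *input) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", input);  /* bounded copy */
        printf("hello, %s\n", buf);
    }

    int main(void) {
        /* Passing this argument to greet_unsafe() would corrupt the stack. */
        greet_safe("a string much longer than sixteen characters");
        return 0;
    }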

9. ARE SOME VULNERABILITIES WORSE THAN OTHERS? CERT says that 85% of hacking involves weaknesses other than encryption. But how does this map onto the vulnerabilities we worry about on the Web? For example, you might imagine that encryption takes care of credit card numbers but is more or less irrelevant to protecting web pages. The mapping matters, but what do we know about it?

10. DEALING WITH TRANSACTION COSTS. Geoff notes that EXPLORER has 100 possible security settings. Presumably, the average consumer is not equipped to pick any of them, so "none" ends up being the effective default choice. On the other hand, society could change the default to something closer to what we believe consumers would actually pick with zero transaction costs and full information. I imagine that Microsoft would be reluctant to make that decision for the rest of us. But suppose that the country's CS professors announced the "right" judgment and let people follow it if they wanted to? Suppose Congress set a default choice?
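
One way to picture the "default choice" idea is as a published profile that users or vendors could adopt wholesale instead of picking through 100 settings. In the C sketch below, the setting names are invented; the point is the contrast between the permissive factory state and a hypothetical recommended default.

    /* Sketch of the "published default" idea. Setting names are
     * invented for illustration. */
    #include <stdbool.h>
    #include <stdio.h>

    struct security_profile {
        bool run_active_content;  /* scripts, plugins, etc. */
        bool warn_on_download;
        bool block_popups;
    };

    /* What "none" effectively means today: everything permissive. */
    static const struct security_profile factory = { true, false, false };

    /* A hypothetical default a standards body might publish. */
    static const struct security_profile recommended = { false, true, true };

    static void show(const char *name, const struct security_profile *p) {
        printf("%-12s active_content=%d warn_on_download=%d block_popups=%d\n",
               name, p->run_active_content, p->warn_on_download, p->block_popups);
    }

    int main(void) {
        show("factory:", &factory);
        show("recommended:", &recommended);
        return 0;
    }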

11. LESS THAN PERFECT SECURITY. There is presumably a calculus for how good security should be. For example, most secrets have a relatively short shelf life, so it doesn't matter if somebody can be expected to crack them five years from now. A few -- how to make H-Bombs -- can last fifty years or more.
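
That calculus can be made concrete with a back-of-envelope estimate: a secret is adequately protected if the expected time to brute-force it exceeds its shelf life. In the C sketch below, the attacker's guess rate is an assumption.

    /* Back-of-envelope shelf-life calculus: compare expected
     * brute-force time against the secret's useful life. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double guesses_per_sec = 1e12;  /* assumed attacker capability */
        double secs_per_year = 3.156e7;
        int key_bits[] = { 56, 80, 128 };
        for (int i = 0; i < 3; i++) {
            /* expected guesses = half the keyspace = 2^(bits-1) */
            double years = pow(2.0, key_bits[i] - 1)
                           / guesses_per_sec / secs_per_year;
            printf("%3d-bit key: ~%.1e years to crack in expectation\n",
                   key_bits[i], years);
        }
        return 0;
    }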

12. MARKET TESTS FOR SECURITY. The idea of offering prizes for security is interesting, since it allows you to dial up ordinary "tested by time" confidence for special systems. You would also imagine that this is a good way for computer companies to reassure consumers.

Lessons Learned

  • Chris Fleizach - Terrorism is hard to do right. Really big terrorist acts are really hard to do, especially in countries with modernized legal systems, law enforcement agencies, and suspicious citizens. This does not apply to cyberterrorism, though.
  • Chris Fleizach - Disclosing vulnerabilities in software may be akin to disclosing blueprints of the Pentagon buildings. It may just be a bad idea.
  • Genevieve Orchard - Securing the Internet is a formidable, if not impossible, task. One of the best defenses we have against cyber attacks, as with non-cyber crimes, is deterrence through fear of being caught. The current deterrence value is almost zero, due to the anonymity obtainable by an Internet user and thus the difficulty of tracking attackers down. To get to the point where we can reliably assign blame for cyber attacks will require significant *universal* effort and cooperation - but is this realistically feasible?
  • Chris DuPuis - Society never takes measures to mitigate a threat until the threat has resulted in a disaster.
  • Trevor Nguyen - Indeed, there is a "war" against terrorism. But, will the war ever end? And, who will win the war? Realistically, there will not be a day in the future when newspaper headlines proclaim in bold fonts, "War over!" Terrorism will continue, but in varying intensities and degrees of damage. As scholars and academicians, we must take any and all perspectives we can in order to dig our trenches, build our watchtowers, keep guard and revise our war plans. We must keep on fighting to survive another day.