Lessons Learned

Revision as of 16:05, 12 December 2005

Below are comments of Professor Maurer, reposted here to spark conversation.

In order to do policy, we need to find the salient facts about a situation and build models. All disciplines do this, including, for example, physics. Naive people sometimes argue that the world is more complicated than any model and then advance whatever opinion feels right to them. This is wrong. The only valid objection is that we have not picked the most salient facts, or that our facts are wrong, or that they have no relevance for policy.

1. HOW MUCH SECURITY IS TOO MUCH? For any given level of technology, consumers must choose between security and power/convenience. In a normal world, we let consumers make the choice: They know their preferences and are often best informed about options. This model seems reasonable for, say, corporate IT departments, which must decide whether to prevent problems in advance or clean them up afterward. Certainly, it would be perfectly logical for society to decide that after-the-fact cleanup was sometimes the lowest-cost way to proceed.

On the other hand, markets can be imperfect. Consumers may be ignorant or face high transaction costs that make meaningful choice impossible. Alternatively, there may be externalities: For example, consumers may not care if their machines become zombies at 3:00 am, although the rest of us do. For now, these are only hypotheses. Consumer sovereignty could be the best model; any objections should invoke evidence.

2. IS MORE SECURITY DESIRABLE? In the current world, the Wall Street Journal finds it relatively easy to break into Al Qaida laptops, Steve Maurer has no chance of breaking into Ed Lazowska's laptop, and the FBI has to ask a judge's permission to break into Karl Rove's laptop. All of these things sound like good results. Privacy advocates always want to maximize privacy, and privacy engineers understandably take privacy as a working goal. But would we really all be happier if, for example, on-the-fly encryption became a reality?

3. DIMINISHING RETURNS. "Trust" without "due diligence" has limited power. But if we tie it to physical security at a few points, things get better. For example, I believe that I have Verisign's correct URL because it came inside a machine in a shrinkwrapped container from Dell. Similarly, I believe Verisign did due diligence because they need reputation to compete.

Of course, I might believe that the current system of security was inadequate. In that case, I should find the lowest cost provider to improve matters. [TAKE NOTE - GENERAL PRINCIPLE!] That provider might be Verisign, in which case I could change their behavior by enacting new liability statutes or making them buy insurance.

I could also decide that Verisign was suffering from market failures. For example, they could have "agency problems" -- i.e., lie to me about the amount of due diligence they've done. This would be analogous to the bank that lies to me about the size of their assets and would have the same solution (regulation). Alternatively, Verisign could be a natural monopolist -- the costs of due diligence get averaged over the number of users, which means that the company with the most users also has the lowest costs. If so, I can't depend on competition. Back to regulation or insurance...

4. RETURN OF THE INVISIBLE MAN. The fact that Microsoft can catch hackers at the bargain rate of $50K per perp has obvious parallels to the use of rewards in terrorism. But is the analogy deeper? Hackers, like terrorists, depend on a broader community of sympathisers to stay invisible. Are hackers more or less invisible than the average terrorist?

5. NAIVE USERS. I am struck by how often the problem is user error. Have we pushed on this as hard as other areas? The natural suspicion is that NSF pays people to develop security code and that this area is now deep into diminishing returns. It might be cheaper and easier to spend some money on general education.

6) WHY EUROPE? Geoff says that Europe has more crime and more money spent on defense. If both these facts are true, then the natural assumption is that European criminals have better social support networks. As the US crime writer Raymond Chandler once wrote, "We're a big, rough, wild people. Crime is the price we pay for that. Organized crime is the price we pay for being organized."

7) INDUSTRIAL SCALE HACKING? Economics is all about scarcity. The main reason that terrorists don't get ordinary nuclear/chemical/bio WMD is that these technologies require huge investments. The question arises, therefore, what terrorists/states can do with 1) ordinary script-kiddie stuff, 2) one or two smart people, and 3) hundreds of smart people. For example, which category does "taking down the Internet" belong to?

8) VULNERABILITIES AND INCENTIVES. Geoff's observation that 50% of hacking involves basic programming errors (e.g., buffer overflows) suggests that incentives are powerful. In this course, we distinguish between engineering viewpoints ("what can I do if everyone follows instructions," which may be lengthy, burdensome, or complex) and social science viewpoints ("how can I get people to behave as they ought"). Geoff's observation suggests that incentive issues are at least co-equal with basic technical challenges. The fact that the financial industry has been able to define threats better than the CS community suggests that different threats do, in fact, receive different priorities.

9) ARE SOME VULNERABILITIES WORSE THAN OTHERS? CERT says that 85% of hacking involves weaknesses other than encryption. But how does this map onto the vulnerabilities we worry about on the Web? For example, you might imagine that encryption takes care of credit card numbers but is more or less irrelevant to protecting web pages. The mapping matters, but what do we know about it?

10) DEALING WITH TRANSACTION COSTS. Geoff notes that EXPLORER has 100 possible security settings. Presumably, the average consumer is not equipped to pick any of them, so "none" ends up being the effective default choice. On the other hand, society could change the default to something closer to what we believe consumers would actually pick with zero transaction costs and full information. I imagine that Microsoft would be reluctant to make that decision for the rest of us. But suppose that the country's CS professors announced the "right" judgment and let people follow it if they wanted to? Suppose Congress set a default choice?

11) LESS THAN PERFECT SECURITY. There is presumably a calculus on how good security should be. For example, most secrets have a relatively short shelf life, beyond which it doesn't matter if you expect somebody to crack them five years from now. A few -- how to make H-Bombs -- can last fifty years or more.

12) MARKET TESTS FOR SECURITY. The idea of setting prizes for security is interesting, since it allows you to dial up ordinary "tested by time" confidence for special systems. You would also imagine that this was a good way for computer companies to reassure consumers.

Lessons Learned

  • Chris Fleizach - Terrorism is hard to do right. Really big terrorist acts are really hard to do, especially in countries with modernized legal systems, law enforcement agencies and people who are suspicious. This does not apply to cyberterrorism though.
  • Chris Fleizach - Disclosing vulnerabilities in software may be akin to disclosing blueprints of the Pentagon buildings. It may just be a bad idea.
  • Aiqbal Adding to these two points, today's terrorism is very different from yesterday's: terrorism today is directed at an ideal or at huge groups of people, instead of specific individuals. Also, the technology being used to formulate the attacks is incredibly advanced. Furthermore, as we have witnessed, today's terrorism can impact us in our home base; it is not something that impacts us only when we are outside our homeland. Each of these developments forces us to take a very different view of security.
  • Genevieve Orchard - Securing the Internet is a formidable, if not impossible, task. One of the best defenses we have against cyber attacks, as with non-cyber crimes, is deterrence through fear of being caught. The current deterrence value is almost zero, due to the anonymity obtainable by an Internet user and thus the difficulty of tracking attackers down. To get to the point where we can reliably assign blame for cyber attacks will require significant *universal* effort and cooperation - but is this realistically feasible?
  • Chris DuPuis - Society never takes measures to mitigate a threat until the threat has resulted in a disaster.
  • Trevor Nguyen - Indeed, there is a "war" against terrorism. But, will the war ever end? And, who will win the war? Realistically, there will not be a day in the future when newspaper headlines proclaim in bold fonts, "War over!" Terrorism will continue, but in varying intensities and degrees of damage. As scholars and academicians, we must take any and all perspectives we can in order to dig our trenches, build our watchtowers, keep guard and revise our war plans. We must keep on fighting to survive another day.

Altin Dastmalchi To follow, terrorism is a hot word that has been used to reshape countries' foreign policies, including the US's. I think that the war in Iraq will eventually just have to be left alone, allowing our troops to come home and the Iraqis to set up shop. This, however, will not be the end of the war on terror. The war on terror will only be set to the side once another era of fear is introduced (e.g., from McCarthyism to terrorism to ???). In all honesty, I'm not saying that terrorism is not a problem; I'm just saying that it is being used as a fear tactic, and I do not see any change in policy until a new era is introduced to society.

  • Yi-Kai - There's another paper by Ross Anderson, Why Information Security is Hard - An Economic Perspective, that has some good insights. It suggests that the design of computer security systems is often motivated by other goals, such as increasing market share and shifting risk onto the users. This explains why many security products are so ineffective. It's surprisingly hard to get the computer industry's self-interest to align with the needs of ordinary users. The same lesson probably applies to many other kinds of security (for instance, protecting critical infrastructure).

Barry Demchak As I got an education on CBRN and terrorism, I realized that most people don't have access to this information. And many people, if they did, still wouldn't be able to draw conclusions from it much less have an effect at all on policy. In fact, most people get their information from the press, and along with it they are also fed their perspective. The problem is that the press itself has a poor understanding of these issues, and propagates hysterical or politically loaded views ... and this effect itself becomes a policy driver. As such, the press becomes a megaphone for the terrorists. In fact, the press could be considered the objective of an attack. We need to spend a lot more time on enlisting the press in ways to neutralize (instead of magnify) terrorism's value.

  • Pravin Mittal People are driven by emotion (fear) rather than facts. This is the conclusion I came to as I got educated on the facts of terrorism. Unfortunately, the media in the United States, which is more concerned about its ratings (popularity, as it directly ties to advertising revenue), tells people what they want to hear rather than the facts. This reminds me of an interesting experiment John B. Watson did around 1920, from which he concluded that "fear, not love, is the most powerful tool for conditioning a person's social and emotional life." He conditioned an infant ("Little Albert") who reached out in delight to touch a bunny: each time, Watson struck a startling noise with a claw hammer. Soon the infant recoiled every time he saw the rabbit, even when there was no ear-splitting clang accompanying it. Unfortunately, fear and hysteria about terrorism are being used by politicians to drive agendas at the cost of the real national issues of present times.
  • Pravin Mittal The other useful insight I gained from class came during Lampson's lecture. He noted that it is not the lock but the fear of getting caught and prosecuted that deters criminals. I strongly believe that if we invest more in devising systems to track down criminals, rather than just trying to develop software without any security bugs (which I think is impossible), the problem of cybersecurity can be solved more effectively.
  • Aiqbal It is interesting that we bring up the media. One concept the course brought to light was the interplay between the withholding and the free dissemination of information in relation to security. Is it good to put security practices out in the open? Should the world know what our interrogation practices are, how our security systems work, etc.? From a journalistic standpoint, clearly, the answer is yes. From a security standpoint, the answer might be no. Terrorists should not be aware of what to expect when they are a) committing their acts or b) subjected to interrogation. If they have already read the CIA Manual and Army Field Manual online, they will comprehend the nature of interrogation before being subject to it; this will make the security practices ineffective. I have said this before, and I will say it again: as a journalist, I have a challenge withholding information; my duty is to find it and report it. As a policy maker or security person, withholding information is a crucial need and, from my standpoint, it may be necessary for my safety and the safety of others. Regardless of this debate, isn't it already too late - haven't terrorists already seen a good portion of our security policies and structures with the help of search engine technology like Google's?

Brian McGuire The technical portion of the class taught me how significant the efforts are at some companies to secure their IT infrastructures, but that despite that effort, they aren't confident in their ability to protect their systems from attack. So it seems to be a continuous struggle to catch up in a game that they can never get ahead in. I suspect the exact same talks will be given years from now, only with a new list of technologies and terms involved. On the policy side, I found myself being pulled in three directions. Maybe the US was failing to protect itself from terrorists because the problem is so complex and there are so many points at which we can be attacked; or maybe we were going way too far and had managed to cross a line to become more like the ‘enemy’ than we’d like; or maybe terrorists weren’t all that good and we should have just done nothing at all. So there are a lot of unknowns or uncertainties, and, in reference to the last comment, unknowns bring about fear. Once the problem is understood, I believe the fear will subside, and classes like this will facilitate that process.

Tolba Overall, when thinking about the takeaways from the lectures, I definitely see what Prof. Maurer alluded to about ‘cyber and national security’ being a multidisciplinary problem, and a tough one. Another very important point I concluded along the journey is that awareness is key: on the government side, turning attention to the severity of the problem, and on the public side, educating people about the real threats versus the ones people tend to obsess about (like radiological). Finally, what was disturbing to me is the attitude of some speakers about racial targeting and supporting policies like torture, with its collateral practices like secret detention, secret interrogations, ‘rendition,’ and other artifacts that damage the US reputation domestically and abroad and, in my humble opinion, are a threat to national security rather than a defense measure. Maybe the administration is finally coming to terms with this fact, as indicated by Condoleezza Rice’s latest promises/concessions in her visit to Europe and the White House slowly backing away from possibly vetoing anti-torture legislation.

Katie Veazey I enjoyed the first part of the course much more than the second half. As a public policy student, I found some of the cybersecurity lectures a bit dense and technical. But I did learn a lot about the real threats involved in having much of the nation's information available over the Internet. Working with computer science students on the midterm project gave me a whole new perspective on how cybersecurity and national security are related. I think it will take all different kinds of minds and perspectives to solve the terrorist problem in the U.S. - from the policy mind to the one who understands how to hack into systems - and both need to work to secure our nation in the future. The most important thing I will take away from this class is the fact that we will most likely face a terrorist attack in the future; the question is when. I think no matter how much technical planning and collaboration occurs there will be an attack, hopefully on a small scale, if we continue to attempt to close the holes in our nation's security by creating policies and advancing technology.

Aiqbal One other interesting concept is the similarity in defense policies in relation to very different forms of terrorism. Rescorla's lecture and his dual-use of the word "virus" really hit this concept on the nose. The government response to a biological virus attack and cyber virus attack are fairly similar, and quite reactionary, as he explained. The population's fear of both exists. The extent of the impact of both - in terms of individuals affected - can be paralleled. These are interesting ideas for us to consider when it comes to policy making.


  • --Dennis Galvin 08:05, 12 December 2005 (PST) - I keep thinking "Just wait until the technology gets there. Then we'll be able to ...." One thing that cropped up (repeatedly) to debunk that notion was that policy defines how the technology you need to solve a problem is to be applied. I'm reminded of H.L. Mencken's aphorism (from 1917): "There is always an easy solution to every human problem — neat, plausible and wrong." You can either replace "easy" with "technological" or insert "technological" after "easy." Technology comes with its own set of problems, which must then be solved. Then I thought of Keasling's lecture (clearly one of the very best lectures). It encapsulates everything: 1) science working toward solutions to medical and biological issues; 2) use of computational technology; 3) market forces driving down the price; 4) the potential for bad guys to get the technology and turn it against society; and 5) no easy answer. So the chicken and egg debate is moot, Mencken is right ... in all things human, we have plenty of hopes for easy solutions (usually technologically mediated) to vexing problems of our humanity, but those solutions can only be temporary.