Lessons Learned

Opening Comments by Professor Maurer reposted from Lecture 6

In order to do policy, we need to find the salient facts about a situation and build models. All disciplines do this, including for example physics. Naive people sometimes argue that the world is more complicated than any model and then advance whatever opinion feels right to them. This is wrong. The only valid objection is that we have not picked the most salient facts, or that our facts are wrong, or that they have no relevance for policy.

1. HOW MUCH SECURITY IS TOO MUCH? For any given level of technology, consumers must choose between security and power/convenience. In a normal world, we let consumers make the choice: They know their preferences and are often best informed about the options. This model seems reasonable for, say, corporate IT departments, which must decide whether to prevent problems in advance or clean them up afterward. Certainly, it would be perfectly logical for society to decide that after-the-fact cleanup was sometimes the lowest-cost way to proceed.

On the other hand, markets can be imperfect. Consumers may be ignorant or face high transaction costs that make meaningful choice impossible. Alternatively, there may be externalities: For example, consumers may not care if their machines become zombies at 3:00 am, although the rest of us do. For now, these are only hypotheses. Consumer sovereignty could be the best model; any objections should invoke evidence.

2. IS MORE SECURITY DESIRABLE? In the current world, the Wall Street Journal finds it relatively easy to break into Al Qaida laptops, Steve Maurer has no chance of breaking into Ed Lazowska's laptop, and the FBI has to ask a judge's permission to break into Karl Rove's laptop. All of these sound like good results. Privacy advocates always want to maximize privacy, and privacy engineers understandably take privacy as a working goal. But would we really all be happier if, for example, on-the-fly encryption became a reality?

3. DIMINISHING RETURNS. "Trust" without "due diligence" has limited power. But if we tie it to physical security at a few points, things get better. For example, I believe that I have Verisign's correct URL because it came inside a machine in a shrink-wrapped container from Dell. Similarly, I believe Verisign did due diligence because they need reputation to compete.

Of course, I might believe that the current system of security was inadequate. In that case, I should find the lowest cost provider to improve matters. [TAKE NOTE - GENERAL PRINCIPLE!] That provider might be Verisign, in which case I could change their behavior by enacting new liability statutes or making them buy insurance.

I could also decide that Verisign was suffering from market failures. For example, they could have "agency problems" -- i.e., lie to me about the amount of due diligence they've done. This would be analogous to the bank that lies to me about the size of their assets and would have the same solution (regulation). Alternatively, Verisign could be a natural monopolist -- the costs of due diligence get averaged over the number of users, which means that the company with the most users also has the lowest costs. If so, I can't depend on competition. Back to regulation or insurance...

4. RETURN OF THE INVISIBLE MAN. The fact that Microsoft can catch hackers at the bargain rate of $50K per perp has obvious parallels to the use of rewards in terrorism. But is the analogy deeper? Hackers, like terrorists, depend on a broader community of sympathizers to stay invisible. Are hackers more or less invisible than the average terrorist?

5. NAIVE USERS. I am struck by how often the problem is user error. Have we pushed on this as hard as other areas? The natural suspicion is that NSF pays people to develop security code and that this area is now deep into diminishing returns. It might be cheaper and easier to spend some money on general education.

6. WHY EUROPE? Geoff says that Europe has more crime and more money spent on defense. If both these facts are true, then the natural assumption is that European criminals have better social support networks. As the US crime writer Raymond Chandler once wrote, "We're a big, rough, wild people. Crime is the price we pay for that. Organized crime is the price we pay for being organized."

7. INDUSTRIAL-SCALE HACKING? Economics is all about scarcity. The main reason that terrorists don't get ordinary nuclear/chemical/bio WMD is that these technologies require huge investments. The question arises, therefore, what terrorists/states can do with 1) ordinary script-kiddie stuff, 2) one or two smart people, and 3) hundreds of smart people. For example, which category does "taking down the Internet" belong to?

8. VULNERABILITIES AND INCENTIVES. Geoff's observation that 50% of hacking involves basic programming errors (e.g., buffer overflows) suggests that incentives are powerful. In this course, we distinguish between engineering viewpoints -- what can I do if everyone follows instructions (which may be lengthy, burdensome, or complex)? -- and social science viewpoints -- how can I get people to behave as they ought? Geoff's observation suggests that incentive issues are at least co-equal with basic technical challenges. The fact that the financial industry has been able to define threats better than the CS community suggests that different threats do, in fact, receive different priorities.
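
As a concrete illustration of the kind of basic programming error Geoff has in mind, here is a minimal C sketch of a buffer overflow; the function names and buffer size are hypothetical and not drawn from any real product.

 /* Hypothetical example of the "basic programming error" category:
    an unchecked copy into a fixed-size stack buffer. */
 #include <stdio.h>
 #include <string.h>
 
 void greet_unsafe(const char *name) {
     char buf[16];
     strcpy(buf, name);            /* no length check: input longer than 15
                                      characters overwrites adjacent memory */
     printf("Hello, %s\n", buf);
 }
 
 void greet_safer(const char *name) {
     char buf[16];
     snprintf(buf, sizeof buf, "%s", name);   /* bounded copy, always terminated */
     printf("Hello, %s\n", buf);
 }
 
 int main(void) {
     greet_safer("world");
     /* Calling greet_unsafe() with a long, attacker-controlled string would
        corrupt the stack and, in the worst case, let the attacker run code. */
     return 0;
 }

The fix is a one-line change; the incentives question is why, half the time, that change never gets made.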

9. ARE SOME VULNERABILITIES WORSE THAN OTHERS? CERT says that 85% of hacking involves weaknesses other than encryption. But how does this map onto the vulnerabilities we worry about on the Web? For example, you might imagine that encryption takes care of credit card numbers but is more or less irrelevant to protecting web pages. The mapping matters, but what do we know about it?

10. DEALING WITH TRANSACTION COSTS. Geoff notes that Internet Explorer has 100 possible security settings. Presumably, the average consumer is not equipped to pick any of them, so "none" ends up being the effective default choice. On the other hand, society could change the default to something closer to what we believe consumers would actually pick with zero transaction costs and full information. I imagine that Microsoft would be reluctant to make that decision for the rest of us. But suppose that the country's CS professors announced the "right" judgment and let people follow it if they wanted to? Suppose Congress set a default choice?

11. LESS THAN PERFECT SECURITY. There is presumably a calculus on how good security should be. For example, most secrets have a relatively short shelf life, beyond which it doesn't matter whether you expect somebody to crack them five years from now. A few -- how to make H-bombs -- can last fifty years or more.
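
A back-of-the-envelope version of that calculus, written in C with entirely made-up numbers, might compare the expected brute-force time for a key against the shelf life of the secret it protects.

 /* Hypothetical shelf-life calculation; every number here is an assumption
    chosen only for illustration. */
 #include <stdio.h>
 #include <math.h>
 
 int main(void) {
     double guesses_per_second = 1e12;   /* assumed attacker capability          */
     double seconds_per_year   = 3.16e7;
     int    key_bits           = 80;     /* hypothetical key length              */
     double shelf_life_years   = 5.0;    /* how long the secret actually matters */
 
     /* A brute-force search covers half the key space on average. */
     double expected_years = pow(2.0, key_bits - 1)
                             / guesses_per_second / seconds_per_year;
 
     printf("Expected time to crack: %.0f years\n", expected_years);
     printf("Secret shelf life:      %.0f years\n", shelf_life_years);
     printf("Verdict: %s\n", expected_years > shelf_life_years
                             ? "good enough for this secret"
                             : "not good enough for this secret");
     return 0;
 }

On these assumptions the key outlives a five-year secret by many orders of magnitude; the H-bomb case simply pushes the shelf life to fifty years or more, and the same comparison still decides the question.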

12. MARKET TESTS FOR SECURITY. The idea of setting prizes for security is interesting, since it allows you to dial up ordinary "tested by time" confidence for special systems. You would also imagine that this was a good way for computer companies to reassure consumers.

Lessons Learned Discussion

  • Chris Fleizach - Terrorism is hard to do right. Really big terrorist acts are really hard to pull off, especially in countries with modernized legal systems, law enforcement agencies, and people who are suspicious. This does not apply to cyberterrorism, though.
  • Chris Fleizach - Disclosing vulnerabilities in software may be akin to disclosing blueprints of the Pentagon. It may just be a bad idea.
  • Aiqbal - Adding to these two points, today's terrorism is very different from yesterday's: terrorism today is directed at an ideal or at huge groups of people, instead of specific individuals. Also, the technology being used to formulate the attacks is incredibly advanced. Furthermore, as we have witnessed, today's terrorism can impact us in our home base; it is not something that affects us only when we are outside our homeland. Each of these developments forces us to take a very different view of security.
  • Genevieve Orchard - Securing the Internet is a formidable, if not impossible, task. One of the best defenses we have against cyber attacks, as with non-cyber crimes, is deterrence through fear of being caught. The current deterrence value is almost zero, due to the anonymity obtainable by an Internet user and thus the difficulty of tracking attackers down. To get to the point where we can reliably assign blame for cyber attacks will require significant *universal* effort and cooperation - but is this realistically feasible?
  • Chris DuPuis - Society never takes measures to mitigate a threat until the threat has resulted in a disaster.
  • Trevor Nguyen - Indeed, there is a "war" against terrorism. But, will the war ever end? And, who will win the war? Realistically, there will not be a day in the future when newspaper headlines proclaim in bold fonts, "War over!" Terrorism will continue, but in varying intensities and degrees of damage. As scholars and academicians, we must take any and all perspectives we can in order to dig our trenches, build our watchtowers, keep guard and revise our war plans. We must keep on fighting to survive another day.

Altin Dastmalchi To follow, terrorism is a hot word that has been used to reshape countries' foreign policies, including the US's. I think that the war in Iraq will eventually just have to be left alone, allowing our troops to come home and the Iraqis to set up shop. This, however, will not be the end of the war on terror. The war on terror will only be set to the side once another era of fear is introduced (i.e., from McCarthyism to terrorism to ???). In all honesty, I'm not saying that terrorism is not a problem; I'm just saying that it is being used as a fear tactic, and I do not see any change in policy until a new era is introduced to society.

  • Yi-Kai - There's another paper by Ross Anderson, Why Information Security is Hard - An Economic Perspective, that has some good insights. It suggests that the design of computer security systems is often motivated by other goals, such as increasing market share and shifting risk onto the users. This explains why many security products are so ineffective. It's surprisingly hard to get the computer industry's self-interest to align with the needs of ordinary users. The same lesson probably applies to many other kinds of security (for instance, protecting critical infrastructure).

Barry Demchak As I got an education on CBRN and terrorism, I realized that most people don't have access to this information. And many people, if they did, still wouldn't be able to draw conclusions from it, much less have any effect on policy. In fact, most people get their information from the press, and along with it they are also fed their perspective. The problem is that the press itself has a poor understanding of these issues, and propagates hysterical or politically loaded views ... and this effect itself becomes a policy driver. As such, the press becomes a megaphone for the terrorists. In fact, the press could be considered the objective of an attack. We need to spend a lot more time on enlisting the press in ways to neutralize (instead of magnify) terrorism's value.

  • Pravin Mittal - People are driven by emotion (fear) rather than facts. This is the conclusion I came to as I got educated on the facts of terrorism. Unfortunately, the media in the United States, more concerned about ratings (popularity, which ties directly to advertising revenue), tells people what they want to hear rather than the facts. This reminds me of an interesting experiment John B. Watson did around 1920, in which he concluded that "fear, not love, is the most powerful tool for conditioning a person's social and emotional life." Each time an infant (Little Albert) reached out in delight to touch a bunny, Watson used a claw hammer to make a startling noise. Soon the infant recoiled every time he saw the rabbit, even when there was no ear-splitting clang accompanying it. Unfortunately, fear and hysteria about terrorism are being used by politicians to drive agendas at the cost of the real national issues of the present.
  • Pravin Mittal - The other useful insight I took from the class came during Lampson's lecture. He noted that it is not the lock but the fear of getting caught and prosecuted that deters the criminal. I strongly believe that if we invest more in devising systems to track down criminals, rather than just trying to develop software without any security bugs (which I think is impossible), the problem of cybersecurity can be solved more effectively.
  • Aiqbal - It is interesting that we bring up the media. One concept the course brought to light was the interplay between the withholding and the free dissemination of information in relation to security. Is it good to put security practices out in the open? Should the world know what our interrogation practices are, how our security systems work, etc.? From a journalistic standpoint, clearly, the answer is yes. From a security standpoint, the answer might be no. Terrorists should not be aware of what to expect when they are a) committing their acts or b) facing interrogation. If they have already read the CIA Manual and Army Field Manual online, they will already comprehend the nature of interrogation before being subject to it; this will make the security practices ineffective. I have said this before, and I will say it again: as a journalist I have a challenge withholding information; my duty is to find it and report it. As a policy maker or security person, withholding information is a crucial need and, from my standpoint, it may be necessary for my safety and the safety of others. Regardless of this debate, isn't it already too late - haven't terrorists already seen a good portion of our security policies and structures with the help of search engine technology like Google's?

Brian McGuire The technical portion of the class taught me how significant the efforts at some companies are to secure their IT infrastructures, and that despite that effort, they aren't confident in their ability to protect their systems from attack. So it seems to be a continuous struggle to catch up in a game they can never get ahead in. I suspect the exact same talks will be given years from now, only with a new list of technologies and terms involved. On the policy side I found myself being pulled in three directions. Maybe the US was failing to protect itself from terrorists because the problem is so complex and there are so many points at which we can be attacked; or maybe we were going way too far and had managed to cross a line to become more like the 'enemy' than we'd like; or maybe terrorists weren't all that good and we should have just done nothing at all. So there are a lot of unknowns or uncertainties, and, in reference to the last comment, unknowns bring about fear. Once the problem is understood, I believe the fear will subside, and classes like this will facilitate that process.

Tolba Overall, when thinking about the takeaways from the lectures, I definitely see what Prof. Maurer alluded to about 'cyber and national security' being a multidisciplinary problem, and a tough one. Another very important point I concluded along the journey is that awareness is key: on the government side, turning attention to the severity of the problem, and on the public side, educating people about the real threats versus the ones people tend to obsess about (like radiological). Finally, what was disturbing to me is the attitude of some speakers about racial targeting and their support for policies like torture, with its collateral practices like secret detention, secret interrogations, 'rendition', and other artifacts that damage the US reputation domestically and abroad and, in my humble opinion, are a threat to national security rather than a defense measure. Maybe the administration is finally coming to terms with this fact, as indicated by Condoleezza Rice's latest promises/concessions in her visit to Europe and the White House slowly backing away from possibly vetoing anti-torture legislation.

Katie Veazey I enjoyed the first part of the course much more than the second half. As a public policy student, I found some of the cybersecurity lectures a bit dense and technical. But I did learn a lot about the real threats involved in having so much of the nation's information available over the internet. Working with computer science students on the midterm project gave me a whole new perspective on how cybersecurity and national security are related. I think it will take all different kinds of minds and perspectives to solve the terrorist problem in the U.S.; from the policy mind to the one who understands how to hack into systems, both need to work together to secure our nation in the future. The most important thing I will take away from this class is the fact that we will most likely face a terrorist attack in the future; the question is when. I think no matter how much technical planning and collaboration occurs there will be an attack, hopefully on a small scale, if we continue to attempt to close the holes in our nation's security system by creating policies and advancing technology.

Aiqbal One other interesting concept is the similarity in defense policies in relation to very different forms of terrorism. Rescorla's lecture and his dual use of the word "virus" really hit this concept on the nose. The government responses to a biological virus attack and a cyber virus attack are fairly similar, and quite reactionary, as he explained. The population's fear of both exists. The extent of the impact of both - in terms of individuals affected - can be paralleled. These are interesting ideas for us to consider when it comes to policy making.


--Dennis Galvin 08:05, 12 December 2005 (PST) - I keep thinking "Just wait until the technology gets there. Then we'll be able to ...." One thing that cropped up (repeatedly) to debunk that notion was that policy defines how the technology you need to solve a problem is to be applied. I'm reminded of H.L. Mencken's aphorism (from 1917): "There is always an easy solution to every human problem — neat, plausible and wrong." You can either replace "easy" with "technological" or insert "technological" after "easy." Technology comes with its own set of problems, which must then be solved. Then I thought of Keasling's lecture (clearly one of the very best lectures). It encapsulates everything: 1) science working toward solutions to medical and biological issues; 2) use of computational technology; 3) market forces driving down the price; 4) the potential for bad guys to get the technology and turn it against society; and 5) no easy answer. So the chicken-and-egg debate is moot; Mencken is right ... in all things human, we have plenty of hopes for easy solutions (usually technologically mediated) to vexing problems of our humanity, but those solutions can only be temporary.

--Joe Xavier 14:35, 12 December 2005 (PST) - Some of the things that I took away from the class, captured as sound bites:
- Most machines are compromised. We're awash in malware and just don't know it yet.
- Most companies faced with either shipping software quickly or spending more time fixing security holes will invariably go with the business rule: ship quicker.
- A security issue isn't a security concern until it's discovered and made public. The "underground" network where vulnerability information is shared is so closed that the average consumer probably hears about only 10% of the issues.
- Making systems more secure by default can almost make the system unusable.
- The same principles I've observed above apply to terrorist threats and activities. A cyber-attack is not sci-fi; it's just a wave waiting to launch itself. Once it does hit, we'll probably have a "fix-everything" phase similar to the Y2K issue.
- Working on cyber-crime is more profitable than working on making systems more secure :)

--Jeff Bilger - Some thoughts and lessons learned from the class

  • Microsoft is spending lots of time, money, and effort on security, but what about other companies as well as the government?
  • It was interesting to note that over time, hackers have been slowly exploiting "up the stack". This is not a good trend when you hear so much about the future of web services.
  • Why does society put up with buggy software? Are Mechanical, Aerospace, and Civil Engineers held to a higher standard or is their domain less affected by complexity?
  • Usability and the different modes of interacting with computers have evolved little compared to other areas of Computer Science. This is underscored by the fact that most modern UIs are based on ideas developed within Engelbart's Augment Project in the '60s!


--pdavis USER ERROR One thing that keeps leaping out at me is the devastatingly central place of user error in all this. A machine is a machine. No matter how great our security technology is, some idiot who doesn’t understand the technology they’re using (me, for example) can screw it up with incredible ease, often without even realizing they’ve done anything wrong.

While user error is a pitfall with all technology--a car is unsafe if you can’t drive--the computer case is unique because of the greater separation between user and technology. People who don’t know computer science (again, people like… me) tend to regard it as a sort of magic, something so far beyond their comprehension as to be not even worth an attempt at understanding. When something goes wrong, it’s mystifying. Or it’s the fault of the geek who built the thing. I will admit right here that even after three months of this class I haven’t figured out how my firewall works, or whether its settings are correct. Yet if my computer were invaded tonight, I would probably either blame Symantec or just resign myself to the idea that at any time, a talented hacker can have his or her way with my machine. They know the field, I don’t stand a chance, therefore I hold no responsibility. It would take a while for it to occur to me that I might have been able to defend myself with software already on my computer, if only I’d known which boxes to check.

This sort of defeatist attitude towards computer technology is a huge problem. What can we do to fix it? Education is obviously going to be a big part of the solution. Many companies already train employees on cybersecurity; schools should do the same. I’m almost tempted to say some sort of tutorial should be mandatory, almost like a licensing process, when you buy a modem. Of course, you can lead a horse to water, but… Until people really experience the damages caused by their negligence AND are able to recognize that negligence to be the cause of the damage, I'm afraid they simply won’t be convinced to care.


--pdavis INVISIBLE MAN I’m not sure the “invisible man” concept we’ve applied to terrorists in this course is going to work for hackers and cyber-criminals, for a couple of reasons. First, we’ve mentioned the increased anonymity afforded by the internet. A lot of times, people in the “community” who might like to turn a guy in don’t have any better idea of who he is than law enforcement does. Second, and I think more to the point, when the community turns on terrorists, it is because something, either one event or just the incessant violence, is so affecting and horrible that the people can no longer see the terrorists as being “on their side” (take, for example, the IRA and the 1987 Enniskillen massacre). With cyber-crime, where the damages are mostly monetary, it will be far more difficult for anyone to do anything horrifying enough to create a similar reaction. Exceptions might be messing with the power grid or 911 call centers, but it’s going to take something of that magnitude (i.e., visible human suffering).

--NAseef Nilkund Aseef I was under the impression that terrorism and cyber security were a fairly new topic. I believed that the threats posed were something we are concerned about only today and that they were never an issue before. While the latter might be partly true, the former is definitely not, and this class helped me understand that. If we compare today's hackers' profiles, incentives, and activities against those of the early hackers, we see a great deal of similarity. I was very convinced when, in one of the lectures, a parallel was drawn with the Hawala method of transferring money in the early days, which is still a common way of funneling money to terrorist organizations for their illegal activities. Also, speaker Brian Del Monte compared the traits of a terrorist to those of the followers of the goddess Kali.

Another myth clarified was regarding the use of weapons of mass destruction. The very thought of the aftermath of a biological/chemical/radiological weapon was nerve-wracking to me. But from the lectures I gathered that creating such a weapon is really a very difficult task (if not impossible), and it would take a lot of effort for terrorist organizations to build up the money, the technical expertise, and in some cases the infrastructure needed to build and launch such an attack.

Another lesson learned, one with a broader, universal appeal, is that of tolerance of diversity. If we respected one another's religious lives within the unique context of the society and state we live in, we would kill the hatred for one another that seems to be the root cause of the birth of many terrorist organizations.

Steve Crockett/SC I felt the class was very interesting, even all the computer stuff that was way over my head. It seems like a lot of the defensive strategies in the cyberworld are rooted in the classic attack and defense methods of the physical world. The big difference between the two, in my opinion, is that the applied strategies of the physical world are often influenced by fear, emotional feelings, and political/legal concerns. The cyber battlefield seems pure: the strategies for safeguarding systems from hackers and the like appear to be based primarily (if not completely) on analytical judgments and technical knowledge.

  • --Parvez Anandam 01:26, 17 December 2005 (PST): The Public and the Private.

I love the Internet as it is today because of the anonymity it provides. I cherish the fact that activities that were once public, such as researching a topic in a library, can now be private because of that anonymity. I also deeply value the ability to voice an opinion without retribution. This possibility of anonymity is, however, a big hindrance when transactions of a public nature are necessary, such as in commerce. This dichotomy between the public and the private, between the accountable and the unaccountable, is what gives the Internet its split personality. It is crucial to appreciate that both sides are infinitely valuable. Butler Lampson expounded on this most eloquently in his lecture. Security is based on accountability; as Lampson said, "security relies on deterrence more than locks". Therefore, if the same Internet is a common platform for accountability and anonymity, this implies that the Internet cannot, and more importantly should not, be secured.

Any supranational cyber policy must clearly define the public sphere of the Internet and attempt to secure only it. The private sphere should be left alone, lawless, therefore free.


--Jack Menzel - Having just finished our group's paper on cyber security and law, I actually came to quite different conclusions than Parvez did above. A "lawless" internet would not be of any value. Freedom and legislation are not mutually exclusive concepts. Imagine that there was a place where complete anonymity was guaranteed, but on the way there thieves were given the "freedom" to steal your credit card information and co-opt your machine into their army of zombies; suddenly your anonymous internet is no longer so anonymous. The crux of this problem is not that we need a lawless internet in order to allow anonymity, but that we need to better define the concept of what is "public" and "private" on the internet. We need to refine our protocols such that both spaces are clearly defined. There needs to be a secure "space" for dealing with sensitive information and a public space where anonymity can be preserved (as was the gist of the red/green concept of dividing an operating system). While I agree that "security relies on deterrence more than locks", we currently don't have a lot of locks on the internet, and we have even less accountability; without accountability, even the strongest legislative deterrence is worthless. Looking at "the internet" as a single monolithic entity that must be either anonymous or accountable is not seeing the entire picture. The internet is a collection of hundreds of protocols, and it will be through the refinement of these protocols that we will be able to define a future landscape with safe public and private spaces.

And on another note, concerning more generally what I, as a computer science student, took away from the class: I would have to say that the first series of lectures, on the history of terrorism and the motivation of current terrorism, was of particular interest, since the motivations of terrorists are constantly being muddled by the current administration's preposterous definition: "any actions by those who hate freedom".

Marty Lyons, UW -- In trying to condense all that we covered into a 15-minute summary, my takeaway is that security, in physical or electronic terms, is directly related to the target environment. People often cite El Al as having the best airline security in the world. They're still using metal utensils for meals, so they've effectively moved their security perimeter far in advance of the actual aircraft. The questioning and profiling done by the security teams would be considered inappropriate by some, rude by many. But the economic trade-off (cheap questioning versus expensive in-aircraft deterrents) seems to make sense.

So how do we translate good security -- for physical assets or electronic ones -- into the context of a free and open society, without restricting commerce, liberty, and the rights guaranteed by law? Whether it's screening for dangerous cargo (chemical, nuclear, biological), detecting communications (voice, data) related to terrorism, countering electronic attacks (Internet, telecom), or stopping physical damage (infrastructure, buildings, people), there is a gray area of uncertainty between detection, enforcement, and prosecution. We covered all of this in the course, and it's obvious that there are a lot of smart people thinking about solutions to these problems.

Based on all the lectures, readings, and research, I'm leaning towards the theory that if the problem is rooted in the physical world, we've got a good chance at security without infringing on civil liberties too extensively. In the electronic realm, the problem is much, much more complex, and as we move increasingly towards putting information online, this is where issues of national sovereignty and law come into play. We've just seen an example of this with the (apparently) unauthorized telecommunications interceptions of U.S. citizens, done without warrants. The executive branch is claiming authority existed for this activity, while the judicial view seems counter to that assertion.

What if an intercept prevented the detonation of a CBRN device? Most would argue that the intercept was justified. But if nothing happened, those who had their conversations intercepted [1] are likely of a different opinion.

This is the balancing act we're going to be faced with for a long time to come. Greater openness and accountability will help lead to better solutions, and resources like this class are probably among the best ways to get people thinking about them.