Talk:Lecture 13


Rescorla's biological security parallel

--Aiqbal - I was very interested in Eric's parallel between cybersecurity and biological security. Patches, as he mentioned, are very difficult to create, and also take a very long time to distribute/implement (e.g. shot-based vaccines). I don't know if this parallel is used frequently in discussions of biological warfare, but it may make for a good talking point in terms of policy. Using the cybersecurity model provides an example that has impacted almost every individual: at some point or another, our personal computer or our company's computer or our e-mails have been hit by a virus. That isn't necessarily the case with a deadly biological virus.

SMM: The astonishing stability of our world against WMD has historically been tied to the very large capital costs needed to get into the game. In the 1950s, the US military did enough homework to figure out that a true WMD bioweapons capability (as opposed to killing a few dozen people) would be comparable to the atomic bomb project. On-line viruses, of course, cost nothing to make once you have the first one. The analogy would, however, work if you could genetically engineer viruses that were not just lethal but also wildly contagious. This is a much harder problem for modern genetic engineering, mostly because our models of epidemics aren't good enough to predict whether a totally new bug would cause a plague or just peter out. Nature creates new plagues a couple of times per century, but it has a big budget -- presumably, there are thousands of new bugs for every disease that takes off.
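To see why "causes a plague or just peters out" is such a knife edge, here is a toy branching-process sketch (an illustration only -- not a model from the lecture or the readings, and the R0 values are invented). Each case infects a Poisson-distributed number of new cases with mean R0, and we count how often an outbreak seeded by a single case simply dies out:

    # Toy Galton-Watson branching process: each case infects Poisson(R0)
    # new cases. Below R0 = 1 an outbreak essentially always peters out;
    # above 1 only some fraction of seeded outbreaks ever take off.
    # Illustration only -- not a model from the lecture; R0 values invented.
    import numpy as np

    def outbreak_peters_out(r0, rng, max_generations=100, explosion=10_000):
        """Return True if an outbreak seeded by one case goes extinct."""
        infected = 1
        for _ in range(max_generations):
            if infected == 0:
                return True            # the chain of transmission died out
            if infected > explosion:
                return False           # call a very large outbreak an epidemic
            infected = rng.poisson(r0, infected).sum()
        return False

    rng = np.random.default_rng(0)
    for r0 in (0.8, 1.5, 3.0):
        trials = 2000
        extinct = sum(outbreak_peters_out(r0, rng) for _ in range(trials))
        print(f"R0 = {r0}: {extinct / trials:.0%} of seeded outbreaks peter out")

Even when R0 is somewhat above 1, a substantial fraction of seeded outbreaks still fizzle by chance, which fits the point that nature's thousands of new bugs yield only a couple of plagues per century.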


Discovering our Weaknesses (not really lecture-related)

--Gorchard 00:41, 24 November 2005 (PST) - I had an interesting thought while watching the PBS "Cyber-War" program that someone linked to back in the discussion page of lecture 5 or so. The cyber attack that we want to avoid at all costs is a terrorist attack aimed at taking down power grids, communications, or other critical infrastructure. So perhaps non-terrorist cyber attacks of recent years, especially those created 'just for fun', have actually benefitted us more than they've harmed us. They have alerted us to the extreme vulnerabilities in computers on the internet and the possible damage that could be done...and in response we've become much more aware and started to design systems and implement measures to make such attacks more difficult. One argument against that might be that those attacks have also alerted terrorists to the attack opportunities available via the internet, but I feel there is a 'bright side' to attacks and viruses of recent years that might be overlooked.

High value vulnerabilities v. Low value

Chris Fleizach - One issue that wasn't discussed by Eric Rescorla is classifying vulnerabilities as critical versus low priority. His research showed very little in terms of a trend toward reducing the number of overall vulnerabilities, but how many of those vulnerabilities were major issues? For example, when the next buffer overflow in CUPS (a printing server) is found in RedHat that allows a user to perform a DoS on printing services, does it affect that many people? Maybe no one noticed it before because no one really cares that much except the security researchers looking to increase the number of vulnerabilities they find.

Another point that was brought up briefly is whether, if the total number of investigators is increasing, that also points to an increase in vulnerabilities found over some longer time span. His model assumes an infinite number of possible vulnerabilities, which would mean the number of vulnerabilities found should be going up as more researchers enter the field. But if the number of researchers is going up and the number of bugs found is holding at a constant rate (or going down), then it seems like the quality of software might be improving.


Eric Rescorla - Good questions. WRT the question of the severity of vulnerabilities, you do get similar results if you look at just the vulnerabilities that ICAT rated as severe, though it's not clear how much those ratings tell you, of course. The question about the number of researchers in the field is a good one, and one we have no good way to control for. On the other hand, we don't really know what the shape of that curve looks like, and it's confounded by the amount of attention the researchers pay to any individual piece of software.
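To make the pool-size intuition in this exchange concrete, here is a toy simulation (not Rescorla's actual model; the pool sizes and effort numbers are invented). Researchers make random probes against a pool of latent flaws, and only first discoveries count. With a finite pool the yearly find rate eventually falls even as effort grows, while an effectively infinite pool makes finds simply track effort:

    # Toy sketch of the finite- vs. effectively-infinite-pool intuition.
    # NOT Rescorla's model; the pool sizes and effort curve are invented.
    import random

    def yearly_finds(pool_size, effort_per_year, years, seed=0):
        """Count the *new* vulnerabilities found in each year."""
        rnd = random.Random(seed)
        found = set()
        finds = []
        for year in range(years):
            new = 0
            for _ in range(effort_per_year(year)):    # probes made this year
                flaw = rnd.randrange(pool_size)        # latent flaw a probe hits
                if flaw not in found:                  # rediscoveries don't count
                    found.add(flaw)
                    new += 1
            finds.append(new)
        return finds

    growing_effort = lambda year: 100 + 50 * year      # more researchers over time

    print("finite pool    :", yearly_finds(2_000, growing_effort, 10))
    print("'infinite' pool:", yearly_finds(10_000_000, growing_effort, 10))

So a flat or falling find rate in the face of growing research effort is consistent with either a shrinking effective pool or improving software quality -- the same ambiguity both comments above point to.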

Dirty Bombs

Sean West (2nd Year MPP/GSPP): One of the most interesting questions in homeland security today is that of the dirty bomb/radiological dispersal device. Most recently, we have heard of the threat of dirty bombs from Jose Padilla, accused of plotting to detonate an RDD in Chicago--only to be reclassified by the Bush Administration as a criminal rather than an enemy combatant in the last week. But the case of the dirty bomb raises a lot of questions about just how much more damaging one would be than a conventional bomb. Surely we should fear any type of bomb or attack on our society, but in people's minds there seems to be a dichotomy between conventional attacks and what are generally referred to as "weapons of mass destruction." But just as Prof Ackerman described in a previous lecture, dirty bombs are more a weapon of mass disruption than one of destruction. Graham Allison makes a similar point in ''Nuclear Terrorism'', yet society at large remains much more fearful of an RDD than a conventional attack. While I do not wish to argue that we should not fear an RDD, or that the threat of radiation spreading post-attack to first responders isn't a major issue, I am wondering how we should reflect this understanding of the true limits of damage of an RDD in our homeland security policy. Should we treat it like a "Conventional Attack Plus" or should we continue to place it alongside WMD? How much should anticipation of public panic, even in light of limited damage, inform our policy?

Chris Fleizach - Many of the lectures/readings in this course have repeatedly stressed the general ineffectiveness of a dirty bomb. One lecture even conjectured that Al Qaeda discouraged Padilla from attempting such an attack because it would not accomplish much. So the course work on this issue, instead of focusing on damage and mitigation, has shifted to perception. Whereas with chemical, biological and nuclear weapons the perception matches reality, with radiological devices the two are polar opposites. If it is only a matter of perception, certainly the public's mind can be changed. If that is done, then an RDD won't be able to act as a weapon of mass disruption, and we will have removed one more attack vector from terrorists. Why has the White House allowed this threat perception to continue to be taken out of context... Perhaps there is a usefulness in creating unnecessary anxiety for the purposes of extending power.

Lecture 13 Comments and Questions ...

Professor Maurer, this may have been implied by your lecture, but how much do you think the Cold War situation affected the public's attitude towards nuclear energy? It seems likely to me that the baby boomers having to do duck-and-cover drills and live through the Cuban Missile Crisis must have really added to a negative perception of nuclear energy and radioactive substances generally.

SMM: Sure. They also went to a lot of movies about giant ants wriggling out of the desert...

Professor Maurer, as a former litigator, what is your stance on tort reform? You seemed to suggest that plaintiff’s attorneys play a large role in maintaining the public's negative perception of nuclear energy.

SMM: I don't think you should justify tort reform as a way of controlling "dangerous speech." It may make sense on other grounds, but that's not really related to the course.

I understand that FEMA primarily handles natural disasters. However, if there were a dirty bomb attack, would they be tasked with the cleanup? If not, which agency would be? If so, and the Dept of Homeland Security believes that a radiological device is a real threat, we may all be in a bit of trouble, since I understand that FEMA is undergoing huge budget cuts and is actually being downgraded, organizationally speaking, within the Dept of Homeland Security to more of an office than an agency.

SMM: FEMA handles all the disasters; it was originally tasked with nuclear war. Moving around the organization boxes rarely means that the capabilities will disappear. FEMA hasn't worked very well, so it's at least reasonable to think that organizational reform could make things better.

Professor Maurer, you seemed to be pretty excited about benchmarking and mental models insofar as they might improve communication on the true dangers associated with nuclear energy. From what I understand, the new Energy Bill and the current leadership at the Dept of Energy are pushing nuclear energy -- has DOE been at the fore of developing/implementing such a PR campaign based on benchmarking and mental modeling? If so, do you have any examples?

SMM: No, academics have pushed this. Another place where the government should pay more attention?

Professor Maurer, when you mentioned that torture may have helped prevent the Philippines airliner attacks, were you referring to Bojinka or some other plot?

SMM: Yes, Bojinka. And that's controversial.

One of the students mentioned Israel and the Landau Commission, with respect to establishing limits on torture; can someone provide a bit more information on the Commission?

One of the students also commented on the notion of symmetry/symmetrical enforcement concerning the Geneva Conventions. According to Professor John Yoo, who authored many of the memos upon which the Bush administration developed its policies on interrogation techniques, that is the precise reason why enemy combatants should not find protection under the Conventions: because they are not, by definition, tied to any state, there is no way to ensure that they will reciprocate treatment.

Also, with respect to how far is too far, what are people's thoughts on the photos from Abu Ghraib? Personally, I am not so sure that using dogs to frighten people is over the line. With regard to humiliation, Professor Maurer, that came in rather high on your list of what was too far; to me humiliation seems acceptable -- were you classifying it in some specific form or not?

With regard to the White House's, and especially VP Cheney's, refusal to back interrogation reform and set concrete limits: from a purely international-political standpoint, would it not benefit the US to set such boundaries and then, if need be, simply break them later? I mean, we are signatories to the UN Charter, but that hasn't stopped us from using force in a manner that doesn't always coincide with it. Politically, I don't see how the country goes wrong by instituting, or claiming to institute, such regulations if it can, in reality, always pull away from them later. Any thoughts?

Concerning interrogation techniques, what is wrong with setting broad boundaries and leaving wide discretion to those responsible for interrogation -- sort of like any authorizing statute/administrative agency relationship? It's not clear to me that anything was wrong with what has existed previously. If people go over the line, they should be held responsible under the law, but I would think it makes sense to leave a fair amount of discretion to those few professionals who have the requisite knowledge to make informed decisions about the subject, and not to judges or legislators.

SMM: The problem is that you should draw lines before you decide whether people have gone over them. Otherwise three things happen. First, the wrong people make decisions. Interrogation is ultimately about values, and "professionals" have no advantage over legislators in this regard. Second, you end up punishing people in hindsight once the threat has gone away. This invites hypocrisy and unfairness. Third, if government agents understand that Monday Morning Quarterbacking is the rule, then they will stop well short of anything that might conceivably be criticized in a different political climate. So you end up with a government that is weaker than it would be with an honest, ex ante statement of what is and isn't ok to do.

Just as a reflection on Mr. Rescorla's point about black hats using old exploits to spread menace, I received two emails in the last couple of days, supposedly from the FBI, with an attached zip program. Looking online to see what this thing does, it turns out that it uses your own computer's zip capability, which you have to initiate, to zip your files and send the bundle off to whomever sent the email. To me, this seems rather old school.

Mr. Rescorla also backed up a point that I tried to make on the wiki last week: to the general public, so long as their individual system isn't impacted in a way that interferes with their use, they don't seem to care. As such, it seems ill-advised to spend so much on cyber security (apart from those systems that really contain sensitive information) if 1) there is no evidence that it is solving the problem and 2) there isn't widespread public demand; unless, as was pointed out, marketing is a significant motive.

Classified Testing Aimed at Enhancing Security

--Aiqbal - Not much of our discussion has focused on the reading on Joseph Hamilton and the classified plutonium testing. Firstly, it is appalling that such testing was allowed. I am sure many would agree that the injection of plutonium into a patient's body without that patient's approval is clearly a violation of their individual health and rights. However, given the time period in which Hamilton engaged in his activities, do you think he was warranted?

My comments regarding this article could well be transferred to the entire idea of security testing. As a journalist, I, of course, am forced to say that all testing information should be open to the public. My duty as a journalist is "sunshining," i.e. bringing all actions into the public's eye, especially when they relate to the citizenry's health and individual rights. However, from a security standpoint, doesn't classified research have to occur? Don't government officials have a duty to find solutions to threats, and at times, have to engage in research that is kept from the public - and nosy journalists? Of course, this is the premise of classification and the NSA. Is it right?

Most of us would probably say so. But most journalists would have difficulty accepting that argument.

Fear of radiological catastrophes - Cultural?

--Parvez Anandam 19:25, 25 November 2005 (PST): Prof. Maurer argues successfully that the US as a whole has an overly paranoid view of the risks of radiological disasters. Now, is there hope that this fear can be lessened?

It is useful to consider another society where this fear is not as deep-seated. That country would have to be a western, affluent society for the comparison to hold value. France is such a country. Even though the US and France would often like to believe that they couldn't be more divergent, they clearly have much in common.

In their adoption of nuclear power, however, France and the US are vastly different. France has embraced it: roughly 80% of its electricity is nuclear. The US hasn't: only about 20% of its electricity is nuclear (even though the US is the largest producer in the world in absolute terms). While there are numerous reasons for this, certainly one of the reasons has to be the population's perception of radiological releases (whether of the accidental or intentional variety).

The Energy Policy Act of 2005, signed into law in August, may be a sign of changing US perception. Therein is a strong thrust to augment the US nuclear energy program. One can hold hope for a feedback loop: seeing more nuclear reactors operational may serve to allay the public's fear of all things nuclear.

SMM: This is a very nice connection. You could further argue that reducing dissonance (in this case, the public's dislike for "gray area" risks) pushes society toward one of two options. In the US the public gets rid of the gray area by getting rid of power plants, which leaves it with a high fear of radioactivity. In the French model, the public gets plenty of power plants, but this makes people rationalize until the risk appears to be trivial. Neither model has much to do with the underlying science, but as Parvez implies, the intuition is that a middling model (a few plants, modest fear) is unstable.

One loophole in the argument is that the power plant situation says more about what their politicians think than the man on the street. So you can treat Parvez's point as a prediction that the average Frenchman isn't worried. Just for fun, I took a look on-line. No survey data so far, but the anecdotal evidence suggests that the French public is fine with nuclear -- and even sees the plants as tourist attractions. http://www.pbs.org/wgbh/pages/frontline/shows/reaction/etc/script.html.

Microsoft Vulnerability Trends

Jack Menzel -- One of the things I didn't hear discussed in Eric's talk that would probably account for some of the strange patterns he sees in the NT 4 bug fix trends, aside from the resources devoted to maintaining the operating system, is the corporate philosophy regarding which bugs actually get fixed. Speaking from my three years' experience working for Windows Serviceability producing hotfixes, there have been several dramatic shifts in the "bug bar". When I started in 2002 the trend was a simple and idealistic "bugs are BAD; if a customer reports a bug we must fix the bug", with little regard for the overall effect that the fix would have on the entire operating system. This, combined with the security push, had the result that although we fixed as many customer issues as we possibly could, there was a very high regression rate. Then, as everyone went through the difficulty of stabilizing the operating system for XP SP2 and W03 SP1 and a number of very visible recalls of security fixes, the what-to-fix pendulum began its swing back to a much more conservative bug bar. Currently fixes are very carefully scrutinized. If they are not security fixes, then they must have a strong business justification, have a very contained effect on the overall OS, and the fix itself should be made so as to minimize code churn.

Though this says little about how many bugs _actually_ exist in NT 4, because there is a single entity making the fixes, its philosophy about what is fixed, how, and when will heavily influence the bug fix trends.