Talk:Lecture 13


Discovering our Weaknesses (not really lecture-related)

--Gorchard 00:41, 24 November 2005 (PST) - I had an interesting thought while watching the PBS "Cyber-War" program that someone linked to back in the discussion page of lecture 5 or so. The cyber attack we want to avoid at all costs is a terrorist attack aimed at taking down power grids, communications, or other critical infrastructure. So perhaps the non-terrorist cyber attacks of recent years, especially those created 'just for fun', have actually benefited us more than they've harmed us. They alerted us to the extreme vulnerabilities of computers on the internet and the damage that could be done, and in response we've become much more aware and have started to design systems and implement measures to make such attacks more difficult. One argument against that is that those attacks have also alerted terrorists to the attack opportunities available via the internet, but I still feel there is a 'bright side' to the attacks and viruses of recent years that might be overlooked.

High-value vulnerabilities vs. low-value vulnerabilities

Chris Fleizach - One issue that Eric Rescorla didn't discuss is classifying vulnerabilities as critical versus low priority. His research showed very little trend toward a reduction in the overall number of vulnerabilities, but how many of those vulnerabilities were major issues? For example, when the next buffer overflow in CUPS (a printing server) is found in RedHat, one that lets a user perform a DoS on printing services, does it really affect that many people? Maybe no one noticed it before because no one cares that much except the security researchers looking to increase the number of vulnerabilities they find.

Another point that was brought up briefly: if the total number of investigators is increasing, does that also imply an increase in vulnerabilities found over some longer time span? His model assumes an infinite number of possible vulnerabilities, which would mean the number of vulnerabilities found should go up as more researchers enter the field. But if the number of researchers is going up while the number of bugs found stays constant (or goes down), then the quality of software might actually be improving.
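To make that inference concrete, here is a toy simulation of the finite-pool case (the pool size, probe counts, and researcher growth schedule below are invented for illustration; Rescorla's actual analysis is statistical, not this sketch). If bugs come from a finite pool, yearly finds flatten and then fall even as the researcher population doubles each year, whereas under an infinite-pool assumption finds would scale up along with the researchers:

 import random

 # Toy model (all parameters invented for illustration): a codebase has
 # LOCATIONS places to probe, BUGS of which hide a latent vulnerability.
 # Each researcher makes PROBES random probes per year; a probe that
 # lands on an undiscovered bug reports (and removes) it.
 LOCATIONS = 10000
 BUGS = 500
 PROBES = 100

 def simulate(researcher_counts):
     remaining = BUGS
     for year, researchers in enumerate(researcher_counts, start=1):
         finds = 0
         for _ in range(researchers * PROBES):
             if random.random() < remaining / LOCATIONS:
                 finds += 1
                 remaining -= 1
         print("year %d: %d researchers -> %d finds (%d still latent)"
               % (year, researchers, finds, remaining))

 # Researchers double every year, yet finds level off and then drop
 # once the pool thins; with an infinite pool, finds would instead
 # double right along with the researcher population.
 simulate([10, 20, 40, 80, 160])

The flat-then-falling curve is what a depleting finite pool looks like, which is why a flat find rate amid a growing researcher population is consistent with the software actually getting better.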