Student Projects:CyberSecurity

Revision as of 03:25, 31 October 2004

Attack, Defense, and Responsibility

TODO

Over the next week we need to come up with:

# Our subtopics (I think the topics below are our major subtopics; we probably want to go into more detail).
# Some sources.
# Team organization. Not sure what is meant by this - we should follow up with Ed. Does this mean assigning roles like Editor, Researcher, Writer? Assigning writers for sections?

cmbenner: I think the best way to go about this is: assign each person to write a chapter according to his/her interests, set an approximate page limit for each chapter (8 pp?), and set a due date for everyone to turn in their chapters for the whole group to review and comment on; then we each go back and fix our sections according to each other's comments. Maybe someone writes an intro piece, or maybe one of the sections serves as the intro. I'm happy to be an overview-type editor--I was an editor at a policy journal before coming to UW and miss it.

So a possible next step would be for each person to write a paragraph on what their proposed chapter would look like (I set out what I'd like to do below, in the middle of the discussion); we can all review each other's paragraphs to see how the pieces fit, and then edit this into a one-page doc to turn in by next Monday. If we need to meet in person, I can come to MS instead of UW for class on Thursday and meet before or after.

=== Threat trends over the next 10 years ===

A little context - types of threats today:

# Virus attacks
# Denial of service
# Spyware/Adware
# Phishing attacks

No source for this, but I suspect that if the IT industry is successful in reducing the flaws in our software, we'll see more phishing and spyware attacks, which target the weakest link of the system: the end user. The profiles I've seen of black hats definitely show them going after the weakest link (I know I would :)). And if we strengthen the technology enough to make it stronger than the user, attackers will just focus on the user.

Which raises the question: what can the technology do to strengthen the end user? How do we protect their security tokens? Can we do better than passwords? How do we prevent phishing, or provide means of authentication like we have in the paper world, where watermarks and other devices prove authenticity? Few users understand security certificates.
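As a sketch of what "better than passwords" could look like, here's a toy challenge-response login in C. The idea (used by real protocols) is that the secret never crosses the wire, so a phished or sniffed exchange can't be replayed later. Everything here is illustrative: fnv1a() is a stand-in for a real keyed MAC such as HMAC-SHA-256, and FNV-1a is not cryptographically secure.

<pre>
/* Toy challenge-response login. The shared secret never crosses the
 * wire: the server sends a fresh random challenge, the client answers
 * with hash(secret || challenge), and the server checks the answer
 * locally. fnv1a() is a stand-in for a real keyed MAC (e.g.
 * HMAC-SHA-256); FNV-1a is NOT cryptographically secure. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>

static uint64_t fnv1a(const void *data, size_t len, uint64_t h)
{
    const unsigned char *p = data;
    for (size_t i = 0; i < len; i++) {
        h ^= p[i];
        h *= 1099511628211ULL;           /* FNV-1a 64-bit prime */
    }
    return h;
}

/* response = hash(secret || challenge) */
static uint64_t respond(const char *secret, uint64_t challenge)
{
    uint64_t h = fnv1a(secret, strlen(secret), 14695981039346656037ULL);
    return fnv1a(&challenge, sizeof challenge, h);
}

int main(void)
{
    const char *secret = "hunter2";      /* known to both ends, never sent */

    srand((unsigned)time(NULL));
    uint64_t challenge = ((uint64_t)rand() << 32) ^ (uint64_t)rand();

    uint64_t answer = respond(secret, challenge);  /* computed client-side */

    /* server-side check: a captured answer is useless for the next
     * login, because the next challenge will differ */
    puts(answer == respond(secret, challenge) ? "authenticated" : "rejected");
    return 0;
}
</pre>

A phisher who tricks the user out of one answer still can't log in tomorrow, which is more than can be said for a stolen password.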

=== Defense trends over the next 10 years ===

(This is not the main topic of this section.) Is it possible to create secure software?

How are developments like managed code (Java/.NET), buffer overflow protection, heap overflow protection, runtime debugging checks, the secure C runtime, etc., going to affect code reliability and security in the next decade?
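To make the stakes concrete, here is a minimal C example of the flaw class all of these mitigations target (function and buffer names are made up for illustration):

<pre>
/* The classic stack buffer overflow that stack cookies, the secure C
 * runtime, and managed-language bounds checks all aim at.
 * copy_unsafe() writes past buf[] whenever input exceeds 15 chars. */
#include <stdio.h>
#include <string.h>

static void copy_unsafe(const char *input)
{
    char buf[16];
    strcpy(buf, input);                   /* no length check: can smash the stack */
    printf("unsafe: %s\n", buf);
}

static void copy_safe(const char *input)
{
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);  /* copy is bounded by the buffer */
    buf[sizeof buf - 1] = '\0';           /* strncpy may not NUL-terminate */
    printf("safe:   %s\n", buf);
}

int main(void)
{
    copy_safe("a string considerably longer than sixteen bytes");
    /* copy_unsafe() with the same string corrupts the stack; a stack
     * cookie (/GS, -fstack-protector) turns that corruption into a
     * controlled abort, and a managed runtime would throw an
     * exception before any memory was overwritten. */
    return 0;
}
</pre>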

Can automated bug detection, both static analysis and runtime checking, help us? Prefix, Prefast, FxCop, Presharp, Lint, <external tools>.
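To give a flavor of what these checkers hunt for (the exact warnings differ from tool to tool), a few lines containing three defects that Lint/Prefast-class static analysis can flag without ever running the program:

<pre>
/* Three deliberate defects of the kind static analyzers flag. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p = malloc(10);
    /* 1. possible NULL dereference: malloc's result is unchecked */
    p[0] = 'x';

    char name[64];
    if (!fgets(name, sizeof name, stdin))
        return 1;
    /* 2. format-string vulnerability: user input used as the format */
    printf(name);                 /* should be printf("%s", name) */

    free(p);
    /* 3. use after free */
    printf("%c\n", p[0]);
    return 0;
}
</pre>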

Hardware support? NX, Palladium.
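NX (the no-execute page bit) is easy to demo. A sketch, assuming a POSIX system and x86-64 (Palladium is a different animal: attestation and sealed storage rather than page permissions): put machine code into a page mapped without execute permission and jump to it.

<pre>
/* NX in one page: copy a tiny function into memory mapped WITHOUT
 * PROT_EXEC and call it. With NX enforced this dies with SIGSEGV;
 * adding PROT_EXEC to the mmap() protection makes it return 42.
 * POSIX + x86-64 only; casting data to a function pointer is a demo
 * trick, not portable C. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* x86-64 machine code for: mov eax, 42; ret */
static const unsigned char payload[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

int main(void)
{
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;

    memcpy(page, payload, sizeof payload);
    int (*fn)(void) = (int (*)(void))page;

    printf("jumping to data...\n");
    printf("returned %d\n", fn());   /* unreachable on an NX-enforcing system */
    return 0;
}
</pre>

That failure mode is exactly what defeats the classic stack-smashing exploit, which relies on executing attacker-supplied bytes sitting in a data page.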

--Jack Richins 22:34, 29 Oct 2004 (PDT) Should we move the following to the next section as a discussion of responsibility? ->


cmbenner: Below seems to be a section on measuring software quality (comments follow the next two grafs):

Can we tell if software is bug-free? Or bug-free enough? When a car with an embedded computer crashes, is the software vendor or the car manufacturer to blame? (See the case of the Thai politician locked in a parked car at http://catless.ncl.ac.uk/Risks/22.73.html#subj4.1, or any of the "runaway car from hell" stories.)

Can a government body understand software security risks when the general population doesn't understand software, let alone software security? What about when we develop more complex systems (such as quantum computing)? "Computer people" today are like the shade-tree mechanics of the 1950s, many of whom wouldn't touch the engine of a Toyota Prius with a ten-foot wrench.

cmbenner: I think this section (the above two paragraphs) would make a good chapter in and of itself, and it's in keeping with some of the things I've been looking at and want to write on (if I'm not stepping on toes here: haven't discussed fully with everyone): 1. Can we measure security in any reasonable way and devise objective, replicable tests to do so? 2. Is it possible to create an underwriters' lab or other certifying body/government apparatus to rate software security, so that consumers understand how secure the software they are buying is? This could lead into an incentives discussion: if people understood security, they could demand it, and companies would be incentivized to produce it.
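On question 1, one concrete candidate for an "objective, replicable test" is fixed-seed fuzzing: every lab feeds the product the same pseudo-random malformed inputs and scores the failures identically. A toy harness, where parse_record() is a hypothetical stand-in for the code under test:

<pre>
/* Toy version of a replicable security test. A FIXED seed means any
 * lab, anywhere, generates the exact same 10,000 malformed inputs, so
 * results can be independently reproduced. parse_record() is a
 * hypothetical stand-in for the product under test; a real harness
 * would run it in a separate process and count crashes and hangs as
 * failures too. */
#include <stdio.h>
#include <stdlib.h>

/* code under test: must reject records shorter than 4 bytes or not
 * starting with 'R' */
static int parse_record(const unsigned char *buf, size_t len)
{
    if (len < 4 || buf[0] != 'R')
        return -1;
    return 0;
}

int main(void)
{
    srand(20041031);                 /* fixed seed: the test is replicable */
    int failures = 0;

    for (int trial = 0; trial < 10000; trial++) {
        unsigned char buf[64];
        size_t len = (size_t)rand() % sizeof buf;
        for (size_t i = 0; i < len; i++)
            buf[i] = (unsigned char)rand();

        if (parse_record(buf, len) == 0 && (len < 4 || buf[0] != 'R'))
            failures++;              /* accepted input it must reject */
    }

    printf("failures: %d / 10000\n", failures);
    return 0;
}
</pre>

Whether a score like "0 failures in 10,000 trials" maps onto anything a consumer label could honestly claim is, of course, the policy question.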


=== Who takes responsibility for security flaws and exploitations? ===

When the software is produced by companies? By individuals? If it's open source?
What kind of incentives are there for companies/open source groups to produce more reliable software? For example, if the originator of the code is responsible, is the threat of lawsuits when something goes wrong a strong enough incentive, or possibly even too strong? Can these incentives be improved?

Responsibility:

What responsibility does a software vendor have to users with regards to security flaws and exploitations? There's an implied warranty that the software will work for the purposes for which it was bought (details?). Does this implied warranty of functionality extend to OSS? Can a click-wrap license indemnify a commercial software vendor?

What responsibility does a software vendor have to society? If a single vendor holds a significant portion of the market, is there a responsibility to protect the users of this network? Think about commercially operated utilities such as telephone or cable television companies as comparators. There seems to be an a priori consensus that Microsoft Windows is insecure. Is it? If so, does Microsoft have a responsibility to protect society by fixing their software?

What about vendors who produce software to run (actual) utilities? (See http://catless.ncl.ac.uk/Risks/23.18.html for an example of General Electric's XA/21 system causing the Northeast Blackout.) Would an implied warranty of suitability be granted by a commercial vendor if that vendor leverages OSS as part of a flawed solution? Or would (for example) Linus Torvalds hold some responsibility for a kernel bug which causes a blackout? (Linux has issues with race conditions during asynchronous swapping of virtual memory pages, which is the same kind of bug that caused the XA/21 failure.)
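For anyone who hasn't met this bug class, a minimal pthreads illustration of a race condition (the pattern, not the actual XA/21 or Linux swap code): two threads update shared state without synchronization, and some updates are silently lost.

<pre>
/* The bug class in miniature: two threads do an unsynchronized
 * read-modify-write on shared state, so updates are lost, and a
 * different number of them each run. This is why race conditions
 * survive testing and surface years later under load.
 * Build: cc race.c -pthread */
#include <stdio.h>
#include <pthread.h>

#define ITERS 1000000
static long counter = 0;

static void *inc(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        counter++;        /* load, add, store: another thread can interleave */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* expected 2000000; typically prints less, nondeterministically.
     * Guarding counter++ with a pthread_mutex_t (or an atomic
     * increment) fixes it. */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
    return 0;
}
</pre>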

Incentives/disincentives:

Would anyone produce important but risky software if their company were potentially liable for all damages resulting from usage of that software? In the small case, would an OSS developer ever contribute code if he/she were to be held responsible for its usage? In the large case, would Diebold make voting machines if they were responsible for damages resulting from voting fraud, or from a voting machine failure that changed the outcome of a presidential election?

What incentives exist for companies or OSS contributors to create secure and reliable software? Is there a legal responsibility? Does an OSS contributor have a market incentive toward quality?

cmbenner: The legal liability question is interesting: currently, I believe, you can't successfully sue over a bad product unless that product causes injury or death, not just monetary loss. That's why MS doesn't get sued over all the viruses. I've heard legal academics, at least at UW, aren't all that interested in studying software liability because they think it's a non-issue, at least in the present day. It could get more interesting, though, to consider futuristic scenarios as more and more things go online: what if your alarm system were hooked up to the internet, a virus brought it down, and a bad guy, knowing that particular kind of alarm system was out, entered your house and killed you?

Should a government proscribe the use and/or development of a particular breed of software? Is a government which decides to use Windows responsible for Windows-based attacks on its systems (viruses, cracking, DDoS, etc.)? (See the case of the UK government using Windows for Warships: http://www.theregister.co.uk/2004/09/06/ams_goes_windows_for_warships/) If a government mandates OSS, is there any responsibility when a failure is experienced? Can a government know that its software choice is appropriate?