Student Projects:CyberSecurity


Attack, Defense, and Responsibility

TODO

Over the next week we need to come up with:

  1. our subtopics (I think the topics below are our major subtopics; we probably want to go into more detail).
  2. some sources
  3. team organization. Not sure what is meant by this - we should follow up with Ed. Does this mean assigning roles like Editor, Researcher, Writer? Assigning writers for sections?

Threat trends over the next 10 years

Types of threats today:

  1. Virus attacks
  2. Denial of service
  3. Spyware/Adware
  4. Phishing attacks

No source for this, but I suspect that if the IT industry succeeds in reducing the flaws in our software, we'll see more phishing and spyware attacks that target the weakest link in the system - the end user. The profiles I've seen of black hats show they go after the weakest link (I know I would :)), and if we strengthen the technology enough to make it stronger than the user, attackers will simply shift their focus to the user.

Which raises the question - what can the technology do to strengthen the end user? How do we protect their security tokens? Can we do better than passwords? How do we prevent phishing, or provide means of authentication like we have in the paper world, where watermarks and other devices prove authenticity? Few users understand security certificates.

Defense trends over the next 10 years

(This is not the main topic of this section) Is it possible to create secure software?

How are developments like managed code (Java/.NET), buffer overflow protection, heap overflow protection, runtime debugging checks, the secure C runtime, etc., going to affect code reliability and security in the next decade?
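
As a rough sketch (my own illustration, not from any source we've gathered), the classic defect that buffer overflow protection and the secure C runtime target is an unchecked copy into a fixed-size buffer. Managed code rules out the unsafe version by bounds-checking at runtime, and the secure C runtime (strcpy_s and friends) forces the caller to state the destination size; the portable strncpy variant below shows the same idea:

  /* Hypothetical illustration of the defect these mitigations target. */
  #include <stdio.h>
  #include <string.h>

  void unsafe_copy(const char *input) {
      char buf[16];
      strcpy(buf, input);    /* no bounds check: input longer than 15 chars
                                overruns the stack and can clobber the
                                return address */
      printf("%s\n", buf);
  }

  void bounded_copy(const char *input) {
      char buf[16];
      strncpy(buf, input, sizeof(buf) - 1);   /* copy at most 15 chars */
      buf[sizeof(buf) - 1] = '\0';            /* guarantee NUL termination */
      printf("%s\n", buf);
  }

  int main(void) {
      bounded_copy("a string comfortably longer than sixteen bytes");
      return 0;
  }

Stack cookies, heap checks, and NX pages don't remove the defect; they make it harder to turn an overrun like the one above into code execution, which is part of why the question is about trends rather than a single fix.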

Can automated bug detection - both static analysis and runtime checking - help us? Prefix, Prefast, FxCop, Presharp, Lint, <external tools>.
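
As a hypothetical example of what these tools report, consider a path-sensitive defect such as a possible NULL dereference; checkers in the Prefix/Prefast/Lint family warn about the missing check below without ever running the program, which a compiler alone usually won't do:

  #include <stdlib.h>

  /* Illustration only: the kind of defect static analysis flags. */
  int *make_counter(void) {
      int *p = malloc(sizeof *p);
      if (p == NULL)      /* remove this check and a static analyzer warns
                             that the write below may dereference NULL on
                             the malloc-failure path */
          return NULL;
      *p = 0;
      return p;
  }

  int main(void) {
      int *c = make_counter();
      free(c);            /* free(NULL) is safe, so no extra check needed */
      return 0;
  }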

Hardware support? NX, Palladium.

--Jack Richins 22:34, 29 Oct 2004 (PDT) Should we move the following to the next section as a discussion of responsibility? ->

Can we tell if software is bug-free? Or bug-free enough? When a car with an embedded computer crashes, is the software vendor or the car manufacturer to blame? (See the case of the Thai politician locked in a parked car at http://catless.ncl.ac.uk/Risks/22.73.html#subj4.1, or any of the "runaway car from hell" stories.)

Can a government body understand software security risks when the general population doesn't understand software, let alone software security? What about when we develop more complex systems (such as quantum computing)? Are "computer people" today like the shade-tree mechanics of the 1950s, many of whom wouldn't touch the engine of a Toyota Prius with a ten-foot wrench?

Who takes responsibility for security flaws and exploitations?

When the software is produced by a company? By an individual? When it's open source?
What kind of incentives are there for companies/open source groups to produce more reliable software? For example, if the originator of the code is responsible, is the threat of lawsuits when something goes wrong a strong enough incentive, or possibly even too strong? Can these incentives be improved?

Responsibility:

What responsibility does a software vendor have to users with regard to security flaws and exploitations? There's an implied warranty that the software will work for the purposes for which it was bought (details?). Does this implied warranty of functionality extend to OSS? Can a click-wrap license indemnify a commercial software vendor?

What responsibility does a software vendor have to society? If a single vendor holds a significant portion of the market, is there a responsibility to protect the users of this network? Think of commercially operated utilities, such as telephone or cable television companies, as comparators. There seems to be an a priori consensus that Microsoft Windows is insecure. Is it? If so, does Microsoft have a responsibility to protect society by fixing its software?

What about vendors who produce software to run (actual) utilities? (See http://catless.ncl.ac.uk/Risks/23.18.html for an example of General Electric's XA/21 system causing the Northeast Blackout.) Would an implied warranty of suitability be granted by a commercial vendor if that vendor leverages OSS as part of a flawed solution? Or would (for example) Linus Torvalds hold some responsibility for a kernel bug which causes a blackout? (Linux has issues with race conditions during asynchronous swapping of virtual memory pages, which is the same kind of bug that caused the XA/21 failure.)
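
To make the "same kind of bug" concrete, here is a minimal sketch of a race condition (a toy example of my own, not the XA/21 or Linux code): two threads perform a read-modify-write on shared state, and without the lock the updates interleave and are occasionally lost, so the failure only appears under particular timing - exactly what lets such bugs survive testing:

  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  /* Each thread bumps the shared counter one million times.  Remove the
     lock/unlock pair and some increments are silently lost on some runs
     and not others - a timing-dependent failure. */
  static void *worker(void *arg) {
      (void)arg;
      for (long i = 0; i < 1000000; i++) {
          pthread_mutex_lock(&lock);
          counter++;
          pthread_mutex_unlock(&lock);
      }
      return NULL;
  }

  int main(void) {
      pthread_t a, b;
      pthread_create(&a, NULL, worker, NULL);
      pthread_create(&b, NULL, worker, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      printf("final counter = %ld (expected 2000000)\n", counter);
      return 0;
  }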

Incentives/disincentives:

Would anyone produce important but risky software if their company were potentially liable for all damages resulting from use of that software? At the small end, would an OSS developer ever contribute code if he or she were held responsible for how it is used? At the large end, would Diebold make voting machines if it were responsible for damages resulting from voting fraud or a voting machine failure that changed the outcome of a presidential election?

What incentives exist for companies or OSS contributors to create secure and reliable software? Is there a legal responsibility? Does an OSS contributor have a market incentive toward quality?

Should a government proscribe the use and/or development of a particular breed of software? Is a government that decides to use Windows responsible for Windows-based attacks on the system (viruses, cracking, DDoS, etc.)? (See the case of the UK government using Windows for Warships: http://www.theregister.co.uk/2004/09/06/ams_goes_windows_for_warships/) If a government mandates OSS, is there any responsibility when a failure is experienced? Can a government know that its software choice is appropriate?