Student Projects:CyberSecurity

Revision as of 05:34, 30 October 2004

Attack, Defense, and Responsibility

Threat trends over the next 10 years

Defense trends over the next 10 years

(This is not the main topic of this section) Is it possible to create secure software?

How are developments like managed code (Java/.NET), buffer overflow protection, heap overflow protection, runtime debugging checks, the secure C runtime, etc., going to affect code reliability and security in the next decade?
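To make this concrete, here is a minimal C sketch (my illustration, not part of the original notes) of the defect class these mitigations target: an unchecked strcpy() into a fixed-size stack buffer, alongside a bounds-checked version of the kind the secure C runtime (strcpy_s) or plain snprintf() provides. Managed runtimes such as Java/.NET remove this class of bug by checking bounds on every access.

 #include <stdio.h>
 #include <string.h>
 
 /* Classic defect: strcpy() does no bounds checking, so a name longer than
  * 15 characters overruns buf and corrupts the stack. */
 void greet_unsafe(const char *name)
 {
     char buf[16];
     strcpy(buf, name);                     /* overflow if strlen(name) >= 16 */
     printf("Hello, %s\n", buf);
 }
 
 /* Bounds-checked version: snprintf() truncates instead of writing past the
  * end of buf; strcpy_s from the secure C runtime serves the same purpose. */
 void greet_safe(const char *name)
 {
     char buf[16];
     snprintf(buf, sizeof buf, "%s", name);
     printf("Hello, %s\n", buf);
 }
 
 int main(void)
 {
     greet_safe("a deliberately over-long, attacker-controlled string");
     return 0;
 }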

Can automated bug detection, both static analysis and runtime checking, help us? Prefix, Prefast, FxCop, Presharp, Lint, <external tools>.
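As an illustration (again a sketch of mine, not from the original notes), this is the kind of path-sensitive defect such tools can report without ever running the program: malloc() may return NULL, yet the result is used unchecked.

 #include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
 
 /* Defect pattern that analyzers in the Prefix/Prefast/Lint family flag by
  * reasoning over execution paths: malloc() may return NULL, but the result
  * is dereferenced without a check. */
 char *copy_label(const char *label)
 {
     char *item = malloc(64);
     strncpy(item, label, 63);              /* possible NULL dereference */
     item[63] = '\0';
     return item;
 }
 
 int main(void)
 {
     char *p = copy_label("relay-7");
     printf("%s\n", p);
     free(p);
     return 0;
 }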

Hardware support? NX, Palladium.
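A rough sketch of what NX ("no-execute" page protection) enforces, assuming a POSIX system with mmap() (Windows exposes the same idea as DEP): a page mapped writable but not executable cannot be jumped into, so injected code faults instead of running.

 #include <stdio.h>
 #include <string.h>
 #include <sys/mman.h>
 
 /* Data page mapped read/write but NOT executable.  On NX-capable hardware
  * with OS support, jumping into it raises a fault; without NX the injected
  * bytes would simply run. */
 int main(void)
 {
     unsigned char payload[] = { 0xC3 };    /* x86 'ret' stands in for injected code */
 
     void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
     if (page == MAP_FAILED)
         return 1;
     memcpy(page, payload, sizeof payload);
 
     void (*run)(void) = (void (*)(void))page;
     run();                                 /* expected to fault when NX is enforced */
 
     puts("data page executed - NX was not enforced");
     return 0;
 }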

--Jack Richins 22:34, 29 Oct 2004 (PDT) Should we move the following to the next section ->

Can we tell if software is bug-free? Or bug-free enough? When a car with an embedded computer crashes, is the software vendor or the car manufacturer to blame? (See the case of the Thai politician locked in a parked car at http://catless.ncl.ac.uk/Risks/22.73.html#subj4.1 or any of the "runaway car from hell" stories.)

Can a government body understand software security risks when the general population doesn't understand software, let alone software security? What about when we develop more complex systems (such as quantum computing)? Are "computer people" today like the shade-tree mechanics of the 1950s, many of whom wouldn't touch the engine of a Toyota Prius with a ten-foot wrench?

Who takes responsibility for security flaws and exploitations?

When the software is produced by companies? By individuals? If it's open source?
What kind of incentives are there for companies or open source groups to produce more reliable software? For example, if the originator of the code is responsible, is the threat of lawsuits when something goes wrong a strong enough incentive, or possibly even too strong? Can these incentives be improved?

Responsibility:

What responsibility does a software vendor have to users with regard to security flaws and exploitations? There's an implied warranty that the software will work for the purpose for which it was bought (details?). Does this implied warranty of functionality extend to OSS? Can a click-wrap license indemnify a commercial software vendor?

What responsibility does a software vendor have to society? If a single vendor holds a significant portion of the market, is there a responsibility to protect the users of this network? Think about commercially operated utilities such as telephone or cable television companies as comparators. There seems to be an a priori consensus that Microsoft Windows is insecure. Is it? If so, does Microsoft have a responsibility to protect society by fixing their software?

What about vendors who produce software to run (actual) utilities? (See http://catless.ncl.ac.uk/Risks/23.18.html for an example of General Electric's XA/21 system causing the Northeast Blackout.) Would an implied warranty of suitability be granted by a commercial vendor if that vendor leverages OSS as part of a flawed solution? Or would (for example) Linus Torvalds hold some responsibility for a kernel bug which causes a blackout? (Linux has issues with race conditions during asynchronous swapping of virtual memory pages, which is the same kind of bug that caused the XA/21 failure.)
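The XA/21 source is not public, but the bug class blamed for the blackout, a race condition on shared state, can be sketched in a few lines of C with pthreads (my illustration, not GE's or Linux's code): two threads perform unsynchronized read-modify-write updates, so some updates are silently lost and the result varies from run to run.

 #include <pthread.h>
 #include <stdio.h>
 
 /* Two threads increment a shared counter with no lock.  The increment is a
  * non-atomic read-modify-write, so concurrent updates can be lost and the
  * final count is unpredictable - the essence of a race condition. */
 static long events_handled = 0;
 
 static void *worker(void *arg)
 {
     (void)arg;
     for (int i = 0; i < 1000000; i++)
         events_handled++;
     return NULL;
 }
 
 int main(void)
 {
     pthread_t a, b;
     pthread_create(&a, NULL, worker, NULL);
     pthread_create(&b, NULL, worker, NULL);
     pthread_join(a, NULL);
     pthread_join(b, NULL);
 
     printf("events handled: %ld (expected 2000000)\n", events_handled);
     return 0;
 }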

Incentives/disincentives:

Would anyone produce important but risky software if their company were potentially liable for all damages resulting from usage of that software? In the small case, would an OSS developer ever contribute code if he or she were to be held responsible for its usage? In the large case, would Diebold make voting machines if they were responsible for damages resulting from voting fraud or a voting machine failure which changed the outcome of a presidential election?

What incentives exist for companies or OSS contributors to create secure and reliable software? Is there a legal responsibility? Does an OSS contributor have a market incentive toward quality?

Should a government proscribe the use and/or development of a particular breed of software? Is a government which decides to use Windows responsible for Windows-based attacks on the system (viruses, cracking, DDoS, etc.)? (See the case of the UK government using Windows for Warships: http://www.theregister.co.uk/2004/09/06/ams_goes_windows_for_warships/) If a government mandates OSS, is there any responsibility when a failure is experienced? Can a government know that their software choice is appropriate?