One page write-up of project

From CSEP590TU
Revision as of 08:03, 7 November 2004 by Jackr (talk | contribs)


Team Members

Caroline Benner, Andrew Pardoe, Jack Richins, John Spaith, Santeri (Santtu) Voutilainen

Overview

The paper begins with an examination of past, present, and future threats to computer security. We then examine possible technical solutions and their related costs. We move into the question of who is responsible for security breaches and their costs to society. We finish with two possible non-legal solutions to the problem of security: whether software engineering should be licensed as civil engineering is, and whether educating consumers about security and providing common, understandable ratings of software security would solve the problem of security through economic demand.

Chapter 1 -- Threats: Past, Present, and Future (Lead author: Santtu)

This chapter lays the foundation for the rest of the paper by discussing past, present, and future threats, and the materialization of these threats in real attacks. Threats will be examined from two viewpoints: the purely technical view of vulnerabilities in computer code and systems, and the effects of service disruption at potential targets. The discussion of technical vulnerabilities will cover the progression of newly identified vulnerability classes, from the various flavors of buffer overflows, integer underflows and overflows, and algorithmic flaws to improved brute-force attacks. The discussion of threatened targets will cover the threats posed by the integration of computer systems into an ever-increasing number of devices and areas, such as health and financial record management, everyday appliances, and military equipment, where the effect of an attack may extend from disruption of service to loss of life.

Chapter 2 -- Defense: Technical Solutions (Lead author: Jack)

This chapter surveys promising areas of research in designing secure systems. Software that analyzes source code shows promise in finding security defects that humans have difficulty finding. There is speculation that different programming languages, such as functional languages with a firmer mathematical basis, would be more secure than the popular languages in use today. There are also attempts to detect security intrusions or violations as software executes.

Additionally, this chapter examines the obstacles and costs of these approaches. All of them involve trade-offs relative to current practice in software design. Tools that find security defects in source code will increase the time needed to write software, as engineers' time is spent investigating and fixing the possible defects these tools report. Different programming languages will incur training and design costs as engineers learn them. Detecting security issues while software is running will decrease the overall performance of applications.

Sources:

  • Methods For The Prevention, Detection And Removal Of Software Security Vulnerabilities, Jay-Evan J. Tevis & John A. Hamilton, Jr., ACM Southeast Conference’04, April 2–3, 2004, Huntsville, AL, USA.
  • Securing Web Application Code by Static Analysis and Runtime Protection, Yao-Wen Huang, Fang Yu, Christian Hang, Chung-Hung Tsai, D. T. Lee, Sy-Yen Kuo, WWW 2004, May 17–22, 2004, New York, New York, USA.

Chapter 3 -- Responsibility for security flaws and exploitations (Lead author: Andrew)

This chapter examines who is ultimately responsible for the costs incurred through security flaws and their exploitation. How does this differ when the software is produced by a corporation, an individual, or an open-source project? What incentives do companies and open-source groups have to produce more reliable software? Can these incentives be improved?

Responsibility should scale with the importance of usage. Does a click-wrap license or a disclaimer in an OSS license absolve the creator of the software of all responsibility? Do corporate products incur more liability than OSS projects? What responsibility does a software vendor have to society? If a single vendor holds a significant portion of the market, does it have a greater responsibility to protect the users of its network? Software that controls critical societal infrastructure (such as an automobile, a power plant, or a voting machine) sits at the far end of this scale. Are governments choosing software responsibly?

What incentives exist to produce secure software? Would legal liability be a greater incentive to produce it, or a disincentive to innovate? Would anyone produce important but risky software if their company were potentially liable for all damages resulting from its use?

Chapter 4 -- Should software engineers be licensed? (Lead author: John)

This chapter explores whether external organizations (such as governments or standards bodies like the IEEE) should take a more active role in the internal practices of software vendors. Most engineers need training and licensing to practice their disciplines, but software can be built by kids in a garage. A bridge constructed by a thoughtful Roman can last for 2,000 years. Can we say the same for our software?

Ten years from now we will have better software design tools and better practices for developing software. But tools are only as good as the people using them, and the stakes will be much higher if we screw up. What are the trade-offs between a free-flowing software design environment that encourages innovation and risk versus professional management that offers higher security?

Standards bodies have traditionally restricted their governance of software to externalities, such as whether a vendor's TCP stack is compliant. Should they expand their governance to include licensing, code reviews, required intrusion detection, and other development practices? Or are market forces and the possibility of litigation enough on their own?

Chapter 5 -- Getting the consumer to understand security (Lead author: Caroline)

One major problem with the provision of security in software is that companies have had little incentive to provide it, because consumers do not demand it. Why not? Because consumers have no way to judge what security is. This chapter explains that there is no way to definitively tell whether a piece of software is secure; indeed, it has been shown that one cannot prove a piece of software secure, and the best computer scientists can do is approximate how secure something is by evaluating how good the design is, how sound the software engineering practices are, and how the software performs in tests that catch the low-hanging-fruit bugs. The chapter goes on to evaluate tests for measuring security that can be translated into something consumers understand, and considers whether consumers would begin to demand security if these tests became better known (for example, the Common Criteria, a government-sponsored evaluation of the above approximations of secure software).