Student Projects:CyberSecurity


Attack, Defense, and Responsibility

Deadlines

Administrative details: font, style, chapter length, etc.

One page write-up of project

Writing the paper--Discussion

Early discussion leading up to the one-page outline due Nov. 8. (I think we can move the discussion below up here to make things neater.)

Over the next week we need to come up with:

  1. our subtopics (I think the topics below are our major subtopics; we probably want to go into more detail).
  2. some sources
  3. team organization. Not sure what is meant by this - we should follow up with Ed. Does this mean assigning roles like Editor, Researcher, Writer? Assigning writers for sections?
  4. Before anyone starts seriously writing we should also pick a style manual so that we'll be consistent at the end. :) APA or some other? I haven't seen anything on this in any of the instructions. (Santtu Voutilainen)

cmbenner: I think the best way to go about this is: assign each person to write a chapter according to his/her interests, set an approximate page limit for each chapter (8 pp?), assign a due date for everyone to turn in their chapters for everyone else to review and comment on, then we each go back and fix our sections according to each other's comments. Maybe someone writes an intro piece, or maybe one of the sections serves as the intro. I'm happy to be an overview-type editor--I was an editor at a policy journal before coming to UW and miss it.

So a possible next step would be for each person to write a paragraph on what their proposed chapter would look like (I set out what I'd like to do below in the middle of the discussion), we can all review each other's to see how the pieces fit, and then edit this into a one page doc to turn in by next Monday. If we need to meet in person, I can come to MS instead of UW for class on Thursday and meet before or after.

--Jack Richins 14:29, 4 Nov 2004 (PST): I like Caroline's proposal. I'm going to post my ideas so far for a subtopic on the page Caroline made. Don't be afraid to want the same topic as me - I don't feel that much more strongly about it than several others related to this project, so I'm open to doing something else :).

Andrew Sat 6 Nov: As I was the last one to stick in a chapter I've gone ahead and filled out a one(-ish) page writeup of the project. I tend to be a little verbose, however, so feel free to edit away. I won't apologize for stepping on toes--you can just curse me under your collective breaths.

Threat trends over the next 10 years

A little context - types of threats today:

  1. Virus attacks
  2. Denial of service
  3. Spyware/Adware
  4. Phishing attacks

No source for this, but I suspect that if the IT industry is successful in reducing the flaws in our software, we'll see more phishing and spyware attacks that target the weakest link in the system - the end user. The profiles I've seen of black hats definitely show them targeting the weakest link (I know I would :)). And if we strengthen the technology enough to make it stronger than the user, attackers will simply focus on the user.

Which raises the question - what can technology do to strengthen the end user? How do we protect their security tokens? Can we do better than passwords? How do we prevent phishing, or provide means of authentication like we have in the paper world, with watermarking and other devices to prove authenticity? Few users understand security certificates.

Santtu Voutilainen -- In addition to new attack threats, we may want to touch on future targets. With computers increasingly integrated into everyday devices (such as the "cars from hell" mentioned below), and with private and intimate data (health records?) stored in an ever more interconnected manner, there will be targets that may be even more tempting than today's.

Defense trends over the next 10 years

(This is not the main topic of this section) Is it possible to create secure software?

How are developments like managed code (Java/.NET), buffer overflow protection, heap overflow protection, runtime debugging checks, the secure C runtime, etc., going to affect code reliability and security in the next decade?

Can automated bug detection - both static analysis and runtime checks - help us? Prefix, Prefast, FxCop, Presharp, Lint, <external tools>.

Hardware support? NX, Palladium.
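
To make concrete the kind of flaw that these runtime protections, analysis tools, and hardware mitigations all target, here is a minimal C sketch (function names and the buffer size are invented for illustration): an unbounded string copy of the sort a static analyzer would flag, next to a bounds-checked variant in the spirit of the secure C runtime.

  #include <stdio.h>
  #include <string.h>

  /* Unsafe: copies caller-supplied input into a fixed 16-byte stack buffer.
     A longer input overruns the buffer -- the classic overflow that static
     analyzers flag and that NX/stack cookies try to contain after the fact. */
  void greet_unsafe(const char *name)
  {
      char buf[16];
      strcpy(buf, name);               /* no bounds check */
      printf("Hello, %s\n", buf);
  }

  /* Safer: the copy is explicitly bounded to the destination size, so an
     over-long input is truncated instead of corrupting the stack. */
  void greet_safer(const char *name)
  {
      char buf[16];
      strncpy(buf, name, sizeof(buf) - 1);
      buf[sizeof(buf) - 1] = '\0';
      printf("Hello, %s\n", buf);
  }

  int main(int argc, char **argv)
  {
      greet_safer(argc > 1 ? argv[1] : "world");
      return 0;
  }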

--Jack Richins 22:34, 29 Oct 2004 (PDT) Should we move the following to the next section as a discussion of responsibility? ->


cmbenner: Below seems to be a section on measuring software quality (comments follow the next two grafs):

Can we tell if software is bug-free? Or bug-free enough? When a car with an embedded computer crashes, is the software vendor to blame or the car manufacturer? (See the case of the Thai politician locked in a parked car at http://catless.ncl.ac.uk/Risks/22.73.html#subj4.1, or any of the "runaway car from hell" stories.)

Can a government body understand software security risks when the general population doesn't understand software, let alone software security? What about when we develop more complex systems (such as quantum computing)? "Computer people" today are like the shade-tree mechanics of the 1950s, many of whom wouldn't touch the engine of a Toyota Prius with a ten-foot wrench.

cmbenner: I think this section (the above two paragraphs) would make a good chapter in and of itself, and is in keeping with some of the things I've been looking at and want to write on (if I'm not stepping on toes here: haven't discussed fully with everyone): 1. can we measure security in any reasonable way and devise objective, replicable tests to do so? and 2. is it possible to create an underwriters' lab or other certifying body/government apparatus to rate software security so that consumers understand how secure the software they are buying is? This could lead into an incentives discussion: if people understood security, they could demand it, and companies would be incentivized to produce it.


Who takes responsibility for security flaws and exploitations?

When the software is produced by companies? By individuals? If it's open sourced?
What kind of incentives are there for companies/open source groups to produce more reliable software? For example, if the originator of the code is responsible, is the threat of lawsuits when something goes wrong a strong enough incentive, or possibly even too strong? Can these incentives be improved?

Responsibility:

What responsibility does a software vendor have to users with regard to security flaws and exploitations? There's an implied warranty that the software will work for the purposes for which it was bought (details?). Does this implied warranty of functionality extend to OSS? Can a click-wrap license indemnify a commercial software vendor?

What responsibility does a software vendor have to society? If a single vendor holds a significant portion of the market, is there a responsibility to protect the users of this network? Think about commercially operated utilities such as telephone or cable television companies as comparators. There seems to be an a priori consensus that Microsoft Windows is insecure. Is it? If so, does Microsoft have a responsibility to protect society by fixing their software?

What about vendors who produce software to run (actual) utilities? (See http://catless.ncl.ac.uk/Risks/23.18.html for an example of General Electric's XA/21 system causing the Northeast Blackout.) Would an implied warranty of suitability be granted by a commercial vendor if that vendor leverages OSS as part of a flawed solution? Or would (for example) Linus Torvalds hold some responsibility for a kernel bug that causes a blackout? (Linux has had issues with race conditions during asynchronous swapping of virtual memory pages, the same kind of bug that caused the XA/21 failure.)
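
For readers who haven't met the term, a race condition is simply two activities touching shared state without coordination. The hypothetical C sketch below (not the XA/21 or Linux code, just an illustration with invented names) shows the lost-update pattern: two threads both read a counter and write it back, so some increments silently vanish; in an alarm-processing system, a vanished update can be a vanished alarm.

  #include <pthread.h>
  #include <stdio.h>

  /* Illustrative race condition: two threads perform an unsynchronized
     read-modify-write on the same counter, so updates are lost.
     Compile with something like: cc race.c -lpthread */
  static long events_pending = 0;

  static void *producer(void *arg)
  {
      (void)arg;
      for (int i = 0; i < 100000; i++)
          events_pending = events_pending + 1;   /* not atomic, not locked */
      return NULL;
  }

  int main(void)
  {
      pthread_t a, b;
      pthread_create(&a, NULL, producer, NULL);
      pthread_create(&b, NULL, producer, NULL);
      pthread_join(a, NULL);
      pthread_join(b, NULL);
      /* With a mutex around the increment this would always print 200000;
         without one, increments are usually lost and the total comes up short. */
      printf("events_pending = %ld\n", events_pending);
      return 0;
  }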

Incentives/disincentives:

Would anyone produce important but risky software if their company were potentially liable for all damages resulting from usage of that software? In the small case, would an OSS developer ever contribute code if he/she were to be held responsible for its usage? In the large case, would Diebold make voting machines if they were responsible for damages resulting from voting fraud or a voting machine failure that changed the outcome of a presidential election?

What incentives exist for companies or OSS contributors to create secure and reliable software? Is there a legal responsibility? Does an OSS contributor have a market incentive toward quality?

cmbenner: The legal liability question is interesting: currently, I believe, you can't sue successfully over a bad product unless that product causes injury or death, not just monetary loss. So that's why MS doesn't get sued for all the viruses. I've heard legal academics, at least at UW, aren't all that interested in studying software liability because they think it's a non-issue, at least in the present day. It could get more interesting, though, to consider futuristic scenarios as more and more things go online: what if your alarm system were hooked up to the internet, a virus brought it down, and a bad guy, knowing that particular kind of alarm system was out, entered your house and killed you?

Should a government proscribe the use and/or development of a particular breed of software? Is a government which decides to use Windows responsible for Windows-based attacks on the system (viruses, cracking, DDoS, etc.)? (See the case of the UK government using Windows for Warships: http://www.theregister.co.uk/2004/09/06/ams_goes_windows_for_warships/) If a government mandates OSS, is there any responsibility when a failure is experienced? Can a government know that their software choice is appropriate?


Possible Outline of Paper

Flow of control ideas

[cmbenner:] (Insert "no interest in toe-stepping" disclaimer here--implied from now on) To bring my policy perspective to this paper: the standard advice on writing policy briefs (from the profs at policy schools) is to write a paper in which knowledgeable people present a policy-maker with a problem that needs solving, then explain the options for fixing it without injecting your own bias into the presentation. Basically you want a layman to grasp the issue and come away feeling equipped with some potential smart solutions to the problem. Based on Ed and Steve's instructions of 10/14/04 (under project schedule and guidelines on the course homepage), this seems to be the kind of paper they want. So accordingly, I'd suggest we focus on a particular problem confronting the policy-maker, explain it thoroughly, and offer options for fixing it. In my mind (and you should keep in mind I missed your initial talk, so steer me in a different direction if I'm off here), that problem is that software/cyber/online/whatever-term security is lousy. Specifically: there are threats out there, our defenses are sub-par, and there is a responsibility to make this better.

Given the futuristic bent proposed for the paper, here's one approach. I have no idea what sort of chapter each person is interested in writing, so I might be mixing stuff up for folks; I'm simply trying to think as a policy-maker receiving this paper:

1. THE PROBLEM that we're telling policy-makers about: over the next X years (up to 2014?), as the technology evolves, the threats we face are A, B, and C ("the problem" for the policy-makers). Like some clown wanting to take down the whole Internet. Like Andrew the smart hacker with too much time on his hands. Some of these threats exist today, others might be evolving. Here we could talk about the difficulty of countering threats, the problems with creating secure software, anything that can be discussed as part of "the problem" for policy-makers.

2. THE OPTIONS FOR FIXING this problem: Here's where we'd consider our options for defending against these threats and whose responsibility it is to bring these defenses into play. Here, the policy-maker would care about the advantages and disadvantages to (including the feasibility of) each approach for defending.

This might be roughly divisible into technical, policy, economic, and legal solutions. I don't know that we have to cover all of them: perhaps, if there is interest, some of us could each tackle one option for improving security (e.g., I could do economic, focusing on the underwriters' lab).

--technical issues: e.g., an encrypted Internet, or better software engineering--how does Jack go about verifying his code is safe at the design level? What tools does he have at his disposal? Should better software engineering practices--more documentation, code reviews, whatever--be implemented? How do we patch security vulnerabilities - especially in an embedded device? Is it Jack's responsibility as a software engineer to make more secure stuff (should ethics be taught to computer science students)? Is it his employer's job to enforce better SE practices?

-- policy/government: is Jack licensed? Should he be--would it help? Is it the government's responsibility to improve security by licensing software engineers? Should the feds/world-wide body run a center to alert people to security vulnerabilities? How does the FBI bust Andrew?

--economic: how do you incentivize companies to play ball with security (one idea: consumer education through an underwriters' lab?) Is it the consumer's responsibility to be an educated consumer of security and demand it? Is it Jack's company's responsibility to educate the consumer so they will want to pay for it? (Witness the big ad Microsoft/RSA/2 other software companies ran on the New York Times Op-Ed page about a week ago explaining why security is important.)

--legal: should the legal system bear the responsibility of making software secure--are vendor liability lawsuits a good idea? Pros and cons? Will the legal system evolve this way as more devices go online and faulty security produces the sort of defective products that can get folks killed (my somewhat far-fetched murderer entering a house knowing all Internet-connected alarms of a certain type were down). Who can Santtu sue? Will Santtu be able to convince a jury that a particular vulnerability was the fault of a particular company? What sort of case law might get established here?


POSSIBLE CHAPTER ORGANIZATIONS: One way of organizing chapters according to the above layout, assuming people are interested in the particular topics described there:

1. chapter with a lead author on the threats over the next 10 years. Or two chapters on threats if threats are easily divisible into two chapters and 2 people want to write on threats.

2. chapter with lead author on defense/responsibility #1: (take technical: you could do a chapter on one technical solution like software engineering and who should take responsibility for it or overview a whole slew of technical solutions including software engineering and who should take responsibility for them.)

3. chapter with lead author on defense/responsibility #2: a second technical solution, if two people want to write on technical defenses, or a government/policy solution, maybe, focused on licensing engineers or on creating a federal body to issue patches or on overviewing a range of policy/gov't solutions

4. chapter with lead author on defense/responsibility #3, say, economic solutions or overviewing a slew of econ solutions

5. and a chapter with lead author on defense/responsibility #4 or a conclusion or another issue I've left out of this schema that's interesting or whatever

[jspaith NEW:] Caroline, feel free to step on my toes anytime you want. I think your proposal makes a lot of sense and is better suited to the public policy audience than mine below, which is too engineering-focused.

cmbenner 11/4/04: I like the idea of tracing a security exploit in 2014--what a great lead in for a threat chapter! That would catch policy makers' attention.



[jspaith OLD:] I'll defer to the previous editor about how we should organize our chapters, but my thought is that we should "follow the code" through a case study of what we think a security exploit will look like in the year 2014, from infrastructure -> development -> exploit -> patch -> punishment.

I don't know how all this would break down as far as work items. Like Caroline (and everyone else, I'm sure) I'm not trying to step on toes or seize control of this group, so these are just suggestions. When we start proposing our chapters below in section 2, I think it'd be a good idea for people to ask questions of the other folks not in their area before we really dig into research/writing.

(1) What's the state of the network/Internet in 2014? What type of network platform am I writing my app to, and what is the attacker going after? Is *everything* encrypted? What does my Cisco router running at 10 terahertz do tomorrow that it can't do today for security? Suppose some clown wants to take down the Internet as a whole - never mind going after one piddly app at a time. Working on core networking in WinCE, I'm particularly interested in this one.

(2) How does Jack, who writes filesystem code, go about verifying his code is safe at design time? How does security factor into architectural design, and what training does Jack have (or is he required to be licensed now to be a dev)? What tools (prefix, etc...) or hardware are available to him?

(3) Caroline runs an underwriters lab, as described above. What is her business model - does she charge companies directly? Do the feds subsidize her? After Jack sends her his code, how does she decide whether it's *really* secure or not? Jack wants to ship tomorrow since he wants $$ now, but Caroline thinks his code is crap and not secure in some weird corner case. Can she block his ship or give him a bad rating? What is the business and technical relationship between the two?

(4) Andrew is a philosophy major with too much time on his hands, so he turns into a hacker. What are his tools? What are his attacks of choice?

(5) Andrew is a smart hacker. He finds a nasty bug in Jack's code and lets an exploit out into the wild. How do we detect this? Do the feds run a center, or is it something like today where Microsoft has its own security bulletins, or is it some worldwide body? How do we patch the problem - especially if this is an embedded device? Does Cisco save the day in the router? And how does the FBI (or some other future agency) track Andrew down?

(6) Santtu is a rich lawyer but wants to be a really rich lawyer. Who gets punished/who does he sue for this horrendous security bug? Jack? Andrew? Caroline? Who gets the rewards if Santtu wins (other than Santtu the lawyer of course :))? How do we determine the difference between Jack screwing up and the user being incompetent?

Proposed Chapters

Andrew Pardoe

This chapter examines who is ultimately responsible for the costs incurred through security flaws and exploitations. How does this differ when the software is produced by corporations, individuals, or an open source project? What kind of incentives are there for companies/open source groups to produce more reliable software? Can these incentives be improved?

Responsibility should scale with the importance of usage. Does a click-wrap license or a disclaimer in an OSS license indemnify the creator of the software from all responsibility? Do corporate products incur more liability than OSS projects? What responsibility does a software vendor have to society? If a single vendor holds a significant portion of the market, is there a greater responsibility to protect the users of this network? Software which controls critical societal infrastructure (such as an automobile, a power plant, or a voting machine) is at the far end of this scale. Are governments choosing software responsibly?

What incentives exist to produce secure software? Would legal liability be a greater incentive or a disincentive to innovate? Would anyone produce important but risky software if their company were potentially liable for all damages resulting from usage of that software?

Caroline Benner

Getting the consumer to understand security

One major problem with the provision of security in software is that companies haven't had an incentive to provide it, since consumers don't demand it. Why? Because they don't have a clue what security is. This chapter would explain to policy makers that there is no way to definitively tell whether a piece of software is secure--indeed, I think it's been proven that you can't prove a piece of software is secure--and that the best computer scientists can do is approximate how secure something is by evaluating how good the design is, how good the SE practices are, and how the software performs in tests that catch the low-hanging-fruit bugs (any other measurable approximations for security that you guys know of, aside from patch releases?). It would go on to evaluate tests for measuring security that can be translated into something consumers understand, and to consider whether consumers would start to demand security more if these tests became better known (for example, the Common Criteria, a government-sponsored test of the above approximations of secure software).

I guess in the outline this would be "Defense/responsibility #2: economic fix"

Santtu Voutilainen -- Caroline, just to verify, would this be "Defense/responsibility #2: a second technical solution" or "Defense/responsibility #3: economic solution" on the list of possible chapter organizations above?

Jack Richins

Defense: Technical Solutions

This subtopic will survey the promising areas of research in designing secure systems. Software that analyzes software code shows promise in finding security defects that humans have difficulty finding. There is speculation that different programming languages, such as functional languages, which have a firmer mathematical basis, would be more secure than the popular languages in use today. There are also attempts at detecting security intrusions or violations as software is being executed.

Additionally, this section will examine the obstacles and costs of these approaches. All of these approaches will incur trade-offs relative to our current approaches to software design. Tools that find security defects in source code will increase the time needed to write software, as engineers' time is spent investigating and fixing possible defects found by these tools. Different programming languages will incur training and design costs as engineers learn these new languages. Detecting security issues while software is running will decrease the overall performance of applications.
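
As a concrete (and deliberately simplified) illustration of run-time violation detection and its cost, here is a hand-rolled C sketch of the canary idea that compiler stack protectors apply automatically: a sentinel value sits just past the buffer and is checked before the function carries on, so every call pays for an extra store and compare. The structure, names, and constant are invented for this example; real protection is inserted by the compiler or runtime rather than written by hand.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define CANARY 0xDEADC0DEu

  struct frame {
      char     buf[16];
      uint32_t canary;      /* sentinel placed just past the buffer */
      char     spill[32];   /* keeps the demo's deliberate overflow inside this struct */
  };

  /* Copies input, then verifies the sentinel.  An over-long input overwrites
     the canary, the check fails, and the program halts instead of running on
     with corrupted memory.  The extra store and compare on every call is the
     performance trade-off discussed above. */
  int copy_checked(const char *input)
  {
      struct frame f;
      f.canary = CANARY;

      strcpy(f.buf, input);                 /* deliberately unbounded copy */

      if (f.canary != CANARY) {
          fprintf(stderr, "buffer overrun detected, aborting\n");
          abort();
      }
      printf("copied: %s\n", f.buf);
      return 0;
  }

  int main(void)
  {
      copy_checked("short input");                               /* passes */
      copy_checked("this input is far too long for the buffer"); /* trips the check */
      return 0;
  }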

Sources:

  • Methods For The Prevention, Detection And Removal Of Software Security Vulnerabilities, Jay-Evan J. Tevis & John A. Hamilton, Jr., ACM Southeast Conference’04, April 2–3, 2004, Huntsville, AL, USA.
  • Securing Web Application Code by Static Analysis and Runtime Protection, Yao-Wen Huang, Fang Yu, Christian Hang, Chung-Hung Tsai, D. T. Lee, Sy-Yen Kuo, WWW 2004, May 17–22, 2004, New York, New York, USA.

cmbenner: So Jack's chapter looks to be on one element of fixing the problem: as I characterized it: "defense/responsibility #1: technical solution."

John Spaith

Should software engineers be licensed?

I can't roll out of bed tomorrow and design a drawbridge. I need a professional license in a relevant field of engineering. Sure, civil engineers have awesome CAD tools to help them nowadays. But they're useless to me because I don't know how to use them - at least not correctly. A bridge constructed by a thoughtful Roman can last for 2000 years. Can we say the same for our software?

Ten years from now we'll have better software design tools and better practices for developing software. But the tools are only as good as the people using them, and the stakes will be much higher if we screw up. What are the tradeoffs between a free-flowing software design environment that encourages innovation and risk versus professional management that offers higher security? Think Captain Kirk versus Captain Picard. Should software engineers be licensed? Standards bodies have traditionally restricted their governance of software to externalities - like whether a vendor's TCP stack is compliant. Should external organizations (i.e., the government) take a more active role in the internal practices of software vendors? This could mean licensing, code reviews, required intrusion detection, and other development practices. Or are market forces and possible litigation in and of themselves enough?

This chapter will consider these questions. Although this addresses the general question of code quality, I'll focus on the security aspect and use security scenarios to motivate the discussion and so I "play nice" with everyone else's topics.

Sources:

Santeri (Santtu) Voutilainen

Chapter 1 -- Threats: Past, Present, and Future

This chapter will lay the foundation for the rest of the paper by discussing past, present, and future threats, and the materialization of these threats in real attacks. Threats will be examined from two viewpoints: the purely technical view of different vulnerabilities in computer code and systems, and the effects of service disruption at potential targets. The discussion of purely technical vulnerabilities will cover the progression of newly identified vulnerabilities, from the various flavors of buffer overflows, integer under/overflows, and algorithmic flaws to improved brute-force attacks. The discussion of threatened targets will cover the threats posed by the integration of computer systems into an ever increasing number of devices and areas where the effect of attacks may extend from disruption of service to threats to life, such as health and financial record management, everyday appliances, and military equipment.
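
To give one concrete instance of the integer-overflow class mentioned above, here is a small hypothetical C sketch (function names and sizes are invented for illustration): a size calculation that wraps around turns a seemingly safe allocation into a heap overflow, while the safer variant rejects the size before allocating.

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Unsafe: if count is attacker-controlled and large, count * sizeof(uint32_t)
     can wrap around to a small number, so malloc returns a buffer far smaller
     than the loop below assumes and the writes run off the end of it. */
  uint32_t *make_table_unsafe(size_t count)
  {
      uint32_t *table = malloc(count * sizeof(uint32_t));
      if (table == NULL)
          return NULL;
      for (size_t i = 0; i < count; i++)
          table[i] = 0;                      /* heap overflow after a wrap */
      return table;
  }

  /* Safer: refuse sizes that would overflow the multiplication, and let
     calloc (which most modern C libraries implement with a checked multiply)
     do the zero-filled allocation. */
  uint32_t *make_table_safer(size_t count)
  {
      if (count > SIZE_MAX / sizeof(uint32_t))
          return NULL;
      return calloc(count, sizeof(uint32_t));
  }

  int main(void)
  {
      uint32_t *t = make_table_safer(1000);
      printf("allocation %s\n", t ? "succeeded" : "refused or failed");
      free(t);
      return 0;
  }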

Sources: