Talk:Lecture 15


Why aren't computers more stable?

David Dorwin The issue of computers being very unstable compared to the phone system came up in the lecture tonight. I'll open up the discussion and look forward to others' comments.

One reason is that consumers are not willing to pay for it. You can't have a $500 laptop with all the features you demand and still pay for the development and validation of really stable applications. The same is true for the $40 scanner with drivers that sometimes crash. Corporations make business decisions about how much they can afford to put into development and resolving such issues. I have a feeling that they often decide that the issues aren't upsetting enough customers, or upsetting them badly enough, to justify spending time and money on the problem. As competition continues to drive prices down, there is even less money to spend on validation and fixing bugs. In some ways, the issue is analogous to the outsourcing debate: Americans complain that jobs are going overseas, then go to Wal-Mart and leave with bags full of stuff from China.

Another factor when comparing PCs to most other products is the number of companies involved. I'm guessing that the telephone network is composed of components and software from tens of companies. The same logic goes for your car. The maker specs each of the components that go into your car and verifies that they work well together. Even if you buy third-party (non-maker brand) parts, there is a limited number of companies making each part, and they have (hopefully) verified that their parts work in the cars they are designed for. In most cases, parts interact with only a few other parts, which would seem to reduce the potential for problems. As an example, your oil filter doesn't interact with or share resources with your shocks, tires, or stereo. The number of companies and individuals writing software and drivers for Windows XP, by contrast, is nearly countless. All of their products must work well together, and a bug (memory leak, runaway process, bug check, etc.) in any one of them can make the entire computer appear unstable.


Brian McGuire I like the analogy between cars and software - both of them usually work when they are new, but then occasionally need to be fixed due to wear and tear or design flaws (recalls). Also, manufacturers of both are under pressure to find the cheapest solution to a problem that will last just past any warranty. Both degrade over time - cars from physical wear, and computers from the lack of protection or uninformed users installing spyware.

Another factor that might be important is that users value how useful something is over how stable it is. Even though the phone line always works, many people I know, myself included, don't own a land line because it is an order of magnitude less useful than a mobile phone for approximately the same cost, even if sometimes a cell tower goes down and I don't get reception at home. To extend that to computers: most people don't need their home computer working perfectly all of the time, so a problem won't necessarily have a huge impact. If they do need a working computer, it's cheap enough to purchase a second computer or laptop and have redundant capabilities in case of a problem. While an individual computer might not be stable, you are pretty safe just by purchasing a second one. And like cell phones, computers are getting cheap enough that you can throw them away, as long as you back up important information.

Voelker We know how to make reliable software systems. Obvious examples are the phone system, air traffic control, embedded software on military and commercial aircraft (computers fly the planes, pilots steer them), Wall Street, etc. Making software systems reliable, though, comes at a significant cost.

As a rough analogy to consumer and business software, consider mobile wireless phone service: from an engineering perspective, we could make wireless phone service as reliable as landline service, but that would have a substantial impact on price, coverage, rate of innovation and change, etc.

There are other social factors at play here, too. Perhaps computers are cheap enough for you to purchase two of them for increased reliability. But there are many people in the US who cannot afford even one. Yet they are quite likely to be able to afford landline phone service, perhaps with assistance, because the phone company is under obligation to provide it.

Red/Green/Blue - are color schemes/perimeter defenses still viable?

It was interesting to note the analogies drawn with conventional warfare, castles, etc. However, can such a scheme be viable today? Ed had raised this question, using examples of users bringing in floppy disks, USB drives, and such. Another aspect is users taking their laptops home or while travelling and VPN'ing into the corporate network. The problem with planning a perimeter defense is that there is no perimeter anymore.

The example of VPN and mobility also suggests that a color scheme would have to consider the factor of time (or location, depending on how you look at it). A laptop, when connected from inside the corporate network, is "Green". But when VPN'ed in, it's "Blue" or even "Red" (it's pretty easy to hack the usual VPN settings so that you have access to both the internet and your corporate network).

Talking about colors started me thinking of Butler Lampson's talk. He had presented such a colorful lecture on the red/green security zones on PCs. Butler used only 2 colors, but one can easily think of using more colors or security levels, e.g. a blue layer through which all data movement occurs between the red and green layers (see the sketch below). Given that we are considering such schemes, what that signifies to me is that the perimeter has been invaded and become so fragmented that we now need security layers on our own individual workstations!! Pretty depressing; I think something went wrong somewhere.
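
To make the idea of a mediating layer concrete, here is a tiny sketch in Python (my own toy model, not anything from Butler's talk): zone labels with a rule that data may never move directly between Red and Green, only through Blue, where it could be inspected.

 # A toy model of a three-color zone policy: data may never move
 # directly between Red and Green; it must pass through the mediating
 # Blue layer (where it can be inspected or sanitized).
 RED, BLUE, GREEN = "red", "blue", "green"
 
 # Allowed direct transfers between zones.
 ALLOWED = {
     (RED, BLUE), (BLUE, RED),      # untrusted <-> mediation layer
     (GREEN, BLUE), (BLUE, GREEN),  # trusted   <-> mediation layer
 }
 
 def transfer_allowed(src, dst):
     """Return True if data may move directly from src to dst."""
     return src == dst or (src, dst) in ALLOWED
 
 assert transfer_allowed(RED, BLUE)
 assert not transfer_allowed(RED, GREEN)   # must go through Blue
 assert not transfer_allowed(GREEN, RED)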

--Chris DuPuis 08:18, 8 December 2005 (PST) I would contend that any network design that even lets users' desktop systems be inside the "Green" zone is fundamentally insecure. Really, you don't want trouble caused by users' vulnerable applications, especially their email clients and web browsers, to be able to take out your mission-critical systems. This goes doubly for laptops, which are entirely outside your control once users leave the building with them.

--David Dorwin GreenBorder, which Steven Gribble mentioned during his lecture on 11/9, claims to keep your "VPNs and corporate networks clear of mobile infestations." Their definition of "Green" is quite different from this week's, though.

--Rob Anderson Indeed, coloring schemes are useful only for reducing the likelihood of being infected with viruses. They are not at all useful for defending against malicious insiders, who are the #1 source of attacks. Many security researchers seem to prefer 'harm mitigation': systems for reducing the amount of damage that an attack can cause.

--liebling You can still control your perimeter to some extent. At Microsoft, every machine that connects via VPN undergoes pretty thorough security checks (viruses, spyware, security patches, internal software). The connecting machine must have a smart card reader with the user's smart card inserted. Per policy, everyone's laptop file systems should be encrypted so that if a laptop is stolen, the contents are irretrievable. This applies to Smart Phones as well (we are supposed to password-lock them at all times), but I think people rarely follow this restriction.

Voelker Some companies take this a step further. If you want to connect a laptop to the corporate network via VPN, then you must use a company laptop that the company chooses and configures. You do not have admin privileges on it. It is also considered disposable. You are expected to store all of your data on reliable storage in the company. The laptop can be wiped and restored with the latest image.

Btw, Butler would likely resist adding more colors to his scheme. In his view there is already one color too many, but the second one grudgingly seems necessary.

--Dennis Galvin 08:52, 12 December 2005 (PST) - Still, the well-placed malcontent insider can in most cases do more damage than somebody outside the moat. The combination of knowledge plus inside position is deadly. Phil Venables addressed this with the concept of restricting technology access to what the employee needs. In terms of IP theft and transporting knowledge about vulnerabilities, the bits and bytes don't need to leave the premises over the Ethernet or through a computer at all: the same malcontent employee leaves the premises every day with little bits of proprietary information lodged in his brain. The perimeter-based firewall is still a useful concept, just like the locks on houses and car doors. I think Marty's conceptualization was a good one, and clearly it needs to be more complex: machines and people need to have their access restricted by business rules.



A contradiction, or the problem is really hard to solve

Tolba As Prof. Maurer asked us to, I was thinking about the complexity of the problem at hand and how several factors mount the problem of cyber-security almost to the level of the unsolvable. It struck me how sometimes even the same factor can contribute both to enhancing security on one side and to lowering it on another. In the last lecture, when discussing how stable phone switches are (or at least are believed to be), the speaker attributed this in part to the fact that system designers worried more about fundamental design decisions and not so much about pretty UI. However, in a different part of the lecture, about AOL's famous post-maintenance failure, he mentioned that the AOL system back then didn't have a 'pretty' warning UI along the lines of 'Are you sure you want to override all the router entries?'. At the shallowest level this might seem like a contradiction, but it is really a sad reminder of the magnitude of the problem we are dealing with.

Voelker The generalization that Marty was making was that UIs add a lot of code to a program, often more than the "core" code for just doing what the program was intended to do. Imagine the source code for the calc program on Windows and how much code there is for creating, drawing, and handling events on the "+" button versus the code for actually doing the addition. The point is, then, that UIs greatly extend the amount of code you have to be worried about in terms of failures and security.
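
To make the ratio concrete, here is a minimal calculator sketch in Python/tkinter (my own toy example, not code from the lecture). The "core" is a one-line add function; everything else exists only to create, draw, and handle events for the UI around it.

 import tkinter as tk
 
 # The entire "core" of the program: one line.
 def add(a, b):
     return a + b
 
 # Everything below exists only to create, draw, and handle events
 # for the UI around that one line.
 root = tk.Tk()
 root.title("calc sketch")
 
 entry_a = tk.Entry(root, width=8)
 entry_b = tk.Entry(root, width=8)
 result = tk.Label(root, text="?")
 
 def on_plus_clicked():
     # Input parsing and error handling: more failure modes (and more
     # attack surface) than the addition itself.
     try:
         a = float(entry_a.get())
         b = float(entry_b.get())
     except ValueError:
         result.config(text="error")
         return
     result.config(text=str(add(a, b)))
 
 plus = tk.Button(root, text="+", command=on_plus_clicked)
 
 entry_a.grid(row=0, column=0)
 plus.grid(row=0, column=1)
 entry_b.grid(row=0, column=2)
 result.grid(row=0, column=3)
 
 root.mainloop()

Even in this toy, the UI and its input handling outnumber the core logic by an order of magnitude, and every one of those extra lines is another place where things can fail.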

The specific example that Marty gave about router configuration is one that is well known in the networking community. Essentially, operators edit large text files containing complex configuration entries using standard editors. There are two opportunities for improvement here. The most important one is not a UI problem but a tool problem: independent of how these configuration files are created, you could run a tool that checks whether the files make sense (is the syntax correct? does the configuration pass sanity checks? will it result in a routing table that fits inside my routers?). It seems like an obvious thing to do, but the problem is difficult (e.g., Feamster at MIT is finishing a thesis on it). Being able to run such a tool after making configuration changes would let you sleep better at night.
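
As an illustration of the kind of offline check described above (a toy only: the one-line-per-route format here is made up, and real configuration checkers such as the one in Feamster's work are far more involved), here is a Python sketch that validates syntax, checks each entry for sanity, and verifies that the resulting table would fit in an assumed router limit.

 import ipaddress
 import sys
 
 MAX_ROUTES = 10000  # assumed hardware table limit for the sanity check
 
 def validate(path):
     """Check a hypothetical config of lines like 'route 10.0.0.0/24 via 10.0.1.1'."""
     errors = []
     routes = 0
     for lineno, line in enumerate(open(path), start=1):
         line = line.strip()
         if not line or line.startswith("#"):
             continue
         parts = line.split()
         # Syntax check: exactly "route <prefix> via <next-hop>".
         if len(parts) != 4 or parts[0] != "route" or parts[2] != "via":
             errors.append(f"line {lineno}: bad syntax: {line!r}")
             continue
         # Sanity checks: well-formed prefix and next-hop address?
         try:
             ipaddress.ip_network(parts[1])
             ipaddress.ip_address(parts[3])
         except ValueError as e:
             errors.append(f"line {lineno}: {e}")
             continue
         routes += 1
     # Will the resulting routing table fit inside the router?
     if routes > MAX_ROUTES:
         errors.append(f"{routes} routes exceeds table limit of {MAX_ROUTES}")
     return errors
 
 if __name__ == "__main__":
     problems = validate(sys.argv[1])
     for p in problems:
         print(p)
     sys.exit(1 if problems else 0)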

The second opportunity for improvement is, perhaps, a better UI for changing the files. A validation tool might catch problems, but it does not reduce the number of problems. A better UI might reduce the number of tries it takes to get configuration changes correct.