Talk:Lecture 7

From CyberSecurity
Revision as of 18:15, 19 October 2005 by Asadj (talk | contribs) (Why kill the cash cow?)


SCADA Systems, Al Qaeda & Cyberterrorism

--Jeff Bilger - Dr. Lazowska briefly mentioned SCADA systems during tonight's lecture. Back in April of 2003, the PBS documentary program Frontline aired a program titled Cyber War! that highlighted the vulnerability of our power grid due to SCADA systems. It's a bit theatrical, but worth a watch since it touches on all the topics we have discussed in class so far.

Also, it would be interesting to know if anything has been done since 2003 to further secure these SCADA systems from attack.

PITAC to PCAST, but where's the action on reports?

Avichal 23:25, 12 October 2005 (PDT) PITAC's charter was allowed to expire on June 1, 2005. On Sep 30, 2005 it was revived, in a way, by extending the charter of PCAST to also cover network and information technology.[1]

In my opinion that dilutes the focus that could be, and was, achieved by the PITAC committee, which focused solely on the role of information technology. The opposing argument is that PCAST will be able to address IT in a more holistic fashion. Regardless, the basic problem is the lack of action on the reports these committees generate.

Be it the 9/11 report or the various PITAC reports, the administration owes it to the public to implement the suggested measures as far as is practical, and to update the public routinely on its progress. It's high time the administration followed its rhetoric with some action and did more about homeland and cybersecurity than ratcheting the threat level up and down on a colored scale.

It's also interesting to note that Kvamme (Co-Chair, PCAST) says his first priority would be to examine the progress of IT R&D at the federal level [2] - an area in which the PITAC reports have shown the government to be performing egregiously poorly.

CyberSecurity Progress

Chris Fleizach It was interesting to note that Dr. Lazowska and his committee came to the conclusion that the federal government needed to lead research in security measures because private industry couldn't provide the funding or the vision. Yet Phil Venables followed immediately after and presented a holistic approach to Goldman Sachs's security system that would certainly rival any governmental agency's, with multiple layers of protection, constant network scanning, regular mock attacks, and a slew of contingency strategies that would certainly make PITAC proud. He even mentioned that they spend some time thinking about issues five to ten years down the road.

The main difference from what Dr. Lazowska was aiming for is that this research from Goldman Sachs is probably protected and not available to the general public, so what happens is a reinventing of the wheel at each organization. But then Kirk Bailey and Ernie Hayden mentioned that their most useful system was the "Agora" team, circumventing public and formal discussion for quick, informal aid. I think we can safely assume that the government will not change its stance on science and basic research for the next three years, so perhaps we have seen the past, present, and future of cybersecurity research: security implementation and research driven by companies motivated by economic realities and diffused through informal channels. Was it ever any different?

Jameel Alsalam To be fair, I think that Dr. Lazowska's point addressed not so much the implementation of security measures in the corporate world as basic R&D in the production of new security products, or more secure IT products. From Phil Venables's talk, it sounds like Goldman-Sachs is doing a magnificent job of implementing the best technologies that it can, as well as putting in place a number of structures that support its security - this implementation is a major factor in actually achieving security, and that task is so complex that a 5-year plan is actually needed just to manage the implementation! This is separate, however, from long-term research on the products themselves.

Keunwoo Lee 00:41, 14 October 2005 (PDT): Just to give an example of the kind of fundamental innovation that industry on its own usually isn't very good at developing: consider things like public-key cryptography, or multi-user operating systems, or TCP/IP. Places like Goldman-Sachs don't invent stuff on that scale, though they seem to be at the forefront of pushing available technical innovation, and developing holistic security practices (including social practices) that complement technical innovation.

Now, historically, places like IBM Research, Intel Research, Bell Labs, Xerox PARC, and DEC SRC have done fundamental research in IT, but usually major innovations have involved essential cross-pollination from academia and government; see Ed's lecture from last year, 10/07/04. Furthermore, of the places I just mentioned, only IBM and Intel are still going strong; Bell Labs, PARC, and SRC are all, to greater or lesser degree, shadows of their former selves. (Counterbalancing that to some extent, MSR, which is huge, didn't even exist a couple of decades ago.)

--Chris DuPuis 10:48, 14 October 2005 (PDT) I agree with Jameel's point. The fact that Goldman-Sachs NEEDS such an elaborate and authoritarian security policy can be seen as evidence that the current state of the art in computer security is woefully inadequate. Not only is such a security policy difficult to implement and expensive to maintain, but the number of human agents involved adds the possibility of human error at every step. Also, what of sites that have a smaller IT budget than Goldman-Sachs? Should they just be written off? (The existence of huge botnets suggests that they HAVE been written off.)

Another problem with the Goldman-Sachs example is that, even if they are taking steps to automate their security, they are not sharing their tools and processes with the rest of the world. This means that everyone that requires security needs to reinvent the wheel.

A much better solution would be to build our networks using secure pieces, which requires research into better protocols, better operating systems, better programming methods, and better network infrastructure.


SMM: Prof. Scotchmer's point deserves careful thought. The fact that the US has chosen to rely on government for long-term R&D since World War II does not prove that this is the only, let alone the best way to do things. Indeed, Goldman Sachs seems to be an important counter-example: You may not like what they do, but it seems pretty competent.

I'm not qualified to say whether Goldman Sachs could have built a technologically better system. My guess is that the only way to know is if somebody actually builds one. In the meanwhile, everybody says that security is mostly about humans. The language of humans is incentives. My sense is that Goldman Sachs is quite close to the frontier of what is technologically possible, but they got there by getting employees to submit to random searches, apply memes, etc. You and I don't want to live in that world, but we might if they offered us the same level of incentives.

On reinventing the wheel, that's possible but not obvious. This is precisely the sort of own-use software project that companies like to do in-house and then open source. Also, Goldman Sachs is not averse to making a profit. The fact that it keeps spinning off outside security companies suggests that the wheel is available to everyone (for a price).

Yi-Kai - On the other hand, Goldman-Sachs has developed a security model that is tailored to their specific needs. What they do is probably less appropriate for government needs, like critical infrastructure protection. For instance, if Goldman-Sachs feels that some business deal is too risky, they can simply decide not to do it; whereas it's much harder for government to just "write off" a vulnerable piece of infrastructure. You could argue that this is all a matter of incentives; but I think the incentives for business and for government are so different that we basically have to accept them as they are.

As a result, I think there is some security research that must be government-supported, because it is specific to the needs of government -- it has no other "customers." This research could be done by universities, or by national labs like Los Alamos, or by private contractors, but in every case it would have to involve government funding.

SMM: When the government owns infrastructure, then it is just another business and should do its own R&D. The question is when it should support general research on behalf of the rest of society. As Prof. Lazowska said, this is a question for another course. The point you should be aware of, though, is that the answers are murky. My sympathies are all on the side of funding universities, but policy should be based on economic logic and this is a famously difficult argument.

Avi Springer: Might the economic logic surrounding "public goods" apply here? As the lecturers pointed out, IT is so essential to our critical infrastructure that we all really have a great stake in cyber security, so cyber security can be thought of as a public good. The economic logic suggests that if we leave it to the private market, not enough R&D for cyber security will be produced since private entities will just try to "free ride" on developments made by others. It seems the govt. would therefore be justified to fund R&D with tax dollars since it will (hopefully) benefit society at large.

Noor-E-Gagan Singh Chris comments that "I think we can safely assume that the government will not change its stance on science and basic research". In my opinion, the government will change its stance only if American society asks for it.

The current state of affairs is that our young generation's role models come from the sports and entertainment fields. As the inventor Dean Kamen said on a visit to the Microsoft campus some years back: "The majority of kids in America spend more time on sports than studies. The irony is that a minuscule percentage of them will actually make any money from that sport. If instead they studied science and mathematics, they could be instrumental in improving our quality of life and getting jobs that pay well." There are many Asian societies that are obsessed with science, engineering, and technology, and they are racing ahead of the US.

Effective security on the Internet will require a monumental effort rivaling the creation of the Internet itself. As was mentioned in our class last week, the Internet's transmission protocols are too simplistic, and a user base of billions was not something anyone foresaw. The government has to invest money and resources, as we cannot just depend on the private sector to do the task. This task is beyond the means of for-profit corporations.

Security on our streets, highways, and borders is the responsibility of the government; why don't we demand the same on the information highways?

Ted Zuvich : I took the class that Dr. Lazowska referred to regarding IT and public policy [3]. Quite a large chunk of that course was taken up with the argument that there are certain types of research that are in the public good, but not necessarily in the "good" of any one particular business. The argument went on to conclude that areas of research like this are the province of government research. Research into cybersecurity definitely falls into this category. Also, I was really struck by how much knowledge about this stuff gets passed along in the proverbial dark, beer-smelling, smoke-filled room. That does not seem to be a good situation.

--Dennis Galvin 20:11, 18 October 2005 (PDT) I asked Phil Venables on the break after his presentation whether he cared if the US government funded basic research into cyber-technology and cyber-security. The answer was what I thought it would be: "No." Interesting, since much of the technology Phil uses has its roots in US (and other) government-funded research (for example cryptography, TCP, the structure of the Internet, etc.). Goldman-Sachs is beholden to nobody but its stockholders, and stockholders are a notoriously fickle bunch beholden to the P/E ratio. I suspect G-S will, when it becomes feasible to crack current state-of-the-art cryptographic technology, eagerly adopt the next wave of crypto-systems developed by government-funded research at universities and national labs (Sandia, LLL, Oak Ridge, ...).

Even though G-S may not care about research funding, they clearly do partake of the benefits. Where I'm going with this: there is a payoff for government(s) in investing in research. Given that Internet protocols and security will require extensive re-invention over the next few years (say 1-15), I don't see private industry on the technology-consumer side (e.g. G-S) stepping up to the plate, and that leaves us with not so many choices: 1) government investment; 2) private investment (the IBMs and Microsofts of the world); 3) volunteerism (open source). Private investment would mean the technology and its implementation are owned by the stockholders. Volunteerism, while I admire it tremendously, is remarkably devoid of funding sources to conduct the basic research, but somewhat capable of implementing the technologies once the research is done. So maybe government(s) do need to fund this research. I draw a tenuous analogy to Garrett Hardin's Tragedy of the Commons (see the paper and Wikipedia's article). If we let private industry do the research, we may very well see the Internet and cybersecurity commons trashed.

Is Secrecy Really That Important?

--Gmusick 08:47, 13 October 2005 (PDT) As noted by one of my classmates during the lecture, there is a paradox in the security community where they want highly-trained security experts yet they don't really discuss security in public in a substantial way so people can learn about it. This points to an even deeper problem where our public officials continually classify reports about the state of security at public facilities.

I can't remember whether it was Kirk or Ernie who made a dismissive comment about there being no FOIAs (short for Freedom of Information Act requests) at the Agora meetings, but as a former journalist and a current security student, this really bugged me. Public officials always complain about underfunding of security initiatives, but they rarely tell you exactly why they need the money... they just say "trust me". How are we, the public, supposed to know what is truly important and what is not when highly political decisions are made in near-total secrecy based on "classified" information that could say anything from "we will suffer a nuclear attack in our harbor next week" to "the moon is made of cheese"?

And an irony of the situation is that I've heard over and over again, at least in computer science, that hiding and obfuscation are some of the least effective ways to secure your systems. Sounds like a topic for a paper, no?

Jameel Alsalam But hiding and obfuscation are also the cheapest form of security... I agree with you that it makes no sense for government agencies to rely on cyber-criminals not noticing the weaknesses that exist - but given that I do not really trust the numerous government agencies to respond quickly to exposed weaknesses, I am a bit leery of publicizing them too broadly (which I think some people see as a way to spur a reaction to those threats).

On the Agora meetings, and FOIA not applying to them - it wasn't the nicest way to put it, since we like our governments not to hide too much from us. But from the little chance I have had to observe bureaucratic systems, I do not think they become more effective under heavy scrutiny - if anything, the bureaucracy springs into action spending all its time defending itself from the scrutiny. That is certainly not how we want security professionals in our governments spending their time.

Yi-Kai - I think the intention of the Agora isn't so much to hide security issues from the public, but to allow experts to share information more freely with each other. For instance, it lets people like Kirk and Ernie talk candidly with other experts, without worrying about public relations or politics (which would be concerns in a public meeting). Ultimately, I think we need both things: open public discussions through official government channels, and private informal relationships like the Agora.

Keunwoo Lee 01:01, 14 October 2005 (PDT): I'm the one who asked the question that Gmusick refers to. I agree with Yi that there's certainly a place for Agora, and I think perhaps I phrased my question in an overly negative way. The participants in Agora, and more broadly the list of people in Kirk and Ernie's cell phone address book, form a tightly knit social network based on trust relationships built up over time. Within this network, confidential information can circulate very freely, both through formal channels like white papers and through informal channels like talking over beer. This practice probably works very well to solve the problem it was designed to solve (information dissemination among existing professionals).

What I'm concerned about is that, as a society, we also have another problem to solve: how to produce a new generation of security professionals that's significantly larger than the existing generation, probably by a couple orders of magnitude. Somehow, we need to finesse the tradeoff between keeping necessary secrets secret, and opening information access to educate the next generation. My question was not rhetorical: How do we get 60,000-70,000 security professionals when the only way to learn "the real deal" is over beer? I really have no clue (though I suspect we should reconsider exactly how much of that information needs to be kept secret). It seems like a hard problem.

I do think, however, that it's not necessarily the job of people like Kirk and Ernie to figure out the answer. They have their own full-time-and-a-half jobs to do. If we had a DHS with some kind of coherent vision for the future, it would presumably be looking at this problem, and addressing it by funding education programs on a massive scale or something.

Altin Dastmalchi, UCB I agree with the DHS claim. I think that department is not doing all it can. I don't think that a name change and a color-coded warning system are ensuring the full safety of our lives. The thing that gets me is that their objectives are biased towards certain races, and this effect is not in our favor. For example, if you're screening for a certain suspect, you are limiting your search to specifics and letting others slip through. This is true for DHS: they need to implement an equitable method of national safety, because anyone can be a terrorist.

SMM: The problem with classifying data is that it protects bureaucrats as well as infrastructure. I was very struck by the idea that someone had paid the Seattle government to write an expensive analysis of 60 worldwide terrorist groups. My first guess would have been that Seattle's government knows something about local targets, but almost nothing about international politics. My second guess is that if Seattle's government figured out something useful about international terrorism, we would be better off telling people about it than keeping quiet (the terrorists already know about themselves). The thing that keeps university scholarship worth reading is openness (less politely, the prospect of being criticized). Since I'm not allowed to read the report, how do I know the grant money was well spent? Maybe they really do spend the money on beer...

Joe Xavier 05:57, 16 October 2005 (PDT) : The only assumption that can possibly hold in building secure systems is Shannon's Maxim - the enemy knows the system. Given that assumption, does it make sense to hide information? I'm guessing it does, but in a limited set of circumstances.
-- Is there a good paper on this topic - something that deals with when it makes sense to withhold information about an exploit or a compromised system, and when to make it public?

[--Dennis Galvin 20:54, 18 October 2005 (PDT)] The topic is quite debatable. Many in the open source community argue that rapid disclosure produces the most timely resolutions, while a company like Microsoft might argue that 'responsible' disclosure is the only way to go. If the bad guys discover the vulnerability and use it without disclosure, we may end up in a bad situation. Microsoft, I'm sure, wants to do as much regression testing as it can before releasing a fix for an exploit. I do agree we must assume our adversary knows our system. There is definitely an enhanced window of exposure between the release of a fix by Microsoft and the patching of affected systems (G-S, Port, and City of Seattle talks). This window favors the attackers, as they have a really good starting point from which to mount their exploits, and the inertia of patching vulnerable systems works in their favor as well. There are many ranting, rambling threads on the topic at /. but not many erudite discussions. I don't know that the folks who advocate "full disclosure" are any more effective at getting patches onto systems quickly than those who advocate "responsible disclosure."


The cynic in me thinks 'these guys don't want someone else pointing out flaws in their reasoning' each time I hear the term 'confidential' used for a security-related project. The cynic in me also thinks that slapping 'need-to-know basis' on a project is the best way to keep funding yourself. But who polices the cyberpolice? If you don't have qualitative and quantitative means of assessing the effectiveness of a cyber-security unit, how do you determine how much money and personnel to push its way?
Arguably, the private sector has better metrics for evaluating the effectiveness of its IT security unit. For example, at Goldman Sachs, the motive behind breaking into systems (either as a frontal attack to make a statement or as a secret, leave-no-traces stealth operation) is obvious: financial gain. You can be assured that people are trying to break in, and they *will* keep trying. If you have an extended period without a break-in, you have a high degree of assurance that your IT security people are doing a good job.
Who's trying to break into the water system? No idea. It's a possible scenario, and disabling or otherwise tampering with critical infrastructure in a major city can have major consequences. But there's no way to gauge the effectiveness of the money spent on information security here, since it's not even certain that anyone is trying.

One thing that jumped out at me and made me sit up straight was learning during the lecture that the City of Seattle's systems are set up to receive auto-updates without testing. I certainly hope that this is only for desktops and machines running non-critical systems.

[--Dennis Galvin 20:54, 18 October 2005 (PDT)] Ouch. I heard that too. I wonder what else the City of Seattle's technology folk are doing that prevents the testing. Perhaps it is a case of not enough human resources dedicated to the task (finding money in an empty pot is not always easy). Clearly, it would be in their best interests to devote the time and effort to testing. Even desktops and non-critical systems can be compromised in such a way to cause great harm. But certainly the patches need to be tested before they are implemented on the 911 system computers.

Would a Balkanized Internet give US better cybersecurity?

--Gmusick 13:08, 13 October 2005 (PDT) I was reading an article about the conflict between the US and the rest of the world over control of the DNS servers here, and it got me wondering if breaking up the Internet would be a good thing to do in terms of cybersecurity.

From the presentations last night it looks like most organized crime comes from overseas locations where they have less restrictive laws and/or no means to enforce them. So if we balkanized the internet, my theory goes, we could simply cut off entire countries from the very lucrative US market until they started cracking down on their criminals and/or terrorists.

One downside is we would lose easy access to intelligence because we would presumably not be allowed on their networks if they weren't allowed on ours. But then we could always send remote teams to these locations to hook onto their networks and do the intel gathering.

But on the plus side, remote attacks from locations beyond our jurisdiction would be eliminated (in theory). The criminals and terrorists would have to set up shop in our country or in other countries friendly to us. And this would give us a chance to nab them and interrogate them.

Chris Fleizach Although it is an interesting point, the resulting uproar would surely quash any attempt. China, among many other countries, has attempted to restrict access out of the country to places where citizens could learn about democracy. Of course it has largely failed, thanks in part to outsiders in other countries acting as routers for people on the inside (even the US government is in on the act NYTimes). How soon before routers would be set up in countries that can still access America? Then we're left with the same problem. We could go after the routers, but what if the routers are made out of botnets?

--Parvez Anandam: I was Director of Development at the small startup mentioned in the NY Times article referenced above in Chris' note. SafeWeb provided an anonymizing proxy that let employees at corporations securely surf the Web, free from the kind of monitoring put in place by Goldman Sachs. Triangle Boy was a packet-reflector that SafeWeb came up with for the venture capital arm of the CIA. The idea was that volunteers (random people) would run this packet reflector and someone in China or Iran could get their IP address through email or other means. That person in China or Iran could then access the SafeWeb anonymizing service and surf the Web that way: one hopes that they were accessing the New York Times or other news sites (the server logs did certainly show a lot of traffic to such respectable sites but they were certainly not the only sites). Everything worked quite well. Unfortunately, ad revenue for the free service left something to be desired; who would have guessed? SafeWeb had to reinvent itself from a standard-bearer for privacy and anonymity to a tool that corporations could use to monitor users. It's interesting how the same technology can be used for opposite goals. SafeWeb was acquired by Symantec exactly two years ago, on October 15, 2003.

Keunwoo Lee 01:18, 14 October 2005 (PDT): A technical clarification: DNS is only a name resolution system. DNS is what allows your computer to figure out, for example, that the string "cubist.cs.washington.edu" corresponds to the numeric IP address 128.208.1.51. But communication on the Internet doesn't require DNS; you can connect to hosts based on IP address alone, without ever consulting DNS, and indeed I believe that is what most worms do. Even if DNS split, everyone on the Internet would still share the IP address space. So, splitting DNS could make life inconvenient for users --- who want e.g. google.com to point to Google Inc.'s servers no matter where in the world they are --- but virus/worm writers wouldn't be affected at all.

(And even an exploit that depended somehow on hostnames could be written to consult the US DNS servers, the UN DNS servers, or both, as needed.)
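A minimal sketch, in Python, of the distinction between name resolution and raw-IP connectivity (the hostname and address are the ones from the example above; `needs_dns` is a hypothetical helper written for illustration, not anything from the lecture):

```python
import socket

def needs_dns(target: str) -> bool:
    """Return True if target is a hostname that must be resolved via DNS,
    False if it is already a literal dotted-quad IPv4 address."""
    try:
        # inet_aton parses dotted-quad addresses locally; no network traffic
        # and no DNS query happens here.
        socket.inet_aton(target)
        return False
    except OSError:
        return True

# A worm that hard-codes IP addresses skips DNS entirely:
print(needs_dns("128.208.1.51"))              # False: literal IP, no lookup needed
# A human-friendly name requires a DNS query (e.g. socket.gethostbyname):
print(needs_dns("cubist.cs.washington.edu"))  # True
```

Splitting DNS would change only how the second case resolves; the first case relies on the IP address space, which everyone would still share.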

As for the larger issue of Balkanizing the Internet (not just DNS): as Geoff said in one of his lectures, it's technically feasible to cut the wires between nations, but nobody wants to do it because the economic impacts would be huge.

--Chris DuPuis 10:19, 14 October 2005 (PDT) Cutting the U.S. off from the global Internet would be a great way to ensure that we become a technological backwater. While the rest of the world enjoys free international communication, we would be cut off from everyone else. We would be unable to participate in an increasingly networked research community, which might drive researchers to leave the country in pursuit of more amenable conditions elsewhere. And we would still be vulnerable to domestic attacks.

Chris Fleizach - Although the EU and China would like the US to give up control of ICANN (the organization responsible for giving out IP addresses and running the root servers) to the UN, the US has resisted the pressure. There has been some talk that China might set up its own system if its demands are not met, which would essentially fragment the Internet into smaller entities. It's not a serious concern yet, but as the importance of the Internet grows in other countries to the point it has reached in the US, many countries may no longer want a non-profit, US-based group (ICANN) in a position to disrupt their cyber-infrastructure should it so wish. Or rather, they may not want just one US organization responsible for so much.


--Gmusick 12:24, 14 October 2005 (PDT) It doesn't have to be absolute. I only said we would cut ourselves off from countries/geographic regions that did not/would not enforce appropriately stringent laws (from our perspective) regarding cybercrime/terrorism. Given the current geo-political climate that would still probably leave us fully connected with most of Europe, Canada, India, Australia, China (maybe), Taiwan, Japan and a few others.

And, yes, we would still be vulnerable to internal attacks. But at least they would be within our jurisdiction so we could go after the perps using police instead of sending several hundred thousand troops after them in a hostile land to little or no effect.

This shouldn't seem like a terribly radical idea. PGP is built upon the idea of allowing access through a trust-based system built up by reputation. Balkanizing the internet (if that could even be done as noted above since DNS names and IP addresses are indeed different creatures) would just be a logical extension of that to the level of a national political unit.

Anyway, I'm not actually advocating it. But it would make for an interesting "what if" study to see what all the far-reaching effects would be.

--Anil Relkuntwar 20:18, 17 October 2005 (PDT): Adding to Gmusick's point, PGP (and many other cryptographic techniques) are in fact designed for trusted parties to communicate over un-trusted channels, specifically the Internet. Given the Internet as it stands today, yes, it is un-trusted, but it is also simple and quite flexible, which allows a lot of applications to be built on it. For the reasons stated above, and as noted in the lecture, nobody would want to take it down or balkanize the Internet as a channel itself. Building a completely secure system/network based 100% on PGP is not a bad idea. Cyber-security is about securing IT infrastructures from cyber-criminals without cutting up the Internet.

Asad Jawahar Disconnecting yourself from the rest of the world and living in isolation will probably make you safe, but it would have a very negative impact on the economy. The benefits of a global society and no trade barriers are recognized by everyone. I think we have to be careful in how we put up defenses so that we do not turn into a closed society or a police state.

Secondly, if you devise a mechanism to disconnect other countries from the US, it may well be used against us by disconnecting us from the rest of the world and adversely affecting our economy.

Interesting Article Discussing Cyber-counter terrorism

--Gmusick 15:34, 14 October 2005 (PDT) Al-Qaida proving elusive on the Net

--Hema 17:47, 14 October 2005 (PDT) Nice article. We spend millions of dollars trying to protect ourselves from the next possible attack. We are playing the defensive game. I would be happier if we went on the offensive instead and troubled trouble before trouble troubles us. It should be far easier to mount a cyber attack on suspected terrorist sites or cyber crime gangs than to figure out when and how they will attack us next. The article argues that attacking would cost us information that might be valuable if we instead continued monitoring surreptitiously. To me, we should not pass up the opportunity to disrupt their activities, or at least cause them major inconvenience.

--Imran Ali In order to go on the offensive, the government would have to spend millions of dollars in addition to what is spent on defense. Also, how would you determine whether a site is really a 'terrorist' site or not? This is moving into dangerous territory, as the government would now have the power to attack *any* site that it deems in conflict with its own self-interest. The first step, imo, would be to not allow the government to decide which sites are related to 'terrorism'; instead, an objective third party should help in that effort. This is somewhat idealistic, but given the alternative, it would be preferable to allowing the government free rein to attack any sites it deems dangerous.

--Gmusick 17:08, 15 October 2005 (PDT) The so-called "Bush Doctrine" already allows the US to INVADE any country it deems to be supporting terrorist activities, and the wars in Iraq and Afghanistan are already costing the US government about $100 billion per year, not to mention the civilian casualties. So would it really be all that bad if we (the US) launched a cyber-campaign instead of a real-life one? What's worse: not being able to order from Amazon.com because we are being counterattacked in cyberspace, or 4 dead marines, 20 dead civilians, and over a hundred injured by a car bomb in Baghdad?

--brianmcg Somewhere between offensive and defensive is the concept of deploying defenses that are more active than any corporation (and I’m guessing government) would deploy. There are ‘hack-back’ systems which recognize attacks and respond to the attacking system as a sort of retribution. Here’s a brief summary, including reasons why it is probably a bad idea: http://www.sosresearch.org/publications/ISTAS02hackback.PDF Will there ever be a cyber-attack equivalent of a ‘make my day’ law, where an active defense is allowed by statute?

Chris Fleizach - I agree that hack-back is probably a bad idea, but there is now a legal precedent of sorts in Florida's gun laws, which allow Floridians to "meet force with force," erasing the "duty to retreat." Of course, this applies when a life is threatened, but can a business claim its "life" is also threatened by a DDoS and that it has a right to meet force with force? At least in Florida it does.

Why kill the cash cow?

Joe Xavier 18:38, 16 October 2005 (PDT): Caveat - This is merely me thinking aloud. Yes, I do work for a large software company that is part of the self-sustaining ecosystem I describe below :)
We spent a small portion of the last few lectures talking about the necessity of building secure systems. A secure system could be defined in a number of ways: a system that cannot be accessed except by a pre-defined set of users; a system that is inherently open (anyone can access it) but whose data can only be read by a pre-defined set of users; a large-scale data-transfer system where unauthorized users cannot learn anything about the data by simply reading all the packets... the list is endless.
And in this ecosystem that we call the current state of computing, we have vendors who sell software that doesn't qualify as secure. And co-existing in this ecosystem are companies that profit off the vulnerabilities in the systems that the other vendors sell. Year after year this economy sustains itself.
Security products pile up on shelves right next to the boxed systems they're meant to make more secure. And this is a multi-billion-dollar market, and growing. Aside from a few exceptions based on social engineering (spam, phishing), security vendors sell solutions to problems that should not have existed in the first place. Something tells me that this is the *only* industry that can sustain a model like the one I (very loosely) sketched above. Does anyone know of any other industry or economic model where this happens? I can't think of one.
So, given that this ecosystem is worth tons of money, employs a ton of people, and keeps a healthy portion of the NASDAQ ticking quietly, does it make sense to kill it? Wouldn't building secure systems also kill off a major portion of the computer industry? For example, if OS vendors were to develop and ship good anti-virus systems, where would McAfee make its money? Or, if tougher legislation and better forensics made it really scary to hack into computers and the number of network breach attempts fell drastically, how would IT security departments justify their budgets?

Writing this made me think of an answer to the question I posed above - the computer industry ecosystem that I cynically described seems to have similarities with the interplay between terrorist groups and various security agencies. How do you justify a defence budget without something to defend against? You need extremist groups actively engaged in subversion, or at least the impression of enemy activity. And the best part (again, apologies for the apparent cynicism) is that it's hard to quantify the effectiveness of an agency whose role is to protect a country from terrorists. An apparent failure - a successful terrorist attack - becomes grounds to increase the agency's funding rather than an inquiry into why the failure happened. Go figure.


Chris Fleizach - During the last lecture, I believe, it was mentioned, jokingly, that we might end up buying into a protection racket where teenage botnet controllers would keep our systems up to date so that others can't take them over. In the same vein, isn't this what we're doing already... paying Norton, McAfee et al. to protect our systems? And yet they consistently fail (on purpose?). Why is it that an Eastern European hacker can care more about the security of your system than a company you may be paying a monthly subscription fee to? Microsoft did enter the arena with their spyware remover, and there have been hints that they might enter the anti-virus market. Does that mean Microsoft will want to perpetuate an insecure-system policy in order to exploit the anti-virus market?

Joe Xavier 23:58, 16 October 2005 (PDT): No, I wasn't trying to imply that platform vendors will try to perpetuate security flaws. In fact, they'll do everything to make their systems inherently secure or face the threat of their platforms losing share.
It was obvious early on that security flaws in the Windows platform (and apps) - and the hype generated thereby - would simply provide fuel to competitors trying to wrest market share away. And that's become such a common refrain that it's quickly becoming stale: 'oh yeah, Windows is insecure; open source is the way to go'. Given the number of security fixes (that don't get called such) in Linux, I won't even get into that argument :).
It is in Microsoft's own best interest to have a secure platform without the need for third-party tools, including any it might ship itself. There's not much money in shipping anti-virus tools. The market for spyware-removal tools isn't big. Nothing's as big as securing the platform itself, and keeping that platform safe simply makes good sense.

Ted Zuvich - I like Chris' argument. Paying Norton is like paying an insurance premium... and then the insurance doesn't work anyway. A legalized racket, even if they are NOT deliberately failing.

Asad Jawahar I guess you would have to do a cost-benefit analysis to really understand the benefits of keeping this ecosystem alive. What does it cost the economy every year because of security flaws, versus the revenue/jobs generated by these security companies?

Human Cost & Factors for security and compliance

Avichal 20:00, 18 October 2005 (PDT) Phil Venables touched on regulatory and compliance measures. Having worked at a financial institution for quite a while, I have seen the environment change significantly over the years, with one regulatory requirement after another to meet, just as Phil described. From Phil's comments it seems they side-step this churn by taking a holistic view of ensuring quality and standards, rather than having to chase down every new piece of legislation.

However, there is also a very human side to this: the people who need to follow the procedures being implemented. Being a technical person, I have been on the receiving end of most of this at my institution, and it hasn't been pretty. Usually there is little understanding of the actual regulations; we only see a metastasized version of them in the form of arcane, monolithic procedures that we are required to follow. For example, in order to monitor changes to our production systems we have an elaborate and hard-to-navigate change request (CR) process. The joke about it is that 'You need a CR to move the mouse on a server, but not to reboot it.' Which reminds me of the example Ed Lazowska mentioned, about users choosing passwords like Jan21, Feb21 when required to change them every month.

The procedures developed have to keep the end user in mind, and users need to be well informed about them. This is true of the security domain overall as well: well-crafted but complicated security frameworks may end up not being used (like the Windows security mechanisms, whose features are heavily underutilized in most cases). In my opinion, the whole approach to security needs to change from a techno-geek perspective to something that heavily involves the human and social-engineering elements at every stage.