Talk:Lecture 10


Responsible Disclosure of Security Vulnerabilities

Pravin Mittal

What is the responsible way for an "ethical hacker" to disclose a security vulnerability? I am a little torn between full disclosure to the public and limited disclosure to the software vendor, publicizing the flaw only once the patch is out, as I can see the pros and cons of both.

Limited disclosure helps the vendor release patches for the flaws before the bad guys decide to use them for nefarious activities.

But what if vendors are not responsive, while "black hat" hackers are capable of finding the flaws on their own? Full disclosure may also allow the community, especially in the open-source world, to react quickly and fix the problem; BugTraq is a good example.

Also, to quote Elias Levy, who was named one of the "10 most important people of the decade" by Network Computing: "Back in 1993, the Internet was actually far less secure than it is today because there was little or no dissemination of information to the public about how to keep malicious users or hackers from taking advantage of vulnerabilities."

Also, I would like to hear from the public policy students: are there stated guidelines/laws/policies from the U.S. government?

I did find a comment by Richard Clarke, President Bush's special advisor for cyberspace security, who said security professionals have an obligation to be responsible with the disclosure of security vulnerabilities. They should first report vulnerabilities to the vendor that makes the software in which the vulnerability is found, and then tell the government if the vendor doesn't take action.

Pravin Mittal


SMM: As usual, the criterion is easy to state but hard to operationalize. We should follow whichever practice minimizes the number of malicious exploits. The twist is that Dr. Lackey can find potentially disastrous problems that, apparently, the black hat community never gets around to noticing and/or turning into actual disasters. Dr. Pustilnik's comment that his group is much more fiendish than the guys at black hat conventions is pretty much the same point. If we believe that Microsoft has gotten out ahead of the (quasi-open source) black hat community, then total openness might not be the best policy.

The idea that companies will ignore defects unless you do exploits is really interesting. In fact, it screams "institutional failure" or "market imperfection." More precisely, the intuition should be that there is some discrete thing wrong that, if you could only find and fix it, would make things globally better. The current hacker/exploit system looks like a bizarre historical accident that would never evolve the same way twice. Are there other mechanisms that would work better? For example, if companies were required to carry insurance, would the insurance company start jumping up and down to get their attention faster? Maybe that won't work, but it's worth thinking about what would.

BTW, the idea that security can now push back on marketing and say "that can't be done" is a very positive sign. In the old days, companies that shipped insecure products got the revenues and shifted the losses onto consumers. Now they are seeing something closer to the net cost. Telling marketing "no" says we're getting the incentives right, that projects with net negative value for society also have net negative value for the vendor.

The question about having prizes for bugs is really interesting. You could imagine that the expected payout per bug would be smaller than the $2K/person/day that MS pays out for finding bugs Lackey's way. I was particularly puzzled by the idea that paying people prizes for finding bugs is contrary to public policy. When you get past the "giving in to terrorists" rhetoric, what you're really doing is either outsourcing your bug-hunting or persuading people that it is more profitable to turn their discoveries into money than to be malicious. (In the latter case the semi-ethical hackers get subsidized to discover more bugs than they otherwise would, so some of the bugs you buy might never have been discovered without a prize and are therefore wasted. But the same thing happens when you hire Lackey, so you can't really count it as a fundamental objection.) I suppose that there are also mechanical problems with prizes -- you don't want to pay prizes to guys who then turn around and do exploits anyway -- but one imagines that there are fixes for that. For example, you could make payment conditional on nobody exploiting the same bug for 90 days.

Finally, the continued prevalence of parsing and buffer overflow errors suggests that the industry isn't anywhere near the limits of what computer science knows how to do. Presumably, simple errors are about social institutions and incentives. The question is, what percent of the existing problem is amenable to those kinds of reforms? Using Microsoft bulletins to figure out what percent of problems are social vs. frontier CS problems would make an interesting term paper.

Geoff Voelker I found the attitude about buffer overflows from the first two speakers somewhat worrisome. Basically, they were not interested in exploits that a "trained monkey" could perform, like buffer overflows. Instead, they wanted to focus their attention on the tough, challenging cryptanalysis problems such as finding flaws in security protocols (e.g., Kerberos or 802.11). This worries me because it falls into the classic pattern of not focusing on the easiest way to break a system (practical security), but instead on what is technically challenging or creative. (The perfect is the enemy of the good.)

Keunwoo Lee: I also found the speakers' attitudes towards automatic programming tools to be overly glib and pessimistic. For example, yes, a buffer overrun in a type-safe language can lead to a denial-of-service attack (an index-out-of-bounds or out-of-memory exception) rather than arbitrary code execution, but I'd claim that this is an obvious improvement. If your code crashes with an exception, then you shut down the server and patch it, whereas if you give the attacker a root shell, you may not even notice; plus the attacker can bring down the server and additionally do whatever else (s)he desires. Some programming languages and automated tools can virtually eliminate certain classes of bugs. Yes, there will always be security problems that automated tools do not catch, but that argument could be used against penetration testing or any other security measure that's not 100% complete. Eliminating root exploits via buffer overruns, format string attacks, and race conditions would make a big quantitative difference. These may not be the "cool, hard cracks" that security geeks love, but they're still an important and solvable problem.
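
To make the contrast concrete, here is a minimal C sketch (my own illustration, not code from the lecture) of the two failure modes: the unchecked copy can hand an attacker control of the process, while the checked variant fails loudly, which at worst is the denial of service described above.

    /* Minimal illustration of the two failure modes discussed above.
       The unchecked strcpy() silently smashes the stack and can hand
       the attacker control; the checked variant rejects the input and
       at worst crashes -- a DoS, but not arbitrary code execution. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void handle_unsafe(const char *input)
    {
        char buf[64];
        strcpy(buf, input);   /* no length check: a long input
                                 overwrites the return address */
        printf("handled: %s\n", buf);
    }

    static void handle_checked(const char *input)
    {
        char buf[64];
        if (strlen(input) >= sizeof buf) {
            /* A type-safe language performs this check implicitly and
               throws an exception; in C we must remember to write it. */
            fprintf(stderr, "input too long; rejecting\n");
            exit(EXIT_FAILURE);
        }
        strcpy(buf, input);
        printf("handled: %s\n", buf);
    }

    int main(void)
    {
        handle_unsafe("ok");
        handle_checked("hello");
        return 0;
    }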

Rob Anderson I couldn't agree more. I asked Mark about this during the lecture, and he blew me away with his dismissiveness w.r.t. automated means of eliminating vulnerabilities. We're still shipping software with bugs, and---surprise!---we always will. No amount of training or cracking the whip is going to eliminate those bugs. We didn't train C/C++ programmers to stop inserting memory leaks; instead we gave them a garbage collector, because we wanted a 100% solution and because programmers are naturally lazy. It seems that the right place to focus money is on automated tools for eliminating vulnerabilities, and more importantly, on languages and systems that prevent vulnerabilities in the first place. A DoS is, as Keunwoo points out, far, far better than a root exploit.

Jeff Bigham

  • Prizes for Bugs. A complication with awarding prizes for finding bugs is coming up with a pricing structure that makes sense and doesn't end up making matters worse. A problem exists when hackers could get more on the open market for their exploits than they could from the prize, so, to make prizes work, businesses would always have to offer more than the open market potentially would. Given the presumed high profitability of botnets and the like, it seems that a good exploit could potentially be worth millions. Unfortunately, that potential payoff may be much higher than what the hacker could actually realize, and it seems incredibly difficult to gauge it accurately. It's also not clear to me how a hacker could prove to Microsoft that they've found a legitimate exploit without giving away a huge hint as to how it was accomplished (if they break into a Microsoft computer, they've probably left breadcrumbs that the smart people at Microsoft could piece together). Given the uncertainty surrounding these pseudo-ethical-maybe hackers at so many levels, it seems like MS might be out a whole lot more money paying off a lot of threats of dubious seriousness, with the side effect of enticing people into targeting their software instead of that of their competitors. Without prizes, however, MS can hire people to do the same sorts of things while under their control and without nearly as much risk of them going bad.

Geoff Voelker There are a couple of assumptions here that may not hold. First, I suspect that there are many more people willing to submit exploits for prizes than to put them on the botnet market. Prizes make the effort legitimate, and turn it into a programming activity like any other (e.g., Google giving prizes for programming projects). Second, you do not necessarily have to exploit a computer at Microsoft -- you just need to demonstrate an exploit in MS software.

BotNets and Windows Operating Systems

Chris Fleizach - We haven't heard a great deal about botnets in this class, but I think their threat potential deserves attention. A few reports have come out from industries, like online gambling (usually operating outside the US and its legal protections), that have detailed either how they succumbed to or defeated a targeted DDoS attack from a botnet. Although I couldn't find a reference, there was a Wired article about an off-shore gambling company which was the target of an attack that at its peak had 3Gb/sec directed at it. I'm assuming 100% of those hosts were compromised Windows machines (otherwise separate botnet software would have to be written for each type of OS, for little gain). I was curious whether Microsoft thought they should take an active role in stopping the botnet problem, or whether they were responsible at all. In one sense, the software controlling the computer for the botnet was installed with the user's consent, and it must do what it needs to do: the system assumes the user installed it and runs it with the user's privileges. Many times, social engineering can fool a user into installing such software without exploiting vulnerabilities inherent in the OS. The first speaker mentioned that they would continue to improve their spyware and adware detectors, frequently sending updates that can find new engines. The most obvious problem with this approach is that an enterprising hacker can write his own botnet controller that Microsoft won't know about.

The next obvious solution is to turn on the firewall and disallow incoming connections, which would stop a botnet controller from accessing the computer. But currently, when software is installed as a user, there is nothing stopping that software from disabling the firewall entirely, or just for the specific port it needs. Linux and MacOS both require a password to enter the control panel and change settings like the firewall, but Windows has never done this; access is granted based on the first login. It seems, just from a cursory examination, that preventing botnets might start with gating access to critical system configuration behind passwords. Does anyone from MS know if Vista will do more to protect against these problems? Do they have better ideas about stopping botnets before they start? Is it their problem at all?

Geoff Voelker

We will hear more about botnets later in the quarter.

As to whether MS feels responsible, I would put it in terms of whether MS feels market pressure. And the answer is yes -- they incur a huge support cost as a result of spyware, adware, etc. Malicious code tends to cause stability problems on people's computers. The first place people blame and call is MS. Since support is costly, MS would like to eliminate the source of those calls.

I'm not entirely clear on the proposed scenario, but I don't think that firewalls are a solution here. Once malicious code is able to run on a machine, it can control the firewalls.


Jeff Bilger From my experience, the first place people blame and call is the company that makes the program that they were using when they encountered a problem. For example, if they are using an Internet-based application, they will call that company's technical support line and not MS. To further complicate matters, anti-spyware and anti-virus programs are becoming more and more aggressive in detection. The end result is that both spyware and the programs used to combat spyware are interfering with Internet-based applications.

At the company I work for, the majority of the technical support calls that we handle are related to spyware interfering with our program and/or anti-spyware/virus tools generating false positives and blocking our program.

Also, it would be interesting to hear from someone at MS regarding their support policies (free or fee based?) regarding home users of their operating systems.


--Imran Ali 12:42, 4 November 2005 (PST) It looks like the FBI apprehended the 'Botmaster'; see article: FBI agents bust Botmaster. His sentence carries a maximum term of 50 years. It looks like the government is cracking down hard on this type of criminal activity. The article also indicates that the Botmaster was 'lured' to the FBI offices. I wonder how they actually tracked him down in the first place?

Jack Menzel I'm not sure how reputable a source "pcpro.com" is, but they claim that the luring entailed being "lured to FBI offices in order to pick up equipment seized in an earlier raid". There was a more in-depth report from the Washington Post yesterday.


Asad Jawahar From what I know, MS is hardening security in Vista along the lines you stated, i.e., Vista will probably require a password for security-sensitive operations even if you are logged on as admin.


Manish Mittal I would like to see more on botnets as well. Botnets are used predominantly to mount DDoS and other kinds of attacks: they can be used to send spam, launch phishing attacks, and carry out denial-of-service attacks. It's darkly funny that these hackers even use extortion as a means to get money from gambling sites.

A few eye-popping examples are listed in this article as well: http://online.wsj.com/article/SB112128442038984802.html?mod=2_1163_1

Criminals penetrated the database of CardSystems Solutions Inc., nabbing up to 200,000 Visa, MasterCard, American Express and Discover card numbers and potentially exposing tens of millions more. Leading high-tech companies in Israel allegedly planted surveillance software on the computers of their business rivals. British security officials warned of a computer attack aimed at stealing sensitive information from banks, insurers and other parts of that country's "critical infrastructure."

Meanwhile, government authorities report that hackers are stepping up attempts to attack critical systems such as water, electricity, finance, transportation and communications. Last year, the Department of Homeland Security prepared a worst-case cyberdisaster scenario where criminals broke into financial-services facilities.

Twenty million credit cards were canceled, automated teller machines failed nationwide, payroll checks couldn't be delivered, and computer malfunctions caused a weeklong shutdown of pension and mutual-fund companies. "Citizens no longer trust any part of the U.S. financial system," the scenario concluded.

--Khaled Sedky 03:08, 9 November 2005 (PST) A good reference for understanding what botnets are, and some general tips on how to avoid being infected by such trojans, can be found at [1]

From this article and from the text of this thread, it becomes obvious that there are many reasons this threat should be taken seriously. In the broadest sense, the user experience is being compromised. Uses of botnets directly impact existing spam, anti-piracy, and customer safety initiatives. Botnets are bothersome to individual consumers, they incur cost to enterprise customers, and DDoS threatens the very infrastructure of connected computing. Infected machines represent users who no longer have a safe computing experience. Modern bots harvest email addresses and open back doors that allow controllers to attach and execute new, arbitrary code on the target at any time. It also becomes apparent that there are criminal uses of botnets, including theft, piracy, malicious DDoS, large-scale extortion, and -- most relevant to the subject of this course -- trafficking/profiteering and terrorism. On this last subject, it's not out of the realm of possibility that this disruptive weapon is within the reach of a terrorist organization that would use it in combination with a physical attack to achieve maximum disruption.

To my understanding, Microsoft -- owning a considerable percentage of the desktop OS market share, with tools like Windows Update on one side and crash-reporting tools like OCA, anti-virus tools, PSS calls, and crash dumps on the other -- has a plan for fighting botnets with a four-way approach: Detecting, Disrupting, Discouraging, and Disinfecting. There are task forces geared up for that effort.

Jack Menzel- Speaking of Microsoft's plans for defending against botnets, this article in PCWorld says they plan to extend the anti-spyware software to address rootkits and other malicious software that would constitute a botnet.

And to comment on the four-way approach: it seems that there should be another pillar added to that platform, "Educate". People have been trained by society to be very careful about whom they let into their houses, but much less so about what code they let run on their boxes.

Examples of Vulnerabilities

Jeff Davis Last night people were asking for examples of security vulnerabilities that are beyond just the basic buffer overflow -- things that tools like Prefix and Prefast cannot detect. Internet Explorer has had plenty of these. We had an interesting vulnerability in the download dialog where an evil web site could repeatedly send an executable, causing IE to display the download prompt asking the user to Run, Save, or Cancel. If the web site did it quickly enough, it could create enough dialogs to run the system out of memory. When this happened, the dialog would fail to display. The developer who wrote the code never considered what would happen if DialogBoxParam() failed due to OOM, and due to a quirk in the way he initialized his variables, the switch statement afterward would fall into the Run case, allowing the bad guys to run arbitrary code on the machine.
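
For concreteness, here is a hypothetical reconstruction of that bug pattern (the actual IE code certainly differed): DialogBoxParam() returns -1 when the dialog cannot be created, and if that failure value is never tested, the switch can land on the dangerous case.

    /* Hypothetical reconstruction of the bug pattern described above;
       not Microsoft's actual code.  DialogBoxParam() returns -1 if the
       dialog cannot be created (e.g., out of memory).  If that value
       is never checked, failure silently becomes "Run". */
    #include <windows.h>

    #define ID_RUN    100
    #define ID_SAVE   101
    #define ID_CANCEL 102

    /* assume these exist elsewhere in the (hypothetical) program */
    extern INT_PTR CALLBACK DownloadDlgProc(HWND, UINT, WPARAM, LPARAM);
    extern void RunFile(void);
    extern void SaveFile(void);

    void PromptForDownload(HINSTANCE hInst, HWND hwndParent)
    {
        INT_PTR choice = DialogBoxParam(hInst, MAKEINTRESOURCE(1),
                                        hwndParent, DownloadDlgProc, 0);

        /* BUG: no test for choice == -1, the OOM failure path */
        switch (choice) {
        case ID_SAVE:   SaveFile(); break;
        case ID_CANCEL: break;
        default:        RunFile(); break;   /* -1 lands here: failure
                                               silently becomes "Run" */
        }
        /* Fix: treat anything other than an explicit user choice --
           including -1 -- as Cancel, never as Run. */
    }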

In Writing Secure Code, page 54, when listing security principles to live by, Howard states clearly: "Don't mix code and data." Unfortunately for us, web pages by their nature are mixtures of code and data. People have done all sorts of creative things with jscript. Prior to Windows XP Service Pack 2, web sites could call window.createPopup() (not the same as window.open()), creating a borderless, chromeless, topmost window with HTML content of their choosing. The function was originally intended to make scripting menus on web sites easier for developers. However, it led to all sorts of spoofing attacks, like overlaying the Address bar with phony text, covering up the Cancel button on the ActiveX install dialog, etc. Another fun vulnerability was web sites that would wait for the mousedown event and then move the IE window out from under the mouse, creating a drag-and-drop event. If they were clever about how they did this, evil web sites could put links on your desktop, in your favorites folder, etc.

Obviously, switching languages from C++ to something "safer" is not a good solution, nor is relying on tools like Prefix (not in any way to disparage the work the Prefix team does). I saw Theo de Raadt speak at an ACM conference once, where he spoke about the OpenBSD three-year, line-by-line code audit. He said 90% of the vulnerabilities stemmed from people not fully understanding the APIs they called (or the documentation being broken, etc.), and he is right.

Keunwoo Lee 01:26, 9 November 2005 (PST): Why are better tools and languages "obviously" not a good (partial) solution? They won't catch some of the bugs you describe, but they will catch a lot of important ones. Go to CERT and count the bugs that could have been prevented with a type-safe language. Just on today's CERT front page, just considering Microsoft software, I find this and this.
I also think that we shouldn't underestimate the power of tools. Researchers at MSR and elsewhere have developed a number of tools that use program analysis exactly for the purpose of checking API-usage bugs. Are they perfect? Of course not. But what is the alternative? Raw labor. Program analysis tools do not replace labor; when well designed, at least, they are a labor amplifier. Compared to using unassisted labor, the right tool can enable the programmer to produce higher quality software with less effort. And certain tools can eliminate certain classes of bugs entirely.
Finally, it's far from clear to me that research into better programming models couldn't help prevent attacks of exactly the sort you describe. One could devise some kind of formal model of user interface actions (all possible keystrokes, all possible mouse-up and mouse-down events) and interface transitions. One could then use model checking to ensure that no sequence of events can lead to a violation of certain checkable properties. Once again, such a tool would be imperfect, but it could yield a quantitative improvement. Can such a tool be built? I don't know. It's not obvious to me that it can't be done, and if I were the head of Microsoft Research then I'd task a researcher to either do it or prove to me that it's impossible.
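
As a toy version of that idea (mine, not something from the lecture), one can already write a brute-force checker that enumerates every bounded sequence of UI events over a small state model and asserts a safety property. Real model checkers (e.g., SPIN) do this far more cleverly, but the shape of the approach is visible even in a few dozen lines:

    /* Toy exhaustive checker: search all event sequences up to a
       bounded length and flag any state violating the property
       "no script popup may cover chrome while a security dialog is
       up."  The model is deliberately simplistic and illustrative. */
    #include <stdio.h>

    enum event { SHOW_SECURITY_DIALOG, DISMISS_DIALOG, CREATE_POPUP,
                 CLOSE_POPUP, NUM_EVENTS };

    static const char *names[] = { "ShowSecurityDialog", "DismissDialog",
                                   "CreatePopup", "ClosePopup" };

    struct state { int dialog_up; int popup_over_chrome; };

    static struct state step(struct state s, enum event e)
    {
        switch (e) {
        case SHOW_SECURITY_DIALOG: s.dialog_up = 1; break;
        case DISMISS_DIALOG:       s.dialog_up = 0; break;
        case CREATE_POPUP:         s.popup_over_chrome = 1; break;
        case CLOSE_POPUP:          s.popup_over_chrome = 0; break;
        default: break;
        }
        return s;
    }

    static int safe(struct state s)
    {
        return !(s.dialog_up && s.popup_over_chrome);
    }

    /* Depth-first search over all event sequences up to 'depth'. */
    static int check(struct state s, int depth, enum event trace[], int len)
    {
        if (!safe(s)) {
            printf("violation after %d events:", len);
            for (int i = 0; i < len; i++) printf(" %s", names[trace[i]]);
            printf("\n");
            return 0;
        }
        if (depth == 0) return 1;
        for (int e = 0; e < NUM_EVENTS; e++) {
            trace[len] = (enum event)e;
            if (!check(step(s, (enum event)e), depth - 1, trace, len + 1))
                return 0;
        }
        return 1;
    }

    int main(void)
    {
        struct state init = {0, 0};
        enum event trace[8];
        if (check(init, 8, trace, 0))
            printf("property holds for all sequences of length <= 8\n");
        return 0;
    }

On the pre-SP2 createPopup() model above, the search immediately finds a violating sequence, which is exactly the kind of counterexample a developer could then design against.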

David Dorwin Searching Microsoft's website, it appears PREFast will be in Visual Studio 2005 ("Whidbey"). Is PREFix going to remain internal?

--Dennis Galvin 16:07, 9 November 2005 (PST) Why I think switching languages is not a good solution: No language is completely safe. Unsafe aspects of a language may not be known when the language is created, but only become apparent at a later time. Compilers are also software, so they are not free of implementation bugs. Even if you can actually eliminate one class of error from the range of possibilities via a language switch, there will always be another type of error which can be exploited. Can we eliminate all errors by running everything in a sandbox? In that the sandbox and OS are software, likely not. Switching languages to eliminate security flaws feels like the marketing department promising something the engineers and developers cannot truly deliver.

False Positives in On-line Crash Analysis

Jeff Davis What the first speaker said about false positives is true. I debugged a lot of dumps where the stack cookie (/GS switch stuff) was clearly visible on the stack in the place it should be, only one bit had been flipped. Or it was one word away from where it should be on the stack. These are indicative of faulty hardware (or overclocked hardware). Or the stack was just complete random garbage, which indicates a serious flaw somewhere, but it is pretty difficult to debug if the stack is blown away.

In all the OCA buckets I have looked at, I only found one buffer overflow from a GS violation, and it was a false positive -- the cookie was there on the stack, just off by a bit. But while I was in there looking at the code, I noticed some string manipulation in a different function that looked suspect.
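
The single-bit-flip triage described here is mechanical enough to sketch. The names below are illustrative, not Microsoft's actual OCA tooling:

    /* Triage heuristic: if the on-stack cookie differs from the
       expected cookie by exactly one bit, the "overflow" is almost
       certainly a hardware fault (bad or overclocked RAM), not an
       exploit.  Illustrative sketch only. */
    #include <stdint.h>
    #include <stdio.h>

    static int popcount32(uint32_t x)
    {
        int n = 0;
        while (x) { x &= x - 1; n++; }   /* clear lowest set bit */
        return n;
    }

    /* Classify a cookie mismatch found in a crash dump. */
    static const char *classify(uint32_t expected, uint32_t found)
    {
        int differing_bits = popcount32(expected ^ found);
        if (differing_bits == 0)
            return "cookie intact -- look elsewhere";
        if (differing_bits == 1)
            return "single bit flip -- likely faulty/overclocked hardware";
        return "cookie clobbered -- possible real overflow, investigate";
    }

    int main(void)
    {
        /* example cookie value with one flipped bit */
        printf("%s\n", classify(0xBB40E64Eu, 0xBB40E64Eu ^ 0x10u));
        return 0;
    }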

Rob Anderson I suppose it would be possible to launch a DDoS attack of sorts by flooding MS with falsified online-crash analyses. Regular stacks from plain OCA reports could be modified, or new stacks made up. Endless hours of Microsoft developer time could be wasted on all the bogus information. I'm surprised that MS hasn't found some way to secure this channel.

Attack and Penetration Exercise or Industrial Espionage

Eric Leonard Listening to Brian Lopez discuss the vulnerability risk assessments conducted at the Lawrence Livermore National Laboratory, which attempt to penetrate sensitive national sites, has made me aware (or just paranoid) of suspicious incidents that I've witnessed first hand. I work within Microsoft's IT department, and my greater team has responsibility for many critical internal applications in the areas of finance, HR, and operations within the company. On two separate occasions, I've had confidential documentation -- including application designs and database diagrams -- taken from my office. My boss has had unknown callers asking strange questions over the phone about our security and disaster procedures. Laptops left overnight in open offices have been stolen. While these incidents may not be related, they show how, if conducted and coordinated by people with malicious intent, such activities could be used to mount an attack. It also shows that security needs to go beyond just firewalls and intrusion detection systems. Attacks based on social engineering or theft can be just as fruitful. I've heard in the press that we've been targets of industrial espionage in the past. I also know that the company has paid several external companies to try to penetrate our systems. Should every small security incident be treated as if it were an attack? When does caution go too far, if ever?

Small Size Big Impact

Tolba: The first speaker just glossed over this topic, but I think it has a very profound impact. Handheld devices and consumer electronics based on special versions of popular software like Windows are becoming more and more popular. Such devices are growing to play a central role in many people's lives. With high-speed, always-on connection plans becoming more affordable on devices like smart-phones, they are becoming more vulnerable to attacks traditionally targeted at desktop and server systems. The impact of this is especially scary given the fact that system software on such devices is usually embedded in some form of read-only memory (ROM), which heightens the update barrier for such devices. In time, handhelds and consumer electronics could become very attractive targets for attackers (if they are not already) and turn into fertile grounds for information theft and more.

Manish Mittal: With reference to this article http://www.securitydocs.com/library/3188

It's interesting to look at the stats on loss of handheld devices containing sensitive information. Two major threats to this information are:

  • Loss of the handheld device
  • Malware attacks

Rightfully so, the author suggests that unless fully engaged in the company's security efforts, end-users can be an organization's greatest vulnerability. Awareness training and related activities are the best way to obtain end-user participation in a security program.

Tolba: The article also seems to indicate that PDA/handheld malware attacks are on the rise. It also points to another attack vector, which is Bluetooth. Looking at the known virus table, it seems that a great percentage of the attacks are either made possible or enhanced by BT. Physical security of the device presents another risk, especially given the lost-PDA statistics. I think system vendors are turning their attention to this particular problem. I remember coming across a Windows Mobile 5 advertisement showcasing a feature where an IT administrator could remotely wipe the contents of a PDA (in his domain). That's actually taking the vulnerability (the permanent connection) and turning it into a safety measure. I believe such innovative thinking to be key in winning the war against malicious software and information theft.

Single Point of Failure

--Chris DuPuis 15:15, 3 November 2005 (PST) In Brian Lopez's presentation, he described a wide variety of ways in which the details of a company's security infrastructure can become public, and the measures that he recommended to mitigate such information leakage.

However, he also mentioned the crates of security-related documents that organizations send to his team for the purpose of the intrusion test. In addition, he indicated that the clients sometimes (not always) call his team back to see how they have progressed. These two facts imply that, somewhere at Lawrence Livermore, there is a storeroom full of juicy security details for some of the most sensitive sites in the country. Also, based on the description of some number of crates per client, and the large number of clients, it seems likely that a great many of these crates are stored in some kind of offsite document storage facility.

I wonder if the clients are apprised of the risk associated with the storage area for these documents being compromised.

Marty Lyons, UW -- DoD-approved facilities (contractors or government) use various references on how to store sensitive information. http://www.dss.mil/isec/nispom.pdf (DoD 5220.22-M, National Industrial Security Program - Operating Manual) is a reference for some of these policies and procedures. In general, most things used for research purposes, unless required for follow-up work, tend to be destroyed as part of a standard document retention or destruction schedule. Like most institutions, government facilities have a finite amount of physical space to store documents, so there is a standard for deciding when something needs to be destroyed to free up space. 5220.22 covers some of that; otherwise it is done by command (Navy, Air Force, etc.) or site (base, command) policy. In this instance, I'd say that once penetration testing is complete, the source data used for the analysis is likely destroyed for security -- the risks and costs of retention are higher than the value in keeping it (nearly zero) since the work has been completed.

Chris Fleizach - In a similar vein, ChoicePoint, a company that sells reports to industries like insurance, collects, stores, and analyzes sensitive consumer information. Last February there was an incident in which they sold over 150K credit reports to identity thieves because they didn't have a policy of doing background checks on their customers. And more disturbing is that ChoicePoint has no financial incentive to keep the information secret, since they bear none of the costs of fraudulent transactions and identity theft. This seems exactly like the kind of weakness Brian Lopez's team would probe when analyzing a company. I suspect most top-secret government institutions have policies that will usually make it too difficult for most attackers to get through; companies that hold data for consumers are more plentiful, with less regulation. The information they hold is usually valuable to only one individual (like a credit report), unlike a state secret. The result: a burgeoning atmosphere for criminals to exploit.

David Dorwin There is some financial incentive for ChoicePoint as long as there are some viable alternatives for their clients. One would think that clients won't want to be affiliated with companies that the public knows have lost consumer data. This summer Visa and American Express announced they would end their relationship with credit card processing company CardSystems after it exposed nearly 40 million credit card accounts. (Visa has since said it would continue its relationship with CardSystems.)

The ChoicePoint scenario described above sounds like a giant loophole. Is anyone aware of pending legislation to close such loopholes?

Andrew Cencini After reading the Spyware paper for this upcoming week's lectures, I find it interesting that the authors of the paper describe in (some) detail the layout of the UW campus network. Probably most of that information is public anyway but I found it to be interesting and well-timed to read a paper that would help an attacker reduce some of the legwork of dissecting a target network.

Passports and RFID

-Jessica Miller 16:10, 4 November 2005 (PST) I have come across a very relevant article in Wired [http://www.wired.com/news/privacy/0,1848,69453,00.html?tw=wn_tophead_2 Fatal Flaw Weakens RFID Passports] and thought it would be of interest to many of us. The article is written by Bruce Schneier, who wrote the pop-security book "Secrets and Lies: Digital Security in a Networked World". In the Wired article, Schneier discusses RFID technology and the privacy issues that come along with deploying RFID in passports. Schneier also explains how the State Department has tried to mitigate these security concerns with various "fixes". In the end, though, Schneier is still dissatisfied and argues that this RFID passport technology should not have been designed behind closed doors and should be accessible to public scrutiny. In any case, I thought this was another excellent example of the question of whether technology should be opened to the public as a security measure. (Also, it is very relevant as the State Department plans to issue these RFID passports in October 2006.)

-Scott Rose 17:56, 20 September 2005 (PST) Schneier writes in his blog that he recommends that all US passport holders immediately renew their passports, regardless of expiration date, as a last chance to get an RFID-free passport. For some, it's already too late -- the Denver office is already issuing RFID-enabled US passports. Current policy allows users to renew at any time, regardless of expiration date.

Katie Veazey: Not sure what part of the lecture discussion this fits in, but I thought highly of Brian Lopez's talk. He did a great job of breaking down what his group does in terms of pre-assessment, assessment, and post-assessment. The fact that his group examines everything from an organization's press releases, annual reports, and firewalls to its physical compound was amazing. I wondered how long this process takes? Do you think the assessment program lasts a few months, a year, or longer? I don't know how fast his staff can accomplish certain tasks or how many people he has working for him....

Chemical weapons dumped at sea off the U.S. coast

Marty Lyons, UW -- This article from the Daily Press (Newport News, VA) details chemical weapons which were dumped at sea. They are now decaying and beginning to show up in fishing nets, marine life, and more. The effects of this short-sightedness could be long-term and influence all kinds of human usage of waterfront areas, seafood consumption, and further exploration of sea areas for other uses such as oil production. There is as yet no major public or political response, but one could imagine this will be a major news event once there is an actual public safety impact; hopefully this situation is noted and acted on by the media and Congress.

Special report from the Daily Press at:

http://www.dailypress.com/news/local/dp-chemdumping-stories,0,7934774.storygallery

Part 1 (excerpt below): http://www.dailypress.com/news/local/dp-02761sy0oct30,0,3545637.story

Part 2: http://www.dailypress.com/news/local/dp-02774sy0oct31,0,6036010.story

Decades of dumping chemical arms leave a risky legacy

BY JOHN M.R. BULL Daily Press (Newport News, Va.)

NEWPORT NEWS, Va. - In the summer of 2004, a clam-dredging operation off New Jersey pulled up an old artillery shell.

The long-submerged World War I-era explosive was filled with a black tarlike substance.

Bomb disposal technicians from Dover Air Force Base, Del., were brought in to dismantle it. Three of them were injured - one hospitalized with large pus-filled blisters on an arm and hand.

The shell was filled with mustard gas in solid form.

What was long feared by the few military officials in the know had come to pass: Chemical weapons that the Army dumped at sea decades ago finally ended up on shore in the United States.

[...]

Hashing Passwords and Encrypting Credit Card Information - making it easier for fraudsters?

--Hema 20:31, 6 November 2005 (PST) In ecommerce websites, it has become a SOX requirement to hash the passwords of users before storing them. The advantage of hashing is that an unauthorized person accessing the database won't see a user's password. But seeing the password is only useful if the user reuses the same password on other websites. If someone can break into a company database, users will hate the company even if the passwords are hashed.

More often, fraudsters tend to reuse the same passwords on the site. Companies have fraud monitoring groups who monitor the website's transactions every day. Knowledge of fraudster passwords over time helps them to identify new fraud accounts created on the website. One-way hashing of passwords means it takes them longer to identify fraudsters. Encrypting credit cards leads to a similar problem: fraudsters use credit card generators to attack sites, and fraud prevention groups found it easy to catch such people because they could identify them generating credit cards with the same BIN numbers. But with encryption they are no longer able to use the same tactics.
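
For concreteness, here is a minimal sketch of what one-way storage looks like, using OpenSSL's SHA256 (the function is OpenSSL's; the scheme is simplified, and a production system would use a deliberately slow algorithm such as bcrypt). Note that it is the per-user salt, even more than the hash, that destroys the clustering signal described above: with an unsalted hash, identical fraudster passwords would still produce identical digests and could still be grouped.

    /* Minimal salted-hash sketch (illustrative; compile with -lcrypto).
       The database stores (salt, digest), never the password.  With
       per-user salts, identical passwords no longer yield identical
       digests -- which is exactly why the fraud team loses its
       password-reuse signal. */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    static void hash_password(const char *salt, const char *password,
                              unsigned char digest[SHA256_DIGEST_LENGTH])
    {
        unsigned char buf[256];              /* assumes short inputs */
        size_t sl = strlen(salt), pl = strlen(password);
        memcpy(buf, salt, sl);
        memcpy(buf + sl, password, pl);      /* digest = H(salt || pw) */
        SHA256(buf, sl + pl, digest);
    }

    int main(void)
    {
        unsigned char d[SHA256_DIGEST_LENGTH];
        hash_password("a9x2", "hunter2", d); /* made-up salt and password */
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
            printf("%02x", d[i]);
        printf("\n");
        return 0;
    }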

Could someone give examples of how internet-based companies have adapted themselves to identify fraudulent transactions?

Marty Lyons, UW -- Here's an example for you. When we designed AOL (this is back in 1993, so bear with my aged example...) verifying credit card information was fairly straightforward, and involved sending the card number, expiration date, and name to the clearinghouse, to verify the card was "good". Just like when you make a retail purchase, we would get back codes for "approved", "declined" (with numerical reason code), or "invalid" (the card number was just wrong, or the checksum didn't work). You could model that fairly readily in terms of processing overhead, and with our signup rate (let's say it was hundreds of customers per minute) you could figure out whether that was a viable authentication model.
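
The checksum in question is the Luhn mod-10 check that all the major card brands use; a sketch of the verification (public knowledge, not AOL's code) is below. Note it only catches typos: a fraudster's card generator emits numbers that pass it, which is why the clearinghouse round trip is still needed.

    /* Luhn mod-10 check: from the rightmost digit, double every
       second digit (subtracting 9 if the double exceeds 9) and
       require the total to be divisible by 10. */
    #include <stdio.h>
    #include <string.h>

    static int luhn_valid(const char *digits)
    {
        int sum = 0, alt = 0;
        for (int i = (int)strlen(digits) - 1; i >= 0; i--) {
            int d = digits[i] - '0';
            if (d < 0 || d > 9) return 0;            /* non-digit: reject */
            if (alt) { d *= 2; if (d > 9) d -= 9; }  /* every 2nd digit */
            sum += d;
            alt = !alt;
        }
        return sum % 10 == 0;
    }

    int main(void)
    {
        /* well-known test number, not a real account */
        printf("%d\n", luhn_valid("4111111111111111"));  /* prints 1 */
        return 0;
    }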

Fast forward to 1996, when AOL had grown from roughly a quarter million to over 5 million customers, and we were signing up customers at the rate of sometimes thousands per hour. Suddenly performing authentication becomes a big deal, since now there's a lot of processing and network overhead just to ship that data to the clearinghouse and get an answer. To keep fraud down, you impose an extra condition that the customer has to provide a billing address and so on -- but that adds to your overhead and slows the user experience. Slow it down too much and the (potential) customer may get tired of waiting, hang up his modem, and go do something else.

So it's a fine line here between imposing too much checking and just enough. At some level, the banking industry has built a very large acceptable loss percentage into the credit card equation. Until that percentage is reduced, and technology is deployed that requires more than just information you can get in a text file (biometric authentication, one-time-password/two-factor authentication via a SecurID [2] type device), the problem of large scale credit card fraud will only continue to worsen.

How are people identifying fraud with retail purchases? The vast majority of sites use only those four elements: card number, expiration date, name on card, and billing address; sometimes the CVV [3] (additional check digits) is requested. Most times the only security is the SSL connection from the browser to the back end, and more often than not the data is *not* encrypted on the server side. Certainly e-commerce companies of size have figured out how to do this correctly (amazon.com, ebay, etc.), but since most online storefronts are run by small businesses which are not subject to public company regulations (like Sarbanes-Oxley), this is typically not the case. There is very little dynamic (AI, etc.) fraud detection done by the industry as a whole -- it's left to the credit card processors, since they have all the pattern data of what's happening against a card. In the past, American Express had the most aggressive behavior of declining purchases which didn't fit a user's behavior profile; some VISA processors are now equally aggressive.

The answer, if you want to commit a lot of fraud, is simple: get a big list of credit card numbers with addresses (easy) and cycle through them quickly.

Risk Based National Security?

Sean West (2nd Year GSPP): So the third speaker addressed the topic of measuring risk and vulnerabilities, and I was fascinated by it. The 9/11 Commission Report advocates switching to a risk-and-vulnerabilities-based homeland security system. What this means is that we should not be funding homeland security on a broad state-by-state basis, but rather on the basis of what actually faces a legitimate threat of attack. This eliminates counties 200 miles away from any conceivable target being able to purchase humvees and flak jackets with homeland security money; in short, it reduces inefficiency. I wonder how much a risk-based criterion can be applied to national security (not homeland security). Can we allocate foreign aid based on the risk of terrorist propagation in certain areas? Can we gauge the laws we impose on the private sector, in terms of minimum standards, by a risk-based criterion -- applying them only to feasible targets? Can we apply this to financial transaction monitoring to better gauge who the high-risk clients are?

Avi Springer: I thought a somewhat troubling aspect of the risk assessment projects Brian Lopez described is that they seem to be exclusively client-sponsored. That is to say, the team assesses risks to organizations and/or specific locations only once someone, be that a private company or a government agency like DHS, has requested that they do so. From a national homeland security perspective, it would seem desirable for Mr. Lopez and his team, as an objective and experienced group, to actively seek out risk assessment projects using their own “CAW” and “CVT” methodology rather than waiting for patrons to come along with specific projects in mind. Maybe they do this, but I don’t think it was mentioned. I suppose DHS probably has some sort of methodology they use to figure out what projects to ask Mr. Lopez’s team to do, but as we’ve seen in recent months, DHS doesn’t always do the greatest job in setting priorities. At any rate, I guess my more general point is that risk assessment methodology shouldn’t just be applied within projects, but also in figuring out which projects to undertake in the first place, in order to add the most value national-security-wise.

--Ianrich 21:22, 9 November 2005 (PST) Ian Richardson: Risk-based national security is a very interesting approach, and I think it has applications beyond the allocation of resources to geographical areas. Since the national counterterrorist focus is on deterring and preventing September 11th-style attacks by organizations like Islamic Jihad, their modus operandi is especially relevant to our counterterrorist strategy. Al Qaeda's major attacks have not been particularly sophisticated. They tend to exploit unperceived vulnerabilities (knives on planes, driving a tiny inflatable toward a US warship, a rental truck with a bomb in it, etc.) to attack relatively unprotected targets.

This is by no means a systematic analysis of Al Qaeda's operation, but I think it casts doubt on the value of spending billions of dollars protecting targets that are relatively unlikely to be the subject of an attack by any of these organizations that are, after all, the reason we have vast amounts of homeland security funds in the first place. After having observed all of the lectures on cybersecurity, the standard of "trained monkey" used by computer security experts to characterize the level of expertise needed to implement many potentially serious attacks against our critical infrastructure seems to me to be a standard that many of us "policy" students at Berkeley would fail to meet. I don't have much confidence that the disillusioned and relatively undereducated middle-eastern youth that are most likely to attempt attacks on the United States or our critical infrastructure would meet that standard either. Likewise the focus on weapons of mass destruction seems to be missing the point.

Why do homeland security and cyberterrorism efforts focus on the deterrence and prevention of catastrophic attacks when our likely attackers tend to use relatively low-tech, low-sophistication methods of attack? Where are the experts on the non-proliferation of shoulder-mounted surface-to-air missiles, or the protection and defense of what Brian Lopez termed "FUD-hubs", like schools and sports stadiums, from conventional explosives? Perhaps WMD attacks and catastrophic hacks of vital infrastructure (our power grid, for instance) are more glamorous, or maybe it is in fact easier for us to imagine a defense to a complex, multi-step, yet relatively unlikely, attack. I'd argue that we prefer to defend against such attacks because there are more layers of defense to apply to such threats, more opportunities for success. It is considerably more difficult to bring our vast technological and economic superiority to bear on a low-technology, easily implemented attack like those initiated by Chechen separatists against Russian "FUD-hubs" in the last few years, which resulted in high casualty rates and required little in the way of resources to execute.

New WEP products still shipping

--Dennis Galvin 06:27, 8 November 2005 (PST): Interesting to note that in spite of the fact that WEP (Wired Equivalent Privacy, Wikipedia article) is demonstrably broken, relatively new consumer products are still being shipped with the technology. WPA (Wi-Fi Protected Access, Wikipedia article) is clearly what should be shipping.
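
For a sense of why "demonstrably broken" is fair: WEP derives each packet's RC4 key from a 24-bit IV, so IVs -- and therefore keystream -- repeat quickly. A back-of-the-envelope birthday calculation (my own illustration, not from the lecture):

    /* Probability of at least one 24-bit IV collision (and hence RC4
       keystream reuse) among n WEP frames, using the standard birthday
       approximation 1 - exp(-n(n-1)/(2*2^24)).  Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double space = 16777216.0;   /* 2^24 possible IVs */
        for (int n = 1000; n <= 16000; n += 1000) {
            double p = 1.0 - exp(-(double)n * (n - 1) / (2.0 * space));
            printf("%5d frames: collision probability %.2f\n", n, p);
        }
        return 0;
    }

The odds of keystream reuse pass 50% after only about five thousand frames -- something a busy access point sends in seconds.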

Kodak's EasyShare-One camera (USA Today article, Glenn Fleishman article) ships with only WEP. What are some implications of this? In order to utilize the wireless features she bought the camera for, the consumer must downgrade her wireless access point or router to WEP from the more secure WPA. Definitely not nice. OK, to be fair, Kodak will produce a firmware upgrade to provide WPA, but I doubt many customers will upgrade their firmware and re-enable WPA on their access point/router. By releasing a WEP enabled product, Kodak has just lengthened the time WEP must be supported or will be present. Kodak is clearly a large corporation (though struggling with viability, and a change in principal market technology) and could certainly have afforded the wait. Is this simply a case of "rush to market, fix the problems later" thinking?

Another example: Roku's Soundbridge product supports only WEP ([4] Roku web site). It is interesting to note from a cnet review of the Soundbridge 500 product: "It turns out that Roku's WEP implementation only works with a key index 1." [5] In Roku's case, they have irresponsibly released a product which further weakens the already provably broken WEP technology. I recall there was a promise to provide a firmware upgrade, but can't locate a web ref. The Soundbridge M1000 and M2000 products (more recently released than the 500) are also still only WEP-capable. Fleishman's article, referenced above, says Roku's CEO gave some lame excuse about firmware lagging.

WPA technology has been widely available since June 2004. WEP has been known to be weak since 2001 or so. Surely in 2005 companies should not be releasing products which cannot be used with the more secure WPA. Would full vulnerability disclosure with published exploit code have prevented this? In light of the relatively irresponsible actions of at least some corporations, is the only responsible disclosure full disclosure?

Marty Lyons, UW -- There is a demand-pull problem at work here: most consumers don't know that over-the-air encryption is available, and certainly haven't been educated as to why it's important. So there is still a learning curve to get the customer to understand what this stuff is, and having Yet More Acronyms for the average home user isn't helping. Ask the average person at Best Buy walking out with his Linksys what WEP is, and they couldn't be bothered. Tell them they should ensure they use WPA, and they'll give you that "what now?" look.

As an industry we've made this stuff FUNDAMENTALLY too difficult to use, so it's not fair to blame the customers, even if the vendor should actually be held to a higher standard, and ship secure networking devices (see the "Why Johnny Can't Encrypt" Usenix paper from 5 October). It was in Windows 2000 that Microsoft shipped the first decent sign-on security, and it's just now (five years!) that corporate customers are taking advantage of it. And you only need ONE insecure Windows 98 box on your network to make it all for naught.

The problem with single-factor authentication (passwords/anything you can just type) is going to get to a point -- soon -- where it's impossible to guarantee security from device to device. It's nice that WPA encryption is available, but there are going to be people relying on WEP for a long, long time, and all that data will remain at risk. The only realistic way to get people to understand security is to move to a two-factor scheme (something you have, something you know) as with SecurID tokens [6], or smart VPN clients for every device.

Chris Fleizach - It was odd to hear that Joshua Lackey, who broke WEP on his own, still used WEP in his own house because he didn't want to buy new hardware. He was happy that people in his neighborhood used at least WEP. I think the problem is that most consumers just aren't that concerned with privacy or security. There have been studies showing 70% of employees will give out their passwords for a bar of chocolate. We routinely throw away receipts with our credit card numbers on them. The main protection is that we assume people don't care enough about us to make a concerted effort to attack our flimsy defenses. As Lampson noted in his paper on security (reading for next week), what we need more of is deterrence and criminal apprehension. As in the real world, we don't really secure our houses, but we make it harder for criminals because, we believe, they will most likely be caught. Thus, there is a strong deterrent. Unfortunately, that doesn't exist with computer security yet, but consumers and probably most manufacturers still believe it does.

--Dennis Galvin 12:57, 9 November 2005 (PST) I think Joshua said that of the more than 30 wi-fi routers / access points he could see, just a couple were using WPA; the majority were unsecured and the rest were WEP. From my house I can see 12 wi-fi access points; of those twelve, only mine uses any encryption at all (WPA). One of the unsecured access points belongs to a self-employed IT consultant working out of his home. I wonder what his customers would think.... Or maybe he has just set up a honeypot and is trolling.... Joshua made it sound like there was nothing in his home network worth going after.

I just took a peek at the credit card receipts in my wallet; all had only either the last 4 digits or an authorization printed on them. But your point is well taken, because so many of us are more than willing to pitch incredibly useful (to identity thieves) information (old tax returns, bill statements, cancelled checks, etc.) into the recycle bin or trash.

Marty Lyons's point about demand-pull is spot on. The manufacturers have convinced us we need to have wifi-enabled technological trifles. If only they could swing the demand-pull phenomenon around to the security side of wifi. A hard sell, but similar to deadbolt locks on houses 15 years ago ... when I sold a house in 1990, the real estate agent thought the deadbolt locks on both exterior doors might scare off buyers ... when I bought my current house 2 years ago, the insurance agent gave me a discount and a sign-up bonus for having them.

--Dan Liebling Not valuing one's home network is a good motivator for inaction in upgrading, the ease of the upgrade notwithstanding. The perception is likely that hackers are looking to "steal" something -- and what's the point of stealing my family photos? I don't think the realization of malicious destruction has really set in with the consumer mind. How many people have had a home network intrusion -- or even know someone who has?

As to ease of installation, I just changed to WPA on my home wireless network. For both laptops I had to download drivers, including ones from Cisco, which took 30 minutes due to the registration required and a whole lot of other crap.

Privacy of employees leaving the company

--Dan Liebling One topic I haven't seen mentioned here was sort of an offhand remark by Lackey about companies monitoring ex-employees who might have an axe to grind against their former employer. Does anyone know whether their company explicitly does this? It seems like it would probably be contracted out to a third party as well. The privacy issues seem innumerable.

--Gorchard 09:11, 10 November 2005 (PST) I know that Microsoft conducts exit interviews (is that common?), and probably one of the reasons they do this is to get a feeling for the employee's state of mind upon leaving the company. Whether they do any monitoring or not, I don't know, but it probably helps them to know up front if an employee on the way out has particularly strong negative feelings towards Microsoft and might be wanting revenge in some way...

--Dennis Galvin 09:31, 10 November 2005 (PST) Phil Venables mentioned in his talk the "garden period" ... not quite the same thing, but a cooling off period. It sounded as though monitoring might occur there.

--Brian McGuire I suspect that key employees might be loosely monitored if there is a noncompete agreement, but I think the exit interviews are more likely used to collect information about how a company can reduce turnover, which can be costly. Odds are that if Microsoft were monitoring ex-employees over security concerns, it would be more widely known.

Lecture 10: Thoughts & Questions

Mr. Lopez, I thought your presentation was extremely helpful and think it would have been nice to have had it much earlier in the semester. Thank you.

Mr. Lopez, with respect to dealing with the concerns of stakeholders, when/how do you cut that segment of the assessment off? It seems like that could go on forever. Any thoughts?

Mr. Lopez, you mentioned "holographic tests," can you discuss those a bit more -- i.e. what are they and what types are there?

Mr. Lopez, I found it rather impressive that the assessment team was able to narrow down the sites of interest in the greater Los Angeles area so much. Given the fact that that was a possibility, do you think that doing the same throughout the country is likely in the near future? Are there other, similar teams doing the same types of assessments elsewhere (i.e. private firms, LANL, etc.)?

Mr. Lopez, what do you think about the current tendency for lawmakers to focus on sites in middle America that are much less likely to be targeted than sites on the coasts -- knowing what you know, is that appropriate or do you find it wasteful and not cost effective?

Mr. Lopez, do you have any career advice for someone wanting to get involved in the field?