Talk:Lecture 12

Liability in Honeypots

Chris Fleizach - Here's my idea for preventing liability issues in honeypots. You want your infected computer to communicate with the master (and issue IRC commands, for example) so it appears to be an active bot, but you don't want it to participate in infections or DDoS attacks. So when data comes in, it's allowed to infect the computer. Any further data the infected machine produces gets sent to a buddy computer and to a gateway. The gateway holds up the data. The buddy computer runs network services in sandboxes so it can examine what happens to the system. If surreptitious files are created or modified, sockets are opened, or memory changes in unpredictable ways, then we assume the data coming from the infected bot is malicious, and we tell the gateway to discard that output. If nothing happens (i.e., it was just an IRC command), the buddy computer notifies the gateway and allows the data to proceed. The same idea applies to a DDoS, which should be even easier to identify, since a dramatic increase in bandwidth can be noticed quickly. One issue this doesn't immediately address: what if the infected bot sends out exploits that only affect other OS versions (e.g., the exploit targets Win 2000 or WinXP SP2 and the buddy is running SP1)? The buddy computer won't get infected, so the gateway will let the malicious traffic proceed, possibly causing problems. The immediate solution is to multiplex your buddy across various OS versions, which shouldn't be that difficult with virtual machine software (VMware, Xen), but could get complicated if you need to model various patch states.
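
(To make the gateway's hold-and-verify step concrete, here is a minimal Python sketch. The class and function names, the "suspicious markers," and the verdict logic are all hypothetical placeholders for what a real buddy sandbox would report; this is not part of any existing honeypot toolkit.)

# Sketch of the "hold up outbound data until a buddy sandbox clears it" idea.
# All names here (Gateway, buddy_verdict, etc.) are hypothetical, for illustration only.

from collections import deque

def buddy_verdict(payload: bytes) -> str:
    """Stand-in for the buddy sandbox: replay the payload against a sandboxed
    service and report what happened. Here we just fake the check."""
    suspicious_markers = (b"exec", b"\x90\x90\x90\x90")  # e.g. shell commands, NOP sleds
    if any(m in payload for m in suspicious_markers):
        return "malicious"   # files created, sockets opened, memory changed, ...
    return "benign"          # nothing unexpected happened (e.g. an IRC command)

class Gateway:
    """Holds outbound traffic from the infected honeypot until the buddy clears it."""

    def __init__(self):
        self.pending = deque()
        self.released = []   # what we actually let out onto the network
        self.dropped = []    # what we silently discard

    def submit(self, payload: bytes):
        self.pending.append(payload)

    def process(self):
        while self.pending:
            payload = self.pending.popleft()
            if buddy_verdict(payload) == "benign":
                self.released.append(payload)   # forward to the real destination
            else:
                self.dropped.append(payload)    # never leaves the honeypot

if __name__ == "__main__":
    gw = Gateway()
    gw.submit(b"JOIN #botnet-channel")           # looks like a normal IRC command
    gw.submit(b"\x90\x90\x90\x90 exec payload")  # looks like an exploit attempt
    gw.process()
    print("released:", gw.released)
    print("dropped:", gw.dropped)
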

Daryl Sterling Jr - What if a bot were created that "phoned home," downloaded the latest version of Adaware, and ran it on the "infected" computer, then deleted itself and slowly distributed itself to other machines? And it ONLY did cleaning when the machine was idle and only spread itself when the network had low usage? Also, to get around legalities, before it installed itself it ASKED the user to click "Yes" or "OK"... because we all know how well that works for spyware, so why wouldn't it work for good stuff?

Ted Zuvich - Chris, I guess the issue is that you don't know whether the bot is phoning home to get an update (which you want to allow) or whether it's carrying out an attack of some sort. It might get pretty tricky trying to tell the difference.

Santtu 20:57, 28 November 2005 (PST) - It's definitely a tempting idea, but the number of software configurations you'd have to be able to deal with could be huge. You might be able to check only the most common configurations, but that would leave you open to liability for the others. Just maintaining the required number of VMs would be costly. I did recently read somewhere (possibly in one of the readings) about minimizing the resources required by a VM based on the fact that VM images are largely identical. This would allow more VMs on a single physical machine, but you'd probably still need a whole bunch of physical machines to cover the interesting configurations. I also think it is easy to fall into the trap of designing something like this to work only against current bot behavior. I suspect bots would quickly adapt to these detection techniques and change their behavior, possibly to something similar to what Daryl suggested.

Lecture 12 Comments & Questions

Mr. Varian, in the news recently there has been some discussion of anti-piracy software that Sony included on some of its newer CDs and DVDs that automatically, and unbeknownst to the owner of the PC, installed itself when the CDs/DVDs were placed into the PC's drive. It turns out the software is riddled with vulnerabilities that can be, and have been, used to add malware or turn PCs into zombies -- and the software is extremely difficult -- impossible for most common users -- to remove once it has installed itself. Do you have any thoughts on the legal implications for Sony here? Have similar cases been brought, and if so, what have courts determined -- is there a common law standard developing? The series of cases that are being or will likely be brought as a result of the Sony debacle seem to provide a great place for the courts to come in and place liability -- be it least cost avoider or due care. Might you have any thoughts on the matter?

Mr. Varian, it seems to me that the call for a private representative organization or regulatory body to step in and set cyber security standards is a pretty good idea -- which agencies do you envision doing this? Is there one or more with the statutory authority to do so, or would Congress have to pass new legislation granting such authority?

Mr. Varian, it also seems, at this point, that cyber crime isn't affecting most consumers/citizens -- we hear about it, but with the exception of those in the field or those that are hit with something particularly damaging, I don't think it comes across to people as pressing. I think many people, so long as being a zombie doesn't disrupt their use of their computer, don't see the connection enough to care -- do you think that is really what is responsible for this issue not being addressed by regulators or private industry? After all, it is going to impose some sort of extra cost on those participants, industry/regulators. Maybe the critical mass of public understanding just isn't there yet. A bit of education and PR would then seem to be in order, some of it directed at Joe Public.

Mr. Varian, do you think there is any place for introducing strict liability or a product liability type regime into the cyber security world? Should manufacturers of software have to prove they have used the best technology possible to avoid liability?

Mr. Varian, just out of curiosity, what imposes the greater overall societal cost, apart from distribution: the American or the UK model with respect to ATMs?

Mr. Varian, are there any statistics on how many individuals/companies are purchasing cyber attack insurance?

Professor Savage, if 99% of viruses are zoo viruses just put out to prove a point, does that demonstrate that the criminal penalties should be greatly increased? If that's the case and the problem is largely coming from an intellectual exercise, it seems like deterrence here would be fairly easy to impose effectively. Any thoughts?

From a real cynical point of view, it seems almost like you have two groups within an already very small segment of society who understand enough to attempt to do either good or bad within the cyber world -- like the two are just battling each other, and that battle actually provides money, jobs, etc. to the narrow segment -- ensures employment. Should that factor into how we go about fighting the problem or dealing with cyber issues?

Regarding liability concerns brought about by tracking worms, it seems to me that if it were clear that the setup was designed to further security and was diagnostic, regulators would be willing to issue some sort of waiver of liability -- is that where we are heading, or is that not even on the radar yet? I just have trouble believing that liability concerns are really that much of an issue here -- from an equity, legislative, regulation, and public good standpoint it seems clear that liability concerns shouldn't stand in the way. Is somebody working on appropriate legislation or regulation here -- specific interest groups and/or politicians?

Professor Paxson, can you provide me with a list of countries/states that either do not allow the recording or monitoring of any personal info (I think those are going to be mostly European, right?) and those that don't care at all what is monitored/recorded (I am guessing China and ...)? Thanks.

Ted Zuvich - With respect to the ATM liability question: I've got some friends that live in the UK, and it seems like every single one of them has at one point or another had an ATM or VISA problem. The universal response from the UK banks was "too bad, so sad." Since the bank can quite easily absorb a $20 loss, but I can't, I would think that the US model is better for society. Otherwise (as happens in the UK), the bank has absolutely no incentive to do anything about ATM fraud or error.

Insurance, Liability, and Federal Assistance

abc As part of last night's lectures, there was discussion of how insurance companies could potentially provide coverage for damage caused by cyber attacks. One observation I have is that the vast majority of cyber attacks are more widespread than an attack on a single host. Worms that take out the Internet, clog internal networks, and disrupt a company's computing resources are big, messy events. So, essentially, unless coverage is scoped to discrete, single-host attacks, it would seem that every big worm with a broad impact on the Internet could make insurance companies go bankrupt. Slammer, Code Red, and others are all like mini-Hurricane Katrinas. Given this assertion, what burden should be placed on the federal government (in the US) to step in and help either individual companies (or insurance companies, if some sort of scheme could be worked out effectively) when these Big Ones hit? In many cases, the damage caused to these companies comes from outside forces (perhaps similar to an "Act of God") and may not be preventable at all. Does the government have a responsibility to step in and help? Should the government also invest in providing infrastructure, response, and capacity measures to help cope with these threats? Just some thoughts...

Chris Fleizach - I think you're going to find it would be difficult to classify a devastating worm as a "natural disaster," not only because it's man-made, but because it's something that, with due diligence, you can prepare for and mitigate. There are anti-DDoS products and ways to filter even the largest attacks if you're prepared to spend the time and money to do so. Similarly, corporate machines can all have firewalls, all have a limited ability to send data back out, and all be included in some sort of bandwidth shaping to avoid becoming part of the problem. Network Access Control can force computers on your network to be secure to a degree. The only issue is that it takes skill and money to do these things.

Another interesting point relates to our speakers having said that our computers exist in a hostile world and that we're just waiting for the next terrible thing to happen. But it hasn't really happened. In fact, there haven't been any devastating worms that have taken down the Internet since SQL Slammer almost 3 years ago. The reason, it seems, is that there's little money to be made in just taking down the Internet. Instead, more money can be made taking down one high-value victim. Or instead of using an exploit to wreak havoc, you would use the exploit to install bots on as many computers as possible. The introduction of organized criminality into hacking has removed a lot of the prankish nature that previous "generations" were familiar with. This new age is more insidious in a way, but the criminals have a vested interest in keeping the network up and running.

abc Anti-DDoS products and other filters are only as good as the paths between your company and your business partners. Wouldn't a larger, generally destructive worm (even if your particular site isn't taken out) still grind things to a halt if you can't submit financial information to your bank, purchase supplies from your suppliers, or arrange shipping for your products? It's sort of like surviving a nuclear attack and being the last person left on earth. More to the point, the general impact on business in those cases is not trivial, and not everyone can afford DDoS shields, etc. (just like not everyone can afford flood insurance). While it's not really an Act of God (far from it!), in many ways large-scale worm attacks behave in similar ways: you can only plan and prepare so much, and they're designed to go after everything in their path.

Leonarde I'm not sure that the damage done during a large-scale Internet attack can be compared to the damage done by Hurricane Katrina. Not all businesses would be affected equally, nor would the damage be on such a long-term, catastrophic scale. There isn't the same homogeneity among servers as there is among desktops. Despite what Microsoft or Red Hat may want, not all systems are running IIS with SQL Server or Apache with MySQL. This means some server-specific exploits would completely pass some companies by (like SQL Slammer). Even if all systems were equally disrupted, the material loss from the disruption can vary greatly depending on the nature of the online business. For example, Amazon would suffer much worse than Barnes and Noble even though they are considered direct competitors: Amazon's presence is completely on the Internet, whereas most of Barnes and Noble's revenue is generated through brick-and-mortar stores. Also, after an attack systems may need patching, but the hardware infrastructure goes undamaged. Companies usually can get back into business once the software is rehabilitated and data is recovered. I'm not trying to say that insurance isn't needed. On the contrary, cyber attacks can be devastating, especially when they damage the reputation of the company and drive customers to competitors. Insurance would also be necessary if the attack were based on fraudulent transactions that diverted finances directly from the company. I'm just trying to say that large-scale attacks generally won't have the same impact and scale as a catastrophic natural disaster, such that the federal government would need to step in and bail the insurance industry out.

David Coleman I cannot disagree strongly enough with the concept of assigning liability using either of the models presented in the lecture when dealing with intentional criminal acts. They seem very valid for accidents (hence the notation containing the capital A in the formulas) or other types of random situations. However, when dealing with knowingly criminal activity, I believe that the perpetrator of the crime should be held completely liable. Any assignment of liability to other parties simply lessens the culpability of the criminal. The response to this might be something like, "No, the party who commits the crime is still completely culpable legally." However, given that the jury system often works more on emotion than on logic and facts, the situation is ripe for the excuse of, "Well, they could have stopped me, but they didn't. It's their fault."

Dr. Varian didn't address the issue of the cost factor in assigning liability for this type of activity to the most convenient party. If ISPs are now liable for the traffic that goes over their wires, they need to invest in monitoring equipment and liability insurance. This raises the cost to the end user. I agree that it would eventually reduce activity, but at the cost of both privacy and dollars.

The example of ATMs / bank computer systems isn't particularly relevant here. It's an interesting example of how the assignment of liability dramatically alters the behavior of the parties (correctly in the US, in my opinion). I think it was a valid and useful assignment of liability because it assigned the liability to the parties directly involved. An analogous example would have been to assign liability to the phone company because the bank information was traveling over the phone lines from time to time. That clearly would have been inappropriate and ineffective.

Assignment of liability is a useful tool but should not be applied in the case of criminal activity.

--Hema 13:57, 20 November 2005 (PST) I also found it disturbing that in the lecture about assigning liabilities, assigning liability to the perpetrator, the criminal, was not even talked about. Unless we have very stringent punishment laws for criminals, this is going to be a very attractive profession. Also, law enforcement should pay as much attention to catching cyber criminals as they would for any other criminal. Computer crime investigators and prosecutors should be given access to the technology required to conduct complex investigations. More effort should be put into developing the law on cross-border jurisdiction, as that is very much in its infancy. Unfortunately, while criminals have adopted advancements in computer technology to further their own illegal activities, our law enforcement hasn't.

Chris has made a very good point in that these days the intruders/cyber criminals have the same incentive as we do to keep the network up and running.

Jack Menzel-- However, when we are talking about insurance, the whole point is that the perpetrator may not be able to bear the entire cost of the damage they cause. If you were guaranteed that the criminal could compensate for their damage, then why buy insurance in the first place? It's not that this absolute liability is unimportant; however, for the context of this particular discussion I'd take it as given that the criminal will be prosecuted and that _criminal_ liability will be properly administered.


Marty Lyons, UW CSE -- There's a problem with insuring against attacks because of the difficulty of getting data to the actuaries. Unlike with natural disasters or human illness, it's hard to quantify the problem. What are the variables, and how would you assign statistics to them? For diseases there is epidemiology, with well-established statistical modeling techniques; with natural events we have data on actual occurrences and know how to interpret forecasts of finite accuracy. I asked Prof. Varian during the class session how the secondary markets would handle this lack of accuracy, particularly the re-insurance companies (companies which insure insurance companies). Maybe my question lacked sufficient detail, but I'm still not sure I understand how this will work.

Having worked in an insurance company (I was a systems programmer, so no insurance jokes!) I watched how the actuaries built up models to determine pricing. The pricing structure is mapped carefully onto the macro models of how the firm is profiting, not just quarter to quarter, but relative to the competition both regionally and nationally. The re-insurance companies extrapolate even further, and ensure that the insurers aren't placing bad bets by insuring those of exceptionally high risk without an appropriate compensation structure.

On the other end of this we have the criminals engaging in the attacks. Law enforcement is overwhelmed by the quantity of attacks while lacking the staffing and funding to take on all but the most high-profile cases. Having spoken to both the FBI and prosecutors, it's well acknowledged how few of these crimes can actually be handled.

In the middle, we have the actual user of a system who just wants to protect his assets. Let's take it as a given that he's going to come under attack. He wants to ensure two things -- a) limit the damages and/or information disclosure, b) limit the financial losses from the attack itself, as well as suits by individuals and shareholders of the business. If there was a chance, the owner would like to add c) find and prosecute the culprit.

Assuming the owner wants insurance against "bad software," this is available today under a general liability policy. Maybe it will be expanded by the actuaries -- how many Windows systems do you run? How many Linux machines? What versions of software? What are the configuration settings? The matrix of possibilities could get untenable for the insurers, so we're more likely to see insurers hire independent auditors who will (as part of the contract) continually attempt penetration testing of the insured. If they break in, insurance rates could change -- perhaps dramatically.

As a systems person, I look at all of the crazy things people are doing to "protect" themselves and can't help but wonder if it would make more sense to have the IEEE or other standards body certify certain system designs? E.g. for a small company, we've certified the following configurations to connect yourself to the net (maybe a hardened router, firewall, proxy server design); for a large company, you must have a three layer defense which includes intrusion detection etc. That could form the foundation of an insurer's statistical model since there would finally be a stable reference point (the control group, if you're a natural scientist). This also helps law enforcement in that with a known model to work backwards from, it both assists in preventing the break-in as well as gathering evidence if something successfully penetrates your perimeter. And since it's a "standard", if any one of these models is defeated, the data applies to everyone, and thus the insurers could recalibrate rates effectively.

This seems the simplest way to allow the three parties -- user, insurer, enforcement -- to cooperate with limited overhead and without the burden of government involvement.

Can insurance companies comprehend cybersecurity issues?

Noor-E-Gagan Singh- According to Encarta: "The basic assumption in actuarial science is that the frequency with which events occurred in the past may be used to predict or measure the probability of their occurring in the future."

As we discussed in our classes, the frequency of cybersecurity breaches has not followed any pattern. It seems to me that today's networked technology world is too complicated for an insurance company to assess properly. The rate of change in a world where every machine can see and talk to billions of networked computers is just too gigantic to model against. Another example in which I feel insurance has failed is the medical malpractice arena. Human physiology has proved too complicated for insurance companies, hence the many controversies around medical malpractice insurance.

Rob Anderson Indeed, imagine that an attack against a bank could result in the draining of billions of dollars in funds. Or it might result in only the theft of a few credit card numbers, which may or may not be useful. An attack against a public utility might deface a website, or it might shut down a power grid for millions of consumers. How do we calculate probabilities of these things? This is a very different actuarial game than estimating human lifetimes, or calculating the risk a block of buildings will suffer a devastating fire.

Avichal 13:29, 21 November 2005 (PST) The Information Technology (IT) industry is still in its nascent years, and since its development cycle has been far more compressed (a few years rather than decades or centuries for other industries), the insurance industry will certainly have to play catch-up. Hopefully, given more time, statistical trends may emerge, allowing the insurance industry to assess threats/risks and associate dollar values with them without a total understanding of the cyberinfrastructure itself. One problem is that there is more of a chance in the IT industry for disruptive technologies to emerge (from the good or the bad guys) which may totally change the playing field.

The problems in the medical malpractice arena have partly been a failure of the legal infrastructure. This also remains relatively uncharted territory as far as the IT industry goes. We are likely to commit a few mistakes along the way (e.g., the DMCA, which is a mistake in my opinion), but hopefully not fatal ones.

Asad Jawahar ID theft is an area where I think the insurance industry has done fairly well, and that is probably closer to the cybersecurity arena. There are already a number of companies offering data/network/hacking insurance. Here is an interesting article from 2002 that describes the types of coverage available and where you may be able to get them: http://www.savetz.com/articles/newarch_datainsurance.php I guess a lot of progress has been made since then.

Sean West (2nd year MPP/GSPP): I am a firm believer that anything can be modeled and that insurance companies could easily model the risk to any business entity of falling victim to a cyber attack, given information about the threats, risks, consequences, and preparation of that company. While none of these predictions would be foolproof, neither are those made for life insurance. Sure, companies can predict the average life span of someone given their health information, but they cannot predict whether that person will die in an accident, etc. But by spreading this risk over thousands, indeed millions, of people, the companies are able to provide a predictable service. The same should go for cyber attacks -- there is no reason they are any different; they can be modeled and the risk can be spread. The real question is how much of a role the government should play in preventing cyber attacks. Given the logic of the social contract, government exists at our pleasure to provide us, chiefly, the security we lacked in the state of nature. Now, the government does what it can to provide security in the normal senses -- policing/homeland security, etc. -- so why not in the cyber sense? Especially when our business and personal information depends more on cyber activity than on physical activity, why wouldn't the government be responsible for preventing break-ins, just as it is with physical buildings?
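
(A back-of-the-envelope Python sketch of the risk-spreading argument. The attack probability, loss figure, and load factor are invented for illustration; real actuarial data would replace them. Note also that this sketch assumes losses are independent across insureds, which is exactly what the big-worm scenario raised above would break.)

# Toy illustration of spreading cyber-attack risk across a pool of insured companies.
# The numbers are made up; the point is only that the expected loss per insured is
# predictable even though any single company's loss is not.

import random

random.seed(1)

N_COMPANIES = 10_000
P_ATTACK = 0.02          # assumed annual probability a given insured suffers a covered attack
MEAN_LOSS = 250_000.0    # assumed average loss per incident, in dollars
LOAD_FACTOR = 1.25       # markup for the insurer's expenses and profit

expected_loss_per_insured = P_ATTACK * MEAN_LOSS
premium = expected_loss_per_insured * LOAD_FACTOR
print(f"fair premium per insured: ${expected_loss_per_insured:,.0f}, charged: ${premium:,.0f}")

# Simulate one year: individual outcomes vary wildly, the pooled average does not
# (as long as one event cannot hit most of the pool at once).
losses = [MEAN_LOSS * random.expovariate(1.0) if random.random() < P_ATTACK else 0.0
          for _ in range(N_COMPANIES)]
print(f"average realized loss per insured: ${sum(losses) / N_COMPANIES:,.0f}")
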

Benefits/Drawbacks of having a viable insurance industry in the computer security arena

Noor-E-Gagan Singh- In a world where people can buy insurance against computer crime, I think we will see insurance companies training law enforcement members on how to prevent computer crime and prosecute cybercriminals, just as insurance companies today train various police departments to find stolen cars. They will also push for cross-country laws to be strengthened for faster response times to cybercrime.

What are the other benefits/drawbacks of having a large cybersecurity insurance market?

Manish Mittal- I guess once insurance companies are in play, makers of anti-spyware and virus protection products will try to get them to endorse their products. Once your product is preferred by insurers, it stands a better chance of doing well in the market.

Intrusion detection

Manish Mittal- Intrusion detection definitely helps in identifying the source of incoming probes or attacks. False positives are definitely a problem, in that it is difficult to know how much can be filtered out without potentially missing an attack. One of the most obvious reasons false alarms occur is that the tools are stateless. To detect an intrusion, simple pattern matching of signatures is often insufficient: if the signature is not carefully designed, there will be lots of matches.
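
(A toy Python sketch of that statelessness point: the stateful matcher below only alerts when a loosely written signature fires on a connection that actually completed a handshake. The event format, signature, and traffic are invented for illustration and are not taken from any real IDS.)

# Toy contrast between stateless and stateful signature matching.
# The events and the signature are invented for illustration.

SIGNATURE = b"/bin/sh"   # pretend this byte string marks an exploit attempt

def stateless_alerts(events):
    """Fires on any packet containing the signature, regardless of context."""
    return [e for e in events if SIGNATURE in e["payload"]]

def stateful_alerts(events):
    """Only fires if the signature shows up on a connection we saw establish,
    which suppresses matches in stray or unsolicited traffic."""
    established = set()
    alerts = []
    for e in events:
        conn = (e["src"], e["dst"])
        if e["kind"] == "handshake_complete":
            established.add(conn)
        elif e["kind"] == "data" and conn in established and SIGNATURE in e["payload"]:
            alerts.append(e)
    return alerts

if __name__ == "__main__":
    events = [
        {"kind": "data", "src": "10.0.0.9", "dst": "web", "payload": b"GET /bin/shark.gif"},
        {"kind": "handshake_complete", "src": "10.0.0.7", "dst": "web", "payload": b""},
        {"kind": "data", "src": "10.0.0.7", "dst": "web", "payload": b"...;/bin/sh -c ..."},
    ]
    print(len(stateless_alerts(events)), "stateless alerts")  # 2 (one is a false alarm)
    print(len(stateful_alerts(events)), "stateful alerts")    # 1
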

Data mining can also help improve intrusion detection by adding a level of focus to anomaly detection. By identifying bounds for valid network activity, data mining will aid in distinguishing attack activity from common everyday traffic on the network.
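
(A minimal sketch of that kind of bounds-based anomaly detection, again with invented numbers: learn a baseline rate from normal traffic, then flag intervals that fall far outside it.)

# Toy anomaly detector: learn bounds for "valid" activity from a baseline window,
# then flag minutes whose connection counts fall outside those bounds.
# The thresholds and data are invented for illustration.

import statistics

baseline = [42, 38, 40, 45, 39, 41, 44, 43, 37, 40]   # connections/minute on a normal day
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
upper = mean + 3 * stdev    # anything above this is not "common everyday traffic"

live = [41, 39, 44, 380, 402, 43]   # a sudden burst, e.g. a scanning worm
for minute, count in enumerate(live):
    if count > upper:
        print(f"minute {minute}: {count} conns/min exceeds bound {upper:.1f} -- possible attack")
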

I have a few questions on the implementation of NIDS. How does a NIDS affect the performance of the system? Under what circumstances should you use offline intrusion detection rather than real-time detection? Does a NIDS use some form of data mining? If not, are there any intrusion detection systems that leverage this? Does anyone make standard signatures or patterns available to help with new IDS implementations? I see some similarity between spam filters and IDS; can they be combined? Is a passive approach more reliable than an active one?

Intrusion Detection vs. Intrusion Prevention

--Chris DuPuis 12:31, 21 November 2005 (PST) In the paper describing Bro, the design clearly indicates that the desire is for a system for doing intrusion detection, rather than doing anything to keep intruders out. The philosophy seems to have been that staff would be alerted to the attempted break-in, and they would be able to respond to the threat before any real damage could be done.

In the presentation, however, it sounded like the system was moving away from just intrusion detection, and into filtering the network traffic to prevent attacks from happening. In the face of fast-propagating worms like Slammer, this seems to be the only possible defense--by the time the staff are notified of the problem, the entire network could be infected. This is very different from the traditional threat model of a human attacker attempting to break into a machine.

In light of this changed threat model, is intrusion detection still a relevant tool?

Marty Lyons, UW CSE -- I think you'll find among people who have actually run large networks a pretty disparate set of experiences and ideas on how to tackle this. The problem is that as you move more data, the amount you have to analyze continues to increase linearly, but the anomalies and actual flagged data start going up faster. And the amount of time it takes to actually analyze them never seems to go down, or if it does, it's at the expense of smart people gaining institutional memory of prior attacks. There have been companies like Micromuse (http://www.micromuse.com) who do data aggregation and analysis, but they tend to be very expensive for small to mid-size enterprises. Routing companies want to move this functionality onto silicon at the network edge (where the available processing horsepower sits), but it's been an uneasy thing to make work in reality.

The sad truth in the production world is there is no easy solution to this problem available today. Fundamentally if we're going to give people whole systems, with a spinning disk, operating system, and all, those devices are going to need to be qualified, hardened, and secured in some way. Right now we're still as an industry mostly shipping junk, so we're reaping the risks of our own lack of engineering discipline.

Brian McGuire Part of the benefit of an intrusion detection system is that it becomes possible to prove that an attack was not the fault of the company, so they are not culpable. The assumption is that the network will never be completely secure and that a significant attack will happen eventually. An intrusion detection system could be used to prove that the company was not at fault or had made a best effort at protecting critical assets. Without an IDS, proving this might be more difficult.

Rob Anderson There are even more basic reasons still. An IDS allows the organization to change its tactics dynamically, so it can do things like disconnect some or all of its ports from the Internet. It can shut down internal systems. It can activate internal honeypots. It can enable active defense technologies that will traceroute the attacker, or at least the zombie nodes he is using.
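
(A rough sketch of how such a dynamic response might be wired up; the severity levels and actions below are placeholders standing in for real control-plane calls, not an actual IDS API.)

# Toy dispatcher mapping IDS alert severity to the kinds of responses mentioned above.
# The actions are just print statements standing in for real network/system controls.

RESPONSES = {
    "low":    ["log alert", "traceroute the attacker / zombie nodes"],
    "medium": ["activate internal honeypots", "shut down targeted internal systems"],
    "high":   ["disconnect affected ports from the Internet"],
}

def respond(alert):
    severity = alert.get("severity", "low")
    for action in RESPONSES[severity]:
        print(f"[{severity}] {action} (src={alert['src']})")

if __name__ == "__main__":
    respond({"severity": "medium", "src": "203.0.113.17"})
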

It’s a little bit more diverse than what you think

Tolba The great degree of homogeneity of the Internet was alluded to by the last speaker's example of the un-mutated species that licks everything icky. It's very true that a big portion of the Internet runs on a few incarnations of the same popular OS. The other factor I like to consider is the spread of high-speed Internet, which in itself is like adding a disastrous component to this formula. But on closer examination we can see a subtle, desirable side effect of the spread of high-speed access. High-speed access tends to come with an abundance of PCs sharing the abundant bandwidth. This is typically done by means of some routing device multiplexing the front-gate connection and protecting the devices behind it (through the default configuration in most cases). Those routers, which are based on several proprietary OSes, add a diversity factor to the mix. This diversity makes it tougher for malware to spread and protects the more homogeneous environment behind it.

Chris Fleizach - You bring up a good point in that all these high-speed, homogeneous PCs are connected to routers. These are usually provided by large corporations offering broadband service: cable companies, phone companies, Yahoo, SBC, etc. These companies are in a great position to filter out hosts that have been compromised and are in turn doing damage. All of them already say they may inspect your packets and block certain ports. A few companies block outgoing port 25, since traffic there usually means you are sending email from a machine that probably doesn't need to. If these companies could be convinced of the need for more proactive filtering, a lot of botnets would lose their steam. If they see SYN flooding or other types of DDoS sequences coming from a host, they can just shut it down and force it to see only a website telling the user their host is compromised. The same goes for spam. There might be a few false positives for people running their own servers, but then again most broadband providers prohibit running servers in the first place.
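
(A rough Python sketch of the kind of per-subscriber filtering described above; the threshold, flow-record format, and quarantine hook are all invented for illustration, not an actual provider system.)

# Count outbound TCP SYNs per residential host per interval and quarantine hosts that
# exceed a threshold (e.g. redirect them to a "your machine is compromised" page),
# and drop direct-to-port-25 mail from home boxes.

from collections import Counter

SYN_THRESHOLD = 200          # outbound SYNs per interval before we call it flooding
SMTP_PORT = 25

def inspect(flows, residential_hosts):
    syn_counts = Counter()
    quarantined, blocked_smtp = set(), []

    for f in flows:   # each flow: {"src": ..., "dport": ..., "flags": ...}
        if f["src"] not in residential_hosts:
            continue
        if f["dport"] == SMTP_PORT:
            blocked_smtp.append(f)           # direct-to-MX mail from a home box: drop it
            continue
        if "SYN" in f["flags"]:
            syn_counts[f["src"]] += 1
            if syn_counts[f["src"]] > SYN_THRESHOLD:
                quarantined.add(f["src"])    # redirect all web traffic to a cleanup page

    return quarantined, blocked_smtp

if __name__ == "__main__":
    flows = [{"src": "10.1.2.3", "dport": 80, "flags": "SYN"}] * 300
    flows.append({"src": "10.1.2.3", "dport": 25, "flags": "SYN"})
    q, b = inspect(flows, residential_hosts={"10.1.2.3"})
    print("quarantined:", q, "| smtp drops:", len(b))
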

--Chris DuPuis 20:15, 22 November 2005 (PST) - One problem with this picture is that most consumer-level routing devices (and even some higher-grade stuff) use some variant of one of the popular OSes that run the machines attached to these routers. My DSL modem/wireless router runs Linux. I know of a number of others that use Linux or one of the BSDs. I would be shocked if Microsoft didn't have at least a plan (and more likely, a complete SDK) for using Windows in networking hardware.

Marty Lyons, UW CSE -- An issue with these network effects is that sometimes you need maximum cooperation to achieve a unified effect. Let's say every backbone provider in America blocks outbound port 25 from home networks... the spam load moves to Canada. All of those providers block 25... the spam load moves to China (well, it already has, but you get the picture). The feedback from customers when you take *away* a service they used to have is never positive, even if it affects only a small and vocal crowd; they tend to be early adopters and/or opinion leaders, and sometimes will move to a competing provider who hasn't removed that service (they leave Comcast for Earthlink, etc.). In an industry offering connectivity, with customer churn a big deal financially, the last thing you'd want is to give customers a reason to leave. So this is part of the reason why someone will always be happy to sell you "unlimited connectivity," somewhere.

My personal belief on this is that as we move towards IPv6, and everything has an IP address (your TV, stereo, toilet), the number of connectivity providers who can make the port-by-port judgement calls on what to block and what to allow will decrease to zero. Users are going to be responsible for the safety and security of devices they connect to the net, and the burden will be on the OS and hardware vendors to ship secure-by-default designs.