Sunday, August 12, 2007

Dodging Company Computer Security

Wall Street Journal article on how to dodge the most common security policies in big companies.
There's only one problem with what we're doing: Our employers sometimes don't like it. Partly, they want us to work while we're at work. And partly, they're afraid that what we're doing compromises the company's computer network -- putting the company at risk in a host of ways. So they've asked their information-technology departments to block us from bringing our home to work.

End of story? Not so fast. To find out whether it's possible to get around the IT departments, we asked Web experts for some advice. Specifically, we asked them to find the top 10 secrets our IT departments don't want us to know. How to surf to blocked sites without leaving any traces, for instance, or carry on instant-message chats without having to download software.

But, to keep everybody honest, we also turned to security pros to learn just what chances we take by doing an end run around the IT department.

Most of the things they list are pretty well known, and companies with real security have made it difficult or impossible to use some of them. I got a kick out of some of the blogs ranting against this article. Computerworld has this blog entry on the topic.
I understand why the security people are unhappy with the WSJ for publishing this piece.

But the security people should understand that, on this one, they're dead wrong.

Not a little wrong -- completely, 100% wrong.

And I'm really appalled to think that serious security professionals believe what the WSJ published was a bunch of deep, dark secrets to corporate users.

I have to agree. The people who are going to misuse company resources already know these things and a lot more. The rest of the workers will just live with the limitations and be good. The entry links to a few blogs from some irritated security professionals. Frankly, their complaints are pretty lame overall.

Here's one that is pretty fanatical, and quite pathetic.
I anticipate Ms. Vara being vilified by mainstream InfoSec professionals for this article, and well she should. Teaching users how to “Search For Work Documents At Home” or “How To Store Work Files Online” is a stupid thing to do, no doubt. But the IRM community should explain to Ms. Vara that she is not a professional risk analyst, does not have a clue as to what the most probable Threat Community Actions, Attack Vectors, or consequences of a Loss Event are, her “How To Stay Safe” suggestions are impotent, and as such - she would do very well to shut her piehole.
Rhetoric like this is a touch childish. Reasoned discourse it is not. I also haven't seen any discussion of how to mitigate these risks. Instead of name calling, doing something constructive would have been much more interesting and instructive. Well, it is a blog, though one disguised in professional trappings.


9 comments:

David said...

You hit it right - childish.

alex said...

Any mitigation suggestions offered by the InfoSec community would be wholly inadequate because risk must be measured from the perspective of those who are the data owners. Both complexity of analysis and that perspective are why, IMHO, the author would have been better off not offering any probable loss event actions at all, and would have done well to find another subject to write about.

Nylarthotep said...

I disagree. The infosec community likely has many viable solutions that are of value to IT departments in attempting mitigation of risks. They don't have to do anything, but options suggested at least provide possibilities they could consider.

Also, any suggestions would not be perfect, since they would need to be tailored to the specific need. The overall benefit would be at least a dialogue to broaden security where none may exist, and to aid those who don't have a good understanding of the risk.

Rob said...

This story misses the point, again.

Frank Hayes' article has been soundly ripped apart in the comments already, but when he says "Not a little wrong -- completely, 100% wrong.

And I'm really appalled to think that serious security professionals believe what the WSJ published was a bunch of deep, dark secrets to corporate users."

He is completely wrong. 100%.

We don't care about the "secrets". We care about the fact that a message is being printed by ignorant non-professionals in a widely read journal which basically says: "It's ok to dick around with IT to see if you can break it, everyone's doing it and you won't get caught".

Alex knows risk better than anyone else I know. Blogs are not supposed to be anything other than personal opinion. If he wants to say "shut your piehole", it is his absolute right to do so, just as it is your right to get on your soapbox and pontificate.

I think you need to get a sense of humour and understand the bigger picture a little better.

A wise man once said to me: "Before you criticise someone, walk a mile in their shoes."

"That way, when you criticise them, you'll be a mile away and have their shoes."

Oh yes, it might be childish, but I like it a lot. Just like Alex and his blog.

alex said...

"I disagree. The infosec community likely has many viable solutions that are of value to IT departments in attempting mitigation of risks. They don't have to do anything, but options suggested at least provide possibilities they could consider."

Do you mean "risk" or "things that will positively impact the general population's ability to resist the force of a threat agent"? To me, these are very different things.

I'm usually more specific in my terminology, but if we can agree that risk is derived from probability of event and impact of event, then I believe my assertion stands for two key reasons:

1.) Probability of event must be driven by two things. The first is the frequency with which we can expect contact and action on the part of the threat agent. The second is our ability to withstand those actions.

More simply stated: we cannot have an event if threat agents do not come into contact with us, or if they do not act against us.

We also cannot have an event if we are able to resist the actions of the threat agent. If "Our Controls" > "Threat Capability", then probability is also pretty low.

2.) We don't have inputs for the probable impact of events. In considering impact, we're pretty foolish if we don't consider how the various factors that make up impact interrelate with our threat community.

For example, a financial institution is usually mainly concerned with regulatory impacts in any discrete risk scenario. Sometimes it must mind the need to replace stolen money, but for the most part its impact pain comes from the government.

A member of the military-industrial complex, however, might be more interested in the probable impact on competitive advantage. Airbus probably doesn't want someone to be sending plans for their newest airliner via a non-secured channel for a reason - that reason could be Boeing, Tupolev, or any number of international competitors.

Now, for me to sit down and broadly suggest solutions that will lessen (that is, mitigate) the probability of event and impact of event -- without knowing very specific information about the strength of compensating controls, probable threat communities (and their most probable actions, controls, or frequency of contact), and probable impacts within the context of the organization -- feels like it would either be so self-evident (don't do what the WSJ is telling you to do) or so broad as to be of little value to any specific security architect. We may *easily* be over-spending based on probable impact or the frequency of threat events, but until we understand those factors within the context of a specific threat community, we simply don't know.
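The derivation above can be sketched as simple arithmetic, in the spirit of FAIR-style analysis. This is a minimal illustration only: the function names and every number below are my own assumptions, not figures from any real assessment.

```python
# Minimal sketch of the risk derivation above, in the spirit of a FAIR-style
# analysis. All names and numbers are illustrative assumptions, not figures
# from any real assessment.

def loss_event_frequency(contact_freq, p_action, p_controls_fail):
    """Expected loss events per year.

    contact_freq:    how often per year a threat agent comes into contact
    p_action:        probability that contact leads to an attempted action
    p_controls_fail: probability our controls fail to resist the action
                     (near zero when "Our Controls" > "Threat Capability")
    """
    return contact_freq * p_action * p_controls_fail

def annualized_risk(lef, probable_loss):
    """Risk = probability of event x impact of event, annualized."""
    return lef * probable_loss

# Illustrative scenario: frequent contact, but strong controls.
lef = loss_event_frequency(contact_freq=50, p_action=0.2, p_controls_fail=0.01)
print(annualized_risk(lef, probable_loss=100_000))
```

The point of the sketch is the dependency: with no contact, no action, or controls that outmatch threat capability, the frequency term collapses toward zero and the risk goes with it, regardless of how large the probable impact is.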

Now the funny thing is, when I read that article - the first thing that came to mind was that much of what she was suggesting were low-risk actions to the broad majority of users. It's not breaking sacred-cow policy that gets me (if you've read my blog, you would know I have a particular disdain for IRM paternalism). Rather, what upset me at first is the fact that the WSJ assumes they can identify the risk tolerance/probability of event/impact of event for their aggregate readership.

This is just downright silly. My other complaint, of course, is that my friends -- security professionals, CISOs, auditors, etc. -- are the ones who will take the fall in this day and age of regulations and government intervention should an event occur, as unlikely as that event may be. Again, with SOX, GLBA, NERC, etc., telling readers it's "OK" to take these actions is not unlike advocating playing fast and loose with GAAP. I don't like the fact that the world is this way now, but it is.

Nylarthotep said...

Well, let's see.

First, I've been an InfoSec professional for about 8 years, walking the walk all that time. So any banter about walking in someone else's shoes is, frankly, a crock.

Weigh it all as you will, but publishing these dodges doesn't make a company that has properly prepared any more at risk than if it hadn't been published.

By these standards, those who advocate openness about system flaws that have been found should be ostracized.

And the WSJ article doesn't say to try and break IT, it merely points out ways to bypass controls. It also states that there are punitive risks associated with breaking company policy. Individuals need to weigh those risks themselves.

Risk mitigation is always on the mind of those securing a company's assets. If you haven't already studied and understood the risks shown in the WSJ article, maybe you should re-perform your risk/threat assessment. Frankly, I am of the opinion that these dodges are for the most part already addressed by the vast majority of companies, which have weighed the overall threat and chosen what they wanted to do. They aren't new; they just aren't widely publicized.

Rob said...

I give up. You can't educate the ignorant.

Read it again, get a sense of humour and grow up.

I'm not coming back here.

Nylarthotep said...

"You can't educate the ignorant."

Thanks Rob, you've proven your own point.

As for growing up, maybe you should take that lesson yourself since you have yet to provide any reasoned or intelligent discussion. Just lots of whining and name calling.

I don't fully agree with Alex, but at least his comments are intelligent and make an argument in reasoned context.

Not coming back. BFD.

Nylarthotep said...

Alex,

I completely agree that analyzing the full extent of each company's threat/risk vectors is a requirement, and that a cost/benefit analysis of any mitigation must also be performed. This specific context is where analysis goes once an attack vector is understood and the mitigation possibilities have been addressed.

I don't believe that this should negate discussion of possible methods of mitigation. Even if it's just a "what if" exercise, the exercise in itself will provide information, if not action, that would be helpful. Each security practitioner can then take what is discussed and analyze it within their own business context.

I don't believe that any discussed solution would be too broad to be useful, especially considering that some of the described dodges are fairly specific.

I can understand the anxiety of having to deal with a bunch of people learning how to dodge security rather than just comply. I don't believe, though, that the present laws would bring you any further into a fault scenario if you have already made decisions on how to mitigate those threats. I don't like taking on more risk myself, but I re-perform risk/threat analysis regularly enough to ensure that broader risk is addressed in a timely manner.