Big Ideas


  • Risk as expected value 

How secure should your organization be? How secure do you keep yourself?

Today, your personal safety would have been greatly enhanced by...staying home in bed all day. When you go out, you're exposing yourself to all kinds of risk - you could get hit by a bus, stabbed by a mugger, killed by lightning, etc. Obviously the negative value of those consequences, IF they occurred, would have been colossal, yet you went out and about your business anyway. How do you account for that? Expected value.

Expected value considers the probability of an outcome actually occurring and discounts the outcome's value by that proportion. So while you might assign an astronomical negative value like -$1,000,000 to getting killed by lightning, the likelihood of it actually happening is astronomically low (about 1 in 2,320,000), and the expected negative value, the product of the two, is only around -43 cents.

Meanwhile, the expected positive value of going out today was much higher - if you went to work you had an excellent chance of earning some money, for example, and the expected positive value (e.g., $100 * 99% = $99) far outweighed the expected negative value, the risk, of being killed by lightning and the other nasty things that could have happened. Being a rational individual (?), you opted to go out and go to work despite the risk - the rewards were worth it.
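To make the arithmetic concrete, here's a minimal sketch in Python, using the illustrative figures above (not real actuarial data):

```python
# A minimal sketch of the expected-value reasoning above.
# The dollar figures and probabilities are the illustrative ones
# from the text, not real actuarial data.

def expected_value(outcome_value, probability):
    """Discount an outcome's value by the probability it occurs."""
    return outcome_value * probability

# Risk: killed by lightning (colossal negative value, tiny probability)
lightning_risk = expected_value(-1_000_000, 1 / 2_320_000)  # about -$0.43

# Reward: a day's pay, which you are very likely to collect
work_reward = expected_value(100, 0.99)                     # $99.00

print(f"Expected cost of lightning: ${lightning_risk:.2f}")
print(f"Expected gain from working: ${work_reward:.2f}")

# Rational choice: go out if the expected rewards outweigh the expected risks
if work_reward + lightning_risk > 0:
    print("Go to work.")
```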

Organizations need to make the same kinds of risk tradeoffs to balance risk and reward rationally.

  • Balance/Tradeoffs

As in most of life, infoSec in organizations is full of tradeoffs that need to be balanced thoughtfully. Getting some gain always costs you something – hopefully the benefit is greater than the cost so there’s a net benefit from the deal.

But sometimes it’s not obvious what that cost is, especially because we often don’t recognize opportunity cost – the cost of the opportunities we missed by making the choice we made. But we should. If we’re going to make wise choices, we need to be cognizant of all the costs, as well as all the benefits, when we make a decision.
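As a quick illustration (all figures invented), folding opportunity cost into the tally can shrink an apparently large net benefit:

```python
# Hedged sketch of weighing a decision with opportunity cost included.
# All figures are made up for illustration.

benefit_of_choice = 500   # what we gain from the option we picked
direct_cost       = 200   # what we pay out of pocket
best_alternative  = 250   # net value of the best option we gave up
                          # (the opportunity cost we often forget)

net_benefit = benefit_of_choice - direct_cost - best_alternative
print(f"True net benefit: ${net_benefit}")  # $50, not the $300 it looks like
```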

A couple of big tradeoff decisions underlie many infoSec-related choices.

First, there is the balance between security and accessibility. We can make something really, really secure by making it highly inaccessible. For example, if we had a disk drive full of secret information, we could make it super safe by sealing it in a steel case and burying it but that would make it hard to access for whoever did need to use it. On the flip side, making the information highly accessible by putting it on the Internet, for example, where everyone can reach it from anywhere in the world, also makes it hard to secure. The trick is to find the right balance between the two extremes – a balance that will be different for different kinds of information and different organizations/situations. By considering the costs and benefits carefully, we can strike that balance wisely.

In organizations, another tradeoff exists in the tension between user freedom and collective security – “the greater good.” InfoSec professionals are often viewed as getting in the way of staff who want to be left alone to get their work done efficiently. Staff often resist any restrictions on how they do things, e.g., blocking executable email attachments, even though those restrictions are designed to enhance security, because restrictions interfere with their work and they often underestimate the risks of how they prefer to operate.

Indeed, their attitude may be that they should be left alone to do as they please – they are willing to accept the risk. But in an organization, when one workstation is compromised, for example, it gives the hacker an opening (he has infiltrated the defensive “fortress walls” and is now operating freely within the “soft,” vulnerable interior). Thus, the maverick who refuses to comply with policy is actually endangering others and the organization itself, not just himself. The risks are “externalized” – spread out among the others – so the maverick’s own share of that risk is small enough to be outweighed by his own perceived gain.
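A back-of-the-envelope sketch (hypothetical numbers) shows why the maverick's private calculation diverges from the organization's:

```python
# Illustrative sketch: why the maverick's risk calculus looks different
# from the organization's. All numbers are hypothetical.

expected_breach_cost = 100_000  # expected org-wide loss from a compromise
staff_count          = 100      # the loss is spread across the organization
personal_gain        = 2_000    # what the maverick gains by skirting policy

mavericks_share = expected_breach_cost / staff_count  # "externalized" slice

print(f"Maverick's share of the risk: ${mavericks_share:,.0f}")
print(f"Maverick's perceived gain:    ${personal_gain:,.0f}")
print(f"Risk borne by everyone else:  "
      f"${expected_breach_cost - mavericks_share:,.0f}")
# Gain ($2,000) beats his share of the risk ($1,000), so he skirts policy,
# while the organization as a whole comes out far behind.
```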

The best antidote to this problem is education – making users aware of the need to consider the collective good and the magnitude of the risk involved so that they are motivated to comply with policy. Generally, this works better than trying to force compliance, since people are resourceful and find clever ways to circumvent controls when they try. At the same time, we need to do what we can to let people work efficiently, and sometimes we can apply technology or just good process design to make compliance easier and less cumbersome.


  • Weakest Link, Layers, Defense in Depth

The concept of a fortress - a defensive perimeter that protects the “good guys” by keeping the “bad guys” out - dates back pretty far. There are lots of ancient castles in Europe, for example, and the Great Wall of China, etc. The reason there are lots of examples is that the practice worked pretty well for a long time, so it’s only natural that we seized upon it in the information world when…well, when we started opening up access through networking and the “bad guys” out there realized there was valuable “booty” to be had.

Those old castle designers learned (probably the hard way…ouch!) some really useful best practices. For example, the village couldn’t be entirely self-contained – you needed at least one portal so the good guys could go get stuff that wasn’t available inside and bring it back. But portals were a pain – they were “soft spots,” easier to breach than going over the walls, so you wanted few of these – probably just one per castle. Of course, the easiest route in was to convince the gatekeeper to open the door (social engineering), and there were some crafty “bad guys” out there, so you needed really smart gatekeeping, something that was really hard to come by back then. (Think “Trojan Horse.”) Another reason to keep portals to a minimum.

Higher walls were obviously a good thing, but relying on height alone was unwise since really determined bad guys could be (are) really resourceful. Hmm…how about adding another obstacle – a water hazard, or “moat,” encircling the whole place. We call this layering, or “defense in depth.” As long as the bridge was up, the bad guys would have to slog through the muddy water, probably taking off their armor so they could swim, and that would be a great time to rain down arrows, rocks, etc. on them. Now the walls were there just to stop the few, if any, who could make it past the moat. Pretty daunting for your average bad guy, who might just take one look and move on to the next castle – and we try to do the same thing in infoSec. Layers, layers, layers.
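The arithmetic behind layering is simple: assuming the layers fail independently (a simplification), an attacker has to get past ALL of them, so breach probabilities multiply. A sketch with made-up numbers:

```python
# Sketch of why layers multiply: assuming the layers fail independently
# (a simplifying assumption), an attacker must breach ALL of them.
# All probabilities are hypothetical.

moat_breach_prob  = 0.20  # chance of getting past the moat
wall_breach_prob  = 0.10  # chance of scaling the walls
guard_breach_prob = 0.30  # chance of slipping past the guards

# Probability of breaching every layer in series
overall = moat_breach_prob * wall_breach_prob * guard_breach_prob
print(f"Chance of a full breach: {overall:.3%}")  # 0.600%, vs. 10-30%
                                                  # for any single layer
```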

Of course, the weakest link problem carries over from castle days too, unfortunately. You could have really high, thick walls with spike-y things all along the top, but if your door hinges were rusty, a standard-issue battering ram could make your walls irrelevant in short order. (See the Monty comic strip on the Illustrations page of this blog site.)
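The weakest link is the mirror image of layering: parallel entry points don't multiply, because the attacker only needs ONE way in and simply picks the easiest. A sketch, again with hypothetical numbers:

```python
# Sketch of the weakest-link effect: an attacker needs only ONE way in,
# so the perimeter is only as strong as its easiest entry point.
# Breach probabilities are hypothetical.

entry_points = {
    "high walls":   0.05,
    "main gate":    0.10,
    "rusty hinges": 0.90,  # the weakest link dominates everything else
}

weakest = max(entry_points, key=entry_points.get)
print(f"Attacker's best route: {weakest} "
      f"({entry_points[weakest]:.0%} chance of success)")
```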

Unfortunately, we live in an age where we want to access data easily, all the time, from anywhere, and this shoots a lot of holes in the old fortress/perimeter approach. This white paper by Dan Geer, entitled “The Shrinking Perimeter: Making the Case for Data-Level Risk Management,” explains how the “fortress” concept is “crumbling” over time as the “walls” necessarily become more and more porous. Too many portals of all stripes! Projecting out, we have to think in terms of building little fortresses around the individual chunks of information instead. Unfortunately, that’s harder to do, but that’s the challenge we’re facing in a world of increasing accessibility demands.

  • Hacker ROI

Adopting the economic perspective championed by Dan Geer and especially Ross Anderson at the University of Cambridge in England (see the Keep Learning page of this blog site for infoSec heroes to follow), we need to think in terms of the hacker’s ROI when planning our defenses. This is where the trend toward profit-motivated hacking actually works to our advantage. We know our best defenses can always be defeated if enough skill and determination are applied, but if the time/effort required outweighs the potential benefit/reward, then the attack becomes unprofitable.
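In code form, that profitability test is just an expected-value comparison from the attacker's side. A minimal sketch, with invented figures:

```python
# Sketch of the hacker-ROI test: an attack is only worth mounting if the
# expected payoff exceeds the attacker's cost. Figures are hypothetical.

def attack_is_profitable(payoff, success_prob, attack_cost):
    """Expected reward vs. cost of the attempt."""
    return payoff * success_prob > attack_cost

# Good defenses don't make the attack impossible; they raise attack_cost
# and lower success_prob until the expected reward no longer covers it.
print(attack_is_profitable(payoff=50_000, success_prob=0.10,
                           attack_cost=20_000))  # False: not worth it
print(attack_is_profitable(payoff=50_000, success_prob=0.10,
                           attack_cost=2_000))   # True: easy money
```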

Unfortunately, the things we love about technology, like the ability to move information around the world at lightning speed, provide leverage to hackers too – they can squeeze a lot of money out of a file full of credit card numbers in short order once they get it. So the rewards can be quite high – maybe higher than any effort threshold we can raise against them. But with defense in depth (see above), we can at least be quite discouraging to hackers who get through one line of defense, only to find another in their way.

And there is also the adage about the two guys running away from the bear in the woods – the first guy says “I don’t think we can outrun him” and the second guy says “I only have to outrun you.” If a hacker finds the ROI at our organization to be worse than at our peers, he’ll bypass us for them, at least for now.
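The bear-race logic in miniature, sketched with invented numbers: a rational attacker ranks targets by ROI and goes after the best deal first, so we don't have to be impenetrable, just a worse deal than the neighbors.

```python
# Sketch of the "outrun you" principle: a rational attacker ranks targets
# by ROI and hits the best one first. All numbers are invented.

targets = {
    "us":     {"payoff": 50_000, "attack_cost": 25_000},
    "peer A": {"payoff": 50_000, "attack_cost": 10_000},
    "peer B": {"payoff": 40_000, "attack_cost": 12_000},
}

def roi(t):
    """Net gain per unit of attack effort."""
    return (t["payoff"] - t["attack_cost"]) / t["attack_cost"]

best = max(targets, key=lambda name: roi(targets[name]))
print(f"Attacker's first choice: {best}")  # peer A: we only needed to be
                                           # a worse deal, not unbreakable
```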