Human Error Results in Being 0wn3d

Bill Brenner's article in the July 2005 Information Security magazine clued me in to a press release by the Computing Technology Industry Association (CompTIA). They announced the results of their third annual CompTIA Study on IT Security and the Workforce. From the press release:

"Human error, either alone or in combination with a technical malfunction, was blamed for four out of every five IT security breaches (79.3 percent), the study found. That figure is not statistically different from last year."

This study and the 2004 edition appear to be the source for other reports that claim 80% of security breaches are the result of human error. Note the CompTIA study says "Human error, either alone or in combination with a technical malfunction," is to blame.

Nevertheless, I am not surprised by this figure. I rarely perform an incident response for an organization that is beaten by a zero day exploit in the hands of an uber 31337 h@x0r. In most cases someone poorly configures a server, or doesn't patch it, or makes an honest mistake. The fact is, many IT systems are complicated, and none are getting simpler. Administrators have too many responsibilities and too few resources. They are often directed by managers who have decided that "business realities" outweigh security. The enterprise is not running a defensible network architecture, and its level of network awareness is marginal or nonexistent. No network security or host integrity monitoring is done.

In such an environment, it is easy to see how a lapse in a firewall rule, a misapplied patch, or an incorrect application setting can provide the foothold a worm or attacker needs.

So what is my answer? No set of preventive measures will ever stop all intrusions. I recommend applying as much protection as your resources allow, and then monitoring everywhere you can. Even if your monitoring doesn't help you identify a policy failure or intrusion as it happens, it will at least provide the evidence needed to remediate the problem, and then to better prevent or detect similar incidents in the future.
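
To make that concrete, here is one minimal sketch of the host integrity side of that monitoring: hash a set of files, record a baseline, and report whatever changes on the next run. This is illustration only, not a product recommendation; it assumes Python and its standard library, and the watched directories and baseline filename are placeholders you would adjust per host.

# baseline_check.py -- minimal host integrity check (illustrative sketch only)
# Hashes every file under WATCH_DIRS; the first run writes a baseline, later
# runs report anything added, removed, or modified since the baseline was taken.
import hashlib
import json
import os
import sys

WATCH_DIRS = ["/etc", "/usr/local/bin"]   # placeholder paths; adjust per host
BASELINE = "baseline.json"                # placeholder filename

def snapshot():
    """Return a {path: sha256} map for every readable file under WATCH_DIRS."""
    hashes = {}
    for top in WATCH_DIRS:
        for root, _, files in os.walk(top):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, "rb") as f:
                        hashes[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    pass  # unreadable or vanished file; skip it
    return hashes

if __name__ == "__main__":
    current = snapshot()
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(current, f)
        print("Baseline written:", len(current), "files")
        sys.exit(0)
    with open(BASELINE) as f:
        baseline = json.load(f)
    for path in sorted(set(baseline) | set(current)):
        if baseline.get(path) != current.get(path):
            print("CHANGED, ADDED, OR REMOVED:", path)

Real deployments obviously need more than this (scheduling, protected baselines, exclusions), but even something this crude beats having no host integrity monitoring at all.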

Update: I found this Infonetics Research press release that stated the following:

"In Infonetics Research’s latest study, The Costs of Enterprise Downtime, North American Vertical Markets 2005, 230 companies with more than 1,000 employees each from five vertical markets—finance, healthcare, transportation/logistics, manufacturing, and retail—were surveyed about their network downtime...

'The finance and manufacturing verticals are bleeding the most, with the average financial institution experiencing 1,180 hours of downtime per year, costing them 16% of their annual revenue, or $222 million, and manufacturers are losing an average of 9% of their annual revenue,' said Jeff Wilson, principal analyst of Infonetics Research and author of the study...

Human error is the cause of at least a fifth of the downtime costs for all five verticals, and almost a third for financial institutions; this can only be fixed by adding and improving IT processes...

Security downtime is not a major issue anywhere, though it reaches 8% of costs within financial organizations."

Comments

Anonymous said…
Richard,

I rarely perform an incident response for an organization that is beaten by a zero day exploit in the hands of an uber 31337 h@x0r.

Go figure. With all the talk floating around the 'Net about things that could happen, it's nice to see someone posting what actually does happen. With too few resources, sometimes system admins will blame issues on rootkits and uberhaxors, simply b/c they don't have the time or skills to determine what the root cause really is.

Administrators have too many responsibilities and too few resources.

Amen to that!

...it will at least provide the evidence needed...

It's too bad that remediation and prevention don't seem to be the "business reality" that many IT managers are pushing. More often than not, it seems that the IT manager is asking why the box isn't back in production, so sysadmins are left to reinstall the system from clean media and backups.

I have to say that I'm not sure if throwing protections in place and monitoring everything is really the way to go. For example, throwing A/V and/or IPS products on systems, and then monitoring them, may sound like a good step to take, but what happens when all of that new code is introduced to a production system? How much more difficult is troubleshooting going to be for a sys admin crew that is already behind the power curve, now that new software is introduced into the mix?

When I was interviewing at a small financial services company, they asked me what my first question would be on my first day, if they offered me the job. I told them that I'd want a network diagram...these folks were small, about 300 users at the time. Well, on my first day, I asked my question...and was told that it was now my job to make one.

My point with all this is that rather than adding complexity by "applying as much protection as your resources will allow", perhaps a better approach would be to take a well-planned, phased approach toward simplicity. Start by taking an inventory, getting a picture of what you have and where it is. From there, map out your services and applications. Yes, I know it's extra work, but there are consulting companies out there that can help.

Ultimately, the extra work will be worth it, b/c then you can put your protections in place, and what you see when you monitor will be meaningful.
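
Just to illustrate the kind of first pass I mean (a rough sketch, not a recommendation): a few lines of Python can sweep a subnet for common listening services and give you raw material for that inventory and diagram. The subnet and port list here are placeholders.

# service_sweep.py -- rough first-pass service inventory (illustrative sketch)
# Tries a handful of common TCP ports on each host in a /24 and prints hits.
# Serial and slow on purpose -- simplicity over speed for a first inventory.
import socket

SUBNET = "192.168.1."                                # placeholder subnet
PORTS = [22, 25, 53, 80, 135, 139, 443, 445, 3389]   # common services

def open_ports(host, ports, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    found = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        if s.connect_ex((host, port)) == 0:
            found.append(port)
        s.close()
    return found

if __name__ == "__main__":
    for last_octet in range(1, 255):
        host = SUBNET + str(last_octet)
        ports = open_ports(host, PORTS)
        if ports:
            print(host, "listening on", ports)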

H. Carvey
"Windows Forensics and Incident Recovery"
http://www.windows-ir.com
http://windowsir.blogspot.com

Hi Harlan,

I've seen your thread on anti-virus on Web servers over on your blog. I agree with you about that. I should have been more specific, perhaps saying "appropriate protection." Defense-in-depth is more of the idea I had, and less running three personal firewalls because three "must be" better than two or one!
Anonymous said…
Harlan -

I am happy to see your comments here, as I am giving a talk on "Security Triage" at the Nebraska Cert Con about this very topic.

Reading your thoughts here and comparing them to the slides I have prepared, they sync up to a great degree. In the talk, I try to cover steps administrators can take to secure their network when security is just another task on their plate. (Oftentimes, not nearly the most important task from their manager's POV.)

In the talk, one of my main points is that the best thing an administrator can do is get to know their network and systems as intimately as possible. Know and document items like running processes, normal network consumption, open ports, services provided, etc. I have run into many networks where no one has that basic level of knowledge of the systems.

I want to be able to ask someone, "What processes run on server X?" and get a known-good configuration in response. Only with that baseline knowledge will we ever know if everything is "OK".
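
A rough sketch of that kind of check (assuming a Unix-style ps and Python, purely for illustration; the baseline filename is a placeholder):

# process_baseline.py -- compare running processes to a known-good baseline (sketch)
# First run records the process names as the baseline; later runs flag newcomers.
import json
import os
import subprocess

BASELINE = "process_baseline.json"   # placeholder filename

def running_processes():
    """Return the set of process names reported by a Unix-style ps."""
    out = subprocess.run(["ps", "-eo", "comm="], capture_output=True, text=True)
    return {line.strip() for line in out.stdout.splitlines() if line.strip()}

if __name__ == "__main__":
    current = running_processes()
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(sorted(current), f)
        print("Known-good baseline saved:", len(current), "processes")
    else:
        with open(BASELINE) as f:
            baseline = set(json.load(f))
        for name in sorted(current - baseline):
            print("NOT IN BASELINE:", name)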
