Research: 2010/2011 CSI Survey

Jun 06, 2011


2010/2011 Computer Crime and Security Survey

With this document, the CSI Survey reaches its fifteenth year. Both the aims and the format of the survey continue to evolve. As you’ll see in the findings that follow, many of the results our respondents reported could easily have been predicted from the results of the past several years. Answers about tools and methodology have shown an almost surprising stability throughout the survey’s history, and this year is no exception.

What is different, broadly speaking, is that there is considerably more context within which these results may be interpreted. A number of very good reports of various kinds are now available on the Web. All of those we’re aware of, with the exception of this one, are provided either by vendors or by analyst firms. That’s not to say there’s anything wrong with these sources; they offer a tremendous amount of useful information. But independent research seems fundamental, and we believe the survey provides it.

Beginning last year, there were two important changes to this survey. The first was that a “Comprehensive” edition was offered, one of its key objectives being to take other report findings into account so that a proper context could be established. The second was that the questionnaire added questions intended to determine not only which security technologies respondents used, but also how satisfied they were with those technologies. This year, we continue both with the more comprehensive report document and with the questions regarding satisfaction with security technologies.

As was the case last year, respondents did not seem to feel that their challenges stemmed from a lack of investment in their security programs or from dissatisfaction with their security tools, but rather that, despite all their efforts, they still could not be certain what was really going on in their environments, nor whether those efforts were truly effective. This lack of visibility into the severity of threats, and into the degree to which they are effectively mitigated, is a perennial problem in security, and it complicates matters for anyone trying to make sense of the state of information security. If respondents are unsure about what is happening on their networks, one could well argue, how can they possibly provide meaningful information on a survey questionnaire?

We would argue that, for typical security incidents, enterprise security departments have relatively reliable and accurate powers of observation. They generally know when one strain or another of a virus is making its way through their end-user population’s computers. They know when money goes missing from key bank accounts. And even if their perceptions on some points aren’t entirely accurate, a gauge of what security practitioners perceive is useful in its own right. The respondents’ concern about visibility into their networks has more to do with stealthier forms of data exfiltration and with newer, more complex attacks. Along with the respondents, we see plenty to worry about in this regard, and we discuss it at more than one point in this report.

Finally, although most of the survey questions yield numbers and figures detailing the types and severity of respondents’ security incidents and the particular components of their security programs, some of the most enlightening findings came from the open-ended questions about respondents’ hopes and fears.