Opinion: Companies Failing To Protect Against Insider Threat From Software Developers

Companies are failing to protect themselves from threats that might be planted in custom code.

Mario Morejon, Contributor

July 20, 2006

There's lots of talk in the industry about how companies can guard against outside IT and network threats. But what can they do to shield themselves from the possibility of in-house developers, consultants, testers and other tech-savvy "insiders" entering malicious code or gaining access to live data?

Well, that's a weighty problem facing many businesses today. And it exists not because of a lack of tools, but because of a lack of foresight from the IT industry in general.

In recent years, the market has been saturated with many kinds of intrusion-prevention tools to address various application vulnerabilities. But no programming methodology exists today that prevents the most trusted users from getting away with wrongdoing.

While regulatory standards such as Sarbanes-Oxley and HIPAA, along with guidelines from the Open Web Application Security Project (OWASP), address some security measures through an audit process, they fail to impose enterprisewide abstraction layers that would prevent key IT users with application and/or database access from gaining direct access to critical financial data.
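To make that notion concrete, here is one minimal sketch of what such an abstraction layer could look like. The table names, approved-query list and logging scheme are illustrative assumptions on my part, not anything mandated by the standards. The idea is that application code receives a gateway object instead of a raw database connection, so every read of financial data is whitelisted and logged:

    # Hypothetical data-access gateway: developers get this object instead of
    # a raw connection, so ad-hoc SQL has no path to the financial tables.
    import logging
    import sqlite3

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("finance-gateway")

    # Only queries registered here can ever run (names and SQL are invented).
    APPROVED_QUERIES = {
        "invoice_total": "SELECT SUM(amount) FROM invoices WHERE customer_id = ?",
        "invoice_count": "SELECT COUNT(*) FROM invoices WHERE customer_id = ?",
    }

    class FinanceGateway:
        """Mediates all access to financial tables; no raw connection escapes."""

        def __init__(self, db_path, user):
            self._conn = sqlite3.connect(db_path)
            self._user = user

        def run(self, query_name, *params):
            sql = APPROVED_QUERIES.get(query_name)
            if sql is None:
                log.warning("DENIED: %s requested unapproved query %r",
                            self._user, query_name)
                raise PermissionError("query not in approved list")
            log.info("AUDIT: %s ran %s with params %s",
                     self._user, query_name, params)
            return self._conn.execute(sql, params).fetchall()

Under that design, even the most trusted developer has to route requests through an audited query list; direct access to the data simply has no code path.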

Instead, standards mainly focus on enforcing confidentiality. They provide general guidelines for companies to concentrate on access control issues, application configurations and code vulnerabilities.

Code reviews, usually performed manually by program team leaders and project managers, don't take into account systemwide malicious code that can be introduced by in-house developers. And to put it bluntly, code written at the unit level is free to do whatever programmers want it to do.
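To put a face on the problem, consider a hypothetical example of the kind of unit-level backdoor a manual review can skim right past. The account name and trigger below are invented for illustration; the point is how ordinary the surrounding code looks:

    import hashlib

    def authenticate(username, password, user_db):
        """Looks like a routine credential check, and mostly is."""
        # Backdoor: a hardcoded "maintenance" account with no entry in
        # user_db and no audit trail. Two lines, easy to skim past.
        if username == "sys_maint" and password.endswith("#2006"):
            return True
        record = user_db.get(username)
        if record is None:
            return False
        return record == hashlib.sha256(password.encode()).hexdigest()

A unit test that checks only legitimate accounts would pass this function without complaint.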

For instance, there currently are no step-by-step code search procedures and guidelines to help managers identify backdoors. These vulnerabilities are found only when theft becomes too obvious or by chance during auditors' code reviews.

Simple social engineering techniques used by in-house programmers are the dirty secret in the application security space, and everyone seems to be sleeping on it. Even the strictest security policies that can be implemented today don't address this issue directly.

The problem is rooted in the lack of a substantive connection between application design and code. Even if code is thoroughly reviewed--during test phases or when applications are placed in maintenance cycles--there are no methodologies to help managers identify flaws.

This security vacuum is an opportunity for solution providers looking to improve their application security offerings. Since there are no concrete testing methodologies that can prevent nefarious code from being introduced into production systems, solution providers can offer expert reviews during application testing. They also can build simple parsing tools and spider search techniques to look for telltale signs of wrongdoing.
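Such a tool doesn't have to be elaborate to be a useful first pass. The sketch below is one hypothetical starting point; the pattern list and file extensions are my own assumptions, not a vetted rule set. It walks a source tree and flags constructs that often accompany backdoors:

    # Hypothetical source scanner: walk a tree, flag suspicious patterns.
    import os
    import re
    import sys

    SUSPECT_PATTERNS = [
        (re.compile(r"password\s*==\s*['\"]"), "hardcoded password comparison"),
        (re.compile(r"\beval\s*\(|\bexec\s*\("), "dynamic code execution"),
        (re.compile(r"backdoor|magic_user|override_auth", re.I), "suspicious identifier"),
    ]

    def scan(root, extensions=(".py", ".java", ".jsp")):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if not name.endswith(extensions):
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as handle:
                    for lineno, line in enumerate(handle, 1):
                        for pattern, reason in SUSPECT_PATTERNS:
                            if pattern.search(line):
                                print(f"{path}:{lineno}: {reason}: {line.strip()}")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else ".")

Run against a project directory, it prints one line per hit with file, line number and reason. A scanner like this won't catch a clever insider, but it raises the cost of the laziest tricks and gives reviewers a place to start.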

That, at least, could be an initial step in helping many businesses tackle a largely overlooked security problem.

MARIO MOREJON is a technical editor for the CRN Test Center.
