Commentary

CWE/SANS Top 25 Programming Errors

A group of security experts made up of vendors, government experts, educators, and individuals has published Mitre's Common Weakness Enumeration, a scheme that identifies common programming problems and offers guidance for avoiding them in the first place. The group hopes the CWE list will be used by colleges to teach secure programming, by vendors to avoid the mistakes, and by customers to demand that these problems are kept out of shipping code.
The effort is a long time coming. A great deal of information security focuses on managing and patching application vulnerabilities after the fact, which is costly, requires specific IT processes to manage patches, and takes time to truly remove the problem from all affected computers. The current state of application development provides fertile ground for miscreants to leverage exploitable vulnerabilities to their own ends.
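To make concrete what kind of weakness the list covers: SQL injection (CWE-89) is one of the Top 25 entries, and the fix is often a one-line change. Here is a minimal, illustrative sketch in Python; the table, column, and variable names are my own, not from the CWE material:

```python
import sqlite3

# Toy database standing in for a real application's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Attacker-controlled input, e.g. from a login form.
user_input = "nobody' OR '1'='1"

# Weak (CWE-89): user input is concatenated into the SQL string,
# so the injected OR clause matches every row in the table.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Fixed: a parameterized query treats the input strictly as data,
# never as SQL syntax.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the injected clause leaks the admin row
print(safe)        # [] -- no user is literally named that string
```

The point of the CWE guidance is exactly this kind of pattern: the vulnerable and safe versions differ by almost nothing in the source, which is why the errors are so easy to make and so common.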

The fundamental problem is that applications are poorly written in the first place. Even with the outcry from security professionals about the need to teach programmers secure coding practices, little has changed in years, with few exceptions. There was a lot of excitement when Bill Gates sent a memo to Microsoft employees stating that security was job one, and the company spent two years building a secure software development life cycle program. But little has trickled down to independent software developers.

Judging by the buzz in the security community about the CWE/SANS Top 25, the effort is a welcome one. Raising awareness is all well and good, but unless there is actual change in how software is written, the list is just a list. One of the stated goals is that companies can use the list as part of a contract with software developers to ensure at least the most egregious errors are purged from code. But there are a whole lot of problems that have to be resolved before this is really an actionable requirement.

First off, the customer has to be able to prove or disprove that the problems exist, and to dictate damages if these errors are the source of an exploitable vulnerability. While the announcement also mentioned test tools that can find the Top 25 errors, those tools aren't exactly something a nonprogrammer can fire up and get much use out of. It looks like a market opportunity for software validation testing consultants.

Clear contract language will have to be developed to address a very complicated issue. For example, what is a software developer's responsibility to deliver an application free from the Top 25 problems? Is the developer responsible only for the code they write, or does that include libraries as well? Extending responsibility to libraries means a developer becomes responsible for potentially millions of lines of code; that is a big risk for the developer, and assessing a library carries a high cost. Does the responsibility extend to services an application may use, or to the OS and hardware if the application is delivered as an appliance? What happens if the developer follows the guidance and can produce a report stating the application is free of the Top 25 problems, but security issues remain undetected? There are a lot of practical issues to resolve.

Contract language favors the side with the most power. If a customer needs the software developer more than the software developer needs the customer, the contract will favor the developer, and vice versa. That doesn't really change the landscape much. To change the balance of power, you, the customer, have to be willing to state which programming errors you are concerned with and what you will accept as proof that those errors are not in the application, and to frame those terms so that they favor you. Otherwise, be prepared to walk away from the contract and find another software developer. SANS has sample contract language that must be adapted for your company's use.

Of course, the other scenario occurs when a developer delivers an application with a security appliance designed to protect the application from attacks, with the appliance sitting between the application and the world. Does that satisfy any requirements for secure coding? If a properly configured security appliance can stop the problems before they reach the application, then for all practical purposes, does the problem exist? Does it matter?

According to Robert Martin, CWE project leader with Mitre, "The goal is to have every piece of software in every context be strong enough to keep doing the job it was written for independent of what or who is attacking it." So no, in the spirit of addressing the CWE/SANS Top 25, security appliances are insufficient. But I know, because I have been told by at least one Web application firewall vendor, that software developers use security appliances like Web application firewalls specifically to address any customer requirements to ensure applications are secure.

I am not advocating using a network security appliance to address software issues. I am pointing out a very real case of how applications are delivered and deployed.

Before I get all excited about this list, I am going to have to see how it all shakes out. Mitre and SANS are doing a great job raising awareness about application security problems and getting those problems addressed in applications, but much work remains, and I am looking forward to seeing what the folks involved come up with.