Commentary

What's So Bad About Undisclosed Security Fixes?

A recent blog entry by Adrian Kingsley-Hughes raises some concern about Microsoft's documentation of security fixes for Windows Vista Service Pack 1. Microsoft has said that it will make changes to the code that could possibly close security holes, but specifics of those changes won't be documented. This seems like a very reasonable policy, and I can't see how Microsoft could do otherwise.
As I understand Microsoft's process, called the Security Development Lifecycle, the company constantly revisits and re-examines old code as it makes changes and enhancements. During that process, it may discover issues that could potentially lead to some sort of security compromise. If so, it fixes the problem.

Now, the question is whether Microsoft needs to not only include those kinds of fixes in a big update such as Vista SP1, but also go back and issue a corresponding patch and security bulletin for users who don't plan on immediately upgrading to SP1. I would say that depends on a lot of factors, such as whether the problem is easily exploitable, the seriousness of the security breach, and the risk of breaking other things by applying the patch.

Here's a simple example: Suppose that a Microsoft code review of a function detects a bug where a buffer could potentially be overflowed to cause a security issue. Sounds bad, doesn't it? However, let's say that code is called in 10 different places, and a code inspection shows that there doesn't seem to be any situation where the calling code would ever pass data that could overflow the buffer.

In cases like that, certainly you want to fix the function. That way, there will never be a buffer overflow possibility when new code is written to call the function. Lacking any evidence that the problem can be exploited, however, there is no need to issue a patch. Likewise, there is no need to publish the fact that you've found and fixed an unexploitable problem in the code. It would be a waste of time and effort to do so.

To some extent I'll concede this sounds like security by obscurity. Yet as bad a reputation as that term has, there is value in staying quiet about minor changes that don't appear to be directly exploitable. There is no sense in inviting hackers to focus on particular areas of the code just to prove you wrong. These are not actual exploits; they only have the potential to become exploits if someone can figure out a way to use them.
