If you're like most of us, you probably lost time and productivity recently weeding out dozens, hundreds, or even thousands of worm-generated E-mails, or filtering and dumping the bogus "we detected a virus in your E-mail" auto-reply messages that the worm triggered as a secondary effect (see "The Danger In Auto-Reply Messages").
InformationWeek's Editor in Chief Bob Evans was affected by the worm, too, and that led him to focus several articles on hacking and security, including a column called "Secure Computing Must Move To The Front." If you haven't yet seen that item, please check it out now; it's well worth your time and will help put the rest of this column in context.
You probably share Bob's anger and agree when he writes that acts of cybervandalism "...are illegal, they are dangerous, they are costly and cowardly, and they must be treated as such, which means that the agents behind these acts need to be rooted out, prosecuted to the fullest extent of the law, and punished."
But the cybercriminals aren't the whole story, and Bob goes on to address software publishers: "It is time for ... vendors to accept their own responsibility to toss out flawed development strategies, to stop viewing patches as upgrades, to cease with the evasive language that attempts to ascribe blame everywhere but on themselves. It is time for Microsoft in particular to step up to its promise of 'trustworthy computing' so boldly proclaimed many months ago by Bill Gates himself..."
Indeed, Microsoft is at the heart of our current online security woes; there are real and systemic problems with Microsoft's software development process. For example, consider buffer overruns, which can be exploited to stuff hostile code into a PC. It's easy for buffer-overrun vulnerabilities to happen--they're one of the most common types of programming error. But buffer-overrun problems have affected Microsoft software time and again across the years and across multiple Microsoft product lines.
If these buffer-overrun issues were isolated cases, that would be one thing. But the sheer number and persistence of this kind of problem in Microsoft software suggest to me that there's a fundamental blind spot in Microsoft's corporate programming practices, and a glaring hole in its quality-assurance strategy.
Obviously, these buffer problems can be found and patched after the fact--buffer-overrun patches make up a huge percentage of Windows Update items. Why can't they be found and fixed beforehand? How hard would it be for the world's largest desktop-software company to establish internal programming standards ensuring that all input buffers in all Microsoft products are protected by security routines? Or to check that whatever data enters the software is of the type, length, and format the software expects and can handle?
Microsoft might answer, "It's very hard," but many security issues in Microsoft software are discovered after the fact by small companies and one-person shops. Somehow, these small outfits manage to do what Microsoft itself cannot--or rather, will not--do. Surely, a flaw that can be discovered by a lone programmer working in his basement ought to be discoverable by the world's largest desktop-software company.
So clearly, there are very real problems with the way Microsoft builds and tests software, and no amount of white papers, PR spin, or windy speeches will change that. What it will take is for Microsoft to be far more aggressive in reviewing existing code and far more rigorous in testing new code. I don't know if Microsoft is up to the challenge. It could and should be; it surely has the resources. But a long--and I mean long!--history of the same programming errors showing up again and again in Microsoft software suggests that something in Microsoft's corporate structure is preventing positive change.