The Explorer: Resource Leaks, Part Three - InformationWeek
Software // Enterprise Applications
02:48 PM
Fred Langa

The Explorer: Resource Leaks, Part Three

Five Steps That May Plug Your Leaks, Once And For All.

In Part One of this Resource Leaks series, we discussed the how and why of "resource leaks": what they are, the problems they can cause, and how you can determine if your system is suffering from them. To refresh your memory, resource leaks typically involve two special, fixed-size, internal scratchpad areas of Windows memory; their size is unchangeable, and unconnected to how much RAM you have. In poorly coded applications, some of this special memory may be used by a program but not released when the program closes. Over time, more and more of these limited resources may be marked as "in use" even when they're really not. Eventually, there's not enough space left to continue, and you get an "out of memory" error message (even if you have tons of RAM), or a crash.
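The mechanics can be illustrated with a small simulation. This is not actual Windows code: the heap capacity, handle counts, and class names below are invented for illustration. The point is that a fixed-size pool, independent of total RAM, steadily drains when programs allocate handles but never free them all.

```python
# Illustrative simulation of a fixed-size resource heap (hypothetical sizes,
# not real Windows internals): handles "leak" when an app quits without
# freeing everything it allocated.

class ResourceHeap:
    """A fixed-size scratchpad, loosely analogous to Windows' small
    internal resource heaps. Capacity is fixed regardless of system RAM."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.in_use = set()
        self.next_id = 0

    def allocate(self):
        if len(self.in_use) >= self.capacity:
            raise MemoryError("out of resources (even with plenty of RAM)")
        handle = self.next_id
        self.next_id += 1
        self.in_use.add(handle)
        return handle

    def free(self, handle):
        self.in_use.discard(handle)

    @property
    def free_percent(self):
        return 100 * (self.capacity - len(self.in_use)) // self.capacity


heap = ResourceHeap(capacity=64)

# A well-behaved app allocates and frees: resources return to 100%.
handles = [heap.allocate() for _ in range(10)]
for h in handles:
    heap.free(h)
assert heap.free_percent == 100

# A leaky app allocates 8 handles per session but frees only 6, so each
# open/close cycle strands 2 handles and the heap slowly drains.
def run_leaky_app(heap):
    session_handles = [heap.allocate() for _ in range(8)]
    for h in session_handles[:6]:   # "forgets" the last two
        heap.free(h)

sessions = 0
try:
    while True:
        run_leaky_app(heap)
        sessions += 1
except MemoryError:
    pass

print(sessions)  # prints 29: the heap ran dry after a limited number of sessions
```

Note that the failure has nothing to do with how much total memory exists; only restarting the "system" (recreating the heap) recovers the stranded handles, which is exactly why rebooting cures the symptom.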

In Part Two, I detailed the inner workings of a variety of tools and utilities that claim to solve resource leaks. Along with explaining the pros and cons of "opening holes in RAM," "RAM defragmentation," and related issues, Part Two explained why these apps can be worthless or even counterproductive. It did, however, detail one limited and specific use of one particular freeware utility that I feel is worthwhile.

After Part Two appeared, I also covered some ancillary information in my newsletter. Last week, for example, I explained why Windows has memory limits in the first place.

Now it's time to pull it all together: In this column, Part Three, I'll explain a multi-part strategy I've developed that just may let you solve your memory leak problems once and for all -- or, barring that, perhaps reduce their severity to a negligible level.

In my case, I'm able to prevent most memory- and resource-related crashes in the first place, and can skate into the single-digit range of system resources without any trouble at all. And when an app does die for some reason, I can now potentially recover orphaned general memory without rebooting. In my tests, I've gone day after day after day with my resources rock-steady and stable. I very rarely have to reboot due to a software problem or crash. Almost always, the only time I reboot my main PC now is when I choose to create a disk image (a reinstallable, byte-for-byte replica of the hard drive structure and data) for backup purposes; the disk-imaging software requires that Windows be shut down so that it can properly record all the files, including those that are normally in use by Windows.

It's not just my main PC. My test PCs here are likewise rock-stable, and I also have a heavily used Win98-based Internet access server that can go weeks without a reboot. When it does crash, it's usually because someone has tried to hack in from the outside world. (No one's ever gotten in; my firewall commits suicide before allowing the intruders to enter. <g> I reboot the server when that happens.)

The Core Idea
The idea for this five-part plan came to me after talking with other people and seeing how widely experiences with resource leaks vary. While I can run for long periods without resource problems, other folks can run only for periods ranging from a couple of days down to just a few hours. What could possibly account for these huge differences?

After much thought, I believe the answer is in the way all of Windows' various memory subsystems work together. Trouble in one area of Windows' memory subsystems could trigger or exacerbate trouble in other areas. Or, to put it another way, trying to solve a memory/resource problem by focusing on just one or two areas probably isn't enough. On my systems, for example, I've optimized all Windows memory areas and systems -- the swapfile, Vcache, MapCache and so on; and I'm also very careful with what software I run. I'm betting that if you optimize your Windows memory areas and avoid the very worst, leakiest programs, you too can probably get excellent results -- and a much more stable Windows.
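As one concrete example of the kind of tuning involved: a widely used Win9x-era tweak caps Vcache (the disk cache) so it can't balloon and crowd out other memory areas, via entries in system.ini. The setting names below are real Win98 settings, but the values are purely illustrative; the right numbers depend on your RAM and workload.

```ini
; system.ini -- illustrative Win9x memory tweaks (example values only;
; tune them to your own RAM and workload)

[vcache]
MinFileCache=4096    ; smallest disk cache size, in KB
MaxFileCache=32768   ; cap the cache so it can't crowd out program memory

[386Enh]
ConservativeSwapfileUsage=1  ; Win98: prefer RAM before paging to the swapfile
```

The general principle is the one argued above: no single tweak fixes leaks by itself, but keeping each memory subsystem within sane bounds stops trouble in one area from cascading into the others.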

Here's what's involved. It's not hard, but it touches on many areas, so fasten your seatbelts -- we're going to be moving fast!
