To: bosquedog who wrote (17030), 3/1/2001 11:23:32 AM
From: PMS Witch
 
Warning --- Another rant…

I was thinking about your defrag question, and began wondering if writing defrag programs is like building an air compressor into cars with leaky tyres: it’d be a whole lot smarter to use tyres that held air, just as it would be smarter to build systems that didn’t require defragging. But how?

What causes fragmentation? I think systems writing to the first available space on disk explains much of it. Storing a file larger than any contiguous available space means dividing it into pieces matching the sizes of the empty spaces as they are found. With each piece, the writing process is delayed by at least one sector’s latency and, more rarely, by much more time-consuming seeks. But however much, or little, time is wasted writing to a fragmented disk, it’s the reading that really wastes time. Many files, and they are often large, are written once and read many times over. On my system, I’d speculate that well over 90% of my files have never been re-written, but have been read repeatedly. Excel and Word, to name two huge files, were written to disk once, and have been read several times daily for years.
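
To make that concrete, here’s a toy sketch, in Python, of the first-available-space behaviour I’m describing. The free-space map and the file size are invented for illustration, and a real file system allocates in clusters with far more bookkeeping, but the splitting effect is the same:

# Toy model of first-available-space allocation (illustrative only).
# free_extents lists (start, length) of the empty gaps on a disk,
# in the order the system finds them.
free_extents = [(10, 4), (30, 2), (50, 8), (90, 20)]

def write_first_fit(size, extents):
    # Write a file of 'size' clusters into the gaps as they are found.
    # Returns the fragments (start, length) the file ends up split into.
    fragments = []
    remaining = size
    leftover = []
    for start, length in extents:
        if remaining == 0:
            leftover.append((start, length))
            continue
        used = min(length, remaining)
        fragments.append((start, used))
        remaining -= used
        if used < length:                        # gap only partly consumed
            leftover.append((start + used, length - used))
    extents[:] = leftover
    return fragments

# A 12-cluster file fits in no single gap, so it lands in three pieces:
print(write_first_fit(12, free_extents))         # [(10, 4), (30, 2), (50, 6)]

Every one of those extra pieces is another seek each time the file is read back.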

How do we avoid writing fragmented files? Let’s examine the causes. When the system begins writing a file, it has no idea how large that file will be: it just starts, and when the data runs out, it finishes. Today, with systems having RAM many, many times the capacity of even the largest files, there’s no reason a file can’t first be stored in a cache, its size determined, and a suitable vacant area of disk allocated before it is written. On my system, the largest (non-swap) files are 5 meg, and could be cached easily even by machines with minimal RAM. Only when a disk fills to the point where large vacant spaces are unavailable will files be split, and even then, splitting would be minimised by intelligent caching. Today’s systems do not exploit their resources to achieve this potential efficiency. While we wait, is there anything we can do now?
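
Here’s the same toy model reworked to show the cache-first idea. This is only my sketch of the principle, not how Windows (or any real file system) actually allocates: buffer the whole file, learn its size, then look for one gap big enough to hold it, splitting only as a last resort.

# Toy model of size-aware placement: the file's full size is known
# before a location is chosen, so a single contiguous gap can be picked.
free_extents = [(10, 4), (30, 2), (50, 8), (90, 20)]

def write_size_aware(size, extents):
    # Prefer the smallest gap that still fits (best fit), leaving the
    # biggest gaps intact for future large files.
    candidates = [(length, start) for start, length in extents if length >= size]
    if candidates:
        length, start = min(candidates)
        extents.remove((start, length))
        if length > size:
            extents.append((start + size, length - size))
        return [(start, size)]                  # one piece: no fragmentation
    # No single gap is big enough: only now would the file be split,
    # first-fit style, as in the earlier sketch.
    return []

# The same 12-cluster file now lands in the 20-cluster gap in one piece:
print(write_size_aware(12, free_extents))       # [(90, 12)]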

Windows allows us to make a few adjustments that may help. I’ll deal with swap files first. With the increased RAM available on most systems, swap files no longer need to be as large as we’ve grown accustomed to. Still, we don’t want to place unnecessary limits on ourselves, so I’d suggest a generous minimum size, with no maximum. I’ve posted earlier how to determine and set these values, so I won’t repeat it here. Next, I’d set Windows caching to a generous minimum, and again, no maximum. Monitoring my system tells me that I can use considerable RAM for multiple memory-hogging applications, or for file cache, but that I’ve never needed both simultaneously. My testing tells me that file-cache RAM gets reassigned elsewhere automatically when needed, so I needn’t worry about running out.

So, what are my settings?

In SYSTEM.INI, under the headings below, I’ve made these changes…

[386Enh]
ConservativeSwapfileUsage=1
PagingDrive=C:
MinPagingFileSize=57344

[vcache]
MinFileCache=8192
ChunkSize=512

Here’s why…

ConservativeSwapfileUsage=1 tells my system to use RAM like it’s on sale, turning to the swapfile only as a last resort.

PagingDrive=C: tells my system to use the outermost partition on my fastest drive.

MinPagingFileSize=57344 sets my swapfile’s minimum at a little over 50 meg (the exact conversion is worked out after these notes).

MinFileCache=8192 tells my system that I want this much RAM, as a minimum, used for disk caching. It’s in the budget, it’s there, go use it.

ChunkSize=512 tells my system that … Oh shit! I can’t remember what this does or why I chose this value. Sorry! Ed, Mark, Cat, or anyone --- please step in.
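
For anyone checking the arithmetic behind those sizes: both values are given to Windows in kilobytes (as best I recall), so the conversion is just a divide by 1,024. A quick sketch:

# Converting the SYSTEM.INI values above from kilobytes to megabytes.
settings_kb = {"MinPagingFileSize": 57344, "MinFileCache": 8192}
for name, kb in settings_kb.items():
    print(name, "=", kb, "KB =", kb // 1024, "MB")

# MinPagingFileSize = 57344 KB = 56 MB
# MinFileCache = 8192 KB = 8 MB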

Back to defragging…

If one has a defragging program that is quick, convenient, and easy to use, the benefit of using it regularly can outweigh the cost, because the cost is small. If one uses the defragging program supplied with Windows, one may find that defragging once ever, or once a year, is sufficient. That single defrag will put the huge, often-used system files in order, and once in place, they’ll stay that way. Over time, the files that are repeatedly read and written will get fragmented, but they’re less important anyway.

Again, I find myself out of step with conventional practice and recommendations. Each must look at the cases and explanations presented in support of various practices and decide for themselves what they will do. We all have different systems meeting different needs.

Cheers, PW.

P.S. For perspective, my system has 384meg RAM, 6gig disk, and Win98.