To: rudedog who wrote (27695) 2/14/2000 1:23:00 AM From: QwikSand
**Extremely OT Technical Junk -- Skip This Post If You're Not Rudedog**

*NT does a number of things in background including checks of memory and reallocation of disk storage to reduce fragmentation.*

That's interesting; it contradicts my experience wrt the file system. Although I think NTFS is a good file system because it seems quite resistant to corruption (more so than most of the Unix file systems), it seems even more resistant to defragmentation. Since Microsoft doesn't make the architecture of NTFS public (claiming security considerations LOL), I don't know how they allocate space. All I know is that, judging by the defragmenter displays I've seen, NTFS seems prone to bad fragmentation and is nearly impossible to completely defragment/consolidate.

In fact, past a certain cluster size, which I believe is 4k, none of the commercial third-party defraggers -- the best of which is Diskeeper 5 by Executive Software -- will even try. They just say: cluster size > 4k, exit. (There's a quick way to check where your own volume stands below.)

But then let's say you need an NTFS volume to keep up with and record a 1394 video data stream. Try it with a 4k NTFS volume... it won't keep up... too much head movement updating the indices and bit maps and logs and whatnot. Your best shot is to use a 64k cluster (format command below). That keeps up with fast input, but after a little use the file system is totally fragmented and no software will fix it. Your only option is to dump, reformat, and reload, period (as far as I can tell).

This has always seemed strange to me... logically, wouldn't it get easier to defragment a file system as the cluster size grows, as long as you have memory to buffer n+1 clusters? I'm hoping this gets fixed in the new NTFS in W2K.

*Next generation products from Compaq will allow hot swap of memory and processor without taking the system down.*

I'll bet you Solaris is a lot closer to being able to support this than W2K.

--QS
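
P.S. For anyone curious where their own volume falls relative to that 4k cutoff, here's a minimal Win32 sketch of my own (not anything from Diskeeper, and the 4k comparison is just the limit as I understand it, not a documented constant). It computes the cluster size from GetDiskFreeSpace and flags anything the defraggers will refuse to handle:

```c
/* Minimal sketch: report a volume's cluster size and flag the ~4k
 * ceiling that the NT defrag hooks (and hence the commercial
 * defraggers) appear to be limited to. Build with a Win32 toolchain. */
#include <windows.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    const char *root = (argc > 1) ? argv[1] : "C:\\";   /* e.g. "D:\\" */
    DWORD sectorsPerCluster = 0, bytesPerSector = 0;
    DWORD freeClusters = 0, totalClusters = 0;
    DWORD clusterSize;

    if (!GetDiskFreeSpaceA(root, &sectorsPerCluster, &bytesPerSector,
                           &freeClusters, &totalClusters)) {
        fprintf(stderr, "GetDiskFreeSpace failed, error %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }

    clusterSize = sectorsPerCluster * bytesPerSector;
    printf("%s cluster size: %lu bytes\n", root, (unsigned long)clusterSize);
    if (clusterSize > 4096)
        printf("Cluster size > 4k -- the defraggers won't touch this volume.\n");
    return 0;
}
```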
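P.P.S. When I say "use a 64k cluster" I just mean reformatting the volume with the stock format command's allocation-unit switch, something along the lines of `format d: /fs:ntfs /a:64k` (drive letter is hypothetical; check that your format version accepts /A:64K before trusting my memory of the switch). That gets you a volume that keeps up with the 1394 stream, with the fragmentation tradeoff I described above.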