To: Jon Tara who wrote (17472) 6/30/1999 10:24:00 AM From: Stormweaver
Memory defragmentation alone is not a reason to reboot your box... You said discontiguous pages as need be, when there is a need to create contiguous memory longer than 1 page for an application requesting it. There is nothing to gain through physical defragmentation, because there is no additional overhead associated with the use of discontiguous pages... It is NOT "increasingly difficult for the OS to allocate contiguous chunks for large requests", because it is not necessary to ever do so. DIS-contiguous pages will work just fine. The OS MAKES them contiguous, without having to move a byte, with a bit of help from the hardware. Some OSs have been doing this for, oh, 30 years or so. (IBM)

Sure it will. If the OS has split an allocation across a page boundary, then access to that region may generate a PAGE FAULT which, depending on the state of physical memory, could force the OS to load the page back in from disk (see the residency sketch below). The longer an OS runs with busy, memory-intensive applications (like on a server), the more memory gets fragmented.

As a side-note example: I had a customer running a desktop application under X that was doing all kinds of odd-sized memory allocations. Under SunOS 4.x this worked fine... under Solaris 2.5.1 the application was completely unusable after about 30 minutes; that is how poor the memory management was compared to SunOS. Granted, the application itself was not being heap-friendly, allocating 2-3 byte chunks, but this example alone is enough to remind me what kind of crap goes on in the OS with respect to memory management.

Rebooting is necessary to ensure memory is defragmented for application performance, and to ensure that leaked memory and other resources are released by the OS/application.
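To be fair to Jon's side first, here is a toy sketch of the translation he is describing. The names (translate, the 4-entry page_table) and the numbers are made up for illustration, not any real OS's code; the point is just that the MMU presents physically scattered frames as one contiguous virtual range:

#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Toy page table: virtual page number -> physical frame number.
   The frames are deliberately scattered; to the application the
   virtual range is still perfectly contiguous.                  */
static const unsigned long page_table[] = { 7, 2, 9, 4 };

static unsigned long translate(unsigned long vaddr)
{
    unsigned long vpn    = vaddr / PAGE_SIZE;  /* virtual page number */
    unsigned long offset = vaddr % PAGE_SIZE;  /* offset within page  */
    return page_table[vpn] * PAGE_SIZE + offset;
}

int main(void)
{
    /* Walk across the first page boundary: virtually adjacent
       bytes land in physically discontiguous frames (7, then 2)
       and not a byte was moved to make that happen.             */
    for (unsigned long va = PAGE_SIZE - 2; va <= PAGE_SIZE + 1; va++)
        printf("virtual 0x%05lx -> physical 0x%05lx\n",
               va, translate(va));
    return 0;
}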
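And here is the residency sketch, the flip side that my point rests on: pages are faulted in lazily and independently, so an object straddling a page boundary needs BOTH its pages in core, and either one can be out on disk. This version is Linux-flavored (mincore() and MAP_ANONYMOUS; under Solaris you would mmap /dev/zero instead) and purely illustrative:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    size_t psz = (size_t)sysconf(_SC_PAGESIZE);
    unsigned char vec[2];

    /* Two anonymous pages; the kernel backs them lazily, so
       neither is resident until it is actually touched.     */
    unsigned char *p = mmap(NULL, 2 * psz, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;

    p[0] = 1;                              /* fault in page 0 only */
    if (mincore(p, 2 * psz, vec) == 0)
        printf("after one write: page0=%d page1=%d\n",
               vec[0] & 1, vec[1] & 1);

    /* An object straddling the boundary needs both pages in
       core; this single memset can fault twice.             */
    memset(p + psz - 8, 0, 16);
    if (mincore(p, 2 * psz, vec) == 0)
        printf("after straddle:  page0=%d page1=%d\n",
               vec[0] & 1, vec[1] & 1);

    munmap(p, 2 * psz);
    return 0;
}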
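As for those 2-3 byte chunks: the standard fix, assuming you can change the application (which we could not), is to batch the tiny allocations into big slabs yourself so the system heap never sees them. A minimal bump-pointer arena sketch; the arena_alloc / arena_free_all names are mine, and real pool allocators do much more:

#include <stdlib.h>
#include <stddef.h>

#define SLAB_SIZE (64 * 1024)

/* Each slab is one big block from the system heap; tiny
   allocations are bump-pointer carved out of it.        */
typedef struct slab {
    struct slab  *next;
    size_t        used;
    unsigned char data[SLAB_SIZE];
} slab;

typedef struct { slab *head; } arena;

void *arena_alloc(arena *a, size_t n)
{
    n = (n + 7) & ~(size_t)7;            /* keep 8-byte alignment  */
    if (n > SLAB_SIZE)
        return NULL;                     /* too big for this arena */
    if (a->head == NULL || a->head->used + n > SLAB_SIZE) {
        slab *s = malloc(sizeof *s);     /* one heap call per 64 KB */
        if (s == NULL)
            return NULL;
        s->next = a->head;
        s->used = 0;
        a->head = s;
    }
    void *p = a->head->data + a->head->used;
    a->head->used += n;
    return p;
}

/* Freeing is all-or-nothing: the whole arena goes back at once. */
void arena_free_all(arena *a)
{
    while (a->head) {
        slab *s = a->head;
        a->head = s->next;
        free(s);
    }
}

int main(void)
{
    arena a = { NULL };
    for (int i = 0; i < 100000; i++)     /* the "3-byte chunks" case */
        (void)arena_alloc(&a, 3);
    arena_free_all(&a);
    return 0;
}

The trade-off is that you release a whole arena at once rather than chunk by chunk, which happens to suit short-lived odd-sized allocations like the ones that application was making.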