Pastimes : Dream Machine (Build your own PC)

To: Clarence Dodge who wrote (7238), 4/25/1999 5:24:00 PM
From: Spots
 
Well, we pretty much concluded that my earlier conclusions
about NT paging files were incorrect, though I've since
verified one assertion, at least in NT's case: if there
aren't enough disk pages to back all the virtual memory
pages, you will see extreme thrashing as virtual memory
usage approaches its limit.
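
To put some arithmetic behind that: the most virtual
memory the system can commit is roughly physical RAM plus
paging file space, and the trouble starts as committed
memory closes in on that limit. A back-of-the-envelope
sketch in Python (the 128 and 256 are made-up figures,
not measurements from any real machine):

    # Hypothetical numbers, for illustration only.
    ram_mb      = 128    # physical memory
    pagefile_mb = 256    # aggregate paging file space

    # Roughly: commit limit = RAM + paging file space.
    commit_limit_mb = ram_mb + pagefile_mb

    def headroom_mb(committed_mb):
        # How close are we to the wall?
        return commit_limit_mb - committed_mb

    # As this shrinks toward zero, page stealing gets
    # frantic and the machine starts to thrash.
    print(headroom_mb(350))   # 34 MB left; expect heavy paging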

Well, NT thrashes like Superman with a flail in a barn
full of wheat. I've seen thrashing, but NEVER to the
extent that NT does when you put a little memory pressure
on it. Sean says Unix uses a similar implementation, which
I can neither verify nor dispute. I guess the reasoning
behind it is that you shouldn't approach the virtual memory
limit or you get what you deserve, but I'll be darned if I
can see how any system that's supposed to be commercially
viable can get away with it.

Push ANY virtual memory
system to the limit and you'll get a lot of thrashing,
to be sure; it's the nature of the beast. But there
has been a tremendous amount of research on how to
degrade gracefully as the limits are approached.
One of these days we must discuss it further here.
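
Just to give the flavor (and this is NOT NT's actual
algorithm, only a toy policy I made up to illustrate the
idea): one graceful-degradation approach is to trim process
working sets progressively harder as free memory dwindles,
rather than doing nothing until the system falls off a
cliff.

    def target_working_set(current_ws_pages, free_frac):
        # Toy policy: no trimming while at least a quarter
        # of memory is free, then trim up to half the
        # working set as the free fraction approaches zero.
        if free_frac > 0.25:
            return current_ws_pages
        pressure = (0.25 - free_frac) / 0.25   # 0.0 .. 1.0
        return int(current_ws_pages * (1.0 - 0.5 * pressure))

    for free in (0.40, 0.20, 0.10, 0.02):
        print(free, target_working_set(1000, free))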

Now, to answer your real question <gg>. I continue to
recommend a paging file, or files, that aggregate to at
least twice the size of real physical memory. There
are some trade-offs in this that I don't completely
understand, owing to NT's specific virtual memory
algorithms. The gist, I think (an inference drawn
from looking at memory allocations rather than any hard
evidence), is that the bigger the virtual memory available,
the more real memory NT gloms onto for disk buffers
and lockable memory.

If this inference is correct, making the page files too
large would cost you some potentially available real
memory before you had to give it up. This is ridiculous to
my mind, and maybe it's wrong (it's only an inference); a
decent paging algorithm should NEVER penalize you for
having a bigger swap space. Nevertheless, my best guess is
that it is so.
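
If you want to check the inference on your own box, the
numbers I was eyeballing are all visible programmatically.
A sketch in Python, using ctypes to call the Win32
GlobalMemoryStatusEx routine (what NT actually does with
the buffers is still guesswork; only the totals printed
below are facts):

    import ctypes
    from ctypes import wintypes

    class MEMORYSTATUSEX(ctypes.Structure):
        _fields_ = [
            ("dwLength", wintypes.DWORD),
            ("dwMemoryLoad", wintypes.DWORD),
            ("ullTotalPhys", ctypes.c_uint64),
            ("ullAvailPhys", ctypes.c_uint64),
            ("ullTotalPageFile", ctypes.c_uint64),
            ("ullAvailPageFile", ctypes.c_uint64),
            ("ullTotalVirtual", ctypes.c_uint64),
            ("ullAvailVirtual", ctypes.c_uint64),
            ("ullAvailExtendedVirtual", ctypes.c_uint64),
        ]

    stat = MEMORYSTATUSEX()
    stat.dwLength = ctypes.sizeof(stat)
    ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(stat))

    mb = 1024 * 1024
    # ullTotalPageFile is the commit limit (RAM + page files).
    print("physical, total/avail MB:",
          stat.ullTotalPhys // mb, stat.ullAvailPhys // mb)
    print("commit, total/avail MB:",
          stat.ullTotalPageFile // mb, stat.ullAvailPageFile // mb)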

So, I continue to recommend a page file about twice the
size of real memory. I would go bigger if I could convince
myself that it didn't hurt me too much. I run about three
times real memory myself, so I guess that's my best
recommendation overall, with the caveats above.
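
In code the rule of thumb is trivial; the factor is the
only judgment call (2 is my floor, 3 is what I actually
run):

    def recommended_pagefile_mb(ram_mb, factor=2):
        # Aggregate paging file space as a multiple of RAM.
        return ram_mb * factor

    print(recommended_pagefile_mb(128))      # 256, the floor
    print(recommended_pagefile_mb(128, 3))   # 384, what I run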

There is absolutely nothing to be gained by allocating
different page files to different instances of an OS.
I would allocate them the same size on each drive in
each instance (sauce for the goose is sauce for the gander).
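
For the curious, NT keeps this configuration in the
registry, as I recall under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management,
in a multi-string value named PagingFiles (one
"path initial_mb max_mb" entry per file). A sketch that
just reads it back, using Python's winreg module (take the
key path as my best recollection, not gospel):

    import winreg

    KEY = (r"SYSTEM\CurrentControlSet\Control"
           r"\Session Manager\Memory Management")
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        paging_files, _ = winreg.QueryValueEx(key, "PagingFiles")
        for entry in paging_files:
            print(entry)   # e.g. "C:\pagefile.sys 256 256"
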
I have never understood your notion of an isolated drive.
I have concluded that it is like trying to express
the brotherhood of man to a feminist. There is no
such thing to a feminist. Best I can tell, they don't
understand the notion of man, as a species (homo sapiens,
or should I say homo sap). Everything has to be
gender-specific, apparently. End of diatribe.

There is no such thing as an isolated drive in Windows.
Everything is smeared out, whether you like it or not.
I don't, but WDIK? I can't build a 50 million dollar house.

I know I haven't answered all your questions, but I am
out of time at the moment.

Regards,

Spots