Pastimes : Dream Machine ( Build your own PC )


To: Dave Hanson who wrote (3537), 11/14/1998 6:51:00 PM
From: Len
 
So, Dave, how large of a pagefile(s) do you now have?<g>

Len



To: Dave Hanson who wrote (3537), 11/15/1998 12:22:00 PM
From: Spots
 
>>add a second 128 meg mem stick
for 256 total, and was pleasantly surprised to find a noticeable difference
under NT.

I was being a bit sloppy and also a bit lazy in my earlier
comments. Lazy in that I hadn't actually looked up the
NT definition of the "commit charge" but made an assumption;
sloppy in that I glossed over the fact that there's always
some turnover in the contents of virtual memory even if there's
no change in its total size.

Let's take lazy first, as it has the bigger effect.

I looked up the virtual memory commit charge as NT uses the
term, and it is actually a commitment against the page file,
whereas I assumed in my laziness that it was a commitment
against the total address space. The difference is non-paged
memory -- anything that the OS has locked down -- which
is not mapped to the page file. This would, as you suggested,
include disk cache (it's considered gauche to page your
disk cache to disk <gg>), as well as other memory-resident
OS structures, plus memory locked down by user processes.
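
For the curious, here's a minimal C sketch of the user-process
side of that last item, assuming the Win32 VirtualAlloc/VirtualLock
calls (the buffer size here is made up for illustration). Pages
locked this way count against physical memory, not the page file:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Commit 64K of ordinary, pageable memory. */
        SIZE_T size = 64 * 1024;
        void *buf = VirtualAlloc(NULL, size,
                                 MEM_COMMIT | MEM_RESERVE,
                                 PAGE_READWRITE);
        if (buf == NULL)
            return 1;

        /* Lock it into physical memory. While locked, these
           pages can't be paged out, so they behave like the
           non-paged memory discussed above. NT caps how much
           a process may lock; raising the cap takes
           SetProcessWorkingSetSize. */
        if (!VirtualLock(buf, size))
            printf("VirtualLock failed: %lu\n", GetLastError());

        /* ... work with the locked buffer ... */

        VirtualUnlock(buf, size);
        VirtualFree(buf, 0, MEM_RELEASE);
        return 0;
    }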

This means that if the pageable memory allocation (the commit
charge) exceeds physical memory MINUS the currently allocated
non-pageable memory, there is a potential for page faults
or other OS-initiated disk activity. (An alternative to
page faults is to shrink the file cache, but that causes
disk activity of its own.) To illustrate, right now my
64mb machine looks like this (in K):

64948 Phys mem
17176 Cache
12988 Non-paged kernel
-----
34784 Available for process memory
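
If you'd rather pull the numbers from code than from Task
Manager, a rough sketch using Win32's GlobalMemoryStatus
follows. Note the cache and non-paged kernel figures aren't
exposed by this simple call (Task Manager gets them from the
NT performance counters), so I've plugged in the figures from
above by hand:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUS ms;
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatus(&ms);

        printf("%10lu K phys mem\n",
               (unsigned long)(ms.dwTotalPhys / 1024));
        printf("%10lu K page file\n",
               (unsigned long)(ms.dwTotalPageFile / 1024));

        /* Cache and non-paged kernel aren't in MEMORYSTATUS,
           so use the Task Manager figures from above: */
        printf("%10lu K available for process memory\n",
               (unsigned long)(64948 - 17176 - 12988));
        return 0;
    }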

I don't know if user process locked-down memory is included
in the kernel non-paged number or not, but for discussion, let's
pretend it is, and that the entire 34+ mb is available for
paged memory. This means that if my commit charge exceeds
34 mb, I have a potential for page faults. At the moment
my commit charge is 69264K, or call it 69mb, about twice
my pageable memory, so I do have a sizeable potential
for faults, even though the commit is barely above
physical memory size.

BUT actually what's important is the way my processes
use memory. In fact, it would be a very poor use of memory
if I never got a page fault (though when a commodity gets
cheap enough, we use it more to save other trouble, which
is why my next machine is a minimum 256megs <g>).

This brings me to sloppy. Research going back 40 years
shows that processes use subsets of their address spaces,
often in highly predictable patterns. These patterns
change over time, but at a much slower rate than memory
accesses, or even page faults. Thus, what is important
to fitting a process into available memory (and therefore
avoiding page faults) is not the size of its virtual
address space (commit charge) but the size of the subset
that is actively being used at any time, that is, being
currently referenced as the process executes. This
subset is known as the working set of the process.
If the working sets of all active processes fit into
available memory, good. If not, thrashing and very
poor performance.
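
If you're curious, you can read your own process's working
set from code. This is a rough sketch assuming the PSAPI
helper library (psapi.h, link with psapi.lib), which is an
add-on for NT4 rather than part of the base system:

    #include <windows.h>
    #include <psapi.h>      /* link with psapi.lib */
    #include <stdio.h>

    int main(void)
    {
        PROCESS_MEMORY_COUNTERS pmc;
        pmc.cb = sizeof(pmc);

        if (GetProcessMemoryInfo(GetCurrentProcess(),
                                 &pmc, sizeof(pmc)))
        {
            printf("working set:      %lu K\n",
                   (unsigned long)(pmc.WorkingSetSize / 1024));
            printf("peak working set: %lu K\n",
                   (unsigned long)(pmc.PeakWorkingSetSize / 1024));
            printf("page faults:      %lu\n",
                   (unsigned long)pmc.PageFaultCount);
        }
        return 0;
    }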

This can work both ways - sometimes processes with
small working sets change them often. As an extreme
example, suppose a process normally uses little memory
but on occasion allocates a large memory space, uses
it for a bit, then frees it (a database might do this
for an in-memory sort, for example). Such a process would appear
to have a fairly small working set most of the time,
but could nevertheless cause extreme amounts of thrashing
on occasion.
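
To make that concrete, here's a toy C version of the database
case (the sizes are arbitrary). Watch it in Task Manager and
you'll see the working set balloon during the sort and
collapse afterward:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        /* Steady state: a tiny working set. Then allocate
           32 MB and touch every page of it... */
        size_t n = 8 * 1024 * 1024;
        size_t i;
        int *big = malloc(n * sizeof(int));
        if (big == NULL)
            return 1;

        for (i = 0; i < n; i++)
            big[i] = rand();

        /* ...the sort references the whole buffer, so the
           working set grows by ~32 MB for its duration... */
        qsort(big, n, sizeof(int), cmp);

        /* ...and shrinks back once the buffer is freed. */
        free(big);
        return 0;
    }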

Well, this is much too long and is getting off the point,
so I will stop. To return briefly to your comment,
you are probably seeing paging due to overcommitting
pageable memory (or rather, were before adding more
physical memory), which happens somewhat earlier than
my former comments indicated. It could also depend
on the specific memory access patterns of the apps you're
running.

Spots