To: mr.mark who wrote (19442)
From: thecow
Date: 5/3/2001 6:50:53 AM
 
Thanks for the Norton update. I ran across an interesting article, written in non-geekspeak, about Windows resources that I thought was relevant to the recent discussions on the thread.

www2.whidbey.net

Memory Use By Windows

The following was posted on Delltalk in the summer of 1999 by Kickaha Ota in response to a question from a reader who was under the impression that the 640K resource limitation of Windows 95 and 98 would be eliminated by Windows 2000.

Actually, it's not a 640K limitation; it's a 64K+64K+64K limitation. It looks like it's time for the "Why are resources so limited?" rambling explanation again.

First of all, the 640K limitation is a completely different thing. Essentially, due to the leftovers from DOS still living on in Windows, there are a few things that have to live in the first 640K of the computer's memory. But that's usually not too much of a problem, and it's not what causes resource limitations. Resource limitations aren't leftovers from DOS; they're leftovers from the original Windows.

In order to understand why resources are limited, we first have to understand a bit about what resources are and how they work. Resources are Windows objects that a program can manipulate. For example, every window on the screen is a resource. Every picture that's displayed on the screen is probably a resource. If an application opens a file on disk, that open file is a resource. And so on, and so on.

If an application needs to use a resource, it asks the operating system to create or load it. For example, a program can say, "Hey, Windows, I need to create a window that's 300 pixels wide by 200 pixels high, okay?" Windows then goes ahead and creates or loads that resource, and gives the application back a magic number that represents it. "Okay, I've created your window, and it's #38710." Then the application can use that magic number to ask Windows to do other things related to that resource. "Okay, Windows; could you please display #38710 in the upper-left corner of the screen?" "Gotcha." Finally, when an application is through with a resource, it tells Windows to dispose of it. "Okay, please delete #38710." "Gotcha."
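In C against the real Win32 API, that whole conversation looks something like this. (A minimal sketch: error handling and the message loop are omitted, the predefined "STATIC" class is used so no class registration is needed, and the two-second pause is just so the window stays visible for a moment.)

    #include <windows.h>

    int main(void)
    {
        /* "Hey, Windows, I need a window 300 pixels wide by 200 high."
           What comes back is an opaque handle -- the "magic number" --
           not a pointer we can follow ourselves. */
        HWND hwnd = CreateWindowExA(
            0, "STATIC", "Demo",         /* predefined control class   */
            WS_OVERLAPPEDWINDOW,
            0, 0,                        /* upper-left corner of screen */
            300, 200,                    /* 300 wide by 200 high        */
            NULL, NULL, GetModuleHandleA(NULL), NULL);
        if (hwnd == NULL)
            return 1;

        ShowWindow(hwnd, SW_SHOW);       /* "please display #38710"     */
        Sleep(2000);
        DestroyWindow(hwnd);             /* "please delete #38710"      */
        return 0;
    }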

So, what format do these magic numbers take? Well, on most operating systems, it would be what's called a "pointer". You can think of memory as being like a post office, a huge collection of little boxes stretching off into the distance; every box can hold one piece of information. And just like every post office box has a number, every memory location has an address--a number that's used to access it. A pointer to something in memory is simply the address of the area in memory where it's stored. So, if I were a regular OS, and an application asked me to load a window, and I loaded that window into memory starting at memory address #12345678, I would tell the application "OK, I've loaded that window; it's #12345678."
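In C, a pointer really is nothing more than a box number. A trivial, purely illustrative program makes the point:

    #include <stdio.h>

    int main(void)
    {
        int letter = 42;        /* one piece of information in one box */
        int *box = &letter;     /* the pointer is just the box's address */
        printf("The value %d lives at address %p\n", *box, (void *)box);
        return 0;
    }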

On an Intel machine, these pointers are four bytes long. So if an application needs to hold a pointer to something, it needs to spend four bytes of memory to do it. That presented a problem to the original designers of Windows. Remember, memory was very limited back then; an 8MB machine was huge, and 4MB was more typical. And an application can use thousands and thousands of resources. So if resources were referred to by pointers, so that an application used up four bytes of memory every time it wanted to refer to a resource, it could wind up spending huge chunks of memory on nothing but resource pointers.
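It's easy to put numbers on the savings the designers were after; the 10,000-resource workload below is just a hypothetical figure:

    #include <stdio.h>

    int main(void)
    {
        unsigned resources = 10000;     /* hypothetical workload */
        /* On a 32-bit Intel machine a pointer is 4 bytes; a table
           index needs only 2, so the bookkeeping cost is halved. */
        printf("4-byte pointers: %u bytes\n", resources * 4u);  /* 40000 */
        printf("2-byte indices:  %u bytes\n", resources * 2u);  /* 20000 */
        return 0;
    }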

So, instead, the Windows designers used a different scheme. They created the resource table: essentially a big list of information about all the resources that are in memory at any given time. When an application tells Windows to load a resource, Windows finds an empty spot in this table and fills it in with the information about the resource that was just loaded. Now, instead of giving the application a four-byte pointer to the resource, Windows can just tell the application where the resource sits in the table. If I tell Windows to load a window, and that window winds up taking the 383rd slot in the resource table, Windows will tell me "Okay, I've loaded the resource, and it's #383." Since these 'index numbers' are much smaller than memory addresses, a resource's number can be stored in only two bytes instead of four. When you have only a few megabytes of memory to work with, and lots of resources in use, that's a huge improvement.
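This isn't Windows' actual code, but the idea fits in a few lines of C: a fixed table, and a two-byte slot number handed back in place of a four-byte pointer. (All the names here are made up for illustration.)

    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SLOTS 65536u      /* every value two bytes can hold */

    struct resource {
        void *data;                 /* where the real object lives */
        int   in_use;
    };

    static struct resource table[TABLE_SLOTS];

    /* Find an empty slot, fill it in, and return the slot number --
       two bytes -- instead of the four-byte pointer itself. */
    static uint16_t alloc_handle(void *data)
    {
        for (uint32_t i = 1; i < TABLE_SLOTS; i++) {  /* 0 means "failed" */
            if (!table[i].in_use) {
                table[i].data = data;
                table[i].in_use = 1;
                return (uint16_t)i;
            }
        }
        return 0;                   /* table full: out of resources */
    }

    static void free_handle(uint16_t h)
    {
        table[h].in_use = 0;
        table[h].data = NULL;
    }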

There's a problem with this scheme, though. There are only so many different values you can store in a given number of bytes of computer memory, just as there are only so many different numbers you can write down if you aren't allowed to use more than a certain number of digits. With four bytes to work with, you can store billions of different values. But with only two bytes, there are only 65,536 different numbers you can store. So if you use two-byte numbers as your resource identifiers, you can't have more than 65,536 resources loaded into memory at one time; if you loaded more than that, there'd be no way for programs to tell them apart. But on the computers of the day, there was no way to fit more than a few thousand resources into memory at once anyway. So this limitation wasn't seen as a problem, and the Windows designers went ahead with the resource table and two-byte resource identifiers.
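Two bytes are 16 bits, and 2^16 = 65,536; a 16-bit identifier literally can't count any higher, as this little demonstration shows:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        printf("%u distinct values\n", 1u << 16);   /* 65536 */

        uint16_t id = 65535;    /* the largest possible identifier */
        id++;                   /* wraps around to 0 -- there is no #65536 */
        printf("65535 + 1 = %u\n", (unsigned)id);
        return 0;
    }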

Now, we leap ahead to the present day. Memory is incredibly cheap; the memory savings from using two-byte resource numbers instead of four-byte pointers simply aren't significant anymore. There'd be more than enough memory to hold hundreds of thousands of resources in memory at one time. But there's still only 65,536 different possible resource identifiers; so only that many resources can be loaded into memory at once. Beyond that, you're out of resources, no matter how much memory you have left.

Why doesn't Microsoft just change Windows to use larger resource identifiers--say, four bytes instead of two? Because if Microsoft did that, it would make every existing Windows program stop working; every single Windows program written since the dawn of time would need to be rewritten to use the four-byte identifiers instead. Customers wouldn't like that. Not at all.

So, instead of making this huge change that would fix the problem once and for all but break everything in the process, Microsoft has made smaller changes to try to wriggle around the problem. The biggest one is dividing the resource table into three parts. If you look at Windows, the bulk of the Windows code is contained in three large libraries: USER.DLL (which holds most of the routines that manage the user interface), GDI.DLL (which holds the routines that manage graphics), and KERNEL.DLL (which holds the routines that manage the computer's hardware). When Windows applications ask Windows to do things, they almost always do it by making calls to one of these three DLLs.

Because of the way Windows is designed, Microsoft was able to create a separate resource table for each of these DLLs. If an application creates a window, the window resource goes in USER.DLL's resource table; if it loads a picture, the picture is stored in GDI.DLL's resource table; and if it opens a disk file, the open file's information is stored in KERNEL.DLL's resource table. If the window winds up in the 18th slot of the USER.DLL resource table, and the picture winds up in the 18th slot of the GDI.DLL resource table, both resources will have the identifier "18"; but because applications never ask GDI.DLL to do anything with windows, and never ask USER.DLL to do anything with pictures, Windows can still tell the two resources apart and not get confused.

So this triples the number of resources that can be in memory at one time; now, instead of just having "system resources", we have "USER resources", "GDI resources", and "KERNEL resources". (If you look in the Resource Meter, you'll see these three columns listed separately.) This reduces the problem, but it certainly doesn't solve it; as we've all found, it's still entirely possible for one or more of these three resource tables to run out.
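Continuing the illustrative sketch from above (again, not Windows' real data structures), splitting the one table three ways looks like this. The same 16-bit index can recur in each table because every call says which subsystem it's talking to:

    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SLOTS 65536u

    struct resource { void *data; int in_use; };

    enum subsystem { USER_TBL, GDI_TBL, KERNEL_TBL };  /* three tables now */

    static struct resource tables[3][TABLE_SLOTS];

    static uint16_t alloc_in(enum subsystem s, void *data)
    {
        for (uint32_t i = 1; i < TABLE_SLOTS; i++) {
            if (!tables[s][i].in_use) {
                tables[s][i].data   = data;
                tables[s][i].in_use = 1;
                /* "#18" in USER_TBL and "#18" in GDI_TBL never collide,
                   because the caller always names which table it means. */
                return (uint16_t)i;
            }
        }
        return 0;   /* this table is full, even if the other two aren't */
    }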

"Is it that NT is just better at managing the 640k?"

Again, the 640K doesn't have anything to do with resource problems. Windows NT does effectively have a higher resource limit. The reason: applications are separated from each other much more tightly under Windows NT than under Windows 95/98; with rare exceptions, NT applications aren't allowed to access each other's resources the way Windows 95/98 applications can. Because of that, Windows NT can often get away with creating separate resource tables for each application, rather than having a single set of shared resource tables for all applications like Windows 95/98 does. So applications can't wind up starving each other for resources; each application manages its resources separately. That allows many more resources to be in memory at once. (That's also one of the reasons why some apps designed for 95/98 won't run under NT.)
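The NT-style idea can be sketched the same way (purely illustrative, not NT's real implementation): hang a private table off each process instead of sharing one set of tables among everyone.

    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SLOTS 65536u

    struct resource { void *data; int in_use; };

    /* Each process carries its own private table, so a badly behaved
       application can exhaust only its own slots, never everyone else's. */
    struct process {
        struct resource table[TABLE_SLOTS];
    };

    static uint16_t alloc_handle(struct process *p, void *data)
    {
        for (uint32_t i = 1; i < TABLE_SLOTS; i++) {
            if (!p->table[i].in_use) {
                p->table[i].data   = data;
                p->table[i].in_use = 1;
                return (uint16_t)i;
            }
        }
        return 0;
    }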

"I thought that OS' like BeOS (I believe it has a 2 MB system resource limit) were able to break the 640k limit because they just abandoned the DOS skeleton of whichever OS they were building..."

Whenever you abandon backward compatibility, you make things much easier for yourself. Again, the Microsoft folks could solve this problem in a flash, if they were willing to break all existing Windows apps to do it.

"I had hoped that W2K would abandon DOS, but it doesn't seem that this is the case."

Eventually, our salvation will come.

The original Windows used 16-bit (two-byte) values for just about everything. When Windows 95 came out, it was also called "Win32" by programmers, because many of these key 16-bit values were changed to 32 bits. That broke lots and lots and lots of programs, but it also allowed for the sorts of features that Windows 95 and 98 have today.

Now Microsoft is working on developing "Win64", which will change many of the 32-bit values used by programs to 64-bit values. In the process, I hope that they also revise some of the leftover 16-bit values that weren't raised to 32 bits in Win32, like the resource identifiers. Again, this will require programs to be rewritten; but with processors being so powerful these days, there's a good chance that Microsoft can come up with a "Win32 emulator" that will allow existing Windows programs to run under Win64 without changes. And for applications that have been rewritten, it could make resource limitations a thing of the past, at least for the next few years.