Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: TimF who wrote (64206), 11/21/2001 1:19:31 PM
From: tcmay
 
"I understand that voltage is applied to the gate, which then allows current to flow from the source to the drain. This is an electrical switch, not a physical switch, which would presumably be too big and too slow. But how does the current get applied to the gate? If it uses its own similar electrical switch, then the question just gets backed up one level. If it uses a physical switch, then would not that switch be too big and too slow? Everything that I have read about transistors and microprocessors either just assumes such knowledge or, if basic enough not to assume it, deals either with the individual transistor's operation, like that Intel link, or with the overall design of the microprocessor (how many ALUs or FPUs, the cache structure, prefetching, how the CPU deals with branches, etc.), or with the techniques used to create the MPU. Any good links that deal with how transistors work together to perform basic MPU functions, and that don't require an engineering degree to understand?"

If I understand your question correctly, you're first asking whether the logic to control (switch) another switch (the memory) doesn't just "back up" the problem indefinitely.

No, because the switches are _multiplexed_. That is, a few switches can then switch the state of hundreds or thousands of other switches.

As a simple way to see this, picture a 32 x 32 array of memory cells (various MOS switches, for example). Each row and column has some logic (switches) to bring an access line high or low. Combinations of these highs and lows on the rows and columns can change the state of the memory cells. And sense the states.
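
To make that concrete, here's a toy sketch in Python (the layout and function names are mine, purely for illustration, not any real chip's design): ten address bits split into a 5-bit row select and a 5-bit column select, so two small decoders on the periphery reach all 1,024 cells.

    # A toy model of row/column addressing in a 32 x 32 memory array.
    ROWS, COLS = 32, 32
    cells = [[0] * COLS for _ in range(ROWS)]   # 1,024 one-bit cells

    def decode(address):
        # Ten address bits: the high 5 pick one of 32 row lines,
        # the low 5 pick one of 32 column lines.
        return (address >> 5) & 0b11111, address & 0b11111

    def write(address, bit):
        row, col = decode(address)
        cells[row][col] = bit

    def read(address):
        row, col = decode(address)
        return cells[row][col]

    write(0b1010100110, 1)
    print(read(0b1010100110))   # -> 1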

The amount of this logic on the edges, the periphery, is some simple multiple of 32.

As the memory arrays get larger, the number of bits stored goes as the _square_ of the number of rows and columns, but the amount of peripheral logic grows only _linearly_.

Put another way, the peripheral logic switches do more than just store one bit...they control or steer the logic of the other switches.
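
To put rough numbers on that (a back-of-the-envelope sketch in Python, counting one select line per row and per column):

    # Bits stored grow as the square of N; peripheral select lines only as ~2N.
    for n in (32, 256, 1024):
        print(f"{n} x {n}: {n * n:,} bits, ~{2 * n:,} select lines")
    # 32 x 32:       1,024 bits,     ~64 lines
    # 1024 x 1024: 1,048,576 bits, ~2,048 lines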

This is common throughout electronics. The key to binary logic, in fact, is that any number of "things" (bits, instructions, states, objects, etc.) can be addressed with only a logarithmic number of address bits. A 32-bit addressing scheme can manipulate or access a much "bigger" thing, namely, a space of 2^32 things. And a 64-bit addressing scheme requires only about twice as many controlling logic transistors, yet it can access or address 2^64 things, which is _enormously_ larger.

Such is the power of binary logic (pun intended).
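
The arithmetic, as a quick Python sketch:

    from math import ceil, log2

    # Address bits needed to reach M things: about log2(M).
    print(ceil(log2(1_000_000)))   # -> 20 bits suffice for a million things

    # Doubling the address width squares the reachable space.
    print(2 ** 32)                 # -> 4,294,967,296
    print(2 ** 64 // 2 ** 32)      # -> 4,294,967,296: a 64-bit scheme reaches
                                   #    2^32 times as many things as 32-bit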

As for links to articles, you can try searching with Google. Milo Morai was offended that I cited _books_ instead of doing such a search and giving him a bunch of links. Well, the fact is that students, whether in high school, JC, or university, still learn primarily from good textbooks. Most Web pages are pale imitations. Deal with it.

For those who don't want to buy books, libraries still exist.

Or an hour spent in an easy chair at Borders or Barnes and Noble.

--Tim May



To: TimF who wrote (64206), 11/21/2001 2:14:52 PM
From: rsi_boy
 
"don't require an engineering degree to understand?"
Well, I have an engineering degree (electrical/computer even) and I still don't understand the complexity of transistor design...

Seriously though, when you think of a transistor as a switch, the analogy works in the theoretical sense, but in the practical world there are a lot of nonlinear effects at play. For example, leakage current refers to the fact that, in practice, it is difficult (read: impossible) to manufacture a transistor that is ever perfectly on or off, or that switches instantly. Even when "off," the transistor is still on a little bit, wasting a little bit of power and generating a little bit of unnecessary heat.

To understand computers at the logical level (where we assume transistors act perfectly), you need to study digital logic. An introductory text on that subject would show you how to assemble a handful of transistors into logic gates. With these basic functions (AND, OR, NOT, etc.) you can start to build the basic building blocks of digital logic (adders, memory cells, shifters, flip-flops, multiplexers, etc.). With those building blocks, and a good understanding of the relevant theory and protocols (handshaking, little- vs. big-endian ordering, addressing schemes, binary math algorithms, etc.), you can start understanding how to build the building blocks of processors (ALUs, registers, program counters, caches, decoders, etc.). And so yes, unfortunately it gets pretty complicated...
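
To make the first couple of rungs of that ladder concrete, here's a minimal sketch in Python (the function names are mine, and each gate is treated as the ideal switch described above, with no leakage or switching delay):

    def NAND(a, b):
        # A real static CMOS NAND is four transistors; here it is
        # modeled as a pure function on 0/1 inputs.
        return 0 if (a and b) else 1

    # Every other gate can be built from NAND alone.
    def NOT(a):     return NAND(a, a)
    def AND(a, b):  return NOT(NAND(a, b))
    def OR(a, b):   return NAND(NOT(a), NOT(b))
    def XOR(a, b):  return AND(OR(a, b), NAND(a, b))

    # One rung up: a full adder, the unit cell of an ALU's adder.
    def full_adder(a, b, carry_in):
        s = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return s, carry_out

    # Chain four full adders into a 4-bit ripple-carry adder.
    def add4(x, y):
        total, carry = 0, 0
        for i in range(4):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            total |= s << i
        return total

    print(add4(3, 5))   # -> 8

Real adders use faster schemes like carry-lookahead rather than rippling the carry bit along, but the composition idea is the same.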