Technology Stocks : Intel Corporation (INTC)


To: Barry Grossman who wrote (43807), 1/4/1998 5:20:00 AM
From: Joe NYC
 
Barry,

What kinds of programs are current PCs constraining?

It is a combination of things. Usually the slowest component in the PC determines the hardest bottleneck to overcome. The slowest component is the hard disk. There are a number of things that can help overcome it. The easiest is plentiful memory. Until about two years ago, memory was fairly expensive, and it was sometimes hard to persuade clients to upgrade all their computers to, say, 16 MB of memory (which held at around $600 for a long time).

With plentiful memory, you as a programmer can sometimes rely on the fact that the piece of data you need next will come from either the local disk cache or the memory cache on the server, not from the hard disk itself.
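
To make that caching idea concrete, here is a minimal sketch in C (my own illustration, not from any particular product): a read routine that checks an in-memory cache before touching the disk. The names cache_lookup-style names and read_from_disk are hypothetical placeholders, and the cache is a trivial direct-mapped table.

#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 256
#define RECORD_SIZE 128

/* One slot of a trivial direct-mapped record cache. */
struct cache_slot {
    long key;                   /* record number, -1 if empty */
    char data[RECORD_SIZE];     /* cached copy of the record  */
};

static struct cache_slot cache[CACHE_SLOTS];

/* Hypothetical disk read; a real program would fread() against the
 * data file, which is orders of magnitude slower than the memcpy below. */
static void read_from_disk(long recno, char *buf)
{
    memset(buf, 0, RECORD_SIZE);
    sprintf(buf, "record %ld", recno);
}

/* Fetch a record, preferring the in-memory copy. */
void get_record(long recno, char *buf)
{
    struct cache_slot *slot = &cache[recno % CACHE_SLOTS];

    if (slot->key == recno) {               /* cache hit: memory speed */
        memcpy(buf, slot->data, RECORD_SIZE);
        return;
    }
    read_from_disk(recno, slot->data);      /* cache miss: disk speed  */
    slot->key = recno;
    memcpy(buf, slot->data, RECORD_SIZE);
}

int main(void)
{
    char buf[RECORD_SIZE];
    long i;

    for (i = 0; i < CACHE_SLOTS; i++)
        cache[i].key = -1;                  /* mark every slot empty   */

    get_record(42, buf);                    /* first access: from disk */
    get_record(42, buf);                    /* second access: from RAM */
    printf("%s\n", buf);
    return 0;
}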

Which brings me to the second bottleneck - the network. Here, several things can speed the process up. On the server side, it is first of all memory, then the processor and a faster hard disk / RAID array. Ideally, you want the information to come from memory, not from the hard disk.

Once the server has the information, it needs to send it to you. Here, network bandwidth plays a role. A gradual upgrade from Ethernet to Fast Ethernet is going on, and Intel played a very important role here by undercutting 3Com, forcing them to lower prices to Intel's level. The result is that there is no longer any reason not to buy Fast Ethernet. The second piece of the puzzle is network hubs/switches/routers. With hubs, a large number of users - sometimes the whole network - shares the same bandwidth, which used to be 10 Mb/s.
Switches / routers segment the network traffic, so that information that needs to go from point A to point B takes the shortest route and doesn't go anywhere else. These days, the price of Fast Ethernet switches is down to less than $200/port, probably half of that for Ethernet. At these prices, some companies can afford a dedicated line on a switch for every user. When this is in place, you get network response times approaching what you get when accessing information locally. BTW, Intel is becoming a bigger player here.
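
As a rough illustration of why switching matters (my own back-of-the-envelope numbers, not anything from Intel or 3Com - the 20-user figure is just an assumption):

#include <stdio.h>

int main(void)
{
    double shared_hub_mbps = 10.0;     /* classic 10 Mb/s Ethernet hub     */
    double switched_port_mbps = 100.0; /* dedicated Fast Ethernet port     */
    int users = 20;                    /* assumed users on one hub segment */

    /* On a hub, every user contends for the same 10 Mb/s.  On a switch,
     * each user gets the full port speed for point-to-point traffic.     */
    printf("hub, %d users sharing:  %.1f Mb/s each (best case)\n",
           users, shared_hub_mbps / users);
    printf("switched port per user: %.1f Mb/s\n", switched_port_mbps);
    return 0;
}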

It's only once you have the information that you get to act on it on the workstation. This is where most of the application code gets executed. You have a choice of tools. There have always been the quick (as far as development time) and dirty tools - dBase, FoxPro, Clipper, Visual Basic, PowerBuilder. These tools generally have a layer running over the hardware that gives the programmer a friendlier environment to run the code, more graceful crashes, memory management, etc. Most custom-built applications are built using these tools. These applications are CPU hogs. They need up to 10 times as many CPU cycles to do the same thing as C or C++ does.

So fast CPUs allowed a lot of the custom made applications to be written with these tools. A lot of them would not have been written if the programmer had to write them in C or C++.

C/C++ applications generally take a lot more time to write, but they execute much more efficiently - as I said, up to 10 times faster in CPU-intensive tasks, or more. This is because the protective layer is removed. Your lines of code generally compile down to a single instruction, or a handful of instructions, that execute directly on the CPU. But the applications are harder to debug, take a lot longer to test, and there is more potential for bugs to go undetected during the testing process.
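
A rough illustration of the native-code point (my example; the instruction counts are only indicative): a tight C loop like the one below comes out of the compiler as just a few machine instructions per iteration - roughly a load, an add, an increment and a branch - while a 4GL or interpreted tool also pays to decode and dispatch each statement on every pass.

#include <stdio.h>

/* Sum an array of invoice totals.  Compiled C spends only a few
 * machine instructions per element; an interpreter doing the same
 * loop also runs its own bookkeeping code around each step. */
double sum_invoices(const double *amounts, int n)
{
    double total = 0.0;
    int i;

    for (i = 0; i < n; i++)
        total += amounts[i];   /* roughly: load, add, increment, branch */
    return total;
}

int main(void)
{
    double amounts[] = { 19.95, 250.00, 7.50, 1200.00 };

    printf("total: %.2f\n", sum_invoices(amounts, 4));
    return 0;
}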

What's going on on the application tool front is that the 4GL tool vendors (the first group) are incorporating native code compilers into their tools, which retain some of the advantages of the 4GL tools while acquiring more of the speed of C/C++. Visual Basic and PowerBuilder are examples. This trend may not seem beneficial from the point of view of an Intel investor, but many technology users are benefiting from it. One of the walls that many Visual Basic applications tend to hit has been moved a little further out, giving the army of Visual Basic programmers more room. The normal increase in performance expected of CPUs helps as well.

Under the conditions I outlined, your typical custom application faces a CPU bottleneck when the CPU needs to sort or cycle through a lot of data it already has in memory, or when it needs to do calculations on that data.

I have written a number of business applications for many industries, and to tell you the truth, this is far less common than people think. Or rather, the CPU delay tends to be insignificant compared to the other delays. If it takes 5 seconds to get the data from its source, it does not matter much whether your faster CPU processes it in .1 second rather than .2.

What programming becomes unconstrained and more feasible with the announced and upcoming P-IIs and Merced?

Given that you have hundreds of megabytes, even gigabytes, of memory in your servers, the CPU or the number of CPUs will make a lot of difference. Since memory prices are still in free fall and Merced is around the corner, we will see significant performance increases.

On the workstation end, I think the move to the 100 MHz memory bus is more important than the clock speed increase from 300 MHz to 333 MHz on the P-II side. When it comes to performance, you have to get the data and then you have to process it. Suppose the data is in memory and a task takes 2 seconds to execute, of which the transfer of data from memory to the CPU takes 1 second and the processing takes the other 1 second. Doubling CPU performance will then reduce the time it takes to perform the task by 25%, rather than 50%.
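
That arithmetic is just Amdahl's law in miniature. Here is a small sketch (my own, with the same made-up 1-second timings) showing how total time responds when only the processing half gets faster:

#include <stdio.h>

/* Total task time when only the compute part is sped up:
 * transfer stays fixed, compute shrinks by the speedup factor. */
static double task_time(double transfer_s, double compute_s, double cpu_speedup)
{
    return transfer_s + compute_s / cpu_speedup;
}

int main(void)
{
    double transfer = 1.0;   /* seconds to move data from memory to CPU */
    double compute  = 1.0;   /* seconds of actual processing            */

    double before = task_time(transfer, compute, 1.0);  /* 2.0 s */
    double after  = task_time(transfer, compute, 2.0);  /* 1.5 s */

    printf("before: %.1f s, after doubling the CPU: %.1f s\n", before, after);
    printf("improvement: %.0f%%\n", 100.0 * (before - after) / before);  /* 25% */
    return 0;
}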

In addition to the 100 MHz memory clock, Deschutes will have faster L2 cache, and later more of it. This is probably more important than the increase in clock speed.

If you compare the P-II and Deschutes, the P-II is almost a non-event. If it didn't exist, the world would not have noticed. I have a lot more respect for Deshutes. (Maybe one day I will even learn how to spell it.)

What programming will become unconstrained when the next generation of chips arrives in another 18 months?

The ability to access and process almost unlimited amounts of data will provide management with even better and more timely information about their business. It will also help target marketing efforts more effectively.

Also, Intel has a growing presence in networking, which may grow faster than CPU sales. Few people are content with standalone PCs. You want to be connected, and be connected at high speed.

In my first example, the data took 5 seconds to arrive from its source and .2 seconds to process. If the network/server becomes a lot faster and the data takes only .2 seconds to arrive and .2 seconds to process, the CPU becomes somewhat of a bottleneck, and it may be more logical to upgrade to a faster CPU.

There is another programming benefit you get from adequate hardware. Given plentiful hardware resources, you are free to choose the "correct" approach, rather than a shortcut that speeds up performance but ends up costing a lot more in maintenance, non-extensibility of the program and unclear logic. The Y2K problem is a typical example.
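
The classic illustration (my own sketch of the familiar pattern, not anyone's actual code): storing the year as two digits saved a couple of bytes per record back when memory and disk were scarce, at the cost of comparison logic that breaks at the century boundary.

#include <stdio.h>

/* The space-saving shortcut: a two-digit year in the record. */
struct order_short {
    int yy;                       /* 97 means 1997 ... but what is 00?   */
};

/* The "correct" approach once storage is cheap: the full year. */
struct order_full {
    int year;                     /* 1997, 2000, 2038, ...               */
};

/* Comparing two-digit years goes wrong across the 1999 -> 2000 boundary. */
int is_newer_short(struct order_short a, struct order_short b)
{
    return a.yy > b.yy;           /* 00 (2000) compares as older than 97 */
}

int is_newer_full(struct order_full a, struct order_full b)
{
    return a.year > b.year;       /* always correct                      */
}

int main(void)
{
    struct order_short s2000 = { 0 },    s1997 = { 97 };
    struct order_full  f2000 = { 2000 }, f1997 = { 1997 };

    printf("two-digit: 2000 newer than 1997? %d  (wrong)\n",
           is_newer_short(s2000, s1997));
    printf("full year: 2000 newer than 1997? %d  (right)\n",
           is_newer_full(f2000, f1997));
    return 0;
}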

Joe

PS: sorry about the long-winded reply.