Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: Ali Chen who wrote (74165)3/11/2002 3:41:07 AM
From: Petz
 
<Differences in calling conventions are irrelevant since
in a normally designed application the most time (90-95%) is spent in loops...>

Library functions like sin, cos, sqrt are often called inside loops. Using registers (when there are lots of them) to pass arguments can be significant because library functions are usually the equivalent of only a few lines of code, so call overhead is a real fraction of their cost. And if the functions are "in-lined," the extra registers definitely come in handy to generate optimal code.
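A minimal C++ sketch of the pattern described, with a hypothetical helper (`scaled_sin`, not from the original posts): a few-line math function called once per loop iteration, where register-based argument passing or inlining removes per-call stack traffic.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical small helper of the kind discussed above: only a few
// lines of real work, so the cost of pushing arguments to the stack
// (versus passing them in registers, or inlining) is a noticeable
// fraction of each call when it runs inside a loop.
inline double scaled_sin(double x, double scale) {
    return scale * std::sin(x);
}

double sum_scaled(const double* xs, int n, double scale) {
    double total = 0.0;
    for (int i = 0; i < n; ++i) {
        // With register argument passing or inlining, each of these
        // n calls needs no memory traffic for its two arguments.
        total += scaled_sin(xs[i], scale);
    }
    return total;
}
```

With the definition visible and marked `inline`, the compiler can expand the call in place and keep `x`, `scale`, and `total` in registers for the whole loop.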

Petz



To: Ali Chen who wrote (74165)3/11/2002 4:47:20 AM
From: Gopher Broke
 
in a normally designed application the most time
(90-95%) is spent in loops inside subroutines/objects


I think you are talking about applications written years ago, when performance was the #1 consideration on most programmers' minds and they would unroll function calls to squeeze another 10% out of a loop. Device drivers still get this level of optimisation, but "real world" programs do not.

Nowadays, with the advent of object oriented languages, it is very common to have an object's member functions called within tight loops yet not declared for compiler inlining.
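A short C++ illustration of that pattern, using a hypothetical class (`Accumulator`, invented for this sketch): a tiny member function called from a tight loop. If its definition is not visible to the compiler (or not declared inline), every iteration pays full call overhead.

```cpp
#include <cassert>

// Hypothetical example of the pattern described: a small member
// function invoked from a tight loop. Defined in-class here, so it
// is implicitly inline; if it lived in a separate .cpp file, each
// iteration below would be a genuine function call.
class Accumulator {
public:
    void add(int value) { total_ += value; }
    int total() const { return total_; }
private:
    int total_ = 0;
};

int sum_range(int n) {
    Accumulator acc;
    for (int i = 1; i <= n; ++i)
        acc.add(i);          // member-function call inside the loop
    return acc.total();
}
```

This is the maintainability trade-off the post describes: the loop body stays a readable one-liner, at the cost of a call per iteration when the compiler cannot inline it.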

Calling functions within loops is not just laziness by programmers. These days maintainability is the number one goal for most systems, even at the expense of performance. Buying more hardware is a lot cheaper than hiring programmers to fix bugs caused by over-complexity. So breaking things down into little functions and reusing them means you don't have to debug the same thing over and over again. The downside is that the number of function calls goes up substantially.



To: Ali Chen who wrote (74165)3/11/2002 2:39:28 PM
From: Joe NYC
 
Ali,

Differences in calling conventions are irrelevant since
in a normally designed application the most time
(90-95%) is spent in loops inside subroutines/objects,
and the call overhead is a tiny fraction of the
run time.


Good point. Yes, a lot of time is spent in loops, but with object-oriented programming the trend is toward smaller granularity: a lot of code is moved into tiny code fragments.

There is one other thing about the extra registers. If somebody wants to hand-tweak the code, the 8 extra 64-bit registers can store as much information as 16 32-bit registers. Retrieving 32 bits from a register (upper or lower half) takes far less time than a round trip to memory. I doubt compilers would implement this type of optimization, though.
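A hand-tweaking sketch of that idea in C++ (function names `pack`/`upper`/`lower` are invented for illustration): keeping two 32-bit values in the halves of one 64-bit value, so fetching either half is a shift/mask on a register instead of a memory access.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only: pack two 32-bit values into one 64-bit word,
// the way a hand-tuned routine might keep both live in a single
// 64-bit register rather than spilling one to memory.
uint64_t pack(uint32_t hi, uint32_t lo) {
    return (static_cast<uint64_t>(hi) << 32) | lo;
}

// Extracting either half is a shift and/or mask on the register --
// a cycle or two, versus a round trip to memory for a spilled value.
uint32_t upper(uint64_t packed) { return static_cast<uint32_t>(packed >> 32); }
uint32_t lower(uint64_t packed) { return static_cast<uint32_t>(packed & 0xFFFFFFFFu); }
```

As the post says, this is the sort of trick a hand-tweaker might use; a compiler's register allocator would not normally split one architectural register into two logical 32-bit slots like this.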

Joe