Technology Stocks : Siebel Systems (SEBL) - strong buy?


To: Hardly B. Solipsist who wrote (6491)10/23/2002 10:22:31 PM
From: ahhaha
 
An Interview with Microsoft Chief Architect Anders Hejlsberg

...

Osborn:

We've talked about Java, C++, and scripting. I have heard a number of people here at the PDC argue that there really is no difference between .NET IL (IL is the Microsoft Intermediate Language that all compilers must produce to run in the .NET framework) and the Java byte code that is consumed by the Java Virtual Machine (JVM). It's clear from the talks you've given that you do not agree. Would you care to comment further on the distinction?

Hejlsberg:

Sure. First of all, the idea of ILs is a very old idea. You could trace the concept back to the UCSD Pascal p-machine (an early implementation of Pascal for personal computers) or to Smalltalk. p-code is used by Basic and Visual Basic. Parts of Word, internally, use a p-code engine because it's more compact. So, p-code is nothing new.

I think the approach we've taken with the IL is interesting in that we give you options to control when compilation -- or translation, if you will -- of the IL to native code occurs. With managed C++, you can actually generate native code directly from source. Managed C++ can also generate IL, as can C# and VB. And when you install your code, we give you the option to compile the IL to native code at that point, so that when you run it there's no just-in-time compiler overhead. We also give you the option of running and compiling code dynamically, with just-in-time compilation. And, of course, having an IL gives you many advantages, such as the ability to move to different CPU architectures and to introduce verifiability and type safety, and then build the security system on top of that.
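[Editor's Note: A rough sketch of the two paths described above. The commands in the comments are illustrative only; csc is the C# compiler, and Ngen.exe is the .NET Framework's install-time native image generator, whose exact command-line syntax varies by framework version.]

using System;

class Hello
{
    static void Main()
    {
        // csc Hello.cs   -> produces Hello.exe containing IL, not machine code.
        // Hello.exe      -> run it directly and each method's IL is JIT-compiled
        //                   to native code the first time it executes.
        // ngen Hello.exe -> or precompile the IL to native code at install time,
        //                   so there is no JIT overhead when the program runs.
        Console.WriteLine("IL first, native code later.");
    }
}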

I think one of the key differences between our IL design and Java byte code, specifically, is that we made the decision up front not to have interpreters. Our code will always run native. So, even when you produce IL, you are never running an interpreter. We even have different styles of JITs. For the compact framework, we have the EconoJIT, as we call it, which is a very simple JIT. [Editor's Note: .NET Compact is a subset of the .NET framework designed to be ported to other devices and platforms.] For the desktop version we have a more full-fledged JIT, and we even have JITs that use the same back end as our C++ compiler. However, those take longer, so you would only use them at install time.

When you make the decision up-front to favor execution of native code over interpretation, you are making a decision that strongly influences design of the IL. It changes which instructions are included, what type information is included, and how it is conveyed. If you look at the two ILs, you'll notice that they're quite different. In a sense, our IL is type-neutral. There's no information in the instructions that specifies the type of the arguments. Rather, that is inferred by what's been pushed on the stack. This approach makes the IL more compact. A JIT compiler needs to have that information anyway, so there's no reason to carry it along in the instructions. So you end up with some different design decisions, which in turn makes it easier to translate IL into native code.
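[Editor's Note: To make the type-neutral point concrete, here is a small C# method together with abbreviated, paraphrased disassemblies. The IL's add instruction carries no operand type; the equivalent JVM bytecode encodes the type in the instruction itself (iadd for int, dadd for double).]

class Demo
{
    static int Add(int a, int b)
    {
        return a + b;
    }

    // .NET IL for Add (abbreviated ildasm output):
    //   ldarg.0
    //   ldarg.1
    //   add        // type-neutral: operand types are inferred from the stack
    //   ret
    //
    // JVM bytecode for the equivalent Java method (abbreviated javap output):
    //   iload_0
    //   iload_1
    //   iadd       // the "i" prefix says the operands are ints
    //   ireturn
}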

Osborn:

What distinction needs to be made between interpretation and the approach that you're describing?

Hejlsberg:

At the core of an interpreter is a loop that fetches some bytes out of a p-code stream, which then falls into a big switch statement that says, "Oh, this was an ADD instruction, so it goes over here, but this wasn't" -- and so forth.
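[Editor's Note: In code, the fetch-and-dispatch loop described above looks roughly like the following C# sketch; the opcode set is invented purely for illustration.]

using System;
using System.Collections.Generic;

class Interpreter
{
    const byte OP_PUSH = 0x01;   // next byte is a constant to push
    const byte OP_ADD = 0x02;    // pop two values, push their sum
    const byte OP_PRINT = 0x03;  // pop the top of the stack and print it
    const byte OP_HALT = 0xFF;

    static void Run(byte[] code)
    {
        var stack = new Stack<int>();
        int pc = 0;
        while (true)
        {
            byte op = code[pc++];      // fetch the next p-code byte
            switch (op)                // the "big switch statement"
            {
                case OP_PUSH: stack.Push(code[pc++]); break;
                case OP_ADD: stack.Push(stack.Pop() + stack.Pop()); break;
                case OP_PRINT: Console.WriteLine(stack.Pop()); break;
                case OP_HALT: return;
                default: throw new InvalidOperationException("unknown opcode");
            }
        }
    }

    static void Main()
    {
        // push 2, push 3, add, print, halt
        Run(new byte[] { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT });
    }
}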

An interpreter emulates a CPU. We turn it upside down and we do one pass -- we always do one pass -- where we convert the instructions into machine code. Now, that machine code, in the case of EconoJIT, is actually very simple in that it just builds a list of calls and push instructions, and calls to runtime helpers. Then it sets off on that list instead. And, of course, that code executes much faster than interpreted code.
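[Editor's Note: A minimal sketch of the "translate once, then execute" idea, contrasted with the interpreter loop above. A real JIT such as EconoJIT emits machine code; here, delegates stand in for the emitted calls and push instructions, purely to show the shape of the approach. The opcodes are the same invented ones as in the previous sketch.]

using System;
using System.Collections.Generic;

class OnePassTranslator
{
    const byte OP_PUSH = 0x01, OP_ADD = 0x02, OP_PRINT = 0x03, OP_HALT = 0xFF;

    // One pass over the p-code: each instruction is translated exactly once,
    // before execution, into something directly executable.
    static Action<Stack<int>>[] Translate(byte[] code)
    {
        var ops = new List<Action<Stack<int>>>();
        int pc = 0;
        while (code[pc] != OP_HALT)
        {
            switch (code[pc++])
            {
                case OP_PUSH:
                    int value = code[pc++];
                    ops.Add(s => s.Push(value));
                    break;
                case OP_ADD:
                    ops.Add(s => s.Push(s.Pop() + s.Pop()));
                    break;
                case OP_PRINT:
                    ops.Add(s => Console.WriteLine(s.Pop()));
                    break;
            }
        }
        return ops.ToArray();
    }

    static void Main()
    {
        var program = new byte[] { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
        var compiled = Translate(program);   // the single translation pass
        var stack = new Stack<int>();
        foreach (var op in compiled)         // execution: no per-instruction decode or switch
            op(stack);
    }
}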

Osborn:

So, let me run through this: You're completely compiling the code. Then, when you're done, the bits are ready to run completely, though the point at which translation from IL to machine code occurs may vary.

Hejlsberg:

Yes. But then we may, if it's in a memory-constrained environment on a small device, throw the code away after we've run it.