

To: Urlman who wrote (4203), 1/17/1998 3:09:00 PM
From: Urlman
 
Stack-based Java a back-to-future step
Embedded Systems
David A. Greve, Design Engineer, and Matthew M. Wilding, Research Engineer, Advanced Technology Center, Rockwell-Collins Inc., Cedar Rapids, Iowa

01/12/98
Electronic Engineering Times
Page 92
Copyright 1998 CMP Publications Inc.

In early 1996, a small group of Rockwell-Collins engineers in the Advanced Architecture Microprocessor group (AAMP) gathered to review the newly acquired Java Virtual Machine (JVM) specification. After a time, one engineer mumbled, "Aren't we already building one of these?" This offhand remark struck a chord with the others and culminated in July 1997 with Rockwell's announcement of the development of the first silicon implementation of the JVM, the JEM1 microprocessor. The JEM1 is the first Java processor because it is the first to execute the JVM instruction set directly.

This result was made possible by the AAMP engineers who realized that they could capitalize on the success of the AAMP architecture while leveraging the considerable commercial development surrounding Java by modifying an AAMP to implement the JVM instruction set directly.

The popularity of Sun's JVM specification is, in a sense, a validation of the AAMP team's many years of effort. The use of a stack-based architecture providing high code density and portability is a mainstay of the AAMP architecture. The JVM leverages these concepts while simultaneously supporting a modern, object-oriented environment.

The JVM also represents an opportunity to better serve the AAMP team's internal customers. For years the architecture has been supported by a relatively small group of engineers who did everything from processor design to compiler and code-generator development to test-equipment and debugger design and construction. The JVM's strong industry presence provides an opportunity to leverage an industry standard to obtain commercial support. It is now possible to obtain off-the-shelf Java compilers and debuggers that use standard interfaces. Further, the high profile enjoyed by the Java language makes it alluring to new recruits.

Developing a processor that executes JVM instructions has another important advantage. Other processor manufacturers such as Sun are developing JVM direct-execution machines with different power-performance tradeoffs that can provide a path for JEM customers who need higher performance. This frees the JEM development team to concentrate on providing a low-power processor designed to meet the requirements of embedded real-time applications.

Although Java was conceived as a language for use in embedded systems, it has several aspects that fuel debate over its usefulness in such systems. These debates typically involve direct memory access, garbage collection, memory requirements, object-oriented performance and real-time capabilities. The JEM team understood that it had to provide solutions to these problems for Java to be a success in the embedded market. In solving these problems, the JEM team benefited greatly from working with Intermetrics (Cambridge, Mass.), a software company with significant Java experience. Intermetrics is primarily responsible for the development of the JEM run-time system and static linker as well as the development of a source-level debugger.

Although the JEM must extend the JVM instruction set to provide direct access to memory, it is undesirable to require native methods written in another language simply to utilize these instructions. The Intermetrics-developed JEM static linker solves this problem by directly replacing selected native-method calls with JEM-specific opcodes. This technique allows an entire JEM system to be developed using Java and provides significant portability and language interoperability, since only standard Java class files are required to program a JEM system.
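
As a rough sketch of how such a scheme might look from the programmer's side (the RawMemory class, its method names and the register address below are hypothetical illustrations, not part of the actual JEM tool chain), application code could call an ordinary native-method stub that the static linker recognizes and rewrites into the corresponding memory-access opcode:

    // Hypothetical sketch of the native-method-replacement idea. The class
    // and method names are illustrative only; the real JEM linker interface
    // is not documented in this article.
    public final class RawMemory {

        // Declared as a plain native method in Java source. A static linker
        // of the kind described above could recognize calls to this stub and
        // substitute a processor-specific "load from address" opcode in the
        // emitted class file, so no hand-written C or assembly glue is needed.
        public static native int peek(int address);

        // Same idea for a raw store to a memory-mapped register.
        public static native void poke(int address, int value);
    }

    class StatusLight {
        private static final int LED_REGISTER = 0x00FF0010; // illustrative address

        static void turnOn() {
            int current = RawMemory.peek(LED_REGISTER);
            RawMemory.poke(LED_REGISTER, current | 0x1);
        }
    }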

Garbage collection is also an issue in the use of Java. Most real-time embedded systems cannot afford the overhead and nondeterminism of garbage collection. More importantly, most embedded systems do not require dynamic memory reclamation.

The JEM team concluded that it was possible to write useful Java software that does not require garbage collection. As a result, the JEM directly implements a static memory-allocation scheme useful in systems that do not require or cannot allow garbage collection, while providing the hooks necessary for implementing more sophisticated dynamic allocation and reclamation schemes.
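
A minimal sketch of what garbage-collection-free Java can look like, assuming the usual embedded discipline of allocating everything at initialization and reusing it thereafter (the class names are illustrative only, not taken from the JEM run-time system):

    // All objects are allocated once at start-up and reused, so the heap
    // never grows and no collector is needed in steady-state operation.
    final class SampleBuffer {
        private final int[] samples;
        private int next;

        SampleBuffer(int capacity) {
            samples = new int[capacity];              // allocated once, up front
        }

        void add(int sample) {
            samples[next] = sample;
            next = (next + 1) % samples.length;       // reuse in place, no new objects
        }
    }

    class SensorTask {
        // Allocate everything the task will ever need during initialization.
        private final SampleBuffer buffer = new SampleBuffer(256);

        void step(int reading) {
            buffer.add(reading);                      // steady-state code performs no allocation
        }
    }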

Beyond the issue of garbage collection is the broader issue of Java memory requirements. The entire Java run-time image is far larger than typical embedded systems can accommodate. In this area the JEM static linker proves useful, as it provides optional pruning of the run-time system to minimize memory requirements.

The JVM instruction set is inherently object oriented and, because the JEM implements these instructions directly, the JEM accrues a significant advantage in efficient implementation of object-oriented languages. The JEM team also invested significant effort in the development of class file data structures that minimize space requirements and maximize computational efficiency. Because most embedded applications will be statically linked, the data structures are specifically engineered to support dynamic class loading without penalizing statically linked systems.

The automatic state-saving and restoration features borrowed from the AAMP provide extremely fast context switching and interrupt handling. Support has also been added for periodic threads and deterministic scheduling, extending the usefulness of the JEM in real-time embedded applications.
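
The article does not detail the JEM's periodic-thread interface, but the programming model it describes can be approximated in plain Java along these lines (a sketch only, using sleep-based timing in place of the JEM's deterministic scheduler):

    // A task that runs its body once per fixed period, correcting for drift
    // by scheduling each release relative to the previous one.
    class PeriodicTask extends Thread {
        private final long periodMillis;

        PeriodicTask(long periodMillis) {
            this.periodMillis = periodMillis;
        }

        public void run() {
            long next = System.currentTimeMillis();
            while (true) {
                doWork();
                next += periodMillis;
                long delay = next - System.currentTimeMillis();
                if (delay > 0) {
                    try {
                        sleep(delay);          // wait out the rest of the period
                    } catch (InterruptedException e) {
                        return;                // treat interruption as shutdown
                    }
                }
            }
        }

        void doWork() {
            // application-specific periodic processing goes here
        }
    }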

The JEM team believes it has provided solutions to many of the challenges of using Java for embedded applications. Although many may still debate Java's future in embedded systems, the debates have a familiar ring to those involved in the initial introduction of high-level languages to embedded systems. It was once widely believed that the inefficiencies of compiler-generated code would limit the applicability of high-level languages to embedded systems. But compilers improved, as did processor performance, and the productivity and ease of use offered by high-level languages ultimately won over most programmers of embedded systems. The JEM team believes that the productivity gains and ease of use of the Java language will prove just as compelling as the attributes of the original high-level languages, and embedded-system developers will again embrace a new software paradigm.

JEM's first application will likely be in a product that is part of an avionics suite. Though Rockwell-Collins has no plans to market the JEM directly, it was developed with the knowledge that many markets could benefit from the technology that it brings together. JEM's low-power operation makes it suitable for battery-powered applications. Also, its small memory footprint and fast context switching let it support deeply embedded systems, and the nature of the JVM makes it a good candidate for systems with safety and security requirements.



To: Urlman who wrote (4203), 1/17/1998 3:11:00 PM
From: Urlman
 
MicroJava chip set emerges, bit by bit
Embedded Systems
Harlan McGhan, Technical Marketing Manager, Volume Products Group, Sun Microelectronics, Mountain View, Calif.

01/12/98
Electronic Engineering Times
Page 96
Copyright 1998 CMP Publications Inc.

What makes Java technology more than just another new and better object-oriented programming language is that Java is a write-once, run-anywhere programming platform, implemented via the Java Virtual Machine. The JVM is a complete, highly portable execution environment for Java programs that can be added on top of any combination of CPU and operating system. When implemented in full compliance with its specification, the JVM guarantees that the same Java program will generate, bit for bit, identical results in all cases, regardless of the underlying CPU-OS combination.

The JVM, then, is a software abstraction layer that both generalizes and conceals all specifics of the underlying CPU and OS. The benefit is that a single image of a program will run anywhere the virtual machine is available, eliminating the need to write and maintain different versions of the same program for different platforms.

But there is a price to be paid for these advantages. Specifically, programs no longer have any immediate contact with the underlying hardware that ultimately runs them, but must execute indirectly through a translation mechanism built into the JVM. The JVM first verifies the byte-code program, and then translates the universal safe byte-code instructions into machine-specific binary instructions understood by the underlying CPU.
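
For a sense of what the JVM verifies and translates, consider a trivial method and the stack-oriented byte code a Java compiler emits for it, shown in the comment roughly as a disassembler such as javap would display it. This portable form is what an interpreter or just-in-time compiler must convert for a conventional CPU, and what a Java processor executes directly.

    class Adder {
        static int add(int a, int b) {
            return a + b;
        }
        // Compiled byte code for add(int, int):
        //   iload_0   // push the first argument onto the operand stack
        //   iload_1   // push the second argument
        //   iadd      // pop both, push their sum
        //   ireturn   // return the value on top of the stack
    }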

In principle, there are only three possible responses to the burden of indirect execution imposed by Java technology. The most direct response is simply to accept this burden and manage it as necessary: although software translation mechanisms impose an overhead burden on code execution, the nature of this burden is not fixed. Where ample execution time is available (because, for example, programs are I/O bound), interpreters offer a simple and reliable way to run Java programs. Alternatively, where ample machine resources are available (for example, a large memory subsystem of 32 Mbytes or more), just-in-time compilers hold out the promise of execution speeds rivaling, and in some cases even exceeding, the performance of traditional statically compiled programs.
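
To make the translation overhead concrete, here is a toy dispatch loop of the kind that sits at the heart of any software byte-code interpreter (purely illustrative, with an invented three-opcode instruction set, not Sun's implementation). Every instruction pays for a fetch and a decode before any useful work happens; that per-instruction tax is what hardware execution removes.

    class TinyInterpreter {
        static final int PUSH = 0, ADD = 1, HALT = 2;   // illustrative opcodes

        static int run(int[] code) {
            int[] stack = new int[16];
            int sp = 0, pc = 0;
            while (true) {
                int op = code[pc++];                    // fetch
                switch (op) {                           // decode
                    case PUSH: stack[sp++] = code[pc++]; break;
                    case ADD:  stack[sp - 2] += stack[sp - 1]; sp--; break;
                    case HALT: return stack[sp - 1];
                    default:   throw new IllegalStateException("bad opcode " + op);
                }
            }
        }

        public static void main(String[] args) {
            // Compute 2 + 3 with the toy instruction set; prints 5.
            System.out.println(run(new int[] { PUSH, 2, PUSH, 3, ADD, HALT }));
        }
    }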

Limited resources

As software translation techniques approach their limits of effectiveness, with available execution time growing shorter and available machine resources becoming more limited, two alternative "reductive" execution strategies for Java programs can be considered.

The first alternative is direct compilation, eliminating the byte-code intermediate program format. Where the need to maximize execution efficiency is paramount, programs written in the Java programming language can be directly compiled to platform-specific machine binaries, just like any other high-level language. But since this solution sacrifices the two principal virtues of Java (platform independence and the strong Java security model), it is of limited value, meriting consideration only where neither of these issues carries any weight.

For many segments of the market, particularly in the embedded world, the best solution is the second alternative, direct execution, which eliminates the translation of byte-code programs. This is a much more palatable reduction in the execution chain for Java programs: it keeps the byte-code program format, preserving all the associated virtues of platform independence and program security, while eliminating the need to transform this format for execution.

But implementing this alternative requires a new CPU architecture, conceived in the post-Java era and explicitly designed to read and execute Java byte-code instructions directly in hardware.

This is precisely the goal of and motivation behind Sun's JavaChips architecture, first with the original picoJava core architecture and now with the new microJava chip set.

The initial implementations of this architecture accomplished what previously was not possible: they simultaneously maximize the efficiency (minimize the overhead) of executing a Java program while preserving all the benefits of platform independence and run-time security associated with the byte-code program format.

But as originally implemented, the JavaChip had one very important limitation, precisely because it is a new CPU architecture. Though JavaChips are not limited to executing Java code, Java code is what they do best. For best performance, non-Java code written in any other high-level language must be converted by a compiler into Java code recompiled specifically for the target JavaChip. This becomes more burdensome in direct proportion to the amount of legacy (non-Java) code the system must run.

The latest JavaChip implementation, the microJava family, was developed to deal with this. First is the microJava 701, designed to run non-Java code about as efficiently as comparable RISC CPU architectures. While it offers no tangible price/performance advantage when executing non-Java code, as a JavaChip it executes Java byte code faster than the alternative, which is to use a general-purpose chip optimized for other high-level languages.

The 701 is designed to be a CPU, facilitating creation of inexpensive "thin-client" machines of all types. This means it makes possible complete systems designed in minimal time and built for minimal cost. In this respect, microJava 701 closely resembles SME's current system-on-a-chip microSparc CPUs: hook up a memory subsystem and an I/O subsystem to the controllers integrated on-chip, and you're ready to do useful work.

Even as the 701 attempts to minimize system-level cost by building the core-logic chip set into the CPU itself, it tries to preserve enough design flexibility to fit readily into a range of possible applications falling within its price/performance budget.

To meet this goal, it integrates only those basic functions any type of system needs. And even there, it tries to preserve choice wherever possible.