To All: MS strategy vs. Novell's (and its partners') strategies for object-oriented distributed processing......
Here is an article on how it all stacks up. The question is who will succeed with their strategy?
======================================================================
Source: PC Magazine, March 25, 1997 v16 n6 p200(3).
Title: PC size, mainframe power. (trends in distributed objects and naming services) (Looking Forward: Technology on the Way) (Company Business and Marketing) (Cover Story)
Author: Larry Seltzer
Abstract: IBM, Netscape and Microsoft are basing their future Internet strategies on distributed object models. The Distributed Component Object Model (DCOM) is the distributed version of the COM model behind ActiveX and OLE, and it represents Microsoft's effort to define the future of distributed computing; it is built into Windows and will soon be available for Unix and the Mac. DCOM uses the same programming model as COM, but porting programs to it can take some work. The Object Management Group is a vendor alliance that includes Netscape and IBM and promotes an Object Management Architecture (OMA) comprising the CORBA object broker, the Internet Inter-ORB Protocol (IIOP) and related services. The Java Development Kit (JDK) implements a CORBA-compliant IDL facility and a native Java remote method invocation facility. IBM's CICS transaction server is the oldest and most widely used transaction-processing system; it runs primarily on mainframes but has been ported to Windows NT, OS/2 and AIX. Microsoft's Transaction Server for Windows NT Server 4.0 is based on ActiveX.
Full Text COPYRIGHT 1997 Ziff-Davis Publishing Company
Despite competing approaches, the latest trends in distributed objects and naming services are empowering PC networks.
IBM, Microsoft, and Netscape have staked much of their Internet futures on distributed object models. Distributed objects are not the objects programmers deal with day to day; objects in the Distributed Component Object Model (DCOM) or the Object Management Architecture (OMA) exist at the system level.
Other objects elsewhere in the system, and elsewhere on the network, can interact with them as objects: they can query their capabilities or inherit them for their own use. Objects in systems such as OMA and DCOM can also be written in almost any language.
DISTRIBUTED COMPONENT OBJECT MODEL
DCOM is the distributed version of COM, the object model behind OLE and ActiveX. This object model is implemented in Windows and is coming to the Macintosh and Unix.
Although COM benefits from strong development tools, writing a good program with COM and DCOM can still be difficult. A high-performance server object, for example, needs to be cognizant of arcane threading models. Microsoft's Transaction Server offers a solution here.
The programming model for DCOM is identical to that of COM, but it can take some work to port COM programs to DCOM. DCOM has an advantage over OMA and CORBA (Common Object Request Broker Architecture) in the area of interoperability, because it is a more rigidly defined standard. Because you know that a COM object will work on any implementation of COM, an open market already exists for COM objects. This is unlikely to happen with CORBA objects, because they tend to be ORB-specific. DCOM also works over connectionless protocols (such as UDP and IPX), which OMA does not use.
OBJECT MANAGEMENT ARCHITECTURE
The Object Management Group's OMA is often called CORBA, even though CORBA is only part of the larger OMA. The OMA also encompasses the Internet Inter-ORB Protocol (IIOP) and other related services.
Vendors implement CORBA on many platforms; IBM's SOMobjects is one example, and an implementation is coming from Netscape. Most of CORBA's architecture is like DCOM's, but CORBA adds a layer: the Interface Definition Language (IDL). The IDL defines the interface of an object separately from its implementation. Developers compile the IDL into interface code, which connects the object broker to the implementation code on both systems. The implementation code can be written in almost any language, and the data-type mappings between it and IDL are part of the OMA standard.
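
To make the IDL idea concrete, here is a rough, hypothetical sketch of the split between interface and implementation under a Java mapping. The Quote interface is made up, and the _QuoteImplBase skeleton class is an assumption about what a typical IDL compiler would generate, not any particular vendor's output.

    import org.omg.CORBA.ORB;

    // Hypothetical IDL, compiled by a vendor's IDL compiler into Java
    // stubs and skeletons (names are illustrative only):
    //
    //   interface Quote {
    //       double price_for(in string symbol);
    //   };

    // The implementation code a developer writes. _QuoteImplBase is the
    // skeleton class such a compiler would typically generate; it wires
    // the implementation to the object request broker (ORB).
    class QuoteImpl extends _QuoteImplBase {
        public double price_for(String symbol) {
            return 42.0;  // stand-in for a real price lookup
        }
    }

    class CorbaQuoteServer {
        public static void main(String[] args) throws Exception {
            ORB orb = ORB.init(args, null);   // vendor-specific settings may apply
            QuoteImpl servant = new QuoteImpl();
            orb.connect(servant);             // register the object with the ORB

            // Print a stringified object reference (IOR) that a client in
            // any CORBA-capable language can use to reach this object.
            System.out.println(orb.object_to_string(servant));

            // Keep the process alive so the ORB can service requests.
            Thread.currentThread().join();
        }
    }
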
Only with the release of IIOP have different CORBA object brokers had a common wire-level protocol. Even with IIOP, proprietary extensions are a necessity of CORBA development, and they make it difficult to port objects or have them interoperate with those on other platforms. Despite these problems, OMA is a rich architecture with many powerful vendors behind it.
DISTRIBUTED JAVA PROGRAMMING
Two services are available for writing distributed applications in Java. Version 1.1 of Java Development Kit (likely shipping by the time you read this) features both a CORBA-compliant IDL facility and a native Java remote method invocation (RMI) facility. Most Java programmers will access these facilities indirectly through JavaBeans, a component software specification.
Java IDL is an IDL compiler that lets you map Java objects to CORBA object brokers, so you can interoperate with objects written in other languages and possibly running on other systems. A developer must follow a protocol for naming CORBA objects inside Java and vice versa. This makes it easier to integrate a Java system with legacy code, which likely isn't written in Java, and performance should improve as well.
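
Continuing the hypothetical Quote example above, a Java client compiled against the IDL-generated stub would look roughly like this. QuoteHelper is the assumed generated helper class, and the stringified object reference is taken from the command line purely for illustration.

    import org.omg.CORBA.ORB;

    public class CorbaQuoteClient {
        public static void main(String[] args) {
            // Initialize the ORB; vendor-specific properties may be needed.
            ORB orb = ORB.init(args, null);

            // A stringified object reference (IOR) obtained out of band,
            // for example from the server's output or a shared file.
            String ior = args[0];

            // QuoteHelper is generated by the IDL compiler; narrow() converts
            // the generic CORBA object reference to the typed Java interface.
            Quote quote = QuoteHelper.narrow(orb.string_to_object(ior));

            // The remote call looks like an ordinary Java method call.
            System.out.println("IBM: " + quote.price_for("IBM"));
        }
    }
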
RMI works only between Java programs running on different systems. RMI usage isn't seamless: you must declare remote interfaces explicitly and follow other conventions in your implementation, as the sketch below shows. By default, RMI works directly on Java's own sockets interface. Although this lets you implement your system on any Java system, it keeps communication outside of any other security and distribution facilities you have. But RMI does let you take advantage of Java's built-in features, including the security sandbox, multithreading, and garbage collection. Moving RMI programs to any system with a Java virtual machine (VM) should be easy.
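
A minimal RMI sketch using the standard java.rmi classes; the QuoteService names are made up, and stub generation (rmic) and security-policy details are omitted.

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.server.UnicastRemoteObject;

    // The remote interface must be declared explicitly: it extends Remote,
    // and every method must be able to throw RemoteException.
    interface QuoteService extends Remote {
        double priceFor(String symbol) throws RemoteException;
    }

    // The server exports itself by extending UnicastRemoteObject and
    // registers under a URL-style name in the RMI registry.
    class RmiQuoteServer extends UnicastRemoteObject implements QuoteService {
        RmiQuoteServer() throws RemoteException { super(); }

        public double priceFor(String symbol) throws RemoteException {
            return 42.0;  // stand-in for a real lookup
        }

        public static void main(String[] args) throws Exception {
            LocateRegistry.createRegistry(1099);  // start a registry in-process
            Naming.rebind("//localhost/QuoteService", new RmiQuoteServer());
        }
    }

    // A client finds the object by name and calls it like a local object.
    class RmiQuoteClient {
        public static void main(String[] args) throws Exception {
            QuoteService q = (QuoteService) Naming.lookup("//localhost/QuoteService");
            System.out.println("IBM: " + q.priceFor("IBM"));
        }
    }
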
TRANSACTION PROCESSES
A transaction is a series of operations that a programmer designates as a logical group: if any of them fails, you don't want any of them to take effect. A credit card purchase is a good example: the debit from your account and the credit given to the vendor form one transaction. If your purchase isn't approved, the bank won't credit the retailer's account.
Server and host processes comply with standards for letting such systems work correctly, including the X/Open XA architecture and part of IBM's LU 6.2 specification. A programmer begins a transaction, designates the operations, and (if no servers complain) performs a commit. If a problem arises with any operation, the program performs a rollback. This is called a two-phase commit, and a transaction-processing (TP) system manages it. In our credit card example, if either your debit or the store's credit fails, the TP system should roll back the other.
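
As a minimal sketch of the idea, and not any real TP monitor's API, the hypothetical coordinator below asks every participant to prepare, then commits only if all of them voted yes.

    // Hypothetical sketch of the two-phase commit idea; these types do not
    // correspond to any real TP monitor's API.
    interface Participant {
        boolean prepare();   // phase one: vote yes only if the work can later be committed
        void commit();       // phase two: make the work permanent
        void rollback();     // undo (or forget) the work
    }

    class Coordinator {
        // Completes a transaction across all participants, e.g. the
        // cardholder's bank and the retailer's bank in the credit card example.
        void complete(Participant[] participants) {
            boolean allPrepared = true;
            for (int i = 0; i < participants.length && allPrepared; i++) {
                allPrepared = participants[i].prepare();
            }
            // Phase two: everyone commits only if everyone voted yes; otherwise
            // everyone rolls back (rollback is assumed to be safe even for
            // participants that never prepared).
            for (int i = 0; i < participants.length; i++) {
                if (allPrepared) {
                    participants[i].commit();
                } else {
                    participants[i].rollback();
                }
            }
        }
    }
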
Distributed operations are fragile. Although a well-designed distributed operation rarely involves more than three systems, the number of potential points of failure is much greater than on a single system. Consider, too, that distributed objects are both client and server processes. Such factors make it complicated to design a high-performance server object.
For many years now, mission-critical enterprise mainframe systems have employed expensive proprietary TP monitors to govern such operations. With high-performance server operating systems, new standards in network communications, better development tools, and the ability to distribute object models on which to build TP systems, mainstream PCs can now handle what was once practical only on mainframes.
IBM TRANSACTION
The mother of all online transaction processing (OLTP) systems is IBM's Customer Information Control System (CICS). Although CICS has been run mainly on mainframes, IBM has ported it to Windows NT, OS/2, and AIX as part of the company's Transaction Server. CICS was written back in the late 1960s to run in environments so slow and so resource-constrained that they seem impossible these days. (Imagine mainframes with 128K of RAM.) CICS earned a reputation on mainframes for the highest reliability and scalability.
On the newer platforms, such as Windows NT, CICS is built on top of the Encina transaction monitor. The two, along with client software, form IBM's Transaction Server. Encina removes details of the file and communications subsystems from CICS, making both CICS and applications easier to move between platforms. It also emulates some mainframe services, such as the VSAM file system, and connects to databases such as MS SQL Server and IBM's Database Server. Encina is built on top of OSF's DCE Services, including RPC and DCE security, which it can use in addition to conventional CICS security. Lastly, Encina can tie any XA-compliant process into the transaction system.
Application programmers write to the standard CICS APIs, usually in COBOL. IBM's Transaction Server does not provide a full implementation of those APIs, but the missing parts are esoteric.
There are several ways to program CICS under Transaction Server. ECI (External Call Interface) is a standard call-level interface for high-level languages. And although EPI (External Presentation Interface) is geared toward terminal communications, applications can use it for "screen-scraping" the terminal I/O and presenting it in other media. For example, the CICS Internet Gateway uses EPI to turn 3270 applications into HTML.
CICS Gateway for Java exposes Java classes for programming CICS. IBM has high hopes for it and CICS Internet Gateway in terms of exposing enterprise data to network computers and other Internet-connected clients. IBM also has CICS Gateway for Lotus Notes, which exposes CICS data as a Notes database and lets programmers manipulate it with Notes macros and LotusScript.
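
The gateway products above ship real Java classes for this; the sketch below deliberately uses hypothetical wrapper names rather than IBM's actual API, just to show the shape of an ECI-style call: connect to a gateway, fill a COMMAREA, invoke a named CICS program, and read back the result.

    // Hypothetical sketch only: these classes stand in for whatever ECI
    // wrapper a CICS gateway product provides; they are not IBM's API.
    class CicsEciSketch {
        public static void main(String[] args) throws Exception {
            // Connect to a gateway machine that can reach the CICS region.
            EciGateway gateway = new EciGateway("gateway.example.com", 2006);

            // The COMMAREA is a fixed-length byte buffer shared with the
            // server program; its layout must match the program's copybook.
            byte[] commarea = new byte[80];
            System.arraycopy("ACCT0001".getBytes(), 0, commarea, 0, 8);

            // Invoke a CICS program by name; the gateway flows the call to
            // the region and returns the updated COMMAREA.
            gateway.callProgram("ACCTINQ", commarea);

            System.out.println(new String(commarea).trim());
            gateway.close();
        }
    }

    // Placeholder declarations so the sketch is self-contained.
    class EciGateway {
        EciGateway(String host, int port) { /* open a connection (omitted) */ }
        void callProgram(String program, byte[] commarea) { /* flow the request (omitted) */ }
        void close() { /* close the connection (omitted) */ }
    }
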
MICROSOFT TRANSACTION SERVER
Microsoft Transaction Server runs on Windows NT Server 4.0 and uses ActiveX as its client programming model. Very little is needed to create a transaction; the process can be accomplished with drag-and-drop through the user interface rather than through an API. A few COM interfaces are provided for API-level programming. Transaction Server talks to XA, LU 6.2, and OLE TX transaction monitors (OLE TX is a Microsoft spec).
Transaction Server makes server-based programming easier. One of the hardest parts about designing a high-performance server application is designing its thread and process model correctly. If you do this badly, the application will not run well. Threading models in a COM-based process are especially complicated.
With Transaction Server, you just design simple operations as ActiveX components in single-user DLLs, and the server handles all the messy details of thread and resource management. Because Transaction Server components are ActiveX components, which define specific interfaces, Transaction Server can call the correct code to commit the work or roll it back.
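
Transaction Server components are typically written in languages such as Visual Basic or C++; as a language-neutral illustration of the pattern, here is a Java-flavored sketch with hypothetical names. The calls it mirrors are the real MTS context methods SetComplete and SetAbort.

    // Hypothetical, Java-flavored sketch of an MTS-style component; the
    // ObjectContext type here is illustrative, not Microsoft's API.
    class DebitComponent {
        // The component does one simple piece of work and then tells the
        // runtime whether its work can be committed.
        void debit(ObjectContext context, String account, double amount) {
            try {
                // ... update the account through an enlisted resource ...
                context.setComplete();   // mirrors MTS SetComplete
            } catch (Exception e) {
                context.setAbort();      // mirrors MTS SetAbort
            }
        }
    }

    // Placeholder so the sketch compiles on its own.
    interface ObjectContext {
        void setComplete();
        void setAbort();
    }
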
DIRECTORY SERVICES
So you've got objects and you've got a network. How do your programs find the objects with which they need to communicate? By using a directory service--the last step necessary in building a distributed platform.
Current directories consist mainly of a fairly simple hierarchical database. Network managers use this database to store information about users, network resources, arbitrary objects, and security. In the long term, the directory service will be the glue that binds the objects in the directory to their different roles. Here's a look at several important directory services now on the market. They've come a long way, but they also have a long way to go.
Domain Name Service (DNS) is the naming service for the Internet. It isn't really a directory service, but it performs many of the functions of one. DNS is slow and highly static: you can look up names, and the infrastructure gradually replicates them throughout the network. The directory basically supports only the mapping between names (www.pcmag.com) and IP addresses, which programs then use to perform actual operations. No structure exists to support security or objects more complex than addresses, which shows how limited DNS is as a directory standard.
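
In practice a DNS lookup really is this thin. A minimal Java example (any resolvable host name works; error handling is omitted):

    import java.net.InetAddress;

    public class DnsLookup {
        public static void main(String[] args) throws Exception {
            // Ask the resolver (and ultimately DNS) for the address bound
            // to a name; nothing richer than name-to-address comes back.
            InetAddress address = InetAddress.getByName("www.pcmag.com");
            System.out.println(address.getHostName() + " -> "
                    + address.getHostAddress());
        }
    }
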
Much enthusiasm has been expressed recently over the Lightweight Directory Access Protocol (LDAP) as an emerging standard. Indeed, many major vendors (Netscape and Microsoft included) have announced or shipped support for it. But LDAP isn't a rich directory service; its principal strength at present is that it's a least-common-denominator solution.
Names in an LDAP directory have various attributes, including organizational units and a country code. The directory structure isn't simple, but its complexity tends to be hidden by the application. LDAP implementations rely on proprietary extensions, which weaken its applicability as a standard. The Internet Engineering Task Force (IETF) is working on future generations of LDAP to make it stronger.
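
For illustration, here is what a simple LDAP read looks like through Java's JNDI interface, which post-dates this article; the server URL and the directory entry are made up.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class LdapLookup {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");  // made-up server

            DirContext ctx = new InitialDirContext(env);

            // Distinguished names carry the attributes the article mentions:
            // common name (cn), organizational unit (ou), organization (o),
            // and country (c). This entry is fictitious.
            Attributes attrs = ctx.getAttributes("cn=John Smith,ou=People,o=Company,c=US");
            System.out.println(attrs.get("mail"));

            ctx.close();
        }
    }
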
NetWare Directory Services (NDS), which debuted in NetWare 4.0, is perhaps the most widely used directory service on private networks. Novell has announced plans to port the core features of NDS to other platforms, including many Unix versions and Windows NT.
One of the strengths of NDS is that its directories can be "federated" with other NDS directories while preserving the security of each site. This means that users on different networks can have selected access to each other's resources. Security is still managed by administrators of the individual organizations.
Novell is working on enhancing NDS. At the same time, it's also working with the IETF on future versions of LDAP.
Microsoft has recently announced a next-generation directory service as well as a new COM-based system for generic directory-services programming. Windows NT Server Directory Services supports a variety of naming schemes, including LDAP, X.500, URL, UNC (\\myserver\sharename\apps\excel\excel.exe), and RFC 822 (better known as Internet e-mail addresses, such as johns@company.com). Microsoft's Directory Services also supports the storage of public-key certificates and the explicit storage of private keys based on the MIT Kerberos authentication protocol.
Object Management Architecture (OMA) defines two promising services for retrieving names from a variety of providers using standard Common Object Request Broker Architecture (CORBA) programming. But they are still works in progress and are not yet widely used.
The Naming Service associates names with object references, which you can then use to perform operations. The Trading Service is a catalog in which servers can advertise names and the properties of those names; it supports lookups for objects based on those properties. Like NDS, the Naming Service supports the federation of name spaces within an enterprise.
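
Here is a rough sketch of resolving a name through the CORBA Naming Service from Java, reusing the hypothetical Quote interface from earlier; QuoteHelper is again assumed to be IDL-generated, and the ORB must be configured to find a running name server.

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class NamingLookupClient {
        public static void main(String[] args) throws Exception {
            ORB orb = ORB.init(args, null);

            // Ask the ORB for the root context of the Naming Service.
            NamingContext root = NamingContextHelper.narrow(
                    orb.resolve_initial_references("NameService"));

            // Names are paths of (id, kind) components; this one is illustrative.
            NameComponent[] path = { new NameComponent("Quotes", "") };

            // resolve() returns a generic object reference, which the
            // IDL-generated helper narrows to the typed interface.
            Quote quote = QuoteHelper.narrow(root.resolve(path));
            System.out.println(quote.price_for("IBM"));
        }
    }
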
======================================================================
This time I think Novell and their partners are on the right side of the technology battle, using an "open system" design and platform. Apple failed because it used proprietary standards. But remember, the whole "open" group is fighting against Microsoft. Time will tell who succeeds, but if the above article summarizes the situation (and the technology options) correctly, then there is a better than 50:50 chance that the "open" design will prevail.
EKS |