"3COM'S BENHAMOU LENDS INTELLIGENCE TO NETWORKS"
"As chairman of the board, president and chief executive officer of 3Com Corp., Eric Benhamou heads up the second-largest data network infrastructure company in the world, one with the unique history of having invented perhaps the most used LAN technology in the world: Etherent.
Benhamou left Bridge Communications, an early networking company he co-founded, for 3Com in 1987. At the latter firm, he held senior positions in engineering, operations and management until 1990, when he took the reins of the company.
Under his leadership, 3Com entered the Fortune 500 in 1994, with 1993 calendar sales totaling $617 million (U.S.). By fiscal 1997, 3Com had sales of $5.6 billion (U.S.).
In 1997, U.S. President Bill Clinton named Benhamou to the President's Information Technology Advisory Committee, which advises Clinton on the research and development priorities of federal programs meant to maintain United States leadership in advanced computing and communications technologies and their applications. Benhamou also sits on the boards of directors of Netscape Communications, Cypress Semiconductor and Legato.
Communications & Networking's Lawrence Cummer had a chance to talk to Benhamou about his views on the future of networking and his own company's future.
CN: What do you see as 3Com's direction for the rest of the year and beyond?
EB: Let me begin by painting the general vision that drives the company. Obviously we've been in networking ever since our inception, and we believe that our industry is headed towards a state of pervasive networking, where networking appears across the four major markets. We believe that networking is on its way to becoming much simpler, much more available, lower cost, multimedia and completely deregulated. All of this creates a state of pervasive penetration of the network.
On our way towards this sort of nirvana stage there is one major milestone that we will meet as an industry: the creation of converged networks. This is the next big milestone and the single biggest driver of growth for the entire industry, and it also happens to be 3Com's strategy today. In the early years of our industry we focused on growing along two dimensions: speed, or performance, and distance, making networks go faster and faster and reach further and further.
In the last couple of years, we've focused on adding intelligence to networks and making them capable of handling different connections, different traffic, different sources of traffic, different applications and different users in appropriately different ways, as opposed to behaving like a dumb set of pipes. And as we've added enough intelligence to these networks we find ourselves capable now of building infrastructures that can move data, voice and video on a common infrastructure, and this is what we call converged networks. We think that we are headed down this path as an industry.
The driving forces behind converged networks are, first of all, very strong economic benefits for both consumers and service providers, and the reason behind this is fairly obvious. If you have one network infrastructure to build instead of three, you save money in capital outlay; if you have one network infrastructure to manage instead of three, you save money in operational costs and you also just simplify your life in general. So economic benefits are the primary factor. The second is that there are new kinds of applications and services you can build that you cannot build if you have parallel, separate networks. You could envision, for example, that the future look and feel of your desktop will not only have data-oriented buttons and options but also voice- and video-oriented buttons and options.
CN: You mentioned that over the last few years there's been an increasing trend towards adding intelligence to networks, rather than just the bigger, better, faster, further sort of approach. Is it a change in network traffic that's driving that?
EB: Yes, it is a change in the nature of the applications that are run on the networks. If you think about the earlier generation of applications, they generated very predictable amounts of traffic going in very predictable directions. It was all going to and from the mainframe, and since you had a network between the users and the mainframe, you could pretty much know how the traffic was going to flow.
When you moved to client/server applications it became a little more chaotic, but it was still fairly predictable because you knew where the servers were, and therefore there was a lot of traffic concentration in and out of server farms.
When you go to the next phase - which is when you employ intranet applications, Web-based and Java-based applications, across that same network infrastructure - then all hell breaks loose. You basically lose all locality of traffic. Anyone can publish information. Anyone can subscribe to information. Some of this information is plain data, some of it is images, some of it is video, some of it is audio. So traffic basically goes all over the place, wherever it wants and however it wants, and therefore the network has to be extremely flexible.
So there are basically two broad classes of approach. You can either throw a lot of bandwidth at the problem, basically creating huge pipes that can accommodate anything. Or you can make the network more intelligent - recognizing that bandwidth is not yet, and will probably never really be, free, regardless of what our hopes are - and have the network adapt intelligently, providing whatever services are appropriate. We have a view of how to do this that is perhaps different from that of other companies in the industry.
CN: I know a lot of vendors view the core of the network as the place to put intelligence, and I know that traditionally 3Com has focused its attention on the edge devices and network interface cards...
EB: Yeah, that's completely accurate. Some people caricature our position and say that we want to put all the intelligence at the edge, and that's not true; we want to distribute it. Putting it all in the core is tantamount to replicating a mainframe-like infrastructure, and we know that this has a lot of limitations. It creates immense amounts of complexity and it is not very flexible at all. We believe that the edge is the best place to know which policy is appropriate for a given traffic flow, because this is where you're in contact with the user and the application on one side and with the network infrastructure on the other. This is where you can figure out who the user is, what the user wants to do and what the appropriate policy is, and this is how you can turn around and tell the network what's coming down the pipe.
CN: I suppose that has a positive impact on flexibility, as well as being distributed?
EB: Yes, it has a lot more flexibility, because it also takes some of the burden off the core. The core no longer has to guess what the user wants to do or is about to do, since it receives an explicit signal from the edge. It is very similar to subscribing to a cellular phone service. You have to pick a plan, and you pick very different plans depending on whether you want to use your cellular phone as your main phone and carry it around all the time, or have long-distance conversations for half an hour or an hour a day, or use your cellular phone as basically an emergency phone that you keep in your glove compartment. So you're going to pick very different plans, and the cellular phone company uses your self-selection of a policy to build its network's intelligence.
This is a very natural concept for us when we think about cellular phones, or other phones for that matter, and yet we don't have an equivalent of it in data networks.
What 3Com is really talking about is how to create the equivalent of that, so that the user, or the administrator on behalf of the user, explicitly defines a traffic policy that the network infrastructure can adapt to, such that it doesn't have to throw bandwidth at every problem it faces.
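As a rough illustration of the edge-policy idea Benhamou describes - a sketch, not 3Com's actual implementation - the Python fragment below shows a hypothetical edge device that maps each user and application to an administrator-defined traffic class and hands that marking to the core, so the core never has to guess. All names, classes and values are invented for illustration, loosely in the spirit of DiffServ-style marking.

    # Hypothetical sketch of "policy at the edge": the administrator defines
    # a traffic policy up front, the edge classifies each flow against it,
    # and the resulting class/marking is the explicit signal the core
    # receives instead of having to inspect or guess.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Flow:
        user: str
        application: str   # e.g. "voice", "video", "web", "bulk-data"

    # Administrator-defined policy: application type -> (traffic class, marking)
    POLICY = {
        "voice": ("expedited", 46),    # latency-sensitive
        "video": ("assured", 34),      # bandwidth-sensitive
        "web": ("best-effort", 0),
        "bulk-data": ("background", 8),
    }

    def classify_at_edge(flow: Flow) -> Tuple[str, int]:
        """The edge sees the user and the application, so it chooses the
        policy; the core simply honours the marking it is handed."""
        return POLICY.get(flow.application, ("best-effort", 0))

    if __name__ == "__main__":
        for flow in (Flow("alice", "voice"), Flow("bob", "bulk-data")):
            traffic_class, marking = classify_at_edge(flow)
            print(f"{flow.user}/{flow.application}: "
                  f"class={traffic_class}, marking={marking}")

The only point of the sketch is the division of labour: the policy lookup lives at the edge, where the user and application are visible, while the core simply consumes the resulting marking.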