Technology Stocks : The New QLogic (ANCR)


To: Roy Sardina who wrote (19648), 12/7/1998 1:21:00 PM
From: KJ. Moy
 
Roy,

<<<All this discussion of latency is testosterone driven. The fastest HBA in the business is 40+ microseconds, so a sub-2 microsecond switching latency is IMMATERIAL to system performance (except in a benchmarking lab). So until there is a faster HBA, something other than latency needs to be in the FAB pitch.>>>

For a network with a 64-port fabric requirement and 32 simultaneous full-duplex transmissions, switch latency will come into play. Your sub-2 microseconds will turn into 100+ microseconds within the fabric, IMO. Your scenario only holds for a small or point-to-point network. I can see mainframe-based networks and the upcoming NT clustering configurations exceeding the 64-port fabric requirement. Just my opinion.
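KJ's 100+ microsecond figure can be sanity-checked with a rough back-of-envelope. The sketch below uses entirely illustrative hop counts and queuing multipliers (nothing here comes from the thread except the sub-2 microsecond per-switch figure); it only shows how per-hop latency compounds once a frame must cross multiple switch stages under contention:

```python
# Back-of-envelope fabric latency: per-hop switch latency accumulates
# across stages, and queuing under contention inflates it further.
# All numbers below are illustrative assumptions, not measured figures.

SWITCH_LATENCY_US = 2.0  # the "sub 2 microsecond" per-switch figure

def fabric_latency(hops, queue_factor):
    """Cumulative latency through `hops` switch stages, with an
    assumed multiplier for queuing delay under heavy contention."""
    return hops * SWITCH_LATENCY_US * queue_factor

# Point-to-point through one switch, no contention:
print(fabric_latency(hops=1, queue_factor=1))    # 2.0 us

# A 64-port fabric built from small switches needs several hops;
# with 32 simultaneous full-duplex streams, queuing could plausibly
# inflate each hop many times over, landing in KJ's 100+ us range:
print(fabric_latency(hops=3, queue_factor=20))   # 120.0 us
```

The hop count and queue factor are the whole argument: in a point-to-point setup both are 1 and switch latency vanishes into the HBA's 40 microseconds, which is Roy's case.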


KJ



To: Roy Sardina who wrote (19648), 12/8/1998 8:11:00 PM
From: Craig Stevenson
 
Roy,

Wow! It looks like I've been missing out on a good battle. <g>

<<the fastest HBA in the business is 40+ microseconds, so a sub 2 microsecond switching latency is IMMATERIAL to system performance>>

Isn't there a basic flaw in your logic, in that you are using a point-to-point example? I agree with you that switching latency is irrelevant in this case, but then so is a switch. <g>

I think that KJ and I are seeing this one along the same lines. In current systems, which are ALL small, switching latency is indeed immaterial, since latency effects in a small (8-16 port) switch don't impact system performance. (Using your numbers, even under ideal circumstances, it would take 20 HBAs transferring frames quickly enough for latency effects to surface.)
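One way to reconstruct that "20 HBAs" figure is simple division: if the fastest HBA needs 40+ microseconds per operation while the switch forwards a frame every ~2 microseconds, roughly 40 / 2 = 20 HBAs would have to fire simultaneously before frames arrive on the switch's latency timescale. A minimal sketch of that arithmetic (this interpretation is my own reading of the numbers, not spelled out in the post):

```python
# Roy's numbers: fastest HBA ~40 us per operation; switch adds ~2 us
# per frame. Only when enough HBAs fire at once do frames arrive
# faster than the switch's latency interval, letting switch latency
# become visible at all. A small 8-16 port switch can't host that many.

HBA_LATENCY_US = 40.0
SWITCH_LATENCY_US = 2.0

hbas_to_saturate = HBA_LATENCY_US / SWITCH_LATENCY_US
print(hbas_to_saturate)  # 20.0
```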

Where KJ and I seem to disagree with you is over what happens when systems scale to more than one switch. As far as LATENCY is concerned, it seems to me that you would have to have a VERY busy system to see tangible effects, but they should be there, and should be measurable. The bigger the system, the greater the effects of latency should become, although this depends on system utilization. We are probably splitting hairs here. I did some calculations a while back using the different latency figures for Ancor's MKII and Brocade's SilkWorm. Although there was a difference, it wasn't big enough for the end user to notice.

Throughput is an entirely different matter, though. I still feel that as systems scale up, there will be more emphasis placed on how to connect Fibre Channel switches together to increase the port count and/or increase throughput through the fabric. Once again, this is not a factor in today's market for 8- and 16-port switches, but it may become an issue down the road. This is where I think Ancor's multi-staging concept is superior to cascading.
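For illustration, here is a hedged sketch contrasting the two approaches. The topologies are my own assumptions (a daisy-chain for "cascading", a two-level leaf/spine-style arrangement for "multi-staging"), not Ancor's or Brocade's actual designs; the point is only that worst-case hop count, and hence cumulative latency, grows with port count in a cascade but stays roughly flat in a multi-stage fabric:

```python
import math

# Illustrative comparison (assumed topologies, not vendor specs):
# - Cascading: 16-port switches daisy-chained; each inter-switch link
#   burns 2 ports, and the worst-case path crosses every switch.
# - Multi-stage: a two-level fabric where any worst-case path is
#   leaf -> spine -> leaf, i.e. 3 hops regardless of fabric size.

PORTS = 16
SWITCH_LATENCY_US = 2.0

def cascade(end_ports):
    # End switches give 15 usable ports each; middle switches give 14.
    switches = math.ceil((end_ports - 2 * (PORTS - 1)) / (PORTS - 2)) + 2
    worst_hops = switches
    return switches, worst_hops * SWITCH_LATENCY_US

def multistage(end_ports):
    leaves = math.ceil(end_ports / (PORTS // 2))  # half ports up, half down
    worst_hops = 3                                # leaf -> spine -> leaf
    return leaves, worst_hops * SWITCH_LATENCY_US

print(cascade(64))     # worst-case latency grows with port count
print(multistage(64))  # worst-case latency stays flat as it scales
```

With 64 end ports the cascade's worst-case path already crosses five switches, while the multi-stage path is still three hops; the gap only widens as the fabric grows, which is the throughput-and-scaling argument above in miniature.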

I find it somehow disconcerting to disagree with you on this issue. You clearly have more "real-world" experience in Fibre Channel than I do. It is also difficult to argue with Brocade's market success. If we really are wrong, I would appreciate your efforts to educate us.

Craig

P.S. "Testosterone driven" discussions about latency haven't helped my investment portfolio very much. <g>