Frank,
This is a long post so I will spend some time on some of it :-)
Thanks for addressing my specific question. I now understand the context of your original message a lot better. To which my first response is a resounding: bit.ly
You're welcome - but the link did not work.
Nevertheless, I see your point more clearly now.
The point implied in my question, however, remains. By definition, equalizing cable lengths at "any" single architectural segment along the chain translates into artificially introducing a certain amount of additional latency into the paths of all players, in order to reduce their speeds to that of the player whose cable length is, by the happenstance of being assigned the most distant switch port, the longest.
Begin musing: I frankly don't see how this is anything more than a half-measure unless similar measures are taken at every hop along the route of every client, from their most distant participating servers right up to the exchange in question - especially since, if I am not mistaken, there are now multiple colos housing those servers, whose distances from the exchanges obviously differ far more significantly than the lengths of cables between racks. Otherwise I don't see this as more than a token gesture, except that it is under the singular control of the exchange, even if it does mean introducing a few picoseconds more delay for most participants. It would seem that a more elaborate handicapping system, as I noted above, would have to be created and enforced, one more far-reaching in a geographic sense than the exchange switch port alone. End musing.
Your musings are interesting and on point, but they do not address the peculiar world of algorithmic trading. If permitted, the algo-traders would put their servers inside the exchange itself.
Their legal wrangling with each other for 'fairness' has nothing to do with technology :-)
So what do the algo-traders do? They put their servers in the colo at the exchange, alongside the feed handlers. These are the same spots used by Bloomberg, Thomson, and others for the eyeball traders.
Bloomberg, Thomson, and the like massage the data, send it to their data centers to merge with analysis, and distribute it worldwide. They have a subset of clients in local exchange areas who receive a 'low latency' feed or colocate with them.
The big guns do it themselves at the colo. They use fast servers with fast disk (or solid-state 'disk'), lots of memory, optimized NICs, and ultra-low-latency switches. What many folks miss are the intricacies of IP Multicast and the smoothing effects of some chipsets. Most testing cannot show this, because microbursts are transient and difficult for test equipment to capture and replay (when that is even allowed in these areas).
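To make the microburst point concrete, here is a rough Python sketch - not tied to any particular capture gear, and the packet sizes and timings below are made up - showing how the same one-second average can hide a peak two hundred times higher once you look at millisecond buckets:

# Sketch: why per-second averages hide microbursts.
# Input is assumed to be (timestamp_seconds, bytes) pairs from whatever capture you have.
from collections import defaultdict

def rates(packets, bucket):
    """Return bits-per-second figures computed over 'bucket'-second windows."""
    buckets = defaultdict(int)
    for ts, size in packets:
        buckets[int(ts / bucket)] += size * 8
    return [bits / bucket for bits in buckets.values()]

# Hypothetical feed: 100 Mbit spread evenly over a second vs. the same
# 100 Mbit crammed into the first 5 ms (a microburst).
smooth = [(i / 1000.0, 12500) for i in range(1000)]      # 12.5 kB every millisecond
bursty = [(i / 1000.0, 2500000) for i in range(5)]       # 2.5 MB in each of 5 ms

for name, pkts in (("smooth", smooth), ("bursty", bursty)):
    print(name,
          "1s avg: %.0f Mbps" % (max(rates(pkts, 1.0)) / 1e6),
          "1ms peak: %.0f Mbps" % (max(rates(pkts, 0.001)) / 1e6))

Both feeds average 100 Mbps over the second; only the millisecond view shows the 20 Gbps peak that the buffers and chipsets actually have to absorb.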
So being at the colo means all metro distances are equalized and everyone is getting the same data at the same time.
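For a rough sense of scale (back-of-the-envelope only, assuming a signal speed of about two-thirds of c in fiber), the cable-length deltas inside a building are tiny next to metro distances:

# Back-of-the-envelope propagation delay, assuming signal speed of ~2/3 c in fiber.
SPEED_OF_LIGHT_KM_S = 299792.458               # km per second in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3

def one_way_delay_us(distance_km):
    return distance_km / FIBER_SPEED_KM_S * 1e6   # microseconds

# A 10 m difference in patch-cable length vs. a 50 km metro run:
print("10 m cable delta : %.3f us" % one_way_delay_us(0.010))
print("50 km metro run  : %.1f us" % one_way_delay_us(50))

Equalizing a few meters of patch cable buys you tens of nanoseconds of fairness; being in the building at all is what removes the hundreds of microseconds.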
As an arms dealer (I work for an equipment vendor), I advise and work with these folks, but I am not a Wall Street insider. I am aware of primary and secondary colos for Nasdaq and NYSE. I am not aware of anyone having two primary colos - that would require some fairness doctrine for when the closest colo filled up.
One of my partners is pretty much the top multicast deployment engineer around. What my team is doing in the latency battle is pulling a lambda into our lab from the colo, where we have purchased our own meet-me location. This allows our clients to send us a copy of their raw or processed feeds, which we can use in a test-bed where we could even host their applications. We can then either prove our solutions are better, or work with our hardware designers to address any issues (it could not be our network designs :-).
The issues are interesting - do you buffer (implying smoothing, which implies added latency) or do you tail-drop? We certainly see at least one competitor touting ultra-low latency that actually achieves it by dropping the peak burst traffic.
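A toy model of that trade-off (the burst size and queue depths are made up, and a real switch is far more complicated):

# Toy single-queue model of the buffer-vs-tail-drop trade-off.
# Worst case: the whole burst lands before the link drains a single packet.
def drain(burst_size, queue_limit):
    """Return (packets dropped, deepest queue seen) for one burst hitting one queue."""
    queue = 0
    drops = 0
    deepest = 0
    for _ in range(burst_size):
        if queue < queue_limit:
            queue += 1
            deepest = max(deepest, queue)   # the last packet queues behind all the rest
        else:
            drops += 1                      # tail-drop once the buffer is full
    return drops, deepest

# Deep buffer: nothing is lost, but the tail of the burst queues 100 packets deep.
print("deep buffer   :", drain(burst_size=100, queue_limit=1000))   # (0, 100)
# Shallow buffer: queuing delay stays low, but 90% of the burst is dropped.
print("shallow buffer:", drain(burst_size=100, queue_limit=10))     # (90, 10)

The deep buffer delivers everything but delays the tail of the burst; the shallow 'ultra-low latency' box keeps delay down by throwing the peak away.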
The rest of your post could be summarized as the general concept of private peering. Low-latency WANs are interesting, but not germane to the trading discussion. Why? Because no one would address the CME from NY - you always put your algorithmic systems as close to the exchange as possible.
The cloud concept that is so prevalent today is (in my humble opinion) just a rehash of the ASP concept that was emerging at the end of the dot.com bubble. Think Digital Island.
In this concept, everyone attaches to the Internet and voila - you have access everywhere! Cloud compute, storage, etc. - so the idea of offloading is easy.
But early adopters are finding issues, because you can't get an SLA on the 'Internet,' whatever that is. So now we need to act like our own ISP to move traffic more quickly and with greater reliability - which means we need to peer directly (privately) with our vendors.
As a case in point, I work as an adviser to a local university gigapop. We set up peering with Internet2, NLR, two Tier 1 ISPs, and the local broadband providers. This allowed 75%-plus of the traffic to bypass indirect peering, where they have no control.
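That 75% figure is the sort of thing you can estimate from flow data before signing any peering agreement. A rough sketch, assuming you have flow destination addresses and the prefixes each prospective peer would announce (the prefixes and flows below are purely illustrative):

# Rough estimate of how much traffic direct peering would carry.
# The prefixes and flow records below are illustrative, not real.
import ipaddress

# Prefixes you would learn from direct peers (Internet2, NLR, Tier 1s, broadband, ...)
peer_prefixes = [ipaddress.ip_network(p) for p in
                 ("192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24")]

# (destination address, bytes) pairs from flow export or sampling
flows = [("192.0.2.17", 5000000),
         ("198.51.100.9", 3000000),
         ("8.8.8.8", 2000000)]

peered = total = 0
for dst, size in flows:
    total += size
    if any(ipaddress.ip_address(dst) in net for net in peer_prefixes):
        peered += size

print("traffic that would ride direct peering: %.0f%%" % (100.0 * peered / total))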
My own IT department was not as smart. Our vendor management folks bought the cheapest bandwidth from a Tier 2 ISP with lousy peering relationships. My house to my lab on Time Warner business-class service was 8 ms with low jitter (it's only 9 miles - and, as you suspect from the discussion, I have my own fiber interconnect in my lab outside of my IT department; it's simply air-gapped from our internal network). However, my house to our corporate portal oscillated between 76 and 125 ms. VoIP service, in a word, stunk.
Private peering with TW and other broadband providers fixed this issue.
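The before-and-after is easy to quantify from RTT samples. A small sketch, with values made up to mirror the 8 ms and 76-125 ms paths above:

# Compare latency and jitter from RTT samples (values are made up to mirror
# the 8 ms lab path vs. the 76-125 ms corporate-portal path described above).
import statistics

def summarize(name, rtts_ms):
    print("%-18s avg %6.1f ms   jitter (stdev) %5.1f ms   min/max %5.1f / %5.1f"
          % (name, statistics.mean(rtts_ms), statistics.stdev(rtts_ms),
             min(rtts_ms), max(rtts_ms)))

summarize("lab (TW direct)", [8.1, 8.0, 8.3, 8.2, 8.1])
summarize("corporate portal", [76.0, 118.5, 92.3, 125.0, 81.7])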
The same thing will occur when offloading an IT service to some cloud. Peer directly with Google, Amazon, or whomever, and you can control your SLA environment.
Otherwise - beware. So who will buy these low-latency WAN circuits and occupy the colo spaces? I think the target audience is the cloud vendors. They network their DCs together and then create meet-me rooms for their clients. This is similar to the exchange situation we discussed earlier, but not quite as stringent.
Cloud services will be interesting while all this is getting sorted out.
Check out the following: searchcloudcomputing.techtarget.com
Legal issues aside, I wonder how they peer, and whether they have a clear ITIL methodology for troubleshooting, incident reporting, etc. The lack of a legal framework tells me the SLA framework did not exist.
John