Transfer speed is limited by three factors: I/O bus speed, bus-to-net transduction, and net speed. The best desktop I/O buses run at about 100 Mbps. That is orders of magnitude faster than a cable modem, but orders of magnitude slower than SR.
No one is proposing that there is any utility in transferring a 300 GB file. Is there any utility in even creating one in this era? The idea is that if you can send one that tall, you can send one that wide. Where is the utility in wide?
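A rough back-of-the-envelope sketch (Python) of what "tall" costs at the speeds above; the cable-modem and trunk rates are illustrative assumptions, not measurements:

# Transfer-time arithmetic for a 300 GB file at the link speeds
# discussed above. Cable-modem and trunk figures are assumed,
# era-typical illustrations, not vendor specifications.
FILE_BITS = 300e9 * 8  # 300 gigabytes expressed in bits

links_bps = {
    "cable modem (~1 Mbps, assumed)": 1e6,
    "desktop I/O bus (~100 Mbps)": 100e6,
    "SR-class trunk (~10 Gbps, hypothetical)": 10e9,
}

for name, bps in links_bps.items():
    hours = FILE_BITS / bps / 3600
    print(f"{name}: {hours:.1f} hours")

The bus comes in around seven hours and the modem around a month, which is the "orders" gap in concrete terms.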
The utility is found in the commonality of the trunk. It is cheaper to build one big pipe than many little ones. The question is whether all the little ones can be preserved once they are dumped into the big one. This is not a water pipe coming down from a mountain reservoir, where differentiation of content is superfluous.
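A minimal sketch (Python) of why the little pipes can survive the big one: unlike water, every frame carries its own flow identity, so the trunk can be demultiplexed at the far end. Flow names and payloads here are hypothetical:

# Tag each frame with its flow, put everything on one shared trunk,
# and recover the original flows at the far end.
flows = {
    "alice": ["frame-a1", "frame-a2"],
    "bob":   ["frame-b1", "frame-b2"],
}

trunk = [(flow_id, frame)                 # one big pipe, tagged frames
         for flow_id, frames in flows.items()
         for frame in frames]

recovered = {}
for flow_id, frame in trunk:              # demultiplex by tag
    recovered.setdefault(flow_id, []).append(frame)

assert recovered == flows  # differentiation of content is preserved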
The term aggregation has arisen. In a campus environment where 100 MB graphics files are flying all over the place, it is too expensive to construct a network of n*(n-1)/2 connections among n users. It is cheaper to build 2n+1 connections. The only problem is that one connection dangling out there: it has to support n*m throughput per unit time at any moment, where m is the expected maximum use per user. No technology can aggregate that much bandwidth within a campus budget except SR.
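The arithmetic, sketched in Python with illustrative figures (100 users at 100 Mbps peak each; both numbers are assumptions):

# Link counts and trunk load for the campus case above.
n = 100          # users on the campus (assumed)
m_bps = 100e6    # expected peak demand per user, bits/second (assumed)

full_mesh_links = n * (n - 1) // 2  # a pipe between every pair: 4950
star_links = 2 * n + 1              # the 2n+1 figure above: 201
trunk_bps = n * m_bps               # worst case on the one big pipe

print(f"full mesh: {full_mesh_links} links")
print(f"star/trunk: {star_links} links")
print(f"trunk must carry: {trunk_bps / 1e9:.0f} Gbps")  # 10 Gbps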
If the net, the bus, and the processor all ran at the same speed, the issue of huge sequential file transfer would be moot. No one wants to pull FTTC. No one wants optical processors and buses. No one wants to pay for something that isn't yet needed. But in some environments this capability is needed, and needed now. Where it is, companies like SR and SGI are looking for the chance to supply it.
So the answer to your question about how fast a file can be sent in the intended work environment depends on whose intention, what work, which environment, and the depth of the pockets. The answer you want, though, is something like the I/O bus speed of electronic, silicon-based computers.