When times were simpler and the rudiments of the internetwork as we know it today were first taking shape:
"Many, perhaps most, of the individuals involved in protocol design thus far are oriented toward the use of short date transmissions over the network the transmission lengths that have been considered "typical" are a few characters, a print line, or perhaps as much as a page of text. The experience of the current RJS sites, however, is that single files are commonly much longer, for example a line-printer output file of 400 pages would not seem unusual to these sites. Further, one might reasonably predict that network use of Remote Job Services will be preselected with a tendency toward large jobs (although large jobs do not necessarily imply large I/O files) and that the addition of other batch service sites (ILLIAC, UCSD) will increase the number of long-file transfers. In light of this kind of experience/prediction, it would seem that the FTP should include (perhaps as an option which interactive-user oriented systems could ignore) a method of "restarting" a long file transfer if some element in the transmission path fails after a large volume of data has been transferred."
From RFC 281, "A Suggested Addition to File Transfer Protocol," drafted in 1971 (ietf.org)
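The restart idea suggested in RFC 281 lives on in FTP's REST command, which lets a client resume a broken transfer at a byte offset instead of starting over. As a minimal sketch, assuming Python's standard ftplib and a hypothetical host and file path, a resumable download might look like this:

```python
import os
from ftplib import FTP

def resume_download(host, remote_path, local_path, user="anonymous", passwd=""):
    """Resume a partially downloaded file by restarting the transfer
    at the byte offset already saved locally (the FTP REST mechanism)."""
    # How many bytes survived the earlier failure; 0 means start fresh.
    offset = os.path.getsize(local_path) if os.path.exists(local_path) else 0

    ftp = FTP(host)
    ftp.login(user, passwd)
    ftp.voidcmd("TYPE I")  # binary mode, needed for byte-accurate offsets

    with open(local_path, "ab") as f:  # append to what we already have
        # rest=offset sends REST <offset> before RETR, so the server
        # skips the bytes the client already holds.
        ftp.retrbinary(f"RETR {remote_path}", f.write, rest=offset)

    ftp.quit()

# Example (hypothetical host and path):
# resume_download("ftp.example.org", "pub/big-output.txt", "big-output.txt")
```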
Soon (if not already), the Internet's architecture will begin struggling with the ailments that afflict all things legacy. As the Internet moves forward with improvements that capitalize on all-optical properties, its architects must keep in mind everything that has brought it to its present state and then proceed, gingerly, in a way that is both forward- and backward-compatible. The migration to IPv6 is one example. At some point, does the baggage become too heavy to carry?
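To make the compatibility point concrete, the sketch below (in Python, with a hypothetical hostname) shows the dual-stack client pattern commonly used during the IPv6 transition: resolve every address family the peer offers and fall back until one connection succeeds, so the same code serves IPv6-capable and IPv4-only endpoints alike.

```python
import socket

def connect_any(host, port):
    """Connect over whichever address family works, in the order the
    resolver returns candidates (typically IPv6 first where available)."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s  # works whether the peer speaks IPv6 or only IPv4
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses found")

# Example (hypothetical hostname):
# sock = connect_any("example.com", 80)
```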