Technology Stocks : LAST MILE TECHNOLOGIES - Let's Discuss Them Here


To: Curtis E. Bemis who wrote (2553), 12/12/1998 6:27:00 PM
From: Frank A. Coluccio
 
I've got to step around some of your questions here a little, Curtis, offering mostly generic information that could be found in other public sources, due to some sensitivities. Hope you understand.

In general, there are two areas in financial imaging applications that are highly consumptive of bandwidth. Each of these could stand improvement through the use of ATM or IP routing and transport; thus far, this is an area that has not taken advantage of statistical methods on a large scale. To keep it brief, I'll focus on paper-based check imaging here, since it is fairly easy for us to relate to. And to dispel any notion that this field is dwindling in the face of electronic cash, think again. The need for imaging is actually on the rise, for reasons I am not versed in. Probably has something to do with the economy, I'd think.

The two groupings of activities are:

1- Image Capture and Processing, along with Remote Archiving. These tend to be main frame CPU-ish in nature and unique to a given institution. They will tend to take a star topology at first, with multiple remote capture sites homing to a data center, more often than not with some meshing taking place for overhead control and code delivery purposes, and later becoming heavily meshed with uptake on the extranet side (see the sketch after this list). To date, the WAN links that support this group have been characterized as instruments of brute force.

2- Paperless clearing and settlements (automated clearinghouse functions). These will tend to be meshed from the outset due to the number of participating entities that must interact with one another, and the fact that they are each situated outside of one another's enterprise nets. Collectively, these form a consortium, or a community-of-interest (COI) form of network, also referred to as a closed user group or CUG.
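To make the two shapes concrete, here's a toy sketch in Python. All of the site and bank names are made up and reflect no particular institution's layout; the only point is the link count: a star needs one WAN link per capture site, while a full mesh among n community members needs n*(n-1)/2.

# Illustrative only: a toy comparison of the two topologies described above,
# using made-up site names.

capture_sites = ["capture_east", "capture_west", "capture_south"]   # hypothetical
star_links = [(site, "data_center") for site in capture_sites]

coi_members = ["bank_a", "bank_b", "bank_c", "clearinghouse"]        # hypothetical
mesh_links = [(a, b) for i, a in enumerate(coi_members)
              for b in coi_members[i + 1:]]

print(f"star topology: {len(star_links)} WAN links")   # 3
print(f"full mesh:     {len(mesh_links)} WAN links")   # 6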

>Unicast or Multi ?<

Where clearing is concerned, split modes would be used, depending on time-of-day windows dictated by the Fed and on the need for multiple postings of credit/debit information to multiple orgs for on-us and off-us situations. A key factor here is the number of institutions that are affected by a given item transaction. This is one of the areas that is currently under discussion, so I don't have any precise answers. I can only offer what I sense and can assume, using my heuristics and the body language of the individual representatives of institutions and vendors at this time. I'm rather certain that you know how those dynamics work.
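Just to illustrate the "split mode" idea in the abstract, and not any actual clearing system's logic, here's a rough Python sketch. The threshold and the settlement window are placeholders of my own, not Fed parameters.

# A loose sketch of the split-mode idea: fan a posting out by unicast when
# only a couple of institutions are affected, and treat larger distributions
# as multicast candidates once the time-of-day window is open.

from datetime import time

MULTICAST_THRESHOLD = 3              # hypothetical cutoff on affected institutions
WINDOW = (time(14, 0), time(18, 0))  # hypothetical settlement window

def delivery_mode(affected_institutions: int, now: time) -> str:
    in_window = WINDOW[0] <= now <= WINDOW[1]
    if affected_institutions >= MULTICAST_THRESHOLD and in_window:
        return "multicast"
    return "unicast"

print(delivery_mode(2, time(15, 0)))   # unicast: few parties, on-us style
print(delivery_mode(8, time(15, 0)))   # multicast: many off-us postings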

In my previous post I indicated that the ANSI standards (for electronic interbank clearing and settlements, at any rate) were still evolving, both at the work-flow process level, where humans touch paper and machines, and at the electronic data interchange level.

One area that needs attention is the set of processes that could leverage the benefits of present-day and emerging multicasting protocols, but don't.

>or multicast?<

In the interbank processes, they don't use multicast at the present time, for several reasons. One is the previous limitations of multicasting and the boat-anchor effects of legacy entrenchment. More to the point, however, it's the still-nascent stage at which these discussions are taking place. And even if this were a mature environment, the underlying transports of the past could not support multicasting in an elegant manner. But that is changing, even if not as swiftly as many would like.

If you are interested in this area, I'll remember to keep you posted as I learn more, which translates to when the motions are passed, and when the standards have been ratified. But before the EDI aspects are even broached to within any semblance of intermediate consideration, the manual flows and the institutional positions concerning on-us and off-us parameters must first be resolved. It is not unlike an ITU-of-yore standards study and implementation process, in this respect.

>>Bandwidth required ?<<

In the remote capture processing scenario, where image sorters could be thousands of miles away from a central processor, too much bandwidth is required. This is due primarily to the "mechanical constraints" of the sorters, believe it or not, at the present time.

Image sorters are still electro-mechanical beasts and must run synchronously with predictability, as they are highly time sensitive in nature from various perspectives.

Explanation: If buffers fill up, back pressure causes the sorters to undergo "feed stops." Each time a feed stop occurs, the sorter must "clutch" to a halt. Too many clutches per unit of time, and you wind up with smoke.

Worse, production ceases with serious consequences, since this is an environment that pays heavy penalties if certain processing and delivery windows are not met.

Consequently, and mainly because there are no allowances for anything but total and absolutely unobstructed throughput, the WAN requirements are very high in order to allow for worst case "bursting."
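For a feel of the back-pressure effect, here's a back-of-the-envelope simulation. Every number in it is hypothetical; it only shows how an undersized WAN link turns into feed stops at the sorter.

# The sorter fills a buffer at a fixed rate, the WAN drains it, and every
# time the buffer hits its ceiling the sorter must "clutch" to a feed stop
# until some headroom reappears. All figures are made up.

def feed_stops(sorter_mbps: float, wan_mbps: float,
               buffer_mb: float, seconds: int) -> int:
    level, stops, feeding = 0.0, 0, True
    for _ in range(seconds):
        if feeding:
            level += sorter_mbps / 8              # MB added this second
        level = max(0.0, level - wan_mbps / 8)    # MB drained this second
        if feeding and level >= buffer_mb:
            stops += 1
            feeding = False                       # feed stop: sorter clutches out
        elif not feeding and level <= buffer_mb * 0.5:
            feeding = True                        # resume once headroom returns
    return stops

# Undersized WAN link -> frequent clutching; generous link -> none.
print(feed_stops(sorter_mbps=40, wan_mbps=10, buffer_mb=64, seconds=600))
print(feed_stops(sorter_mbps=40, wan_mbps=45, buffer_mb=64, seconds=600))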

Caching usually doesn't work well here because of time dependencies written into associated processes and modules, which must also be real time. The reasons for this are mainly rooted in the fact that the original applications were assumed to be running in a local environment, in the proximity of the main frame, and not thousands of miles away. In such a local environment, bandwidth was not an issue, and everything ran at IBM block- or byte-channel speeds. Not so on the WAN, however, without paying egregious amounts to the carriers for the privilege.

And were it not for the close timing tolerances imposed by the local host application modules with respect to their inter-dependencies, caching could in fact be used. In that manner you could instead dump images to a local cache at the remote capture site and spool data over the WAN at a T1 rate, instead of using multiple T3s.
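Some rough arithmetic on the cache-and-spool idea, with entirely made-up volumes. The only point is the peak-versus-average difference, not the specific figures.

# Sized for worst-case bursting, the WAN must match the sorters' peak output;
# sized against a local cache, it only has to match the average rate.

T1_MBPS, T3_MBPS = 1.544, 44.736

image_kb = 25               # hypothetical compressed image size
peak_items_per_sec = 500    # hypothetical aggregate sorter peak at a capture site
avg_items_per_sec = 9       # hypothetical average when spooled around the clock

peak_mbps = peak_items_per_sec * image_kb * 8 / 1000
avg_mbps = avg_items_per_sec * image_kb * 8 / 1000

print(f"peak, no cache: {peak_mbps:6.1f} Mb/s, about {peak_mbps / T3_MBPS:.1f} T3s")
print(f"avg, cached   : {avg_mbps:6.1f} Mb/s, about {avg_mbps / T1_MBPS:.1f} T1s")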

There are real-time work schedules to maintain here as well. Human key operators are needed when image processing doesn't do a complete recognition, or "reco," job. Currently the reco completion rate of the image processors is ~60% on the first pass of the sorter, with the remaining 40% requiring manual operators to view and repair items through a GUI-based screen, when necessary.

Still unacceptable, but getting better with time. Last year the averages were closer to 50%-to-55% reco.
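To put the reco numbers in staffing terms, here's a quick calculation. The hourly volume and keying rate are placeholders of mine; only the percentage comes from the discussion above.

# What a ~60% first-pass reco rate implies for the repair queue.

items_per_hour = 100_000       # hypothetical capture volume at one site
first_pass_reco = 0.60         # ~60% recognized automatically, per the above
repairs_per_op_hour = 900      # hypothetical GUI key-and-repair rate

rejects_per_hour = items_per_hour * (1 - first_pass_reco)
operators_needed = rejects_per_hour / repairs_per_op_hour

print(f"{rejects_per_hour:,.0f} items/hour to repair "
      f"-> ~{operators_needed:.0f} key operators on shift")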

The actual amount of b/w is peculiar to the size of the org, obviously, and to this date it is still undetermined in my present endeavor. It will be driven by the number and type of capture sites that require connectivity, the number of customer sites that ultimately subscribe, what they want access to, and for how long.

These will drive the bandwidth requirements for the application beyond the fundamental in-house requirements of the principals.

The minimum requirements in house at this time (barring any further acquisitions or extensions to the EXTRANET), in an SNA main frame channel-extended environment for pilot purposes, exceed multiple interstate OC-48s, if they are of the two-fiber bi-directional line switched genre. Four-fiber bi-directional line switched types? We could probably get away with only one OC-48.
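The reason the ring type matters, in round numbers: a two-fiber BLSR reserves half of each fiber's capacity for protection, so an OC-48 ring carries roughly OC-24 worth of working traffic, while a four-fiber BLSR protects on separate fibers and leaves the full OC-48 for working traffic. A quick sketch follows; the demand figure is a placeholder, not a number from this project.

import math

OC48_GBPS = 2.488

def rings_needed(demand_gbps: float, four_fiber: bool) -> int:
    # Two-fiber BLSR: half the line rate is reserved for protection.
    working = OC48_GBPS if four_fiber else OC48_GBPS / 2
    return math.ceil(demand_gbps / working)

demand = 2.0   # hypothetical working demand in Gb/s
print("two-fiber BLSR rings :", rings_needed(demand, four_fiber=False))  # 2
print("four-fiber BLSR rings:", rings_needed(demand, four_fiber=True))   # 1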

With full redundancy for reliability purposes, this will easily surpass several OC-48s in the first year, unless IP were instituted (in which case I can see this coming down to half the requirement at first, with still further reductions made possible through advances in caching). With external user uptake, this could easily go into the multiple hundreds of gigabits PER SECOND range, if not Terabit speeds, when viewed as an ISOLATED cloud aggregate.

In its intrinsic state this would be classified, first, as an enterprise network; later (with uptake from others) it could be considered a closed user group [CUG], consortium, or community-of-interest [COI] network; but because customers would be on line, it would invariably be called an extranet today. The means by which the underpinnings are put in place could render it either a VPN with extranet qualities, or a CUG, or whatever.

At some point it doesn't make a difference what it's called. Sorta like VLANs a while back, and the debates over which layer the virtualization was taking place at, and more recently the infamous Layer 3 switching category ;-)

>>encoding ?? MPEG ??<<

Currently ABIC, going to JPEG. Don't ask me why. My take: baggage that comes with the vendor's support. Not a problem for processing when dealing with internals, except for the ~20% incremental bandwidth requirement on the WAN. But this is offset by some tradeoff attributes embedded in each compressed image which are needed to meet emerging transaction standards.

>> packetized and if not, why not? <<

Packetized? That depends on where and what you call a packet.
Channel extended information is lumped into frames containing SNA and control data, using formatted fields with their own identity and purposes. Most of the peripheral control fields are for spoofing purposes: they make the main frame think that the remote processor nodes are actually locally attached to it. Would you call these frames actual packets? Some gurus argue that a packet is a frame is a cell; only the magnitude changes.

But since you are obviously referring to IP packets, the answer at this time is no. It should be noted that this entire field is late in being reengineered, so it's not "entirely" a matter of something else being entrenched instead of packets. Rather, when long distance imaging really began to take off, packet modes had not yet scaled to the point where they are capable today, and traditional channel extension techniques were used.
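Purely as an illustration of the spoofing idea, and emphatically not the actual channel-extension frame layout of any vendor, here's an abstract sketch of what such a unit carries.

# An abstraction only: the WAN unit carries the channel payload plus control
# fields whose job is to make the host believe the remote node is locally
# attached (acknowledgement state, spoofed device identity, housekeeping).

from dataclasses import dataclass

@dataclass
class ChannelExtensionFrame:
    sequence: int             # ordering/acknowledgement state kept by the extenders
    spoofed_device_addr: str  # identity the mainframe thinks it is talking to
    control_flags: int        # protocol housekeeping between the extender pair
    payload: bytes            # the SNA/channel data actually being moved

frame = ChannelExtensionFrame(42, "local-ctrl-0A", 0b0001, b"...image block...")
print(f"frame {frame.sequence} for {frame.spoofed_device_addr}, "
      f"{len(frame.payload)} payload bytes")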

But this was my earlier point, if you recall, concerning what I'd like to see assessed and changed. But in order for IP to work in this environment, it will need to meet exacting QoS performance requirements in such a way as to be substitutable with nailed up services without any noticeable differences.

>Why not?<

Because of the (M)(B)(?)illions of Dollars in undepreciated investments in this sector, which up to this point have detracted from any incentive to develop new application code. The compelling and daunting fact is that there are hundreds of millions of lines of associated code in place in legacy systems, code that has not fundamentally changed over the past thirty years (it has only grown with time) and that handles the individual item flows of conventional non-image data. These cannot be divorced from present-day requirements that easily, and this is indeed a daunting prospect that will take years to overcome.

Equally daunting is the amount of storage space needed to house this new media form for financial checks, stubs, credit card slips, remittance notices, and the image statements for all of the above, given the need to satisfy the retention periods involved:

Three layers of storage and archive are actually required. One for immediate to intermediate retrieval and processing needs [one to three days], one for two to three month inspection periods upon issuance of customer statements, and one for seven years to satisfy statutes.
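Some back-of-the-envelope sizing for those three tiers. The retention windows are the ones above; the daily volume and image size are placeholders I've picked purely for illustration.

image_kb = 25                 # hypothetical average image size
images_per_day = 5_000_000    # hypothetical captured items per day
daily_gb = images_per_day * image_kb / 1_000_000

tiers = {
    "online (1-3 days)":      3,
    "near-line (2-3 months)": 90,
    "archive (7 years)":      7 * 365,
}

for name, days in tiers.items():
    print(f"{name:<24} ~{daily_gb * days / 1000:8.1f} TB")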

Q: How do you spell "storage?"

Hint: It only takes three letters.