Technology Stocks : All About Sun Microsystems


To: tiquer who wrote (31395)4/28/2000 12:59:00 PM
From: Robert
 
About streaming video....

The technology is basically very simple. Two sockets
are set up, one on the client and one on the server,
and data is written from the server into the socket to
be read by the client. The client then decodes the data
for viewing. On the server side, the video is read from
a database and written to the socket, which looks just
like another file descriptor.
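
A minimal sketch of that server-side loop in C, assuming
the client socket has already been accepted and the video
is sitting in an ordinary file (the names video_fd,
client_fd and stream_to_client are all hypothetical):

#include <sys/types.h>
#include <unistd.h>

/* Stream one video to one connected client.
   video_fd  - open descriptor for the video data
   client_fd - the accepted TCP socket; note it is written
               to exactly like any other file descriptor. */
int stream_to_client(int video_fd, int client_fd)
{
    char buf[8192];
    ssize_t n;

    while ((n = read(video_fd, buf, sizeof(buf))) > 0) {
        ssize_t off = 0;
        while (off < n) {
            ssize_t w = write(client_fd, buf + off, n - off);
            if (w < 0)
                return -1;   /* client went away */
            off += w;
        }
    }
    return (n < 0) ? -1 : 0; /* 0 means end of the video */
}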

Let's say you have 100 people all viewing the same video
simultaneously. There would be one copy of the video to
be distributed. The server has to be fast enough and smart
enough to get that copy out to all its sockets, and
intelligent enough to spread the work across all of its
CPUs' clock cycles to get it done. Storage has a relatively
easy job; it just sits there. Of course some of the logic
could be moved into storage, but what if you wanted to jump
to a section of the video? Or search for a particular scene?
Or view PG-censored versions? Do you put that logic in the
storage, or in the server, which is more flexible? It is
possible to attach storage directly to the net, and for
certain applications that is the right thing to do, but the
smart architecture is to have a server implement all the
fancy user logic.
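
In that picture the fan-out is just a loop: the server reads
a chunk of the shared copy once, then writes it to every
viewer's socket. A rough sketch in C, again with hypothetical
names, and glossing over clients that stall or disconnect:

#include <sys/types.h>
#include <unistd.h>

/* Push one chunk of the single shared copy out to every
   current viewer. client_fds holds the accepted sockets. */
void fan_out_chunk(const char *chunk, size_t len,
                   int *client_fds, int nclients)
{
    for (int i = 0; i < nclients; i++) {
        size_t off = 0;
        while (off < len) {
            ssize_t w = write(client_fds[i], chunk + off, len - off);
            if (w <= 0)
                break;           /* skip a dead or stalled viewer */
            off += (size_t)w;
        }
    }
}

All the "fancy user logic" -- seeking, scene search, censored
versions -- would live around this loop on the server, which
is why keeping it there is more flexible than pushing it into
the storage.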

Ultimately I think both the network and the storage can be
optical. Optical networking over fiber has the advantage
that a single fiber can carry a very large number of
wavelengths, giving it enormous volume. Optical storage is
also very high capacity: a holographic image is read and
written in parallel, all at once, so the throughput is huge.
But the bottleneck seems to be computing. I have yet to see
how to get the optical equivalent of a transistor with high
throughput, i.e. being able to have two holograms ANDing/ORing
at the level of individual photons.

Long story short, bandwidth and storage are the least of
our problems; CPU power is the bottleneck.