Technology Stocks : C-Cube

To: John Rieman who wrote (48713), 2/29/2000 11:28:00 AM
From: William T. Katz
 
Here is a question for those who might know how encoding gets done by hardware chips (C-Cube, SigmaDesigns, Sony, etc.):

If you get a software encoder that lets you trade off time for quality, then theoretically I would assume the software solution will produce better results than hardware, given enough time. So if I am willing to let the Ligos LSX encoder (http://www.ligos.com) chew on my data overnight, won't it produce better results than a real-time hardware encode?
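Roughly what I mean by trading time for quality, as a toy sketch: an offline encoder can afford to try many choices per block and keep the best one that fits the bit budget. The bit-cost model below is made up for illustration, and this is not how LSX or any particular chip actually works:

import numpy as np

def encode_block(block, q):
    # Toy "encode": quantize with step q, return (bits, squared error).
    # Stand-in for a real DCT/quantize/VLC path; the bit cost is a crude guess.
    quantized = np.round(block / q)
    bits = int(np.count_nonzero(quantized)) * 4 + 8
    error = float(np.sum((block - quantized * q) ** 2))
    return bits, error

def best_quantizer(block, bit_budget, q_candidates):
    # Offline luxury: try every quantizer and keep the lowest-error one
    # that fits the budget. A real-time chip has to pick once and move on.
    best_q, best_err = None, None
    for q in q_candidates:
        bits, err = encode_block(block, q)
        if bits <= bit_budget and (best_err is None or err < best_err):
            best_q, best_err = q, err
    return best_q, best_err

# Example: an 8x8 block of random pixel values, generous 300-bit budget.
block = np.random.randint(0, 256, (8, 8)).astype(float)
print(best_quantizer(block, 300, range(1, 64)))

Multiply that kind of exhaustive trial across every block of every frame and the encode time balloons, which is exactly the time an overnight software run has and a real-time chip doesn't.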

I'm assuming that for each hardware chip, there are bounds on how much image information it looks at, since it has a finite time slice in which to do all its calculations.
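By "how much image information it looks at" I mostly mean the motion-estimation search window. Here is a minimal full-search sketch, assuming plain block SAD matching (not any vendor's actual algorithm):

import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equal-size blocks.
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def motion_search(ref, cur, bx, by, block=16, search_range=16):
    # Full-search motion estimation for the block at (bx, by) in cur.
    # search_range is the knob: a chip must cap it to finish within one
    # frame time, while an overnight software run can widen it.
    target = cur[by:by + block, bx:bx + block]
    h, w = ref.shape
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + block <= h and x + block <= w:
                cost = sad(ref[y:y + block, x:x + block], target)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

# Example: shift a fake frame down 2 rows and right 3 columns;
# the best match should come back as (dx, dy) = (-3, -2).
ref = np.random.randint(0, 256, (64, 64))
cur = np.roll(ref, (2, 3), axis=(0, 1))
print(motion_search(ref, cur, 16, 16, search_range=8))

Going from a +/-16 search range to +/-64 takes each block from roughly 1,100 candidate positions to roughly 16,600, which is trivial overnight but a very different story inside one frame time.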

But the other possibility is that hardware encoders already do as much as matters for what we can actually perceive, or that no software encoder lets you trade off that much time for quality. Any thoughts on this?