Technology Stocks : C-Cube


To: DiViT who wrote (30724), 3/10/1998 11:38:00 PM
From: Manuel Vizcaya
 
"There was an article posted by me about the same time Creative anounced their Encore. In it the author commented that Cube as on the board. It's back in the archives of this thread but I'm to lazy to look for it.
It's easier for me to peel the DXR2 label of the card sitting here on my desk.
;-)"

Thanks Dave,

Can't get more definitive proof than finding the C-Wolf dressed up in sheep's clothing.

So what do we have on the PC side?

In the box
Dell/Quadrant
NEC/Diamond
Packard Bell/Diamond
Apple/E4?
Toshiba Laptop

Upgrade Kits
Creative
Diamond?

Not bad for starters, no?



To: DiViT who wrote (30724), 3/11/1998 10:42:00 AM
From: BillyG
 
New approach to video sensor (and camera) design. It is supposed to simplify digital video camera design and thereby lower the cost. Very technical article -- basically, the advance uses the theory developed for "1-bit" or "bitstream" D/A converters used in audio CD players, and applies it to the video world.

techweb.cmp.com

Posted: 9:00 p.m. EST, 3/10/98

Video design simplified by oversampling technique

By Chappell Brown

SIMI VALLEY, Calif. - While working on specialized infrared sensors,
Amain Electronics Co. Inc. has hit on a general approach to electronic
imaging that could simplify both camera and video-display technology. The
scheme uses a multiplexed version of oversampled analog-to-digital
conversion as an integrated detection and readout system for image
sensors, bringing onto the focal plane signal-processing functions that
would normally be performed after an image is acquired.

By integrating A/D conversion with sensing functions, the architecture of a
CMOS imaging chip can be simplified without sacrificing performance,
according to Bill Mandl, president and founder of Amain Electronics (Simi
Valley, Calif.). The company perfected the technique over the past three
years in development projects for the military and is now working with an
industrial partner to develop it for commercial videoconferencing
applications.

"Oversampled sigma-delta algorithms have been proven to be effective in
compact-disk audio systems, and we are essentially applying that in an
imaging context," said Mandl.


The technique grew out of attempts to solve problems with military
infrared-imaging systems. "We were trying to solve some other problems
for the military when we realized that this technique could simplify
electronic imaging in visible light as well," Mandl said.

Since infrared detectors are built in a non-silicon materials system, the
sensing and readout functions are typically separated. For example, a
mercury cadmium telluride sensor array is bump-bonded to a silicon
addressing and readout circuit. In contrast, fully integrated silicon sensors
perform both operations on the focal plane, and the sensing and readout
function must vie for real estate with the detector array.

Since any silicon devoted to signal detection and readout circuitry is not
available to collect photons, single-chip imagers have been stingy with the
signal-conditioning circuitry. The dominant charge-coupled-device (CCD)
array chips represent the extreme in that direction, by devoting maximum
silicon area to charge collection. Essentially, a CCD transistor is a very
long FET that stretches the width of the chip and clocks collected charge
packets to the edge.

The unique FET structure makes it unnecessary to add circuitry
at the pixel level, but at the cost of design rigidity: CCDs are incompatible
with CMOS VLSI processes. The architecture leaves no room for
additional signal-processing functions to enable a fully integrated
camera-on-a-chip.

More recently, active-pixel-sensor (APS) chips have appeared that
attempt a more-flexible design trade-off between sensing and detection. A
few transistors are placed at each pixel to sense and amplify the collected
charge. The data is then read out in a flexible x-y addressing scheme that is
similar to the structure of static-RAM memory chips.

But APS sensors, despite the long R&D phase for the products, are just
beginning to achieve the same sensitivity as CCDs. Because of the added
circuitry, only about 20 percent of the chip area is devoted to the critical
task of collecting photons.

Since charge collection occurred on a different chip in those infrared systems, Mandl and his
colleagues began to experiment with more complex addressing and
signal-processing functions. That work led to a key idea: integrating digital
signal processing with the pixel array.

Both CCDs and APS arrays are devoted exclusively to delivering analog
voltage values to the edge of the array, which are then processed for video
display. "We found that we could put some of the signal-processing
functions at the pixel and the rest of the conversion at the column readout,"
Mandl said. "The resulting architecture starts with analog sensing functions
and produces a digital stream as the readout on the edge of the chip."

Mandl calls the architecture multiplexed oversampled A/D (MOSAD)
image sensing.

Oversampling is a standard signal-processing technique that essentially
trades a higher sampling frequency for processor simplicity. The technique
is catching on in the sensor world because it is optimal for low-frequency
signals. Since sensors often substitute for such human sensing functions as
sight, hearing and touch, high-frequency components that lead to distortion
can simply be eliminated with a low-pass filter. Oversampled A/D systems
can therefore get by with a simple binary signal representation called
sigma-delta modulation.
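
To make the bitstream idea concrete, here is a minimal Python sketch of a
first-order sigma-delta modulator and its moving-average decode - the same
scheme used in the "1-bit" audio D/A converters the article refers to. The
function names, oversampling ratio and filter choice are illustrative
assumptions, not Amain's implementation.

import numpy as np

def sigma_delta_modulate(signal, oversample=64):
    # Repeat each input sample to emulate sampling far above the signal band.
    x = np.repeat(signal, oversample)
    bits = np.empty(len(x), dtype=np.uint8)
    integrator = 0.0
    feedback = 0.0
    for i, sample in enumerate(x):
        # Integrate the difference between the input and the fed-back 1-bit value.
        integrator += sample - feedback
        bits[i] = 1 if integrator >= 0 else 0
        feedback = 1.0 if bits[i] else -1.0
    return bits

def sigma_delta_demodulate(bits, oversample=64):
    # A moving-average low-pass filter plus decimation recovers the waveform.
    levels = np.where(bits == 1, 1.0, -1.0)
    kernel = np.ones(oversample) / oversample
    smoothed = np.convolve(levels, kernel, mode="same")
    return smoothed[oversample // 2::oversample]

# A slow sine in [-1, 1] stands in for an audio (or pixel-intensity) signal.
t = np.linspace(0.0, 1.0, 200)
original = 0.8 * np.sin(2 * np.pi * 3.0 * t)
recovered = sigma_delta_demodulate(sigma_delta_modulate(original))
print(abs(original - recovered).max())   # small residual quantization error

The per-sample hardware reduces to an integrator, a comparator and a 1-bit
feedback path; the precision comes from the high sample rate and the
low-pass filter, which is what makes the approach attractive at the pixel
level.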

In the Amain technology, each pixel consists of a charge well, which
collects electrons generated by incident photons, and a simple switched
capacitor filter. That circuit block performs filtering and integration, and the
output is time-multiplexed on each column of the array. Circuitry at the
column generates the multiplexed digital output from the sensor.

As with APS technology, the sensing and readout circuitry can be designed
and fabbed with conventional CMOS processes. But the added circuitry
takes up less space per pixel, with each pixel having 63 percent of its
active area for photon collection.

The resulting imager is actually more sensitive than CCDs because of its
high fill factor and low-noise sampling technique, Mandl claims. And, rather
than a stream of analog data that would require sophisticated signal
processing to interface with basic NTSC video technology, the imaging
chip produces a stream of digital data that can be stored, processed and
transmitted by established digital techniques.


A key idea that distinguishes the approach from other image-sensor
techniques is the continuous modulation of charge generation at the pixel.
"The typical operation with image sensors is 'integrate and dump': Charge is
collected for a short time and converted to a voltage level, and then the
charge well is cleared of electrons," Mandl said.

In contrast, the MOSAD system continuously samples the charge well
without any reset. To avoid saturation of the well, a fixed amount of charge
is switched into a subtraction capacitor between sampling operations.
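
Taken together with the pixel description above (charge well plus
switched-capacitor filter), this is essentially a charge-domain sigma-delta
loop with no frame reset. A rough Python model - arbitrary charge units and
hypothetical parameter names, not Amain's actual circuit - might look like
this:

def mosad_pixel(photon_flux, samples, threshold=1.0, packet=1.0):
    # The charge well is never reset. When it crosses the threshold, the
    # comparator emits a 1 and a fixed packet of charge is switched out
    # (the subtraction capacitor), keeping the well out of saturation.
    well = 0.0
    bits = []
    for _ in range(samples):
        well += photon_flux            # charge collected in one sample period
        if well >= threshold:
            bits.append(1)
            well -= packet             # subtract a fixed charge; no reset
        else:
            bits.append(0)
    return bits

# A brighter pixel yields a denser stream of 1s than a dim one.
bright = mosad_pixel(photon_flux=0.7, samples=1000)
dim = mosad_pixel(photon_flux=0.2, samples=1000)
print(sum(bright) / 1000.0, sum(dim) / 1000.0)   # roughly 0.7 and 0.2

Column circuitry would then time-multiplex these per-pixel bit decisions
onto a shared digital output line - the multiplexed part of MOSAD.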

Continuous modulation solves a number of problems. For one, it greatly
reduces noise. "You just dump the noise into an integral and, since the
sigma-delta operation has a smoothing effect - essentially differentiation -
you get a very low-noise output. It's a technique that has
been proven out in digital audio," Mandl said. Apart from the multiplexing
operation, the signal-processing scheme is identical to that used in CD
audio systems.


The audio analogy also implies a further simplification at the display end of
the process. With conventional sensors, the analog data extracted from the
imager must be processed for display in NTSC, the dominant analog video
format. While the basic scheme is being challenged by new digital-imaging
systems that are being readied for multimedia and digital television
broadcast, one common theme is the frame-based method of presenting
images to the eye.

Mandl realized the frame format introduces an unnecessary complication
into the detection and display process. "The frame-based system goes
back 150 years, to the first attempts to create moving pictures," he noted.
The method is essential to film-based photography and has been carried
over to electronic imaging, even though it isn't required there.


One reason for the simplified approach of CD audio systems is the
absence of a frame format, Mandl said. "With audio, you continuously
sample and modulate a signal, converting it to a digital stream, and then
reverse the process to play it back," he said. In an analogous fashion, the
bit-stream readout from an image sensor using MOSAD can be
demultiplexed and decoded using a standard sigma-delta D/A converter
and directly played back on a display, without the need for complicated
frame sequencing.
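
A sketch of that playback path, assuming a simple round-robin readout order
on each column and the same moving-average low-pass filter used in ordinary
sigma-delta D/A conversion (both are assumptions for illustration, not
details from the article):

import numpy as np

def demux_column(column_bits, pixels_per_column):
    # Assume the pixels sharing a column are read out round-robin, so the
    # column's bit stream interleaves one bit per pixel per sample period.
    return [column_bits[p::pixels_per_column] for p in range(pixels_per_column)]

def decode_pixel(bits, window=64):
    # Standard sigma-delta decode: a moving-average low-pass filter turns the
    # 1-bit stream back into a slowly varying intensity for that pixel.
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(bits, dtype=float), kernel, mode="valid")

column_bits = np.tile([1, 0, 1, 1, 0, 0, 1, 0], 100)   # toy multiplexed stream
per_pixel_streams = demux_column(column_bits, pixels_per_column=4)
intensities = [decode_pixel(b) for b in per_pixel_streams]
print([round(float(i.mean()), 2) for i in intensities])

Each recovered intensity could in principle drive the corresponding pixel of
an x-y addressed display, which is the simplification described in the next
section.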

Natural modulation
Newer non-raster display systems, such as liquid crystal, plasma, field
emission and digital micromirrors, are particularly well-suited to the
technique. With their x-y addressing scheme, such displays can directly
play back the converted digital stream, with each pixel naturally modulated
by the bit stream to recreate the dynamics of the light falling on the
corresponding pixel in the imaging array.

Amain has set up a demonstration imaging and display system using a
digital micromirror projection system from Texas Instruments. The resulting
display is much easier on the eyes, eliminating flicker, Mandl said.

Of course, the same digital output stream could be processed using
conventional raster-scan algorithms to make it compatible with a video
monitor or processed for any other display peripheral. But the new method
also offers the opportunity to simplify the entire electronic-imaging system,
from focal-plane array to display. In addition, the basic MOSAD algorithm
can be applied to the analog output from commercial sensors, though the
simplicity of the imaging architecture is lost.

Amain offers design kits for OEM engineers who might want to develop
simplified imaging and display systems - for example, an IR all-weather
vision system for an automobile. An IR sensor array would be bonded to a
MOSAD readout chip, which could then directly drive a small LCD panel
mounted on the dashboard. Commercially available IR sensors can be
used with the readout chip. In addition, other analog data from the car -
engine temperature, oil level, rpm - can be piped to the readout
chip and converted via the same A/D system. The result would be a simple
monitoring and vision system using low-cost flat-panel technology.

"One of the big advantages of a closed system using MOSAD is the
elimination of processors required to process and display an image. In fact,
the imager directly drives the display," Mandl said.

Amain plans to develop a videoconferencing system that would fit
individual participants with simplified imaging systems. "You won't have to
strap a computer and car battery to your back to get quality video
imaging," he said. Instead, digital data directly from a small imaging chip
would be offloaded to a fiber-optic cable for transmission to a remote
location, where it could be directly played back.

Elimination of high-performance DSPs results in a low-power system, an
important consideration for portable applications. However, oversampled
A/D conversion does not necessarily help with data volume problems.
[NOTE: this is where MPEG-2 encoding comes into play.]
Amain is relying on high-volume fiber-optic technology to link small-scale
imaging systems, but mobile computing also requires wireless
communications. Full-motion video is still a problem in that context. [Unless you use MPEG-2 encoding...]