Technology Stocks : Intel Corporation (INTC)


To: Dan3 who wrote (152514), 12/14/2001 12:43:29 PM
From: wanna_bmw
 
Dan, let me try and help you out a little. I'm very curious about this, too, so I don't mind doing the research.

intel.com

Fab 16 is on "Indefinite Hold".

"Intel announced that Fab 16, planned for Fort Worth, Texas, was put on an indefinite hold in January of 2000. The 500 plus acres in north Fort Worth at the Perot Alliance Business Park is currently in a "stand down" mode with minimum maintenance and security presence."

intel.com

This says that Fab 7 will be EOL in 2002.

intel.com

"Fab 14 is located in Leixlip, Ireland, just outside of Dublin. The facility is approximately 800,000 square feet, including a 90,000 square foot cleanroom."

intel.com

"The Colorado Springs facility... with 120,000 square feet of cleanroom that will be equipped to manufacture flash memory and logic components used in a wide variety of communications, networking and computer equipment."

intel.com

"Fab 24 to Include 135,000 Square Feet of Cleanroom"

intel.com

"Intel's D1C Development Fab will be located in Hillsboro, Oregon. The facility will include a clean room, which is approximately 120,000 square feet."

intel.com

"What is the Research & Pathfinding Lab (RP1)? World™s first 300 mm silicon research lab... Approx 56,000 sq ft of class 1 clean room at build-out"

intel.com

"The D1D fab facility, with approximately 175,000 sq. ft. of cleanroom space will be used to develop Intel’s next generation 300mm wafer process."

intel.com

"Fab 18 will be approximately 1-million-square-feet floor space, featuring 80,000-square-feet of "Class 1" clean room."

intel.com

This is the site map to all Intel manufacturing facilities.

intel.com

Mentions Oregon Fabs D1C (Logic products, 0.13-micron process technology, 300mm wafers), 15 (Logic and flash memory products, 0.25-micron and 0.35-micron process technology, 200mm wafers), and 20 (Logic products, 0.18-micron process technology, 200mm wafers). But we know that Fab20 has since been converted to .13u.

intel.com

Mentions California Fab D2 (Logic and flash memory products, 0.13-micron process for logic products only; and 0.18-micron process for logic and flash memory products, 200mm wafers).

intel.com

Mentions New Mexico Fabs 7 (Flash memory products, 0.35-micron process technology, 150mm wafers), 11 (Logic and flash memory products, 0.18-micron and 0.25-micron process technology, 200mm wafers), and 11X (Logic products, 0.13-micron process technology, 300mm).

intel.com

Mentions Massachusetts Fab 17 (Logic products, 0.28-micron, 0.35-micron and 0.50-micron process technology, 200mm wafers). Of course, we know that it has since been upgraded to .13u manufacturing.

intel.com

Mentions Arizona Fabs 12 (Logic products, 0.18-micron process technology, 200mm wafers) and 22 (Logic products, 0.13-micron process technology, 200mm wafers).

intel.com

Mentions Ireland Fab 10/14 (Logic products, 0.18-micron and 0.25-micron process technology, 200mm wafer).

intel.com

Mentions Israel Fabs 8 (Logic and flash memory products, 0.35-micron, 0.50-micron, 0.70-micron and 1.0-micron process, 150mm wafer) and 18 (Logic products, 0.18-micron process technology, 200mm wafer).

intel.com

Mentions Colorado Fab 23 (Flash memory products, 0.18-micron process technology, 200mm wafers). Of course, I think this has since been upgraded to .13u.

After putting everything together, it looks like Fabs 1-6 have all been decommissioned or combined into other fabs. Fab 7 is EOL next year, and Fab 9 isn't even mentioned. Fab 10 is part of Fab 14, and Fab 16 is on indefinite hold. Fab D1B is the same as Fab 20. Also, it looks like you forgot a few fabs. Here is the list, as I see it.

D2 - ? (Logic/Flash, .13u, 200mm)
FAB7 - ? (Flash, .35u, 150mm)
FAB8 - ? (Logic, .35u, .5u, .7u, 1.0u, 150mm)
FAB11 - ? (Logic/Flash, .18u, .25u, 200mm)
FAB12 - 135,000 sq. ft. (Logic, .18u, 200mm)
FAB10/14 - 90,000 sq ft (Logic, .18u, .25u, 200mm)
FAB15 - ? (Logic/Flash, .25u, .35u, 200mm)
FAB17 - 95,000 sq. ft. (Logic, .13u, .28u, .35u, .5u, 200mm)
FAB18 - 80,000 sq ft (Logic, .18u, 200mm)
FAB20 - ? (Logic, .13u, .18u??, 200mm)
FAB22 - 133,000 sq. ft. (Logic, .13u, 200mm)
FAB23 - 120,000 sq. ft. (Flash, .13u??, .18u, 200mm)

Future fabs
D1C - 120,000 sq ft (Logic, .13u, 300mm)
Fab11X - 135,000 sq ft (Logic, .13u, 300mm)
Fab24 - 135,000 sq ft (Logic??, .09u??, 300mm)
D1D - 175,000 sq ft (Logic??, .09u??, 300mm)
RP1 - 56,000 sq ft (Logic??, .09u??, .065u??, 300mm)

Finally, this link has a rough estimate of total fab capacity (though it might be outdated).

intel.com

"Intel has more than .8 million square feet of fab capacity. That's roughly 8 football fields!"

They may have more than that now, but with the number of facilities being decommissioned and brought online simultaneously, I doubt we can come up with an accurate number. Keep in mind that many of Intel's fabs are producing chips on older processes - they no doubt still have demand for embedded chips, chipsets, and other logic products at larger geometries.
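
For a rough sanity check of that figure, here's a quick back-of-the-envelope sum (Python) of just the cleanroom numbers collected above. Fabs marked "?" are simply skipped, so the real total is higher:

# Sum the known cleanroom figures from the lists above (sq ft).
# Fabs whose size I couldn't find ("?") are left out of the totals.
current_fabs = {
    "FAB12": 135_000,
    "FAB10/14": 90_000,
    "FAB17": 95_000,
    "FAB18": 80_000,
    "FAB22": 133_000,
    "FAB23": 120_000,
}
future_fabs = {
    "D1C": 120_000,
    "FAB11X": 135_000,
    "FAB24": 135_000,
    "D1D": 175_000,
    "RP1": 56_000,
}
known_current = sum(current_fabs.values())   # 653,000 sq ft
known_future = sum(future_fabs.values())     # 621,000 sq ft
print(f"Known current cleanroom: {known_current:,} sq ft")
print(f"Known future cleanroom:  {known_future:,} sq ft")
print(f"Combined:                {known_current + known_future:,} sq ft")

So the fabs with published cleanroom figures alone account for roughly 650,000 sq ft, before counting D2, 7, 8, 11, 15, and 20 - which fits the guess that the real number is now above the quoted .8 million.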

Anyways, I hope you get as much out of this as I just did.

wbmw



To: Dan3 who wrote (152514), 12/14/2001 1:02:39 PM
From: Paul Engel
 
Ban Ban Blow Hard Dan - re: "Why does Intel need so many FABs?"

Why is AMD looking at Taiwan Foundries to do their production?



To: Dan3 who wrote (152514), 12/14/2001 1:15:07 PM
From: Paul Engel
 
Intel Pentium Clusters employed for advanced AI applications.

" It's now demonstrating its associative-memory neural networks on a ring of 30 Pentiums that can solve intractable AI problems "

{=================================}

AI software system extracts meaning from babble
(no - not for decoding the Banny Mani Fundamentalist Thread !!)

By R. Colin Johnson, EE Times
Dec 13, 2001 (12:01 PM)
URL: eetimes.com

SAN DIEGO — A software development firm says it has solved one of the hardest problems in artificial intelligence, successfully extracting hierarchical categories from streams of sensory data. And for $50,000, HNC Software Inc. will tell you just how it accomplished that feat.

The company's software, called Cortronics, uses neural networks to model fundamental operations that a person's brain calls on to handle those same tasks.

"We believe that Cortronics' associative-memory neural network technology could be the most powerful and promising approach to artificial intelligence ever discovered," said Robert Hecht-Nielsen, a co-founder of HNC, which provides high-end analytic and decision-management software.

The Cortronics technology was developed under a $3.3 million research contract jointly funded by the Defense Advanced Research Projects Agency and HNC, under the supervision of the Office of Naval Research.

The system, HNC said, can be applied to diverse problems such as extracting a voice stream from a noisy background or identifying camouflaged vehicles on a battlefield.

The brain implementing such an architecture memorizes synchronicity among its various sensor inputs and scores each new sensory experience for similarity to all previous memories. Even at the gigahertz speeds of current-day serial microprocessors, implementing such system-wide associative memories on any usable scale has been difficult. Without the higher-level software that implements the time-based associations, practical applications have not materialized.
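
To make the "score each new experience against all stored memories" idea concrete, here is a generic associative-memory sketch in Python - a toy illustration of the general concept the article describes, not HNC's Cortronics code:

import numpy as np

class AssociativeMemory:
    """Store experiences as vectors; score new input against every memory."""
    def __init__(self):
        self.memories = []

    def store(self, pattern):
        self.memories.append(np.asarray(pattern, dtype=float))

    def recall(self, query):
        # Cosine similarity of the query against each stored memory.
        q = np.asarray(query, dtype=float)
        scores = [float(np.dot(m, q) / (np.linalg.norm(m) * np.linalg.norm(q) + 1e-9))
                  for m in self.memories]
        best = int(np.argmax(scores))
        return self.memories[best], scores[best]

mem = AssociativeMemory()
mem.store([1, 0, 1, 0])
mem.store([0, 1, 0, 1])
pattern, score = mem.recall([1, 0, 0.9, 0.1])   # noisy version of the first memory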

HNC said it has solved this software problem, opening the door to genuine machine intelligence. It's now demonstrating its associative-memory neural networks on a ring of 30 Pentiums that can solve intractable AI problems like the classic "cocktail party problem." In that problem, a listener in a room filled with conversation must extract the voice stream of one speaker. Cortronics does this by modeling the "sparse" neural networks of the brain, with only a few connections to nearby neurons.

HNC is offering Cortronics to interested engineers in a three-week seminar priced at $50,000 a seat. It includes hands-on experience and the source code to the 30-Pentium ring running the "brain" operating system.

Engineers will still need to understand the theory behind the problem-solving system to apply it to real-world systems. "What we really hope to gain by sharing our technology with other companies is to find partners for future AI application development," said Hecht-Nielsen.

He predicted that applications developed for the brain OS will be able to instill genuine machine intelligence into next-generation "conversational" applications such as talking automatic teller machines or fully automatic customer-service "personalities" that outperform even a knowledgeable human being in answering ad hoc queries. HNC said it will assist participating engineers in creating their own AI applications during the technology course.

"The cocktail party problem is just a prelude to the kind of AI applications we think are now possible," said Hecht-Nielsen. "We envision all kinds of automated conversational customer services."

Neurons mimicked

Under the hood of Cortronics' solution to the classic "sparse" coding problem is a feature-attractor architecture in which a neural network self-organizes a huge but fixed set of tokens into a universal representation of the data set. Each token in the universal set is associated with only a few other tokens, mimicking the sparse connections among the billions of neurons in the human brain. Every entity in the database then becomes a string of tokens and their associations.

By reinforcing the associations between both spatially and temporally "contiguous" information, the neural net reinforces the connections between often-appearing contiguities in its data stream. As a result, a higher-order amalgamation of often-synchronized features emerges from the topology of the network — namely, "objects" become defined as globs of often-appearing-together tokens.
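
As a toy illustration of that co-occurrence idea (my own Python sketch, not HNC's implementation): tokens that keep showing up in the same scene get their pairwise association reinforced, so recurring feature groups stand out from the clutter.

from collections import defaultdict
from itertools import combinations

assoc = defaultdict(float)   # (token_a, token_b) -> association strength

def observe(scene_tokens, rate=1.0):
    # Hebbian-style update: reinforce every pair of tokens seen together.
    for a, b in combinations(sorted(set(scene_tokens)), 2):
        assoc[(a, b)] += rate

# "Face" components always appear together; clutter varies from scene to scene.
observe(["eye", "nose", "mouth", "lamp"])
observe(["eye", "nose", "mouth", "chair"])
observe(["eye", "nose", "mouth"])

strongest = sorted(assoc.items(), key=lambda kv: -kv[1])[:3]
print(strongest)   # the eye/nose/mouth pairs dominate, each with weight 3.0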

"The principle is that the components of holistic objects, such as a human face, which is composed of eyes, nose, mouth and so forth, always appear together, so their interconnections in the neural network are reinforced." This lets Cortronics "isolate and excise objects from their background," said Hecht-Nielsen.

This ability to "detect" and "segment" images into objects has been one of the persistent problems of machine vision research. Real-world scenes are confused by deformed and distorted viewpoints, and partial occlusion by nearby objects. Cortronics solves this detect-and-segment problem, HNC said, with its spontaneously appearing holistic objects, which a separate management level of the brain operating system tracks by "paying attention" to significant portions of occluded objects, thereby verifying or falsifying their presence in real-time.

The second type of memory association is association by similarity, and here the Cortronics technology lays claim to genuine machine intelligence. HNC said Cortronics automatically recognizes higher-level objects by logging the similarities of their component parts to form an ascending hierarchy of related objects. With a face, for instance, the Cortronics hierarchical abstractor self-organizes a higher-level face object with the component parts. Likewise, given enough examples it will also self-organize separate categories for male vs. female faces.
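
A minimal way to picture "association by similarity" (again an illustrative Python sketch; the grouping rule is my assumption, not HNC's): represent each example as a set of component tokens and group examples whose token sets overlap strongly.

def jaccard(a, b):
    # Overlap between two token sets, from 0.0 (disjoint) to 1.0 (identical).
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

face_a = {"eyes", "nose", "mouth", "beard"}
face_b = {"eyes", "nose", "mouth", "long_hair"}
car    = {"wheels", "doors", "windshield"}

print(jaccard(face_a, face_b))   # 0.6 -> both fold into a higher-level "face" category
print(jaccard(face_a, car))      # 0.0 -> a separate category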

The utility of these self-organizing hierarchical abstractions is made plain when an application seeks to identify specific types of objects from real-time inputs. Ordinarily, input data streams contain a mishmash of irrelevant information that often confuses attempts at automatic recognition.

'Find the males'

HNC said Cortronics uses its hierarchy of abstract objects to succeed. Thus, if instructed to "find the males" in a scene, it will tentatively activate all instances of the desired objects and their component parts and will then recognize and "expect" other, related features to appear when the scene changes.

"This ability to make objects of interest pop out from a cluttered background was another holy grail of artificial intelligence," said Hecht-Nielsen, an adjunct professor at the University of California, San Diego, specializing in neural networks and computing. "It's how my students at UCSD helped solve the cocktail party problem. In essence, Cortronics forms expectations about what it should recognize next, enabling it to track a single voice among many by recognizing which sound must have followed from earlier ones."

In the cocktail party problem, where five people are speaking simultaneously, Cortronics creates, on the fly, a list of "next utterances" that could likely follow from the current one, then tracks whichever next word, from five separate voices, matches. This strategy requires that the "first" word be manually flagged as belonging to the voice of interest. Once that's done, Cortronics pays attention to just the selected voice.

"This is the way people's brains work too. To prime the pump with the first word, listeners will move their head slightly closer [to the speaker] to increase the signal-to-noise ratio, but once the first word is recognized, they follow that voice based on their expectations of what next words could logically follow from the current ones," said Hecht-Nielsen.

The Cortronics Technology Course will be held March 25 to April 12 in San Diego.