Technology Stocks : Applied Materials No-Politics Thread (AMAT)


To: Cary Salsberg who wrote (8502), 1/17/2004 4:38:30 PM
From: Proud_Infidel
 
IBM Raises Number of New Hires for 2004
Saturday January 17, 4:25 pm ET
By Caroline Humer

NEW YORK (Reuters) - IBM will hire 15,000 new employees -- 50 percent more than originally planned -- in areas like software and services because of a rebound in the economy, a top executive said on Saturday.

Armonk, New York-based International Business Machines Corp. (NYSE:IBM - News), which has faced criticism for its plans to shift some U.S. workers to cheaper locations such as India and China, will add about 4,500 net jobs in the United States this year, according to Randy MacDonald, IBM's senior vice president for human resources.

"We are going to hire more in the U.S. than we shift" overseas, MacDonald said in an interview.

About 30 percent of the 15,000 new positions, or 4,500 jobs, will be net new hires in the United States, he said.

In total, the move will increase IBM's workforce by nearly 5 percent, to about 330,000 or more depending on attrition. That would be the company's highest headcount since 1991, when IBM began a decade-long overhaul under former Chief Executive Louis Gerstner.

More than half of IBM's employees are outside the United States.

The company plans to move up to 3,000 jobs from the United States to developing nations in 2004, an IBM spokesman said. A Wall Street Journal report in December that said the company would shift 4,730 software jobs to India was incorrect, MacDonald said.

NEW GROWTH CYCLE

The raised hiring target follows news from the world's largest computer company that customers started buying more technology during the fourth quarter. IBM Chief Financial Officer John Joyce described 2004 on Thursday as "the year when the IT industry will begin its next growth cycle."

The technology industry is emerging from a three-year slide caused by a weak economy, computer overcapacity and cuts in corporate spending. While consumer spending recovered in 2003, corporate buying lagged.

Last fall, with signs of growth starting to emerge, Chief Executive Samuel Palmisano said IBM would hire 10,000 new employees in 2004 in "hot" segments, such as software for doing business over the Internet and services to support wireless technology and the growing Linux operating system.

The decision to add 5,000 jobs was made in the past few weeks, MacDonald said.

Discussing where the hiring will occur, MacDonald said placement would depend on the availability of technical skills and customer needs, as well as cost; IBM's plans do not favor one geographic region over another. He noted that Asia-Pacific is the company's fastest-growing region.

Moving jobs overseas has become a hot political issue as U.S. corporations build foreign workforces to try to cut costs. IBM services competitor Accenture Ltd. (NYSE:ACN - News), for instance, plans to double its staff in India to 10,000 this year.

MacDonald said IBM also would raise the share of its new staff hired directly from college to 50 percent, from 40 percent. The rest will have previous professional experience. In 2005, IBM plans to increase the share of new hires coming straight from a university to 60 percent, he said.



To: Cary Salsberg who wrote (8502), 1/19/2004 9:38:07 PM
From: Proud_Infidel
 
Analysis: Scaling is dead; long live innovation
by Mike Clendenin, EETimes
Silicon Strategies
01/19/2004, 12:19 PM ET

TAIPEI, Taiwan -- In the early days of 0.13-micron manufacturing process technology, it looked like Moore's Law was in danger of becoming Moore's Flaw: that the lawful doubling of transistors every 18 months would be thwarted by skyrocketing design costs and exotic new materials that resulted in poor yields.
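(For scale, a minimal back-of-the-envelope sketch of that 18-month doubling schedule, in Python; the starting transistor count here is an invented example, not a figure from the article:

    # Moore's Law doubling sketch: count doubles every 1.5 years.
    # n0 is an arbitrary assumed starting transistor count.
    def transistors(t_years, n0=40_000_000):
        return n0 * 2 ** (t_years / 1.5)

    for years in (0, 1.5, 3.0, 4.5):
        print(f"after {years} years: {transistors(years):,.0f} transistors")

Four and a half years on that schedule multiplies the count eightfold, which is why any slowdown in scaling compounds so quickly.)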

Two years later, it's a brand-new day. The 130-nanometer node is well-established on the periphery and poised for a drive into the mainstream, as the IC-manufacturing industry gains optimism about the technical -- if not the economic -- feasibility of 90-nm design.

Indeed, although many technologists, designers and process engineers are wary after the egg-in-the-face ordeal of a dual transition to 130-nm design rules and 300-mm wafer processing, these battle-weary veterans seem primed for another fight. So when a guy like Bernard Meyerson, chief technology officer of IBM Microelectronics, tells a roomful of Taiwanese designers and process engineers that traditional CMOS scaling is dead, they take it in their stride.

"There is paradigm shift here, and it is a very important one," Meyerson said at the Semico Impact Conference held here recently. "The diminishing returns you get from scaling mean that innovation -- the harder thing -- actually has to happen faster and faster just to stay on the expected performance line. And scheduling innovation is something that makes engineers very nervous."

The audience listened keenly as Meyerson told how a dramatic rise in power density, brought about by the traditional brute scaling of process technology dictated by Moore's Law, has already yielded silicon that could iron a pair of pants and is on a curve heading toward supernova.

In other words, physics is getting ugly. The days of relatively generous gate-oxide thicknesses, on the order of 30 angstroms, have given way to the angst of dealing with less than 10 angstroms, making brute scaling nearly impossible. That's why some in the industry, including Meyerson, believe that classical CMOS scaling is no longer possible.
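(To put those angstrom figures in perspective, a small sketch assuming roughly 3.5 angstroms per molecular layer of SiO2 -- the per-layer figure is an approximation for illustration, not a number from the article:

    # How many molecular layers remain in the gate oxide, assuming
    # ~3.5 angstroms per SiO2 layer (an approximation, for illustration).
    LAYER_ANGSTROMS = 3.5

    for oxide in (30, 10, 8):
        print(f"{oxide} angstrom oxide: ~{oxide / LAYER_ANGSTROMS:.1f} layers")

At fewer than three molecular layers, there is simply very little left to thin.)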

Perhaps the reason for the sense of optimism about 90-nm, and subsequent 65- and 45-nm, nodes is that people have seen this shift coming. In some cases, engineers have experienced the problems already, at 130-nm, where low-k became a synonym for low yield, resolution enhancement crept into more masking layers and the industry made the uncomfortable shift from "design rules" to "design guidelines" -- meaning that designers really would have to talk to process engineers.

The apparent scaling roadblock for bulk CMOS also mirrors earlier experience with bipolar processes, another casualty of increasing power density. Today, standby leakage in CMOS is gaining importance over active power during the transition from 130-nm to 90-nm, just as interconnect replaced transistor-level performance as the leading problem during the run-up to 130-nm manufacturing.

Interconnect delay is still a big problem. And feature defects, not just particle defects, loom as another specter. One trait of the sub-micron era may be unprecedented levels of collaboration across many sectors of the chip industry.

"The things that must be done to make nanometer design successful will bring together the design group and the manufacturing group in a level of cooperation not seen yet, or else they won't succeed," said Wally Rhines, chairman and chief executive officer of EDA tool vendor Mentor Graphics Corp.

That seems a tall order in an atmosphere where the schedule to tapeout, not the time-to-volume, is usually a designer's main concern. But in the post-130-nm world, data suggests that the efforts following physical verification eat up roughly 30 percent of design time, according to PDF Solutions, a process technology consultancy.

Lots of EDA tool vendors and foundry executives have talked about design complexity outpacing productivity. But methodologies that encourage collaboration and make yield a concern for everybody have not yet arisen to bridge the natural division between design and manufacturing.

One of the key challenges for EDA tool vendors will be how to quantify design flows so that process engineers can go back to designers and suggest changes. A built-in function that offers specific recommendations based on previous yield data will make it easier for process engineers to justify changes to a designer, said Mentor's Rhines.

It will also create a more systematic approach to applying design guidelines, suggests John Kibarian, president and chief executive officer of PDF Solutions. "You don't know that adding another redundant via is always a good thing," he said. "It could needlessly stress a low-k film in a design where the yield will already be high enough. So it wouldn't make economic sense."
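(A hypothetical sketch of that economic argument; every number below is invented for illustration:

    # Tradeoff sketch: a redundant via cuts via failures (both copies
    # must fail) but is assumed here to add low-k stress risk.
    def net_yield(p_via_fail, n_vias, p_stress_fail, redundant=False):
        p_fail = p_via_fail ** 2 if redundant else p_via_fail
        via_yield = (1 - p_fail) ** n_vias
        stress_yield = (1 - p_stress_fail) if redundant else 1.0
        return via_yield * stress_yield

    # A design whose via yield is already high enough:
    print(f"single vias:    {net_yield(1e-10, 10_000_000, 0.0):.4f}")
    print(f"redundant vias: {net_yield(1e-10, 10_000_000, 0.02, redundant=True):.4f}")

With these assumed numbers the stress penalty outweighs a negligible via-yield gain, which is Kibarian's point.)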

Another problem that's starting to see some potential solutions is the data flow between designers and mask-making shops. With design rules mushrooming, the amount of data that runs through verification tools is having a serious impact on the turnaround time for mask sets.

Short of buying supercomputers to process tomorrow's designs, one trick the EDA industry is looking into involves using GDSII-based data flows to speed up the mask data preparation. Rhines argued that such an approach would preserve the hierarchy of designs so that blocks of data are not repeatedly tested, as is done in today's more linear methodology. Complementing this new method would be the optimization of EDA platforms for parallel processing, as well as multithreading techniques that would chew through information faster.

'Golden benchmark'

"If verification tests every geometry every time, you will never get through the process," said Rhines. " You need to maintain the hierarchy and distribute it across many computers so you can take another order of magnitude out of processing time and still keep the golden benchmark of overnight processing per level."

To duplicate the rapid advances of the past, the industry is also looking to rely more heavily on reusable intellectual property. Until now, processor cores have been at the heart of the IP industry, and that will probably not change. But I/O and memory IP, as well as embedded test and repair cells, are increasing in importance.

So is the relationship among IP vendors, foundries and designers. Adam Kablanian, president and chief executive officer of IP vendor Virage Logic Corp., believes in a shared responsibility during a product ramp, in which the customer, IP supplier and foundry will collaborate on yield optimization. "At 90-nm, it will take three to tango," he said.

Not everyone need charge into the brave new world of sub-130-nm design, though. At this point, foundry executives concede that 90-nm production on 300-mm wafers still makes sense only for large-die, high-density, high-margin and high-volume products; in many cases the same holds at 130-nm, too.

Indeed, many companies may find that their performance and cost-savings criteria are met by 180-nm and 130-nm processes, said Ben Lee, Asia-Pacific managing director of FPGA maker Altera Corp. "The list of products at 90-nm is shorter and shorter. Many may never go," he said. "And for those products that do go, timing will be very critical. If you go too early, the costs will really hurt you."

One trend gaining prominence in the scaling debate is the role of packaging. Clearly, it is a consideration from the get-go in complex chip designs, such as FPGAs. "I/O now determines performance more than it ever did," said Ivo Bolsens, chief technology officer of Xilinx Inc. "We start by designing the package. It's a bit of the world upside down."

Ever since the advent of low-k dielectrics, foundries have dedicated more internal resources toward packaging to reduce the risk of die/package stress cracking the delicate dielectric layer. Both Taiwan Semiconductor Manufacturing Co. and United Microelectronics Corp. have forged closer relationships with packaging houses in an effort to preempt, or at least lessen, that possibility.

Packaging is also emerging as the stealth route to system-on-chip (SoC) designs. Using a system-level diagram of a third-generation cell phone to illustrate the complexity of merging disparate technologies, Jackson Hu, CEO of foundry UMC, said, "SoC has probably been oversold. . . . It's not easy to put all these processes into a single die."

That may push more and more designers to consider a system-in-package approach. But the SiP has its own inherent difficulties. If one chip in a three-, four- or five-dice package fails during testing, then the whole package is worthless. The same goes for a module-level approach. Of course, Hu noted that this fact will create "opportunities" for those companies that can nail down a solution.
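(The arithmetic behind that risk is simple compound yield; the 95 percent per-die figure below is an invented example:

    # If any die in the package fails at test, the package is scrap,
    # so package yield is the product of per-die yields (assumed 95%).
    for n in (3, 4, 5):
        print(f"{n}-die package: {0.95 ** n:.1%} survive")

Even at a healthy 95 percent per die, a five-die package loses nearly a quarter of its units.)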

IBM's Meyerson said he started to notice links among these myriad issues about two years ago and concluded that novel, complementary techniques such as strained silicon, silicon-on-insulator and FinFETs would gain prominence. Not too many people were listening back then, but they are now. "It's good for the industry as a whole when this sort of consensus takes hold," Meyerson said. "At least companies are finally coming to grips with the new challenges at hand."