DRAM Makers Plan Move to Detroit

January 7, 2011
Russell Fish

No, not really, but they might as well.

According to a report by Cadence subsidiary Denali, the worldwide DRAM industry lost $30B from 2007 to 2009.  2010 allowed a few of the healthier companies to show a profit.  Now it looks like things are turning back down.  Think of a drowning man who comes up for just one gulp of air only to slide back beneath the surface.

This is a sick industry.  In the midst of this long-running financial bloodbath, every three years each company has to go hat in hand to the banks, the equity markets, or the government and beg for $3B to build the next generation of DRAM fabs.  The current transition to the 4x-nm node will probably sink the weaker players.

"Let's see... you lost $1B a year for the past 3 years, your principal product is selling for half the price this year as last, and you want me to give you a loan?"  That's the reason much DRAM CAPEX funding comes from "dumb money" i.e., taxpayers - German taxpayers for Qimonda, Taiwanese taxpayers for Nanya, Promos, Powerchip, and Inotera, and Japanese taxpayers for Elpida.

The GM and Chrysler bailouts look perilously parallel to the billions of euros German taxpayers flushed down the now-defunct Qimonda.  At a certain point even the densest politicians get the message of economic failure and quit bailing, but usually not until the hole in the ground has consumed prodigious amounts of cash.

The parallels to Detroit don't stop at finance.  Each year for the past 20, Detroit added new chrome, stretched the wheelbase, slapped on racing stripes, and declared the "All NEW Amazonian Sportfire Plus for 1989."

DRAM technology is similarly frozen in time, with only increased density and cratered prices to show for the industry's billions in investment.  Sense amps, decode logic, charge pumps, and I/O all date from the 80s.  When your most significant architectural achievement in two decades is transferring data on two clock edges instead of one, you deserve to go out of business, become a ward of the State, or move to Detroit.
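To put a number on how modest that achievement is, here is a back-of-the-envelope sketch in Python.  The bus width and clock are illustrative assumptions (a 64-bit interface at 133MHz), not any particular vendor's spec:

    BUS_WIDTH_BYTES = 8  # assumed: a 64-bit memory interface

    def peak_bandwidth_gbs(clock_mhz, transfers_per_clock):
        """Peak transfer rate in Gbytes/s for a simple memory bus."""
        return clock_mhz * 1e6 * transfers_per_clock * BUS_WIDTH_BYTES / 1e9

    sdr = peak_bandwidth_gbs(133, 1)  # single data rate: one transfer per clock
    ddr = peak_bandwidth_gbs(133, 2)  # double data rate: both clock edges

    print(f"SDR at 133MHz: {sdr:.1f} GB/s; DDR at 133MHz: {ddr:.1f} GB/s")
    # SDR at 133MHz: 1.1 GB/s; DDR at 133MHz: 2.1 GB/s -- a 2x win, once.

A one-time doubling of the transfer rate, in twenty years.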

Computer Memory Trends

Even memory consumption may be trending the wrong way.

According to Morgan Stanley's Frank Wang, 58% of DRAMs go into PCs.  Another 25% go into upgrades.  The product with the largest DRAM appetite and the most upgrades is the desktop computer, which nominally ships with 4Gbytes.  Unfortunately, the desktop computer business is necrotic.  The laptop standard is 2Gbytes, and that market is still growing.

However, the hottest trend in computers is tablets, with expected sales of 50 million units in 2011.  The industry-leading iPad ships with a minuscule 256Mbytes.  Nevertheless, tablets have destroyed the netbook business and are clawing into the notebook underbelly.

On the upper end, cloud server clusters are fitted with 16-64Gbytes, but much of the cloud's economy comes from sharing its megapower with less powerful, less memory-rich desktops, laptops, and tablets.

As the computer industry redefines itself, the DRAM makers are dragged along for what could be a very bumpy ride.

Legacy Architectures Hit the Wall

While DRAM enterprises were burning mountains of cash, CPU makers were printing it big time.  INTEL's most recent gross margin nudged 65%.  Mostly the CPU industry was carried along by the inertia of the x86 legacy, papering over its shortcomings with Detroit-style chrome and tailfins and Moore's Law process improvements every few years.  Despite the shower of riches, all was not well in CPU-land.

For three decades the best-known CPU architect in the world, Dr. David Patterson of Berkeley, beat his head against the laws of physics to squeeze more and more MIPS from each joule of energy.  Finally, faced with diminishing returns, he published what must be seen as the crowning achievement of his illustrious career:

Patterson's Three Walls

This simple rule of thumb was destined to join Moore's Law in defining the possibilities and the pitfalls of legacy computer architectures.

The Walls succinctly encapsulated the ultimate limitations:

Power Wall + Memory Wall + ILP Wall = Brick Wall

"Power Wall" means faster computers get really hot.

"Memory Wall" means 1000 pins on a CPU package is way too many.

"ILP Wall" means a deeper instruction pipeline really means digging a deeper power hole.

So it was that the Detroit model worked until 2004, when INTEL laid the Pentium 4 egg, followed by the never-produced Tejas.  At a reported 150 watts, 1000 pins, and 40 pipeline stages, the Tejas was the crackup that happened when legacy architecture crashed full throttle into all three of Patterson's Walls.

From that moment forward every sentient computer architect on the planet knew they were watching the end of the x86 era.  The behemoth's struggle against the inevitable was like watching a mastodon slowly tire while sinking into a tar pit.  Multi-core designs aggravated the Memory Wall.  Downsized Atoms were toy computers that still ran hot.  Merging CPU and GPU was cute, but as Patterson predicted, computer architecture was up against a brick wall.

Serendipity May Save the DRAM Makers

Through no effort of their own, DRAM makers may receive a windfall from the travails of the CPU makers.

CPUs are moving to DRAMs.

CPU architects have seen the solution to Patterson's Walls for years.  An obscure designer from Mountain View filed a 1989 patent describing how to put a CPU in the middle of an existing DRAM main memory.  Six years later Patterson himself proposed adding main memory DRAM to a CPU.  INTEL and many others licensed the Mountain View patent, and Patterson spent half a decade designing and building the IRAM, but the world remained mesmerized by the x86 architecture.

Now the CPU business is about to fall into the laps of the barely solvent DRAM makers as CPUs merge into DRAM.

An obscure design group from Dallas claims to have broken Patterson's Walls by refactoring the division between CPU and main memory.  They call it TOMI™ Technology and claim to put a quad-core CPU in the middle of a commodity DRAM chip.  The prototype, designed on a turn-of-the-century 110nm 64M DRAM process, clocks at 500MHz while consuming 23mW per CPU.

Furthermore, according to Microprocessor Report, "Venray's prototype can deliver 4.6 times the bandwidth of Intel's best server processor despite using 10-year-old DRAM technology."

The 64M parent DRAM sells for 65 cents.  The 4 CPUs add only 20% to the die size of the DRAM, so the new technology is fairly cost-effective.
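Running the quoted numbers, in a sketch that assumes cost scales roughly with die area and ignores yield and packaging:

    dram_cost = 0.65          # 64M parent DRAM, dollars
    area_overhead = 0.20      # the four CPUs add 20% to the die size
    cpu_power_w = 0.023       # 23mW per CPU at 500MHz
    cpus = 4

    part_cost = dram_cost * (1 + area_overhead)   # if cost tracks die area
    total_cpu_power_mw = cpus * cpu_power_w * 1000

    print(f"~${part_cost:.2f} per part, ~{total_cpu_power_mw:.0f}mW for all four CPUs")
    # ~$0.78 per part, ~92mW for all four CPUs

Seventy-eight cents and well under a tenth of a watt for a quad-core part, if the quoted figures hold up.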

Financial Implications for DRAM Makers

This means that some of the foundry DRAM capacity at companies such as Nanya and ProMOS might get sopped up making CPUs in DRAMs.  Nanya might no longer have to live off Formosa Plastics' credit card.  The timing could also be propitious for ProMOS, which most recently reported losses of $1 for every $2 in revenue.  Struggling makers like the currently reorganizing Hynix might decide that CPUs in DRAMs could make more money than $1.80 2Gbit DRAMs.

Fabs that can't finance the migration to the latest process node might discover that making CPUs in DRAMs at 70nm or even 90nm is a viable business.

Or they could just move to Detroit and seek a government bailout.