12th April 2007 - 02:37 PM
I would be interested to know what the limitations of the 3D chips are. Why can't I get one for my PC? Do large 3D chips require special cooling? Is that why they are only being used in cell-phone chips (small processing chips) and supercomputers (advanced cooling)? Perhaps they don't want to compete with Intel or AMD, or won't reveal it yet.
12th April 2007 - 03:29 PM
IBM again shows they are pioneers.
If it weren't for IBM doing proof-of-concept research, a lot of the technology used today wouldn't exist.
12th April 2007 - 04:38 PM
At the bottom of the article it says they *are* working on a PC chip:
"IBM is developing this advanced technology by converting the chip that currently powers the fastest computer in the world, the IBM Blue Gene supercomputer, into a 3-D stacked chip"
12th April 2007 - 05:14 PM
Just imagine a "sandwich" of several multi-core POWER processors stacked
12th April 2007 - 06:59 PM
Just imagine 5 GHz, quad-core, 2D, 45 nm Cell BEs stacked in 10 layers on a single 3D CPU, for a count of 40 Cell cores on a single chip.
Now imagine those CPUs crunching real-time medical imaging data in your doctor's office and letting you know you have a cancerous tumor the size of a pinpoint in your lung that will be a problem in 5 years if you don't do anything about it.
Citizen of the Universe
12th April 2007 - 10:43 PM
Way to go, baby! I wonder where this will put the new Blue Gene computers in terms of performance once they switch out the processors. This stuff reeks of Kurzweil. I love it!
13th April 2007 - 03:05 AM
Don - yes... I am imagining little stacks like that. Hexagonal shape, actually; little stacked pyramids bonded atop little copper hexagons that stack together against each other to form a spherical structure - a "buckyball" of bonded copper hexagons; a shell that provides power and cooling to all those thousands of individual processor-stack pyramids facing each other on the inside.
Each individual processor structure incorporates its own local system memory directly, the memory layers being interleaved between the processing layers. At the outer edges of each individual processing layer sit 'fringes' of optical communicators - tiny quantum-dot optical resonators, each tuned to its assigned frequency, and each capable of acting equally as emitter and detector. In other words, each individual processor stack within the sphere is in constant full-duplex, full-bus-area communication with every other individual processing stack - simultaneously.
Now, with processing bandwidth like that, tell me what you could do with it. I believe Isaac Asimov once thought up a use for such a thing... he had the principles all wrong, of course, but given the era that's only to be expected. The general size and shape all fit, and the total available processing power *may* be enough to run one of Mr. Asimov's old robots...
What do you think?
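For what it's worth, the "every stack talks to every other stack simultaneously" part scales its link count quadratically. A quick back-of-the-envelope sketch (the stack count and per-link rate below are invented numbers, not anything from the article):

```python
# Back-of-the-envelope for an all-to-all optical mesh like the one
# described above. With n processor stacks, a dedicated full-duplex
# channel between every pair needs n*(n-1)/2 links, so the channel
# count (and the aggregate bandwidth) grows quadratically with n.

def full_mesh_links(n_stacks):
    """Number of pairwise channels in a full mesh of n_stacks nodes."""
    return n_stacks * (n_stacks - 1) // 2

def aggregate_bandwidth_tbps(n_stacks, gbps_per_link=10.0):
    """Total mesh bandwidth in Tb/s; gbps_per_link is an assumed rate."""
    return full_mesh_links(n_stacks) * gbps_per_link / 1000.0

print(full_mesh_links(1000))           # 499500 channels
print(aggregate_bandwidth_tbps(1000))  # 4995.0 Tb/s
```

So a sphere of a thousand stacks needs about half a million tuned resonator pairs - which is exactly why each one acting as both emitter and detector matters, since that halves the hardware per channel.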
13th April 2007 - 04:09 AM
Picture the size of the heat sink on today's multi-core chips. Now imagine that extending from both sides of a two-chip CPU sandwich. Now, what about a 3rd layer? Cooling seems to be the limiting factor for stacking many layers. The vias can conduct some heat away from the inner layers - and that may be sufficient for low-power applications.
13th April 2007 - 11:15 AM
Heat sinks will not be the primary means of removing heat from these types of CPUs. Microfluidics will take over that work and should fit very nicely into this layering tech. There have been several articles covering microfluidic advances in the last 6 months, and we should see it coming online about the same time this technology does.
The next 5 years is going to be very interesting.
14th April 2007 - 06:42 PM
There's obviously gonna be a huge issue with cooling.
A 2D chip has essentially its own thickness as the distance heat has to travel to escape. A 3D chip hugely increases this distance, and it will also tend to build up heat in its center. I would be extremely surprised if they managed to stack more than two processors (let's say the processors we have in our computers today) without it resulting in a horrendously large cooling system (relative to the chip).
Putting cooling fans and such on each side of the chip does not remedy this problem, since the internal portions of the chip are nowhere near as exposed to the heat extraction.
I wouldn't be surprised if IBM was also doing active research in nanofluid coolant, or systems in general capable of cooling this kind of chip from the inside and from the outside.
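The "heat has farther to travel" point can be made concrete by treating each die and its bond interface as a 1-D thermal resistance in series with the heat sink. A toy model, where every number below is an illustrative assumption and not an IBM figure:

```python
# Toy 1-D series-resistance model of a die stack cooled only from the top.
# Dies are numbered 1 (bottom, farthest from the sink) to n (top). The
# heat of the lowest i dies must all cross the interface between die i
# and die i+1, so the bottom die's temperature rise above ambient grows
# roughly quadratically with the layer count.

def bottom_die_rise(n, p_die=50.0, r_layer=0.05, r_sink=0.2):
    """Temperature rise (K) of the hottest (bottom) die above ambient.

    All parameters are made-up illustrative values:
    p_die   -- watts dissipated per die
    r_layer -- K/W thermal resistance of one die-to-die interface
    r_sink  -- K/W thermal resistance of the external heat sink
    """
    # Interface i carries the heat of the lowest i dies; summing over
    # the n-1 interfaces gives p_die * r_layer * n*(n-1)/2.
    through_layers = p_die * r_layer * n * (n - 1) / 2
    # The whole stack's power crosses the heat sink.
    through_sink = n * p_die * r_sink
    return through_layers + through_sink

for layers in (1, 2, 4, 10):
    print(layers, bottom_die_rise(layers))
# 1 -> 10.0 K, 2 -> 22.5 K, 4 -> 55.0 K, 10 -> 212.5 K
```

With these made-up numbers a two-die sandwich runs a manageable ~22 K over ambient, but ten dies blow past 200 K - which is why the posts above lean on via conduction and in-stack fluid cooling rather than on ever-larger external heat sinks.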
22nd September 2007 - 04:44 PM
The concept itself is not a breakthrough. Proof of concept was developed way back in 2000... companies are even named for this technique - hence Matrix Semiconductor Co. It's a new spin on an old idea...
However, making real stacked chips run, with all their heating and electromagnetic issues, is a breakthrough...
22nd September 2007 - 06:21 PM
Sh**, I wrote a sci-fi story about it once :)
This is about the ship (The AC-DC) they were building.
" It was a marvelous creation. Harder even than diamond, of a material with a molecular structure very like, although not as bendable as, the MonoRail's. It was almost impossible to cut; in fact you more or less poured the material out. In that way it was much more flexible than steel: as long as you worked the semi-liquid form in space, where gravitational forces were negligible, you could mould it very much to your liking. But as soon as it came into contact with air it locked. They had sandwiched two hulls together as the outer hull. In between those layers they had filled it up with another new plastic material. It had started as an idea from Norway and Sweden. They wanted to make flexible Integrated Chips ( -IC- ) that you more or less would be able to weave into textiles, car bodies, you name it.
In late 1999 the Norwegian company had a viable product. Microsoft bought up the company in 2001, then basically put the product on a shelf. Around 2008 they started to experiment with it again. It took ten more years for Microsoft to see the far-reaching implications of that new material. Only when the new wormhole theory had gone from theory to test did they try to push their new material. Ac-Dc was a prototype in many ways. After they had poured the first mould of the hull, they more or less glued this IC-plastic around the inner hull. One of the very nice things about it was that you could put layers of IC-plastic over each other, layer after layer after layer. Then other new principles of self-integrating, almost organically growing Neuron Networks could knit it together in three dimensions. To help that network develop, Ac-Dc had constant access to both Luna's and Earth's biggest supercomputers. After a year and a half, this new type of GiGa-computer was starting to make almost human responses. Certainly not those of a grownup, but it seemed to become more and more self-aware. As for the Turing test, it had taken care of that after only eight months.
As it was cooled by space itself it was also superconductive, which meant that strange new quantum phenomena like super-tunneling with FTL capabilities could come into play. All in all it promised to become something very near a quantum computer, expected to work in a fashion that, before, you would only have expected of God. That is, to be able to give you an answer even before you had started to ask the question. Which could be understood as: it could make wishes come true. So there you had it, two diamond outer hulls with a superconductive three-dimensional GiGa-computer glued in between. In a very real way one could say that the ship became the computer. Of course there were independent backups, and more ordinary computers too. It was after all the very first trial run, and nobody could really prophesy whether the ship was going to be sane, even if it woke up. And certainly, the true awakening would only happen when the new multi-parallel programming language was installed. That language was an ongoing development of the GiGa itself. The ordinary supercomputers found it too complex to be able to help. And even humans had problems seeing the concept behind it. "
I just knew that I had gotten something wrong there. Sorry IBM, yours is the honor, not Microsoft. (Ah well, I did write that they (MS) 'sat' on it for quite a while, didn't I? :)