DavidD
I found in a textbook that the electromagnetic field strength and the magnetic induction are proportional to the square of the frequency (and to the electron's acceleration):
E ~ a ~ f^2
B ~ a ~ f^2
W = E + B ~ f^2, where f is frequency.
The electromagnetic field intensity I is:
I ~ E^2 + B^2 = f^4 + f^4 ~ f^4.
So which is the actual radiated energy: f^2 or f^4?
Anyway, even if the radiated energy were only linearly proportional to frequency, it would still mean that a higher-frequency processor works less efficiently. Am I right? Or is this radiated energy so very small that it has almost no influence on processor efficiency?

P.S. For laymen: don't assume this is why processors run hotter at higher frequency. Higher-frequency electricity passes proportionally more easily through a capacitor, and two wires with a dielectric between them behave like a capacitor.
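To get a feel for the f^2-vs-f^4 question, here is a back-of-envelope sketch using the Larmor formula, under which the radiated power of an accelerated charge scales as a^2, i.e. as f^4 for a fixed oscillation amplitude. The 3 GHz frequency and ~1 nm amplitude below are illustrative assumptions, not measured values:

```python
import math

# Larmor formula: radiated power of an accelerated charge,
#   P = q^2 * a^2 / (6 * pi * eps0 * c^3)
# so P scales as a^2, i.e. as f^4 for a fixed oscillation amplitude.
Q = 1.602e-19      # electron charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m
C_LIGHT = 3.0e8    # speed of light, m/s

def larmor_avg_power(f_hz, amplitude_m):
    """Time-averaged radiated power of one electron oscillating
    sinusoidally at frequency f_hz with the given amplitude."""
    omega = 2 * math.pi * f_hz
    a_peak = omega**2 * amplitude_m     # peak acceleration of the sinusoid
    a_sq_avg = a_peak**2 / 2            # time average of a^2 for a sinusoid
    return Q**2 * a_sq_avg / (6 * math.pi * EPS0 * C_LIGHT**3)

# Assumed numbers: 3 GHz oscillation, ~1 nm amplitude.
p = larmor_avg_power(3e9, 1e-9)
print(f"~{p:.1e} W per electron")    # ~3.6e-31 W: utterly negligible
```

Even multiplied by a huge number of carriers this stays far below the watts dissipated resistively, which is consistent with Enthalpy's later remark that radiation from chips is negligible.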

My philosophy is this: if it is impossible to increase the frequency because of the limited speed of electricity, then maybe it is possible to increase the number of transistors while decreasing the frequency, achieving higher and higher efficiency? But increasing the number of transistors, and thus the number of wires between them, also increases resistance and thus electricity consumption... So my only hope that processor computing power will keep increasing is that maybe, with more transistors and a much lower frequency, it is possible to create more efficient and more powerful processors. For example, 10,000 current processors of 1 cm^2 each could fit in 1 m^2. If they were made in 3D, like neurons, then I hope a million current processors could fit in a 1 cm^3 cube, but they would work at a million times lower frequency (3 kHz), and thus one such 3D processor would be no faster than a current one. The only hope is to increase the size in 2D, but then you would need a 100 m^2 processor, which would work at 10,000 times lower frequency (300 kHz) than a current 0.01 m^2 processor; however, this 100 m^2 processor would fit a million times more transistors, and thus would be 100 times faster than a current one.

A 1 m^2 processor would work at 30 MHz, would have 10,000 times more transistors, and would work 10 times faster than a current 3 GHz processor. So my conclusion is probably this: 3D processor technology is impossible (electricity would have to travel not serially, processor after processor, but in parallel through all processors at the same time...), like neurons; but even neurons are organized in the cortex, which is 2D, though neurons have a great many very small synapses and so in some sense still connect in 3D...
DavidD
I mean the cerebral cortex of the human brain has an area of about 0.22 m^2. In that area a processor with about 2,000 times more transistors would fit, with a trillion (10^12) transistors, which is still fewer than the synapses in the brain (<10^15). And by the way, it would work at about a 50 times lower frequency (60 MHz). Given that the cerebral cortex is 1.5 mm = 0.0015 m thick and a neuron is about 0.000001 m across, 0.0015/0.000001 = 1500 neuron layers can fit; so if there were 1500 times more transistors (1500 processor layers), the number 10^15 would be reached. Then chipmakers could fully make 0.22 m^2 processors with 1000 processor layers, one on top of another, and such a processor would work at about 40,000 Hz, very close to the working frequency of neurons.
Ron
Hi David,
The efficiency is the output power divided by the DC power. Those are going to depend on your Imax and your breakdown voltage. At higher frequencies you need semiconductors with less inter-electrode capacitance and inductance (intrinsic reactance), so there are a lot of trade-offs between frequency and efficiency. I've done a lot of RF research on things like gate length, source-drain resistances and, of course, doping levels. There is no cut-and-dried answer, and a lot of factors must be taken into consideration.
Peace,
Ron
guiding_light
The processor power calculation has many components (such as leakage, etc.), but the most basic one is the power needed to apply a voltage and charge a remote capacitor. This matters most in practice because, most of the time, a transistor is only turned on for a brief moment rather than left on to carry current.

The energy to charge a capacitor is given generally by 0.5*C*V^2, where C is the remote capacitance (but actually this should include lots of components in the budget). This energy is then multiplied by the clock frequency to give the power consumed by this charging, on the assumption that it is done every cycle by the transistor. Finally, you would have to estimate how many transistors are doing this on average per cycle (because most should be sleeping) and multiply accordingly.

The biggest error source in this estimation, of course, is C and the number of active transistors on average per cycle.

There is another way to get an estimate of the transistor power which avoids having to deal with C. Some companies publish their transistor data, which would give how much current a transistor outputs at a given voltage. For example, a state-of-the-art ballpark figure at ~1V is ~1 mA/um * ~0.1 um isolation width => ~0.1 mW per transistor. Then multiply this by how many transistors are actively on during an average cycle. A million? Then power ~100 W during an average cycle. That is a little high, so I estimate in most CPUs there are several hundred thousand actively on transistors during an average cycle at full load.
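The two estimates above can be put side by side in a few lines. All inputs below are illustrative (the ~1 fF load is my own assumption; the 1 mA/um drive current at 1 V and 0.1 um width are the figures from the post):

```python
# Reproducing the two power estimates above with rough, illustrative numbers.

def cap_switching_power(c_load_f, v_dd, f_clock, n_active):
    """0.5*C*V^2 per charge event, once per clock cycle,
    for n_active transistors switching per cycle."""
    return 0.5 * c_load_f * v_dd**2 * f_clock * n_active

def drive_current_power(i_per_um_a, width_um, v_dd, n_active):
    """Per-transistor power from a published drive current at a given voltage."""
    return i_per_um_a * width_um * v_dd * n_active

# Capacitance route: assumed ~1 fF load, 1 V, 3 GHz, a million active transistors.
print(cap_switching_power(1e-15, 1.0, 3e9, 1_000_000))   # 1.5 W
# Drive-current route (the post's figures): ~1 mA/um at 1 V, 0.1 um width.
print(drive_current_power(1e-3, 0.1, 1.0, 1_000_000))    # 100.0 W
```

The two routes disagree by orders of magnitude, which illustrates the post's point: C and the number of active transistors per cycle dominate the error budget.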
DavidD
I was wondering why a processor has ~10^9 transistors and works at about 10^9 Hz, yet does not reach 10^18 FLOPS but only about 10^12 FLOPS. I thought many gates are needed for one floating-point operation, about a million transistors per FLOP; and since only ~10^5 transistors are loaded on average during computation, that explains a lot. I still thought that the mantissa, the floating point, multiplication and other tasks require complex gate interactions, but maybe you are 90% right... Then again, not all neurons are working either, nor at full activity...
DavidD
A possible explanation why only about 100 thousand transistors work instead of about a billion could be this: (3*10^4)^2 ~ 10^9, so the processor's edge is about 10^4 - 10^5 transistors long, and electricity can travel that length only in about t = 0.02 m / (3*10^8 m/s) ~ 6.7*10^-11 s,
f = 1/t = 1/(6.7*10^-11 s) ~ 1.5*10^10 Hz = 15 GHz. The speed of electricity is about half the speed of light, so the processor can work only at 15 GHz / 2 = 7.5 GHz. Transistors aren't connected very linearly, and this brings us down to about 3-5 GHz.
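The propagation argument above can be checked in a couple of lines. The 2 cm die length and the half-light-speed signal velocity are the post's assumed figures:

```python
# Signal-propagation bound sketched above: how fast can a clock edge
# cross a ~2 cm die if signals travel at about half the speed of light?
C_LIGHT = 3.0e8               # speed of light, m/s
die_length = 0.02             # m, assumed die edge length
signal_speed = 0.5 * C_LIGHT  # assumed signal velocity on-chip

t_cross = die_length / signal_speed   # one-way crossing time
f_max = 1 / t_cross
print(f"{t_cross:.2e} s -> {f_max / 1e9:.1f} GHz")   # 1.33e-10 s -> 7.5 GHz
```

This reproduces the 7.5 GHz ceiling in the post; real clock limits of course also depend on gate delays and wiring, not just straight-line propagation.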
guiding_light
Another possibility, of course, is that the transistors are transferring currents across different (source-drain) voltages. A lower source-drain voltage results in less current when the transistor is not saturated. If transistors are stacked (connected in series), that also reduces the voltage. This tends to save power, but speed is compromised, since you have a series of low-voltage, low-current transistors. But it would lead to a larger number of counted transistors than if we assumed they all operated at more or less the same voltage.
guiding_light
Redoing my estimate for the extreme case: ~0.1 V * 0.01 mA = 1 uW per transistor. So now we probably have tens of millions of transistors active during an average cycle at full load. This may add up to ~10% of the total. More reasonable, but maybe still too many are considered asleep?
Enthalpy
Radiation by chips is extremely low, forget it. All power is lost in resistivity.
Enthalpy
Processors don't have 1e12 flops. Rather a bit over 1e9 under uncommonly good circumstances.

You need many transistors to make a FLoating-point OPeration, and even many more transistors to store the data, and then these transistors are at rest most of the time. Hence the waste.

Take rather 70 transistors for a 1-bit full adder, then 64*64 monobit adders to make a floating-point multiplier: that's already a quarter-million. With some control logic and an adder, plus shifters etc, you're well over a million transistors per floating-point unit.

This would still allow putting 1000 units on a chip and getting 1e12 flops (cooling issues aside), but nobody knows how to feed these units with data. So today's processors have only some 4 cores with 3 SSE units each, on 128 bits, totaling only 24 equivalents of the floating-point units described above, instead of 1000. As these units are underused, they deliver less than 24*3 GHz flops: rather a few GFlops.
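The transistor budget above tallies up as follows. All counts are the post's own rough figures (70 transistors per full adder, a 64x64 adder array, ~1e6 transistors per floating-point unit), and I read the "24 equivalents" as 4 cores x 3 SSE units x 2 64-bit lanes per 128-bit unit:

```python
# Tallying the transistor budget sketched above (the post's rough figures,
# not datasheet values).
full_adder_t = 70                      # transistors per 1-bit full adder
mult_t = 64 * 64 * full_adder_t        # 64x64 adder array: ~a quarter-million
fpu_t = 1_000_000                      # round figure with control, adder, shifters

chip_budget = 1_000_000_000            # ~1e9 transistors on a chip
units_possible = chip_budget // fpu_t  # ~1000 FPUs would fit in principle
units_actual = 4 * 3 * 2               # 4 cores x 3 SSE x 2 64-bit lanes = 24

peak_flops = units_actual * 3e9        # upper bound at 3 GHz, before underuse
print(mult_t, units_possible, f"{peak_flops:.1e}")
```

The gap between `units_possible` and `units_actual`, plus underuse of the units that do exist, is the post's explanation for why delivered flops fall so far short of the raw transistor count.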

I couldn't tell from Intel's doc whether the three SSE per core are identical, or if one multiplies and the other adds.

Nearly everything else (80% of the chip) is cache memory, in an attempt to deliver the necessary memory bandwidth.

For more specialized chips where data flow is more predictable and organizable, especially video processors, you may put more processing power relative to cache memory. This needs highly specialized software that general applications on your computer don't have.

Vista and DirectX 10 take a step toward using the video processor more and the general processor less. This might become a heavy trend in the future. nVidia also provides a software package to run "more-general" computations on video processors.
DavidD
QUOTE (Enthalpy+May 1 2008, 01:05 AM)
Radiation by chips is extremely low, forget it. All power is lost in resistivity.

Yes, in resistivity, and maybe even more than from resistance: from the capacitor effect. You probably know this problem for AC electricity: R = U/(C*f), where R is the resistance, f the frequency, C the capacitance, U the voltage. So if the capacitance and the voltage between two wires don't change, then if you increase the frequency two times, the resistance will also decrease two times...
DavidD
The GeForce 8800 and Radeon 3900 have about 500 GFLOPS, i.e. 5*10^11 FLOPS. An Intel Core 2 Quad has about 50 GFLOPS = 5*10^10 FLOPS in single precision and about 15 GFLOPS in double precision, if I remember correctly.
DavidD
I made a mistake: the formula for the leakage is I = U*f*C, but it doesn't change the idea. The leakage is larger the higher the frequency is, because two separate wires act like a very small capacitor C... and I is the leakage current...
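For reference, the textbook form of this relation is the AC current through a capacitance, I = 2*pi*f*C*U; the post's I = U*f*C has the same proportionality, just without the 2*pi. The 1 V swing, 3 GHz frequency and 0.1 fF coupling capacitance below are illustrative assumptions:

```python
import math

def cap_current(v_volts, f_hz, c_farads):
    """AC current drawn through a capacitance: I = 2*pi*f*C*V.
    (The post's I = U*f*C has the same proportionality, minus the 2*pi.)"""
    return 2 * math.pi * f_hz * c_farads * v_volts

# Assumed: 1 V swing, 0.1 fF of coupling capacitance between two wires.
i1 = cap_current(1.0, 3e9, 1e-16)
i2 = cap_current(1.0, 6e9, 1e-16)
print(f"{i1:.2e} A")   # ~1.9e-06 A at 3 GHz
print(i2 / i1)         # 2.0: doubling f doubles the coupling current
```

This confirms the post's point that capacitive coupling current grows linearly with frequency, even though the per-wire-pair current is tiny.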
Lizzy Frog
QUOTE (DavidD+May 1 2008, 05:01 PM)
I make mistake