beuis
Can someone please tell me why, if we have binary, we don't just oscillate the 3rd binary bit at a faster rate than we poll its state, creating a 0 and a 1 at the same time and giving us a 3rd binary state, like the electron - up and down at the same time??

This would enable us to encode and decode data in parallel rather than serial!!!
DavidD
I read a little bit about this... 3D here means that somehow the interactions go along 3 axes, x, y, z, instead of x and y only. But this is still some bullshit...
StevenA
I'm not certain I understand exactly what you're referring to.

If I understand you correctly: given two systems that are operating at different speeds, desynchronized in time, and communicating via binary information (in your case it sounds like one is running significantly faster than the other), are you asking whether we can make some computational gains by utilizing or compensating for the uncertainty in their synchronization over time?

If that's basically what you're interested in, then a few related comments:

1) Each side of this will be computing at some speed, and not faster than that. (Though having smaller circuits can let them operate independently faster than trying to maintain synchronization across a network of them.)

2) The desynchronization lowers our ability to transmit binary information reliably. In the extreme, if the circuits sampled each other randomly, we couldn't transmit a specific ordering of binary information, and the amount of "controllable" information would become very small, though we would see an increase in the quantity of states that appear indeterminate. So you can picture this as x representing an unknown binary digit, with 0 and 1 representing known binary digits.

This is just a rough example, and there are more details, but we could see a typical binary string like this:

0011101101...

Now if uncertainty is present, and we could predict precisely when it occurred as well as sample it as a third state, the string could appear as a trinary representation instead:

x1x000x10x...

And if that were possible, we could increase computational speeds somewhat. But the problem is that we couldn't predict when those x states occurred, and each side would still be seeing only 0s or 1s, so we'd just see a binary string - an unreliable one, because it represents multiple possible results. In this case the information content is lower.

A simple analogy: suppose a communication channel adds one of three random values, -1, 0 or +1, to a number we try to transmit. In order to transmit information reliably, we'd be required to multiply our number by 3 and transmit a larger number, so that this noise could never move the number far enough to represent a different valid number. If we could only transmit numbers 0 to 1000, we'd instead be limited to the numbers 0 to 333, which we'd multiply by 3 and transmit as one of 0, 3, 6, 9, 12, etc.; the receiver then rounds the result to the nearest valid value, and altering it by +/-1 won't change the interpreted symbol.
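The guard-band scheme above can be sketched in a few lines of Python (a toy model of the hypothetical -1/0/+1 noise channel, with made-up `encode`/`decode` helper names):

```python
import random

def encode(n):
    # Scale by 3 so that +/-1 channel noise can't reach a neighboring codeword.
    return n * 3

def decode(received):
    # Round back to the nearest multiple of 3 to recover the original number.
    return round(received / 3)

# The channel adds one of -1, 0, +1 at random; only 0..333 of the raw
# 0..1000 range is usable, since codewords are spaced 3 apart.
for n in range(334):
    received = encode(n) + random.choice((-1, 0, 1))
    assert decode(received) == n
print("all 334 values decoded correctly despite the noise")
```

Every value survives the noise because no corrupted codeword can land closer to a neighboring multiple of 3 than to its own.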

So there's an inherent loss in information when we're subject to part of a channel containing noise.

3) The parallel properties some people expect from quantum computers arise from computation and communication at smaller scales of size and energy, and though these should be present to some small degree in a typical electronic circuit at larger scales, the influences are very small.

4) Now if we could actually accelerate the clock speed of one system by a very significant amount, there could be potential gains overall from considering statistical results, and in many cases the possible gains could act quite similarly to those hoped for in quantum computation.

For example, let's say we had a way to run one computer 1000 times faster than a second one (or similarly, some logic circuits connected in this manner). If we had no ability to keep them synchronized, we couldn't predict specifically when one computer would input or output information to the other, but whatever information the faster computer receives is processed 1000 times faster than on the other computer.

If, for example, we input a string of 111111... from the slow computer to the fast one, the faster computer would detect each of these bits as close to 1000 time units long, and it would also be performing 1000 computations for each bit. If we designed the faster computer to process groups of 5-bit inputs and we simply toggled our input from 0 to 1 to 0, etc., then the faster computer would be computing the results of input strings like these: 00000, 00001, 00011, 00111, 01111, 11111, 11110, 11100, 11000, 10000. So in the time it takes us to toggle an input from 0 to 1 to 0, the faster computer could see 10 possible input values, and if we extended it to operate on larger strings we'd see similar growth in the variety of inputs it received.
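You can reproduce those 10 windows with a short sketch: stretch each slow bit into several fast samples and slide a 5-bit window across the stream (the `fast_windows` helper is hypothetical, just for illustration):

```python
def fast_windows(slow_bits, stretch, width=5):
    # Each slow bit appears 'stretch' fast samples long; the fast computer
    # sees successive width-bit windows sliding across this stream.
    stream = "".join(b * stretch for b in slow_bits)
    return [stream[i:i + width] for i in range(len(stream) - width + 1)]

# Toggling the slow input 0 -> 1 -> 0, with each bit stretched to 5 samples:
distinct = sorted(set(fast_windows("010", stretch=5)))
print(len(distinct), distinct)
```

The distinct windows are exactly the ten strings listed above, from 00000 through 10000.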

Something interesting to consider here would be that we could potentially use a faster computer for some inputs, but not others.

For example, if we had one computer running 5 times as fast, but without uncertainty in the timing, and we had a binary string whose runs of 1s and 0s were all 5 long, we could feed this specific input to the faster computer, because each time we presented a 0 or 1 it would appear 5 bits long to that computer.

More generally, we could look at how binary sequences sort into strings that could potentially be computed faster in this manner:

0000 -> could be performed on a computer 4 times as fast by presenting a single 0
0001 -> could potentially be performed on a computer 3 times as fast by transmitting 01, which would be seen as 000111, and then selecting the correct "phase" for the result, i.e. the leading 0001 section.
0010 -> no gain here as we need the computer to process a single 1
0011 -> could be performed on a computer twice as fast by being presented the sequence 01, which would be seen as 0011
0100 -> no gain here
0101 -> no gain here (though in some ways we could calculate such repetitive strings very quickly with a computer that simply toggled 01010101... as its input; the same would be true for computing 1010, and even easier would be the 0000 and 1111 representations)
0110 -> potentially twice as fast
0111 -> potentially 3 times as fast
1000 -> potentially 3 times as fast
...

etc.
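The direct cases in the sorting above (0000, 0011, etc., but not the phase-selection tricks for strings like 0001 or 0110) amount to a run-length check: a string can be fed to a computer k times as fast exactly when k divides the length of every run of identical bits. A sketch, using a hypothetical `max_speedup` helper:

```python
from functools import reduce
from itertools import groupby
from math import gcd

def max_speedup(bits):
    # Lengths of the runs of identical symbols, e.g. "0011" -> [2, 2];
    # feeding one symbol per k slow symbols works when k divides every run,
    # so the best direct speedup is the gcd of the run lengths.
    runs = [len(list(g)) for _, g in groupby(bits)]
    return reduce(gcd, runs)

print(max_speedup("0000"))  # 4: present a single 0
print(max_speedup("0011"))  # 2: present 01
print(max_speedup("0010"))  # 1: no direct gain
```

This only captures the "runs divide evenly" cases; the larger gains from phase selection would need the extra machinery described above.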

Though these would still appear to require synchronization in time, and uncertainty would arise in trying to determine when to sample a result from the faster computer (if you don't sample the result at the correct time, it will be computing something different).

Anyway, it's an interesting question. We would not be directly working with the properties of uncertainty in quantum mechanics, since we'd be using conventional circuits in which those influences are (intentionally) minimized, but it could be interesting to see whether such a mechanism mimics some of the properties observed in quantum mechanics.

To extend on this, though, it would be interesting to know what statistical gains could be made by having an array of small computers each operating as fast as possible and not required to be synchronized to each other. There's a subject that's only very indirectly related to this, called asynchronous logic. It does provide some computational gains, though for asynchronous logic the gains are generally large power savings. It's not directly related to your question specifically, but it does have some common features, in that uncertainties arise in the designs because no common clock is used; instead, the clocking mechanism is effectively embedded within the data flowing through the circuit. Each computational stage waits until it encounters inputs formatted in a valid manner, then allows that information to flow past it; the next stage similarly waits until valid formatting appears in the first stage's output, and the process repeats. Each element can thus compute as fast as possible and only needs to compute when it has valid information; otherwise it consumes little power and simply waits for the next "packet" to come along.
magpies
Why would we want to encode and decode data in parallel rather than serial!!! ?
beuis
Thank you Steven A you make this forum great!!

to magpies you ask - "Why would we want to encode and decode data in parallel rather than serial!!! ?"

serial is sequential - one process, then another.
parallel is as many processes (n) as you want, all at once.

imagine a computer that could render 3D graphics and Havok physics computations in what would appear to be "real time".

It would be interesting to perform the most basic experiments with just one computer running at 2x the frequency of another computer.

it would be better if just one pin on a microchip were being pulled up and down by a crystal oscillator at 2x the frequency of the others, but the micro would need .asm to (HEX) to (binary) that understands the trinary bits.

we could use the trinary bit as part of an MSB (most significant bit) header.

thanks dude, you got me thinking, but I'm sure this will go nowhere.

StevenA
QUOTE (beuis+Jul 29 2008, 03:39 PM)
Thank you Steven A you make this forum great!!

to magpie you ask - "Why would we want to encode and decode data in parallel rather than serial!!! ? "

serial is sequential one process then another
parallel is as many processes (n) as you want all at once.

imagine a computer that can render 3D graphics and Havok physics computations in what would appear like "real time"

This would be interesting to perform most basic experiments with just one computer running at 2 x the freq of another computer.

would be better if just one pin on a microchip is being pulled up and down by a crystal oscilator at 2 x the freq of the others. but the micro would need .asm to (HEX) to (binary) that understands the trinary bits.

could use the trinary bit as part of an MSB (most significant bit header).

thanks dude got me thinking but im sure this will go nowhere.

I'm happy you found some value in my comments. (Seriously, way to go, dude! The forum mafia around here is stifling and it's refreshing to see people who truly enjoy novel and original concepts and applications - they'll self-implode - most of them are living off tax dollars anyway! LMAO! What would any good story be without an antagonist anyway?)

Here's quite an interesting application. I'm not certain if it can work, but it seems quite possible.

To get the general idea, imagine a network of computers with ever-increasing clock speeds. We couldn't reliably communicate from a lower clock speed to a higher one, and there would be some uncertainty in the timing or interleaving of the data each layer sees from its neighboring layer in this structure. But imagine if it were possible to program such a computer statistically, over time, to compute an error correction algorithm that could estimate what the intended transmission was and then perform the necessary computations at the higher speed. Now imagine that computer using a similar technique to effectively "build" a faster computer layered upon it ... etc.

One property each such computer would appear to require is an equivalent memory that we could configure to specify the error correction. In the worst case, the uncertainties in communication don't configure it correctly, but then again, we can just try a second time or swap in another one (if the condition is unrecoverable), until we get a computational gain, and then stage the next one after that.

I've seen some ideas before about utilizing rather random networks of organic molecules (basically an organic paint) to almost 'evolve' a computer. I don't think the technology's quite there to do that yet, but it's definitely an interesting possibility. (Of course, we could then hand this technology over to one of the forum mafia and tell them that if they 'push this button' it blows up the world. Then, in characteristic style, they'll push it, and we can start over from scratch and hope fewer weak links exist in the chain next go-round ... seriously though, we've probably been quite lucky that nature's been running the show largely outside of human abilities to alter the situation. Then again, we could find some technologies that offer quite interesting alternative possibilities ... seems like a coin toss to me.)
Geoff Mollusc
Good work StevenA, if AI gets predictive, just imagine the possibilities.
beuis
DECIMAL 0 1 2 3 4 5 6 7 8 9 10 11 etc...
HEX 0 1 2 3 4 5 6 7 8 9 A B C
BINARY 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011

could we not just rewrite the table using 3 values instead of 2 states?

x = 1 and 0 at the same time, meaning the chip pin goes up and down at 2x the frequency.

1 = 000
2 = 001
3 = 010
4 = 011
5 = 100
6 = 101
7 = 110
8 = 111
9 = 00x
10 = 01x
11 = x11

etc.....

by having 3 states we send much less data to get the same computation = faster PC for me and you... maybe
StevenA
QUOTE (beuis+Jul 29 2008, 08:44 PM)
DECIMAL 0 1 2 3 4 5 6 7 8 9 10 11 etc...
HEX 0 1 2 3 4 5 6 7 8 9 A B C
BINARY 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011

could we not just re write the table but using 3 values not 2 states

x = 1 and 0 at same time meaning chip pin up and down at 2Freq.

1 = 000
2= 001
3=010
4=011
5=100
6=101
7=110
8=111
9=00x
10=01x
11=x11

ere etc.....

by having 3 states we send much less data to get same computation = faster PC for me and you...maybe

In this case you're working with a trinary system, which does convey some additional information per symbol. (You can estimate this quickly by recognizing that 3*3*3*3*3 = 243 and 2*2*2*2*2*2*2*2 = 256, so 5 trinary symbols convey almost the same quantity of information as 8 binary bits, a gain of ~8/5 = 1.6; we can compute this more accurately as ln(3)/ln(2) ≈ 1.585.)
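The back-of-the-envelope comparison can be checked directly:

```python
from math import log2

# 5 trinary symbols vs 8 binary bits have nearly the same number of states:
print(3 ** 5, 2 ** 8)  # 243 256
# Exact information content of one trinary symbol, in bits:
print(log2(3))  # ~1.585
```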

In this case you could potentially gain close to 60% in communication bandwidth, but here's the catch - you need the pin to be able to toggle twice as fast. So, in a sense, you're taking something capable of double the bandwidth and only gaining close to 60% of that potential.

In quantum mechanics, the mechanism is different (or at least would be, if some of the expectations held by some people pan out - I admit being somewhat skeptical). In this case you have physical relationships between objects that aren't immediately observable (when you observe them, that's effectively a computation). The longer you can keep the information in this unobserved state, the greater the number of such hidden interactions and computations it can potentially perform, and this growth is of a potentially exponential form that's faster than anything a classical electromechanical system can achieve. The problem is that when the state is finally observed, you can't predict which one of these possibilities occurred, but there are ways of combining multiple such results to quickly scan through some of the general properties of these computations, and ways of correcting for the statistical errors that arise. Here's a link that gives some of the general ideas: http://www.cs.caltech.edu/~westside/quantum-intro.html. So the difference is actually in the physical manner in which the computations are performed.
StevenA
On the other hand, there still are potentially quite interesting tricks that could be performed with electronic circuits in novel ways.

Recognize that the potential computational power of a single transistor is actually much larger than what's typically utilized in a digital circuit. The transistor is computing a non-linear function and an integration (it possesses an internal memory), and in many ways it could be considered to be performing a multi-digit multiplication at rates quite a bit higher than a typical system operates at.

If we wrote a program on a computer to simulate a single transistor, it would likely run thousands of times slower than an actual transistor - and consider as well that some higher-end PCs would need potentially billions of transistors to do this (on top of which, at those larger scales, you have much less ability to utilize potential quantum mechanical features).

So we could potentially have a computer trillions of times less efficient at computing what a single one of its elements is computing (consider the analogy of people trying to understand how atoms operate).

On top of that, most transistors are intentionally designed to avoid quantum mechanical uncertainties and to retain large enough voltage swings (which raises power consumption) to give a reliable binary interpretation of the signal.

Now recognize that we could take a single tiny resonant ball of material (with some non-linear characteristics in its oscillation), combine two very high frequency signals within it and, by phase shifting these (there are some tradeoffs involved), potentially perform computations at scales at least an order of magnitude higher than a typical transistor in a PC - and the clock is already embedded in the signal. You can invert a signal, instead of using another transistor (with its inherent multiplier, integrator and non-linearity), by simply shifting the distance slightly between two of these components. You could potentially also embed multiple wavelengths of information within a larger mass. Basically, imagine computation similar to using audio/acoustics within a drop of water, with the computations arising from these waves hitting each other and reflecting, or being deflected away from or toward the pressure differentials that arise where the waves reinforce each other. The real trick would be to create something that could retain information in oscillatory loops within the material at potentially different frequencies, and then be able to route information between these loops - now that would be close to 'magic' in terms of computation.

There are some ways of converting a lower frequency signal into higher frequencies (optical networks interleave multiple lower frequency signals into the same channel in order to take advantage of the very high optical bandwidth).

The idea I was describing above could be seen as taking sections of serial binary information that only contain low frequencies (slowly toggling binary values, like 0000111100001111, for example) and effectively resampling them at a higher frequency on a computer running at a faster rate. The communication takes the same amount of time, but the information would now be embedded within a system with an effectively higher clock frequency, so subsequent computations performed on this information would be faster - though you'd likely eventually want to return the result to a slower clock speed computer.

There would also be ways of taking strings that were not specifically low frequency signals and, by being selective about which processor you routed the information to, having that other processor interpret the information in a different form that made the required communication bandwidth appear lower. Here's an example:

Let's say you had a binary string 010101010101 representing a number you wanted to perform a computation on (maybe you want to compute the factors of this number). Suppose you had a faster computer that effectively XORed all the information presented to it with its own higher frequency version of 010101010101, and let's say it operated 4 times faster than the originating processor. We'd only be able to send it 4 bits in the time it would read 16 bits, but consider what possible streams of input bits it could interpret these 4 bits as:

If we send 0000, it sees this expanded over time as the 16 bit string 0000000000000000. If we toggle a single bit in this sequence and wait 4 of the slower time units, then given something like ...0001000... at the slower speed, it would see various combinations of inputs similar to these (though there would be some uncertainty over the exact durations/transitions of symbols, so it's a bit more complex than this):

0000000000000000
0000000000000001
0000000000000011
0000000000000111
0000000000001111
0000000000011110
0000000000111100
0000000001111000
....

So it sees a string of 4 1s propagating across its effective input field.

Now if it internally XORed these values in a predetermined (or potentially reconfigurable) fashion with the sequence 0101010101010101, then it would effectively be capable of computing the results for any one of these sequences from the presentation of a single 1 impulse at the input:

0101010101010101
0101010101010100
0101010101010110
0101010101010010
0101010101011010
0101010101001011
0101010101101001
0101010100101101
0101010110100101
....

Now consider that we actually have 16 possible such inputs we could transmit in this period of time - 0000, 0001, 0010, ... etc. - and so we actually have 16 times as many possible computations to select from.

The issue here is being able to synchronize the results in time. Even though we don't need as high a frequency in communication, we still need some gain in timing/phase accuracy in order to select one of these as the desired computation with something better than a random 1-of-4 selection in phase (then again, there could be other tricks, similar to ideas from quantum computing, which might let us sidestep this somewhat). Of course, imagine if we then take one of these stages and similarly break off into a fractal structure of higher and higher frequency computers. In the limit, you might have almost anything computable in a constant period of time (imagine a computer that takes as long to give an answer as it does to read the question ... now that would be funky!)
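The XOR expansion described above can be sketched directly: the faster computer stretches each received bit 4x and XORs the result with its internal 0101... pattern (a toy model using a hypothetical `fast_view` helper, ignoring the timing uncertainties just discussed):

```python
def fast_view(sent_bits, speedup=4):
    # Each slow bit lasts 'speedup' fast clock cycles on the fast side...
    expanded = "".join(b * speedup for b in sent_bits)
    # ...and the fast computer XORs the stream with its own 0101... pattern.
    mask = ("01" * len(expanded))[:len(expanded)]
    return "".join(str(int(a) ^ int(b)) for a, b in zip(expanded, mask))

print(fast_view("0000"))  # 0101010101010101
print(fast_view("0001"))  # 0101010101011010
```

Sending 0000 reproduces the bare internal pattern, and sending 0001 reproduces one of the shifted sequences listed above; the other 14 four-bit inputs select the remaining variants.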

Anyway, your idea is interesting, but the trinary representation for a higher frequency toggled bit alone doesn't directly do this.

Then again, there is a close similarity here for your trinary idea.

Consider again that we can potentially gain close to a 60% increase in bandwidth. If all we require is the ability to detect whether or not a transition occurs on the pin during a clock cycle, then we don't really need to toggle the pin at double the rate; we simply need to ensure at least one transition occurs during a cycle, and theoretically this only requires that the frequency on the pin be anything higher than the clock rate. Realistically, you need some guard band here, though. The main concern is the very narrow band of transitions at the edges of a clock cycle that could occur for toggle rates very close to the clock speed - if the signals are desynchronized in time, a toggle could occur close to a clock edge and it would be difficult to ascribe the toggle to the correct side of the clock. There's likely a mechanism using multiple pins that could improve this scenario: instead of having one pin transmit trinary information, you have 2 pins transmitting something close to 9 states - realistically 8 states - so you get 3 bits for the "cost" of 2, and the increase would be only 50% instead of ~59%.

But as an example, let's say we have a clock period of 10 ns (100 MHz) and a toggle period of 8 ns (125 MHz); then we can see how these interleave over time. To make it easier to resolve, we can offset one of the signals by 1 ns. I'll list the transition times for each signal so you can see that at least one toggle occurs within each clock period:

CODE
clk   tgl
 0     1
       9
10    17
20    25
30    33
40    41
      49
50    57
...
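A quick check of the claim, using the 10 ns clock, 8 ns toggle period and 1 ns offset from the table, bucketing each toggle transition into its clock period:

```python
clk_period, tgl_period, offset = 10, 8, 1  # ns, as in the table above

# Toggle transition times over the first six clock periods: 1, 9, 17, ..., 57.
toggles = [offset + k * tgl_period for k in range(8)]

for start in range(0, 60, clk_period):
    hits = [t for t in toggles if start <= t < start + clk_period]
    assert hits, f"no toggle in clock period starting at {start} ns"
    print(f"{start:2d}-{start + clk_period:2d} ns: toggles at {hits}")
```

Every clock period catches at least one toggle (two in the 0-10 ns and 40-50 ns periods), which is what makes the transition-detection encoding work.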

DavidD
I don't know what you think, but how can a quantum computer be possible if entanglement and the spins of particles don't exist?
beuis
Thank you for a great response again - your ideas are great and have me thinking...

I believe we only really need to create 10 finite bits from a switch that only has two states, 0 and 1.

0-9 are the 10 states; then they are just placed in order to make any number.

I think you hit the crux of the problem: it is the storage and recall of those states.

when viewing a microprocessor from the inside, I think we would see many repeated patterns in the 0s and 1s.

I was also told that instead of changing the switch states, many calculations can be done by just moving a decimal point around a grid of 0s and 1s. As there are only really 10 numbers, 0-9, and in base 2 maths (on or off) there are patterns in the states.

I could conceptualise a tiny spinning photon emitter, using a rotational array of photon detectors around the central light source (photon emitter).

say we have 10 detectors in a circle, each one representing a number from 0-9.
each can store its on or off state with a switch (transistor).

each detector would just constantly be turning on and off all the time.
then we build a circuit to allow us to read the states, but each can be read independently - say I read detectors 3 and 1; this would give me the numbers 4 and 2, since we're counting 0 as a state. any states read would then have to be stored using binary again.

the point is, 10 different states are read from the core each time - not 2 or 3, but 10.

we would then use another binary chip to read the array in different ways.

methinks this is not too great, but it's just an idea.

StevenA
QUOTE (DavidD+Jul 30 2008, 12:58 PM)
I don't know what you think, but how quantum computer can be possible if entanglement and spins of particles don't exist?

These properties do exist, but a few possible issues are: 1) if the properties are entangled, then there are fewer fundamental units describing them, and the potential gains for large collections of objects could be limited. For example, if we had 5 objects with 3 properties each, but some of these properties were entangled, then we're not free to control all 5*3 = 15 properties independently, and the information content in knowing these states is smaller. It may be that there are more entangled relationships that aren't immediately obvious, which would make the system appear either noisier than expected or, equivalently, as if fewer independent properties were available to be controlled. 2) In order to allow entangled properties to interact on larger scales of time, it seems possible that trying to create greater isolation of a system from communication could also degrade our ability to keep it stable in that form. Or 3) alternately, it may interact with information at smaller scales that aren't related to the desired computations (this would be like another, lower noise floor that limits the gains).

Basically, the properties quantum mechanics describes are inherently statistical, and these statistics concern some of the boundaries of knowledge in physics, so it's a bit tough to predict how things will unfold, as the details of the mechanisms generating those boundaries aren't well understood. On the other hand, if we take an observer-centric view then, at least for humans, the possibilities are almost anything imaginable and logically possible ... the main paradox is in trying to monitor and control a property whose most significant trait is that it disappears when observed.
StevenA
QUOTE (beuis+Jul 30 2008, 02:26 PM)
Thank you for a great responce again your ideas are great and have me thinking...

I believe we only really need to create 10 finite bits from a switch that only has two states 0 and 1.

0-9 are the 10 states then they are just placed in order to make any number.

I think you hit the crux of the problem when it is the storage and recall of those states.

when viewing a microprocesor from the inside I think we would see many repeated patterns in the 0 and 1s.

I was also told that instead of changing the switch states many calculations can be done by just moving a decimal point around a grid of 0 and 1s. As there are only really 10 numbers 0-9. And in base 2 maths (on or off) there are patterns in the states.

i could conceptulise a spinning photon emmiter that is tiny and using a rotationl array of photon detectors around the central light source (photon emitter).

say we have 10 detectors in a circle. each one is representing a number from 0-9.
each can store its on or off with a switch (transistor).

each detector would just constatly be turning on and off all the time.
then we build a circuit to allow us to read the states but each can be read independently say i read detector 3 and 1 this would give me the number 4 and 2 since we counting 0 as a state. any states read would then have to be stored using binary again.

the point is 10 different states are read from the core each time. not 2 or 3 but 10.

we would then use another binary chip to read the array in different ways.

me thinks this is not to great but just idea.

Yes, such techniques can improve the bandwidth of information and accelerate communication, as well as potentially the rate of computation.

We need to be able to embed the necessary information for a "program" or process within it - alternately, we can create computational elements with rather random processes at scales finer than we control, and then select how these are combined together. (So you primarily have these elements: 1) storage - you need enough places to put the information and retain it; 2) routing - you need to be able to combine these elements in specific subsets, which usually means you need the pieces of information to be able to reach a common point - at a minimum this would be some sort of input/output location; and 3) computation - you need to be able to combine/transform existing elements into representations of that information.)

So we need to be able to get information into this and control where the flows of information interact. This routing could also effectively be part of the input information, though the specifics of the routing could already be built into the hardware of the computer, making it more of an application-specific computation. With regard to the computations, in many ways it doesn't matter specifically what form of interaction we have, as long as you can combine A and B and get something unique as C - pretty much any non-linear combination of inputs, such as a simple NAND gate, will work, though generally you also need a way to re-amplify the results and compensate for possible errors or noise.

An interesting optical idea I'd had before was one of having finer and finer rings of optical guides all interacting at a single non-linear element (like an atom) - for example, there are crystals that can multiply the frequency of light, though that's just one possible mechanism.

You could use some of the larger loops of this to support lower frequency inputs and outputs (the wavelengths are larger and easier to monitor and modulate), and the interactions with the non-linear element could convert these into higher frequency components that could resonate within the smaller rings - similar to a memory storage element, with addresses being frequencies and the value stored at an address being the amplitude of that frequency component. The non-linear element would effectively be performing operations like taking addresses x and y and computing two new addresses, |x+y| and |x-y|, at which to store computed results. This basic principle is based upon a concept in some radio/audio applications called heterodyning http://en.wikipedia.org/wiki/Heterodyne and it arises from the trigonometric identity sin(x)*sin(y) = (cos(x-y) - cos(x+y))/2.
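The product-to-sum identity behind heterodyning is easy to verify numerically; multiplying two tones puts energy at the sum and difference frequencies:

```python
from math import sin, cos

# Check sin(x)*sin(y) = (cos(x - y) - cos(x + y)) / 2 at a few sample points.
for x, y in [(0.3, 1.1), (2.0, 0.7), (1.5, 1.5)]:
    lhs = sin(x) * sin(y)
    rhs = (cos(x - y) - cos(x + y)) / 2
    assert abs(lhs - rhs) < 1e-12
print("product-to-sum identity verified")
```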

The most significant problem would appear to be controlling the phases or delays between information in the various frequency bands. If you had a way to limit the spread of a wavefront over time, that could likely serve as a mechanism for coherently routing information within these rings, instead of the information rather instantly spreading out into a chaotic spectrum. Also, photons don't travel at nearly as precise a velocity as Relativity tends to portray; using high intensity and very short pulses could compensate, but that quick decay of information wouldn't seem ideal (though it may be the only way to do it).

It's interesting to consider that a "working prototype" mechanical system, using acoustic transducers (just a fancy name for speakers and microphones) and a wire of material wrapped into smaller and smaller loops, all intersecting at a single point through a different material, could probably be built to actually show this in real time operation. (It would be kind of cool to recognize that you could at least "hear" in "real time" what some of the frequency bands were computing.) It's also interesting to consider that you could have speech as an audio input - now that would be fun. (OK, I admit that realistically this wouldn't do much of anything novel and would likely just sound like a funky audio reverberation unless a lot of engineering was put into it; you'd need power to re-amplify the signals at the point of interaction and reinject them into the rings, and it would only perform one program unless you had a way to alter the resonant wavelengths of the rings, such as by changing the length or tension of the material.) But truly, you could simulate most of this on a computer or digital signal processor, and it would be much easier - the trick would be in extending this to frequencies you couldn't directly control and then using resonant pathways within some easy-to-create material (recognize that it's not impossible to perform computations within a drop of water, if you can have fine enough control over the pressure waves transmitted through it).
DavidD
You're just like a kid rambling bullshit. There's no quantum entanglement or spin, as I proved in another thread. Quantum computers are impossible in principle. Scientists misunderstand/misinterpret some laws.
___
QUOTE (DavidD+Jul 30 2008, 06:19 PM)
I'm just like a kid rambling bullshit.

Agreed.
DavidD
QUOTE (___+Jul 30 2008, 09:09 PM)
Agreed.

A kid who is more right than the millions of cranks like you.