29th July 2008 - 10:43 PM
On the other hand, there still are potentially quite interesting tricks that could be performed with electronic circuits in novel ways.
Recognize that the potential computational power of a single transistor is actually much larger than is typically utilized in a digital circuit. The transistor is actually computing a non-linear function and an integration (it possesses an internal memory), and in many ways could be considered to be performing a multi-digit multiplication at rates quite a bit higher than a typical system operates at.
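As a rough illustration of the multiplication lurking in that exponential behaviour, here's a minimal sketch using the idealized Shockley model (the Is and Vt values are just illustrative, and a real translinear circuit would use several transistors arranged so that Is cancels): summing two base-emitter voltages multiplies the corresponding currents.

```python
import math

# Idealized Shockley model of a bipolar transistor's collector current:
# Ic = Is * (exp(Vbe / Vt) - 1). The exponential I-V law is what lets
# translinear circuits turn a SUM of base-emitter voltages into a
# PRODUCT of currents - the "multiplication" hiding in a single device.
IS = 1e-15    # saturation current (illustrative value)
VT = 0.02585  # thermal voltage at ~300 K, in volts

def collector_current(vbe):
    return IS * (math.exp(vbe / VT) - 1.0)

def vbe_for_current(ic):
    # Inverse of the model above (ignoring the -1 term, valid for ic >> Is)
    return VT * math.log(ic / IS)

# Translinear trick: exp((v1 + v2)/Vt) = exp(v1/Vt) * exp(v2/Vt),
# so summing two Vbe's multiplies the corresponding currents (scaled by Is).
i1, i2 = 1e-6, 2e-6                  # two input currents
v_sum = vbe_for_current(i1) + vbe_for_current(i2)
i_out = IS * math.exp(v_sum / VT)    # mathematically ~ i1 * i2 / IS

print(i_out, i1 * i2 / IS)           # the two agree closely
```

This is only the mathematical identity, not a practical circuit - but it's the sense in which one device is "performing a multiplication" for free.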
If we wrote a program on a computer to simulate a single transistor, it would likely run thousands of times slower than an actual transistor - and consider as well that a higher-end PC might need billions of transistors in order to do this (on top of which, it has much less ability to utilize potential quantum mechanical features at those smaller scales).
So we could potentially have a computer trillions of times less efficient at computing what a single one of its own elements is computing (consider the analogy here of people trying to understand how atoms operate).
On top of that, most transistors are intentionally designed to avoid quantum mechanical uncertainties and to retain large enough voltage swings (which raises power consumption) to allow a reliable binary interpretation of the signal.
Now recognize that we could take a single tiny resonant ball of material (with some non-linear characteristics in its oscillation), combine two very high frequency signals within it, and by phase shifting these (there are some tradeoffs involved here) potentially perform computations on scales at least an order of magnitude beyond a typical transistor in a PC - and the clock is already embedded in the signal. You can invert a signal by, instead of using another transistor (with its inherent multiplier, integrator and non-linearity), simply shifting the distance slightly between two of these components. You could potentially also embed multiple wavelengths of information within a larger mass - basically, imagine computation similar to using audio/acoustics within a drop of water, with the computations arising from these waves hitting each other and reflecting, or being deflected away from or toward pressure differentials that arise where the waves reinforce each other. The real trick would be to create something that could retain information in oscillatory loops within the material, at potentially different frequencies, and then be able to route information around between these loops - now that would be close to 'magic' in terms of computation.
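A minimal sketch of the mixing part of this idea, assuming nothing more than a weak quadratic non-linearity (the tone frequencies and the 0.3 coefficient are arbitrary choices of mine): feeding two tones through a non-linear element creates energy at the sum and difference frequencies, which is the raw material such a resonator would compute with.

```python
import math

# Two tones driven through a weakly non-linear element. The quadratic
# term generates new energy at the sum and difference frequencies -
# the kind of mixing a non-linear resonator could exploit.
N = 256           # samples over one analysis window (illustrative)
F1, F2 = 40, 50   # input tone frequencies, in cycles per window

x = [math.sin(2 * math.pi * F1 * n / N) +
     math.sin(2 * math.pi * F2 * n / N) for n in range(N)]
y = [v + 0.3 * v * v for v in x]   # mild quadratic non-linearity

def dft_mag(signal, k):
    # Magnitude of bin k of a naive DFT (no libraries needed)
    re = sum(s * math.cos(2 * math.pi * k * n / N)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * n / N)
             for n, s in enumerate(signal))
    return math.hypot(re, im)

# The input has no energy at F1+F2 or F2-F1; the output does.
for k in (F1, F2, F2 - F1, F1 + F2):
    print(k, round(dft_mag(x, k), 1), round(dft_mag(y, k), 1))
```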
There are some ways of converting a lower frequency signal into higher frequencies (optical networks interleave multiple lower frequency signals into the same channel in order to take advantage of the very high optical bandwidth).
The idea I was thinking of above could be seen as similar to taking sections of serial binary information that only contain low frequencies (slowly toggling binary values, like 0000111100001111, for example) and effectively resampling them at a higher frequency on a computer running at a faster rate. The communication takes the same amount of time, but the information would now be embedded within a system with an effectively higher frequency clock, so subsequent computations performed on this information would be faster - though you'd likely eventually want to return the result to a slower clock speed computer.
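A toy sketch of that resampling, assuming an idealized link (the stream and the 4x factor are just the example values from above):

```python
# Sketch: a slowly-toggling serial stream (each logical bit held for
# FACTOR clock ticks) carries only low frequencies, so it can be
# compressed to one sample per held value, shipped, and re-expanded
# ("resampled") inside a machine clocked FACTOR times faster.
FACTOR = 4
slow_stream = "0000111100001111"   # example stream from the text

def compress(stream, factor):
    # Keep one sample per held value; verify the stream really is slow.
    assert all(stream[i:i + factor] == stream[i] * factor
               for i in range(0, len(stream), factor)), "stream too fast"
    return stream[::factor]

def expand(samples, factor):
    # Re-create the stream on the faster clock.
    return "".join(bit * factor for bit in samples)

packed = compress(slow_stream, FACTOR)   # 4x fewer symbols to send
assert expand(packed, FACTOR) == slow_stream
print(packed)
```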
There would also be ways of taking strings that were not specifically lower frequency signals and, by being selective about which processor you routed the information to, having that other processor interpret the information in a different form that allowed the communication bandwidth to appear lower. Here's an example:
Let's say you had a binary string 010101010101 representing a number that you wanted to perform a computation on (maybe you want to compute the factors of this number). If you had a faster computer that effectively XORed all the information presented to it with its own higher frequency version of 010101010101, and let's say it operated 4 times faster than the originating processor, we'd only be able to send it 4 bits in the time it would read 16 bits - but consider what possible streams of input bits it could interpret these 4 bits as:
If we send 0000, it sees this expanded over time as the 16-bit string 0000000000000000. If we toggle a single bit in this sequence and wait for 4 of the slower time units, then, given something like ...0001000... at the slower speed, it would see various combinations of inputs similar to these (consider, though, that there would be some uncertainty over the exact durations/transitions of symbols, so it's a bit more complex than this).
So it sees a string of 4 1s propagating across its effective input field.
Now if it internally XORed these values in a predetermined (or potentially reconfigurable) fashion with the sequence 0101010101010101, then it would effectively be capable of computing the results for any one of these sequences from the presentation of a single 1 impulse at the input.
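Here's a toy sketch of the scheme, with the 4x speedup and the alternating local pattern assumed as above (the stream handling is idealized - none of the transition uncertainty mentioned earlier):

```python
# A single impulse sent on the slow link is seen by a 4x-faster
# processor as four consecutive fast-clock samples; XORing that
# expanded stream against a local pattern selects a computation.
SPEEDUP = 4
local_pattern = "0101010101010101"   # the fast side's own sequence

def expand(slow_bits, factor):
    # What the fast processor samples while each slow bit is held.
    return "".join(b * factor for b in slow_bits)

def xor_streams(a, b):
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

slow_input = "0100"                      # one impulse on the slow link
fast_view = expand(slow_input, SPEEDUP)  # a run of four 1s drifts by
result = xor_streams(fast_view, local_pattern)

print(fast_view)   # the string of 4 1s the fast side sees
print(result)      # the local pattern with that window inverted

# Every one of the 2**4 possible slow inputs selects a distinct
# 16-bit fast-side stream: 16 computations from 4 transmitted bits.
results = {xor_streams(expand(format(i, "04b"), SPEEDUP), local_pattern)
           for i in range(2 ** SPEEDUP)}
print(len(results))
```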
Now consider that we actually have 16 possible such inputs we could transmit in this period of time - 0000, 0001, 0010, ... etc. - and so we actually have 16 times as many possible computations to select from.
The issue here is being able to synchronize the results in time. Even though we don't need as high a frequency in communication, we still need some gain in timing/phase accuracy in order to select one of these as the desired computation with something better than a random 1-of-4 selection in phase (then again, there could be some other tricks, similar to ideas from quantum computing, which might allow us to sidestep this somewhat). Of course, imagine if we then take one of these stages and similarly break it off into a fractal structure of higher and higher frequency computers. In the limit, you might have most anything computable in a constant period of time (imagine a computer that takes only as long to give an answer as it does to read the question
... now that would be funky!)
Anyway, your idea is interesting, but the trinary representation for a higher frequency toggled bit alone doesn't directly do this.
Then again, there is a close similarity here for your trinary idea.
Consider again that we can potentially gain close to a 60% increase in bandwidth. If all we require is the ability to detect whether or not a transition occurs on the pin during a clock cycle, then we really don't need to toggle the pin at double the rate - we simply need to ensure at least one transition occurs during each cycle, and theoretically this only requires that the frequency on the pin be anything higher than the clock rate. (Realistically you need some guard band here, though. The main concern is the potentially very narrow transitions at the edges of a clock cycle that could occur for toggle rates very close to the clock speed. You'd also have issues, if these are desynchronized in time, where a toggle could occur close to a clock edge and it would be difficult to ascribe the toggle to the correct side of the clock. There's likely a mechanism using multiple pins that could improve things for this scenario - so instead of having one pin transmit trinary information, you have 2 pins transmitting something close to 9 states. In this case likely 8 usable states, so instead of a ~59% gain you get 3 bits for the "cost" of 2, and the increase would be only 50% instead.)
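The percentages quoted here are easy to check (a quick sanity computation, nothing more):

```python
import math

# One pin carrying 3 distinguishable states per clock instead of 2
# carries log2(3) bits per clock instead of 1:
trinary_gain = math.log2(3) - 1
print(f"1 pin, 3 states: {trinary_gain:.1%} more bits per clock")

# Two pins give 9 combined states, but with one state sacrificed
# (8 usable) that's exactly 3 bits for the "cost" of 2 binary pins:
two_pin_gain = math.log2(8) / 2 - 1
print(f"2 pins, 8 states: {two_pin_gain:.1%} more bits per clock")
```

The first figure comes out to ~58.5%, matching the "close to 60%" and "~59%" in the text; the second is exactly 50%.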
But as an example, let's say we have a clock period of 10 ns (100 MHz) and a toggle interval of 8 ns (125 MHz); then we can see how these would interleave over time - to make it easier to resolve, we can offset one of these signals by 1 ns. I'll list the transition times for each signal so you can see that at least one toggle occurs within each clock period:
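Here's a quick sketch that generates those transition times and checks the claim (reading the 8 ns figure as the spacing between transitions, which is an assumption on my part):

```python
# Clock period 10 ns, toggles every 8 ns, toggle stream offset by 1 ns.
# Every 10 ns clock window should contain at least one toggle.
CLOCK_NS = 10
TOGGLE_NS = 8
OFFSET_NS = 1
HORIZON_NS = 80   # two full 40 ns beat periods of the two rates

toggles = [OFFSET_NS + k * TOGGLE_NS
           for k in range((HORIZON_NS - OFFSET_NS) // TOGGLE_NS + 1)]
print("toggle times (ns):", toggles)

for start in range(0, HORIZON_NS, CLOCK_NS):
    hits = [t for t in toggles if start <= t < start + CLOCK_NS]
    print(f"clock window [{start:2d},{start + CLOCK_NS:2d}) ns:"
          f" toggles at {hits}")
    assert hits, "a clock period with no toggle!"
```

Some windows catch two toggles (e.g. around the 40 ns beat boundary), but none catch zero, which is all the detection scheme needs.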
There are some designs that have likely used techniques like this (in one integrated circuit design I used a communication system with 2 pins that exploited delay information to communicate faster than the clock rate), but the primary issue is that you're effectively relying on some components operating faster than other components and then taking advantage of that. In the case of something like an I/O pin on an integrated circuit, the pins are already operating much slower than the internal clock rate of the processor, and you'd have much more difficulty squeezing such a technique into the silicon, as those designs are generally already pushing the bandwidth. But if you can segregate the binary information, similar to the suggestions I gave above, then you've effectively dropped the bandwidth of the signal and can now potentially resample it at a higher frequency, or embed multiple such signals within higher frequency channels (the tradeoff is that you're using more hardware, as you now have additional hardware processes, but it can be a mechanism to push speeds beyond many types of conventional designs). If we then go beyond that and actually begin to work with differentials in speed to the point where communication between components becomes statistical, then we're trading off increases in the time required to communicate information - because we need to allow for error correction - but we could make some potentially significant gains from the reduced computational delays (so this structure would be best for complex computations on smaller sets of data).
(In other words, if you have a processor running at 4 times the speed, and you're already having some problems getting information injected reliably into a single phase of this process, it matters little if you're simultaneously trying to inject information into another phase and a little more of that information "spills over" into a different bin - as long as you're not making a significant change in the equivalent "noise floor" of the system, it would make little difference, and you'd be taking greater advantage of computing threads in parallel.) (I can practically guarantee that if it hasn't happened already, it soon will: the idea will be patented or claimed as intellectual property. How annoying ...) Also consider that you might actually be able to push the envelope to small enough scales and fast enough speeds, by scaling up communication rates, that the "phase noise" could possibly allow for some quantum features to be present (so maybe you've got something there as well ... there are at least some quite interesting possibilities there).