
Predictive* digital music synthesizer (Pandora's Box #2)

[Photo: the Predictive digital music synthesizer (Pandora's Box #2)]
(*) Originally this was "preemptive," because I heard a rumor that someone else
had announced one. However, theirs was, in my opinion, more of a data-mining scheme
than an automatic sound generator. This one is "predictive" because it reads out and
sounds THE NUMBER which is PROVEN to contain ALL possible SOUNDS in a usable way.

Someone claimed to have invented one in 1971, before PCs were powerful enough.
I responded that there were no PCs, not even Altairs, in 1971, but that MY METHOD
could still have been built with 1971 technology.

The Pandora's Box instructable now includes info about a Singing Calculator Number,
which is taken from the sequential digits of the number that is the subject of this
instructable, and a planned simple device for feeding that number into a speaker to get music.

NO PROGRAMMABLE CHIP IS NEEDED.
This one sounds different from my other Pandora's Box, which needs a programmable chip,
but when the schematic for this one is done, so will be the schematic for the uncomputerized original P-Box.

 

Step 1: More info, what is needed?

[Photo for this step]
I have reduced and minimized this so that a small handful of
nice cheap 4040 and 4051 chips may be all you need
(plus battery, wire, solder, iron, speaker, breadboard, etc.).

HEADACHE WARNING: MATH AHEAD (Only "math heads" need to know.)

At this point, the simplest musical calculator using "normal numbers" will use a brute-force
method that pulls digits out of thin air using a binary counter, which is not considered a
computer, although it produces all possible combinations starting with zero. Normal numbers
contain all other numbers within their digits, and the one made by counting contains them all
in order. This is interesting because files and numbers are the same thing: strings
of bits. That is demonstrated in the simplest way I can imagine by the number which
will play as the calculator song.
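
For readers who want to play with the idea in software first, here is a minimal sketch (mine, not the circuit described in this instructable) of that brute-force counting stream:

```python
# Sketch only: generate the digit stream of the counting number
# 0.1 2 3 4 5 6 7 8 9 10 11 12 ... in any base by brute-force counting.
from itertools import islice

def to_base(n, base):
    """Digits of n, most significant first."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1]

def counting_digits(base=10):
    """Endless digit stream made by ordinary counting."""
    n = 1
    while True:
        yield from to_base(n, base)
        n += 1

print(list(islice(counting_digits(10), 20)))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 0, 1, 1, 1, 2, 1, 3, 1, 4, 1]
```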

We say that Pi has an infinite number of digits, arranged so "randomly" that every digit,
and indeed every finite string of digits, appears with the expected frequency, which is why
it is called Normal (mathematically speaking; strictly, Pi is believed to be normal but this
has never been proven). If I were making a Pi player, it would sound like the hiss of
static on a radio, because that is the sound of randomness. No one says that counting is
random, and a number made by ordinary counting will not skip a number, so we can
be sure that any number can be found, in sorted order, in its place. That leaves no doubt,
whereas someone could doubt that Pi contains a certain number if they searched all over it
and never found what they were looking for.

The number "zero point one two three..." is the simplest demo of the musical number
concept, not necessarily the most practical one. Most understandable. I really feel
like I have to condescend and KISS about my most incredible inventions so people
because of responses like this:
Digg my "Holodeck"

The original Pandora's Box instructable had constraints so that its output would
never sound like static. This one has no noise-avoiding constraints, except
that, as another demo, it is not designed to go so far into the calculation that we
lose the sense of the pattern in the sound, which is the process of counting.

It is very important to imagine that the number one need not be the first sound.
If this particular method of making sound were advanced far enough, then
the number one could actually represent the first song on the popularity chart!
I hope to get deeper into enumeration in future "musical number theory" instructables.
My more recent enumeration research actually involves effectively skipping white noise,
so just think: out of all the possible numbers (sound files), what portion of them are noise?(!)

It is important to realize that the musical number explored in this instructable is
not the only one, and this is not the only method I have invented or will invent
of generating digital sounds. Many are not yet impressed with my 3D projection
system or writing style, but at least I've included older projects which may be
reason to anticipate more and better in the future.

There may be inconsistent flow in the development of this prematurely published
instructable. Here is a link to a sample of what part of the Musical Number sounds
like, as output by the device I'm now making for you to make.

Some 2^(2^(17)) digits of The Number (compressed into mp3)
Listen carefully for beats, voice-like sounds (woo!), bells... some imagination may be required!

I'm expecting the finished circuit to consist of a few logic chips (no uC or uP),
so you'll need a soldering iron, a speaker, and chips which I haven't chosen yet.

You may feel free to experiment with the calculation and playing of numbers as
sound files while I work on making this instructable project.

Calculated BINARY numbers sound much louder as RAW or PCM
when the letter O is used instead of the digit zero when stored as text.
(Otherwise you may not hear them at all.)
All your base are belong to you! (Haha. Use whatever base you want. Also,
I recommend using only the alphabet for bases 11 through 26, or Hexadecimal will be distorted.)
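
To hear why the letter O matters, look at the ASCII codes: '0' is 0x30 and '1' is 0x31, which differ by only 1/256 of full scale when read as unsigned 8-bit samples, while 'O' is 0x4F, a swing of 30 steps. A quick sketch of the trick (the file name and player are my choices, not part of the project):

```python
# Sketch: write counted binary numbers as text with 'O' for zero,
# then play the file as unsigned 8-bit raw PCM, e.g. on Linux:
#   aplay -f U8 -r 8000 out.raw
with open("out.raw", "wb") as f:
    for n in range(1, 5000):
        bits = bin(n)[2:]                    # binary digits of n as text
        f.write(bits.replace("0", "O").encode("ascii"))
```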

The number that sings about the calculator is in ASCII (see Pandora's Box)
because it used to be in BCD, but that is not a standard (net playable) format.

I know you all laugh at my videos but I will probably demo LOTS of unique sound artifacts.
AndyGadget5 years ago
Fascinating - I've been playing around with Champernowne's number in various bases on a PicAxe and interpreting the digits as tones in a musical scale.  It's beginning to sound quite musical. 
Just wondered if you'd seen THIS.  This guy's been interpreting The Number as text strings and he's been locating published passages within it.  He's located a passage from Genesis and a Shakespeare sonnet wayyyyyyyyyy down the sequence. It's all there - it's just a case of finding it.
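
AndyGadget's PicAxe code isn't shown, but a guess at a digit-to-scale mapping like the one he describes might look like this:

```python
# Sketch: map each base-8 digit of the counting number onto a
# C-major scale (frequencies in Hz, C4..C5) and print the notes.
SCALE = [261.6, 293.7, 329.6, 349.2, 392.0, 440.0, 493.9, 523.3]

for n in range(1, 33):
    for d in oct(n)[2:]:           # digits of n in base 8
        print(SCALE[int(d)], "Hz")
```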
VIRON (author)  AndyGadget5 years ago
Thanks for the link and your interest in this number and project. The link is especially nice because it has another set of instructions for calculation. I do not recall exactly where I posted my equations at the moment, which are simple arithmetic. In the future I hope to publish more of this kind of thing.
cyberlox6 years ago
This is not rocket science, just some lateral thinking. I think the reason people may find this difficult to grasp is the way it has been explained. Try this explanation: as an example, music stored in CD-quality format uses 16-bit samples, sampled at 44.1 kHz. One second of sound is therefore stored as 44,100 times 16 bits. In binary, you can look at this simply as a very large number, rather than a stream of bits.

Now, thinking in reverse, suppose you took (any) number, converted it to its binary representation (a string of 1's and 0's) and fed that to the D/A converters of, say, a CD player; then you would hear 'music'. A number converted to binary may not necessarily be a Top 10 hit though! :) So you can imagine an entire track of a CD as a single number (represented in binary by the bit stream). An infinitely large random number, represented in binary as an infinitely long string of bits, would therefore contain all the musical combinations possible (as someone pointed out with the monkeys-and-typewriters analogy).

So, for a given music track, normally stored as a binary bit stream, you COULD represent it by a single, large number. If you represent music this way, all you need is a circuit that takes a number and generates a bit stream to feed to a D/A converter. The inventive step required is how to represent 'the number' in a form that doesn't require as many bits as its binary representation - otherwise you may as well store it as we do today on CDs, as a binary bit stream. The easiest way I can think of is in analog electrical form, since analog circuits are continuous, not discrete samples like digital, so you can represent any number to any precision (which is necessary to obtain sufficient digital bits for playback). Ah well, that's my contribution :)
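
cyberlox's point is easy to verify in a few lines; this sketch (names mine) round-trips a tiny block of 16-bit samples through a single integer and back:

```python
# Sketch: a block of 16-bit samples IS one big integer, and that
# integer converts back to the identical samples.
import struct

samples = [0, 1000, -1000, 32767, -32768]            # a tiny "track"
raw = struct.pack("<5h", *samples)                   # 16-bit LE PCM bytes
number = int.from_bytes(raw, "big")                  # the track as ONE number
restored = number.to_bytes(len(raw), "big")          # back to bytes
assert struct.unpack("<5h", restored) == tuple(samples)
print(number)                                        # the "track number"
```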
Attention programmers:

Why not write a program that rips CD audio as a single number, then perform prime factorization to compress and store the data? Or, using the same algorithm as this instructable, find the starting position for playback so all the computer has to do is pick the math up at that decimal place to begin reproduction?

It seems to me like the original number crunching might be CPU-intensive, but then encoding an MP3 or any other file compression tends to be taxing too. Aside from that, if the result was expressed as powers of 2 or 10, it might be tidy enough that the execution doesn't take so long.

If stored as an exponent, maybe the ripper could add a trailing silence to the audio. That way, the exponent could be fudged a little to make it easy and a bit smaller, and it would prevent audible errors on the other end.

Perhaps a low-quality MP3 or .wav file might be a good starting point, just as proof-of-concept and to keep the math from taking forever during development.

As Viron has suggested in the past, this is a helluva way to bypass DMCA. Who can bust you for sharing numbers? Maybe as long as you have Prime 95 installed on your computer you could just say you're testing Mersenne primes for musicality. Who knew
2^98749819378786574298842008 - 1
sounded like Metallica?

Lars Ulrich: "Now those f*ing math geeks are stealing my money! How am I supposed to f*ing pay for my gold-plated tennis racquet? I'm gonna f*ing sue you, math!"

James Hetfield: "MATH BAAAD!"
"Why not write a program that rips CD audio as a single number, then perform prime factorization to compress and store the data?"

A friend of mine had the same idea (but for compression of arbitrary data: just find the place in pi where the bitstream occurs and record that).  There are two main reasons why it doesn't work:

a) storing the location of where an arbitrary pattern occurs in the bitstream requires as much storage as just storing the pattern.  Think of the generated stream 01101110010111011110001001101010111100110111101111 etc, which encodes
0, 1, 10, 11, 100, 101 etc.

If you want to store an 8-bit pattern, the only place it is guaranteed to occur is in the "8 bit patterns" chunk of the stream or later. That chunk starts 770 bits into the stream, so to encode a location that far into the stream requires... 10 bits.
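
The point is easy to test empirically; this sketch (mine, not the commenter's) builds a prefix of that generated stream and compares, for a few typical patterns, the bits needed to store a pattern's position with the bits in the pattern itself:

```python
# Sketch: in the stream 0 1 10 11 100 101 ..., storing where a pattern
# occurs costs at least about as many bits as the pattern itself.
def stream_prefix(limit):
    return "".join(bin(i)[2:] for i in range(limit))

s = stream_prefix(100000)
for pattern in ("10110001", "11111111", "1000000000000001"):
    pos = s.find(pattern)
    print(pattern, "found at", pos,
          "->", pos.bit_length(), "bits to store vs",
          len(pattern), "bits of data")
```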

b) Let's call a CD 60 minutes of 44000 Hz at 16 bit sampling rate.  That's
60 * 60 * 44000 * 16 bits
= 2,534,400,000 bits.  The number that this therefore represents is of the order of 2^2,534,400,000.

Given that the largest known prime is of the order of 2^43,000,000, which took the Great Internet Mersenne Prime Search many months to find, the prime factorisation of the "CD number" would take a very long time indeed.

Figuring out why this sort of approach doesn't work tells you a lot about information theory, but for practical compression you are better off looking at the actual data and what redundancies it has that you might be able to exploit.

Well, I wasn't thinking about applying this to an entire CD - perhaps more like a 3 minute song.  I know, even still this would be a difficult feat, but perhaps breaking the song further into smaller chunks would yield better results.  Using your method of finding "pattern chunks", a 24-bit vector could contain the starting location for a 10 second clip of audio at CD-quality (44100 samples per second at 16-bit depth and 2 channel stereo).

Given the fact that even high-quality MP3's and AAC files lose considerable resolution to compression, you could even possibly truncate the 16-bit sample depth to 12 bits for a compromise, thereby decreasing the amount of heavy-lifting necessary by a total of 8 bits in the starting vector.

I don't believe it would really compress audio so much as offer a way around DMCA.  Although it technically doesn't create a loophole, it still gives a more plausible excuse.  Let's say someone created a program similar to Prime95, only it processed a data stream (could be audio, stock market fluctuations, any data you decide to throw at it) and stored its results to a generic file.  Those files could still be traded under the guise of scientific research.  If you combine those files at your end to recreate audio - it's your thing, do what you wanna do.  I can't tell ya who to sock it to.

I mean, I'm sure I can create some permutation to turn my Windows .cab files into a .jpg of Mickey Mouse.  Will Disney sue me for violating their copyright by transforming existing, unrelated data into this photo?  Or will they sue Microsoft for creating code that can do so?  Or perhaps they'll sue math itself?
The main point is still the same: storing the location of a pattern in a known sequence takes just as much information as storing the data itself. Regardless of whether you're going to rip an entire CD or just a 3 minute song.

Regarding copyright, just transforming the data isn't enough. If it were, there would be no way to prosecute people for sharing MP3s (as MP3s are significantly abstracted from the original audio data). In court, it comes down to prior knowledge: if you can prove you haven't heard of Mickey Mouse before, and you just happened to transform your .cab file into this funny looking cartoon character you would like to use for marketing purposes -- then you're good to go.
I disagree with your first point.  Mathematically speaking, if you find the starting point for a known sequence and express it exponentially, it will take significantly less space to express it.  To be precise, the greater the number of the starting point, the higher the compression ratio becomes.

Bear with my explanation for one moment:

Audio signals fluctuate.  Assuming that the lowest frequency expressed by a given signal will be approximately 20 Hz (the low threshold of human hearing), it is reasonable to assume that the file to be analyzed would not carry a DC component for greater than 0.025 seconds (which, if it were DC for that period, would represent either the crest or trough of a 20 Hz square wave, assuming a 50% duty cycle).  This means that we can rule out certain exponentially expressed numbers that, when expanded, create a DC output.

While this would eliminate 2^n (the simplest to represent with n being the total length of the audio in bits), it would also eliminate 2^n-1 (the most costly to represent).  It would also rule out any starting point that gives you a few cycles, then DC.  Really, this logic excludes roughly half of any given possibilities for expressing such a number this way.

Computationally expensive, absolutely.  I'll give you that.  I never was much of a programmer, even in my Commodore 64 heyday.  But I can tell you that such a compression scheme is feasible, and it would significantly reduce the amount of data required to reconstruct audio.

As for copyright: after I posted my previous comment, I researched and found that there are, in fact, illegal primes.  A few were constructed by some guy who decided to take DVD cracking software and throw the right combination of bits at the end so that it would be a prime number.  Using this number, you can unzip the software.  Apparently, it's illegal under DMCA not because it's a numerical representation of software, but because it contains software that is designed to decrypt a copy-protection scheme.  It appears there's still no legal precedent for a numerical representation of anything like I'm suggesting, as it still falls under the "fair use" test.  So, if I find out that Metallica's "St. Anger" album can be reduced to a handful of powers of two, hooray for me.  If I use that to upload the album to Limewire, I'll still have Lars Ulrich's lawyers suing me for money they didn't make by selling the album.

Never mind the fact that it sounds like Hatebreed, but with less talent.  No, that couldn't possibly be why no one paid for a copy.
I'm not sure quite what you mean by your first paragraph, but my core assertion is still valid- storing the position of an arbitrary bit pattern in a generative stream takes at least as much information as just storing the bit pattern.  Information theory imposes that hard limit, there's nothing you can do about it.

Where your idea does become feasible is where you start examining this idea for music compression, not arbitrary data compression (because you can't actually compress arbitrary data, only data containing some redundancy).  Your assertion that most "chunks" of music will fit a certain pattern (frequencies between 20Hz and 20KHz, no DC component) is completely true.  The obvious way to utilise this is to arrange all the possible chunks of a given size in order of "likelihood to arise in a piece of music", so the common chunks are at the beginning of the pattern so take less information to encode.  If the first half of your track sounds like Lars Ulrich, the probability of more Lars Ulrich down the line is greater :)

This will only give you a slight compression, because you are still reproducing the music precisely (a.k.a. lossless encoding), but if you are also willing to say "well, this chunk from 2/3 of the way down the bitstream is actually very similar to chunk number 6 right at the beginning", hey presto, you are trading a loss of a small amount of reproduction accuracy for a much bigger reduction in size.

This is, not coincidentally at all, similar to the way I believe MP3 compression works.  It's certainly quite similar to the way JPEG encoding works, but I only did computer graphics lectures not computer audio so can't be certain.

At this point I suggest you read up a little about lossless vs. lossy compression, JPEG, MPEG, wavelet encoding and the related fields, because
a) what you find will probably have been written by people better at explaining things than me
b) it's a fascinating field.  For instance, it turns out that your nervous system between your eyes and your brain uses a similar type of encoding to JPEG images to compress data.  Who knew?

Concerning the DMCA, illegal primes etc., I think it's the spirit that matters (and will be taken into account) rather than technicalities.  If you end up with an audio file of copyrighted music you didn't pay for, it's still piracy whether the intermediate transfer was done with MP3s, prime numbers, vinyl records attached to carrier pigeons...
Not to beat a dead horse, but I'll make a second attempt as it was past my bedtime on the first try and I think I fizzled out about 1/16 the way through.

Let's assume that we have written a simple program that takes 2^n and turns that calculation into a bitstream.  Now, if n=2, the bitstream outputs 4; obviously, there isn't much compression happening, because n takes two bits to express while 4 takes three bits.  However, if n=16, the bitstream outputs 65536.  This is a considerable savings, as n is only 5 bits long while the bitstream is 17 bits long.  The savings continues to grow as n gets larger.

In searching for a real-world example, let's try creating a starting vector for 10 seconds of CD-quality audio.  That's 16 bits/channel, 2 channels total, at 44100 samples/second, for 10 seconds.  16*2*44100*10=14,112,000 bits total.  To find out how many bits long n would be, we simply find log2(14112000)=23.75, or 24 bits.  This is the starting place you mentioned before, where the "10 second audio number" lives.

In practice, we would actually take the 10 second bitstream and perform a log2 to get n.  However, in a real-world example there will still be a remainder.  So we continue to perform log2 to this number to get another, and so on, until we end up with 0 or 1 at the end.
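
It may help to see what the repeated-log2 scheme actually produces. In this sketch (my code, not the commenter's), peeling off the largest power of two and recursing on the remainder yields exactly the positions of the 1-bits, i.e. the original bitstream in another notation, so nothing gets smaller:

```python
# Sketch: repeated floor(log2) with remainders = the set-bit positions.
def log2_decompose(n):
    exponents = []
    while n:
        e = n.bit_length() - 1      # floor(log2(n))
        exponents.append(e)
        n -= 1 << e                 # keep the remainder, repeat
    return exponents

n = 0b1011001110001                 # stand-in for an "audio number"
print(log2_decompose(n))            # [12, 10, 9, 6, 5, 4, 0]
print([i for i in range(n.bit_length()) if n >> i & 1])  # same bits, ascending
```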

Another possibility is to compare the result to the original block of audio and find out how many seconds it takes before there is an inconsistency.  Then, one could pick up at that location with another 10 second block and continue until the routine reaches the end of the audio file.  At that point it would follow the example of the previous paragraph.

The point I feebly attempted to make before is that, while we couldn't hope to do this only one time, we also shouldn't expect to have to do this 14,112,000 times per block either.  This means that the resulting file of "n1+n2+n3+..." should still be significantly smaller than the original, at least from what I'm guessing.

I would have attempted this program a long time ago to see how successful I'd be, except I haven't programmed since the late '80s.  I do know that log2 is a simple calculation to make, even on very large numbers, so it wouldn't surprise me that this could be pulled off fairly quickly - however, I know it could possibly be expensive, as it would take many iterations for each block of audio.

I am aware that there can be a gaping hole in my logic here, as math was never my strong suit.  However, I would like to see good proof that I'm wrong.  If I were pursuing free energy, I'd expect to be told, "You can't break the laws of thermodynamics, you kook!"  If you could describe to me how there is a hard limit here, I'm all ears; however, right now I can only see the limit as a theoretical maximum, not as a law that it will take exactly as many bits to describe as the number one would describe with them.
Excellent. And you did it in one paragraph.
PHOBoS5 years ago
I converted your breadboard schematic to a more conventional electronic schematic, which also makes the circuit easier to understand. But either I'm not fully understanding the concept or you made a mistake in the schematic, because as you can see, on all the 4051 chips I/O 4,5 and I/O 6,7 are twisted. (Though I doubt it will make a big difference in sound.) I also took the liberty of converting the original Pandora circuit to another schematic, and except for the reverse-connected pins 9,10 I think you also forgot to connect I/O 8/9/10/11 to the output. I built this circuit (with all the I/O connected) and it sounds nice, though it's clearly a repetitive counting sound. But it's a very nice concept.
[Attached schematics: Pandora2_web.jpg, Pandora1_web.jpg]
strehlow6 years ago
Just a note about why BCD --> 7-segment decoder chips usually ignore 10-15. BCD is an abbreviation for Binary Coded Decimal, and in decimal the only valid digits are 0-9.

Every unique output state that must be defined requires logic gates to implement, and every gate costs money to produce. When designing the chip, only 0-9 had to map to a specific output. When the engineers optimized the design to minimize the number of gates, they didn't care what happened to the output lines for invalid input; defining those output states would have required additional gates. This is why different manufacturers' chips often produced different results for those undefined inputs: each logic array was optimized in a different manner using a different combination of gates. Within the defined states they behave the same way, but not in the undefined states.

There are hexadecimal --> 7-segment chips. They cost more (at least they used to).
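
strehlow's point about undefined inputs can be mimicked in software; in this sketch (the segment patterns follow the usual a-g convention, the rest of the framing is mine), inputs 10-15 are simply left out, just as the gate-minimized hardware leaves them unspecified:

```python
# Sketch: BCD-to-7-segment decoding with 10-15 as "don't cares".
SEGMENTS = {                        # bit order: segments a,b,c,d,e,f,g
    0: 0b1111110, 1: 0b0110000, 2: 0b1101101, 3: 0b1111001,
    4: 0b0110011, 5: 0b1011011, 6: 0b1011111, 7: 0b1110000,
    8: 0b1111111, 9: 0b1111011,
}                                   # 10-15 intentionally absent

def decode(bcd):
    return SEGMENTS.get(bcd)        # None = undefined, manufacturer-dependent
```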
Fun. I did this in software recently with notes in an octave, but am working on something lower level now. http://vimeo.com/1569824
bounty10126 years ago
... Um yea...
totokan6 years ago
Thanks for the maths and the complex explanation, but I want something really simple here. Your device counts upwards and outputs the count as sound? Reply will be met with further inquiry.
_soapy_7 years ago
[quote]Wait! What's imaginary memory? An example is the multiplication table.
I'm not just talking about what was memorized in elementary school,
but the infinite plane of all numbers multiplied together. A computer
does not have to store that anywhere, nor could it, to retrieve the
answer to a multiplication.[/quote]Actually, processor chips *do* store all the numbers in a table and look them up when they have to do multiplication. It also works backwards for division. It's faster than actually doing the maths the way humans do it.
This is how the infamous Pentium Bug happened - the chip designers thought they could remove, sorry, optimise further some of the values, and the result was that the look-ups were then wrong.

http://www.maa.org/mathland/mathland_5_12.html
VIRON (author)  _soapy_7 years ago
That is an exceptional Defect of the Pentium in particular. Most processors certainly do Not work that way.
_soapy_ VIRON7 years ago
Actually, all full modern chips have this look-up table. It used to be stored in a discrete chip, which was a maths co-processor, but now Pentium-level and above (Intel, AMD and IBM) all have them built in. Wasting cycles to work out a single division would be silly when a single call to a table burned onto the chip will do.
VIRON (author)  _soapy_7 years ago
Math coprocessor units are not lookup tables. They are calculators. They use floating point math which only works with small numbers. Almost "all full modern chips" do NOT have math coprocessing. And most of the chips Intel,AMD,IBM make are not for PC's. Some DSP and RISC chips have single cycle mul or div operations.
_soapy_ VIRON7 years ago
Yes, but that calculation is done by looking the result up, not by actually doing the maths!

See http://www.cs.earlham.edu/~dusko/cs63/fdiv.html and http://support.intel.com/support/processors/pentium/sb/CS-013007.htm (Last paragraph reads:

"The cause of the problem traces itself to a few missing entries in a lookup table used in the hardware implementation algorithm for the divide operation. Since this divide operation is used by the Divide, Remaindering, and certain Transcendental Instructions, an inaccuracy introduced in the operation manifests itself as an inaccuracy in the results generated by these instructions."

The reason this bug occurred was because someone thought they could further optimise the already fully optimised table by removing a few numbers that "weren't needed" but in fact were. This was to reduce the size of the look-up in the FPU.

awalton _soapy_7 years ago
Read up on how FPUs work. All digital FPUs do the math, they just don't do it all at once, since that would either be very slow or take a ridiculously huge number of transistors.

In the era of the Pentiums, they used a faster variant of the ever-popular SRT algorithm, calculating two bits at a time (radix-4) instead of one. (Off-topic: very recently, Intel bumped this again to a radix-16 division algorithm in the Core 2 family, which almost halved the latency of FDIV and related operations.) A full LUT-based Pentium FPU would need a 2^80-entry lookup table, which is absolutely enormous and unrealistic (you would need on the order of a trillion trillion bytes). By successively approximating the solution to the division until the error is too small to be represented by IEEE 754 at the given bit depth (in this case, 80 bits), you can implement a full hardware FPU without that enormous lookup table.

FPUs work much like we do when we go about solving a long division problem, a digit at a time (solving the recurrence relationship). After every step in the division, we calculate partial values by multiplication (CPUs use left- or right-shifts instead) and subtraction, and use the partial remainder values to complete the next digit(s). FPUs look up the next digit instead of calculating it directly to avoid additional latency for this operation, much as we know that 5 will go into 15 evenly without having to go to a calculator. (Oddly enough, we happen to know this because when we're kids we are taught at least a base-10, 144-value LUT (the "times table"/"multiplication table"), which happens to be really convenient when doing traditional math.)

The problem with the FPU was similarly pretty simple. The (IIRC) 1066-entry-long M*d LUT used to find the next quotient digit based on the last partial remainder (M) and the divisor (d) in the algorithm was five entries short. All of these entries contained 0, when they were meant to contain "+2". These entries were removed from the table after some engineer(s?) did some bad math while trying to prove these entries unreachable when in fact they were reachable, just very, very rarely (1 in 9 billion random operations). (Alternatively offered by some Intel-sponsored revisionist-historians was that these engineers simply "forgot to program the table with these values", however, this is exceedingly unlikely.)

So yes, it uses a LUT, but as a way of making the successive approximation algorithm faster, not to "look up the solution." You could leave the LUT out and still build an FPU, but it would be much more latent, which would sell fewer chips.
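
For readers who want to see the digit-recurrence idea concretely, here is a toy divider (restoring division, which is simpler than the radix-4 SRT awalton describes, but it uses the same one-digit-per-step recurrence):

```python
# Sketch: digit-recurrence division, one quotient bit per step.
def restoring_divide(dividend, divisor, bits=16):
    q, r = 0, 0
    for i in range(bits - 1, -1, -1):
        r = (r << 1) | ((dividend >> i) & 1)   # bring down the next bit
        if r >= divisor:                       # choose quotient digit 1...
            r -= divisor
            q |= 1 << i
        # ...else quotient digit 0 (SRT guesses this from a small LUT)
    return q, r

print(restoring_divide(1000, 7))               # (142, 6)
```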
merseyless7 years ago
is there any sort of computer program that could do this?
VIRON (author)  merseyless7 years ago
Yes. The oldest version runs on a TRS-80, and the latest version runs on the HYDRA. BUT: Many have tried and failed to run it on Windows. The least you can do is count and output each digit of every number as digital sound, which typically takes a few lines of code and less than 100 bytes.
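
One possible reading of that "few lines of code" (my sketch, not VIRON's TRS-80 or HYDRA version) for a modern machine:

```python
# Sketch: emit each digit of every number as an 8-bit sample on stdout.
# Pipe through a raw PCM player, e.g. on Linux:
#   python3 count_sound.py | aplay -f U8 -r 8000
import sys

n = 0
while True:
    for d in str(n):
        sys.stdout.buffer.write(bytes([int(d) * 28]))  # digit -> amplitude
    n += 1
```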
LeumasYrrep7 years ago
WOW. very neat. Didn't read it all, will soon. Now I want to make one. WEEKEND project here I come (will probably take me longer :)
VIRON (author)  LeumasYrrep7 years ago
NEWS: If you have a Hydra Game Console you can now download free Hydra programs from a Parallax.com forum that do the same and more than what these devices do. In the Parallax HYDRA forum; in thread titled "miscellaneous demos". (or a URL I will create in the near future.) Running the demos on a Hydra is much easier and probably more fun than building these devices. One reason is that the Hydra demos have audiovisualization graphics. I also should post a video demo soon of this project. The HYDRA game console is available from Makezine.com store, but my code can also easily be tried by anyone familiar with the inexpensive ($12.95) Propeller chip to run it without a Hydra.
VIRON (author)  VIRON7 years ago
Link to demos on Parallax Site Forum
(Works on a HYDRA, may be tweaked for other Propeller Chip products.)
Pumpkin$7 years ago
OKAY VIRON, YOU CONFUSED US. SO WHAT IN THE NAME OF SCIENCE IS THIS ALL SUPPOSED TO MEAN? SERIOUSLY MAN, THIS THING SEEMS USELESS!
gzaloprgm7 years ago
Very cool, is it like brute-force cracking on sounds, like producing all possible sound combinations?
VIRON (author)  gzaloprgm7 years ago
That's exactly what it is.
soapdude7 years ago
Looks cool but I have no idea what you're talking about. LOL
maeve7 years ago
I haven't a clue what this is about????
VIRON (author)  maeve7 years ago
I will ask my brother's young children to explain it to you. Meanwhile, it has been described as: "it's a box that makes music all by itself", and "it's like a radio from another planet".
maeve VIRON7 years ago
Sounds cool. Might make it once I've graduated from university (don't think I'll be able to make it until then!). Thanks for answering my question!
VIRON (author) 7 years ago
Check to see that the pin 11 resets (gray wires) on the 4040's go to ground. If not, they will stop and glitch randomly and intermittently. It also sounds like time to try putting bypass capacitors (0.1uF) across V+ and ground of the chips; when they roll over 1111 to 0000 is when they are most needed, since they may glitch then. Also, make sure the "blue wire" on pin 1 of the right 4040 is clocking pin 10 of the left 4040. We found an error in the schematic of the uC-less pbox before you reversed pins 9 and 10 on the 4040's, so that should already be fixed now. Also, using the amp or piezo or an LED, look for pure tones, ticking noises, and blinking on the 4040 pins that have diodes. (Some will be unhearable, so just make sure that some are making constant tones or ticking sounds. These get mixed in all combinations by the diodes to make the melody, but they should be steady and stable coming out of the 4040s.)
VIRON (author) 7 years ago
First try reversing pins 9 and 10 connections on both 4040's. (the two upper right ones). Don't worry about the capacitors; tell me if this made a difference.
gregor VIRON7 years ago
WAHOU :-)) IT work...ed.! just about one minute, and after a beautiful pitch down, silence... I changed the battery, checked the diodes i just resoldered... Counting backwards?
VIRON (author)  gregor7 years ago
It only goes forwards. As a fractal, it may seem repetitive. There should be a neverending melody that even includes inaudible ultrasonic tones but it should not go silent, it should keep a beat indefinitely. The battery should last days or weeks.
dentsinger7 years ago
Some time ago, I was trying to come up with a new way to write music using Fourier analysis and z(n+1) = z(n)^2 + c. I never finished it though. I'm one of those bipolar creatives who has tons of fabulous ideas to research in a ton of different fields, yet nothing ever gets finished. Hats off to you.

VIRON (author)  dentsinger7 years ago
Yeah, that's the function for the Mandelbrot set and all its Julia sets. It sounds very random and noisy, like radio static, using my technique.