Simon Williams wrote:
> Michael J. Mahon wrote:
>
>>Simon, I've put up a new sample (March 30) which shows the result of
>>two changes:
>>
>>1) I've reverted to a triangle-sine waveform--this eliminates most of
>>the "phaser" sound that results when a waveform has a fast transition
>>in it and decays slowly in amplitude.
>
> I'm really curious as to how the sounds are actually generated... it
> seems to me that there must be a fair bit of processing happening "on
> the fly".

You are correct!  Each Crate machine is kept 100% busy acting as a
single oscillator, or voice, of the 8-voice synthesizer.

Playing a note is identical to "playing" a rest, since a rest is simply
a note of zero frequency.  This simplifies timekeeping (since the same
mechanism is used for both notes and rests) but, even more importantly,
it keeps the 22kHz pulse train going (at the lowest duty cycle) during
the rest, which prevents any "thunk" when notes begin and end.

All notes begin at the lowest duty cycle (sample value 0) and end very
near the lowest duty cycle (0..3 out of 0..31), so there is little or
no noise when a note ends.

Between notes, the synthesizer code fetches the next note, sets up its
frequency, compensates for any "overrun" samples of the previous note
required for it to get near a sample value of 0, sets up the corrected
duration for the next note, and then vectors to it.  It does this in
precisely 2 sample times, during which it continues to generate the
lowest duty cycle 22kHz pulse train, so there are no "ticks" between
notes/rests.

There is also provision for command messages to be inserted into the
music, so that voice changes and "stop playing" can be handled (and
maybe "repeats" in the future).  The 22kHz pulse stream is maintained
during command processing as well.
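The "rest is a note of zero frequency" trick can be modeled in a few
lines of Python.  This is just an illustration, not my 6502 code: the
table contents, the 13-bit phase accumulator, and the `render_note`
helper are all made up for the sketch.  With a phase step of zero, the
oscillator never leaves table entry 0, so it keeps emitting the lowest
duty-cycle sample for the whole duration and the pulse train never
stops.

```python
def render_note(phase_step, duration, table):
    """Advance a phase accumulator by phase_step each sample and look
    up the 5-bit duty-cycle value (0..31) from a waveform table."""
    phase = 0
    out = []
    for _ in range(duration):
        out.append(table[(phase >> 8) & 31])   # high bits index the table
        phase = (phase + phase_step) & 0x1FFF  # hypothetical 13-bit accumulator
    return out

# Hypothetical 32-entry triangle-ish table; entry 0 is the lowest duty cycle.
table = [0] + list(range(1, 17)) + list(range(15, 0, -1))

note = render_note(phase_step=300, duration=8, table=table)
rest = render_note(phase_step=0, duration=8, table=table)  # zero frequency
# rest is all zeros: the pulse train continues at minimum width
```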
It's all that a 1MHz Apple can do to generate a 22.05kHz pulse train
with 32 different duty cycles (for 5-bit accuracy in D/A conversion),
while counting down note duration, counting up the envelope pointer,
and advancing the phase of the digital oscillator by a constant set by
the desired frequency of the note.

A new sample value is selected every 92 cycles (approximately an 11kHz
sample rate).  Of those 92 cycles, 16 are required for toggling the
speaker (4 STA's) and 3 for vectoring to the next generator (which is
dynamically determined during the current sample's pulses).  So
everything else has to fit into the 73 remaining cycles, for each of
32 pulse generators, each of which generates two pulses (a pulse rate
of 22kHz) accurate in phase and width to the cycle.

I use an Applesoft program to generate the Merlin code for the pulse
generators; it statically schedules the "work" instructions between
the "timed events" that must occur at specific cycles.  This typically
involves using some time-killing instructions to pad all the times
properly, and the total time budget must allow for those, too.

I started using a program to generate the code several years ago, when
I found it very onerous to reschedule (for DAC522) 16 pulse generators
by hand every time I made some change to the "work" code.  The
scheduler also allows me to determine quickly whether or not a given
amount of work can be successfully scheduled--a great help when
working out just what can be done within a sample interval and how to
do it.

The speaker outputs of all eight boards are simply mixed resistively
and filtered with a 100 microsecond time constant to remove most of
the 22kHz from the output, which goes to an audio preamplifier
(actually the line input of a cassette deck at the moment).

>>2) I've fixed the MIDI timing bug that has nagged me for two weeks--I
>>kept thinking that I had a synthesizer timing bug, when I actually had
>>a MIDI timing bug related to changes in tempo.
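(For anyone who wants to check the cycle budget above, here is the
arithmetic worked out in Python.  I'm using the round 1MHz figure from
the text; the real Apple II clock is closer to 1.02MHz, which is where
the 22.05kHz figure comes from.)

```python
import math

CLOCK_HZ = 1_000_000        # nominal 1 MHz (actual Apple II clock ~1.02 MHz)
CYCLES_PER_SAMPLE = 92

sample_rate = CLOCK_HZ / CYCLES_PER_SAMPLE  # "approximately 11kHz"
pulse_rate = 2 * sample_rate                # two pulses per sample -> ~22kHz

speaker_toggle = 16   # four 4-cycle STA's to the speaker soft switch
vectoring = 3         # jump to the next pulse generator
work_budget = CYCLES_PER_SAMPLE - speaker_toggle - vectoring

# Single-pole RC filter with a 100 microsecond time constant:
cutoff = 1 / (2 * math.pi * 100e-6)         # ~1.6 kHz, well below 22 kHz

print(round(sample_rate), round(pulse_rate), work_budget)  # 10870 21739 73
```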
;-)

> I'm not terribly familiar with the MIDI file format, so that will be
> my research project for the weekend...

It's more complicated than I expected, but nicely uniform.  Then there
are the different ways that various sequencers "use" the MIDI format
to deal with...running status vs. no running status, and "key up"
status vs. "key down" with velocity 0.

>>You will notice the reduced "phaser" effect and that the final chords
>>are now tight together, without the advanced note of the prior
>>versions.
>
> I would have been content to be blown away by the first version, but
> by now you've got to the point where any further refinement would
> just be "showing off" ;-)

;-)  Thanks!  I admit to being pretty blown away myself when it
worked!

Now I have to strike a balance between "proof of concept" and "fully
developed" to figure out when to stop fiddling with it and leave it to
others to develop, if they wish.

My current thinking is that I need to get multiple voices fully
supported and demonstrated (including "expression", or multiple attack
amplitudes), and the special class of fixed-pitch sampled voices
(percussion).  All these fit nicely into the existing framework.

And I need to generalize my Applesoft MIDI-to-music file converter, so
that it can page through large MIDI files instead of requiring the
whole MIDI file to be BLOADed.  This won't be hard.

> Well, I've dug out a stack of IIe boards from storage and picked up a
> soldering iron, so guess what's next...

Excellent!

BTW, it has been interesting to treat the AppleCrate and NadaNet as
"just works" infrastructure during synthesizer development.  It is a
nice shift of perspective to go from working on something to just
using it to do something else.  And I'm happy to report that I find it
quite usable.  ;-)

-michael

New Applesoft BASIC interface for NadaNet networking!
Home page: http://members.aol.com/MJMahon/
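P.S.  For anyone starting that weekend MIDI research project, here is
a minimal Python sketch of the two sequencer quirks mentioned above:
"running status" (a data byte reuses the previous status byte) and
note-on with velocity 0, which by convention means note-off.  It only
handles note-on/note-off channel messages; a real parser must handle
the other message types and their data-byte counts too.

```python
def decode_channel_events(data):
    """Decode a flat list of MIDI note-on/note-off bytes into
    (event, channel, note, velocity) tuples, handling running status."""
    events, status, i = [], None, 0
    while i < len(data):
        if data[i] & 0x80:        # a status byte: remember it...
            status = data[i]
            i += 1                # ...otherwise reuse the previous one
        kind, channel = status & 0xF0, status & 0x0F
        note, vel = data[i], data[i + 1]
        i += 2
        if kind == 0x90 and vel > 0:
            events.append(("on", channel, note, vel))
        elif kind == 0x80 or (kind == 0x90 and vel == 0):
            events.append(("off", channel, note, vel))
    return events

# Note-on middle C, then (running status, no new status byte) a
# note-on with velocity 0 -- which is really a note-off:
msgs = decode_channel_events([0x90, 60, 100, 60, 0])
# msgs == [("on", 0, 60, 100), ("off", 0, 60, 0)]
```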