"Bryan Parkoff" writes: > Can Disk II Family read three or four (zero) bits that it will always > show invalid? Is it the truth that Disk II's hardware is designed to read > only one or two (zero) bits? Yes, that's correct. > Can you explain why? The Disk II controller (and IWM) can't reliably read "nibbles" containing more than two consecutive zero bits. In fact, the older 13-sector state machine PROM (P6) can't reliably read more than one consecutive zero bits. The one bits in the nibble are stored as flux transitions on the media. Zero bits are only indicated as the lack of a flux transistion. On a write, every time there is a pulse on the write data line, the drive "analog board" electronics and head cause a flux transistions. On read, the head and analog board detect the flux transitions and produce pulses. The controller transfers data using a shift register running at a crystal-controlled rate, but the drive motor speed is not sufficiently well regulated to make it possible to accurately time long streams without flux transistions. Also, there is speed variation from one drive to another. The bit time is nominally approximately 4 microseconds. Suppose you have a disk that was written on a drive that was 8% fast. Now you read that disk on a drive that is 8% slow. The bit timing will be 4.7 microseconds. On the other hand, you might write a disk on a drive that is 8% slow, then read it on a drive that is 8% fast. This would make the timing 3.4 microseconds. Here are some example timings: interval between pulses fast wr slow wr data nominal slow rd fast rd ----- ------- ------- ------- 11 4 us 4.7 us 3.4 us 101 8 us 9.4 us 6.8 us 1001 12 us 14.1 us 10.2 us 10001 16 us 18.8 us 13.6 us Suppose the state machine sees two pulses with an interpulse delay of 13.8 microseconds. Should it decode that as 1001 or 10001? It is ambiguous. Restrictions on consecutive zero bits are used in all disk channel codes. The IDE or SCSI drive in your PC or Macintosh has such restrictions. However, they are dealt with entirely by the embedded control system of the disk drive, so you as an end user never notice it. With the Disk II controller, this restriction has to be dealt with by software running on the main processor. > Why do each byte require between $80 and $FF? Because as the bits are being shifted in from the disk, the only way the software can tell that a whole nibble has been read is by the most significant bit being set. Suppose that this restriction did not exist. Then the values $FE and $7F would both be valid nibbles. When the software is reading a nibble and gets a $7E, how would it know that the full nibble had been read, and that there wasn't another zero bit in the nibble about to be shifted in, making it $FE? > I do understand that between $00 and $7F can't be used > because it will show more than two (zero) bits. That explains $00. The only reason $7F can't be used is that there would be no way for the software to tell that the full nibble has been read. > How did Steve Worniak decide to design logic chip that can only handle > two (zero) bits? The logic chip (74LS323) doesn't care whether the bits are 0 or 1. Only the software cares. > I do know that he decides to use software by reading each bit. The software does NOT read each bit, at least not individually. It loops reading the shift register until the most significant bit is set. > If hardware is chosen to use its own hardware, three or more (zero) > bits can be accurate because the rotation speed is always fixed. 
> If hardware is chosen to use its own hardware, three or more (zero)
> bits can be accurate because the rotation speed is always fixed.

The rotation rate is never "fixed". Even on drives that use a tachometer
and servo for motor speed control, there is always some variation.

> I am so curious why Apple II decide NOT to use standard MFM encoding
> that it does exist before 1970s, but all PC like 8080 and/or 8085 use
> MFM encoding (probably FM encoding) before 1970s.

Because in 1977, when Woz designed the Disk II controller, it would have
cost a lot more money, and used a much larger circuit board, to implement
MFM. The Disk II controller *is* capable of reading and writing FM.

> Please explain every detail if you have great knowledge.

Please pay me my standard consulting fee, and I'll be glad to explain
every detail.

Alternatively, get a book. There are several that explain this stuff.
Beneath Apple ProDOS is a good start. (There are some fundamental errors
in the description in Beneath Apple DOS, so I don't recommend it as a
source for information on the low-level operation of the Disk II, but it
is still *very* useful at explaining how the Apple DOS software worked.)
There is another book that has an even better explanation of how the Disk
II controller works, but I don't recall the title or author at the
moment.


Eric Smith wrote:
> There is another book that has an even better explanation of how the
> Disk II controller works, but I don't recall the title or author at
> the moment.

It's _Understanding the Apple II_ (or the later _Understanding the Apple
//e_) by Jim Sather--a great book, now hard to find.

-michael

Check out amazing quality sound for 8-bit Apples on my Home page:
http://members.aol.com/MJMahon/


"Bryan Parkoff" writes:
> I will be willing to pay you the fee for more information about MFM
> encoding and MFM hardware.

To understand any of these data formats, it is necessary to understand
the concept of a channel code, and the distinction between data bits and
channel bits.

Raw data cannot normally be written directly to a medium, or transmitted
across a communication channel, because there has to be some means at the
reading/receiving end to determine where each bit begins and ends. One of
the most common ways of dealing with this is to transform the data into a
self-clocking format using a channel code.

Channel codes may or may not be binary. For instance, modems (including
those used for DSL) generally pack multiple bits per transmitted symbol.
However, for our purposes we will only consider binary channel codes. To
convert binary data into a binary self-clocking channel code, extra bits
must be added.

Data bits are the user-level data. For instance, on an Apple II a logical
sector consists of 256 bytes of eight bits each. But at the hardware
level, the disk controller deals with groups of eight channel bits that
have special limitations on the allowable combinations. Each group of
eight channel bits encodes only six data bits (or five data bits in
13-sector format).

In general, GCR formats allow any string of channel bits that meets the
requirement of not too many consecutive zeros. The channel bits are used
in groups, and typically there is a "code book" to map combinations of
user data bits into groups and vice versa, hence the term "group code".
Sometimes other constraints are added; for instance, Apple GCR adds the
requirement that the most significant bit must be a one. Apple calls
these groups "nibbles".

Note that in GCR there is not a one-to-one mapping of data bits into
channel bits. You can't isolate bit 4 of the group, for instance, and say
that it corresponds to bit 6 of the data byte.
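A toy sketch may help here (Python, and the table below is hypothetical,
not Apple's actual 6-and-2 translate table): the code book maps each
whole group of data bits to a whole channel nibble, and the reverse
lookup is the only way back.

    # A toy illustration of the "code book" idea. The table is hypothetical
    # -- it is NOT Apple's actual 6-and-2 translate table -- but it shows the
    # essential point: each 6-bit data value maps to a whole 8-bit channel
    # group, and decoding is just the reverse lookup.

    code_book = {
        0x00: 0xAB, 0x01: 0xAD, 0x02: 0xB5, 0x03: 0xD6,
        # ... a real table continues like this up to 0x3F (64 entries)
    }
    decode_book = {group: data for data, group in code_book.items()}

    def encode_group(data6):
        """Map six data bits to one eight-bit channel nibble."""
        return code_book[data6 & 0x3F]

    def decode_group(nibble):
        """Map one channel nibble back to six data bits."""
        return decode_book[nibble]

    print(hex(encode_group(0x02)))           # 0xb5: the whole group changes...
    print(hex(encode_group(0x03)))           # 0xd6: ...when one data bit changes
    print(decode_group(encode_group(0x02)))  # 2: the lookup is the only way back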
In simpler channel codes, there is a more direct relationship between
data bits and channel bits. For instance, in FM coding every other
channel bit must be a one, and is called a clock bit. The data bits are
passed through unmodified, so each data bit maps to exactly one
corresponding channel bit. (Inside Apple DOS erroneously claims that
Apple GCR has clock bits.)

FM coding is basically identical to the "4+4" format used for the volume,
track, and sector bytes of the address field in Apple format. But on an
FM disk, all data is encoded that way.

You can see that FM is less efficient than Apple GCR. There are only 16
legal combinations of eight channel bits, versus 32 for Apple 13-sector
format, and 64 for Apple 16-sector format. (FM uses a few otherwise
"illegal" codes with a missing clock bit as the index mark, address mark,
and data mark.) For the same channel bit rate, FM is 4/5 as efficient as
Apple 13-sector format (5+3 encoding) and 2/3 as efficient as Apple
16-sector format.

For typical 5.25-inch FM formats, the nominal channel bit cell time is
4 us, thus the interpulse time is 4 us or 8 us. For each actual data byte
recorded, eight clock bits are inserted, so a byte of data requires 16
channel bits (64 us).

MFM packs data with twice the density by eliminating most clock bits.
Clock bits are only introduced when two consecutive data bits are both
zeros. The channel bit rate is doubled, so the nominal interpulse times
are 4 us (between two consecutive one data bits), 8 us (between two one
data bits separated by a single zero data bit), or 6 us (when two one
data bits are separated by two zero data bits, the clock bit written
between the zeros splits the gap into two 6 us intervals). Thus twice as
much data can be stored on a disk using the same bandwidth, requiring
only that the read channel be able to discriminate the pulse timing more
precisely.
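To make the interval arithmetic concrete, here is a small sketch (Python,
purely illustrative; the 4 us and 2 us channel cell times are just the
nominal figures quoted above):

    # A sketch of the interval arithmetic above.
    # FM:  a clock '1' before every data bit, 4 us channel cells.
    # MFM: a clock '1' only between two consecutive zero data bits, 2 us cells.

    FM_CELL_US = 4
    MFM_CELL_US = 2

    def fm_channel_bits(data_bits):
        out = []
        for d in data_bits:
            out += [1, d]                   # clock bit, then the data bit
        return out

    def mfm_channel_bits(data_bits, prev=1):
        out = []
        for d in data_bits:
            out += [1 if (prev == 0 and d == 0) else 0, d]
            prev = d
        return out

    def pulse_intervals(channel_bits, cell_us):
        """Time between successive flux transitions (the '1' channel bits)."""
        times = [i * cell_us for i, b in enumerate(channel_bits) if b]
        return [b - a for a, b in zip(times, times[1:])]

    print(pulse_intervals(fm_channel_bits([1, 0, 1]), FM_CELL_US))       # [4, 4, 8, 4]
    print(pulse_intervals(mfm_channel_bits([1, 0, 1]), MFM_CELL_US))     # [8]
    print(pulse_intervals(mfm_channel_bits([1, 0, 0, 1]), MFM_CELL_US))  # [6, 6]

The 6 us MFM intervals come from the clock bit written between the two
zeros, which is why MFM needs a read channel that can tell 4, 6, and 8 us
apart, rather than just 4 and 8.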
> Maybe FM encoding will be included. I do have
> enough information about GCR encoding since I already have Beneath to DOS
> 3.3 and ProDOS. I don't need more information for GCR encoding, but I need
> to understand how Disk II hardware work. I do have Understanding Apple II+
> & //e book that explains Disk II hardware.

Based on the questions you asked in the original message, it would appear
that you should spend more time studying those books.

> Do you have any book that can explain FM / MFM encoding and hardware?

No. However, the data sheets for the NEC uPD765, Intel 8271 and 8272, and
Western Digital 1771 and 1791 have a lot of useful information about it.
The data on the Western Digital parts may be found in their 1983 data
book:
http://www.bitsavers.org/pdf/westernDigital/_dataBooks/


Bryan Parkoff asked:
> Can Disk II Family read three or four (zero) bits that it will always
> show invalid? Is it the truth that Disk II's hardware is designed to read
> only one or two (zero) bits? Can you explain why? Why do each byte require
> between $80 and $FF? I do understand that between $00 and $7F can't be
> used because it will show more than two (zero) bits.
> How did Steve Worniak decide to design logic chip that can only handle
> two (zero) bits? I do know that he decides to use software by reading
> each bit. If software is chosen to use without from hardware's help, two
> (zero) bits can be accurate because the rotation speed may vary. If
> hardware is chosen to use its own hardware, three or more (zero) bits can
> be accurate because the rotation speed is always fixed.
> I do see that most copy-protected disks contain more than three (zero)
> bits. If it is true, it is called weak bit that GCR encoding could detect
> three (zero) bits before it can skip to continue reading next valid bits.
> I am so curious why Apple II decide NOT to use standard MFM encoding
> that it does exist before 1970s, but all PC like 8080 and/or 8085 use MFM
> encoding (probably FM encoding) before 1970s. Please explain every detail
> if you have great knowledge.

I would add only one thing to Eric's excellent explanation.

The analog read amplifier on the analog card must successfully detect
magnetic transitions and generate a pulse when one is detected. Since
magnetic media, recording currents, and heads are not identical, and are
subject to variations in response (particularly the media), it is
necessary to employ an automatic gain control (AGC) to maintain
appropriate read signal levels.

As long as a series of 1's is arriving, the AGC circuit has a good
measure of the read signal level, and can maintain good gain control. But
if there is a series of 0's (no transitions), then the AGC circuit treats
that as a lack of sufficient signal, and turns up the gain. If the
average number of 1's goes too low for too long, the gain of the read
channel will be increased until random noise read from the media is
interpreted as transitions. This is why reading an erased disk will show
a random pattern of 1's and 0's.

To summarize, the presence of transitions (1's) turns down the gain to
the proper level, and the absence of transitions causes the gain to rise
until "transitions" are detected, even if they are only noise.

The time constant of the AGC circuit must be chosen to be fast enough to
respond to relatively sudden changes in the response of the media, but
slow enough that it will not "turn up the volume" so fast that legitimate
strings of 0's will be corrupted by noise. In the case of the SA400 drive
chosen by Apple (Steve Wozniak, actually), and compatible drives, the
time constant only allows two consecutive zeros to be read reliably in
the context of enough 1's. (It is interesting to note that even three 0's
may be read reliably if they are preceded by several 1's--for example,
the nibble $F8.)

This constraint was adopted in the GCR scheme Woz designed (although his
original design was a bit more restrictive than required, resulting in
only 32 usable 7-bit-plus-"start"-bit nibbles, and only 13 sectors per
track). The 16-sector version of the code used the properties of the AGC
circuit to permit the two-consecutive-zero cases when they were
surrounded by sufficient 1's (transitions) to keep the AGC happy.

Timing constraints are also a real consideration, as pointed out by Eric,
but the read channel characteristics are fundamental.

-michael

Check out amazing quality sound for 8-bit Apples on my Home page:
http://members.aol.com/MJMahon/
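For readers who like to see the idea in code, here is a deliberately
crude sketch (Python, with made-up numbers; it is not a model of the
actual SA400 analog board) of the AGC behavior described above: each
missing transition nudges the gain up, each transition settles it back,
and a long enough run of zeros lets the gain reach the point where noise
can be mistaken for transitions.

    # A toy sketch of AGC behavior; the 20%-per-bit-cell rise and the noise
    # threshold are made-up numbers chosen only to illustrate the idea.

    def agc_reaches_noise(bits, rise_per_zero=0.20, noise_gain=1.6):
        gain = 1.0
        for b in bits:
            if b:
                gain = 1.0                   # a transition restores the level
            else:
                gain *= 1.0 + rise_per_zero  # no transition: turn up the gain
            if gain >= noise_gain:
                return True                  # noise may now look like transitions
        return False

    print(agc_reaches_noise([1, 1, 0, 0, 1]))  # False: two zeros are safe
    print(agc_reaches_noise([1, 0, 0, 0, 1]))  # True: three zeros are not

The real circuit has a time constant rather than the instant reset used
here, which is part of why, as noted above, three zeros preceded by
several 1's (as in $F8) can still be read reliably.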