
Guitar To MIDI Using The LogoChip V2

Contents:

1. Introduction
2. Overview
3. Design and Implementation
4. Improvements
5. Conclusion
6. Appendix

1. Introduction

For my main project for the Spring 2005 section of Robot Design (91.548), a guitar to MIDI interface was designed, intended to serve as a bridge between guitar playing and MIDI control of external hardware and software. This document describes the details of the design, experimentation, implementation, and final outcome of the project. The original plan was nowhere near completed in the time expected, as many important issues that were not apparent at the outset arose along the way. These issues are presented in this document and can serve as a valuable source of information for the design of a similar project, or of one involving any of the key aspects of this project, namely interfacing a physical musical instrument with software using a PIC.

2. Overview

The initial design for this project called for an optical pickup for each string that fed the audio signal from that string alone to the input of an analog switch, which selected the string currently being fed to an analog-to-digital input of the LogoChip V1. The LogoChip was to step through each input as needed and sample a duration of the signal, providing a sample window of each string that could be used to gather pitch, volume, and other attributes of the sound. Inside the LogoChip, the sample would be analyzed for the time between zero crossings in the waveform, giving the wavelength of the signal, which could be used to determine pitch. The amplitude of this waveform over time would provide realtime changes in volume. The data gathered from the input sampling was to be sent as MIDI note on and off events, with the initial note volume (velocity) set by the initial amplitude of the waveform, realtime volume changes sent based upon amplitude changes of the input signal, and pitchbending following that of the guitar, all sent on a different channel for each string. The separate channels allow the realtime volume change to be applied per string, as this MIDI command is global to all notes on a channel. As the project progressed, however, nearly all of these expressive and interesting design plans were dropped, leading to the final result of simply being able to somewhat accurately reproduce the MIDI note that corresponds to the note being played on the guitar. The reasons for these changes in design are described in the following sections, and mainly have to do with limitations in time and in the original design specification.
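
To make the channel-per-string idea concrete, the sketch below (not part of the project code) shows the raw bytes of the three MIDI messages the original design relied on: Note On, the channel volume controller (CC#7), and pitch bend. Because controller and pitch bend messages apply to every note on their channel, giving each string its own channel is what allows per-string volume and bend. The helper names and the printf output are purely illustrative.

// Sketch of the raw MIDI messages the original design called for, one channel
// per string. Standard MIDI status bytes: 0x90 = Note On, 0xB0 = Control Change
// (controller 7 = channel volume), 0xE0 = Pitch Bend. How the bytes reach the
// synthesizer (serial port, DirectMIDI, etc.) is left abstract here.
#include <cstdint>
#include <cstdio>
#include <vector>

std::vector<uint8_t> noteOn(uint8_t string, uint8_t note, uint8_t velocity) {
    // Each guitar string gets its own channel (0-5) so that channel-wide
    // messages (volume, pitch bend) affect only that string's note.
    return { static_cast<uint8_t>(0x90 | string), note, velocity };
}

std::vector<uint8_t> channelVolume(uint8_t string, uint8_t volume) {
    return { static_cast<uint8_t>(0xB0 | string), 7, volume };   // CC#7
}

std::vector<uint8_t> pitchBend(uint8_t string, uint16_t bend14) {
    // 14-bit bend value, 0x2000 = no bend, split into LSB/MSB data bytes.
    return { static_cast<uint8_t>(0xE0 | string),
             static_cast<uint8_t>(bend14 & 0x7F),
             static_cast<uint8_t>((bend14 >> 7) & 0x7F) };
}

int main() {
    // Channel 0 (one string): start note number 40 at velocity 100, then bend it slightly.
    for (uint8_t b : noteOn(0, 40, 100)) std::printf("%02X ", b);
    std::printf("\n");
    for (uint8_t b : pitchBend(0, 0x2200)) std::printf("%02X ", b);
    std::printf("\n");
    return 0;
}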

3. Design and Implementation

The first part of the original design to be implemented was the optical guitar pickup system. The idea for this design came from information I found on the Internet about the existence of such a system, not from schematics. The initial results of the following plan appeared so promising that such a system seemed very easy to implement and incorporate into the overall design. As with many things, however, it did not turn out to be as easy as it looked. The pickup was implemented using standard break-beam sensors of the kind used in various robotic designs, which consist of a transmitter and a receiver mounted on a base that separates the two by a small distance. Just as with standard guitar-to-MIDI pickups, which use six individual magnetic pickups, the break-beam sensor was placed as close to the bridge of the guitar as possible, which improves pitch tracking. The break-beam was used in such a manner that the string itself broke the beam, and its vibrations caused the receiver to produce an analog audio signal of the guitar string. The pickup was tested in two mounting orientations: with the transmitter shining from above the string down to the receiver below it, and with the transmitter and receiver on either side of the string. Both methods actually produced the same results, despite the fact that guitar strings are generally plucked so that they vibrate side to side. Since the pickup was so close to the bridge, the majority of the vibration appeared to be circular, moving in all directions: up and down as well as side to side. By simply hooking up the break-beam as an LED driven from a positive voltage and a receiver that controls the amount of output from a positive supply, an audio signal of approximately the same voltage swing was produced at the output.

Figure 3.1 below shows the setup of the break-beam pickup on the guitar. The pickup is placed as close to the bridge as possible and adjusted so that the string sits at the middle of the gap between the break-beam transmitter and receiver, both in height and from side to side. If this adjustment is not made correctly, the output will be based solely upon the areas of the string that break the beam, which can lead to interesting, but perhaps useless, waveforms. When adjusted correctly, however, the signal out of the pickup was extremely clean, with very little 60 Hz hum, nearly no noise, and a very smooth waveform.


Figure 3.1: Break-Beam Pickup Setup


This pickup design, however, suffered from many problems that rendered it unusable for the project without heavy modification. Firstly, while the output of the break-beam sensor itself was completely free of its own 60 Hz hum, unlike magnetic guitar pickups, it was susceptible to external hum and interference from any light-producing source in the area. This meant that lights directly above the pickup could be heard in the final output, all of which produce 60 Hz hum and other pitched noise (especially fluorescent lights) that interfere with the audio output. Secondly, since the pickup's output depends upon how much of the transmitted beam is broken by the guitar string, any movement of the string produces output. Simply moving the string therefore produced signals as loud as the audio signal itself, which proved to be a problem in the process of determining pitch using the ADCs. Finally, because the break-beam had to be oriented so that the beam is broken in one direction, i.e. up and down or side to side, it suffered from various volume cancellation problems. The first of these was that if the beam was broken side to side, then as higher notes were played, the height of the string got lower and therefore further from the center of the break-beam axis. This meant that higher notes had both lower volume and significantly shorter decay times, which was useless for this project. If the mounting was done up and down, this would not have been as much of a problem, except that as the string got lower it would get closer to the transmitter or receiver, meaning that the distance between the two must be large enough to accommodate the change in string height so that the string does not hit the transmitter or receiver. The second volume-related problem was that since the vibration of the string occurred in a circular manner at the bridge, there were slow changes in volume where the side-to-side movement dropped to zero even while the vertical vibration was still at full level. This is not a problem with magnetic pickups, as the field induced over the pickup is not dependent on the displacement from some axis.

These problems led to the decision that the optical pickup design was not viable for this application unless it were significantly redesigned. There are no MIDI guitar designs that I know of that use optical pickups, and these problems may be a major factor in that. The optical design was dropped, and the guitar's built-in magnetic pickup was used instead, producing better output for this particular application. This pickup, however, was only capable of producing single-string output when only one string was played. This could be remedied by using the aforementioned six-individual-magnetic-pickup design. A block diagram of the final guitar to LogoChip assembly is shown in Figure 3.2 below. This design uses the internal magnetic pickups of the guitar and distorts the signal using an overdriven op-amp to produce nearly square waves, which are timed by the LogoChip and sent to the computer, which plays the MIDI notes they represent.


Figure 3.2: Guitar to LogoChip Block Diagram


The use of the magnetic pickups required that the signal be amplified to bring it up to levels useful to the PIC. Since the output of a guitar pickup is very low voltage, the signal must be boosted, in this case by an op-amp. The op-amp was also used to distort the signal, producing a nearly square waveform of the input signal, which was easier to use in the frequency-determining process. Distorting the wave was accomplished by greatly increasing the gain on the op-amp until the input clipped into an approximate square wave. Measuring the frequency of a fully square wave is as simple as timing how long it is low, then high, before it goes low again. This time is one full cycle of the wave and represents the period (wavelength) of the signal. The output of the op-amp, however, was not fully square, and in fact had many issues. Since the guitar string, even with the magnetic pickup, produced output far from a pure sine wave, with many variations in waveform, the wave did not clip at all points. Although the frequency of the wave remained the same, the width of the wave and various harmonic peaks changed over time, making it difficult to measure by simply checking the lengths of high and low signal. For this reason, a new manner of measuring the frequency had to be designed.
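
As a rough software model of what the overdriven op-amp stage does (illustrative only, not the actual circuit behavior): a large gain followed by clipping at the supply rails turns the small pickup signal into a near-square wave at the same frequency. The gain and rail values below are arbitrary placeholders, and a bipolar supply is assumed for simplicity.

// Minimal model of the overdrive stage: boost the millivolt-level pickup
// signal by a large gain, then clip against the supply rails, producing an
// approximately square wave whose period is easy to time.
#include <algorithm>

double overdrive(double in, double gain = 1000.0, double rail = 5.0) {
    double out = in * gain;                  // large gain stage
    return std::clamp(out, -rail, rail);     // hard clipping at the rails
}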

Initially, the output of the optical pickup, and later the magnetic pickup, was fed into the analog input of the LogoChip V1, which checked for times when the wave crossed zero. With a square wave this would have worked, but in this case it was not a viable option. The code was later changed to wait while the signal is high, then wait while the signal is low, then, once it went high again, report the time of this cycle. This produced relatively accurate results, although timing in this manner has a 1 ms resolution, which is not precise enough for measuring such frequencies. A diagram of the interface between the guitar and the LogoChip V2 circuit assembly is shown in Figure 3.3 below.
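
The sketch below illustrates this wait-high/wait-low approach against a simulated clipped string signal, and shows why 1 ms resolution falls short: the open high E string at roughly 330 Hz has a period of about 3 ms, so a single millisecond of quantization error shifts the apparent pitch by several semitones. This is a stand-alone simulation, not the LogoChip code.

// Self-contained sketch of the "wait while high, wait while low" timing idea,
// run against a simulated clipped string signal sampled at 1 kHz so that the
// 1 ms timing resolution limit is visible.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double pi = 3.141592653589793;
    const double sampleRateHz = 1000.0;   // 1 kHz sampling ~= 1 ms timing resolution
    const double stringHz = 329.6;        // open high E string
    std::vector<bool> high;               // clipped (square) version of the string signal
    for (int i = 0; i < 1000; ++i)
        high.push_back(std::sin(2.0 * pi * stringHz * i / sampleRateHz) > 0.0);

    // Find a rising edge, time the high half, then the low half: one full cycle.
    size_t i = 0;
    while (i < high.size() && high[i]) ++i;      // skip any initial high region
    while (i < high.size() && !high[i]) ++i;     // wait for the rising edge
    size_t start = i;
    while (i < high.size() && high[i]) ++i;      // high half of the cycle
    while (i < high.size() && !high[i]) ++i;     // low half of the cycle
    double periodMs = (i - start) * 1000.0 / sampleRateHz;

    // With only 1 ms ticks, a 3.03 ms period reads as 3 ms or 4 ms, i.e.
    // somewhere between roughly 250 Hz and 333 Hz -- several semitones of error.
    std::printf("measured period: %.0f ms (~%.0f Hz, true %.1f Hz)\n",
                periodMs, 1000.0 / periodMs, stringHz);
    return 0;
}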

Figure 3.3: Guitar to LogoChip Circuit Schematic Diagram


The first version (V1) of the LogoChip also had limitations on the sampling rate of the analog-to-digital conversion, which did not allow the higher string frequencies to be measured. As more code was added to the reading loop, this sampling rate decreased further, which led to the need for another PIC. The LogoChip V2 was used as an alternative, as it still provided the simplicity of the LogoChip language, with added speed and dedicated microsecond-accurate timers. The design was altered to use the distorted guitar signal as the clock for a timer on the LogoChip. Since this timer could continue to count faster than the code could run, without extra burden on the processor, a few cycles could be used to wait for the timer to time the guitar clock input, producing a final timed value that could be used to determine the frequency of the wave. This timer counted the rising-edge transitions in the wave, meaning that the value it presented already represented whole cycles of the wave, without extra computation required. Because no computation is required for the timing, if the PIC were designed to produce MIDI output directly, other MIDI-related computation, such as gathering volume information, could occur while the timer value was being accumulated, and a lookup table from timer values to MIDI notes could be used. This would significantly reduce the processor cycles needed, perhaps to the point required for six strings of MIDI output. A sound sample of the distorted guitar and the resulting MIDI note output is linked below. The MIDI piano sound is on the left channel and the guitar is on the right (stereo track). Piano_L-and-Guitar_R.wav Sound Sample.
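
The core of this approach reduces to a single division: the number of rising edges counted during a known gate window is the frequency in hertz once divided by the window length. The short sketch below (with an invented example count, not measured data) shows the arithmetic; on the real chip the count would come from the timer register.

// Sketch of the gated-counter idea behind the LogoChip V2 approach: the clipped
// guitar signal clocks a hardware counter, the code reads (and resets) the count
// at a fixed interval, and cycles divided by gate time gives frequency directly.
#include <cstdio>

double frequencyHz(unsigned cyclesCounted, double gateSeconds) {
    return cyclesCounted / gateSeconds;   // edges per gate window = Hz
}

int main() {
    // Example: 110 rising edges counted in a 0.5 s window -> 220 Hz (an A note),
    // with no per-sample work done by the processor during the window.
    std::printf("%.1f Hz\n", frequencyHz(110, 0.5));
    return 0;
}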

In the original design, I figured that the LogoChip would directly output the MIDI information, as described in the previous paragraph. This, however, would have taken a significant amount of extra work, and may have been beyond the power of the LogoChip. The design was instead altered to simply send raw timer values through the serial port to the computer, to be interpreted in software. Since the timer incremented itself with each cycle of the input guitar wave, sending the value after a fixed amount of time produced a timer value that was mostly consistent for each note. It also meant that as the frequency went up, the timer value went up, because more cycles of the guitar waveform occur in that sample period. Therefore, unlike a wavelength measurement, the timer value increases as frequency increases, allowing it to be more easily used in the computation of the MIDI note to play.

In order to accurately determine the MIDI note to play, some mathematical conversion between timer values and MIDI notes must be established. This would be best done by taking more accurate samples that are always consistent and determining what relation exists between the two values. Since the value returned is the number of cycles in a fixed amount of time, it is actually the frequency of the wave, as long as the total sample time is known. For example, if we sample for 1 second and get a timer value of 60, we have 60 cycles per second, or 60 Hz. Tables mapping frequency to MIDI note are readily available on the Internet. The values returned in this project, however, were not fixed to an exact sample time as they should have been, and were sent as they came. The values happened to differ by approximately 1 between adjacent notes on the guitar, which matches how MIDI notes are numbered, where note 61 (C#3) is one semitone above note 60 (C3). This would not hold over the whole frequency range and is not the correct way to solve the problem, but in this case it was used to produce results before the project deadline. Another problem with reading the frequency is the error that occurs in the measurement. This error must be accounted for by averaging the results or by choosing the pitch that occurs most often, which is what was done in the software used for this project. The 60 Hz hum of the guitar also had to be accounted for. This was easy enough to deal with, as it mostly occurred only when no strings were being played, so any reading around this frequency or below could be assumed to be no note at all (in fact, since the lowest note on this guitar was 120 Hz, anything below that could be rejected).

A sample of the (albeit simple and realistically incorrect) serial-data-to-MIDI-note playback source code is available at the source code link below. The directory contents include the C++ serial-to-MIDI program and the LogoChip software. GuitarToMidi.cpp in the SourceCode directory is the main program. It relies on other files not included here (they are part of another project and cannot be shared), but those are simply calls to a MIDI note playback system (in this case DirectMIDI, a wrapper for DirectX's DirectSound MIDI package). Simply replace the note playback calls with calls to another MIDI system and the code will work.
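
A minimal sketch of the corrected conversion described here is shown below. It is not the project's GuitarToMidi.cpp: it assumes the timer values have already been turned into frequency estimates, maps each estimate to the nearest MIDI note using the standard A4 = 440 Hz = note 69 relation, rejects readings at or below a hum threshold, and picks the note that occurs most often in a small window, as the project's software did. The 100 Hz threshold and the sample readings are invented for illustration.

// Sketch of the serial-data-to-MIDI-note conversion (not the project's code):
// map a frequency estimate to the nearest MIDI note, discard anything in the
// hum/sub-range region, and vote over a window of recent estimates to ride
// out measurement error.
#include <cmath>
#include <cstdio>
#include <map>
#include <vector>

int frequencyToMidiNote(double hz) {
    // Equal temperament: A4 = 440 Hz = MIDI note 69.
    return static_cast<int>(std::lround(69.0 + 12.0 * std::log2(hz / 440.0)));
}

// Returns -1 (no note) when the window is empty or dominated by hum/noise.
int pickNote(const std::vector<double>& recentHz, double minHz = 100.0) {
    std::map<int, int> votes;
    for (double hz : recentHz)
        if (hz > minHz)                       // discard 60 Hz hum and sub-range readings
            ++votes[frequencyToMidiNote(hz)];
    int best = -1, bestCount = 0;
    for (const auto& [note, count] : votes)
        if (count > bestCount) { best = note; bestCount = count; }
    return best;
}

int main() {
    // A noisy batch of readings around 220 Hz (MIDI note 57) plus some hum.
    std::vector<double> window = { 221.0, 219.5, 220.3, 60.0, 233.0, 220.8, 219.9, 60.1 };
    std::printf("picked MIDI note: %d\n", pickNote(window));
    return 0;
}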

SourceCode/GuitarToMidi.cpp is the C++ program that interprets serial timer data as MIDI notes.

SourceCode/LogoChip_Code.txt is the code that runs on the LogoChip to send serial timer data.

The final results of the project were limited and somewhat disappointing considering the time spent, but were nonetheless interesting. Since much of the time was spent on problems and design considerations regarding the pickups and the timing of the waveform, the conversion from serial data to MIDI notes received very little attention. Still, a MIDI note could be created for each note played, and most of them correctly matched the note on the guitar. Lower notes on the neck were not very accurate, whereas about halfway down the neck there was a stretch of nearly two octaves that produced the correct MIDI note for each guitar note in that stretch. Due to the inaccurate conversion of timer data to MIDI notes, as well as the inconsistent timer data, the output was not accurate enough for any real use in a MIDI system. To remedy the MIDI note conversion problem, the timer data must first be fixed, which appeared to be a matter of fixing the amount of time that the microsecond timer runs before the number of positive edge transitions (the timer value read in the LogoChip) is sent to the computer. Since the number provided by the timer is the count of positive edge transitions that occurred since it was last read (and reset), with microsecond accuracy, the number returned is the number of cycles of the input waveform over a fixed amount of sample time. This equates to frequency and seems very useful as a method of converting the guitar signal to MIDI notes. The sample time that works best would be determined by experimentation, and once fixed it should provide results accurate enough for a MIDI system. Implementing volume change should also be simple, based upon sampling the input voltage over time and averaging the data to create smooth transitions. Other controllers such as pitchbending are more difficult, but all seem possible with the LogoChip. Whether or not the LogoChip can handle all of these calculations on its own is still in question, but with an external computer or a separate PIC to handle the data, it all seems possible.

4. Improvements

Clearly this design can be improved in many ways, mostly by implementing the remaining features that were not in the current project. All of these features (MIDI notes, velocity, realtime volume control, and pitchbending) can be implemented on the LogoChip V2; however, the performance of the device may suffer with all the added computation. The MIDI specification implies a minimum of about 1 ms between MIDI commands. When chords (more than one note at a time) are played in MIDI, the notes are actually sent one after another in series, which means that a large chord has a slight delay between the first and last note played. With a guitar, the actual output is limited by the speed of the guitarist, and a delay of a few milliseconds may not be noticeable. This would have to be determined by experimentation, but since the output of a byte of data on the LogoChip cannot occur faster than this roughly 1 ms mark, the hardware imposes a limit beyond the computation time. If the computation is reduced significantly enough, fast performance should be possible; the PIC may, however, be too slow for such complexity without modifications. The ability to use the LogoChip V2 as a self-contained MIDI device still needs to be studied, but it appears the device may be capable, as long as the computation required is significantly reduced. This may mean using more than one PIC, or other hardware to reduce computation, such as a smoothed volume follower to track changes in the volume of the guitar string, a signal-to-gate conversion to determine when the string is sounding without any computation, and a precomputed table mapping timer values to MIDI notes for use as a lookup table.
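
The roughly 1 ms figure follows from the MIDI hardware rate itself: MIDI runs at 31250 baud with ten bits on the wire per byte (start, eight data, stop), so a three-byte Note On message takes just under 1 ms to transmit. The short calculation below (illustrative, not from the project) works out what that implies for a six-string chord.

// Worked numbers behind the ~1 ms figure and the chord-spread delay.
#include <cstdio>

int main() {
    const double baud = 31250.0;                  // MIDI serial rate
    const double byteMs = 10.0 / baud * 1000.0;   // ~0.32 ms per byte on the wire
    const double noteOnMs = 3 * byteMs;           // ~0.96 ms per 3-byte Note On
    const double chordMs = 6 * noteOnMs;          // ~5.8 ms to send a 6-string chord
    std::printf("byte: %.2f ms, note on: %.2f ms, 6-note chord: %.2f ms\n",
                byteMs, noteOnMs, chordMs);
    return 0;
}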

This guitar to MIDI project can also be extended to other musical instruments. As long as single notes can be picked up individually, the frequency can be determined, along with changes in amplitude, allowing MIDI interfacing for many musical instruments. One possibility is to generalize such a design so that it can be used on any pitched source. This would require some sort of adaptable or upgradable software that can determine the pitch of different sound sources (which may have different ranges, significantly different waveforms, and different playability concerns that make them difficult to analyze). Volume changes are common to all instruments and can be determined by analyzing the input volume of the waveform over time. Pitchbending would be implemented the same way for all instruments as well, as it is a function of the deviation from a fixed MIDI note. When designing such a project (extending the guitar project above to include the missing pieces, or creating a general instrument-to-MIDI conversion tool), several important elements must be considered. The most important consideration is how the signal from the instrument can be cleanly recorded. An idea such as an optical pickup seems an excellent choice, and may still be viable, but experimentation may show that it does not prove useful in practice. Magnetic pickups for stringed instruments work well, as they produce fairly clean signals, but microphones, for example, would not work well, as they are subject to noise and pick up sounds that are not from the instrument itself. Once a clean signal can be obtained, how it will be measured must also be considered. Sounds with significant harmonic content (signals more complex and varying than a sine wave) can be hard to determine the frequency of. The guitar, for example, produced significant harmonic content when the string was first struck, which made the pitch difficult to determine. This can be thought of as string-pluck "noise", and noise is largely random and can lack a defined pitch. For this reason, the guitar signal was sampled after a short fixed amount of time (say 30 milliseconds) had elapsed after the string was struck. Another consideration is dealing with instruments that produce more than one sound at once, such as chords, as is the case with guitars. In this case microphones definitely will not work, as they will pick up all the strings at once. Six individual pickups that are isolated from one another are required in this case, but such isolation may be difficult to obtain on other instruments, such as accordions, where the individual notes may not be easily separated from one another. One possible solution, which has been the subject of much research, is using frequency analysis to separate individual notes from a chord of similar sounds. Though the LogoChip may not be capable of such computation, faster PICs may be, and DSP processing can allow such computation with today's technology.

5. Conclusion

Though the project was never completed to the level initially anticipated, the results obtained may be useful for future designers of such a project. Clearly the optical pickup design turned out to be a failure, but it may still be salvageable with a more complex design that does not suffer from the problems encountered in this project. The use of a PIC such as the LogoChip, which runs interpreted code rather than assembly, as a tool to analyze audio signals was also tested to a limited degree, and produced varying results. Since the speed of the chip limits the highest frequency that can possibly be sampled, only instruments that are lower in the frequency spectrum, such as the guitar, may be viable candidates for such a project. The use of hardware timers with microsecond accuracy significantly increased the possibilities of this project, as it freed the chip from having to compute frequency based upon changes in amplitude. Nearly no code was needed to obtain what is essentially the frequency of the wave, which would allow the processor to focus on other tasks, such as sampling multiple strings of a guitar, analyzing volume changes, and converting this data to MIDI within the chip. By using external circuits, such as a peak follower to provide a smoothed volume-change output of the guitar signal that can be sampled to reduce processor time, or waveform smoothing to increase the accuracy of the wave timing, the performance of the chip could be greatly increased, which may make it powerful enough to analyze all six strings of a guitar and convert the data to MIDI entirely within the chip itself. By using a fixed frequency-to-MIDI-note table and sampling a smoothed volume-change output from the string, and considering the capabilities of the chip as determined through testing, six-string MIDI output of note and volume data seems possible with the LogoChip V2.

On a personal level, this project provided me with a lot of experience in the considerations needed when designing such a system, especially how good design ideas (optical guitar pickups) may not be so good in practice, and how many little snags can lead to significant design problems along the way. Since I produce music and use a lot of equipment with MIDI built in, I am interested in the design of circuits that can provide interactive control of MIDI parameters for use in music production. By using the LogoChip to work on a project related to audio analysis, I was given the opportunity to experience audio-specific PIC programming and to see what may or may not be possible to implement. The results of this design, though limited and not as initially planned, were hopeful and should provide the experience needed to pursue further designs.

6. Appendix

SourceCode/ Code listing. The C++ and LogoChip code are provided. The LogoChip code is designed for the LogoChip V2 chip and programming environment, and would not, to my knowledge, run on a V1 chip.


The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
Updated 2005-05-15
Alex Baumann

Summary: This document describes the design process of a guitar to MIDI interface using the LogoChip V2 chip and the LogoChip language. It includes ideas for how such an interface could be designed, as well as the details of an attempt at implementation, with results and observations. Also included are the details of an optical break-beam pickup design, and its successes and failures.
Developed by students of the Engaging Computing Group in the Department of Computer Science at the University of Massachusetts Lowell