ANALYSIS OF TEMPORAL & FREQUENCY STRUCTURE IN DIGITAL MUSIC
For a musician, sheet music contains a plethora of information required to understand a piece: note pitch, note length, when notes occur, tempo, and key signature, among others. However, digital music files store and interpret these features differently. When a digital music file is converted to sheet music (score transcription), the result is often notation that is neither intuitive nor legible to the average musician. To identify the standing tempo and tempo aberrations in digital music files, we apply numerical methods to note onset information. These methods are applied to MIDI files and are designed to help bridge the gap between the file creator, typically a recorded performer or composer, and the musician interpreting a score transcription. To extract temporal features, we analyze the inter-onset intervals (IOIs) of each detected impulse and compute their greatest common divisor (GCD). The GCD tells us the minimal pulses-per-quarter-note (PPQN) resolution required to express the MIDI timing information without any loss. With further refinement, automated score transcription can be used to preserve aural musical traditions, notate live performances, and better connect performers to their music.
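The following is a minimal Python sketch of the IOI/GCD computation described above, not the thesis's actual implementation; the onset list, tick values, and function name are illustrative assumptions, with onsets taken as integer MIDI tick times at a known source resolution.

    from math import gcd
    from functools import reduce

    def min_tick_step(onset_ticks):
        """GCD of all inter-onset intervals (IOIs): the coarsest tick grid
        on which every detected onset still lands exactly."""
        iois = [b - a for a, b in zip(onset_ticks, onset_ticks[1:]) if b != a]
        return reduce(gcd, iois) if iois else 0

    # Hypothetical onsets (in ticks) at a 480-PPQN source resolution:
    # a quarter note, two eighth notes, then another quarter note.
    source_ppqn = 480
    onsets = [0, 480, 720, 960, 1440]

    step = min_tick_step(onsets)                      # IOIs 480, 240, 240, 480 -> GCD 240
    # Minimal PPQN that still places every onset on an integer pulse;
    # the gcd(step, source_ppqn) guard keeps the result an integer even
    # when the IOI grid (e.g. triplets) does not divide the source PPQN.
    min_ppqn = source_ppqn // gcd(step, source_ppqn)  # 480 // 240 = 2
    print(step, min_ppqn)

In this toy case the onsets never subdivide the beat more finely than an eighth note, so two pulses per quarter note suffice to express the timing losslessly.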
Publisher: ProQuest
Language: English
Committee chair: Michael Robinson
Committee member(s): Stephen D. Casey; Joshua M. Lansky
Degree discipline: Mathematics and Statistics
Degree grantor: American University. College of Arts and Sciences
Degree level: Masters