Ph.D. Thesis

Computationally Efficient Methods for Polyphonic Music Transcription
External reviewers:
Anssi Klapuri (Queen Mary University, London, UK)
Andreas Rauber (Vienna University of Technology, Austria)
Committee members:
Xavier Serra (Universitat Pompeu Fabra, Barcelona, Spain)
Gérard Assayag (IRCAM, Paris, France)
Anssi Klapuri (Queen Mary University, London, UK)
José Oncina (Universidad de Alicante, Spain)
Isabel Barbancho (Universidad de Málaga, Spain)
To Teima
Acknowledgments
First and foremost, I would like to thank all members of the computer music lab
from the University of Alicante for providing an excellent, inspiring, and pleasant
working atmosphere. Especially, to the head of the group and supervisor of this work, Prof. José Manuel Iñesta. His encouraging scientific spirit provides an excellent framework for inspiring the new ideas that make us continuously grow and advance. I owe this work to his advice, support and help.
Carrying out a PhD is not an easy task without the help of so many people.
First, I would like to thank all the wonderful staff of our GRFIA group, and
in general, all the DLSI department from the University of Alicante. My
research periods at the Audio Research Group from the Tampere University of
Technology, the Music Technology Group from the Universitat Pompeu Fabra,
and the Department of Software Technology and Interactive Systems from the
Vienna University of Technology, also contributed decisively to make this work
possible. I have learned much, as a scientist and as a person, from the wonderful
and nice researchers of all these labs.
I would also like to thank the people who directly contributed to this work.
I am grateful to Dr. Francisco Moreno for delaying some of my teaching
responsibilities when this work was in progress, and for supplying the kNN
algorithms code. I learned most of the signal processing techniques needed for
music transcription from Prof. Anssi Klapuri. I'll always be very grateful for the great period in Tampere and his kind hosting. He directly contributed to this dissertation by providing the basis for the sinusoidal likeness measure code, and also the multiple f0 databases that allowed me to evaluate and improve the proposed algorithms. Thanks must also go to one of my undergraduate students, Jason Box, who collaborated in this work by building the ODB database and migrating the onset detection code from C++ into D2K.
I wish to express my gratitude to the referees of this dissertation, for kindly
accepting the review process, and to the committee members.
This work would not have been possible without the primary support
provided by the Spanish PROSEMUS project1 and the Consolider Ingenio
2010 MIPRCV research program2 . It has also been funded by the Spanish
CICYT projects TAR3 and TIRIG4 , and partially supported by European
Union-FEDER funds and the Generalitat Valenciana projects GV04B-541 and
GV06/166.
Beyond research, I would like to thank my family and my friends (too many to list here, you know who you are). Although they don't exactly know what I am working on and will never read a boring technical report in English, their permanent understanding and friendship have actively contributed to keeping my mind alive during this period.

1 Code TIN2006-14932-C02
2 Code CSD2007-00018
3 Code TIC2000-1703-CO3-02
4 Code TIC2003-08496-C04
Finally, this dissertation is dedicated to the most important person in my
life, Teima, for her love, support, care and patience during this period.
Antonio Pertusa Ibáñez
February, 2010
Contents

1 Introduction

2 Background
  2.1 Analysis of audio signals
    2.1.1 Fourier transform
    2.1.2 Time-frequency representations
    2.1.3 Filters in the frequency domain
  2.2 Analysis of musical signals
    2.2.1 Dynamics
    2.2.2 Timbre
    2.2.3 Taxonomy of musical instruments
    2.2.4 Pitched musical sounds
    2.2.5 Unpitched musical sounds
    2.2.6 Singing sounds
  2.3 Music background
    2.3.1 Tonal structure
    2.3.2 Rhythm
    2.3.3 Modern music notation
    2.3.4 Computer music notation
  2.4 Supervised learning
    2.4.1 Neural networks
    2.4.2 Nearest neighbors

3 Music transcription
  3.1 Human music transcription
  3.2 Multiple fundamental frequency estimation
    3.2.1 Harmonic overlap
    3.2.2 Beating
    3.2.3 Evaluation metrics
  3.3 Onset detection
    3.3.1 Evaluation metrics
4 State of the art
5 Onset detection
  5.1 Methodology
    5.1.1 Preprocessing
    5.1.2 Onset detection functions
    5.1.3 Peak detection and thresholding
  5.2 Evaluation with the ODB database
    5.2.1 Results using o[t]
    5.2.2 Results using õ[t]
  5.3 MIREX evaluation
    5.3.1 Methods submitted to MIREX 2009
    5.3.2 MIREX 2009 onset detection results
  5.4 Conclusions
6 Multiple pitch estimation using supervised learning methods

7 Multiple f0 estimation using signal processing methods

8 Conclusions and future work

Bibliography
Introduction
quantization, key detection or pitch spelling. This stage is more related to the
generation of human readable notation.
In general, a music transcription system can not obtain the exact score
that the musician originally read. Musical audio signals are often expressive
performances, rather than simple mechanical translations of notes read on a
sheet. A particular score can be performed by a musician in different ways.
As scores are only guides for the performers to play musical pieces, converting
the notes present in an audio signal into staff notation is an ill-posed problem
without a unique solution.
However, the conversion of a musical audio signal into a piano-roll representation without rhythmic information only depends on the waveform. Rather
than a score-oriented representation, a piano-roll can be seen as a sound-oriented representation which displays all the notes that are playing at each
time. The conversion from an audio file into a piano-roll representation is done
by a multiple fundamental frequency (f0 ) estimation method. This is the main
module of a music transcription system, as it estimates the number of notes
sounding at each time and their pitches.
For converting a piano-roll into a readable score, other harmonic and
rhythmic components must also be taken into account. The tonality is related
with the musical harmony, showing hierarchical pitch relationships based on a
key tonic. Source separation and timbre classification can be used to identify the
different instruments present in the signal, allowing the extraction of individual
scores for each instrument. The metrical structure refers to the hierarchical
temporal structure. It specifies how many beats are in each measure and what
note value constitutes one beat, so bars can be added to the score to make it
readable by a musician. The tempo specifies how fast or slow a musical piece is.
A music transcription example is shown in Fig. 1.1. The audio performance of the score at the top of the figure was synthesized for simplification, and it contained neither temporal deviations nor pedal sustains. The piano-roll inference was done without errors. The key, tempo, and meter estimates can be inferred from the waveform or from the symbolic piano-roll representation. In this example, these estimates were also correct (except for the anacrusis1 at the beginning, which causes the shift of all the bars). However, it can be seen that the resulting score differs from the original one.
When a musician performs a score, the problem is even more challenging, as there are frequent temporal deviations, and the onsets and durations of the notes must be adjusted (quantized) to obtain a readable score. Note that quantizing temporal deviations implies that the synthesized waveform of the resulting score would not exactly match the original audio times. This is the reason why the piano-roll is considered a sound-oriented representation.

1 The term anacrusis refers to the note or sequence of notes which precede the beginning of the first bar.

Figure 1.1: Music transcription example from Chopin (Nocturne, Op. 9, N. 2): the original score, the performer interpretation, the multiple f0 estimation, and the resulting transcribed score.
This dissertation is mainly focused on the multiple f0 estimation issue, which
is crucial for music transcription. This is an extremely challenging task which
has been addressed in several doctoral theses, such as Moorer (1975), Maher
(1989), Marolt (2002), Hainsworth (2003), Cemgil (2004), Bello (2004), Vincent
(2004), Klapuri (2004), Zhou (2006), Yeh (2008), Ryynänen (2008), and Emiya
(2008).
Most multiple f0 estimation methods are complex and have high computational costs. As discussed in chapter 3, the estimation of multiple simultaneous
pitches is a challenging task due to a number of theoretical issues.
The main contributions of this work are a set of novel efficient methods
proposed for multiple fundamental frequency estimation (chapters 6 and 7). The
proposed algorithms have been evaluated and compared with other approaches,
yielding satisfactory results.
The detection of the beginnings of musical events on audio signals, or onset
detection, is also addressed in this work. Onset times can be used for beat
tracking, for tempo estimation, and to refine the detection in a multiple f0
estimation system. A simple and efficient novel methodology for onset detection
is described in chapter 5.
The proposed methods have also been applied to other MIR tasks, like genre
classification, mood classification, and artist identification. The main idea was
to combine audio features with symbolic features extracted from transcribed
audio files, and then use a machine learning classification scheme to yield the
genre, mood or artist. These combined approaches have been published in (Lidy et al., 2007, 2008, 2009) and they are beyond the scope of this PhD, which is
mainly focused on music transcription itself.
This work is organized as follows. The introductory chapters 2, 3, and 4
describe respectively the theoretical background, the multiple f0 problem, and
the state of the art for automatic music transcription. Then, novel contributions
are proposed for onset detection (chapter 5) and multiple fundamental frequency estimation (chapters 6 and 7), followed by the overall conclusions and future work (chapter 8).
Outline
2 - Background. This chapter introduces the theoretical background, defining
the signal processing, music theory, and machine learning concepts that
will be used in the scope of this work.
3 - Music transcription. The multiple f0 estimation problem and the related theoretical issues are described in this chapter, followed by an
introduction to the onset detection problem.
4 - State of the art. This chapter presents an overview of the previous
approaches for single f0 estimation, multiple f0 estimation, and onset
detection. The review is mainly focused on the multiple f0 estimation
methods, proposing a novel categorization of the existing approaches.
5 - Onset detection using a harmonic filter bank.
A novel onset detection method based on the properties of harmonic musical sounds is
presented, evaluated and compared with other works.
6 - Multiple pitch estimation using supervised learning methods.
Novel supervised learning methods are proposed in a simplified scenario,
considering synthesized instruments with constant temporal envelopes.
For this task, neural networks and nearest neighbors methods have been
evaluated and compared.
7 - Multiple f0 estimation using signal processing methods. Efficient
iterative cancellation and joint estimation methods to transcribe real
music are proposed in this chapter. These methods have been evaluated
and compared with other works.
8 - Conclusions and future work. The conclusions and future work are
discussed in this chapter.
Background
This chapter describes the signal processing, music theory, and machine learning
concepts needed to understand the basis of this work.
Different techniques for the analysis of audio signals based on the Fourier
transform are first introduced. The properties of musical sounds are presented,
classifying instruments according to their method of sound production and
to their spectral characteristics. Music theory concepts are also addressed,
describing the harmonic and temporal structures of Western music, and how it can be represented using written and computer notations. Finally, the machine
learning techniques used in this work (neural networks and nearest neighbors)
are also described.
2.1 Analysis of audio signals

2.1.1 Fourier transform
The information of the discrete waveform can be used directly, for example, to
detect periodicities in monophonic2 sources, by searching for a repetitive pattern
in the signal. The waveform also provides information about the temporal
envelope, that can be used for some tasks such as beat detection. However, the
time domain data is not practical for some approaches that require a different
kind of information.
A waveform can be analyzed using the Fourier transform (FT) to map it
into the frequency domain. The FT performs the decomposition of a function
in a sum of sinusoids of different frequencies, showing the signal within each
given frequency band over a range of frequencies. It is a widely used technique
for frequency analysis tasks.
The standard Fourier transform (see Eq. 2.1) is well defined for continuous pure sine waves with infinite length:

FT_x(f) = X(f) = \int_{-\infty}^{+\infty} x(t)\, e^{-j 2\pi f t}\, dt    (2.1)

For a discrete signal x(n) of infinite length, the transform can be written as:

X(k) = \sum_{n=-\infty}^{+\infty} x(n)\, e^{-j 2\pi k n}    (2.2)

In the real world, signals have finite length. The discrete Fourier transform (DFT) X[k] is defined in Eq. 2.3 for a discrete finite signal x[n]:

DFT_x[k] = X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j \frac{2\pi}{N} k n}, \quad k = 0, \dots, N-1    (2.3)
Figure 2.1: Complex plane diagram. The magnitude |z| = \sqrt{a^2 + b^2} and the phase \angle(z) = \arctan(b/a) of the complex number z = a + jb are shown.
The Shannon theorem limits the number of useful frequencies of the discrete
Fourier transform to the Nyquist frequency (fs /2). The frequency of each
spectral bin k can be easily computed as fk = k(fs /N ) since the N bins are
equally distributed in the frequency domain of the transformed space. Therefore,
the frequency resolution of the DFT is \Delta f = f_s / N.
The equations above are described in terms of complex exponentials. The
Fourier transform can also be expressed as a combination of sine and cosine
functions, equivalent to the complex representation by Euler's formula.
If the number of samples N is a power of two, then the DFT can be efficiently
computed using a fast Fourier transform (FFT) algorithm. Usually, software
packages that compute the FFT, like FFTW3 from Frigo and Johnson (2005),
use Eq. 2.3, yielding an array of complex numbers.
Using complex exponentials, the radial position or magnitude |z|, and the
angular position or phase \angle(z) can easily be obtained from the complex value
z = a + jb (see Fig. 2.1).
The energy spectral density (ESD) is the squared magnitude of the DFT of
a signal x[n]. It is often called simply the spectrum of a signal. A spectrum
can be represented as a two-dimensional diagram showing the energy of a signal
|X[k]|2 as a function of frequency (see Fig. 2.2). In the scope of this work, it will
be referred to as the power spectrum (PS), whereas the term magnitude spectrum (MS) will be used to represent the DFT magnitudes |X[k]| as a function of frequency.
Spectra are usually plotted with linear amplitude and linear frequency scales,
but they can also be represented using a logarithmic scale for amplitude,
frequency or both. A logarithmic magnitude widely used to represent the
amplitudes is the decibel.
dB(|X[k]|) = 20 log(|X[k]|) = 10 log(|X[k]|2 )
3 Fastest
2.4
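As an illustration only (not code from this thesis), the following Python sketch computes the magnitude spectrum, power spectrum and decibel values of a single frame with an FFT, following Eqs. 2.3 and 2.4. The sampling rate, frame length and test signal are arbitrary example values.

import numpy as np

fs = 44100                      # sampling rate (Hz), assumed example value
N = 4096                        # frame length (power of two, so an FFT can be used)
n = np.arange(N)
x = 0.5 * np.sin(2 * np.pi * 440.0 * n / fs)   # example frame: a 440 Hz sinusoid

X = np.fft.fft(x)               # DFT of Eq. 2.3, computed with an FFT algorithm
k = np.arange(N // 2)           # useful bins up to the Nyquist frequency fs/2
f_k = k * fs / N                # bin frequencies, f_k = k (fs / N)

ms = np.abs(X[:N // 2])         # magnitude spectrum |X[k]|
ps = ms ** 2                    # power spectrum |X[k]|^2
db = 20 * np.log10(ms + 1e-12)  # Eq. 2.4, with a small offset to avoid log(0)

print(f_k[np.argmax(ms)])       # prints the bin frequency closest to 440 Hz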
Figure 2.2: Example waveform x[n] and its magnitude spectrum |X[k]| obtained with the DFT.
2.1.2 Time-frequency representations
STFT_x^w[k, m] = \sum_{n=0}^{N-1} x[n]\, w[n - mI]\, e^{-j \frac{2\pi}{N} k n}, \quad k = 0, \dots, N-1    (2.5)
where w is the window function, m is the window position index, and I is the
hop size.
Figure 2.3: Magnitude spectrogram for the beginning section of a piano note.
Only the first 60 spectral bins and the first 5 frames are shown. Spectrum at
each frame is projected into a plane.
The hop size of the STFT determines how much the analysis starting time
advances from frame to frame. Like the frame length (window size), the choice
of the hop size depends on the purposes of the analysis. In general, a small hop
produces more analysis points and therefore, smoother results across time, but
the computational cost is proportionately increased.
Choosing a short frame duration in the STFT leads to a good time resolution
and a bad frequency resolution, and a long frame duration results in a good
frequency resolution but a bad time resolution. Time and frequency resolutions
are conjugate magnitudes, which means that f 1/t, therefore they can
not simultaneously have an arbitrary precision. The decision about the length
of the frames in the STFT to get an appropriate balance between temporal and
frequency resolution depends on the application.
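To make the frame/hop trade-off concrete, the sketch below is a direct, unoptimized Python implementation of Eq. 2.5; the Hann window, frame size and hop size are assumed example values, not the parameters used in this work.

import numpy as np

def stft(x, N=2048, I=512):
    """Magnitude spectrogram of x using frame size N and hop size I (Eq. 2.5)."""
    w = np.hanning(N)                        # window function w[n]
    n_frames = 1 + (len(x) - N) // I         # number of complete frames
    S = np.empty((n_frames, N // 2), dtype=float)
    for m in range(n_frames):                # window position index m
        frame = x[m * I: m * I + N] * w      # x[n] w[n - mI]
        S[m] = np.abs(np.fft.fft(frame)[:N // 2])
    return S

fs = 44100
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 440.0 * t)            # example signal
print(stft(x).shape)                         # (frames, bins)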
Spectrograms are three-dimensional diagrams showing the squared magnitude of the STFT evolving in time. Usually, spectrograms are projected into a
two-dimensional space (see the lower plane in Fig. 2.3), using colors or grey levels
to represent the magnitudes. In the scope of this work, the term magnitude
spectrogram will be used to describe a magnitude spectrum as it changes
over time.
The filtering of a signal x[n] with a filter g[n] can be expressed as the convolution:

y[n] = \sum_{k=-\infty}^{+\infty} x[k]\, g[n-k]    (2.6)
Figure 2.5: Time-frequency resolution grids without overlap for STFT, DWT,
constant Q transform from Brown (1991), and a filter bank with 6 bands.
Constant Q transform
Using the Fourier transform, all the spectral bins obtained are equally spaced
by a constant ratio \Delta f = f_s / N. However, the frequencies of the musical notes
(see section 2.3) are geometrically spaced in a logarithmic scale4 .
The constant Q transform is a calculation similar to the Fourier transform,
but with a constant ratio of frequency to resolution Q. This means that each
spectral component k is separated by a variable frequency resolution \Delta f_k = f_k / Q.
Brown (1991) proposed a constant Q transform in which the center
frequencies f_k can be specified as f_k = 2^{k/b} f_{min}, where b is the number of
filters per octave and fmin is the minimum central frequency considered. The
transform using Q = 34 is similar (although not equivalent) to a 1/24 octave
filter bank. The constant Q transform for the k-th spectral component is:
X_Q[k] = \frac{1}{N[k]} \sum_{n=0}^{N[k]-1} w[k, n]\, x[n]\, e^{-j \frac{2\pi}{N[k]} Q n}    (2.7)
Figure 2.6: Example of a filter bank with triangular shaped bands arranged in
a logarithmic frequency scale.
where N [k] is the window size (in samples) used to compute the transform of
the frequency k:
N[k] = f_s / \Delta f_k = (f_s / f_k)\, Q    (2.8)
The window function w[k, n] used to minimize spectral leakage has the same
shape but a different length for each component. An efficient implementation
of the constant Q transform was described by Brown and Puckette (1992).
The main drawback with this method is that it does not take advantage of
the greater time resolution that can be obtained using shorter windows at high
frequencies, losing coverage in the time-frequency plane (see Fig. 2.5(c)).
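As a hedged illustration (not the implementation by Brown and Puckette), the sketch below computes the geometrically spaced center frequencies f_k = 2^{k/b} f_min and the per-bin window lengths N[k] of Eq. 2.8. The values of b, f_min and the number of bins are assumed example parameters; only Q = 34 is taken from the text.

import numpy as np

fs = 44100
b = 24                                    # filters per octave (assumed)
Q = 34                                    # constant quality factor, as in the text
f_min = 65.4                              # minimum center frequency (C2), assumed
n_bins = 7 * b                            # seven octaves, assumed

k = np.arange(n_bins)
f_k = f_min * 2.0 ** (k / b)              # geometrically spaced center frequencies
N_k = np.round(fs * Q / f_k).astype(int)  # window length per bin, Eq. 2.8

print(f_k[:3], N_k[:3])                   # low bins use long windows, high bins short ones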
2.1.3 Filters in the frequency domain
The energy b_i of each band i can be computed by weighting the power spectrum with the frequency response H_i[k] of the band:

b_i = \sum_{k=0}^{K-1} H_i[k]\, |X[k]|^2    (2.9)

The Mel scale relates physical frequency to perceived pitch height:

Mel(f) = 2595 \log_{10}\left(\frac{f}{700} + 1\right)    (2.10)

The discrete cosine transform (DCT) is defined as:

DCT_x[i] = \sum_{n=0}^{N-1} x[n] \cos\left[\frac{\pi}{N}\, i \left(n + \frac{1}{2}\right)\right]    (2.11)
Figure 2.7: The Mel scale as a function of frequency in Hertz.
The MFCC are the obtained DCT amplitudes. In most applications, the
dimensionality of the MFCC representation is usually reduced by selecting only
certain coefficients.
The bandwidth of a filter can be expressed using an equivalent rectangular
bandwidth (ERB) measure. The ERB of a filter is defined as the bandwidth of
a perfectly rectangular filter with a unity magnitude response and same area as
that filter. According to Moore (1995), the ERB bandwidths bc of the auditory
filter at the channel c obey this equation:
b_c = 0.108 f_c + 24.7 Hz    (2.12)
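The following short sketch simply evaluates Eq. 2.10 and Eq. 2.12 at a few example center frequencies (chosen arbitrarily), to give a feel for the Mel mapping and the ERB bandwidths.

import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(f / 700.0 + 1.0)      # Eq. 2.10

def erb_bandwidth(fc):
    return 0.108 * fc + 24.7                       # Eq. 2.12, in Hz

for fc in (100.0, 440.0, 1000.0, 4000.0):
    print(fc, hz_to_mel(fc), erb_bandwidth(fc))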
2.2 Analysis of musical signals

Figure 2.8: RMS temporal envelope of a sound, showing the attack, sustain, and release stages.

2.2.1 Dynamics

The overall intensity of a frame can be measured using the root mean square (RMS) of its samples:

RMS = \sqrt{\frac{1}{N} \sum_{n=0}^{N-1} x^2[n]}    (2.13)
where N is the size of the frame. Real sounds have a temporal envelope with attack and release stages (like percussion or plucked strings), or attack, sustain
and decay segments (like woodwind instruments)5 . The automatic estimation of
the intra-note segment boundaries is an open problem, and it has been addressed
by some authors like Jensen (1999), Peeters (2004), and Maestre and Gomez
(2005).
5 Synthesizers generate amplitude envelopes using attack, decay, sustain and release
(ADSR), but this segmentation is not achievable in real signals, since the decay part is often
not clearly present, and some instruments do not have defined sustain or release parts.
The attack of a sound is formally defined as the initial interval during which
the amplitude envelope increases. For real sounds, Peeters (2004) considers
attack as the initial interval between the 20% and the 80% of the maximum
value in the signal, to take into account the possible presence of noise.
Transients are fast varying features characterized by sudden bursts of noise,
or fast changes of the local spectral content. During a transient, the signal
evolves in a relatively unpredictable way. A transient period is usually present
during the initial stage of the sound, and it often corresponds to the period
during which the instrument excitation is applied, though in some sounds a
transient can also be present in the release stage.
A vibrato is a periodic oscillation of the fundamental frequency, whereas
tremolo refers to a periodic oscillation in the signal amplitude. In both cases,
this oscillation is of subsonic frequency.
2.2.2 Timbre

A common timbre descriptor is the spectral flux, which measures the frame-to-frame variation of the spectral content:

SF_t = \sum_{k=0}^{K-1} \left(\bar{X}_t[k] - \bar{X}_{t-1}[k]\right)^2    (2.14)

where \bar{X}_t[k] and \bar{X}_{t-1}[k] are the energy-normalized Fourier spectra in the current and previous frames, respectively:

\bar{X}[k] = \frac{|X[k]|}{\sum_{k=0}^{K-1} |X[k]|}    (2.15)
The spectral centroid (SC) indicates the position of the sound spectral center
of mass, and it is related to the perceptual brightness of the sound. It is
calculated as the weighted mean of the frequencies present in the signal, and
the weights are their magnitudes.
SC_X = \sum_{k=0}^{K-1} k\, \bar{X}[k]    (2.16)
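As an illustration (not the reference implementation of these descriptors), the sketch below computes the spectral flux of Eqs. 2.14-2.15 and the spectral centroid of Eq. 2.16 from two consecutive magnitude spectra; the spectra here are random example data.

import numpy as np

def normalize(X):
    return X / np.sum(X)                      # Eq. 2.15: energy-normalized spectrum

def spectral_flux(X_prev, X_curr):
    return np.sum((normalize(X_curr) - normalize(X_prev)) ** 2)   # Eq. 2.14

def spectral_centroid(X):
    k = np.arange(len(X))
    return np.sum(k * normalize(X))           # Eq. 2.16, expressed in bins

X_prev = np.abs(np.random.randn(1024))        # arbitrary example spectra
X_curr = np.abs(np.random.randn(1024))
print(spectral_flux(X_prev, X_curr), spectral_centroid(X_curr))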
2.2.3 Taxonomy of musical instruments

Figure 2.9: Example waveform and spectrum of a violin excerpt (file I151VNNOM from Goto (2003), RWC database).

Instruments are classified in the families above depending on their exciter, the vibrating element that transforms the energy supplied by the player into sound. However, a complementary taxonomy can be assumed, dividing musical sounds into two main categories: pitched and unpitched sounds.
2.2.4 Pitched musical sounds

Figure 2.10: Example waveform and spectrogram of a piano note (file I011PFNOM from Goto (2003), RWC database).
whereas a thinner string under higher tension (such as a treble string in a piano)
or a more flexible string (such as a nylon string used on a guitar or harp) exhibits
less inharmonicity.
According to Fletcher and Rossing (1988), the harmonic frequencies in a
piano string approximately obey this formula:
f_h = h f_0 \sqrt{1 + B h^2}    (2.17)
A typical value of the inharmonicity factor for the middle pitch range of a
piano is B = 0.0004, which is sufficient to shift the 17th partial to the ideal
frequency of the 18th partial.
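This claim can be checked numerically with Eq. 2.17. In the sketch below, the f0 value is an arbitrary example; only B = 0.0004 is taken from the text.

f0 = 261.63                                      # middle C, example value
B = 0.0004                                       # inharmonicity factor from the text

def partial(h, f0=f0, B=B):
    return h * f0 * (1.0 + B * h * h) ** 0.5     # Eq. 2.17

print(partial(17))        # ~4698 Hz
print(18 * f0)            # ideal 18th harmonic, ~4709 Hz (very close)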
In some cases, there are short unpitched excerpts in pitched sounds, mainly
in the initial part of the signal. For instance, during the attack stage of wind
instruments, the initial breath noise is present before the pitch is perceived.
Inharmonic sounds are also produced by the clicking of the keys of a clarinet,
the scratching of the bow of violin, or the sound of the hammer of a piano
hitting the string, for instance.
Additive synthesis, which was first extensively described by Moorer (1977), is the basis of the original harmonic spectrum model, which approximates a harmonic signal by a sum of sinusoids. A harmonic sound can be expressed as a sum of H sinusoids plus an error (residual) model z[n]:

x[n] = \sum_{h=1}^{H} A_h \cos(h \omega_0 n + \phi_h) + z[n]    (2.18)
subtracted from the original sound and the remaining residual is represented as
a time varying filtered white noise component.
Recent parametric models, like the ones proposed by Verma and Meng (2000)
and Masri and Bateman (1996), extend the SMS model to consider transients.
When sharp onsets occur, the frames prior to an attack transient are similar,
and also the frames following its onset, but the central frame spanning both
regions is an average of both spectra that can be difficult to be analyzed.
Without considering noise or transients, in a very basic form, a harmonic
sound can be described with the relative amplitudes of its harmonics and their
evolution over time. This is also known as the harmonic pattern (or spectral
pattern). Considering only the spectral magnitude of the harmonics at a given
time frame, a spectral pattern p can be defined as a vector containing the
magnitude ph of each harmonic h:
p = \{p_1, p_2, \dots, p_h, \dots, p_H\}    (2.19)
In most musical sounds, the first harmonics contain most of the energy of
the signal, and sometimes their spectral envelope can be approximated using a
smooth curve.
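As a minimal sketch of this idea (assuming a known f0, ideal harmonic positions and a simple nearest-bin lookup, which is not how the methods in this thesis locate partials), a spectral pattern can be extracted from a magnitude spectrum as follows.

import numpy as np

def harmonic_pattern(ms, f0, fs, H=10):
    """ms: magnitude spectrum of one frame (first N/2 bins)."""
    N = 2 * len(ms)                            # original DFT size
    p = np.zeros(H)
    for h in range(1, H + 1):
        k = int(round(h * f0 * N / fs))        # nearest bin to the h-th harmonic
        if k < len(ms):
            p[h - 1] = ms[k]                   # magnitude p_h of harmonic h
    return p

# Example: a synthetic frame with decaying harmonic amplitudes at f0 = 220 Hz.
fs, N, f0 = 44100, 8192, 220.0
n = np.arange(N)
x = sum((1.0 / h) * np.sin(2 * np.pi * h * f0 * n / fs) for h in range(1, 11))
ms = np.abs(np.fft.fft(x * np.hanning(N))[:N // 2])
print(harmonic_pattern(ms, f0, fs))            # roughly decreasing magnitudes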
Inharmonic sounds
The pitched inharmonic sounds have a period in the time domain and a pitch,
but their overtone partials are not approximately integer multiples of the f0 .
Usually, a vibrating bar is the sound source of these instruments, belonging to
the idiophones family. The most common are the marimba, vibraphone (see
Fig. 2.11), xylophone and glockenspiel.
As the analysis of inharmonic pitched sounds is complex and these instruments are less commonly used, most f0 estimation systems that analyze the
signal in the frequency domain do not handle them appropriately.
2.2.5 Unpitched musical sounds
Unpitched musical sounds are those that do not produce a clear pitch sensation.
They belong to two main families of the Hornbostel and Sachs (1914) taxonomy:
membranophones and idiophones. All the membranophones family, many
idiophones and some electrophones produce unpitched sounds.
Figure 2.11: Example waveform and spectrogram of a vibraphone (file I041VIHNM from Goto (2003), RWC database).
Most of these sounds are characterized by a sharp attack stage, that usually
shows a broad frequency dispersion (see Fig. 2.12). Interestingly, Fitzgerald and
Paulus (2006) comment that although synthetic7 drum sounds tend to mimic
real drums, their spectral characteristics differ considerably from those in real
drums.
Spectral centroid, bandwidth of the spectrum and spectral kurtosis are
features commonly used in unpitched sound classification.
The transcription of unpitched instruments refers to the identification
of the timbre class and its onset and offset times, as no pitch is present. This
task will not be addressed in the scope of this thesis, which is mainly focused on
the transcription of pitched sounds. For a review of this topic, see (FitzGerald,
2004) and (Fitzgerald and Paulus, 2006).
2.2.6
Singing sounds
According to Deutsch (1998) p.172, singing sounds are produced by the human
vocal organ, which consists of three basic components: the respiratory system,
the vocal folds and the vocal tract. The respiratory system provides an excess
pressure of air in the lungs. The vocal folds chop the airstream from the
lungs into a sequence of quasi-periodic air pulses, producing a sound with a
fundamental frequency. Finally, the vocal tract modifies the spectral shape and
determines the timbre of the voice. Fig. 2.13 shows a voice spectrogram
example.
The term phonation frequency refers to the vibration frequency of the vocal
folds and, during singing sounds, this is the fundamental frequency of the
generated tone (Ryynänen, 2006). In a simplified scenario, the amplitudes of
the overtone partials can be expected to decrease by about 12 dB per octave
(Sundberg, 1987). The phonation frequencies range from around 100 Hz for male singers to over 1 kHz for female singers.
The vocal tract functions as a resonating filter which emphasizes certain
frequencies called the formant frequencies. The two lowest formants contribute
to the identification of the vowel, and the higher formants to the personal voice
timbre.
Figure 2.13: Example waveform and spectrogram of a singing male voice, vowel
A (file I-471TNA1M from Goto (2003), RWC database).
piece follows basic melodic, harmonic and rhythmic rules to be pleasing to most
listeners.
In this section, some terms related to the music structure in time and
frequency are described, followed by a brief explanation for understanding a
musical score and its symbols. Different score formats and representations
commonly used in computer music are also introduced.
2.3.1 Tonal structure
Harmony is a term which denotes the formation and relationships of simultaneous notes, called chords, and over time, chordal progressions. A melody is
a sequence of pitched sounds with musically meaningful pitches and a metrical
structure. Therefore, the term melody refers to a sequence of pitches, whereas
harmony refers to the combination of simultaneous pitches.
A musical interval can be defined as a ratio between two pitches. The term
harmonic interval refers to the pitch relationship between simultaneous notes,
whereas melodic interval refers to the pitch interval of two consecutive notes. In
Western tonal music, intervals that cause two notes to share harmonic positions
in the spectrum, or consonant intervals, are more frequent than those without
harmonic relationships, or dissonant intervals.
Figure 2.14: One octave of a piano keyboard with the note names labeled.
Musical temperaments
In terms of frequency, musical intervals are relations described by the ratio
between the respective frequencies of the involved notes. The octave is the
simplest interval in music, after the unison8 . Two notes separated by one octave
have a frequency ratio of 2:1. The human ear tends to hear two notes an octave
apart as being essentially the same. This is the reason why, in most musical
cultures (like Western, Arabic, Chinese, and Indian music), the wide range of
pitches is arranged across octaves in a logarithmic frequency scale.
Music is based on the octave relationship, but there exist different ways for
arranging a number of musical notes within an octave and assigning them a given
frequency. In Western music, the most common tuning system is the twelve tone
equal temperament, which divides each octave into 12 logarithmically equal
parts, or semitones.
In this musical temperament, each semitone is equal to one twelfth of an
octave. Therefore, every pair of adjacent notes has an identical frequency ratio
of 1:21/12 , or 100 cents. One tone is defined as a two semitones interval. Equal
temperament is usually tuned relative to a standard frequency for pitch A of
440 Hz9 .
Western musical pitches
A musical note can be identified using a letter (see Fig. 2.14), and an octave
number. For instance, C3 refers to the note C from the third octave. Notes
separated by an octave are given the same note name. The twelve notes in each
octave are called pitch classes. For example, the note C3 belongs to the same
pitch class as C4.
Figure 2.15: Musical major keys (uppercase), and minor keys (lowercase).
The number of alterations and the staff representation are shown. Fig. from
http://en.wikipedia.org/wiki/File:Circle_of_fifths_deluxe_4.svg.
There are 12 pitch classes, but only 7 note names (C,D,E,F,G,A,B). Each
note name is separated by one tone except F from E, and C from B, which have
a one semitone interval. This is because modern music theory is based on the
diatonic scale.
Figure 2.16: Harmonics and intervals. The first nine harmonics of middle
C. Their frequencies and nearest pitches are indicated, as well as the Western
tonal-harmonic music intervals. Fig. from Krumhansl (2004).
A musical excerpt can be arranged in a major or a minor key (see Fig. 2.15).
Major and minor keys which share the same signature are called relative.
Therefore, C major is the relative major of A minor, whereas C minor is
the relative minor of E♭ major. The key is established by particular chord
progressions.
Consonant and dissonant intervals
As introduced above, harmonic and melodic intervals can be divided into
consonant and dissonant. Consonant intervals are those that cause harmonic
overlapping to some degree. Harmonic interference does not require an exact frequency overlap, only an approximate one11.
The perceptual dimension of consonance and dissonance is related to ratios
of frequencies. The ordering along the dimension of consonance corresponds
quite closely to the size of the integers in the ratios (Vos and Vianen, 1984).
The unison (1:1) and octave (2:1) are the most consonant intervals, followed
by the perfect intervals. Perfect intervals12 are the perfect fifth (3:2) and the
perfect fourth (4:3). The major third (5:4), minor third (6:5), major sixth (5:3), and minor sixth (8:5) are next most consonant. The least consonant intervals in Western harmony are the minor second (16:15), the major seventh (15:8) and the tritone (45:32).

11 Other temperaments, like the meantone temperament, make the intervals closer to their ideal just ratios.
12 In the equal temperament, besides the unison and the octave, the interval ratios described are approximate.
In music, consonant intervals are more frequent than dissonant intervals.
According to Kosuke et al. (2003), trained musicians find it more difficult to
identify pitches of dissonant intervals than those of consonant intervals.
It is hard to separate melody from harmony in practice (Krumhansl, 2004),
but harmonic and melodic intervals are not equivalent. For example, two notes
separated by one octave play the same harmonic rule, although they are not
interchangeable in a melodic line.
The most elemental chord in harmony is the triad, which is a three note
chord with a root, a third degree (major or minor third above the root), and a
fifth degree (major or minor third above the third).
2.3.2 Rhythm
Figure 2.17: Diagram of relationships between metrical levels and timing. Fig.
from Hainsworth (2003).
to the preferred human foot tapping rate (Klapuri et al., 2006), or to the dance
movements when listening to a musical piece.
A measure constitutes a temporal pattern and it is composed by a number
of beats. In Western music, rhythms are usually arranged with respect to a
time signature. The time signature (also known as meter signature) specifies
how many beats are in each measure and what note value constitutes one beat.
One beat usually corresponds to the duration of a quarter note15 (or crochet) or
an eighth note (or quaver) in musical notation. A measure is usually 2, 3, or 4
beats long (duple, triple, or quadruple), and each beat is normally divided into
2 or 3 basic subdivisions (simple, or compound). Bar division is closely related
to harmonic progressions.
Unfortunately, the perceived beat does not always correspond with the one
written in a time signature. According to Hainsworth (2003), in fast jazz music,
the beat is often felt as a half note (or minim), i.e., double of its written rate,
whereas hymns are often notated with the beat given in minims, the double of
the perceived rate.
The tatum16 , first defined by Bilmes (1993), is the lowest level of the metric
musical hierarchy. It is a high frequency pulse that we keep in mind when
perceiving or performing music. An intuitive definition of the tatum proposed
by Klapuri (2003b) describes it as the shortest durational value in music that is
still more than incidentally encountered, i.e., the shortest commonly occurring
time interval. It frequently corresponds to a binary, ternary, or quaternary
subdivision of the musical beat. The duration values of the other notes, with
few exceptions, are integer multiples of the tatum. The tatum is not written
in a modern musical score, but it is a perceptual component of the metrical
structure.
2.3.3 Modern music notation

American name      British name
Whole              Semibreve
Half               Minim
Quarter            Crotchet
Eighth             Quaver
Sixteenth          Semiquaver
Thirty-second      Demisemiquaver
Sixty-fourth       Hemidemisemiquaver

Table 2.1: Symbols used to represent the most frequent note and rest durations.
or decrease the pitch by one semitone, respectively. Notes with a pitch outside
of the range of the five line staff can be represented using ledger lines, which
provide a single note with additional lines and spaces.
Duration is shown with different note figures (see Table 2.1), and additional symbols such as dots and ties. Notation is read from left to right.
A staff begins with a clef, which indicates the pitch of the written notes.
Following the clef, the key signature indicates the key by specifying certain
notes to be flat or sharp throughout the piece, unless otherwise indicated.
The time signature appears after the key signature. Measures (bars) divide
the piece into regular groupings of beats, and the time signatures specify those
groupings.
Directions to the performer regarding tempo and dynamics are added above
or below the staff. In written notation, the term dynamics usually refers to the
intensity of the notes17 . The two basic dynamic indications in music are p or
piano, meaning soft, and f or forte, meaning loud or strong.
In modern notation, lyrics can be written for vocal music. Besides this
notation, others can be used to represent unpitched instruments (percussion
notation) or chord progressions (e.g., tablatures).
2.3.4 Computer music notation
A digital score contains symbolic data which allow the easy calculation of
musical information and its manipulation. In a computer, a score can be stored
17 Although the term dynamics is sometimes used to refer to other aspects of the execution of
a given piece, like staccato, legato, etc.
and represented in different ways. Musical software can decode symbolic data
and represent them in modern notation. Software like sequencers can also play
musical pieces in symbolic formats using a synthesizer.
Symbolic formats
The MIDI18 (Musical Instrument Digital Interface) protocol, introduced by the
MIDI Manufacturers Association and the Japan MIDI Standards Committee in
1983, enables electronic musical instruments, computers, and other equipment
to communicate, control, and synchronize with each other. MIDI does not
generate any sound, but it can be used to control a MIDI instrument that will
produce the specified sound. Event messages such as the pitch and intensity
(velocity) of musical notes can be transmitted using this protocol. It can also
control parameters such as volume, vibrato, panning, musical key, and clock
signals to set the tempo.
In MIDI, the pitch of a note is encoded using a number (see Fig. 2.19). A
frequency f can be converted into a MIDI pitch number n using Eq. 2.21:
n = round\left(69 + 12 \log_2 \frac{f}{440}\right)    (2.21)

Conversely, a MIDI pitch number n can be converted back into a frequency in Hz:

f = 440 \cdot 2^{(n - 69)/12}    (2.22)
Figure 2.19: Equal temperament system, showing, for each note, its position in the staff, frequency, note name and MIDI note number. Fig. from Joe Wolfe, University of New South Wales (UNSW), http://www.phys.unsw.edu.au/jw/notes.html.
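A minimal sketch of Eqs. 2.21 and 2.22, assuming the standard A4 = 440 Hz reference:

import math

def freq_to_midi(f):
    return int(round(69 + 12 * math.log2(f / 440.0)))     # Eq. 2.21

def midi_to_freq(n):
    return 440.0 * 2.0 ** ((n - 69) / 12.0)               # Eq. 2.22

print(freq_to_midi(261.63))   # 60 (middle C)
print(midi_to_freq(69))       # 440.0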
MIDI messages, along with timing information, can be collected and stored
in a standard MIDI file (SMF). This is the most extended symbolic file
format in computer music. The SMF specification was developed by the MIDI
Manufacturers Association (MMA). Large collections of files in this format can
be found on the web.
The main limitation of MIDI is that there exist musical symbols in modern
notation that can not be explicitly encoded using this format. For example, pitch
names have a different meaning in music, but there is no difference between C♯ and D♭ in MIDI, as they share the same pitch number. In the literature, there
are a number of pitch spelling algorithms to assign contextually consistent letter
names to pitch numbers according to the local key context. A comparison of
these methods can be found in the Meredith and Wiggins (2005) review.
This is because MIDI is a sound-oriented code. It was designed as a protocol
to control electronic instruments that produce sound, and not initially intended
to represent musical scores. Another example of a sound-related code is the CSound
score format, which was also designed for the control and generation of sounds.
36
Figure 2.20: Example of a piano-roll (top) and score (bottom) representation for an excerpt of the MIDI file RWC-MDB-C-2001-27 from Goto (2003), RWC database, W.A. Mozart variations on Ah Vous Dirai-je Maman, K.265/300e. Figs. obtained using Logic Pro 8.

2.4 Supervised learning

2.4.1 Neural networks

The multilayer perceptron (MLP), or multilayer neural network architecture, was first introduced by Minsky and Papert (1969). Citing Duda et al. (2000), multilayer neural networks implement linear discriminants, but in a space where the inputs have been nonlinearly mapped. The key power provided by such networks is that they admit fairly simple algorithms where the form of the nonlinearity can be learned from training data.

Fig. 2.21 shows an example of an MLP. This sample neural network is composed of three layers: the input layer (3 neurons), a hidden layer (2 neurons), and the output layer (3 neurons), connected by weighted edges.
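As an illustration only (not the networks used in this work), the sketch below runs a forward pass through a 3-2-3 multilayer perceptron with sigmoid activations; the weights and inputs are random example values.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 3))     # weight matrix 1: input (3) -> hidden (2)
W2 = rng.standard_normal((3, 2))     # weight matrix 2: hidden (2) -> output (3)

x = np.array([0.2, -0.5, 0.9])       # input values
h = sigmoid(W1 @ x)                  # hidden layer activations
y = sigmoid(W2 @ h)                  # output values
print(y)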
Figure 2.21: Example of a multilayer perceptron, with input values, an input layer, weight matrix 1, a hidden layer, weight matrix 2, an output layer, and output values.

of positions in the input layer (see Fig. 2.22). A TDNN can be trained with the same standard backpropagation algorithm used for an MLP.

2.4.2 Nearest neighbors

Figure 2.23: Nearest neighbor classification example, showing the neighborhoods considered for 1NN, 2NN, 3NN, and 4NN.

27 More information about different metrics used for NN classification can be found in (Duda et al., 2000), section 4.6.
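For completeness, a minimal k nearest neighbors classifier can be sketched as follows, assuming the Euclidean distance and majority voting (other metrics can be used, as noted in the footnote above); the training data are arbitrary example points.

import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    d = np.linalg.norm(X_train - x, axis=1)                  # distances to all prototypes
    nearest = np.argsort(d)[:k]                              # indices of the k closest ones
    return Counter(y_train[nearest]).most_common(1)[0][0]    # majority vote

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array(["a", "a", "b", "b"])
print(knn_classify(X_train, y_train, np.array([0.2, 0.1]), k=3))   # prints "a"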
Music transcription
This chapter briefly addresses the human music transcription process, followed
by an analysis of the theoretical issues in automatic transcription from a signal
processing point of view. Finally, the onset detection task is also introduced.
The different metrics used for the evaluation of multiple f0 estimation and onset
detection methods are also discussed.
following events. We can identify relative pitch differences rather than absolute
pitches. Another proof of the importance of the musical context is that pitches
can be very hard to identify in a confusing context like, for instance, when two
different songs are heard simultaneously.
Klapuri et al. (2000) performed a test to measure the pitch identification
ability of trained musicians using isolated chords. The results were compared
with those obtained using an automatic transcription system, and only the
two most skilled subjects performed better than the computational approach,
showing that it is not easy to analyze notes out of context.
Hainsworth (2003) proposed a test where trained musicians were asked to describe how they performed transcription. A common pattern was found. The
first step was to do a rough structural analysis of the piece, breaking the song
into sections, finding repetitions, and in some cases marking key phrases. Then,
a chord scheme or the bass line were detected, followed by the melody. Finally,
the inner harmonies were heard by repeated listening, building up a mental
representation. According to Hainsworth (2003), no-one transcribes anything
but simple music in a single pass.
The auditory scene analysis (ASA) is a term proposed by Bregman (1990)
to describe the process by which the human auditory system organizes sound
into perceptually meaningful elements. In computational analysis, the related
concept is called computational auditory scene analysis (CASA), which is closely
related to source separation and blind signal separation. The key aspects of
the ASA model are segmentation, integration, and segregation. The grouping
principles of ASA can be categorized into sequential grouping cues (those that
operate across time, or segregated) and simultaneous grouping cues (those
that operate across frequency, or integrated). In addition, schemas (learned
patterns) play an important role. Mathematical formalizations to the field of
computational auditory perception have also been proposed, for instance, by
Smaragdis (2001) and Cont (2008).
The main advantage for humans when transcribing music is our unique
capability to identify patterns and our memory, which allows us to predict
future events. Using memory in computational transcription usually implies a
huge computational cost. It is not a problem to include short-term memory, but finding long-term repetitions means keeping many hypotheses alive for various
frames, which is a very costly task. Solving certain ambiguities that humans
can do using long-term memory remains as a challenge. An excellent analysis of
prediction, expectation and anticipation of musical events from a psychological
point of view is done by Cont (2008), who also proposes computational
anticipatory models to address several aspects of musical anticipation. Symbolic
music sequences can also be modeled and predicted to some degree (Paiement
et al., 2008), as they are typically composed of repetitive patterns.
Within a short context, a trained musician can not identify any musical
information when listening to a 50 ms isolated segment of music. With a 100
ms long segment, some rough pitch estimation can be done, but it is still difficult
to identify the instrument. Using longer windows, the timbre becomes apparent.
However, multiple f0 estimation systems that perform the STFT can estimate
the pitches using short frames1 . Therefore, probably computers can do a better
estimate in isolated time frames, but humans can transcribe better within a
wider context.
3.2 Multiple fundamental frequency estimation

A harmonic source can be modeled as a sum of H harmonically related sinusoids plus a residual z[n]:

x[n] \approx \sum_{h=1}^{H} A_h \cos(h \omega_0 n + \phi_h) + z[n]    (3.1)

where \omega_0 = 2\pi f_0. The relation in Eq. 3.1 is approximated for practical use, as
the signal can have some harmonic deviations.
A multiple f0 estimation method assumes that there can be more than one
harmonic source in the input signal. Formally, the sum of M harmonic sources
can be expressed as:
x[n] \approx \sum_{m=1}^{M} \sum_{h_m=1}^{H_m} A_{h_m} \cos(h_m \omega_m n + \phi_{h_m}) + z[n]    (3.2)
1 A typical frame length used to detect multiple f0 when using the STFT is about 93 ms.
Noise suppression techniques have been proposed in the literature2 to allow the
subtraction of additive noise from the mixture. And the third major issue is
that, besides the source and noise models, in multiple f0 estimation there is a
third model, which is probably the most complex: the interaction between the
sources.
For instance, consider two notes playing simultaneously within an octave
interval. As their spectrum shows the same harmonic locations as the lowest
note playing alone, some other information (such as the energy expected in
each harmonic for a particular instrument) is needed to infer the presence of
two notes. This issue is usually called octave ambiguity.
According to Klapuri (2004), in contrast to speech, the pitch range is wide3
in music, and the sounds produced by different musical instruments vary a lot in
their spectral content. The harmonic pattern of an instrument is also different
from low to high notes. And transients and the interference of unpitched content
in real music have to be addressed too.
On the other hand, in music the f0 values are temporally more stable than
in speech. Citing Klapuri (2004), it is more difficult to track the f0 of four
simultaneous speakers than to perform music transcription of four-voice vocal
music.
As previously discussed in Sec. 2.3.1, consonant intervals are more frequent
than dissonant ones in western music. Therefore, pleasant chords include
harmonic components of different sounds which coincide in frequency (harmonic
overlaps). Harmonic overlaps and beating are the main effects produced by the
interaction model, and they are described below.
2 For a review of noise estimation and suppression methods, see (Yeh, 2008), chapter 4.
3 The tessitura of an instrument is the f0 range that it can achieve.

3.2.1 Harmonic overlap
When several partials with the same frequency \omega overlap, the resulting component is the sum of K sinusoids:

A \cos(\omega n + \phi) = \sum_{k=1}^{K} A_k \cos(\omega n + \phi_k)    (3.3)
Using trigonometric identities, the resulting amplitude (Yeh and Roebel, 2009) can be calculated as:

A = \sqrt{\left[\sum_{k=1}^{K} A_k \cos(\phi_k)\right]^2 + \left[\sum_{k=1}^{K} A_k \sin(\phi_k)\right]^2}    (3.4)
From this, the estimated amplitude A of two overlapping partials with the same frequency, different amplitudes, and phase difference \Delta\phi = \phi_1 - \phi_2 is:

A = \sqrt{A_1^2 + A_2^2 + 2 A_1 A_2 \cos(\Delta\phi)}    (3.5)
As pointed out by Yeh and Roebel (2009), two assumptions are usually
made for analyzing overlapping partials: the additivity of linear spectrum and
the additivity of power spectrum. The additivity of the linear spectrum, A = A_1 + A_2, assumes that the two sinusoids are in phase, i.e., \cos(\Delta\phi) = 1. The additivity of the power spectrum, A = \sqrt{A_1^2 + A_2^2}, holds when \cos(\Delta\phi) = 0.
According to Klapuri (2003a), if one of the partial amplitudes is significantly
greater than the other, as is usually the case, A approaches the maximum of
the two. Looking at Eq. 3.5, this assumption is closely related to the additivity
of power spectrum, which experimentally (see Yeh and Roebel, 2009) obtains
better amplitude estimates than considering \cos(\Delta\phi) = 1.
Recently, Yeh and Roebel (2009) proposed an expected overlap model to get
a better estimation of the amplitude when two partials overlap, assuming that
the phase difference is uniformly distributed.
3.2.2 Beating
Figure 3.1: Interference tones of two sinusoidal signals s1(t) and s2(t) of close frequencies. Fig. extracted from http://www.phys.unsw.edu.au/jw/beats.html
The beating effect generates spectral components not belonging to any original source (see Fig. 3.2), producing ghost fundamental frequencies, and it also alters the original partial amplitudes in the spectrum.
Figure 3.2: Example spectrum of two piano sounds with fundamental frequencies C3 (130.81 Hz) and G3 (196 Hz). A beating component appears at frequency 65 Hz, corresponding to a C2 ghost pitch.

3.2.3 Evaluation metrics
Different metrics have been used in the literature to evaluate polyphonic
estimation methods. They can be classified into frame-by-frame metrics
(fundamental frequencies are evaluated within single frames), and note-based
metrics (note onsets and durations are also taken into account). The former
are used to evaluate most multiple f0 estimation methods, whereas note-based
metrics are suitable for the evaluation of those approaches that also perform f0
tracking5 .
Frame-based evaluation
Within a frame level, a false positive (FP) is a detected pitch which is not
present in the signal, and a false negative (FN) is a missing pitch. Correctly
detected pitches (OK) are those estimated pitches that are also present in the
ground-truth.
A commonly used metric for frame-based evaluation is the accuracy, which
can be defined as:
Acc = \frac{OK}{OK + FP + FN}    (3.6)

5 This term refers to the tracking of the f0 estimates along consecutive frames in order to add a temporal continuity to the detection.
The precision, recall and F-measure are defined as:

Prec = \frac{OK}{OK + FP}    (3.7)

Rec = \frac{OK}{OK + FN}    (3.8)

F-measure = \frac{2 \cdot Prec \cdot Rec}{Prec + Rec} = \frac{OK}{OK + \frac{1}{2}FP + \frac{1}{2}FN}    (3.9)
The total error is defined as:

E_{tot} = \frac{\sum_{t=1}^{T} \max(N_{ref}[t], N_{sys}[t]) - N_{corr}[t]}{\sum_{t=1}^{T} N_{ref}[t]}    (3.10)

The substitution, miss and false alarm errors are defined as follows:

E_{subs} = \frac{\sum_{t=1}^{T} \min(N_{ref}[t], N_{sys}[t]) - N_{corr}[t]}{\sum_{t=1}^{T} N_{ref}[t]}    (3.11)

E_{miss} = \frac{\sum_{t=1}^{T} \max(0, N_{ref}[t] - N_{sys}[t])}{\sum_{t=1}^{T} N_{ref}[t]}    (3.12)

E_{fa} = \frac{\sum_{t=1}^{T} \max(0, N_{sys}[t] - N_{ref}[t])}{\sum_{t=1}^{T} N_{ref}[t]}    (3.13)

where N_{ref}[t] is the number of ground-truth pitches at frame t, N_{sys}[t] the number of detected pitches, N_{corr}[t] the number of correctly detected pitches, and T the total number of frames.
Poliner and Ellis (2007a) suggest that, as is the universal practice in the speech recognition community, this is probably the most adequate measure, since it gives a direct feel for the quantity of errors that will occur as a proportion of the total quantity of notes present.

To summarize, three alternative metrics are used in the literature to evaluate multiple f0 estimation systems at the frame level: accuracy (Eq. 3.6), F-measure (Eq. 3.9), and total error (Eq. 3.10).
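As an illustration, the following minimal Python sketch computes these frame-level metrics from per-frame sets of detected and ground-truth pitches; the set-based representation and the example values are assumptions made for the example, not part of any evaluated system:

    # Minimal sketch of the frame-level metrics in Eqs. 3.6-3.9, assuming the
    # ground truth and the estimate for each frame are given as sets of MIDI pitches.

    def frame_metrics(reference, estimate):
        """Count OK/FP/FN over all frames and return accuracy, precision,
        recall and F-measure. `reference` and `estimate` are lists of sets."""
        ok = fp = fn = 0
        for ref, est in zip(reference, estimate):
            ok += len(ref & est)          # correctly detected pitches
            fp += len(est - ref)          # detected but not present
            fn += len(ref - est)          # present but not detected
        acc = ok / (ok + fp + fn) if ok + fp + fn else 0.0         # Eq. 3.6
        prec = ok / (ok + fp) if ok + fp else 0.0                  # Eq. 3.7
        rec = ok / (ok + fn) if ok + fn else 0.0                   # Eq. 3.8
        fm = 2 * prec * rec / (prec + rec) if prec + rec else 0.0  # Eq. 3.9
        return acc, prec, rec, fm

    # Example: two frames, with one missed pitch and one extra pitch.
    print(frame_metrics([{60, 64, 67}, {60}], [{60, 64}, {60, 72}]))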
The accuracy is the most widely used metric for frame-by-frame evaluation. The main reason for using accuracy instead of F-measure is that an equal balance between precision and recall is probably less adequate for this task. Typically, multiple f0 estimation methods obtain higher precision than recall. This occurs because some analyzed mixtures contain many pitches with overlapped harmonics that can be masked by the other components. An experiment was carried out by Huron (1989) to study the limitations in listeners' abilities to identify the number of concurrent sounding voices. The most frequent type of confusion was the underestimation of the number of sounds. Some pitches can be present in the signal but be almost inaudible, and they are also very difficult to detect analytically. For instance, when trying to listen to an isolated 93 ms frame with 6 simultaneous pitches, we usually tend to underestimate the number of sources.
Note-based evaluation
Instead of counting the errors at each frame and summing the result for all
the frames, alternative metrics have been proposed to evaluate the temporal
continuity of the estimate. Precision, recall, F-measure and accuracy are also
frequently used for note level evaluation. However, it is not trivial to define
what is a correctly detected note, a false positive, and a false negative.
The note-based metric proposed by Ryynanen and Klapuri (2005) considers a reference note correctly transcribed when its pitch matches that of a transcribed note, the absolute difference between their onset times is smaller than a given onset interval, and the transcribed note is not already associated with another reference note. Results are reported using precision, recall, and the mean overlap ratio, which measures the degree of temporal overlap between the reference and transcribed notes.
Figure 3.3: Example of a guitar sound waveform. The actual onsets are marked with dashed vertical lines.
Onsets can be produced by unpitched percussive sounds like drums, pitched percussive onsets like pianos, and pitched non-percussive onsets like bowed strings. Unpitched and pitched percussive sounds produce hard onsets, whereas pitched non-percussive timbres usually generate soft onsets.
3.3.1 Evaluation metrics
4.1.1 Time domain methods
Time domain methods look for a repetitive pattern in the signal, corresponding
to the fundamental period. A widely used technique is the autocorrelation
function, which is defined for a signal x[t] with a frame length W as:
ACF_x[\tau] = \sum_{k=t}^{t+W-1} x[k] \, x[k+\tau]    (4.1)
where τ represents the lag value. The peaks of this function correspond to multiples of the fundamental period. Usually, autocorrelation methods select the highest non-zero-lag peak over a given threshold within a range of lags. However, this technique is sensitive to formant structures, producing octave errors. As Hess (1983) points out, some methods like center clipping (Dubnowski et al., 1976) or spectral flattening (Sondhi, 1968) can be used to attenuate these effects.

1 Detecting the fundamental frequency in speech signals is useful, for instance, for prosody analysis. Prosody refers to the rhythm, stress, and intonation of connected speech.
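As an illustration of the autocorrelation approach of Eq. 4.1, the following Python sketch estimates a single f0 by picking the highest ACF peak within a lag range; the search range and the relative threshold are illustrative choices, not values taken from the cited methods:

    import numpy as np

    # Sketch of autocorrelation-based f0 estimation (Eq. 4.1): compute the ACF of a
    # frame and pick the highest peak within a plausible lag range.

    def acf_f0(frame, fs, fmin=50.0, fmax=2000.0, threshold=0.3):
        frame = frame - np.mean(frame)
        acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        acf /= acf[0] + 1e-12                      # normalize so that ACF[0] = 1
        lag_min, lag_max = int(fs / fmax), int(fs / fmin)
        best = np.argmax(acf[lag_min:lag_max]) + lag_min   # highest non-zero-lag peak
        return fs / best if acf[best] > threshold else None

    # Example: a 93 ms frame of a 440 Hz sinusoid sampled at 22050 Hz.
    fs = 22050
    t = np.arange(int(0.093 * fs)) / fs
    print(acf_f0(np.sin(2 * np.pi * 440 * t), fs))   # close to 440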
The squared difference function (SDF) is a similar approach to measure
dissimilarities, and it has been used by de Cheveigne and Kawahara (2002) for
the YIN algorithm.
SDF_x[\tau] = \sum_{k=t}^{t+W-1} \left( x[k] - x[k+\tau] \right)^2    (4.2)
SDF'_x[\tau] = \begin{cases} 1 & \text{if } \tau = 0, \\ \dfrac{SDF_x[\tau]}{(1/\tau) \sum_{j=1}^{\tau} SDF_x[j]} & \text{otherwise} \end{cases}    (4.3)
The main advantage of using the SDF' function is that it tends to remain large at low lags, dropping below 1 only where SDF falls below the average. Basically, it removes dips and lags near zero, avoiding super-harmonic errors, and the normalization makes the function independent of the absolute signal level. An absolute threshold is set, choosing the first local minimum of SDF' below that threshold. If none is found, the global minimum is chosen instead. Once the lag value is selected, a parabolic interpolation of immediate neighbors is done to increase the accuracy of the estimate, obtaining τ', and the detected fundamental frequency is finally set as f0 = fs/τ'.
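A minimal sketch of this YIN-style procedure (Eqs. 4.2 and 4.3) could be implemented as follows; the absolute threshold is an illustrative value and the parabolic interpolation step is omitted for brevity:

    import numpy as np

    # Sketch of the squared difference function (Eq. 4.2), its cumulative mean
    # normalization (Eq. 4.3), and the selection of the first local minimum below
    # an absolute threshold.

    def yin_f0(frame, fs, fmin=50.0, fmax=2000.0, threshold=0.1):
        max_lag = int(fs / fmin)
        sdf = np.array([np.sum((frame[:-lag] - frame[lag:]) ** 2)
                        for lag in range(1, max_lag)])            # Eq. 4.2, tau >= 1
        taus = np.arange(1, max_lag)
        sdf_norm = sdf * taus / (np.cumsum(sdf) + 1e-12)          # Eq. 4.3 (tau > 0)
        lag_min = int(fs / fmax)
        below = np.where(sdf_norm[lag_min:] < threshold)[0]
        if len(below):
            i = below[0] + lag_min
            while i + 1 < len(sdf_norm) and sdf_norm[i + 1] < sdf_norm[i]:
                i += 1                           # descend to the first local minimum
        else:
            i = int(np.argmin(sdf_norm[lag_min:])) + lag_min
        return fs / (i + 1)                      # taus start at 1

    fs = 22050
    t = np.arange(int(0.093 * fs)) / fs
    print(yin_f0(np.sin(2 * np.pi * 220 * t), fs))   # close to 220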
YIN is a robust and reliable algorithm that has been successfully used as a basis for singing voice transcription methods, like the one proposed by Ryynanen and Klapuri (2004).
4.1.2 Frequency domain methods
Usually, methods in the frequency domain analyze the locations or the distance
between hypothetical partials in the spectrum.
Cepstrum
The real cepstrum of a signal is the inverse Fourier transform of the logarithm
of the magnitude spectrum.
CEP_x[\tau] = IDFT\{ \log( |DFT(x[n])| ) \}    (4.4)
ACF_x[\tau] = IDFT\{ |DFT(x[n])|^2 \} = \frac{1}{K} \sum_{k=0}^{K-1} |X[k]|^2 \cos\!\left( \frac{2\pi k \tau}{K} \right)    (4.5)
Note that the cosine factor emphasizes the partial amplitudes at the harmonic positions associated with the lag τ. The main difference between autocorrelation and cepstrum is that autocorrelation uses the square of the DFT, whereas the cepstrum takes the logarithm. Squaring the DFT raises the spectral peaks, but also the noise. Using the logarithm flattens the spectrum, reducing the noise but also the harmonic amplitudes.

Therefore, as pointed out by Rabiner et al. (1976), the cepstrum performs a dynamic compression over the spectrum, flattening unwanted components and increasing the robustness to formants, but raising the noise level, whereas autocorrelation emphasizes spectral peaks in relation to noise, but raises the strength of spurious components.
Both ACF and cepstrum-based methods can be classified as spectral location
f0 estimators.
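As an illustration, a simple cepstrum-based estimator following Eq. 4.4 can be sketched as follows; the quefrency search range and the test signal are assumptions made for the example:

    import numpy as np

    # Sketch of a cepstrum-based f0 estimate (Eq. 4.4): the real cepstrum is the
    # inverse DFT of the log-magnitude spectrum, and its highest peak within a
    # quefrency (lag) range gives the fundamental period.

    def cepstrum_f0(frame, fs, fmin=50.0, fmax=2000.0):
        frame = frame * np.hanning(len(frame))
        spectrum = np.abs(np.fft.rfft(frame))
        cep = np.fft.irfft(np.log(spectrum + 1e-12))   # real cepstrum, Eq. 4.4
        qmin, qmax = int(fs / fmax), int(fs / fmin)
        tau = np.argmax(cep[qmin:qmax]) + qmin
        return fs / tau

    fs = 22050
    t = np.arange(4096) / fs
    # Harmonic signal with f0 = 200 Hz and three partials.
    x = sum(np.sin(2 * np.pi * 200 * h * t) / h for h in (1, 2, 3))
    print(cepstrum_f0(x, fs))   # close to 200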
Spectral autocorrelation
The main drawback of the spectral location f0 estimators is that they are
very sensitive to harmonic deviations from their ideal position. Some methods,
like the one proposed by Lahat et al. (1987), perform autocorrelation over the
spectrum.
ACFS_X[\tau] = \frac{2}{K} \sum_{k=0}^{K/2-1} |X[k]| \, |X[k+\tau]|    (4.6)
Figure 4.2: Two way mismatch procedure from Maher and Beauchamp (1994).
The first error measures the mismatch between each measured partial and its nearest harmonic neighbor in the predicted sequence, whereas the second measures the mismatch between each predicted harmonic and its nearest neighbor in the measured sequence. Each match is weighted by the amplitudes of the observed peaks. This method tries to reduce octave errors, applying a penalty to missing and extra harmonics relative to the predicted pattern. The methodology was also used for duet3 separation4.
Cano (1998) introduced some modifications over the TWM to improve the original SMS analysis developed by Serra (1997). These modifications include a pitch-dependent analysis window using an adaptive window length, a more restrictive selection of the spectral peaks to be considered, f0 tracking using short-term history to choose between candidates with similar TWM error, a restriction of the frequency range of possible candidates, and a discrimination between pitched and unpitched parts.
4.1.3 Perceptual models
4.1.4 Probabilistic models
As pointed out by Roads (1996), a small time window is often not enough for a
human to identify pitch, but when many frames are played one after another, a
sensation of pitch becomes apparent. This is the main motivation to introduce
probabilistic models for f0 tracking, which can be used to refine the f0 estimate.
Intuitively, a simple f0 tracking approach would consist in giving preference
to f0 hypotheses that are close to the hypothesis of the last time frame. A
more reliable method is to use statistical models, like hidden Markov models
(HMMs), which track variables through time. HMMs are state machines, with
a hypothesis available for the output variable at each state. At each time frame,
the HMM moves from the current state to the most likely next state, based on
the input to the model and the state history which is represented in the current
state.
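The following sketch illustrates this idea with a toy Viterbi decoding over a discrete pitch grid, where transitions favor staying close to the previous pitch; the Gaussian transition model and the example likelihoods are illustrative and do not correspond to any of the cited systems:

    import numpy as np

    # Minimal sketch of HMM-style f0 tracking over a discrete pitch grid: frame-wise
    # pitch likelihoods are smoothed by Viterbi decoding with transitions that favor
    # small pitch jumps between consecutive frames.

    def viterbi_track(likelihoods, sigma=2.0):
        """likelihoods: (frames, pitches) array of per-frame pitch scores."""
        T, P = likelihoods.shape
        pitches = np.arange(P)
        trans = np.exp(-0.5 * ((pitches[:, None] - pitches[None, :]) / sigma) ** 2)
        trans /= trans.sum(axis=1, keepdims=True)
        log_t = np.log(trans + 1e-12)
        log_l = np.log(likelihoods + 1e-12)
        delta = log_l[0].copy()
        back = np.zeros((T, P), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_t          # best predecessor for each state
            back[t] = np.argmax(scores, axis=0)
            delta = scores[back[t], pitches] + log_l[t]
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # Toy example: 5 frames, 4 pitch states; a noisy observation at frame 2 is smoothed.
    obs = np.array([[.8, .1, .05, .05], [.7, .2, .05, .05], [.1, .1, .1, .7],
                    [.7, .2, .05, .05], [.8, .1, .05, .05]])
    print(viterbi_track(obs))   # the decoded path stays on pitch 0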
Figure 4.3: Combinations of note models and the musicological model from Ryynänen and Klapuri (2004).
Finally, the two models are combined into a network (see Fig. 4.3), and the most probable path is found according to the likelihoods given by the note models and the musicological model. The system obtained half the errors of the simple f0 estimation rounded to MIDI pitch, proving the capability of probabilistic models for this task.
4.2.1 Salience methods
The method from Peeters (2006) combines a temporal representation (time-domain ACF and real cepstrum) with a spectral representation (spectral autocorrelation) to reduce octave ambiguities. The best results were reported when combining the spectral autocorrelation function with the cepstrum.
Zhou et al. (2009) propose an efficient method which relies on a novel time-frequency representation called resonator time-frequency image (RTFI). The RTFI (Zhou and Mattavelli, 2007) selects a first order complex resonator filter bank to implement a frequency dependent time-frequency analysis. Harmonic components are extracted by transforming the RTFI average energy spectrum into a relative energy spectrum. Then, a preliminary estimation of pitch candidates is done by converting the RTFI average spectrum into a pitch energy spectrum (PES) and a relative pitch energy spectrum (RPES). The information about harmonic components and pitch candidates is combined to remove extra pitches. Finally, the remaining candidates are filtered using a smoothness criterion to remove extra pitches again, considering only cases for which the frequency ratio of two candidates is 2, 3 or 4.
4.2.2 Iterative cancellation methods

Some methods estimate the most prominent f0, subtract it from the mixture, and repeat the process on the residual signal until a termination criterion is reached.
In the method proposed by Klapuri (2003a), Fig. 4.4, the spectrum of the
signal is warped on a logarithmic frequency scale to compress the spectral
magnitudes and remove the noise. The processed spectrum is analyzed into
a 2/3 octave filter bank, and f0 weights are computed for each band according
to the normalized sum of their partial amplitudes. The results are combined by
summing the squared band-wise weights, taking inharmonicity (Eq. 2.17) into
account. The spectral components of the fundamental frequencies that have the
highest global weights are smoothed using the algorithm described in (Klapuri,
Figure 4.6: Overview of the joint estimation method from Yeh (2008).
key. Like in (Ryynänen and Klapuri, 2004), the acoustic and musicological models are combined into a network whose optimal path is found using the token-passing algorithm from Young et al. (1989).
Other examples of iterative cancellation methods are those proposed by Wan
et al. (2005), Yin et al. (2005), and Cao et al. (2007).
4.2.3 Joint estimation methods

These methods evaluate a set of possible hypotheses, consisting of f0 combinations, to select the best one without corrupting the residual at each iteration.
Time-domain methods for the joint cancellation of multiple f0 hypotheses have been proposed by de Cheveigne (1993, 2005). The hypotheses are cancelled using
a cascade of filters, and the combination selected is the one that minimizes the
residual. In the experiments done by de Cheveigne (2005), different iterative
cancellation methods are compared with the joint approach, showing that joint
cancellation outperforms the iterative cancellation results.
The method proposed by Yeh (2008) evaluates a set of multiple f0 hypotheses
without cancellation. An adaptive noise level estimation (see Fig. 4.6) is first
done using the algorithm described in (Yeh and Roebel, 2006), in order to
extract only the sinusoidal components. Then, given a known number of sources,
the fundamental frequencies are obtained using the method described in (Yeh
et al., 2005). At each spectral frame, to reduce the computational cost, a set of
f0 candidates are selected from the spectral peaks using a harmonic matching
technique9 . Each f0 hypothesis is related to a hypothetical partial sequence
(HPS). The HPS is a source model with estimated frequencies and amplitudes
obtained by partial selection and overlapping partial treatment. Partials are
identified with spectral peaks within a tolerance deviation from their ideal position. If more than one peak lies in the tolerance range, the peak forming a smoother HPS envelope is selected. The amplitudes of overlapped
partials in the combination are estimated by using linear interpolation, similarly
to (Maher, 1990), and a set of rules. Finally, HPS are flattened by exponential
compression.
Once HPS are estimated, a score function for a given hypothesis is
calculated taking into account, for each hypothetical source, the harmonicity, the
4.2.4 Supervised learning methods
The partial tracking method proposed by Marolt (2004a,b) uses a combination of the auditory Patterson-Holdsworth gammatone filterbank with the
Meddis hair cell model as a preprocessing stage. Instead of using a correlogram,
a modified version of the Large and Kolen (1994) adaptive oscillators is utilized
to detect periodicities in output channels of the auditory model. There are 88
oscillators with initial frequencies corresponding to the tuned musical pitches.
If the oscillators synchronize with their stimuli (outputs of the auditory model),
then the stimuli are periodic, meaning that partials are present in the input
signal. This scheme can be used to track partials, even in the presence of vibrato
or beating. The model was extended for tracking groups of harmonically related
partials, by using the output of the adaptive oscillators as inputs of neural
networks. A set of 88 neural networks corresponding to the musical pitches
were used, each containing up to 10 oscillators associated to partial frequencies.
The harmonic tracking method from Marolt (2004a,b) was integrated into a system called SONIC (see Fig. 4.7) to transcribe piano music. The combination of the auditory model outputs and the partial tracking neural network outputs is fed into a set of time delay neural networks (TDNN), each one corresponding to a musical pitch. The system also includes an onset detection stage, which is implemented with a fully-connected neural network, and a module to detect repeated note activations (consecutive notes with the same pitch). The information about the pitch estimate is complemented with the output of the repeated note module, yielding the pitch, length and loudness of each note.
The system is constrained to piano transcription, as training samples are piano
sounds.
Figure 4.8: HMM smoothed estimation from Poliner and Ellis (2007a) for an excerpt of Für Elise (Beethoven). The posteriorgram (pitch probabilities as a function of time) and the HMM smoothed estimation plotted over the ground-truth labels (light gray) are shown.
Reis et al. (2008c) use genetic algorithms12 for polyphonic piano transcription. Basically, a genetic algorithm consists of a set of candidate solutions (individuals, or chromosomes) which evolve through inheritance, selection, mutation and crossover until a termination criterion is met. At each generation, the quality (fitness) of each chromosome is evaluated, and the best individuals are chosen to keep evolving. Finally, the best chromosome is selected as the solution.
In the method proposed by Reis et al. (2008c), each chromosome corresponds
to a sequence of note events, where each note has pitch, onset, duration and
intensity. The initialization of the population is based on the observed STFT
peaks. The fitness function for an individual is obtained from the comparison of
the original STFT with the STFT of synthesized versions of the chromosomes
given an instrument. The method is constrained by the a priori knowledge of the instrument to be synthesized. The system was extended in Reis et al. (2008b) by combining the genetic algorithm with a memetic algorithm (gene fragment competition), to improve the quality of the solutions during the evolutionary process.
12 Genetic algorithms are evolutionary methods based on Darwin natural selection proposed
by Holland (1992).
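A generic sketch of such an evolutionary loop is shown below; the binary chromosome and the toy fitness function are placeholders for the note-event encoding and the STFT comparison used by Reis et al., not a reproduction of their system:

    import random

    # Generic sketch of a genetic algorithm: truncation selection, one-point
    # crossover, mutation and elitism over a toy binary chromosome.

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]          # toy "ground truth"

    def fitness(chrom):
        return sum(c == t for c, t in zip(chrom, TARGET))

    def evolve(pop_size=30, generations=40, p_mut=0.05):
        random.seed(0)
        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]             # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(TARGET))
                child = a[:cut] + b[cut:]             # one-point crossover
                child = [1 - g if random.random() < p_mut else g for g in child]
                children.append(child)
            pop = parents + children                  # elitism: parents survive
        return max(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))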
4.2.5 Unsupervised learning methods

The goal of non-negative matrix factorization (NMF), first proposed by Lee and Seung (1999), is to approximate a non-negative matrix X as a product of two non-negative matrices W and H, in such a way that the reconstruction error is minimized:

X \approx WH    (4.7)
This method has been used for music transcription, where typically X is the
spectral data, H corresponds to the spectral models (basis functions), and W
are the weightings, i.e., the intensity evolution along time (see Fig. 4.9). This
methodology is suitable for instruments with a fixed spectral profile14 , such as
piano sounds.
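A minimal sketch of this factorization with the multiplicative updates of Lee and Seung (1999) is shown below, following the notation of Eq. 4.7 (H holds the spectral basis functions and W the time-varying gains); the matrix sizes and the iteration count are illustrative:

    import numpy as np

    # Sketch of NMF with multiplicative updates for the Euclidean cost:
    # X (frames x bins) ~ W (frames x components) @ H (components x bins).

    def nmf(X, components, iterations=200, seed=0):
        rng = np.random.default_rng(seed)
        frames, bins = X.shape
        W = rng.random((frames, components)) + 1e-3
        H = rng.random((components, bins)) + 1e-3
        for _ in range(iterations):
            H *= (W.T @ X) / (W.T @ W @ H + 1e-9)     # update basis spectra
            W *= (X @ H.T) / (W @ H @ H.T + 1e-9)     # update gains
        return W, H

    # Toy example: two fixed spectral profiles active at different frames.
    basis = np.array([[1.0, 0.5, 0.0, 0.2], [0.0, 0.3, 1.0, 0.6]])
    gains = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
    X = gains @ basis
    W, H = nmf(X, components=2)
    print(np.round(W @ H, 2))    # close to X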
There are different ways to design the cost function in order to minimize
the residual. For instance, Cont (2006) assumes that the correct solution for
a given spectrum uses a minimum of templates, i.e., that the solution has the
minimum number of non-zero elements in H. NMF methods have also been used
for music transcription by Plumbley et al. (2002), Smaragdis and Brown (2003),
13 SVMs are supervised learning methods for classification (see (Burges, 1998)). Viewing input data as sets of vectors in an n-dimensional space, an SVM constructs separating hyperplanes in that space in such a way that the margins between the data sets are maximized.
14 In the scope of this work, an instrument is said to have a fixed spectral profile when two notes of that instrument playing the same pitch produce a very similar sound, as happens with piano sounds. As an opposite example, a sax cannot be considered to have a fixed spectral profile, as real sax sounds usually contain varying dynamics and expressive alterations, like breathing noise, that do not sound in the same way as other notes with the same pitch.
Figure 4.9: NMF example from Smaragdis and Brown (2003). The original
score and the obtained values for H and W using 4 components are shown.
Raczynski et al. (2007), Virtanen (2007), Vincent et al. (2007) and Bertin et al.
(2007).
The independent component analysis (ICA), introduced by Comon (1994), is closely related to NMF. ICA can express a signal model as x = Wh, with x and h being n-dimensional real vectors and W a non-singular mixing matrix. Citing Virtanen (2006), ICA attempts to separate sources by identifying latent signals that are maximally independent.
As pointed out by Schmidt (2008), the differences between ICA and NMF
are the different constraints placed on the factorizing matrices. In ICA, rows
of W are maximally statistically independent, whereas in NMF all elements
of W and H are non-negative. Both ICA and NMF have been investigated by
Plumbley et al. (2002) and Abdallah and Plumbley (2003a, 2004) for polyphonic
transcription. In the evaluation done by Virtanen (2007) for spectrogram
factorization, the NMF algorithms yielded better separation results than ICA.
These methods have been successfully used for drum transcription (see
(FitzGerald, 2004) and (Virtanen, 2006)), as most percussive sounds have a
fixed spectral profile and they can be modeled using a single component.
4.2.6 Matching pursuit methods
The Matching Pursuit (MP) algorithm15 from Mallat and Zhang (1993) approximates a solution for decomposing a signal into linear functions (or atoms) that are selected from a dictionary. At the first iteration of the algorithm, the atom which gives the largest inner product with the analyzed signal is chosen. Then, the contribution of this function is subtracted from the signal and the process is repeated on the residue. MP minimizes the residual energy by choosing at each iteration the atom most correlated with the residual. As a result, the signal is represented as a weighted sum of atoms from the dictionary plus a residual.

15 The matching pursuit toolkit (MPTK) from Krstulovic and Gribonval (2006), available at http://mptk.irisa.fr, provides an efficient implementation of the MP algorithm.

Figure 4.10: Modified MP algorithm from Leveau et al. (2008) for the extraction of harmonic atoms.
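A minimal sketch of the basic MP loop described before Fig. 4.10 is shown below; the dictionary of unit-norm random atoms and the fixed number of iterations are illustrative assumptions:

    import numpy as np

    # Sketch of matching pursuit: pick the atom most correlated with the residual,
    # subtract its contribution, and repeat.

    def matching_pursuit(signal, dictionary, n_iterations=10):
        residual = signal.astype(float).copy()
        decomposition = []                       # list of (atom index, weight)
        for _ in range(n_iterations):
            correlations = dictionary @ residual          # inner products
            best = int(np.argmax(np.abs(correlations)))
            weight = correlations[best]
            residual -= weight * dictionary[best]         # subtract the contribution
            decomposition.append((best, weight))
        return decomposition, residual

    rng = np.random.default_rng(1)
    atoms = rng.normal(size=(50, 64))
    atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)  # unit-norm atoms
    x = 3.0 * atoms[7] - 2.0 * atoms[30]
    decomp, res = matching_pursuit(x, atoms, n_iterations=5)
    print(decomp[:2], np.linalg.norm(res))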
The method proposed by Cañadas-Quesada et al. (2008) is based on
harmonic matching pursuit (HMP) from Gribonval and Bacry (2003). The
HMP is an extension of MP with a dictionary composed by harmonic atoms.
Within this context, a Gabor atom16 can be identified with a partial, and a
harmonic atom is a linear combination of Gabor atoms (i.e., a spectral pattern).
The algorithm from Cañadas-Quesada et al. (2008) extends HMP to avoid
inaccurate decomposition when there are overlapped partials, by maximizing the
smoothness of the spectral envelope for each harmonic atom. The smoothness
maximization algorithm is similar to the one proposed by Klapuri (2003a).
The performance of this method when dealing with harmonically related
simultaneous notes is further described by Ruiz-Reyes et al. (2009).
Leveau et al. (2008) propose a modified MP algorithm which can be applied to the whole signal, instead of working on a frame-by-frame basis. The harmonic atom extraction method is shown in Fig. 4.10. Molecules are defined as groups of several atoms of the same instrument in successive time windows.
16 Gabor atoms are time-frequency atomic signal decompositions proposed by Gabor (1946,
1947). They are obtained by dilating, translating and modulating a mother generating
function.
4.2.7 Bayesian models
Citing Davy (2006b), tonal music can be exploited to build a Bayesian model,
that is, a mathematical model embedded into a probabilistic framework that
leads to the simplest model that explains a given waveform. Such models are
also known as generative models because they can be used to generate data by
changing parameters and the noise. Some multiple f0 estimation systems rely on
generative models of the acoustic waveform. Most of these models assume that
the fundamental frequency belongs to a fixed grid, associated to the pitches.
The method proposed by Cemgil et al. (2003, 2006) is based on a generative model formulated as a dynamical Bayesian network. The probabilistic model assumes harmonic frequency relationships of the partials and an exponentially decaying spectral envelope from one partial to another. This approach allows many classical noisy sum-of-sines models to be written in a sequential form. The model relies on sinusoids with damped amplitude and constant frequency. A piano-roll is inferred from the observation, assigning to each of the grid frequencies the state mute or sound at each instant. The algorithm for estimating the most likely piano-roll is based on EM and Kalman filtering on a sliding window over the audio signal. This can be considered a time-domain method (the DFT is not explicitly calculated), which can be used to analyze music with sample precision, but with a very high computational cost.
Vincent and Rodet (2004) propose a generative model combining a nonlinear
Independent Subspace Analysis (ISA17 ) and factorial HMM. The method is
based on creating specific instrument models based on learning. The spectra of
the instrument sounds are modeled by using the means and variances of partial
amplitudes, the partial frequencies and the residuals. To transcribe a signal, the spectrum is considered as a sum of spectral models whose weights are optimized using the second order Newton method. The HMM is used for adding temporal continuity and modeling note duration priors.
Other Bayesian approaches for music transcription are those proposed by
Kashino and Tanaka (1993), Sterian (1999), Walmsley et al. (1999), Raphael
(2002), Kashino and Godsill (2004), Dubois and Davy (2005, 2007), Vincent
and Plumbley (2005), and Davy et al. (2006). For a review on this topic, see
(Cemgil, 2004) and (Davy, 2006b).
4.2.8 Statistical spectral model methods
Goto (2000) describes a method called PreFEst (see Fig. 4.11) to detect melody
and bass lines in musical signals. The system assumes that the melody and bass
17 ISA combines the multidimensional ICA with invariant feature extraction. Linear ISA
describes the short-time power spectrum of a musical excerpt as a sum of power spectra with
time-varying weights, using a Gaussian noise for modeling the error.
72
are the most predominant harmonic structures in high and low frequency regions
respectively. First, the STFT is apportioned through a multirate filterbank, and
a set of candidate frequency components are extracted. Then, two bandpass
filters are used to separate the spectral components of the bass and melody.
For each set of filtered frequency components, the method forms a probability
density function (PDF) of the f0 . The observed PDF is considered as being
generated by a weighted mixture of harmonic-structure tone models. The model
parameters are estimated using the EM algorithm. To consider a continuity
of the f0 estimate, the most dominant and stable f0 trajectory is selected,
by tracking peak trajectories in the temporal transition of the fundamental
frequencies PDFs. To do this, a salience detector selects salient promising peaks
in the PDFs, and agents driven by those peaks track their trajectories. The
system works in real time.
Kameoka et al. (2007) propose a method called harmonic temporal structured clustering (HTC). This approach decomposes the power spectrum time
series into sequential spectral streams (clusters) corresponding to single sources.
This way, the pitch, intensity, onset, duration, and timbre features of each
source are jointly estimated. The input of the system is the observed
signal, characterized by its power spectrogram with log-frequency. The source
model (see Fig. 4.12) assumes smooth power envelopes with decaying partial
amplitudes. Using this model, the goodness of a partitioned cluster is calculated using the Kullback-Leibler (KL) divergence. The model parameters are estimated using the expectation-constrained maximization (ECM) algorithm
Figure 4.12: HTC spectral model of a single source from Kameoka et al. (2007).
from Meng and Rubin (1993), which is computationally simpler than the EM
algorithm. In the evaluation done by Kameoka et al. (2007), the HTC system
outperformed the PreFEst results.
The method from Li and Wang (2007) is similar to (Ryynanen and Klapuri,
2005) in the sense that the preliminary pitch estimate and the musical pitch
probability transition are integrated into a HMM. However, for pitch estimation,
Li and Wang (2007) use statistical tone models that characterize the spectral
shapes of the instruments. Kernel density estimation is used to build the
instrument models. The method is intended for single instrument transcription.
4.2.9 Blackboard systems
4.2.10 Database matching
Few supervised learning methods have been used as the core methodology for
polyphonic music transcription. Probably, this is because they rely on the data
given in the learning stage, and in polyphonic real music the space of observable
data is huge. However, supervised learning methods have been successfully
applied considering specific instrument transcription (usually, piano) to reduce
the search space. The same issue occurs in database matching methods, as they
also depend on the ground-truth data.
In contrast to supervised approaches, unsupervised learning methods do not need a priori information about the sources. However, they are mainly suitable for fixed time-frequency profiles (like piano or drums), and modeling harmonic sounds with varying harmonic components remains a challenge (Abdallah and Plumbley, 2004).
In general, music transcription methods based on Bayesian models are
mathematically complex and they tend to have high computational costs, but
they provide an elegant way of modeling the acoustic signal. Statistical spectral
model methods are also complex but they are computationally efficient.
Blackboard systems are general architectures and they need to rely on other
techniques, like a set of rules (Martin, 1996) or supervised learning (Bello, 2000),
to estimate the fundamental frequencies. However, the blackboard integration
concept provides a promising framework for multiple f0 estimation.
4.4.1 Signal processing methods
Most onset detection methods are based on signal processing techniques, and
they follow the general scheme represented in Fig. 4.14. First, a preprocessing
stage is done to transform the signal into a more convenient representation,
usually in the frequency domain. Then, an onset detection function (ODF),
related to the onset strength at each time frame is defined. A peak picking
algorithm is applied to the detection function, and those peaks over a threshold
are finally identified as onsets.
In the preprocessing stage, most systems convert the signal to the frequency
or complex domain. Besides STFT, a variety of alternative preprocessing
methods for onset detection, like filter-banks (Klapuri, 1999), constant-Q
Figure 4.14: General scheme of most onset detection methods: the audio signal goes through a preprocessing stage, an onset detection function o(t) is computed, and peak picking and thresholding yield the detected onsets.
4.4.2 Machine learning methods
Some methods do not follow the general scheme of Fig. 4.14. Instead, they use
machine learning techniques to classify each frame into onset or non-onset.
Supervised learning
The system proposed by Lacoste and Eck (2007) uses either the STFT or the constant-Q transform in the preprocessing stage. The linear and logarithmic frequency bins are combined with the phase plane to get the input features for one or several feed-forward neural networks, which classify frames into onset or non-onset. In the multiple network architecture, the tempo trace is also
estimated and used to condition the probability for each onset. This tempo
trace is computed using the cross-correlation of the onset trace with the onset
trace autocorrelation within a temporal window. A confidence measure that
weights the relative influence of the tempo trace is provided to the network.
Marolt et al. (2002) use a bank of 22 auditory filters to feed a fully connected
network of integrate-and-fire neurons. This network outputs a series of impulses
produced by energy oscillations, indicating the presence of onsets in the input
signal. Due to noise and beating, not all the impulses correspond to onsets.
To decide which impulses are real onsets, a multilayer perceptron trained with
synthesized and real piano recordings is used to yield the final estimates.
Support vector machines have also been used for onset detection by Kapanci
and Pfeffer (2004) and Davy and Godsill (2002) to detect abrupt spectral
changes.
Unsupervised learning
Unsupervised learning techniques like NMF and ICA have also been applied to onset detection.
Wang et al. (2008) generate the non-negative matrices with the magnitude
spectra of the input data. The basis matrices are the temporal and frequency
patterns. The temporal patterns are used to obtain three alternative detection
functions: a first-order difference function, a psychoacoustically motivated
relative difference function, and a constant-balanced relative difference function.
These ODFs are similarly computed by inspecting the differences of the temporal
patterns.
Figure 5.1: One semitone filter bank. Each band b_i (i = 1, ..., B) captures the spectral energy around one pitch (..., C3, C#3, D3, D#3, E3, F3, F#3, G3, ...) as a function of frequency.
5.1 Methodology
For detecting the beginnings of the notes in a musical signal, the method analyzes the spectral information across a one semitone filter bank, computing the band differences in time to obtain a detection function. Peaks in this function are extracted, and those whose values are over a threshold are considered onsets.
5.1.1 Preprocessing
From a digital audio signal, the STFT is computed, providing its magnitude spectrogram. A Hanning window with 92.9 ms length is used, with a 46.4 ms hop size. With these values, the temporal resolution achieved is Δt = 46.4 ms, and the spectral resolution is Δf = 10.77 Hz.
2 http://grfia.dlsi.ua.es/cm/worklines/pertusa/onset/pertusa_onset.tgz
Using a 1/12 octave filter bank, the filter corresponding to the pitch G♯0 has a center frequency of 51.91 Hz, and the fundamental frequency of the next pitch, A0, is 55.00 Hz, therefore this spectral resolution is not enough to build the lower filters. Zero padding was used to get more points in the spectrum. Using a zero padding factor z = 4, three additional windows with all samples set to zero were appended at the end of each frame before doing the STFT. With this technique, a frequency resolution Δf = 10.77/4 = 2.69 Hz is eventually obtained.
At each frame, the spectrum is apportioned among a one semitone filter bank to produce the corresponding filtered values. The filter bank comprises filters from 52 Hz (pitch G♯0) to the Nyquist frequency to cover all the harmonic range. When fs = 22,050 Hz, B = 94 filters are used3, whose center frequencies correspond to the fundamental frequencies of the 94 notes in that range. The filtered output at each frame is a vector b with B elements (b ∈ R^B).

b = \{ b_1, b_2, \ldots, b_i, \ldots, b_B \}    (5.1)

Each value b_i is obtained from the frequency response H_i of the corresponding filter i with the spectrum. Eq. 5.2 is used4 to compute the filtered values:

b_i = \sqrt{ \sum_{k=0}^{K-1} \left( |X[k]| \, |H_i[k]| \right)^2 }    (5.2)
5.1.2 Onset detection functions

Two onset detection functions have been used. The first one, called o[t], can be used for percussive (hard) onsets, whereas an alternative ODF, õ[t], has been proposed for sounds with smooth (soft) onsets.
Onset detection function for hard onsets (o[t])
Like in other onset detection methods, such as (Bilmes, 1993), (Goto and Muraoka, 1995, 1996), and (Scheirer, 1998), a first order derivative function is used to pick potential onset candidates. In the proposed approach, the derivative c_i[t] is computed for each filter i:

c_i[t] = \frac{d}{dt} b_i[t]    (5.3)
Figure 5.2: Example of the onset detection function o[t] for a piano melody, RWC-MDB-C-2001 No. 27 from Goto (2003), RWC database.
The values for each filter must be combined to yield the onsets. In order to
detect only the beginnings of the events, the positive first order derivatives of
all the bands are summed at each time, whereas negative derivatives, which can
be associated with offsets, are discarded:
a[t] = \sum_{i=1}^{B} \max\left( 0, c_i[t] \right)    (5.4)
To normalize the onset detection function, the overall energy s[t] is also
computed (note that a[t] < s[t]):
s[t] = \sum_{i=1}^{B} b_i[t]    (5.5)
The sum of the positive derivatives a[t] is divided by the sum of the filtered values s[t] to compute a relative difference. Therefore, the onset detection function o[t] ∈ [0, 1] is:

o[t] = \frac{a[t]}{s[t]} = \frac{\sum_{i=1}^{B} \max(0, c_i[t])}{\sum_{i=1}^{B} b_i[t]}    (5.6)
Fig. 5.2 shows an example of the onset detection function o[t] for a piano
excerpt, where all the peaks over the threshold were correctly detected onsets.
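For illustration, the following sketch computes o[t] from a matrix of one-semitone band values (Eq. 5.2 is assumed to be already applied); the derivative of Eq. 5.3 is approximated by a first-order difference and the toy band matrix is an assumption made for the example:

    import numpy as np

    # Sketch of the hard-onset detection function o[t] of Eq. 5.6, given the
    # band energies b (frames x bands).

    def onset_detection_function(b):
        c = np.diff(b, axis=0)                    # c_i[t] ~ b_i[t] - b_i[t-1]
        a = np.maximum(c, 0.0).sum(axis=1)        # Eq. 5.4: sum of positive derivatives
        s = b[1:].sum(axis=1) + 1e-12             # Eq. 5.5: overall band energy
        o = a / s                                 # Eq. 5.6: o[t] in [0, 1]
        return np.concatenate(([0.0], o))         # keep one value per frame

    # Toy band-energy matrix: a note starting at frame 2 and another at frame 5.
    b = np.array([[0.1, 0.1], [0.1, 0.1], [2.0, 0.1], [2.0, 0.1],
                  [2.0, 0.1], [2.0, 3.0], [2.0, 3.0]])
    print(np.round(onset_detection_function(b), 2))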
Onset detection function for soft onsets (õ[t])

For soft onsets, the temporal derivative of each band is computed considering C additional frames:

\tilde{c}_i[t] = \sum_{j=1}^{C} j \left( b_i[t+j] - b_i[t-j] \right)    (5.7)

\tilde{a}[t] = \sum_{i=1}^{B} \max\left( 0, \tilde{c}_i[t] \right)    (5.8)

With these equations, Eq. 5.5 must be replaced by Eq. 5.9 to normalize õ[t] into the range [0, 1]:

\tilde{s}[t] = \sum_{i=1}^{B} \sum_{j=1}^{C} j \, b_i[t+j]    (5.9)

\tilde{o}[t] = \frac{\tilde{a}[t]}{\tilde{s}[t]} = \frac{\sum_{i=1}^{B} \max(0, \tilde{c}_i[t])}{\sum_{i=1}^{B} \sum_{j=1}^{C} j \, b_i[t+j]}    (5.10)
Figure 5.3: Onset detection function for a polyphonic violin song (RWC-MDB-C-2001 No. 36 from Goto (2003), RWC database). (a) o[t]; (b) õ[t], with C = 1; (c) õ[t], with C = 2. With C = 2, all the onsets were successfully detected except one, which is marked with a circle.
5.1.3 Peak detection and thresholding

The last stage is to extract the onsets from the onset detection function. Peaks at time t are identified in the onset detection function when o[t−1] < o[t] > o[t+1], and those peaks over a fixed threshold, o[t] > θ, are considered onsets. Two consecutive peaks can not be detected, therefore the minimum temporal distance between two onsets is 2Δt = 92.8 ms. A silence threshold is also introduced to avoid false positive onsets in quiet regions, in such a way that if s[t] is below it, then o[t] = 0. The same peak detection and thresholding procedure is applied for õ[t].

The silence gate is only activated when silences occur, or when the considered frame contains very low energy, therefore it is not a critical parameter. The precision/recall deviation can be controlled through the threshold θ.
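The peak picking and thresholding stage can be sketched as follows; the example ODF values are illustrative, and the default parameters mirror those reported later in this chapter (θ = 0.18, a silence threshold of 70 and the 46.4 ms hop size):

    import numpy as np

    # Sketch of peak picking with a fixed threshold theta and a silence gate on s[t].

    def pick_onsets(o, s, theta=0.18, silence=70.0, hop=0.0464):
        o = np.where(s < silence, 0.0, o)               # silence gate
        onsets = []
        for t in range(1, len(o) - 1):
            if o[t - 1] < o[t] > o[t + 1] and o[t] > theta:
                onsets.append(t * hop)                  # onset time in seconds
        return onsets

    # Toy example: one clear peak above the threshold and one peak below it.
    o = np.array([0.0, 0.05, 0.6, 0.1, 0.0, 0.12, 0.05, 0.0])
    s = np.full_like(o, 100.0)
    print(pick_onsets(o, s))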
5.2.1
As previously described, the system has two free parameters: the silence gate threshold, to avoid false positives when the signal level is very low, and the onset detection threshold θ, which controls the precision/recall deviation of the ODF. The method was evaluated with the ODB database to set an appropriate value for θ. The results are shown in Fig. 5.5. A good compromise between precision and recall was obtained using θ = 0.18, with a silence threshold of 70.
7 http://www.phon.ucl.ac.uk/resource/sfs/
8 http://grfia.dlsi.ua.es/cm/worklines/pertusa/onset/ODB
9 http://grfia.dlsi.ua.es/cm/worklines/pertusa/onset/evaluator
Figure 5.4: Onsets from RWC-MDB-C-2001 No. 27 from Goto (2003), RWC database, labeled with the Speech Filing System (SFS).
Figure 5.5: Onset detection (o[t]) precision and recall curves as a function of the threshold θ, using a constant silence threshold of 70.
Reference              Content  OK    FP   FN   M  D  Pr %   Re %   F-m %
RWC-C02                classic  64    37   43   2  0  66.37  59.81  61.54
RWC-C03                classic  25    41   31   0  0  37.88  44.64  40.98
RWC-C26                piano    36    1    0    0  0  97.30  100.0  98.63
RWC-C27                piano    210   0    13   0  0  100.0  94.17  97.00
RWC-C36                violin   45    19   0    0  0  70.31  100.0  82.57
RWC-C38                violin   165   13   29   0  0  92.70  85.05  88.71
RWC-J01                piano    96    6    17   0  0  94.12  84.96  89.30
RWC-G08                rock     62    17   8    0  1  78.48  88.57  83.22
2-artificial           soul     117   35   16   1  0  76.97  87.97  82.11
2-uncle mean           jazz     157   10   21   0  0  94.01  88.20  91.01
3-long gone            rock     135   53   8    0  0  71.81  94.41  81.57
3-you think too much   jazz     160   10   25   0  0  94.12  86.49  90.14
6-three                rock     138   27   17   0  0  83.64  89.03  86.25
8-ambrielb             electro  111   33   36   0  0  77.08  75.51  76.29
15-tamerlano           opera    127   41   12   0  0  75.60  91.37  82.74
25-rujero              guitar   92    17   2    0  0  84.40  97.87  90.64
Its alright for you    rock     83    1    1    0  0  98.81  98.81  98.81
Tiersen 11             bells    37    36   1    0  0  50.68  97.37  66.67
Realorgan3             organ    13    10   2    2  0  56.52  86.67  68.42
Total                           1873  406  282        82.19  86.91  84.48

Table 5.1: Onset detection results using the proposed database (ODB). The table shows the number of correctly detected onsets (OK), false positives (FP), false negatives (FN), merged onsets (M), doubled onsets (D), precision (Pr), recall (Re), and F-measure (F-m).
The detailed results using o[t] with these thresholds can be seen in Tab. 5.1.
The overall F-measure achieved was 84.48%.
In order to get a perceptual evaluation of the results, once the onset detection was performed, new audio files10 were generated using CSound by adding a click sound to the original waveform at the positions where the onsets were detected.
Comparison with other approaches
In order to compare the method with other approaches, two publicly available
onset detection algorithms were evaluated using the ODB database. The
experiments were done comparing the onset times obtained by BeatRoot11 and
aubio12 with the ground-truth onsets of the ODB database using the evaluation
methodology previously described.
BeatRoot, introduced by Dixon (2006), is a software package for beat
tracking, tempo estimation and onset detection. To evaluate the method, the
onset times were obtained using the BeatRoot-0.5.6 default parameters with the
following command:
java -jar beatroot-0.5.6.jar -o onsets.txt -O input.wav
10 http://grfia.dlsi.ua.es/cm/worklines/pertusa/onset/generated_sounds_ODB
11 http://www.elec.qmul.ac.uk/people/simond/beatroot/index.html
12 http://aubio.org/
System                  OK    FP   FN   M   D   Pr %   Re %   F-m %
Proposed method (o[t])  1873  406  282  3   3   82.19  86.91  84.48
                        1828  608  327  79  80  75.04  84.83  79.63
                        1526  778  629  21  21  66.23  70.81  68.45

Table 5.2: Comparison with other methods using the ODB database and o[t].
In the method from Dixon (2006), different onset detection functions based
on spectral flux, phase deviation, and complex domain13 can be selected.
The onset detection function values are normalized and a simple peak picking
algorithm is used to get the onset times.
Aubio is the implementation of the algorithm proposed by Brossier (2005),
submitted to the MIREX 2005 contest and previously described in Sec. 4.4.
Like in BeatRoot, the default parameters were used for the evaluation of this
method:
5.2.2
Reference   OK   FP  FN  M  D  Pr %   Re %    F-m %
RWC-C02     47   35  60  0  0  57.32  43.93   49.74
RWC-C03     25   38  31  0  0  39.68  44.64   42.02
RWC-C36     45   5   0   0  0  90.00  100.00  94.74
RWC-C38     101  45  93  5  5  69.18  52.06   59.41
Realorgan3  11   13  4   0  0  45.83  73.33   56.41
5.3.1
Besides the proposed method, three algorithms were evaluated in the MIREX
(2009) onset detection contest. These algorithms are briefly described next.
The onset detection function from Tzanetakis (2009) is based on the half-wave rectified spectral flux. It uses a peak picking algorithm to find local maxima in consecutive frames and a threshold relative to the local mean. To reduce the
14 http://alg.ncsa.uiuc.edu/do/tools/d2k
15 Music to knowledge, http://www.music-ir.org/evaluation/m2k/
16 International Music Information Retrieval Systems Evaluation Laboratory.
Participant                OK    FP    FN    M    D     Pr %   Re %   F-m %
Röbel (2009) 10 hd         7015  1231  2340  161  133   85.00  79.19  79.60
Röbel (2009) 7 hd          7560  2736  1795  188  257   81.32  83.30  79.00
Röbel (2009) 19 hdc        7339  2367  2016  185  212   80.56  81.88  78.31
Pertusa and Iñesta (2009)  6861  2232  2494  196  10    79.99  77.50  76.79
Röbel (2009) 16 nhd        6426  846   2929  148  183   86.39  73.62  76.48
Röbel (2009) 12 nhd        6440  901   2915  145  198   85.96  73.15  76.10
Tan et al. (2009) 1        6882  2224  2473  157  308   75.67  76.97  74.43
Tan et al. (2009) 2        6588  1976  2767  152  266   78.28  74.58  73.38
Tan et al. (2009) 3        5961  1703  3394  146  285   79.61  68.97  68.63
Tan et al. (2009) 5        7816  5502  1539  84   1540  62.88  83.69  68.23
Tan et al. (2009) 4        5953  1843  3402  135  345   78.98  68.91  67.94
Tzanetakis (2009)          5053  2836  4302  162  46    67.01  59.91  59.54

Table 5.4: Overall MIREX 2009 onset detection results ordered by F-measure. The precision, recall and F-measure are averaged. The highest F-measure was obtained using θ = 0.25.
If the musical excerpt is classified as unpitched, the onset detection is based only on energy processing. If it is set as pitched (percussive or non-percussive), both energy processing and pitch processing are combined. The
energy-based processing computes the spectral differences from two consecutive
frames and applies an adaptive threshold. The pitch-based processing computes
the chromagram and looks for changes in the strongest base pitch class and
dominant harmonics pitch class pair. Adjacent time frames with the same pitch
content are grouped into the same cluster, and the clusters indicate regions
belonging to the same note. In the percussive pitched class, energy changes
have higher weights than pitch changes, whereas in the non-percussive category
the pitch changes are considered to be more relevant.
5.3.2
The method was evaluated in the MIREX (2009) onset detection contest using different values of θ ∈ [0.1, 0.3]. The overall results using the best parameters for each algorithm are shown in Tab. 5.4. The proposed method yielded a good average F-measure with a very low computational cost (see Tab. 5.5).
Tab. 5.6 shows the values of θ that yielded the best results for each sound category, compared with the highest overall average F-measure among the evaluated methods. The proposed approach achieved the highest average F-measure for the brass, drums and plucked strings categories, characterized by hard onsets.
Complex sounds are mixtures of unpitched and pitched sounds, including
singing voice. In this category, the method achieved a lower F-measure than
using poly-pitched and drum sounds probably due to the presence of singing
voice. In general, this algorithm is not suitable for singing voice, which is
usually not perfectly tuned and tends to have partials shifting in frequency
across different semitones, causing many false positives.
Participant                Runtime (hh:mm)
Pertusa and Iñesta (2009)  00:01
Tzanetakis (2009)          00:01
Röbel (2009) 12 nhd        00:02
Röbel (2009) 16 nhd        00:02
Röbel (2009) 7 hd          00:03
Röbel (2009) 19 hdc        00:03
Röbel (2009) 10 hd         00:04
Tan et al. (2009) 1        01:57
Tan et al. (2009) 2        01:57
Tan et al. (2009) 3        01:57
Tan et al. (2009) 4        01:57
Tan et al. (2009) 5        01:57

Table 5.5: MIREX 2009 onset detection runtimes.
Class                   Files  θ     Pr %   Re %   F-m %  Best F-m %
Complex                 15     0.19  68.97  70.25  68.51  74.82
Poly-pitched            10     0.20  93.58  88.27  90.36  91.56
Solo bars & bells       4      0.26  79.69  80.12  77.34  99.42
Solo brass              2      0.24  77.69  83.68  79.88  79.88
Solo drum               30     0.21  94.00  88.93  90.68  90.68
Solo plucked strings    9      0.28  91.25  89.56  90.11  90.11
Solo singing voice      5      0.30  15.17  46.12  22.63  51.17
Solo sustained strings  6      0.24  58.06  60.92  55.47  64.01
Solo winds              4      0.28  54.97  70.17  60.15  75.33

Table 5.6: Detailed MIREX 2009 onset detection results for the proposed method with the best θ for each class. The precision, recall and F-measure are averaged. The best F-measure among the evaluated methods is also shown.
Participant                Params  Pr %   Re %   F-m %
Röbel (2009) 19 hdc        0.34    92.07  92.02  91.56
Röbel (2009) 7 hd          0.34    91.70  92.31  91.54
Pertusa and Iñesta (2009)  0.20    93.58  88.27  90.36
Röbel (2009) 19 hd         0.52    93.02  87.79  89.20
Röbel (2009) 16 nhd        0.43    98.51  80.21  87.26
Röbel (2009) 12 nhd        0.49    96.12  80.97  87.11
Tan et al. (2009) 1        N/A     85.94  83.11  83.12
Tan et al. (2009) 2        N/A     85.94  83.11  83.12
Tan et al. (2009) 3        N/A     89.61  70.73  74.17
Tan et al. (2009) 4        N/A     89.23  70.32  73.77
Tan et al. (2009) 5        N/A     61.33  90.01  68.66
Tzanetakis (2009)          N/A     71.91  66.43  67.41

Table 5.7: MIREX 2009 onset detection results for poly-pitched sounds, ordered by F-measure.
Figure 5.7: MIREX 2009 onset detection F-measure with respect to the threshold θ for the different sound classes (complex, poly-pitched, solo bars and bells, solo brass, solo drum, solo plucked strings, solo singing voice, solo sustained strings, solo winds, and total) using the proposed method.
The proposed methodology is primarily intended for detecting pitch changes,
therefore the poly-pitched results (see Tab. 5.7) are of special interest. For
this class of sounds, the F-measure was close to the best. These results are
satisfactory, given that the Röbel (2009) and Tan et al. (2009) approaches are also oriented to the onset detection of pitched sounds.
As expected, the method performs slightly worse with sounds with non-percussive attacks, like sustained strings and winds. For instance, for sax sounds two onsets are usually detected by the system: one when the transient (breathing) begins, and another when the pitch is reached. Portamentos19 are also a problem for the proposed method, and they usually occur in these kinds of sounds, and also in singing voice. A strong portamento causes a new onset to be detected each time a semitone is reached, as would happen with a glissando. This is not a drawback for multiple pitch estimation systems, but it may yield some false positive onsets. Therefore, for detecting the onsets of these sounds, it is probably more adequate to identify the transients rather than the pitch changes.
19 A portamento is a continuous and smooth frequency slide between two pitches. A glissando
is a portamento which moves in discrete steps corresponding to pitches. For instance, a
glissando can be played with a piano, but this instrument is unable to play a portamento.
Violins can produce portamentos, although they can also generate glissandos.
Bars and bells have percussive onsets and they are typically pitched,
although most of these sounds are inharmonic. Therefore, their energy may
not be concentrated in the central frequencies of the one semitone bands. In
the proposed method, when this happens and the harmonics slightly oscillate
in frequency, they can easily reach adjacent bands, causing some false positives.
Anyway, it is difficult to derive conclusions for this class of sounds, as only 4
files were used for the evaluation and the MIREX data sets are not publicly
available.
Interestingly, the proposed approach also yielded good results with unpitched
sounds, and it obtained the highest F-measure in solo-drum excerpts among all
the evaluated methods.
The best threshold value for poly-pitched and complex sounds was around θ = 0.20, which coincides with the threshold experimentally obtained with the ODB database. Using this threshold, the overall F-measure is only 1% lower (see Fig. 5.7) than with the best threshold θ = 0.25 for the whole MIREX data set, therefore the differences are not significant.
5.4 Conclusions
An efficient novel approach for onset detection has been described in this
chapter. In the preprocessing stage, the spectrogram is computed and
apportioned through a one semitone filter bank. The onset detection function
is the normalized sum of temporal derivatives for each band, and those peaks
in the detection function over a constant threshold are identified as onsets.
A simple variation, õ[t], has been proposed, considering adjacent frames in order to improve the accuracy for non-percussive pitched onsets. In most situations, õ[t] yields lower results than o[t], therefore it is only suitable for a few specific sounds.
The method has been evaluated and compared with other works in the
MIREX (2009) audio onset detection contest. Although the system is mainly
designed for tuned pitched sounds, the results are competitive for most timbral
categories, except for speech or inharmonic pitched sounds.
As the abrupt harmonic variations produced at the beginning of the notes are
emphasized and those produced in the sustain stage are minimized, the system
performs reasonably well against smooth vibratos lower than one semitone.
Therefore, o[t] is suitable for percussive harmonic onsets, but it is also robust
to frequency variations in the sustained sounds.
When a portamento occurs, the system usually detects a new onset when
the f0 increases or decreases more than one semitone, resulting in some false
positives. However, this is not a drawback if the method is used for multiple
pitch estimation.
6.1 Preprocessing
Supervised learning techniques require a set of input features aligned with
the desired outputs. In the proposed approach, a frame-by-frame analysis is performed, building input-output pairs at each frame. The input data are spectral features, whereas the outputs consist of the ground-truth pitches. The details of the input and output data and their construction are described in this section.
6.1.1 Input data
The training data set consists of musical audio files at fs = 22,050 Hz synthesized from MIDI sequences. The STFT of each musical piece is computed, providing the magnitude spectrogram using a 93 ms Hanning window with a 46.4 ms hop size. With these parameters, the time resolution for the spectral analysis is Δt = 46.4 ms, and the highest possible frequency is fs/2 = 11,025 Hz, which is high enough to cover the range of useful pitches. Like in the onset detection method, zero padding has been used to build the lower filters.

In the same way as described in Chapter 5 for onset detection, the spectral values at each frame are apportioned into B = 94 bands using a one semitone filter bank ranging from 50 Hz (G♯0) to fs/2, almost eight octaves, yielding a vector of filtered values b[t] at each frame.
These values are converted into decibels and set as attenuations from the maximum amplitude, which is 96 dB4 with respect to the quantization noise. In order to remove noise and low intensity components at each frame, a threshold is applied for each band in such a way that any value below it is clipped to the threshold. This threshold was empirically established at −45 dB, so the input data lie within the range b[t] ∈ [−45, 0]^B.

Information about adjacent frames is also considered to feed the classifiers. For each frame at time t, the input is a set of spectral features {b[t + j]} for j ∈ [−m, +n], with m and n being the number of spectral frames considered before and after the frame t, respectively.
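The construction of these input features can be sketched as follows; the band-energy matrix is assumed to come from the one semitone filter bank described above, and the random toy data are only for the example:

    import numpy as np

    # Sketch of the input-feature construction: band values converted to dB
    # attenuations from the maximum, clipped at -45 dB, and stacked with m previous
    # and n following frames.

    def build_inputs(band_energies, m=1, n=1, floor_db=-45.0):
        b = 20.0 * np.log10(band_energies / band_energies.max() + 1e-12)
        b = np.maximum(b, floor_db)                 # clip low-level components
        frames, bands = b.shape
        features = []
        for t in range(m, frames - n):
            context = b[t - m:t + n + 1]            # frames t-m .. t+n
            features.append(context.reshape(-1))    # B*(m+n+1) values per frame
        return np.array(features)

    rng = np.random.default_rng(0)
    energies = rng.random((10, 94)) + 1e-6          # toy spectrogram, 94 bands
    print(build_inputs(energies).shape)             # (8, 282)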
Output data

For each MIDI file, a binary digital piano-roll (BDP) is obtained to get the active pitches (desired output) at each frame. A BDP is a matrix where each row corresponds to a frame and each column corresponds to a MIDI pitch (see Fig. 6.1). Therefore, at each frame t, the n + m + 1 input vectors b[t + j] for j ∈ [−m, +n] and a vector of pitches ν[t] ∈ {0, 1}^B are shown to the supervised method during the training stage.
Figure 6.1: Binary digital piano-roll (pitch range from G♯0 to F8) coding in each row the active pitches at each time frame when the spectrogram is computed.
6.2.1 Time-delay neural networks

The input data b_i[t] ∈ [−45, 0] are mapped into the range [−1, +1] (Eq. 6.1) for
the network input. Each of these values, which corresponds to one spectral component, is provided to a neuron at the input layer. The adjacent frames provide the short-context information. For each frame considered, B new input units are added to the network, so the total number of input neurons is B(n + m + 1).
The network output layer is composed of B = 94 neurons, one for each
possible pitch. The output is coded in such a way that an activation value of
yk [t] = 1 for a particular unit k means that the k-th pitch is active at that
frame, whereas yk [t] = 0 means that the pitch is not active.
The TDNN has been implemented with bias5 and without momentum6 . The
selected transfer function f (x) is a standard sigmoid (see Fig. 6.3):
f(x) = \frac{2}{1 + e^{-x}} - 1    (6.2)
5 A bias neuron lies in one layer, is connected to all the neurons in the next layer but none
in the previous layer, and it always emits 1.
6 Momentum, based on the notion from physics that moving objects tend to keep moving
unless acted upon by outside forces, allows the network to learn more quickly when there exist
plateaus in the error surface (Duda et al., 2000).
Figure 6.2: TDNN architecture and data supplied during training. The arrows represent full connection between layers.
Figure 6.3: The sigmoid transfer function f(x).
After performing the transfer function, the output values of the neurons are within the range y_k[t] ∈ [−1, +1]. A pitch is detected when y_k[t] exceeds a given activation threshold, which therefore controls the sensitivity of the network (the lower the threshold, the more likely a pitch is activated).
6.2.2
Nearest neighbors
In the kNN method, the vectors $\boldsymbol{\nu}[t]$ are the prototype labels. As previously
discussed in Sec. 2.4.2, in the recognition stage the standard kNN algorithm
cannot generalize to find new prototypes not seen in the training stage. A
simple extension of the kNN method has been proposed to mitigate this effect.
In the recognition stage the k nearest neighbors are identified at the target
frame t, and an activation function Ap [t] is obtained for each pitch p:
$$A_p[t] = \sum_{i=1}^{k} \nu_p^{(i)}[t] \qquad (6.3)$$

where $\nu_p^{(i)}[t]$ is the p-th component of the label vector of the i-th nearest
neighbor. A second activation function weights each neighbor according to its
distance $d_i[t]$ to the target frame:

$$A'_p[t] = \sum_{i=1}^{k} \nu_p^{(i)}[t]\, \frac{1}{d_i[t] + 1} \qquad (6.4)$$
A third activation function has been proposed, taking into account the
normalized distances:
$$A''_p[t] = \sum_{i=1}^{k} \nu_p^{(i)}[t]\, \frac{1}{k-1} \left( 1 - \frac{d_i[t]}{\sum_j d_j[t]} \right) \qquad (6.5)$$
In all these cases, if the activation function obtains a value greater than the
activation threshold, then the pitch p is added to the prototype yielded at the target frame t.
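A minimal C++ sketch of the three activation functions, as reconstructed in Eqs. 6.3-6.5, follows (illustrative only; the data layout of the neighbor labels and distances is an assumption):

    #include <cstdio>
    #include <vector>

    // labels[i] is the p-th component of the label vector of the i-th nearest
    // neighbor (0 or 1) and d[i] its distance to the target frame.
    double A(const std::vector<int>& labels)
    {
        double a = 0.0;
        for (int nu : labels) a += nu;                           // Eq. 6.3
        return a;
    }

    double Aprime(const std::vector<int>& labels, const std::vector<double>& d)
    {
        double a = 0.0;
        for (std::size_t i = 0; i < labels.size(); ++i)
            a += labels[i] / (d[i] + 1.0);                       // Eq. 6.4: closer neighbors weigh more
        return a;
    }

    double Asecond(const std::vector<int>& labels, const std::vector<double>& d)
    {
        double sum = 0.0;
        for (double di : d) sum += di;
        const double k = static_cast<double>(labels.size());
        double a = 0.0;
        for (std::size_t i = 0; i < labels.size(); ++i)
            a += labels[i] * (1.0 / (k - 1.0)) * (1.0 - d[i] / sum);  // Eq. 6.5: normalized distances
        return a;
    }

    int main() {
        std::vector<int> nu = {1, 1, 0, 1};
        std::vector<double> d = {0.2, 0.5, 0.9, 1.4};
        std::printf("%.3f %.3f %.3f\n", A(nu), Aprime(nu, d), Asecond(nu, d));
    }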
6.3 Evaluation
A data set of MIDI sequences was used for the evaluation of the proposed
methods, obtaining input/output pairs from the MIDI files and the synthesized
audio. Then, 4-fold cross-validation experiments were performed, making four
subexperiments by dividing the data set into four parts (3/4 for training and 1/4
for test). The presented results were obtained by averaging the subexperiments
carried out on each data subset. The accuracy of the method is evaluated at
the frame (event) and note levels.
The frame (or event) level accuracy is the standard measure for multiple
pitch estimation described in Eq. 3.6. A novel, relaxed metric has been proposed
to evaluate the system at the note level. Notes are defined as series of consecutive
event detections along time. A false positive note is detected when an isolated
series of consecutive false positive events is found. A false negative note is
defined as a sequence of isolated false negative events, and any other sequence
of consecutive event detections is considered a successfully detected note.
Eq. 3.6 is also used for the note level accuracy, considering false positive, false
negative and correctly detected notes.
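Assuming Eq. 3.6 is the standard accuracy TP/(TP + FP + FN), one possible reading of this note-level metric for a single pitch is sketched below in C++ (illustrative only): runs of frames where either the detection or the ground truth is active are classified as correctly detected, false positive or false negative notes.

    #include <cstdio>
    #include <vector>

    struct NoteCounts { int tp = 0, fp = 0, fn = 0; };

    // Classify each run of consecutive active frames (ground truth or detection)
    // as a correctly detected note, a false positive note (only spurious events)
    // or a false negative note (only missed events).
    NoteCounts countNotes(const std::vector<bool>& gt, const std::vector<bool>& det)
    {
        NoteCounts c;
        std::size_t t = 0, T = gt.size();
        while (t < T) {
            if (!gt[t] && !det[t]) { ++t; continue; }
            bool anyHit = false, anyFp = false;
            while (t < T && (gt[t] || det[t])) {
                if (gt[t] && det[t]) anyHit = true;
                else if (det[t])     anyFp = true;
                ++t;
            }
            if (anyHit)     ++c.tp;   // series containing correctly detected events
            else if (anyFp) ++c.fp;   // isolated series of false positive events
            else            ++c.fn;   // isolated series of false negative events
        }
        return c;
    }

    int main() {
        std::vector<bool> gt  = {0,1,1,0,0,1,1,0,0};
        std::vector<bool> det = {0,1,0,0,0,0,0,0,1};
        NoteCounts c = countNotes(gt, det);
        std::printf("%d %d %d\n", c.tp, c.fp, c.fn);  // 1 1 1; accuracy = tp/(tp+fp+fn)
    }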
6.3.1 Data set
The evaluation was done using musical pieces generated with synthetic instruments with near-constant temporal envelopes. The limitations of acoustic
acquisition from real instruments played by musicians, and the need for an exact
timing of the ground-truth pitches, conditioned the decision to construct
these sounds using virtual synthesis models.
Polyphonic tracks of MIDI files (around 25 minutes of music) were improvised by the author and synthesized using different waveshapes, attempting to
have a variety of styles and pitch combinations in the training set. In total,
2,377 different chords were present in the data set, with an average polyphony
of 3 simultaneous sounds. The selected timbres are described next.
Sinusoidal waveshape
This is the simplest periodic wave. Almost all the spectral energy is concentrated
in the f0 component.
Sawtooth waveshape
This sound contains all the harmonics, with amplitudes proportional to 1/h,
where h is the harmonic number. Only the first H = 10 harmonics were used
to generate this sound.
Clarinet waveshape
The clarinet sound is generated using a physical model of a clarinet with the
wgclar Csound opcode, which produces a good imitative synthesis.
Some parameters are free in any neural network. Those of special interest
in the proposed method are the number of input frames (n + m + 1) and
the activation threshold. The computational complexity depends on the
number of input frames and, if this value is high, some spectral
frames may merge different pitch combinations, which can hinder the training
and recognition processes. The activation threshold controls the sensitivity
of the network.
These are the most relevant parameters, whereas others concerning the
training, like weight initialization, number of hidden neurons, etc., have proven
to be less important. Different experiments have been carried out varying these
parameters, and the results did not change significantly.
The detailed parametrization results are extensively described in (Pertusa,
2003). After some initial tests, a number of hidden neurons of 100 proved
to be adequate.
Timbre      Events        Notes
sine        0.94 ± 0.02   0.95 ± 0.02
sawtooth    0.92 ± 0.02   0.92 ± 0.02
clarinet    0.92 ± 0.02   0.92 ± 0.02
Hammond     0.91 ± 0.02   0.92 ± 0.02

Table 6.1: TDNN pitch detection accuracy at the event and note levels for each timbre.
6.4.2
Recognition results
Figure 6.8: TDNN pitch detection accuracy as a function of pitch (C1 to C8).
The results obtained for the clarinet and the Hammond suggest that the methodology can be applied to
other instruments characterized by a nearly stable amplitude envelope.
The errors have been analyzed considering note length, pitch, and number
of training samples. Errors produced by notes shorter than 100 ms represent
31% of the total amount of errors. With a time resolution $\Delta t = 46$ ms,
these notes extend along only one or two frames. Since most of the false negatives
occur at the beginning and end of the notes, these very short notes, which are
not usual in real music, are sometimes missed.
As shown in Fig. 6.8, most pitch errors correspond to very high (higher
than C7) and very low (lower than C3) pitches, which are very infrequent in real
music, whereas the method has a very high success rate in the central range of
pitches. This effect is partially related to the amount of musical pitches in
the training set, which is composed of musical data. The most frequent musical
pitches are those at the central frequencies. There exists a clear correlation between the
recognition success for a given pitch and the amount of events in the training set
for that pitch. In Fig. 6.9, each dot represents a single pitch; the abscissa represents
the amount of data for that pitch in the training set, whereas the ordinate represents
the recognition accuracy. An exponential curve has been fitted to the data,
showing the clear nonlinear correlation between the amount of training data
and the performance.
Another reason that explains these errors is that the lowest pitches are harder to
detect due to the higher frequency precision required, and the highest pitches have
fewer harmonics below the Nyquist frequency. Moreover, the harmonics of the highest pitches can
Figure 6.9: TDNN correlation between recognition rates for each pitch and the
amount of events in the training set for that pitch.
also produce aliasing when they are synthesized. In any case, most of the wrong
estimates correspond to very unusual notes that were artificially introduced in
the data set to spread out the pitch range, and which are not common in real
music.
A graphical example of the detection is shown in Fig. 6.10. This musical
excerpt was neither in the training set nor in the recognition set. It was
synthesized using the clarinet timbre11 and with a fast tempo (120 bpm). In
this example, the event detection accuracy was 0.94, and most of the errors were
produced at the note onsets or offsets. Only 3 very short false positive notes
were detected.
6.4.3 Cross-detection results
The results for the evaluated waveshapes were similar, showing that the
performance does not critically depend on the selected timbre, at least for
instruments with a fixed spectral profile. To assess how specific the network
weights are for the different timbres considered, musical pieces generated with
a given timbre were presented to a network trained with a different instrument.
The event and note detection results are displayed in tables 6.2 and 6.3,
respectively.
Figure 6.10: Temporal evolution of the note detection for a given melody using the clarinet timbre. Top: the original score;
center: the melody as displayed in a sequencer piano-roll; bottom: the piano-roll obtained from the network output compared
with the original piano-roll. Notation: o: successfully detected events, +: false positives, and -: false negatives.
Events      sine           sawtooth       clarinet       Hammond
sine        0.94 ± 0.02    0.48 ± 0.02    0.46 ± 0.03    0.083 ± 0.014
sawtooth    0.57 ± 0.03    0.92 ± 0.02    0.67 ± 0.03    0.169 ± 0.009
clarinet    0.70 ± 0.04    0.69 ± 0.02    0.92 ± 0.02    0.144 ± 0.007
Hammond     0.30 ± 0.02    0.26 ± 0.03    0.34 ± 0.02    0.91 ± 0.02

Table 6.2: Frame level cross-detection results using the TDNN. Rows correspond
to training timbres and columns to test timbres.
Notes       sine           sawtooth       clarinet        Hammond
sine        0.95 ± 0.02    0.51 ± 0.03    0.57 ± 0.04     0.089 ± 0.014
sawtooth    0.46 ± 0.02    0.92 ± 0.02    0.56 ± 0.02     0.164 ± 0.009
clarinet    0.61 ± 0.06    0.65 ± 0.01    0.92 ± 0.02     0.140 ± 0.002
Hammond     0.27 ± 0.02    0.29 ± 0.02    0.337 ± 0.008   0.92 ± 0.02

Table 6.3: Note level cross-detection results using the TDNN. Rows correspond to
training timbres and columns to test timbres.
The accuracy ranges from 0.089 to 0.65 for the pitch recognition of sounds
different from those used to train the TDNN, showing the specialization of the
network. The cross-detection value could be an indication of the similarity
between two timbres, but this assumption needs further in-depth study.
Figure 6.11: Event detection accuracy using $A_p$ for the sinusoidal timbre with
respect to k and the activation threshold.
Small values of k have provided the best results for event and note detection (see Tabs. 6.4 and
6.5). When k becomes large, the accuracy decreases, and good values for k
are relatively small (from 20 to 50). The behavior is similar for all the tested
timbres. In most cases, $A_p$ obtains a significantly higher accuracy than when
using only one nearest neighbor.
No significant differences were found comparing the best results for $A'_p$ and
$A_p$. However, when using $A'_p$, the number of neighbors does not affect the
results much (see Fig. 6.12). The highest accuracy was obtained with thresholds of k/200 and k/300.
The best results for most timbres were obtained using $A''_p$ with k = 20.
Tabs. 6.4 and 6.5 show the success rates for events and notes using k = 20,
which is the best k value among those tested for most timbres (except for the
sinusoidal waveshape, where k = 50 yielded a slightly higher accuracy). It can
be seen that $A''_p$ obtains the highest accuracy for most timbres. In any case, the
best results are significantly worse than those obtained using the TDNN.
6.6 Conclusions
In this chapter, different supervised learning approaches for multiple pitch
estimation have been presented. The input/output pairs have been generated
by sequencing a set of MIDI files and synthesizing them using CSound. The
magnitude STFT apportioned through one semitone filter-bank is used as input
Figure 6.12: Event detection accuracy using $A'_p$ for the sinusoidal timbre with
respect to k and the activation threshold.
Figure 6.13: Event detection accuracy using $A''_p$ for the sinusoidal timbre with
respect to k and the activation threshold.
Events              sinusoidal      sawtooth        clarinet        Hammond
1-NN                0.67 ± 0.02     0.62 ± 0.02     0.50 ± 0.02     0.604 ± 0.011
Ap, thr. k/6        0.74 ± 0.02     0.644 ± 0.011   0.49 ± 0.02     0.637 ± 0.014
Ap, thr. k/5        0.74 ± 0.02     0.653 ± 0.010   0.48 ± 0.02     0.638 ± 0.011
Ap, thr. k/4        0.73 ± 0.02     0.654 ± 0.007   0.50 ± 0.02     0.643 ± 0.010
Ap, thr. k/3        0.722 ± 0.014   0.653 ± 0.007   0.49 ± 0.02     0.641 ± 0.010
Ap, thr. k/2        0.668 ± 0.012   0.633 ± 0.011   0.49 ± 0.02     0.612 ± 0.014
Ap, thr. 2k/3       0.62 ± 0.02     0.61 ± 0.02     0.48 ± 0.02     0.58 ± 0.02
A'p, thr. k/1000    0.61 ± 0.03     0.54 ± 0.03     0.457 ± 0.014   0.55 ± 0.02
A'p, thr. k/300     0.70 ± 0.02     0.616 ± 0.014   0.49 ± 0.02     0.62 ± 0.02
A'p, thr. k/200     0.72 ± 0.02     0.637 ± 0.006   0.49 ± 0.02     0.632 ± 0.010
A'p, thr. k/100     0.67 ± 0.02     0.633 ± 0.009   0.47 ± 0.03     0.60 ± 0.02
A'p, thr. k/10      0.22 ± 0.03     0.19 ± 0.03     0.17 ± 0.03     0.16 ± 0.03
A'p, thr. k         0.21 ± 0.03     0.09 ± 0.03     0.04 ± 0.02     0.08 ± 0.02
A''p, thr. 0.1      0.72 ± 0.03     0.62 ± 0.02     0.48 ± 0.02     0.62 ± 0.02
A''p, thr. 0.2      0.74 ± 0.02     0.651 ± 0.011   0.49 ± 0.02     0.644 ± 0.013
A''p, thr. 0.3      0.727 ± 0.013   0.655 ± 0.007   0.50 ± 0.02     0.643 ± 0.010
A''p, thr. 0.4      0.712 ± 0.013   0.654 ± 0.007   0.49 ± 0.02     0.638 ± 0.007
A''p, thr. 0.5      0.681 ± 0.012   0.637 ± 0.011   0.49 ± 0.02     0.623 ± 0.011
A''p, thr. 0.6      0.648 ± 0.012   0.623 ± 0.013   0.49 ± 0.02     0.60 ± 0.02
A''p, thr. 0.7      0.62 ± 0.03     0.60 ± 0.02     0.48 ± 0.02     0.58 ± 0.02
A''p, thr. 0.8      0.58 ± 0.02     0.58 ± 0.02     0.47 ± 0.03     0.54 ± 0.03
A''p, thr. 0.9      0.51 ± 0.03     0.53 ± 0.02     0.44 ± 0.03     0.49 ± 0.03

Table 6.4: Event detection accuracy for the kNN approach (k = 20) using the
different activation functions and thresholds.
Notes               sinusoidal      sawtooth        clarinet       Hammond
1-NN                0.65 ± 0.02     0.51 ± 0.02     0.46 ± 0.04    0.530 ± 0.013
Ap, thr. k/6        0.71 ± 0.02     0.55 ± 0.03     0.44 ± 0.04    0.55 ± 0.02
Ap, thr. k/5        0.722 ± 0.009   0.57 ± 0.03     0.45 ± 0.04    0.57 ± 0.02
Ap, thr. k/4        0.728 ± 0.008   0.59 ± 0.03     0.47 ± 0.04    0.59 ± 0.02
Ap, thr. k/3        0.728 ± 0.009   0.60 ± 0.03     0.48 ± 0.04    0.60 ± 0.02
Ap, thr. k/2        0.705 ± 0.008   0.614 ± 0.013   0.49 ± 0.04    0.617 ± 0.010
Ap, thr. 2k/3       0.675 ± 0.008   0.612 ± 0.010   0.49 ± 0.05    0.613 ± 0.007
A'p, thr. k/1000    0.38 ± 0.03     0.29 ± 0.02     0.41 ± 0.03    0.33 ± 0.03
A'p, thr. k/300     0.61 ± 0.03     0.46 ± 0.03     0.42 ± 0.03    0.50 ± 0.03
A'p, thr. k/200     0.688 ± 0.008   0.52 ± 0.03     0.45 ± 0.03    0.57 ± 0.03
A'p, thr. k/100     0.70 ± 0.02     0.596 ± 0.012   0.45 ± 0.03    0.605 ± 0.009
A'p, thr. k/10      0.35 ± 0.09     0.30 ± 0.08     0.25 ± 0.08    0.26 ± 0.05
A'p, thr. k         0.33 ± 0.09     0.18 ± 0.06     0.09 ± 0.04    0.15 ± 0.05
A''p, thr. 0.1      0.62 ± 0.03     0.46 ± 0.03     0.39 ± 0.05    0.47 ± 0.03
A''p, thr. 0.2      0.720 ± 0.010   0.56 ± 0.03     0.45 ± 0.04    0.56 ± 0.02
A''p, thr. 0.3      0.731 ± 0.008   0.59 ± 0.03     0.48 ± 0.04    0.59 ± 0.02
A''p, thr. 0.4      0.733 ± 0.008   0.61 ± 0.02     0.48 ± 0.04    0.62 ± 0.02
A''p, thr. 0.5      0.717 ± 0.007   0.611 ± 0.013   0.49 ± 0.04    0.622 ± 0.013
A''p, thr. 0.6      0.697 ± 0.007   0.618 ± 0.010   0.49 ± 0.05    0.620 ± 0.008
A''p, thr. 0.7      0.671 ± 0.011   0.614 ± 0.012   0.48 ± 0.05    0.617 ± 0.008
A''p, thr. 0.8      0.65 ± 0.02     0.600 ± 0.012   0.47 ± 0.05    0.605 ± 0.011
A''p, thr. 0.9      0.60 ± 0.03     0.57 ± 0.02     0.45 ± 0.06    0.57 ± 0.03

Table 6.5: Note detection accuracy for the kNN approach (k = 20) using the
different activation functions and thresholds.
data, whereas the outputs are the ground-truth MIDI pitches. Two different
supervised learning methods (TDNN and kNN) have been used and compared
for this task, using simple stationary sounds and taking into account adjacent
spectral frames.
The TDNN performed far better than the kNN, probably due to the huge
space of possible pitch combinations. The results suggest that the neural
network can learn a pattern for a given timbre and find it in complex
mixtures, even in the presence of beating or harmonic overlap. The success
rate was similar on average for the different timbres tested, independently of the
complexity of the pattern, which is one of the points in favour of this method.
The performance using the nearest neighbors is clearly worse than with the TDNN
approach. Different alternatives were proposed to generalize in some way the
prototypes matched using the kNN technique, to obtain new classes (pitch
combinations) not seen in the training stage. However, these modifications
did not improve the accuracy significantly. An interesting conclusion from this
comparison is that kNN techniques are not a good choice for classification when
there exist many different prototype labels, as in this particular task.
With respect to the TDNN method, errors are concentrated at very low/high
frequencies, probably due to the sparse presence of these pitches in the training
set. This fact suggests that the accuracy could be improved by increasing the size
and variety of the training set. In the temporal dimension, most of the errors are
produced at the note boundaries, which are not very relevant from a perceptual
point of view. This is probably caused by the window length, which can cover
transitions between different pitch combinations. When the test waveshape
was different from that used to train the net, the recognition rate decreased
significantly, showing the high specialization of the network.
The main conclusions are that a TDNN approach can accurately estimate the
pitches of simple waveforms, and that the compact input using a one semitone filter
bank is representative of the spectral information for harmonic pitch estimation.
Future work includes testing the feasibility of this approach for real mixtures
of sounds with varying temporal envelopes, but this requires a large labeled
data set for training, and it is difficult to get musical audio pieces perfectly
synchronized with the ground-truth pitches. However, this is a promising
method that should be further investigated with real data.
It also seems reasonable to provide the algorithm with a first timbre
recognition stage, at least at the instrument family level. This way, different weight
sets could be loaded in the net according to the decision taken by the timbre
recognition algorithm before starting the pitch estimation.
The methods described in this chapter are implemented in C++, and they
can be compiled and executed from the command line in Linux and Mac
OSX. Two standard C++ libraries have been used, for loading the audio files
(libsndfile2 ), and computing the Fourier transforms (FFTW3 from Frigo and
Johnson (2005)). The rest of the code, including the generation of MIDI files, has
been implemented by the author.
7.1.1
Preprocessing
Figure 7.1: Scheme of the iterative cancellation method: waveform, STFT, onset
detection, candidate selection, iterative cancellation, postprocessing, and MIDI pitches.
The sinusoidal likeness measure (SLM) is obtained by cross-correlating the
spectrum $X(\omega)$ with the Fourier transform $H(\omega)$ of the analysis window within a
bandwidth W around each frequency of interest $\omega$:

$$\Gamma(\omega) = \sum_{k,\,|\omega_k - \omega| < W} H(\omega_k - \omega)\, X(\omega_k) \qquad (7.1)$$
Figure 7.2: Example magnitude spectrum (top) and SLM (bottom) for two
sounds in an octave relation (92.5 and 370 Hz) using W = 50 Hz. The
fundamental frequencies are indicated with arrows.
The cross-correlation is normalized with the energies of the window transform
and of the spectrum within the bandwidth W:

$$\|H\|^2 = \sum_{k,\,|\omega_k - \omega| < W} |H(\omega_k - \omega)|^2 \qquad (7.2)$$

$$\|X\|^2 = \sum_{k,\,|\omega_k - \omega| < W} |X(\omega_k)|^2 \qquad (7.3)$$

yielding the sinusoidal likeness of a spectral component at frequency $\omega$:

$$v(\omega) = \frac{|\Gamma(\omega)|}{\|H\|\,\|X\|} \qquad (7.4)$$

If this value exceeds a given threshold,
then the original spectral component at the same frequency, with its original
amplitude, is added to the harmonics list. The spectral components that
do not satisfy the previous condition are discarded. Therefore, the mid-level
representation of the proposed method consists of a sparse vector containing
only certain values of the original spectrum (those ideally corresponding to
partials). This sparse representation reduces the computational cost with
respect to the analysis of all spectral peaks.
7.1.2
Onset detection
The onsets are detected from the STFT using the method described in
Chapter 5, identifying each frame at ti as onset or not-onset. For efficiency,
only a single frame between two consecutive onsets is analyzed to yield the
fundamental frequencies within that inter-onset interval.
To avoid the analysis of transients at the onset times, the frame chosen to
detect the active notes is $t_o + 1$, where $t_o$ is the frame where an onset was detected.
Therefore, only those frames that are 46 ms after a detected onset are analyzed
to estimate the fundamental frequencies in the interval between two adjacent
onsets. The scheme of estimating the pitches only between two consecutive
onsets has also been used in the recent method from Emiya et al. (2008b).
7.1.3
Candidate selection
For each frame $t_o + 1$, a selection of f0 candidates is done from the sparse array
of sinusoidal peaks. Therefore, like many other iterative cancellation approaches,
the method assumes that the partial corresponding to the fundamental frequency is present. This assumption is made to improve the efficiency, as it
significantly reduces the number of candidates with respect to the analysis of the
whole spectral range.
There are two restrictions for a peak to be a candidate: only candidates
within a given pitch margin $[f_{min}, f_{max}]$ are considered, and the difference
between the candidate frequency and the frequency of the closest pitch in
the equal temperament must be lower than $f_d$ Hz5. This is a constant value
introduced to remove some false candidates at high frequencies.
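A C++ sketch of this test follows (illustrative only; the function name is hypothetical and equal temperament with A4 = 440 Hz is assumed):

    #include <cmath>
    #include <cstdio>

    // A sinusoidal peak at frequency f (Hz) is accepted as an f0 candidate if it
    // lies within [fmin, fmax] and is closer than fd Hz to the nearest pitch of
    // the equal temperament (A4 = 440 Hz).
    bool isCandidate(double f, double fmin, double fmax, double fd)
    {
        if (f < fmin || f > fmax) return false;
        double midi   = 69.0 + 12.0 * std::log2(f / 440.0);                    // fractional MIDI pitch
        double fPitch = 440.0 * std::pow(2.0, (std::round(midi) - 69.0) / 12.0); // closest tempered pitch
        return std::fabs(f - fPitch) < fd;
    }

    int main() {
        std::printf("%d %d\n", isCandidate(262.5, 38, 2100, 3),   // close to C4 (261.63 Hz): accepted
                               isCandidate(270.0, 38, 2100, 3));  // far from any pitch: rejected
    }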
7.1.4
Iterative cancellation
The candidates are sorted in ascending frequency order, and they are evaluated
using the iterative cancellation method described in Alg. 1.
The partials of each candidate are searched taking inharmonicity into
account. Similarly to (Bello et al., 2002), (Every and Szymanski, 2006), and
Figure 7.3: Example of the iterative cancellation process. Top: sinusoidal peaks,
where a candidate is found; middle: constant harmonic pattern centered at the
candidate frequency and scaled with the fundamental frequency amplitude;
bottom: residual peaks after the cancellation.
7.1.5
Postprocessing
Those candidates with a low absolute or relative intensity are removed. First,
the pitch candidates with an intensity $l_n$ lower than a fixed threshold are discarded.
Then, the maximum note intensity $L = \max_n\{l_n\}$ at the target frame is calculated,
and the candidates whose intensity is lower than a given fraction of L are also
removed, as the sources in the mixture should not have very large energy differences7.
Finally, the frequencies of the selected candidates are converted to MIDI
pitches with Eq. 2.21. Using this inter-onset based scheme, there are certain
ambiguous situations that do not occur in a frame by frame analysis. If a
pitch is detected in the current and in the previous inter-onset interval, then there are
two possibilities: there exists a single note spanning both onsets, or there is a
new note with the same pitch.
To make a simple differentiation between new notes and detections of pitches
that were already sounding in the previous frames, the estimation is done at
frames $t_o + 1$ and $t_o - 1$. If a pitch detected at frame $t_o + 1$ is not detected
at $t_o - 1$, then a new note is yielded. Otherwise, the note is considered to be a
continuation of the previous estimate.
has also been used in different ways in the literature (Klapuri, 2003a, Yeh et al.,
2005, Cañadas-Quesada et al., 2008, and Zhou et al., 2009). The proposed novel
smoothness measure is based on the convolution of the hypothetical harmonic
pattern with a Gaussian window.
Given a combination, the HPS of each candidate is calculated considering the
harmonic interactions with the partials of all the candidates in the combination.
The overlapped partials are first identified, and their amplitudes are estimated
by linear interpolation using the non-overlapped harmonic amplitudes.
In contrast with the previous iterative cancellation method, which assumes a
constant harmonic pattern, the proposed joint approach can estimate hypothetical harmonic patterns from the spectral data, evaluating them according to the
properties of harmonic sounds. This approach is suitable for most real harmonic
sounds, in contrast with the iterative method, which assumes a constant pattern
based on percussive string instruments.
7.2.1
Preprocessing
7.2.2
Candidate selection
7.2.3 Combination generation
All the possible candidate combinations are calculated and evaluated, and
the combination with the highest salience is selected at the target frame. The
combinations consist of different numbers of pitches. In contrast with other
works, like (Yeh et al., 2005), there is no need for an a priori estimation of the
number of concurrent sounds before detecting the fundamental frequencies;
the polyphony is implicitly calculated in the f0 estimation stage, selecting the
combination with the highest score independently of the number of candidates.
At each frame, a set of combinations {C1 , C2 , . . . , CN } is obtained. For
efficiency, like in the recent approach from Emiya et al. (2008b), only the
combinations with a maximum polyphony P are generated from the F
candidates. The amount of combinations N without repetition can be calculated
as:
$$N = \sum_{n=1}^{P} \binom{F}{n} = \sum_{n=1}^{P} \frac{F!}{n!\,(F-n)!} \qquad (7.6)$$
This means that when the maximum polyphony is P = 6 and there are
F = 10 selected candidates, N = 847 combinations are generated. Therefore,
N combinations are evaluated at each frame, and the adequate selection of F
and P is critical for the computational efficiency of the algorithm.
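A small C++ check of Eq. 7.6 (illustrative only):

    #include <cstdio>

    // Number of candidate combinations without repetition (Eq. 7.6):
    // N = sum_{n=1..P} F! / (n! (F-n)!)
    unsigned long long combinations(unsigned F, unsigned P)
    {
        unsigned long long N = 0, c = 1;
        for (unsigned n = 1; n <= P && n <= F; ++n) {
            c = c * (F - n + 1) / n;    // C(F, n) computed incrementally from C(F, n-1)
            N += c;
        }
        return N;
    }

    int main() { std::printf("%llu\n", combinations(10, 6)); }   // prints 847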
7.2.4
HPS estimation
The hypothetical partial sequence (HPS) of a candidate c is the vector of its
harmonic amplitudes:

$$\mathbf{p}_c = (p_{c,1}, p_{c,2}, \ldots, p_{c,H}) \qquad (7.7)$$

where $p_{c,h}$ is the amplitude of the h-th harmonic of the candidate c. The partials
are searched as previously described for the candidate selection stage. If a
particular harmonic is not found, then the corresponding value pc,h is set to
zero.
Once the partials of a candidate are identified, the HPS values are estimated
considering hypothetical source interactions. To do this, the harmonics of all the
candidates in the combination are first identified, and they are labeled with the
candidates they belong to (see Fig. 7.4). After the labeling process, there are
harmonics that belong to only one candidate (non-overlapped harmonics) and
harmonics belonging to more than one candidate (overlapped harmonics).
Figure 7.4: Example of HPS estimation for two candidates (f1, f2): spectral peaks,
partial identification, and linear subtraction yielding HPS(f1) and HPS(f2).
amplitudes are directly assigned to the HPS. However, the contribution of each
source to an overlapped partial amplitude must be estimated. This can be done
using the amplitudes of non-overlapped neighbor partials (Klapuri, 2003a, Yeh
et al., 2005, Every and Szymanski, 2006), assuming smooth spectral envelopes,
or considering that the amplitude envelopes of different partials are correlated
in time (Woodruff et al., 2008).
In the proposed method, similarly to (Maher, 1990) and (Yeh et al., 2005),
the amplitudes of overlapped partials are estimated by linear interpolation of
the neighboring non-overlapped partials (see Fig. 7.4).
If there are more than two consecutive overlapped partials, the
interpolation is done in the same way with the non-overlapped values. For instance,
if harmonics 2 and 3 are overlapped, then the amplitudes of harmonics 1 and 4
are used to estimate them by linear interpolation.
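A C++ sketch of this interpolation step (illustrative only; the flags marking overlapped harmonics are assumed to be computed beforehand):

    #include <vector>

    // Estimate the amplitudes of overlapped harmonics by linear interpolation of
    // the nearest non-overlapped harmonics (e.g., harmonics 2 and 3 interpolated
    // from harmonics 1 and 4).
    void interpolateOverlapped(std::vector<double>& p, const std::vector<bool>& overlapped)
    {
        const int H = static_cast<int>(p.size());
        for (int h = 0; h < H; ++h) {
            if (!overlapped[h]) continue;
            int a = h - 1, b = h + 1;
            while (a >= 0 && overlapped[a]) --a;     // previous non-overlapped harmonic
            while (b < H  && overlapped[b]) ++b;     // next non-overlapped harmonic
            if (a >= 0 && b < H)
                p[h] = p[a] + (p[b] - p[a]) * double(h - a) / double(b - a);
            else if (a >= 0) p[h] = p[a];            // no neighbor on one side: keep last value
            else if (b < H)  p[h] = p[b];
        }
    }

    int main() {
        std::vector<double> p = {1.0, 0.0, 0.0, 0.4};
        std::vector<bool> ov  = {false, true, true, false};
        interpolateOverlapped(p, ov);   // p[1] = 0.8, p[2] = 0.6
    }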
7.2.5
Salience of a combination
Once the HPS of all the candidates have been estimated for a given combination,
their saliences are calculated. The salience of a combination is the squared sum
of the saliences of its candidates. A candidate salience is obtained taking into
account the intensity and the smoothness of its HPS.
The intensity l(c) of a candidate c is a measure of the strength of a source,
and it is computed as the sum of the HPS amplitudes:
$$l(c) = \sum_{h=1}^{H} p_{c,h} \qquad (7.8)$$
Like in other works, the method also assumes that a smooth spectral pattern
is more probable than an irregular one. To compute the smoothness of a
candidate, the HPS is first normalized, dividing the amplitudes by the maximum
harmonic value in the HPS, obtaining $\bar{\mathbf{p}}$. Then, $\bar{\mathbf{p}}$ is low-pass filtered using a
truncated normalized Gaussian window $N_{0,1}$, which is convolved with the HPS
to obtain the smoothed version $\tilde{\mathbf{p}}$:

$$\tilde{\mathbf{p}}_c = N_{0,1} * \bar{\mathbf{p}}_c \qquad (7.9)$$
11 Usually, only the first harmonics contain most of the energy of a harmonic source, therefore
typical values for H are within the margin $H \in [5, 20]$.
Figure 7.5: Spectral smoothness measure example. The normalized HPS vector
$\bar{\mathbf{p}}$ and the smoothed version $\tilde{\mathbf{p}}$ of two candidates (c1, c2) are shown. The sharpness
values are s(c1) = 0.13 and s(c2) = 1.23.
Then, as shown in Fig. 7.5, a sharpness measure s(c) is computed by summing the absolute differences between the smoothed values and the normalized
HPS amplitudes:

$$s(c) = \sum_{h=1}^{H} \left| \tilde{p}_{c,h} - \bar{p}_{c,h} \right| \qquad (7.10)$$
The sharpness is normalized with respect to the maximum value that the
smoothing window can yield:

$$\bar{s}(c) = \frac{s(c)}{1 - N_{0,1}(\bar{x})} \qquad (7.11)$$

and the smoothness $\sigma(c)$ of the candidate is finally obtained taking into account
the number of partials found:

$$\sigma(c) = 1 - \frac{\bar{s}(c)}{H_c} \qquad (7.12)$$
where $H_c$ is the index of the last harmonic found for the candidate. This
parameter was introduced to prevent high frequency candidates, which have
fewer partials than those at low frequencies, from having a higher smoothness. This
way, the smoothness is considered to be more reliable when there are more
partials to estimate it.
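The following C++ sketch illustrates the sharpness computation of Eq. 7.10 under the assumption of a 3-point truncated Gaussian window; the window length actually used is not specified here, so this is only an approximation of the idea:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Normalize the HPS by its maximum, smooth it with a short truncated
    // normalized Gaussian window, and sum the absolute differences (Eq. 7.10).
    double sharpness(const std::vector<double>& hps)
    {
        double mx = *std::max_element(hps.begin(), hps.end());
        std::vector<double> pbar(hps.size());
        for (std::size_t h = 0; h < hps.size(); ++h) pbar[h] = hps[h] / mx;

        double w[3] = { std::exp(-0.5), 1.0, std::exp(-0.5) };   // N(0,1) at -1, 0, +1
        double wsum = w[0] + w[1] + w[2];
        for (double& v : w) v /= wsum;                           // normalize to sum 1

        double s = 0.0;
        for (std::size_t h = 0; h < pbar.size(); ++h) {
            double sm = w[1] * pbar[h];
            if (h > 0)               sm += w[0] * pbar[h - 1];
            if (h + 1 < pbar.size()) sm += w[2] * pbar[h + 1];
            s += std::fabs(sm - pbar[h]);
        }
        return s;
    }

    int main() {
        std::vector<double> smooth = {1.0, 0.8, 0.6, 0.4};
        std::vector<double> sharp  = {1.0, 0.1, 0.9, 0.05};
        std::printf("%.3f %.3f\n", sharpness(smooth), sharpness(sharp)); // irregular pattern scores higher
    }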
132
Once the smoothness and the intensity of each candidate have been
calculated, the salience $S(C_i)$ of a combination $C_i$ with C candidates is computed as:

$$S(C_i(t)) = \sum_{c=1}^{C} \left[ l(c)\, \sigma(c) \right]^2 \qquad (7.13)$$
7.2.6
Postprocessing
After selecting the best combination at each individual frame, a last stage is
applied to remove some local errors taking into account the temporal dimension.
If a pitch was not detected in a target frame but it was found in the previous
and next frames, it is considered to be active in the current frame too, avoiding
some temporal discontinuities. Notes shorter than a minimum duration d are
also removed.
Finally, the sequences of consecutive detected fundamental frequencies are
converted into MIDI pitches. The maximum candidate intensity of the entire song
is used as reference to get the MIDI velocities, linearly mapping
the candidate intensities within the range $[0, \max_{C,c}\{l(c)\}]$ into MIDI values
[0, 127].
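A minimal C++ sketch of this velocity mapping (illustrative only):

    #include <algorithm>
    #include <cstdio>

    // Linear mapping of a candidate intensity l(c) into a MIDI velocity, using
    // the maximum intensity found in the whole song as reference.
    int midiVelocity(double intensity, double maxIntensity)
    {
        int v = static_cast<int>(127.0 * intensity / maxIntensity + 0.5);
        return std::min(127, std::max(0, v));
    }

    int main() { std::printf("%d\n", midiVelocity(4.2, 12.6)); }   // 42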
Identifying the pitches at isolated frames of a complex mixture is a difficult task,
even for expert musicians. As discussed in Sec. 3.1, context is
very important in music to disambiguate certain situations. The joint estimation
method II is an extension of the previous method which considers information
about adjacent frames, similarly to the supervised learning method described
in Chapter 6, producing a detection that is smoothed across time.
7.3.1
Temporal smoothing
The salience of each combination $C'_i$ at the target frame t is smoothed using the K adjacent frames:

$$\tilde{S}(C'_i(t)) = \sum_{j=t-K}^{t+K} S(C'_i(j)) \qquad (7.14)$$
This way, the saliences of the combinations with the same pitches as $C'_i$ in
the K adjacent frames are summed to obtain the salience at the target frame, as
shown in Fig. 7.6. The combination with maximum smoothed salience is finally selected
to get the pitches at the target frame t:

$$C'(t) = \arg\max_i \{\tilde{S}(C'_i(t))\} \qquad (7.15)$$
This new approach increases the robustness of the system on the data set
used for evaluation, and it allows removing the minimum amplitude required for a peak
to be a candidate, which was added in the previous approach to avoid local false positives.
If the selected combination at the target frame does not contain any pitch
(if there is no candidate, or if none of them can be identified as a pitch),
then a rest is yielded without evaluating the combinations in the K adjacent
frames.
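A C++ sketch of this smoothing step (illustrative only; combinations are represented here simply as sets of MIDI pitches with their saliences):

    #include <algorithm>
    #include <map>
    #include <set>
    #include <utility>
    #include <vector>

    using PitchSet = std::set<int>;   // MIDI pitches of a combination

    // Sum the saliences of the combinations having the same pitches over the
    // frames t-K..t+K (Eq. 7.14) and pick the maximum (Eq. 7.15).
    PitchSet smoothedBest(const std::vector<std::vector<std::pair<PitchSet, double>>>& framesCombs,
                          std::size_t t, std::size_t K)
    {
        std::map<PitchSet, double> smoothed;
        std::size_t from = (t >= K) ? t - K : 0;
        std::size_t to   = std::min(t + K, framesCombs.size() - 1);
        for (std::size_t j = from; j <= to; ++j)
            for (const auto& comb : framesCombs[j])
                smoothed[comb.first] += comb.second;

        PitchSet best;
        double bestSal = -1.0;
        for (const auto& kv : smoothed)
            if (kv.second > bestSal) { bestSal = kv.second; best = kv.first; }
        return best;
    }

    int main() {
        std::vector<std::vector<std::pair<PitchSet, double>>> frames = {
            { { PitchSet{48, 72}, 7400.0 }, { PitchSet{48, 60}, 3200.0 } },
            { { PitchSet{48, 72}, 6100.0 } },
            { { PitchSet{48, 60}, 5000.0 } }
        };
        PitchSet winner = smoothedBest(frames, 1, 1);   // {48, 72} wins
        (void)winner;
    }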
This technique smooths the detection in the temporal dimension. For a
visual example, let us consider the smoothed intensity of a given candidate c' as:

$$\tilde{l}(c'(t)) = \sum_{j=t-K}^{t+K} l(c'(j)) \qquad (7.16)$$
Figure 7.6: Example of the smoothed salience computation: the saliences of the
combinations with the same pitches (e.g., {C3}) in the adjacent frames t-1 and
t+1 are added to those of the target frame t.
Figure 7.7: Top: Example of detected piano-roll for an oboe melody. Bottom:
Three-dimensional temporal representation of $\tilde{l}(c'(t))$ for the candidates of the
winning combination at each frame. In this example, all the pitches were correctly
detected. High temporal smoothness usually indicates good estimates.
When the temporal evolution of the smoothed intensities $\tilde{l}(c'(t))$ of the
winning combination candidates is plotted in a three-dimensional representation
(see Figs. 7.7 and 7.8), it can be seen that the correct estimates usually show
smooth temporal curves. An abrupt change (a sudden note onset or offset,
represented by a vertical cut in the smoothed intensities 3D plot) means that the
harmonic components of a given candidate were suddenly assigned to another
candidate in the next frame. Therefore, vertical lines in the plot usually indicate
errors when mapping harmonic components to candidates.
7.3.2
Partial search
Figure 7.9: Partial selection in the joint estimation method II. The selected
peak is the one with the greatest weighted value.
The spectral peaks within the margin $[f_h - f_r, f_h + f_r]$ are weighted with a window
centered at the expected partial frequency $f_h$, and the peak with the greatest weighted
value is selected as a partial. The advantage of this scheme is that low amplitude
peaks are penalized and, besides the harmonic spectral location, the intensity is also
considered to identify the most prominent spectral peaks as partials.
7.3.3 f0 tracking

The combinations selected at consecutive frames are used to build a weighted
directed acyclic graph (wDAG), where each node $v_i$ corresponds to a combination
and the weight of the edge from $v_i$ to $v_j$ is:

$$w(v_i, v_j) = \frac{D(v_i, v_j)}{S(v_j) + 1} \qquad (7.17)$$

The distance $D(v_i, v_j)$ between two combinations takes into account the
intensities of their common and non-common candidates:

$$D(v_i, v_j) = \sum_{c \in v_i \cap v_j} |l(v_{i,c}) - l(v_{j,c})| + \sum_{c \in v_i \setminus v_j} l(v_{i,c}) + \sum_{c \in v_j \setminus v_i} l(v_{j,c}) \qquad (7.18)$$
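A C++ sketch of Eqs. 7.17 and 7.18 as reconstructed above (illustrative only; a combination is represented as a map from candidate identifiers to intensities):

    #include <cmath>
    #include <cstdio>
    #include <map>

    // Common candidates contribute with the absolute difference of their
    // intensities; candidates present in only one combination contribute with
    // their full intensity (Eq. 7.18).
    double D(const std::map<int, double>& vi, const std::map<int, double>& vj)
    {
        double d = 0.0;
        for (const auto& c : vi) {
            auto it = vj.find(c.first);
            d += (it != vj.end()) ? std::fabs(c.second - it->second)  // c in both
                                  : c.second;                         // c only in vi
        }
        for (const auto& c : vj)
            if (vi.find(c.first) == vi.end()) d += c.second;          // c only in vj
        return d;
    }

    double edgeWeight(const std::map<int, double>& vi, const std::map<int, double>& vj,
                      double salienceVj)
    {
        return D(vi, vj) / (salienceVj + 1.0);    // Eq. 7.17
    }

    int main() {
        std::map<int, double> a = { {48, 10.0}, {72, 4.0} }, b = { {48, 9.0}, {60, 3.0} };
        std::printf("%.3f\n", edgeWeight(a, b, 100.0));   // D = |10-9| + 4 + 3 = 8
    }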
Figure 7.10: Example of the wDAG generated for an audio excerpt. Each node
contains a pitch combination (e.g., {C3, C5}) detected at a given frame, and the
edges are labeled with their weights w(vi, vj).
The sequence of combinations is finally obtained by finding the path which
minimizes the sum of weights from the starting node to the final state. The
Boost C++ library, available at http://www.boost.org, was used for this task.

7.3.4 Alternative architectures
2. To detect onsets and analyze only one frame between two onsets to yield
the pitches in the inter-onset interval. This scheme, used in the iterative
estimation method, increases the efficiency at some accuracy cost. The
method relies on the onset detection results, therefore a wrong estimate
in the onset detection stage can affect the results.
3. To detect onsets and merge the combinations of those frames that are between
two consecutive onsets, yielding the pitches for the inter-onset interval.
This technique can obtain more reliable results when the onsets are
correctly estimated, as happens with piano sounds. However, merging
combinations between two onsets reduces the number of detected notes,
as only combinations that are present in most of the IOI frames are
considered. Like in the previous scheme, the detection is very sensitive
to false negative onsets.
4. To detect beats and merge combinations within a quantization grid. Once
the beats are estimated14, a grid split with a given beat divisor 1/q can
be assumed, considering that there are no triplets and that the minimum
note duration is 1/q of a beat. For instance, if q = 4, each inter-beat interval can
be split into q sections, each one a sixteenth note long. Then, the
combinations of the frames that belong to each quantization unit can be
merged to obtain the results at each minimum grid unit (a simple merging
scheme is sketched after this list). Like in the onset
detection scheme, the success rate of this approach depends on the success
rate of the beat estimation.
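The merging mentioned in the last two schemes could be realized, for instance, pitch by pitch with a majority vote over the frames of the unit, as in the following illustrative C++ sketch (this is an assumption, not necessarily the exact rule used in the implementation):

    #include <cstdio>
    #include <map>
    #include <set>
    #include <vector>

    // Merge the per-frame pitch combinations of an inter-onset interval or grid
    // unit: a pitch is kept if it is present in more than half of the frames.
    std::set<int> mergeCombinations(const std::vector<std::set<int>>& unitFrames)
    {
        std::map<int, std::size_t> count;
        for (const auto& frame : unitFrames)
            for (int pitch : frame) ++count[pitch];

        std::set<int> merged;
        for (const auto& kv : count)
            if (2 * kv.second > unitFrames.size()) merged.insert(kv.first);
        return merged;
    }

    int main() {
        std::vector<std::set<int>> unit = { {48, 72}, {48}, {48, 72} };
        std::printf("%zu\n", mergeCombinations(unit).size());   // 2 pitches kept
    }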
The implementation of the joint estimation method II allows running the
algorithm using any of these schemes. The adequate choice of scheme
depends on the signal to be analyzed. For instance, for percussive timbres it is
recommended to use the third scheme, as onset detection is usually very reliable
for this kind of sound. These architectures have been perceptually evaluated
using some example real songs, but a rigorous evaluation of these schemes is left
as future work, since an aligned data set of real musical pieces with symbolic
data is required for this task.
In order to obtain a more readable score, the tempo changes can optionally
be written into the output MIDI file. To do this, the system allows loading a list
of beat times. A tempo $T = 60/T_b$ is reestimated at each beat instant using the
temporal difference $T_b$ between the current and the previous beat. No
other metrical information is extracted, therefore the bar lines are sometimes
shifted due to anacrusis15 or an incorrect time signature, which is always assumed
to be 4/4, since this is the most frequently used musical meter.
14 Beats can be estimated with an external beat tracking algorithm like BeatRoot from
Dixon (2006).
15 Like it occurs in Fig. 1.1.
7.4 Evaluation
To perform a first evaluation and set up the parameters, initial experiments were
done using a data set of random mixtures. Then, the three proposed approaches
were evaluated and compared with other works for real music transcription in
the MIREX (2007) and MIREX (2008) multiple f0 estimation and tracking
contests.
7.4.1
Parametrization
The parameters of the three proposed methods and their impact on the results
are analyzed in this section. The intention of the parametrization stage is
not to get the parameter values that maximize the accuracy for the test set
used, as the success rate depends on these particular data. However, this
stage can help to obtain a reasonably good set of parameter values and to
evaluate the impact of each parameter on the accuracy and the computational
cost. Therefore, the selected parameters are not always those that achieve the
highest accuracy in the test set, but those that obtain a close-to-best accuracy
while keeping a low computational cost.
For the parametrization stage, a database of random pitch combinations has
been used. This database was generated using mixtures of musical instrument
samples with fundamental frequencies ranging between 40 and 2100 Hz. The
samples are the same used in the evaluation of the Klapuri (2006b) method.
The data set consists of 4000 mixtures with polyphonies16 of 1, 2, 4, and 6. The
2842 audio samples from 32 musical instruments used to generate the mixtures
are from the McGill University master samples collection17, the University of
Iowa18, IRCAM Studio Online19, and recordings of an acoustic guitar. In order
to respect the copyright restrictions, only the first 185 ms of each mixture20
were used for evaluation.
It is important to note that the data set only contains isolated pitch
combinations, therefore the evaluation of the parameters that have a temporal
dimension (like the minimum note duration) could not be evaluated using this
database. The test set is intended for evaluation of multiple f0 estimation at
single frames, therefore f0 tracking from joint estimation method II could not
be evaluated with these data.
To evaluate the parameters in the iterative cancellation method and in
the joint estimation method I, only one frame which is 43 ms apart from the
17 http://www.music.mcgill.ca/resources/mums/html/index.htm
18 http://theremin.music.uiowa.edu/MIS.html
19 http://forumnet.ircam.fr/402.html?&L=1
Stage                  Parameter                 Symbol          Value
Preprocessing          SLM bandwidth             W               50 Hz
                       SLM threshold                             0.1
                       Zero padding factor       z               8
Candidate selection    f0 range                  [fmin, fmax]    [38, 2100] Hz
                       Closest pitch distance    fd              3 Hz
Postprocessing         Min note intensity                        5
                       Min relative intensity                    0.1

Table 7.1: Parameters of the iterative cancellation method.
beginning of the mixture has been selected. For the joint estimation method II,
which requires more frames for merging combinations, all the frames (5) have
been used to select the best combination in the mixture.
The accuracy metric (Eq. 3.6) has been chosen as the success rate criterion
for the parametrization. A candidate identification error rate was also defined for
adjusting the parameters related to the candidate selection stage.
This error rate is defined as the number of actual pitches that are not present in the
candidate set divided by the number of actual pitches.
The overall results for the three methods using the random mixtures data
set and the selected parameters are described in Sec. 7.4.2, and the results of
the comparison with other multiple f0 estimation approaches are detailed in
Secs. 7.4.3 and 7.4.4.
Figure 7.11: SLM accuracy with respect to the bandwidth W using an SLM threshold
of 0.1, and comparison with simple spectral peak picking. The other parameters
used for the evaluation are those described in Tab. 7.1.
In the joint estimation methods, all the spectral peaks were systematically selected
from the magnitude spectrum instead, as the SLM improved neither the accuracy
nor the efficiency with the tested values.
Experimentally, the use of all the spectral peaks yielded exactly the same
results as the selection of those spectral peaks with a magnitude over a low
fixed threshold (0.1). This thresholding, which did not alter the results on the
test set, can reduce the computation time of the overall system by half21.
For this reason, this threshold was adopted and subsequently included in the
joint estimation methods.
The overall results without SLM can be seen in Fig. 7.12. In this figure,
the chosen parameter values are at the central intersection, and they correspond
to those described in Tab. 7.1. From these initial values, the parameters have
been changed individually to compare their impact on the accuracy.
The zero padding factor z is useful to accurately identify the frequency of
the lower pitches. As shown in Fig. 7.12, the overall accuracy increases considerably
when using zero padding22 ($z \neq 2^0$). The computational cost derived from the
FFT computation of longer windows must also be taken into account. As the
overall computational cost of this method is very low, a value z = 8, which
slightly improves the accuracy, was chosen.
The range of valid fundamental frequencies comprises the f0 range of the
data set used for the evaluation, therefore it is the same for all the evaluated
methods.
The closest pitch distance value $f_d$ matches the spectral resolution obtained
with zero padding. This way, using a margin of $f_d = 3$ Hz, only spectral
Figure 7.12: Accuracy of the iterative cancellation method when varying the free
parameters individually ($z \in [2^0, 2^3]$, $f_d \in [1, 8]$ Hz, and the postprocessing
intensity thresholds) from the values in Tab. 7.1.
peaks at ±1 bin from the ideal pitch frequency are considered as f0 candidates.
This parameter increases the accuracy by about 1%. However, as can be seen
in Fig. 7.12, the value selected for this parameter ($f_d = 3$) is probably too
restrictive, and a higher range ($f_d = 5$) yields better results. It must be
considered that the iterative cancellation method was developed before having
the random mixtures database, therefore its parameters are not optimally
tuned for this data set. However, experimentally, the accuracy deviation shows
that the chosen values do not differ much from those that yield
the highest accuracy with this data set.
The postprocessing parameters of the iterative cancellation method are the
minimum note intensity and the minimum relative intensity of a candidate
with respect to the maximum intensity of the other simultaneous candidates in the
analyzed frame. The note silence threshold (set to 5, equivalent to 18.38 dB)
could not be directly evaluated using the random mixtures data set, as there
are no silences and all the sounds have very similar amplitudes. However, the
results obtained varying this threshold within [0, 9] show that it has a low impact
on the detection when there are no silent excerpts in the signal.
Stage                    Parameter                   Symbol          Value
Preprocessing            Spectral peak threshold                     0.1
                         Zero padding factor                         4
Candidate selection      Min f0 amplitude                            2
                         f0 range                    [fmin, fmax]    [38, 2100] Hz
Combination generation   Max number of candidates    F               10
                         Max polyphony               P               6
Salience calculation     Partial search bandwidth    fr              11 Hz
                         Number of harmonics         H               10
                         Smoothness weight                           2
Postprocessing           Min note intensity                          5
                         Min relative intensity                      0.1
                         Min note duration                           55.68 ms

Table 7.2: Parameters of the joint estimation method I.
Figure 7.13: Joint estimation method I candidate error rate adjusting the
parameters that have some influence in the candidate selection stage.
Figure 7.14: Joint estimation method I accuracy adjusting the free parameters.
Figure 7.15: Joint estimation method I evaluation when adjusting the zero padding
factor z, the number of candidates F and the number of harmonics H.
The bandwidth for searching partials, $f_r$, does not seem to have a great
impact on the accuracy, but it is important in the candidate selection stage
(see Fig. 7.13). An appropriate balance between a high accuracy and a low
candidate selection error rate was obtained using $f_r = 11$ Hz.
The computational cost increases exponentially with the number of candidates F. Therefore, a good choice of F is critical for the efficiency of the method.
Experimentally, F = 10 yielded a good trade-off between the accuracy, the
number of correctly selected candidates and the computational cost (see Figs.
7.13, 7.14 and 7.15).
As previously mentioned, the first partials usually contain most of the energy
of harmonic sounds. Experimentally, using H = 10 suffices, and higher
values cause low pitches to cancel other higher frequency components. In
addition, note that the computational cost increases linearly with H.
The smoothness weight that maximizes the accuracy was experimentally
found to be 2. It is important to note that without considering spectral
smoothing (a smoothness weight of 0), the accuracy decreases significantly (see Fig. 7.14).
The postprocessing parameters (the minimum absolute and relative intensities)
were set to the same values as in the iterative cancellation approach.
Stage                               Parameter                   Symbol          Value
Preprocessing                       Spectral peak threshold                     0.1
                                    Zero padding factor                         4
Candidate selection                 f0 range                    [fmin, fmax]    [38, 2100] Hz
Combination generation              Max number of candidates    F               10
                                    Max polyphony               P               6
Salience calculation                Partial search bandwidth    fr              11 Hz
                                    Number of harmonics         H               15
                                    Smoothness weight                           4
Postprocessing                      Min note intensity                          5
                                    Min relative intensity                      0.15
Postprocessing (without tracking)   Min note duration           d               23 ms
                                    Min rest duration           r               50 ms

Table 7.3: Parameters of the joint estimation method II.
Figure 7.16: Joint estimation method II accuracy when adjusting the free parameters.
7.4.2 Results with random mixtures

The overall results for the random mixtures data set after the parametrization
stage are described in the following figures of this section (Figs. 7.17 to 7.30).
These results cannot be directly compared to the evaluation made by Klapuri
(2006b) using the same data, as in that work polyphony estimation and
f0 estimation were evaluated separately (the number of concurrent sounds was
given as a parameter to the pitch estimator), whereas in the present work these
two stages are carried out simultaneously.
As shown in Figs. 7.17 and 7.18, the candidate identification technique used
in the joint estimation method II outperforms the other candidate selection
approaches. It can also be seen (Figs. 7.19 to 7.22) that the joint estimation
method I clearly outperforms the iterative cancellation approach, and the joint
estimation method II gets a higher accuracy than the joint method I.
With respect to the estimation of the number of concurrent sources (Figs. 7.23
to 7.27), the joint estimation method II usually yields better results, but when
there are many simultaneous sources (Fig. 7.26), it tends to underestimate
the number of concurrent sounds, probably due to the combination of adjacent
frames. Looking at the evaluation as a function of pitch (Figs. 7.28 to 7.30),
it can be seen that the best results are located in the central pitch range.
Figure 7.17: Candidate identification error rate with respect to the polyphony
(1, 2, 4 and 6 simultaneous pitches) of the ground truth mixtures.
Figure 7.18: Candidate identification error rates for the three methods.
Figure 7.19: Pitch detection results for the iterative cancellation method with
respect to the ground-truth mixtures polyphony.
Figure 7.20: Pitch detection results for the joint estimation method I with
respect to the ground-truth mixtures polyphony.
Figure 7.21: Pitch detection results for the joint estimation method II with
respect to the ground-truth mixtures polyphony.
Figure 7.22: Overall pitch detection results (precision, recall and accuracy) for the
iterative cancellation method, the joint estimation method I, and the joint
estimation method II.
Figure 7.23: Estimation of the number of concurrent sources for one source.
Figure 7.24: Estimation of the number of concurrent sources for two simultaneous
sources.
Figure 7.25: Estimation of the number of concurrent sources for four simultaneous
sources.
Figure 7.26: Estimation of the number of concurrent sources for six simultaneous
sources.
Figure 7.28: Precision, recall and accuracy of the iterative cancellation method
as a function of the MIDI pitch number.
Figure 7.29: Precision, recall and accuracy of the joint estimation method I as a
function of the MIDI pitch number.
Figure 7.30: Precision, recall and accuracy of the joint estimation method II as a
function of the MIDI pitch number.
7.4.3 MIREX evaluation
In order to evaluate the proposed methods using real musical signals and to
compare them with other approaches, the iterative cancellation algorithm and
the joint estimation method I were submitted to the MIREX (2007) multiple
f0 estimation and tracking contest, whereas the joint estimation method II was
evaluated in MIREX (2008).
The data sets used in MIREX (2007) and MIREX (2008) were essentially
the same, consisting of a woodwind quintet transcription of the fifth variation
from Beethoven, plus some synthesized pieces using Goto (2003) samples and
polyphonic piano recordings made with a Disklavier piano. There were 30 second
clips for each polyphony (2-3-4-5), for a total of 30 examples, plus 10
polyphonic piano pieces of 30 seconds. The details of the ground-truth labelling
are described in (Bay et al., 2009).
The MIREX evaluation was done at two different levels; frame by frame pitch
estimation and note tracking. The first mode evaluates the correct detection in
isolated frames, whereas the second task also considers the temporal coherence
of the detection.
For the frame level task, evaluation of the active pitches is done every 10 ms.
For this reason, the hop size of the joint estimation methods26 was set to
obtain an adequate temporal resolution. Precision, recall, and accuracy were
reported. A returned pitch is assumed to be correct if it is within half a semitone
of a ground-truth pitch for that frame. Only one ground-truth pitch can be
associated with each returned pitch. The error metrics from Poliner and Ellis
(2007a), previously described on page 51, were also used in the evaluation.
For the note tracking task, precision, recall, and F-measure were reported. A
ground-truth note is assumed to be correctly transcribed if the method returns
a note that is within half a semitone of that note, the yielded note onset is
within a 50 ms range of the onset of the ground-truth note, and its offset is
within a 20% range of the ground-truth note offset. One ground-truth note can
only be associated with one transcribed note.
The data set is not publicly available, therefore the experiments using these
data cannot be replicated outside the MIREX contests.
Iterative cancellation method
The iterative cancellation approach does not perform a frame by frame
evaluation, as it only uses those frames that are after the detected onsets to
yield the pitches for each inter-onset interval. Although it does not perform f0
tracking, the onset times provide indications about the beginnings of the notes,
therefore it was only submitted to the note tracking task.
26 The iterative cancellation method was only presented to the note tracking task.
Participant                   Runtime (sec)   Machine
Iterative cancellation        165             ALE Nodes
Joint estimation method I     364             ALE Nodes
AC3                           900             MAC
AC4                           900             MAC
EV4                           2475            ALE Nodes
EV3                           2535            ALE Nodes
KE3                           4140            ALE Nodes
PE2                           4890            ALE Nodes
RK                            3285            SANDBOX
KE4                           20700           ALE Nodes
VE                            390600          ALE Nodes

Table 7.4: MIREX (2007) note tracking runtimes. Participant, running time
(in seconds), and machine where the evaluation was performed are shown.
id    Participant                   Method                                  Avg. F-m   Prec    Rec     Avg. Overlap
RK    Ryynänen and Klapuri (2005)   Iterative cancellation + HMM tracking   0.614      0.578   0.678   0.699
EV4   Vincent et al. (2007)         Unsupervised learning (NMF)             0.527      0.447   0.692   0.636
PE2   Poliner and Ellis (2007a)     Supervised learning (SVM)               0.485      0.533   0.485   0.740
EV3   Vincent et al. (2007)         Unsupervised learning (NMF)             0.453      0.412   0.554   0.622
PI2   Pertusa and Iñesta (2008a)    Joint estimation method I               0.408      0.371   0.474   0.665
KE4   Kameoka et al. (2007)         Statistical spectral models             0.268      0.263   0.301   0.557
KE3   Kameoka et al. (2007)         Statistical spectral models             0.246      0.216   0.323   0.610
PI3   Lidy et al. (2007)            Iterative cancellation                  0.219      0.203   0.296   0.628
VE2   Emiya et al. (2007, 2008b)    Joint estimation + Bayesian models      0.202      0.338   0.171   0.486
AC4   Cont (2007)                   Unsupervised learning (NMF)             0.093      0.070   0.172   0.536
AC3   Cont (2007)                   Unsupervised learning (NMF)             0.087      0.067   0.137   0.523

Table 7.5: MIREX (2007) note tracking results based on onset and pitch. Average F-measure, precision, recall, and average
overlap are shown for each participant.
id    Participant                   Method                                  Acc     Prec    Rec     Etot    Esubs   Emiss   Efa
RK    Ryynänen and Klapuri (2005)   Iterative cancellation + HMM tracking   0.605   0.690   0.709   0.474   0.158   0.133   0.183
CY    Yeh (2008)                    Joint estimation                        0.589   0.765   0.655   0.460   0.108   0.238   0.115
ZR    Zhou et al. (2009)            Salience function (RTFI)                0.582   0.710   0.661   0.498   0.141   0.197   0.160
PI1   Pertusa and Iñesta (2008a)    Joint estimation method I               0.580   0.827   0.608   0.445   0.094   0.298   0.053
EV2   Vincent et al. (2007)         Unsupervised learning (NMF)             0.543   0.687   0.625   0.538   0.135   0.240   0.163
CC1   Cao et al. (2007)             Iterative cancellation                  0.510   0.567   0.671   0.685   0.200   0.128   0.356
SR    Raczynski et al. (2007)       Unsupervised learning (NNMA)            0.484   0.614   0.595   0.670   0.185   0.219   0.265
EV1   Vincent et al. (2007)         Unsupervised learning (NMF)             0.466   0.659   0.513   0.594   0.171   0.371   0.107
PE1   Poliner and Ellis (2007a)     Supervised learning (SVM)               0.444   0.734   0.505   0.639   0.120   0.375   0.144
PL    Leveau (2007)                 Matching pursuit                        0.394   0.689   0.417   0.639   0.151   0.432   0.055
CC2   Cao et al. (2007)             Iterative cancellation                  0.359   0.359   0.767   1.678   0.232   0.001   1.445
KE2   Kameoka et al. (2007)         Statistical spectral models (HTC)       0.336   0.348   0.546   1.188   0.401   0.052   0.734
KE1   Kameoka et al. (2007)         Statistical spectral models (HTC)       0.327   0.335   0.618   1.427   0.339   0.046   1.042
AC2   Cont (2007)                   Unsupervised learning (NMF)             0.311   0.373   0.431   0.990   0.348   0.221   0.421
AC1   Cont (2007)                   Unsupervised learning (NMF)             0.277   0.298   0.530   1.444   0.332   0.138   0.974
VE    Emiya et al. (2007, 2008b)    Joint estimation + Bayesian models      0.145   0.530   0.157   0.957   0.070   0.767   0.120

Table 7.6: MIREX (2007) frame by frame evaluation results. Accuracy, precision, recall, and the error metrics proposed by
Poliner and Ellis (2007a) are shown for each participant.
id                            Runtime (sec)   Machine
ZR                            271             BLACK
Joint estimation method I     364             ALE Nodes
AC1                           840             MAC
AC2                           840             MAC
EV2                           2233            ALE Nodes
EV1                           2366            ALE Nodes
CC1                           2513            ALE Nodes
CC2                           2520            ALE Nodes
RK                            3540            SANDBOX
PE1                           4564            ALE Nodes
PL                            14700           ALE Nodes
KE2                           19320           ALE Nodes
KE1                           38640           ALE Nodes
SR                            41160           ALE Nodes
CY                            132300          ALE Nodes
VE                            364560          ALE Nodes

Table 7.7: MIREX (2007) frame by frame runtimes. The first column shows
the participant, the second the runtime, and the third the machine where the
evaluation was performed. ALE Nodes was the fastest machine.
The method was also evaluated in the note tracking contest. Although it was
not designed for this task (the analysis is performed without information from
neighboring frames, simply converting consecutive pitch detections into notes), the
results were not bad, as shown in Tab. 7.5.
The joint estimation method I was also very efficient with respect to the other
state-of-the-art methods presented (see Tab. 7.7), especially considering that it
is a joint estimation approach.
Joint estimation method II
The joint estimation method II was submitted to MIREX (2008) for frame by
frame and note tracking evaluation. The method was presented for both tasks
in two setups: with and without f0 tracking.
The difference between using f0 tracking or not is the postprocessing stage
(see Tab. 7.3). In the first setup, notes shorter than a minimum duration are
just removed, and when there are short rests between two consecutive notes
of the same pitch, the notes are merged. Using f0 tracking, the methodology
described in Sec. 7.3.3 is performed instead, increasing the temporal coherence
of the estimate with the wDAG.
Experimentally, the joint estimation method II was very efficient compared
to the other approaches presented, as shown in Tabs. 7.8 and 7.9.
The results for the frame by frame task can be seen in Tab. 7.10. The
accuracy for the joint estimation method II without f0 tracking is satisfactory,
161
Participant                        Runtime (sec)
MG                                 99
Joint estimation II                792
Joint estimation II + tracking     955
VBB                                2081
CL1                                2430
CL2                                2475
RK                                 5058
EOS                                9328
DRD                                14502
EBD1                               18180
EBD2                               22270
YRC1                               57483
YRC2                               57483
RFF2                               70041
RFF1                               73784

Table 7.8: MIREX (2008) frame by frame runtimes. Participants and runtimes
are shown. All the methods except MG were evaluated using the same machine.
Participant                        Runtime (sec)
Joint estimation II                790
ZR3                                871
Joint estimation II + tracking     950
ZR1                                1415
ZR2                                1415
VBB                                2058
RK                                 5044
EOS                                9328
EBD1                               18180
EBD2                               22270
YRC                                57483
RFF2                               71360
RFF1                               73718

Table 7.9: MIREX (2008) note tracking runtimes. Participants and runtimes
are shown. All the methods except ZR were evaluated using the same machine.
and the method obtained the highest precision and the lowest Etot error among
all the analyzed approaches.
The inclusion of f0 tracking did not improve the results for frame by
frame estimation, but in the note tracking task (see Tab. 7.11), the results
outperformed those obtained without tracking.
7.4.4 Comparison with MIREX 2007 and 2008 methods

As the ground truth used for the MIREX (2007) and MIREX (2008) multiple f0
estimation and tracking contests was the same, the results of the algorithms
evaluated in both MIREX editions are jointly analyzed in the review from Bay
et al. (2009).
id     Method                                  Acc     Prec    Rec     Etot    Esubs   Emiss   Efa
YRC2   Joint estimation + f0 tracking          0.665   0.741   0.780   0.426   0.108   0.127   0.190
YRC1   Joint estimation                        0.619   0.698   0.741   0.477   0.129   0.129   0.218
PI2    Joint estimation II                     0.618   0.832   0.647   0.406   0.096   0.257   0.053
RK     Iterative cancellation + HMM tracking   0.613   0.698   0.719   0.464   0.151   0.130   0.183
PI1    Joint estimation II + tracking          0.596   0.824   0.625   0.429   0.101   0.275   0.053
VBB    Unsupervised learning (NMF)             0.540   0.714   0.615   0.544   0.118   0.267   0.159
DRD    Iterative cancellation                  0.495   0.541   0.660   0.731   0.245   0.096   0.391
CL2    Iterative cancellation                  0.487   0.671   0.560   0.598   0.148   0.292   0.158
EOS    Statistical spectral models (HTC)       0.467   0.591   0.546   0.649   0.210   0.244   0.194
EBD2   Joint estimation + Bayesian models      0.452   0.713   0.493   0.599   0.146   0.362   0.092
EBD1   Joint estimation + Bayesian models      0.447   0.674   0.498   0.629   0.161   0.341   0.127
MG     Database matching                       0.427   0.481   0.570   0.816   0.298   0.133   0.385
CL1    Iterative cancellation                  0.358   0.358   0.763   1.680   0.236   0.001   1.443
RFF1   Supervised learning (genetic)           0.211   0.506   0.226   0.854   0.183   0.601   0.071
RFF2   Supervised learning (genetic)           0.183   0.509   0.191   0.857   0.155   0.656   0.047

Table 7.10: MIREX (2008) frame by frame evaluation results. Accuracy, precision, recall, and the error metrics proposed by
Poliner and Ellis (2007a) are shown for each method.
id     Participant                    Method                                  Avg. F-m   Prec    Rec     Avg. Overlap
YRC    Yeh et al. (2008)              Joint estimation + f0 tracking          0.355      0.307   0.442   0.890
RK     Ryynänen and Klapuri (2005)    Iterative cancellation + HMM tracking   0.337      0.312   0.382   0.884
ZR3    Zhou and Reiss (2008)          Salience function (RTFI)                0.278      0.256   0.314   0.874
ZR2    Zhou and Reiss (2008)          Salience function (RTFI)                0.263      0.236   0.306   0.874
ZR1    Zhou and Reiss (2008)          Salience function (RTFI)                0.261      0.233   0.303   0.875
PI1    Pertusa and Iñesta (2008b)     Joint estimation II + tracking          0.247      0.201   0.333   0.862
EOS    Egashira et al. (2008)         Statistical spectral models (HTC)       0.236      0.228   0.255   0.856
VBB    Vincent et al. (2007)          Unsupervised learning (NMF)             0.197      0.162   0.268   0.829
PI2    Pertusa and Iñesta (2008b)     Joint estimation II                     0.192      0.145   0.301   0.854
EBD1   Emiya et al. (2008a)           Joint estimation + Bayesian models      0.176      0.165   0.200   0.865
EBD2   Emiya et al. (2008a)           Joint estimation + Bayesian models      0.158      0.153   0.178   0.845
RFF2   Reis et al. (2008a)            Supervised learning (genetic)           0.032      0.037   0.030   0.645
RFF1   Reis et al. (2008a)            Supervised learning (genetic)           0.028      0.034   0.025   0.683

Table 7.11: MIREX (2008) note tracking results based on onset, offset, and pitch. Average F-measure, precision, recall, and
average overlap are shown for each method.
Figure 7.31: Fig. from Bay et al. (2009), showing Esubs, Emiss and Efa for
all MIREX 2007 and MIREX 2008 multiple fundamental frequency estimation
methods ordered by Etot . PI2-08 is the joint estimation method II without
tracking, PI1-08 is the same method with tracking, and PI-07 is the joint
estimation method I.
Figure 7.32: Fig. from Bay et al. (2009). Precision, recall and overall accuracy for all MIREX 2007 and MIREX 2008 multiple
fundamental frequency estimation methods ordered by accuracy. PI2-08 is the joint estimation method II without tracking,
PI1-08 is the same method with tracking, and PI-07 is the joint estimation method I.
Figure 7.33: Fig. from Bay et al. (2009). Precision, recall, average F-measure
and average overlap based on note onset for MIREX 2007 and MIREX 2008 note
tracking subtask. PI2-08 is the joint estimation method II without tracking,
PI1-08 is the same method with tracking, PI1-07 is the joint estimation method
I and PI2-07 is the iterative cancellation method.
As these figures show, most of the reported f0 were correct, but multiple f0 estimation algorithms tend to under-report and miss many active f0 in the ground-truth.
The proposed joint estimation methods I and II achieved the lowest Etot scores, but with very few false alarms compared to miss errors. On the other hand, the methods from Ryynänen and Klapuri (2005) and Yeh et al. (2008) have a better balance between precision and recall, as well as among the three error types, and as a result they obtained the highest accuracies in MIREX (2007) and MIREX (2008), respectively.
Citing Bay et al. (2009): "Inspecting the methods used and their performances, we can not make generalized claims as to what type of approach works best. In fact, statistical significance testing showed that the top three methods29 were not significantly different."
29 (Yeh
7.5 Conclusions
In this chapter, three different signal processing methods have been proposed for
multiple f0 estimation. Unlike the supervised learning approaches previously
described, these signal processing schemes can be used to transcribe real music
without any a-priori knowledge of the sources.
The first method is based on iterative cancellation, and it is a simple
approach which is mainly intended for the transcription of piano sounds at a low
computational cost. For this reason, only one frame in an inter-onset interval
is analyzed, and the interaction between harmonic sources is not considered.
A fixed spectral pattern is used to subtract the harmonic components of each
candidate.
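The following is a minimal sketch of this iterative cancellation scheme; the fixed, exponentially decaying spectral pattern and the salience measure used here are simplifications for illustration and do not reproduce the exact pattern used in this work.

import numpy as np

def iterative_cancellation(spec, freqs, candidates, n_harm=8, decay=0.8,
                           max_polyphony=6, threshold=0.1):
    """spec: magnitude spectrum (np.array); freqs: bin frequencies in Hz;
    candidates: candidate f0 values in Hz. Returns the detected f0s."""
    residual = spec.astype(float).copy()
    detected = []
    for _ in range(max_polyphony):
        # salience of each candidate: sum of its harmonic amplitudes
        saliences = []
        for f0 in candidates:
            idx = [np.argmin(np.abs(freqs - h * f0)) for h in range(1, n_harm + 1)]
            saliences.append(float(residual[idx].sum()))
        best = int(np.argmax(saliences))
        if saliences[best] < threshold * float(spec.sum()):
            break
        f0 = candidates[best]
        detected.append(f0)
        # cancel a fixed spectral pattern scaled by the fundamental amplitude
        a0 = residual[np.argmin(np.abs(freqs - f0))]
        for h in range(1, n_harm + 1):
            i = np.argmin(np.abs(freqs - h * f0))
            residual[i] = max(0.0, residual[i] - a0 * decay ** (h - 1))
    return detected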
The joint estimation method I introduces a more complex methodology.
The spectral patterns are inferred from the analysis of different hypotheses
taking into account the interactions with the other sounds. The combination
of harmonic patterns that maximizes a criterion based on the sum of harmonic
amplitudes and spectral envelope smoothness is chosen at each frame.
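The following toy sketch only illustrates the structure of this search: every combination of candidates is scored with the sum of its hypothesised harmonic amplitudes minus a simple envelope smoothness penalty. The sharing of overlapping partials and the Gaussian smoothness measure used in the actual method are heavily simplified here.

import numpy as np
from itertools import combinations

def envelope_smoothness(amps):
    # penalty: distance between the harmonic envelope and a smoothed version
    smoothed = np.convolve(amps, [0.25, 0.5, 0.25], mode="same")
    return float(np.sum(np.abs(amps - smoothed)))

def combo_score(spec, freqs, combo, n_harm=10, alpha=1.0):
    """Score one combination of f0 candidates (Hz) against a magnitude spectrum."""
    residual = spec.astype(float).copy()
    score = 0.0
    for f0 in combo:
        idx = [np.argmin(np.abs(freqs - h * f0)) for h in range(1, n_harm + 1)]
        amps = residual[idx]           # hypothesised harmonic pattern
        residual[idx] = 0.0            # crude sharing of overlapped partials
        score += amps.sum() - alpha * envelope_smoothness(amps)
    return score

def best_combination(spec, freqs, candidates, max_poly=4):
    """Exhaustive evaluation of all candidate combinations at one frame."""
    combos = [c for k in range(1, max_poly + 1)
              for c in combinations(candidates, k)]
    return max(combos, key=lambda c: combo_score(spec, freqs, c))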
The third method extends the previous joint estimation approach by considering adjacent frames in order to add temporal smoothing. This method can be complemented with an f0 tracking stage, using a weighted directed acyclic graph (wDAG), to increase the temporal coherence of the detection.
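Conceptually, this tracking stage is a best-path search in a wDAG whose nodes are the candidate combinations of adjacent frames. A minimal dynamic programming sketch is shown below; the edge weighting of Sec. 7.3.3 is replaced here by a simple pitch-mismatch cost for illustration.

def track_combinations(frame_combos, frame_scores, mismatch_cost=1.0):
    """frame_combos[t]: list of candidate pitch sets for frame t;
    frame_scores[t][i]: salience of combination i at frame t.
    Returns one combination index per frame (the best path in the wDAG)."""
    T = len(frame_combos)
    cost = [[-s for s in frame_scores[0]]]      # accumulated path costs
    back = [[None] * len(frame_combos[0])]      # back pointers
    for t in range(1, T):
        cost.append([])
        back.append([])
        for i, combo in enumerate(frame_combos[t]):
            best_j, best_c = None, float("inf")
            for j, prev in enumerate(frame_combos[t - 1]):
                # edge weight favouring pitch continuity between frames
                c = cost[t - 1][j] + mismatch_cost * len(combo ^ prev)
                if c < best_c:
                    best_j, best_c = j, c
            cost[t].append(best_c - frame_scores[t][i])
            back[t].append(best_j)
    i = min(range(len(cost[-1])), key=cost[-1].__getitem__)
    path = [i]
    for t in range(T - 1, 0, -1):
        i = back[t][i]
        path.append(i)
    return path[::-1]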
The proposed methods have been evaluated and compared to other works.
The iterative cancellation approach, mainly intended for piano transcription, is
very efficient and it has been successfully used for genre classification and other
MIR tasks (Lidy et al., 2007) with computational cost restrictions.
The joint estimation methods obtained a high accuracy and the lowest Etot
among all the multiple f0 algorithms submitted in MIREX (2007) and MIREX
(2008). Although all possible combinations of candidates are evaluated at each
frame, the proposed approaches have a very low computational cost, showing
that it is possible to make an efficient joint estimation method.
The f0 tracking stage added to the joint estimation method II is probably too simple, and it should be replaced by a more reliable method in future work. For instance, the transition weights could be learned from a labeled data set, or a more complex f0 tracking method, like the high-order HMM scheme from Chang et al. (2008), could be used instead. Besides intensity, the centroid of an HPS should also show temporal coherence when its partials belong to the same source, so this parameter could also be considered for tracking.
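As an illustration of this suggestion (not of the method actually implemented), a transition weight combining intensity and HPS centroid coherence could look as follows; the weighting factors are arbitrary.

import numpy as np

def hps_centroid(harmonic_amps):
    """Centroid, in harmonic index, of a harmonic pattern (HPS)."""
    h = np.arange(1, len(harmonic_amps) + 1)
    total = float(harmonic_amps.sum())
    return float((h * harmonic_amps).sum() / total) if total > 0 else 0.0

def transition_weight(hps_a, hps_b, w_int=1.0, w_cen=1.0):
    """Edge weight between two HPS candidates in adjacent frames."""
    d_int = abs(float(hps_a.sum()) - float(hps_b.sum()))    # intensity coherence
    d_cen = abs(hps_centroid(hps_a) - hps_centroid(hps_b))  # centroid coherence
    return w_int * d_int + w_cen * d_cen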
Using stochastic models, a probability can be assigned to each pitch in order to remove those that are less probable given their context. For example, in a melodic line it is very unlikely that a non-diatonic note appears two octaves higher or lower than its neighbours. Such musical probabilities could be taken into account to further refine the detection.
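A toy example of such a musicological penalty is sketched below, assuming that the key is known; it only illustrates the idea and is not part of the proposed methods.

def context_penalty(pitch, neighbours, key_pcs=(0, 2, 4, 5, 7, 9, 11)):
    """Toy penalty for a candidate MIDI pitch given its neighbouring pitches:
    large leaps and non-diatonic notes (here, C major) are penalised."""
    leap = min(abs(pitch - n) for n in neighbours) if neighbours else 0
    non_diatonic = (pitch % 12) not in key_pcs
    return leap / 12.0 + (1.0 if non_diatonic else 0.0)

# a non-diatonic note about two octaves above its neighbours is heavily penalised
print(context_penalty(85, [60, 62]))   # C#6 against C4/D4 -> about 2.92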
This work has addressed the automatic music transcription problem using
different strategies. Efficient novel methods have been proposed for onset
detection and multiple f0 estimation, using supervised learning and signal
processing techniques. The main contributions of this work can be summarized
in the following points:
- An extensive review of the state of the art methods for onset detection and multiple f0 estimation. The latter methods have been classified into salience functions, iterative cancellation, joint estimation, supervised learning, unsupervised learning, matching pursuit, Bayesian models, statistical spectral models, blackboard systems, and database matching methods. An analysis of the strengths and limitations for each category has also been done.
- The development of an efficient approach for onset detection and the construction of a ground-truth data set for this task. The main novelties in this field are the use of a 1/12 octave (one semitone) filter bank to compress the harmonic information and the simple onset detection functions proposed. The presented method is mainly intended for percussive onset detection, as it detects abrupt energy changes, but it also considers the properties of harmonic sounds, making it robust against the spectral variations produced during the sustain stage of the sounds. The algorithm was evaluated and compared to other works, yielding promising results (a sketch of this kind of detection function is given after this list).
- Two novel approaches for multiple pitch estimation of a priori known sounds using supervised learning methods. These algorithms were among the first machine learning methods proposed for this task. A harmonic filter bank was used to reduce the amount of spectral information fed to a time-delay neural network (TDNN), while preserving the main harmonic content. A ground-truth data set of synthetic sounds was generated to train and evaluate the methods.
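The following sketch illustrates the kind of detection function referred to in the second contribution: band energies from a one-semitone triangular filter bank and the sum of their half-wave rectified temporal derivatives. The band layout, normalization and peak picking used in Chapter 5 are simplified here.

import numpy as np

def semitone_filterbank(freqs, midi_lo=21, midi_hi=108):
    """Triangular one-semitone bands centred at equal-tempered pitch frequencies.
    freqs: array of spectrum bin frequencies in Hz."""
    centres = 440.0 * 2.0 ** ((np.arange(midi_lo - 1, midi_hi + 2) - 69) / 12.0)
    bank = np.zeros((midi_hi - midi_lo + 1, len(freqs)))
    for k in range(1, len(centres) - 1):
        lo, c, hi = centres[k - 1], centres[k], centres[k + 1]
        rising = (freqs >= lo) & (freqs <= c)
        falling = (freqs > c) & (freqs <= hi)
        bank[k - 1, rising] = (freqs[rising] - lo) / (c - lo)
        bank[k - 1, falling] = (hi - freqs[falling]) / (hi - c)
    return bank

def onset_detection_function(spectrogram, bank):
    """spectrogram: (frames, bins) magnitudes. Returns one value per frame pair:
    the sum over bands of the positive temporal derivative of the band energies."""
    bands = spectrogram @ bank.T
    return np.maximum(np.diff(bands, axis=0), 0.0).sum(axis=1)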
8.2 Publications
Some contents of this thesis have been published in journals and conference
proceedings. Here is a list of publications in chronological order.
Pertusa, A. and Iñesta, J. M. (2004). Pattern recognition algorithms for polyphonic music transcription. In Fred, A., editor, Pattern Recognition in Information Systems (PRIS), pages 80-89, Porto, Portugal. [Chapter 6]

Pertusa, A., Klapuri, A., and Iñesta, J. M. (2005). Recognition of note onsets in digital music using semitone bands. Lecture Notes in Computer Science, 3773:869-879. [Chapter 5]

Pertusa, A. and Iñesta, J. M. (2005). Polyphonic monotimbral music transcription using dynamic networks. Pattern Recognition Letters, 26(12):1809-1818. [Chapter 6]

Lidy, T., Rauber, A., Pertusa, A., and Iñesta, J. M. (2007). Improving genre classification by combination of audio and symbolic descriptors using a transcription system. In Proc. of the 8th International Conference on Music Information Retrieval (ISMIR), pages 61-66, Vienna, Austria. [Chapter 7]

Pertusa, A. and Iñesta, J. M. (2007). Multiple fundamental frequency estimation based on spectral pattern loudness and smoothness. In MIREX (2007), multiple f0 estimation and tracking contest. [Chapter 7]

Lidy, T., Rauber, A., Pertusa, A., Ponce de Leon, P. J., and Iñesta, J. M. (2008). Audio music classification using a combination of spectral, timbral, rhythmic, temporal and symbolic features. In MIREX (2008), audio genre classification contest, Philadelphia, PA. [Chapter 7]

Pertusa, A. and Iñesta, J. M. (2008). Multiple fundamental frequency estimation using Gaussian smoothness and short context. In MIREX (2008), multiple f0 estimation and tracking contest. [Chapter 7]

Pertusa, A. and Iñesta, J. M. (2008). Multiple fundamental frequency estimation using Gaussian smoothness. In Proc. of the IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), pages 105-108, Las Vegas, NV. [Chapter 7]

Lidy, T., Grecu, A., Rauber, A., Pertusa, A., Ponce de Leon, P. J., and Iñesta, J. M. (2009). A multi-feature multi-classifier ensemble approach for audio music classification. In MIREX (2009), audio genre classification contest, Kobe, Japan. [Chapter 7]

Pertusa, A. and Iñesta, J. M. (2009). Note onset detection using one semitone filter-bank for MIREX 2009. In MIREX (2009), onset detection contest, Kobe, Japan. [Chapter 5]
Summary

Acknowledgments

First of all, I would like to thank all the members of the computer music group at the University of Alicante for providing an excellent working atmosphere. Especially, the coordinator of the group and supervisor of this work, Jose Manuel Iñesta. His tireless scientific spirit provides an excellent framework for inspiring the new ideas that make us grow and advance continuously. This work would not have been possible without his advice and help.

Writing a thesis is not an easy task without the help of many people. First, I would like to thank all the staff of our Pattern Recognition and Artificial Intelligence Group (GRFIA) and, in general, the whole Department of Computer Languages and Systems (DLSI) of the University of Alicante. My research stays with the Audio Research Group, Tampere University of Technology (Tampere), the Music Technology Group (MTG), Universitat Pompeu Fabra (Barcelona), and the Department of Software Technology and Interactive Systems, Vienna University of Technology (Vienna), have also contributed notably to this work. I have grown a lot, as a scientist and as a person, learning from the members of these research centres.

I would also like to thank the people who have contributed directly to this work. To Francisco Moreno, for postponing some of my teaching duties while this document was being written and for providing the code of the k nearest neighbours algorithms. I have learned most of the music transcription techniques I know from Anssi Klapuri. I will always be grateful for the great time I spent in Tampere and for his generous hosting. Anssi has contributed directly to this dissertation by providing the basis for the sinusoidal likeness measure code and the random chord database that made it possible to evaluate and improve the proposed algorithms.
1 - Introduction

Automatic music transcription consists of extracting the notes that are sounding (the score) from a digital audio signal. In the case of polyphonic transcription, the starting point is an audio signal that may contain several notes sounding simultaneously.

A score is a guide for performing musical information, and it can therefore be represented in different ways. The most widespread representation is the modern notation used in Western tonal music. To extract a comprehensible representation in this notation, besides the notes, their onset times and their durations, it is necessary to indicate the tempo, the key, and the meter.

The most obvious application of score extraction is to help a musician to write down musical notation from sound, which is a complicated task when done by hand. Besides this application, automatic transcription is also useful for other music information retrieval tasks, such as plagiarism detection, author identification, genre classification, and composition assistance by changing the instrumentation or the notes to generate new musical pieces from an existing one. In general, these algorithms can also provide information about the notes in order to apply methods that work on symbolic music.

Automatic music transcription is a music information retrieval task in which several disciplines are involved, such as signal processing, machine learning, computer science, psychoacoustics, music perception, and music theory.

This diversity of factors means that there are many ways of approaching the problem. Most previous works have used different approaches within the signal processing field, applying methodologies for analysis in the frequency domain. In the literature we can find many signal separation algorithms, systems that use learning and classification algorithms to detect the notes, approaches that consider psychoacoustic models of sound perception, or systems that apply musicological models as a measure of the coherence of the detection.

The main part of a music transcription system is the fundamental frequency detection system, which determines the number of notes sounding at each instant, their pitches, and their activation times. Besides the fundamental frequency detection system, obtaining the complete transcription of a musical piece also requires estimating the tempo through beat detection, and obtaining the time signature and the key.

Polyphonic transcription is a complex task which, to date, has not been solved effectively for all kinds of harmonic sounds. The best fundamental frequency detection systems achieve success rates of approximately 60%. It is mainly a problem of decomposing the signals in a mixture, which requires advanced knowledge of digital signal processing, although, due to the nature of the problem, perceptual, psychoacoustic, and musicological factors are also involved.

The transcription process can be divided into two tasks: converting an audio signal into a piano-roll representation, and converting the estimated piano-roll into musical notation.

Many authors only consider automatic transcription as an audio-to-piano-roll conversion, whereas the conversion from piano-roll to musical notation is usually seen as a different problem. The main reason for this is that the processes involved in extracting a piano-roll include pitch detection and temporal segmentation of the notes, which is already a very complex task in itself. The piano-roll to score conversion involves estimating the tempo, quantizing the rhythm, or detecting the key. This stage is more related to the generation of a notation readable by musicians.

This dissertation mainly addresses the detection of fundamental frequencies in polyphonic signals. This is an extremely difficult task to solve, and it has been addressed in numerous doctoral theses.
2 - Background

This chapter introduces the concepts needed for a proper understanding of this work. The concepts and terms related to signal processing methods, music theory, and machine learning are described.

First, a brief introduction is given to the different techniques for the analysis of audio signals based on the Fourier transform, including different time-frequency representations.

Next, the properties of musical signals are analyzed, and the instruments are classified with respect to their sound generation mechanism and their spectral characteristics.

The necessary concepts of music theory are also addressed, describing the temporal and harmonic structures of Western music and their representation using written and computational notation.

Finally, the machine learning techniques used in this work (neural networks and k nearest neighbours) are described.
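As a minimal illustration of the kind of time-frequency representation mentioned above, a magnitude STFT can be computed as follows; the window size and hop are illustrative.

import numpy as np

def stft(x, win_size=2048, hop=512):
    """Magnitude short-time Fourier transform with a Hann window."""
    window = np.hanning(win_size)
    n_frames = 1 + (len(x) - win_size) // hop
    frames = np.stack([x[i * hop:i * hop + win_size] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))    # shape: (frames, bins)

# example: one second of a 440 Hz sinusoid sampled at 44.1 kHz
x = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100.0)
print(stft(x).shape)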
3 - Music transcription

This chapter briefly describes some perceptual characteristics related to the process that a musician follows when transcribing music. Then, the theoretical limitations of automatic transcription are analyzed from the point of view of the analysis and processing of discrete signals.
4 - State of the art

This chapter presents a description of the different previous systems for the estimation of a single fundamental frequency. These methods have been classified into those that analyze the waveform in the time domain, those that analyze the signal in the frequency domain after a Fourier transform, methods based on models of acoustic perception, and probabilistic models.

Subsequently, the review is extended with wider coverage of the methods for the estimation of several simultaneous fundamental frequencies. It is difficult to classify these methods using a single taxonomy, since they are very complex and therefore usually combine different processing techniques. For example, they can be categorized according to their intermediate representation (time domain, short-time Fourier transform, wavelets, perceptual filter banks, etc.), but also with respect to their genericity (some methods need a priori information about the instrument to be transcribed, whereas others can be used to analyze any kind of harmonic sound), their ability to model different timbres (for instance, parametric statistical methods can model envelopes that vary in time or frequency, like those produced by a saxophone, whereas non-parametric ones can only analyze constant spectral patterns, like those produced by a piano), or the way in which they can address the interactions between different harmonics (iterative or joint estimation methods).

In this work, a new categorization has been proposed based on the main methodology followed by the algorithm, instead of the intermediate representation chosen in the single fundamental frequency estimation taxonomy. The strengths and weaknesses of each of these categories have also been discussed and analyzed.

Finally, the same has been done for onset detection systems, classifying them into signal processing methods and machine learning methods.
5 - Onset detection using a harmonic filter bank

This chapter proposes a new method for onset detection. The audio signal is analyzed using a one-semitone band-pass filter bank, and the temporal derivatives of the filtered values are used to detect spectral variations related to the beginning of musical events.

This method relies on the characteristics of harmonic sounds. The first five harmonics of a tuned sound coincide with the frequencies of other notes in the well-tempered tuning used in Western music. Another important characteristic of these sounds is that most of their energy is usually concentrated in the first harmonics.

The one-semitone filter bank consists of a set of triangular filters whose centre frequencies coincide with the musical pitches. During the sustain and release stages of a note, there may be slight variations in the intensity and frequency of the harmonics. In this scenario, a direct spectral comparison can produce false positives.

In contrast, using the proposed filter bank, the effects of the subtle spectral variations produced during the sustain and release stages of a note are minimized, whereas during the attack the filtered amplitudes increase significantly, since most of the energy of the partials is concentrated in the centre frequencies of these bands. In this way, the system is especially sensitive to frequency variations larger than one semitone, and the harmonic properties of the sounds are therefore taken into account.

The method has been evaluated and compared with other works, yielding good results given its simplicity, and achieving high efficiency. The algorithm, developed in C++, and the labeled database built for its evaluation have been made publicly available for future research.
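The claim about the first harmonics and the equal-tempered scale can be checked numerically; the short sketch below prints the deviation, in cents, of the first five harmonics from the nearest equal-tempered pitch.

import numpy as np

def harmonic_deviation_cents(f0=440.0, n_harm=5):
    """Deviation, in cents, of the first harmonics of f0 from the nearest
    equal-tempered pitch (A4 = 440 Hz)."""
    harmonics = f0 * np.arange(1, n_harm + 1)
    midi = 69 + 12 * np.log2(harmonics / 440.0)
    return 100 * (midi - np.round(midi))

print(np.round(harmonic_deviation_cents(), 1))   # approx. [0, 0, 2, 0, -13.7]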
6 - Pitch estimation using supervised learning methods

This chapter proposes a method for pitch detection in musical pieces played by a single instrument with a simple spectral pattern. It starts from the hypothesis that a learning paradigm, such as a neural network, is able to infer a spectral pattern after a training stage and, therefore, to detect the notes of a piece played with that same instrument.
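A toy forward pass of such a time-delay network is sketched below; the filter-bank size, the three-frame context, the hidden layer and the random weights are only illustrative and do not correspond to the architecture used in this work.

import numpy as np

def tdnn_forward(frames, W1, b1, W2, b2, context=3):
    """frames: (n_frames, n_bands) filter-bank outputs. Each output frame sees
    `context` consecutive input frames (the time delays)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    outputs = []
    for t in range(len(frames) - context + 1):
        x = np.concatenate(frames[t:t + context])   # delayed inputs
        h = np.tanh(W1 @ x + b1)                    # hidden layer
        outputs.append(sigmoid(W2 @ h + b2))        # one sigmoid unit per pitch
    return np.array(outputs)                        # (n_frames - context + 1, n_pitches)

# illustrative sizes: 94 bands, 3-frame context, 64 hidden units, 88 pitches
rng = np.random.default_rng(0)
frames = rng.random((10, 94))
W1, b1 = 0.1 * rng.standard_normal((64, 94 * 3)), np.zeros(64)
W2, b2 = 0.1 * rng.standard_normal((88, 64)), np.zeros(88)
print(tdnn_forward(frames, W1, b1, W2, b2).shape)   # (8, 88)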
7 - Fundamental frequency estimation using signal processing methods

Fundamental frequency estimation methods based on supervised learning require aligned audio and symbolic data for their training. These methods therefore depend on the training set, and for this reason many of them need a priori information about the timbre to be analyzed. These systems could possibly generalize and correctly identify the pitches of real sounds if they were trained with a sufficiently large data set, but even then they would still depend on the training data.

In real recordings there may be several instruments sounding simultaneously, which are unknown a priori and which usually present complex spectral patterns. This chapter describes three methods for the detection of fundamental frequencies in polyphonic signals that are completely based on signal processing and do not require any a priori knowledge of the sources.

The first method is based on iterative cancellation and is mainly intended for transcribing piano sounds at a low computational cost. The second one, the joint estimation method I, infers the harmonic patterns of the candidates at each frame, evaluating them according to the properties of harmonic sounds. This methodology is suitable for most harmonic sounds, unlike the iterative cancellation method, which assumes a constant pattern based on the sounds of struck string instruments.

In this joint estimation method, each frame is analyzed independently, yielding the combination of fundamental frequencies that maximizes a score. One of its main limitations is that the spectral information contained in a single frame corresponds to a short time period and, due to the nature of musical signals, in many cases it is insufficient to detect the pitches of the notes, even for expert musicians.

Starting from the hypothesis that the temporal context is important, a second joint estimation method has been proposed, which extends the previous one by considering information from adjacent frames in order to produce a temporally smoothed detection. Additionally, a basic fundamental frequency tracking technique based on a directed acyclic graph has been included, so that more contextual information is taken into account.
The main contributions of this work can be summarized as follows:

- An exhaustive review of the state of the art in onset detection and fundamental frequency estimation in polyphonic signals. The existing methods have been classified into salience functions, iterative cancellation, joint estimation, supervised learning, unsupervised learning, matching pursuit, Bayesian models, statistical spectral models, blackboard systems, and database matching methods. The strengths and weaknesses of each of these categories have been analyzed.

- The development of an efficient system for onset detection and the construction of a labeled data set for this task. The main novelties in this field are the use of a one-twelfth octave filter bank to compress the harmonic information and the simple onset detection functions proposed. The presented method is mainly intended for the detection of percussive onsets, since it detects abrupt energy changes, but it also considers the properties of harmonic signals, which makes the system robust against spectral variations produced during the sustain stage of the sounds. The algorithm has been evaluated and compared with other works, obtaining satisfactory results.

- Two new methods for the estimation of the fundamental frequencies of a priori known instruments using supervised learning techniques. These algorithms were among the first machine learning methods proposed for this task. A harmonic filter bank has been used to reduce the amount of spectral information fed to a time-delay neural network while preserving the main harmonic content. A training and validation set has been generated to evaluate the method. The conclusions drawn from comparing, for this task, the results obtained by the k nearest neighbours and by the time-delay neural networks are also relevant. The neural network clearly improves on the results obtained by the nearest neighbours using synthesized sounds, showing the advantages of networks for generalization in a very large observation space. Alternative activation functions have been proposed to generalize the prototypes obtained with nearest neighbours, but the results remained clearly inferior to those obtained with the neural network.
Bibliography
Abdallah, S. A. and Plumbley, M. D. (2003a). An ICA approach to automatic
music transcription. In Proc. 114th AES Convention. (Cited on page 70).
Abdallah, S. A. and Plumbley, M. D. (2003b). Probability as metadata: Event
detection in music using ICA as a conditional density model. In Proc. of the
Fourth International Symposium on Independent Component Analysis (ICA),
pages 233-238, Nara, Japan. (Cited on page 81).
Abdallah, S. A. and Plumbley, M. D. (2004). Polyphonic music transcription
by non-negative sparse coding of power spectra. In Proc. of the 5th
International Conference on Music Information Retrieval (ISMIR), pages
318-325, Barcelona, Spain. (Cited on pages 70 and 77).
Ahmed, N., Natarajan, T., and Rao, K. (1974). Discrete cosine transform. IEEE
Trans. on Computers, 23:90-93. (Cited on page 16).
American Standards Association (1960). American standard acoustical terminology. Definition 12.9, Timbre. (Cited on page 19).
Bay, M., Ehmann, A. F., and Downie, J. S. (2009). Evaluation of multiple-f0 estimation and tracking systems. In Proc. of the 10th International
Conference on Music Information Retrieval (ISMIR), pages 315-320. (Cited
on pages xiii, 157, 162, 165, 166, and 167).
Beauchamp, J. W., Maher, R. C., and Brown, R. (1993). Detection of musical
pitch from recorded solo performances. In Proc. 1993 Audio Engineering
Society Convention, pages 1-15, Berlin, Germany. Preprint 3541. (Cited on
page 48).
Bello, J. P. (2000). Blackboard system and top-down processing for the
transcription of simple polyphonic music. In Proc. of the COST G-6
Conference on Digital Audio Effects (DAFx), Verona, Italy. (Cited on pages
74, 75, and 77).
Bello, J. P. (2004). Towards the Automated Analysis of Simple Polyphonic
Music: A Knowledge-based Approach. PhD thesis, University of London, UK.
(Cited on page 4).
Bello, J. P., Daudet, L., Abdallah, S., Duxbury, C., Davies, M., and Sandler,
M. B. (2005). A tutorial on onset detection in music signals. IEEE Trans. on
Speech and Audio Processing, 13(5):1035-1047. (Cited on pages 52, 53, 77,
and 78).
Cemgil, A. T., Kappen, B., and Barber, D. (2003). Generative model based
polyphonic music transcription. In IEEE Workshop on Applications of Signal
Processing to Audio and Acoustics, pages 181-184. (Cited on pages 1 and 72).
Cemgil, A. T., Kappen, H. J., and Barber, D. (2006). A generative model for
music transcription. IEEE Trans. on Audio, Speech and Language Processing,
14(2):679-694. (Cited on page 72).
Chang, W. C., Su, A. W. Y., Yeh, C., Roebel, A., and Rodet, X. (2008).
Multiple-F0 tracking based on a high-order HMM model. In Proc. of the 11th
Int. Conference on Digital Audio Effects (DAFx), Espoo, Finland. (Cited on
pages 66, 168, and 173).
Cohen, L. (1995). Time-frequency analysis. Prentice Hall. (Cited on page 66).
Collins, N. (2005a). A change discrimination onset detector with peak scoring
peak picker and time domain correction. In MIREX (2005), onset detection
contest. (Cited on page 99).
Collins, N. (2005b). A comparison of sound onset detection algorithms with
emphasis on psychoacoustically motivated detection functions. In AES
Convention 118, pages 28-31, Barcelona. (Cited on pages 77 and 79).
Comon, P. (1994). Independent component analysis, a new concept? Signal
Processing, 36:287-314. (Cited on page 70).
Cont, A. (2006). Realtime multiple pitch observation using sparse non-negative constraints. In Proc. of the 7th International Symposium on Music
Information Retrieval (ISMIR), Victoria, Canada. (Cited on page 69).
Cont, A. (2007). Real-time transcription of music signals: MIREX 2007
submission description. In MIREX (2007), multiple f0 estimation and
tracking contest. (Cited on pages 159 and 160).
Cont, A. (2008). Modeling Musical Anticipation: From the time of music to the
music of time. PhD thesis, University of Paris VI and University of California
in San Diego. (Cited on page 44).
Cover, T. and Hart, P. (1967). Nearest neighbor pattern classification. IEEE
Trans. on Information Theory, 13(1):21-27. (Cited on page 40).
Daniel, A., Emiya, V., and David, B. (2008). Perceptually-based evaluation
of the errors usually made when automatically transcribing music. In Proc.
of the 9th Int. Conference on Music Information Retrieval (ISMIR), pages
550-555, Philadelphia, PA. (Cited on page 52).
Dixon, S. (2006). Onset detection revisited. In Proc. of the Int. Conf. on Digital
Audio Effects (DAFx), pages 133-137, Montreal, Canada. (Cited on pages
77, 78, 79, 91, 92, and 140).
Doval, B. (1994). Estimation de la Fréquence Fondamentale des signaux sonores.
PhD thesis, Université Paris VI, Paris. (Cited on page 121).
Doval, B. and Rodet, X. (1993). Fundamental frequency estimation and tracking
using maximum likelihood harmonic matching and HMMs. In International
Conference on Acoustics, Speech and Signal Processing (ICASSP), volume 1,
pages 221-224. (Cited on page 58).
Dubnowski, J. J., Schafer, R. W., and Rabiner, L. R. (1976). Real-time
digital hardware pitch detector. IEEE Trans. Acoustics, Speech, and Signal
Processing (ASSP), 24:2-8. (Cited on page 55).
Dubois, C. and Davy, M. (2005). Harmonic tracking using sequential Monte
Carlo. In IEEE/SP 13th Workshop on Statistical Signal Processing, pages
1292-1296, Bordeaux, France. (Cited on page 72).
Dubois, C. and Davy, M. (2007). Joint Detection and Tracking of Time-Varying
Harmonic Components: A Flexible Bayesian Approach. IEEE Trans. on
Audio, Speech and Language Processing, 15(4):1283-1295. (Cited on page
72).
Duda, R., Lyon, R., and Slaney, M. (1990). Correlograms and the separation
of sounds. In Proc. IEEE Asilomar Conference on Signals, Systems and
Computers. (Cited on page 60).
Duda, R. O., Hart, P. E., and Stork, D. G. (2000). Pattern Classification. John
Wiley and Sons. (Cited on pages xi, 38, 39, 40, 41, and 104).
Durrieu, J. L., Richard, G., and David, B. (2008). Singer melody extraction
in polyphonic signals using source separation methods. In Proc of the
IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), pages 169-172, Las Vegas, NV. (Cited on page 163).
Duxbury, C., Sandler, M., and Davies, M. (2002). A hybrid approach to musical
note onset detection. In Proc. Digital Audio Effects Conference (DAFx), pages
33-38, Hamburg, Germany. (Cited on pages 78, 80, and 83).
Egashira, K., Ono, N., and Sagayama, S. (2008). Sequential estimation
of multiple fundamental frequencies through harmonic-temporal-structured
clustering. In MIREX (2008), multiple f0 estimation and tracking contest.
(Cited on pages 163 and 164).
Kosuke, I., KenIchi, M., and Tsutomu, N. (2003). Ear advantage and
consonance of dichotic pitch intervals in absolute-pitch possessors. Brain and
cognition, 53(3):464-471. (Cited on page 31).
Krstulovic, S. and Gribonval, R. (2006). MPTK: Matching Pursuit made
tractable. In Proc. Int. Conf. on Acoustics, Speech, and Signal Processing
(ICASSP), volume III, pages 496-499, Toulouse, France. (Cited on page 70).
Krumhansl, C. (2004). The cognition of tonality - as we know it today. Journal
of New Music Research, 33(3):253-268. (Cited on pages xi, 30, and 31).
Lacoste, A. and Eck, D. (2005). Onset detection with artificial neural networks.
In MIREX (2005), onset detection contest. (Cited on page 78).
Lacoste, A. and Eck, D. (2007). A supervised classification algorithm for note
onset detection. EURASIP Journal on Advances in Signal Processing. (Cited
on page 81).
Lahat, M., Niederjohn, R., and Krubsack, D. (1987). A spectral autocorrelation
method for measurement of the fundamental frequency of noise-corrupted
speech. IEEE Trans. on Acoustics, Speech and Signal Processing, 35(6):741-750. (Cited on page 57).
Large, E. W. and Kolen, J. F. (1994). Resonance and the perception of musical
meter. Connection Science, 6:279-312. (Cited on page 67).
Lee, D. D. and Seung, H. (1999). Learning the parts of objects by non-negative
matrix factorization. Nature, 401:788-791. (Cited on page 69).
Lee, W.-C. and Kuo, C.-C. J. (2006). Musical onset detection based on adaptive
linear prediction. IEEE International Conference on Multimedia and Expo,
pages 957-960. (Cited on page 78).
Lee, W.-C., Shiu, Y., and Kuo, C.-C. J. (2007). Musical onset detection
with linear prediction and joint features. In MIREX (2007), onset detection
contest. (Cited on page 79).
Lerdahl, F. and Jackendoff, R. (1983). A Generative Theory of Tonal Music.
MIT Press, Cambridge. (Cited on page 33).
Leveau, P. (2007).
A multipitch detection algorithm using a sparse
decomposition with instrument-specific harmonic atoms. In MIREX (2007),
multiple f0 estimation and tracking contest. (Cited on page 160).
Leveau, P., Vincent, E., Richard, G., and Daudet, L. (2008). Instrument-specific
harmonic atoms for mid-level music representation. IEEE Trans. on Audio,
Speech, and Language Processing, 16(1):116-128. (Cited on pages xi and 71).
Röbel, A. (2009). Onset detection by means of transient peak classification in
harmonic bands. In MIREX (2009), onset detection contest. (Cited on pages
94, 95, 96, and 97).
Rodet, X. (1997). Musical sound signals analysis/synthesis: Sinusoidal+residual
and elementary waveform models. In Proc. of the IEEE Time-Frequency and
Time-Scale Workshop (TFTS97), Coventry, GB. (Cited on page 121).
Rodet, X., Escribe, J., and Durignon, S. (2004). Improving score to audio
alignment: Percussion alignment and precise onset estimation. In Proc. of
Int. Computer Music Conference (ICMC), pages 450453. (Cited on page
53).
Ruiz-Reyes, N., Vera-Candeas, P., Cañadas Quesada, F. J., and Carabias, J. J.
(2009). Fast communication: New algorithm based on spectral distance
maximization to deal with the overlapping partial problem in note-event
detection. Signal Processing, 89(8):1653-1660. (Cited on page 71).
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning
representations by back-propagating errors. Nature, 323:533-536. (Cited
on pages 39 and 104).
Ryynänen, M. (2006). Singing transcription. In Klapuri and Davy (2006),
chapter 12. (Cited on page 26).
Ryynänen, M. (2008). Automatic Transcription of Pitch Content in Music and
Selected Applications. PhD thesis, Tampere University of Technology. (Cited
on page 4).
Ryynänen, M. and Klapuri, A. (2004). Modelling of note events for singing
transcription. In Proc. ISCA Tutorial and Research Workshop on Statistical
and Perceptual Audio, page 6. MIT Press. (Cited on pages xi, 56, 61, 64,
and 65).
Ryynänen, M. and Klapuri, A. (2005). Polyphonic music transcription using
note event modeling. In Proc. IEEE Workshop on Applications of Signal
Processing to Audio and Acoustics (WASPAA), pages 319-322, New Paltz,
New York, USA. (Cited on pages xi, 51, 64, 74, 120, 159, 160, 163, 164, 167,
and 169).
Sachs, C. (1940). The history of Musical Instruments. Norton, New York.
(Cited on page 20).
Sano, H. and Jenkins, B. K. (1989). A neural network model for pitch perception.
Computer Music Journal, 13(3):41-48. (Cited on page 102).
Walmsley, P., Godsill, S., and Rayner, P. (1999). Polyphonic pitch tracking
using joint Bayesian estimation of multiple frame parameters. In Proc.
IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
(WASPAA), pages 119-122, New Paltz, NY. (Cited on page 72).
Wan, J., Wu, Y., and Dai, H. (2005). A harmonic enhancement based multipitch
estimation algorithm. In IEEE International Symposium on Communications
and Information Technology (ISCIT) 2005, volume 1, pages 772-776. (Cited
on page 65).
Wang, W., Luo, Y., Chambers, J. A., and Sanei, S. (2008). Note onset detection
via nonnegative factorization of magnitude spectrum. EURASIP Journal on
Advances in Signal Processing. (Cited on page 81).
Wessel, D. L. (1979). Timbre space as a musical control structure. Computer
Music Journal, 3(2):45-52. (Cited on page 19).
Wood, A. (2008). The physics of music. Davies Press. (Cited on page 47).
Woodruff, J., Li, Y., and Wang, D. (2008). Resolving overlapping harmonics
for monaural musical sound separation using pitch and common amplitude
modulation. In Proc. of the International Symposium on Music Information
Retrieval (ISMIR), pages 538-543, Philadelphia, PA. (Cited on page 130).
Yeh, C. (2008). Multiple fundamental frequency estimation of polyphonic
recordings. PhD thesis, Université Paris VI - Pierre et Marie Curie. (Cited
on pages xi, 4, 45, 46, 48, 65, 76, 128, 160, and 173).
Yeh, C., Röbel, A., and Rodet, X. (2005). Multiple fundamental frequency
estimation of polyphonic music signals. In IEEE Int. Conf. on Acoustics,
Speech and Signal Processing (ICASSP), volume III, pages 225-228,
Philadelphia, PA. (Cited on pages 65, 126, 127, 129, and 130).
Yeh, C. and Roebel, A. (2006). Adaptive noise level estimation. In Proc. of
the 9th Int. Conference on Digital Audio Effects (DAFx), Montreal, Canada.
(Cited on page 65).
Yeh, C. and Roebel, A. (2009). The expected amplitude of overlapping partials
of harmonic sounds. In Proc. of the International Conference on Acoustics,
Speech and Signal Processing (ICASSP), Taipei, Taiwan. (Cited on page 47).
Yeh, C., Roebel, A., and Chang, W. C. (2008). Multiple F0 estimation for
MIREX 08. In MIREX (2008), multiple f0 estimation and tracking contest.
(Cited on pages 163, 164, and 167).
Yeh, C., Roebel, A., and Rodet, X. (2006). Multiple f0 tracking in solo
recordings of monodic instruments. In Proc. of the 120th AES Convention,
Paris, France. (Cited on pages 48 and 66).
Yin, J., Sim, T., Wang, Y., and Shenoy, A. (2005). Music transcription using
an instrument model. In Proc. of the IEEE International Conference on
Acoustics, Speech, and Signal Processing (ICASSP), volume III, pages 217-220. (Cited on page 65).
Young, S., Kershaw, D., Odell, J., Ollason, D., Valtchev, V., and Woodland, P.
(2000). The HTK book (for HTK version 3.1). Cambridge University. (Cited
on page 87).
Young, S. J., Russell, N. H., and Thornton, J. H. S. (1989). Token passing: a
simple conceptual model for connected speech recognition systems. Technical
report, Cambridge University Engineering Department. (Cited on page 65).
Zhou, R. (2006). Feature Extraction of Musical Content for Automatic Music Transcription. PhD thesis, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland.