Voice Assignment in Vocal Quartets Using Deep Learning Models Based on Pitch Salience
Blog Article
This paper deals with the automatic transcription of four-part, a cappella singing performances. In particular, we exploit an existing deep-learning-based multiple-F0 estimation method and complement it with two neural network architectures for voice assignment (VA), creating a music transcription system that converts an input audio mixture into four pitch contours. To train our VA models, we create a novel synthetic dataset by collecting 5381 choral music scores from public-domain music archives, which we make publicly available for further research.
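To make the VA task concrete, the sketch below shows a deliberately naive, rule-based assignment: each frame's detected F0s are ordered by pitch and mapped highest-to-lowest onto soprano, alto, tenor, and bass. This is only an illustrative baseline under that simplifying assumption, not the paper's learned CNN/ConvLSTM models, which instead operate on pitch-salience representations.

```python
# Naive frame-wise voice assignment for SATB quartets:
# order each frame's detected F0s by pitch and map the highest to
# soprano and the lowest to bass. Illustrative only; the paper's
# models learn this assignment from pitch-salience inputs.

VOICES = ["soprano", "alto", "tenor", "bass"]

def assign_voices(frame_f0s):
    """frame_f0s: list of per-frame lists of detected F0s in Hz.
    Returns a dict mapping each voice to its pitch contour,
    with None where no F0 is available for that voice."""
    contours = {voice: [] for voice in VOICES}
    for f0s in frame_f0s:
        ordered = sorted(f0s, reverse=True)  # highest pitch first
        for i, voice in enumerate(VOICES):
            contours[voice].append(ordered[i] if i < len(ordered) else None)
    return contours

# Two frames: a full G-major-ish chord, then only two active voices.
frames = [[196.0, 294.0, 392.0, 523.3], [220.0, 330.0]]
contours = assign_voices(frames)
```

A pitch-ordering rule like this breaks down exactly where the paper reports the most confusion: alto and tenor frequently cross in pitch, so frame-wise ordering mislabels them, which motivates learned, context-aware assignment.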
We compare the performance of the proposed VA models on different types of input data, as well as against a hidden-Markov-model-based baseline system. In addition, we assess the generalization capabilities of these models on audio recordings with differing pitch distributions and vocal music styles. Our experiments show that the two proposed models, a CNN and a ConvLSTM, perform very similarly, and both outperform the baseline HMM-based system.
We also observe a high confusion rate between the alto and tenor voice parts, which commonly have overlapping pitch ranges, while the bass voice achieves the highest scores in all evaluated scenarios.