Analysing the impact of reverberation on expressivity in ensemble performance
Music & Speech Modelling 2018
Abstract
It is widely observed that reverberation time has an impact on musical performance. While this impact has been analysed objectively for soloists, ensemble performances are mainly analysed via subjective self-report. This project presents a study of two keyboard players performing in ten simulated reverberation conditions and analyses the changes in tempo with a methodology that can be applied to other changes in performance. The study presents an interesting finding on error and adjustment in performance, with errors increasing in dry and highly reverberant conditions.
Introduction
Performing musicians experience variable acoustics from venue to venue, and acoustic design has historically prioritised the audience over the musician, to the detriment of performers' working conditions. While stage and in-ear monitoring mitigate many of these issues for amplified performers, an acoustic ensemble remains at the mercy of the natural room acoustic, which influences many factors including agogics, dynamic strength, dynamic bandwidth, tempo, and timbre.
This study focuses on tempo as its measure: the tempo (speed) chosen by the duo for the piece. A previous study of a duo by Kato et al. [1] showed that tempo was reduced for both short and long reverberation times. While this is consistent with the idea of 'comfort reverb' used in recording studios, which apply a certain degree of reverberation to support performers, does it hold true in more extreme reverberation environments, and when the performers are located in the same space?
Literature Review
In the acoustic design of performance spaces, the majority of work focuses on the audience, and work that does examine the impact of acoustic environments on performers is either subjective in approach or concentrates on solo instrumentalists [2]–[5]. In the literature reviewed for this project, studies generally only analyse the performance of solo musicians in a non-subjective manner, with larger ensembles analysed only through reflective self-report. A key series of studies in this area by Ueno, Kato and Kawai [1], [6], [7] examines the playing of soloists in a range of simulated acoustic environments and attempts to quantify the adjustment of tone, tempo, note length, the silence between notes, articulation, dynamic level, dynamic range, harmonics strength, pitch tuning, and vibrato extent in relation to the acoustic environment. This work goes into some interesting detail and correlates the measured changes in the audio with the performers' subjective opinions, showing considerable agreement between the two. It found that tempo, note length, silence, articulation, dynamic level, dynamic range, harmonics strength, pitch tuning, and vibrato all varied significantly with room acoustic conditions.
However, these studies, along with others by Kalkandjiev [8], [9] and Amengual Garí [10], have limitations. The first is the focus on soloists rather than ensembles: the solo case is widely covered, but much music relies on the ensemble for its performance, and ensembles have not been subjected to the same level of study. The second is that where multiple performers have been studied, as in Kato et al. [1], they have been located in separate spaces. While this makes logistical sense when using small anechoic chambers, it raises an issue of ecological validity, as most musical performance takes place in a shared physical space, which is something I have sought to address in this study.
Procedure for Data Collection
For this study two experienced pianists were recruited from the Centre for Digital Music at Queen Mary University of London. They were given 48 hours to prepare Gershwin's Piano Concerto in F, II. Adagio - Andante con moto, in an arrangement for two pianos [11]. The musicians were asked to learn bars 0-63.
To make the study easily replicable, readily available commercial equipment was used. The keyboards were M-Audio Axiom 61 controllers driving the Simpler Grand Piano instrument in Ableton Live 9 Suite, chosen because it is a polyphonic sample-based instrument that provides a realistic piano sound. The ten reverberation conditions were rendered with the Convolution Reverb Pro device for Max/MSP [12], alongside a dry condition with no processing, and were presented over two Beyerdynamic DT 150 headphones. These were patched through a Focusrite Scarlett Solo audio interface and a Samson S-Amp so that both the participants and the investigator could hear the performance. The impulse responses were selected from the OpenAIR library [13] and were chosen to represent a wide range of conditions for the musicians; they are listed in the appendix.
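As an illustration of the underlying processing, the following sketch shows an offline equivalent of the real-time convolution performed by the Max device: convolving a dry keyboard recording with a room impulse response. The file names are hypothetical, and the sketch assumes mono WAV files at a common sample rate; it is not the code used in the experiment.

% Offline sketch of convolution reverberation (the experiment used the real-time Max device).
% 'dry_piano.wav' and 'ir_sports_centre.wav' are hypothetical file names.
[dry, fs]  = audioread('dry_piano.wav');        % dry keyboard signal
[ir, fsIR] = audioread('ir_sports_centre.wav'); % room impulse response
assert(fs == fsIR, 'Signal and impulse response must share a sample rate');
wet = conv(dry(:, 1), ir(:, 1));                % time-domain convolution
wet = wet / max(abs(wet));                      % normalise to avoid clipping
audiowrite('wet_piano.wav', wet, fs);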
The conditions were delivered in a randomised order. Because the piece was unfamiliar to the performers, there was a concern that their improving performance over the session could confound the results if the conditions were presented in a fixed order, for example from no reverberation to the longest reverberation time.
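A minimal sketch of generating such a randomised presentation order is shown below; the condition labels are illustrative (the full list of impulse responses is in the appendix).

% Shuffle the presentation order of the conditions (labels are illustrative).
conditions = {'Dry', 'Ing', 'Basement', 'Mh', 'SportsCentre'};
order = conditions(randperm(numel(conditions)));
disp(order)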
The performances were captured as MIDI on separate channels, creating a discrete stream of data for each performer for further analysis. The MIDI data was then synchronised and rendered to audio to allow annotation.
Due to time constraints, the annotation was conducted by the investigator using Sonic Visualiser [14] with the BeatRoot plugin [15] to extract the beat timings of the performances. The data was then exported and analysed in MATLAB using bespoke code.
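The sketch below illustrates the kind of bespoke code used, under the assumption that the BeatRoot annotations are exported from Sonic Visualiser as a one-column CSV of beat times in seconds ('beats_condition01.csv' is a hypothetical file name); it converts inter-beat intervals into an instantaneous tempo curve.

% Convert exported beat times (in seconds) into an instantaneous tempo curve.
beatTimes = csvread('beats_condition01.csv');  % one beat time per row
ibi   = diff(beatTimes);                       % inter-beat intervals (s)
tempo = 60 ./ ibi;                             % tempo in beats per minute
plot(beatTimes(2:end), tempo)
xlabel('Time (s)'); ylabel('Tempo (BPM)')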
Results
Initial analysis was conducted on the overall tempo of the performances to compare the different conditions. This was output in both performance time and score time to allow comparison. Despite the impressions of the performers and the investigator, there was very little significant change in the overall length of the performances. There was, however, a range of tempo changes within each performance, so the maximum, minimum and mean tempos were calculated for each condition to give an overall picture of the performances.
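A sketch of this summary step is shown below, assuming tempoByCond is a cell array holding one tempo vector per condition, built from the beat annotations as above.

% Minimum, mean and maximum tempo per condition (tempoByCond is an assumed variable).
nCond        = numel(tempoByCond);
tempoSummary = zeros(nCond, 3);
for k = 1:nCond
    t = tempoByCond{k};
    tempoSummary(k, :) = [min(t), mean(t), max(t)];
end
disp(array2table(tempoSummary, 'VariableNames', {'MinBPM', 'MeanBPM', 'MaxBPM'}))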
This analysis revealed an unexpected result: the highest reverberation condition (the Sports Centre, with an RT of 21.20 seconds) produced a mean tempo of 163 BPM, compared to a mean tempo of ~187 BPM for the Cripta di San Sebastiano (marked as Ing on the graph, RT of 0.53 s). The completely dry condition (no reverberation) sat at a medium tempo (~158 BPM), and the lowest mean tempo (~147 BPM) was found in the Basement condition (RT of 2.67 s).
The distribution of tempo in each reverberation condition was analysed next, as seen in the figure below.
Discounting the lowest values, the plots reveal a clear difference in the number of tempo adjustments taking place over the duration of the performance. The mid-range condition, the right stalls of the Central Hall, University of York (coded as Mh on the graph and highlighted in red), has significantly fewer outlier values, which equates to less adjustment than in the other conditions.
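The distributions can be compared with box plots, with outlier values counted using the standard 1.5 * IQR box-plot rule (the default whisker rule in MATLAB's boxplot); a sketch of this analysis follows, again assuming tempoByCond and a matching cell array of condition labels condLabels, and requiring the Statistics and Machine Learning Toolbox.

% Box plots of tempo per condition, and outliers counted with the 1.5*IQR rule.
allTempi = vertcat(tempoByCond{:});
groups   = repelem(1:numel(tempoByCond), cellfun(@numel, tempoByCond))';
boxplot(allTempi, groups, 'Labels', condLabels)
ylabel('Tempo (BPM)')

outlierPct = zeros(numel(tempoByCond), 1);
for k = 1:numel(tempoByCond)
    t      = tempoByCond{k};
    q      = quantile(t, [0.25 0.75]);
    iqrVal = q(2) - q(1);
    out    = t < q(1) - 1.5 * iqrVal | t > q(2) + 1.5 * iqrVal;
    outlierPct(k) = 100 * mean(out);            % outliers as a percentage of tempo values
end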
Presented below is a table showing the outliers as a percentage of the tempo values, ordered from the dry to the most reverberant condition, as in the previous figures. The performers' responses are also shown: after playing each condition they rated how easy they found it (1 being easy, 5 being difficult), and these ratings are listed in the order in which the conditions were presented. There is some correspondence between the number of outliers and the self-reported difficulty.
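One way to quantify this relationship would be a rank correlation between the outlier percentages and the difficulty ratings; the sketch below assumes outlierPct from the previous step and a vector difficulty of ratings in the same condition order (and again requires the Statistics and Machine Learning Toolbox).

% Rank correlation between tempo outlier percentage and self-reported difficulty.
[rho, p] = corr(outlierPct, difficulty(:), 'Type', 'Spearman');
fprintf('Spearman rho = %.2f, p = %.3f\n', rho, p)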
Discussion
The changing tempo of a piano duo playing in the same room across the different conditions does not follow the pattern of the literature, and further investigation with a new data set will be required before any generalisation can be made. While there are a number of limitations to this particular study, it does raise some questions about the impact of reverberation on musical attention and learning which will need further investigation. The performers in this study were under-rehearsed and the experiment was the first time they had attempted the piece, which would account for some of the adjustments, but there is a correlation between the reverberation and the tempo deviations. From the experiment protocol it would be assumed that difficulty and the degree of tempo adjustment would rise together, which is consistent with the relationship between the outlier counts and the self-reported difficulty noted above.
This is a proof-of-concept study, and as such a number of limitations should be acknowledged. The first is that the performers' limited familiarity with the music will have affected the results; future studies should use performers who have played together on a number of occasions and have a greater familiarity with the piece. The second is the instruments used by the performers: unfortunately we were unable to use expression pedals for this experiment, and future studies should use 88-key weighted keyboards with expression pedals. The third is the method of delivery of the reverberation conditions: future studies should employ an anechoic chamber that can seat more than one performer with acoustic pianos, and use spatial audio delivered through a loudspeaker array. In addition, the reverberation conditions should use impulse responses captured from stage positions, which are not freely available at the time of writing.
Conclusion and Further Work
In conclusion, this study presents results which run contrary to the literature and will require further examination. However, it has revealed a possible relationship between reverberation time and tempo adjustment, with increased adjustment at low and, in particular, high reverberation times.
Further work will be to run this experiment again with well-rehearsed performers. In addition, there is a task to develop a series of room impulse responses captured from stage positions, and to increase the scope of the research to encompass a piano trio, which would allow analysis of different types of audio signal in an ensemble environment. These results should then be compared with performances in the real environments to see whether the trends are connected to the acoustic or to the methodology.
Appendix
1. List of Room Impulse Responses Used.
*Please note this was not reported in the final study due to an error in the recording.
2. Highlights of Reverberation Analysis of Impulse Responses Used.
Acknowledgements
I would like to acknowledge the following people for their support with this work: Professor Elaine Chew, Edward Hall, Beici Lang and Giulio Moro. I would also like to thank my colleagues on the Music & Speech Modelling Module who have been a pleasure to work alongside.
My studies are supported by the EPSRC+AHRC Media and Arts Technology Centre for Doctoral Training at Queen Mary University of London.
Bibliography
[1] K. Kato, K. Ueno, and K. Kawai, “Effect of Room Acoustics on Musicians’ Performance. Part II: Audio Analysis of the Variations in Performed Sound Signals,” Acta Acust. united with Acust., vol. 101, no. 4, pp. 743–759, Jul. 2015.
[2] J. Y. Jeon, Y. S. Kim, H. Lim, and D. Cabrera, “Preferred positions for solo, duet, and quartet performers on stage in concert halls: In situ experiment with acoustic measurements,” Build. Environ., vol. 93, no. P2, pp. 267–277, 2015.
[3] A. C. Gade, “Investigations of musicians’ room acoustic conditions in concert halls. Part I: Methods and laboratory experiments,” Acustica, vol. 69, pp. 193–203, 1989.
[4] K. Ueno and H. Tachibana, “Experimental study on the evaluation of stage acoustics by musicians using a 6-channel sound simulation system,” Acoust. Sci. Technol., vol. 24, no. 3, pp. 130–138, 2003.
[5] S. Bolzinger, O. Warusfel, and E. Kahle, “A study of the influence of room acoustics on piano performance,” J. Phys. IV, vol. 4, no. C5, pp. 617–620, 1994.
[6] K. Kato, K. Ueno, and K. Kawai, “Musicians’ Adjustment of Performance to Room Acoustics, Part III: Understanding the Variations in Musical Expressions,” Acoust. ’08, Paris, vol. 123, no. 5, p. 3610, 2008.
[7] K. Ueno, K. Kato, and K. Kawai, “Effect of room acoustics on musicians’ performance. part I: Experimental investigation with a conceptual model,” Acta Acust. united with Acust., vol. 96, no. 3, pp. 505–515, May 2010.
[8] Z. S. Kalkandjiev and S. Weinzierl, “Room acoustics viewed from the stage: Solo performers’ adjustments to the acoustical environment,” in Proc. International Symposium on Room Acoustics (ISRA), 2013, pp. 1–10.
[9] Z. Schärer Kalkandjiev, “The Influence of Room Acoustics on Solo Music Performances. An Empirical Investigation,” 2015.
[10] S. V. Amengual Garí, T. Lokki, and M. Kob, “Live performance adjustments of solo trumpet players due to acoustics,” in Proc. International Symposium on Musical and Room Acoustics (ISMRA), 2016.
[11] “Piano Concerto in F - George Gershwin | Sheet music for Piano and Keyboard | MuseScore.” [Online]. Available: https://musescore.com/hmscomp/gershwin-piano-concerto-in-f-ii. [Accessed: 14-Mar-2018].
[12] A. Harker and P. A. Tremblay, “The HISSTools Impulse Response Toolbox: Convolution for the Masses,” in ICMC 2012: Non-cochlear Sound. The International Computer Music Association, 2012, pp. 148–155.
[13] D. Murphy and S. Shelley, “OpenAIR: The Open Acoustic Impulse Response Library,” 2017. [Online]. Available: http://www.openairlib.net/. [Accessed: 27-Feb-2018].
[14] C. Cannam, C. Landone, and M. Sandler, “Sonic Visualiser: An open source application for viewing, analysing, and annotating music audio files,” in Proceedings of the ACM Multimedia 2010 International Conference, 2010, pp. 1467–1468.
[15] S. Dixon, “Evaluation of the Audio Beat Tracking System BeatRoot,” J. New Music Res., vol. 36, no. 1, pp. 39–50, 2007. [Online]. Available: http://www.eecs.qmul.ac.uk/~simond/pub/2007/jnmr07.pdf. [Accessed: 14-Mar-2018].