Students on the MA Music Production, MA Music Psychology, and MSc Audio and Music Technology programmes have been preparing for their dissertation projects. The YMMS symposium serves as a platform for students to showcase this work.
Presentations will take place from 9am to 5pm on June 7th and 8th. A detailed schedule is provided here.
KEYNOTE PRESENTATION
Musicians face challenges when using stereo headphones to perform with one another, due to a lack of audio intelligibility and the loss of their usual benchmarks. Moreover, high click-track levels in headphone mixes hinder performance subtleties and harm performers’ aural health. Leonard Menon will discuss the approaches and outcomes of eight case studies, conducted in professional settings by a team of four sound engineer-researchers, that compared the experiences of orchestra conductors and instrumentalists monitoring their performances through binaural versus stereo headphones. These studies assessed three solutions combining augmented and mixed reality technologies: binaural with head tracking to conduct a large film-scoring orchestra and symphonic jazz ensemble with a click track; binaural without head tracking to improvise in a trio or over previously recorded takes in the studio; and active binaural headphones to record diverse genres to a click track or soundtrack. The findings concur in showing that the better audio intelligibility and natural-sounding acoustics recreated through binaural rendering enhance performers’ listening comfort, perception of a realistic auditory image, and musical expression by increasing their feeling of immersion. The findings also demonstrate that the reduction of source-masking effects in binaural versus stereo headphone mixes lets performers monitor the click track at lower levels, protecting both their creative experience and their aural health. Menon will conclude the talk with an overview of recent developments in 3D monitoring technology.
Student Presentations
Music has been shown to serve a myriad of functions that cater to individuals' social, cognitive, and emotional needs (North et al., 2004; Rana & North, 2007). With the rise of digital music streaming services, researchers have been able to investigate how people curate their own personal listening experiences and further study the motivations behind these functions through the Uses and Gratifications of Music model (Lonsdale & North, 2011). While most previous research has focused on a single influential factor (e.g. listening device, location, individual characteristics), the current research takes a holistic approach to explore how streaming service users utilise playlists in their everyday music listening behaviours. By implementing self-report measures and an Experience Sampling Method (ESM) application, this research aims to collect real-time data on playlist curation, listening context, emotional state, and each listener's musical engagement. It is predicted that the curation and intended audience of each playlist will influence the function each type of playlist serves for everyday users.
Which came first, the stimulus or the response? Classical theories postulate that emotions are differentiated by their individual mental states, wherein anger, sadness, disgust, fear, and happiness are innate reactions to stimuli outside of cognitive control (Tracy & Randles, 2011). However, Barrett (2006, 2017) proposes that emotions are constructions derived from subjective experience, predicted by the brain as it simulates experiences you have or might have. Research on the direct causal relationship between music and affect often reports vastly different emotional responses, yet listeners routinely agree when labelling musical emotions (Krumhansl, 2002). Konečni (1991, 2003, 2008, 2010) instead poses the question: is the listener's choice of music dependent on their current affect? This study aims to provide empirical evidence regarding the relationships between a listener's affective state, their musical choices, and musically induced emotions.
Music education in the UK has long been centred around the study of historic European classical music. Many studies within the field of Music Psychology have investigated the various ways in which the human brain responds to so-called classical music, and how those responses may differ according to the listening context, but very few have examined the effect of music by specific composers. Bach's music is still performed regularly around the world - arguably more so than any other composer's - and continues to be regarded as essential repertoire by instrumental teachers in many disciplines. This study seeks to understand why that may be, and aims to determine whether there is an argument for the continued hagiolatry of J. S. Bach, despite the welcome drive towards the diversification of music education.
Presentation link: https://www.youtube.com/watch?v=agDtpJKumNk
This presentation will focus on the birth of grime, how to approach selling beats in the hip-hop industry and female power in grime/hip-hop. Using video interviews and recorded conversations from footage of our time spent with The Ruff Sqwad we aim to give an enjoyable and informative presentation!
Music Performance Anxiety (MPA) is an almost ubiquitous phenomenon for musicians and music students. Kenny (2011) argued that MPA may be related to individuals' self-concept, self-efficacy, self-esteem, and locus of control. Existing studies have discussed the correlations between musicians' self-efficacy, self-esteem, and anxiety about music performance; however, while these studies provide constructive knowledge, their differing demographics and focuses leave an incomplete picture. In addition, although the relationships between locus of control, self-concept, and music practice have been discussed in existing studies, they have not been investigated in relation to MPA. Therefore, this study investigates musicians' performance anxiety in higher education along the dimensions of self-concept, self-esteem, locus of control, and self-efficacy. Members of three university symphony orchestras and choirs will be recruited for this study, including professional and non-professional musicians.
Nature sounds, such as birdsong, have long inspired composers and appealed to our senses. Birdsong is arguably the most outstanding vocal display in nature. What different emotions do birdsongs and human-made music evoke in listeners? Do modern listeners commonly prefer birdsong to composed music? This is a comparative study of birdsongs and music matched for pitch, timbre, and articulation. Across these two categories of sound, the study aims to examine three links: musical emotion recognition, sound preference, and subjective ratings of musical features. Participants will complete an online questionnaire in which they listen to the songs of three bird species and three corresponding music pieces, then rate their preference, emotions, and perceived musical features for each excerpt. The questionnaire also includes a pre-validated scale for measuring personality and a few demographic questions. The study aims to identify people's openness to music, and also to encourage people to become more attuned to, and fonder of, nature.
Presentation Link: youtu.be/igQtAMDql1U
Machine learning is everywhere today. Replicating synthesizer parameters to obtain a target sound is painful. How can machine learning help? I will show some examples in my presentation video.
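As a rough illustration of the idea (the tiny FM synth, spectral features, and MLP model below are assumptions for demonstration, not necessarily what the project uses), one common approach is to train a regressor to predict synthesizer parameters from audio features of a target sound:

```python
# Illustrative sketch: estimate synth parameters from a target sound with ML.
# The two-parameter FM "synth" and the MLP regressor are assumptions for
# demonstration, not the method used in the project.
import numpy as np
from sklearn.neural_network import MLPRegressor

SR = 16000
T = np.arange(0, 0.25, 1 / SR)  # 250 ms tones

def render(params):
    """Tiny FM synth: params = (modulation index, mod/carrier ratio)."""
    index, ratio = params
    carrier = 440.0
    return np.sin(2 * np.pi * carrier * T + index * np.sin(2 * np.pi * carrier * ratio * T))

def features(audio):
    """Log-magnitude spectrum (first 2048 samples) as a crude timbre descriptor."""
    mag = np.abs(np.fft.rfft(audio, n=2048))[:256]
    return np.log1p(mag)

# Build a dataset of (features, parameters) pairs from random patches.
rng = np.random.default_rng(0)
params = rng.uniform([0.0, 0.5], [5.0, 4.0], size=(2000, 2))
X = np.stack([features(render(p)) for p in params])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, params)

# Given a target sound, predict the parameters that would recreate it.
target = render((3.2, 2.0))
print(model.predict(features(target)[None, :]))  # ideally close to [3.2, 2.0]
```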
Presentation Link: www.youtube.com/watch?v=Fw5Y-r9FVnM
Real-world acoustic measurements can be taken with a soundfield microphone and one or more excitation loudspeakers. Taking multiple measurements produces a series of discrete source-receiver combinations that can be used for the purposes of auralisation. This is a powerful tool; however, the measured positions may not be desirable for all auralisations, for a variety of reasons. This project is concerned with how these measurements can be used to re-position sound sources within the virtual space. It aims to establish perceptual thresholds for both the azimuth and elevation angles of the source by re-panning the direct sound in relation to the early reflections and reverberant sound.
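A minimal sketch of the re-panning idea, assuming first-order Ambisonic (B-format) measurements and a synthetic impulse response: the direct sound is windowed out, re-encoded at a new azimuth/elevation, and recombined with the untouched reflections and reverberant tail.

```python
# Minimal sketch of re-panning the direct sound in a first-order Ambisonic
# (B-format) room impulse response. Window lengths and the synthetic IR are
# illustrative assumptions, not the project's actual method.
import numpy as np

SR = 48000

def encode_fuma(mono, az, el):
    """Encode a mono signal into B-format (W, X, Y, Z), FuMa weighting."""
    w = mono / np.sqrt(2)
    x = mono * np.cos(az) * np.cos(el)
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    return np.stack([w, x, y, z])

# Synthetic B-format IR: direct sound from the front plus a diffuse decaying tail.
rng = np.random.default_rng(1)
direct = np.zeros(SR); direct[0] = 1.0
tail = rng.standard_normal((4, SR)) * np.exp(-np.arange(SR) / (0.3 * SR))
bformat = encode_fuma(direct, az=0.0, el=0.0) + tail

# Window out the direct sound (here: the first 2.5 ms).
n_direct = int(0.0025 * SR)
direct_part = bformat[:, :n_direct].copy()
bformat[:, :n_direct] = 0.0

# Recover a mono estimate of the direct sound from W and re-encode it at the
# new source position (e.g. 30 degrees to the left, 10 degrees up).
mono_direct = direct_part[0] * np.sqrt(2)
repanned = encode_fuma(mono_direct, az=np.radians(30), el=np.radians(10))
bformat[:, :n_direct] += repanned
```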
Presentation Link: youtu.be/I4gk8wL-III
The project aims to investigate changes in the shape of the vocal tract when singing in different registers. This will be aided by MRI data of the vocal tracts of soprano singers, as well as a numerical model called the digital waveguide mesh. The presentation will first look at relevant prior literature to provide background and introduce key concepts of voice acoustics. It will then give a more technical explanation of how the MRI data and digital waveguide mesh will be used. The goal of the project is to gain a better understanding of the mechanics of the vocal tract, especially in high singing registers. A major concern is the intelligibility of words, and the project will potentially look into how this can be improved at high frequencies.
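For readers unfamiliar with the model, here is a toy 2D rectilinear digital waveguide mesh; the grid size, excitation, and boundary handling are illustrative assumptions, whereas the project's mesh is shaped by vocal-tract MRI data.

```python
# Toy 2D rectilinear digital waveguide mesh: each junction is updated from the
# average of its four neighbours one step ago, minus its own value two steps ago.
import numpy as np

NX, NY, STEPS = 64, 64, 200
p_prev = np.zeros((NX, NY))     # pressure at time n-1
p_prev2 = np.zeros((NX, NY))    # pressure at time n-2
p_prev[NX // 2, NY // 2] = 1.0  # impulse excitation at the centre

for n in range(STEPS):
    p = np.zeros((NX, NY))
    # Standard 4-neighbour DWM update (interior junctions only; the fixed
    # zero boundary here stands in for properly modelled reflective walls).
    p[1:-1, 1:-1] = 0.5 * (p_prev[:-2, 1:-1] + p_prev[2:, 1:-1]
                           + p_prev[1:-1, :-2] + p_prev[1:-1, 2:]) - p_prev2[1:-1, 1:-1]
    p_prev2, p_prev = p_prev, p

# The pressure history at an "output" junction approximates an impulse
# response of the modelled space.
print(p_prev[NX // 2 + 10, NY // 2])
```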
Presentation Link: www.youtube.com/watch?v=75D2_V0SK80&ab_channel=PabloAbehseraMorell
Podcasts have been on the rise of late, thanks to their widespread availability and the fact that anyone can now record and distribute one with relative ease. Binaural Audio is a form of Spatial Audio experienced over headphones, presenting listeners with sounds in a 3D environment using only two channels. Some creators have already experimented with Binaural Audio in podcasts, attempting to deliver the sensation that the speakers are in the room with the listener. This project explores ways to make the most of Binaural Audio in podcasts, so that listeners can get the best possible experience from it.
Presentation Link: https://www.youtube.com/watch?v=GwOOXfNbsXA
Our presentation describes our journey towards the amazing concerts held at Streetlife as well as here at the University of York. On May 4th, we started off by measuring and analysing the impulse response of the big room downstairs at Streetlife, as well as taking measurements of the listening booths. On Tuesday May 10th, we held a mixing session to prepare and set up the 4.1 mixes of Trevor Wishart. Following that, on May 12th, we made the necessary room configurations at Streetlife, which included cleaning, laying down carpets and boards, soldering XLR cables for speakers, cable management, and so on. Once these prerequisites were complete, the remaining procedures, such as sound checks, level checks, and EQ, were carried out, leading finally to the concerts on Friday, Saturday, and Sunday. Working with and learning from Ben has given us an insight into the world of surround sound installation and mixing. It was definitely one of the best sessions we have ever had.
Presentation Link: https://www.youtube.com/watch?v=b2_XigGtGQM
The research is based on an online survey collecting important demographic information about the audio industry and the experiences of music producers/recording engineers with discrimination in the studio. Both qualitative and quantitative analysis are applied to interpret the survey's results. The purpose of this study is to document the experience of discrimination in audio engineering and music production. The findings will be used for academic, educational, and industrial purposes.
Presentation Link: youtu.be/S7iLKm7czi0
The presentation will cover effective methods of online teaching, specifically when teaching code, and will then discuss the various audio and visual toolkits that will be covered in the lecture material generated as part of the research paper, and how they can be combined.
Presentation Link: youtu.be/ZuWrL6fLn68
HRTFs are a popular method for utilising Binaural 3D Audio in Interactive Media; however, listeners often experience perceptual issues such as front-back errors, phasing, notch filtering, and poor externalisation if the HRTFs are not individualised to the listener. ORTF is a popular stereo microphone technique that produces ITD and ILD cues similar to those of Binaural Audio. This project will explore whether a virtual ORTF configuration can be utilised in Interactive Media to create high-quality spatialised audio.
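A hedged sketch of what a virtual ORTF panner might look like, derived only from the standard ORTF geometry (two cardioid capsules, 17 cm apart, angled 110° overall); the details below are assumptions, not the project's implementation.

```python
# Virtual ORTF sketch: level differences come from the cardioid polar pattern
# (ILD) and time differences from the 17 cm capsule spacing (ITD).
import numpy as np

SR = 48000
C = 343.0               # speed of sound, m/s
SPACING = 0.17          # ORTF capsule spacing, m
ANGLE = np.radians(55)  # each capsule angled 55 degrees off centre

def ortf_pan(mono, azimuth_deg):
    """Return (left, right) channels for a source at the given azimuth
    (positive azimuth = source to the left)."""
    az = np.radians(azimuth_deg)
    out = []
    for mic_angle, side in ((+ANGLE, +1), (-ANGLE, -1)):
        # ILD: first-order cardioid polar response.
        gain = 0.5 * (1.0 + np.cos(az - mic_angle))
        # ITD: extra path length to the capsule farther from the source.
        delay_s = max(0.0, -side * np.sin(az)) * SPACING / C
        delay_n = int(round(delay_s * SR))
        out.append(np.concatenate([np.zeros(delay_n), gain * mono]))
    n = max(len(ch) for ch in out)
    return tuple(np.pad(ch, (0, n - len(ch))) for ch in out)

left, right = ortf_pan(np.sin(2 * np.pi * 440 * np.arange(SR) / SR), azimuth_deg=30)
```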
Presentation Link: youtu.be/__Mn0rSjSWQ
Thanks to recent computer technology, it is now possible to create 3D models relatively easily and realistically. We can also experience being inside a 3D space through VR video, even on a smartphone. Have you also experienced 3D sound? With the technology called 3D sound, also known as Spatial Audio, three-dimensional surround can be created in a pair of headphones. This presentation will explain spatial audio technology and a project that applies it. In the project, a live music venue in York city centre, The Basement, is modelled, and a VR concert with more immersive and highly realistic sound will be created in the modelled venue.
Presentation Link: youtu.be/Aq6trToQ3so
The topic that I am looking into for my master's project is iOS development. I will discuss the potential of 3D audio for interactive AR/VR applications and, more specifically, examine the possibility of creating an app for interactive 3D sonic exhibitions using game engines such as Unity and Unreal.
Presentation Link: https://youtu.be/mTFkW2hVXuI
On May 11th 2022, fifteen students and lecturers from the University of York converged at Ayriel Studios, a residential studio located within the spectacular landscape of the North York Moors National Park, to examine the newly built studios' capabilities. During the session we learned research methods relating to organisation, communication, and inspection, and found inspiration for topics related to our final projects. The following presentation will recount the exploration of the studios' acoustic properties through a two-stage practice. Firstly, the studios' physical acoustic properties will be examined through acoustic documentation and Impulse Responses created from sweep recordings of the Live Room. Secondly, the acoustic properties of the studio will be explored through a synthetic lens by examining the post-production process of a selection of compositions developed for contemporary extended techniques, recorded by Loré Lixenberg (British mezzo-soprano) in collaboration with Paul Baily (producer, Re:Sound).
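For context on the first stage, here is a minimal sketch of how an impulse response is typically recovered from a sweep recording (Farina's exponential sine sweep method); the sweep range and length are illustrative assumptions.

```python
# Recover an impulse response from a sweep recording: convolve the recorded
# sweep with an amplitude-compensated, time-reversed copy of the test sweep.
import numpy as np
from scipy.signal import fftconvolve

SR, DUR, F1, F2 = 48000, 5.0, 20.0, 20000.0
t = np.arange(0, DUR, 1 / SR)
R = np.log(F2 / F1)
sweep = np.sin(2 * np.pi * F1 * DUR / R * (np.exp(t * R / DUR) - 1))

# Inverse filter: reverse the sweep and compensate its pink energy slope.
inv = sweep[::-1] * np.exp(-t * R / DUR)

# In practice `recorded` is the sweep played back and captured in the room;
# here the dry sweep stands in, so the result approximates a unit impulse.
recorded = sweep
ir = fftconvolve(recorded, inv)
ir /= np.abs(ir).max()
```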
Presentation Link: youtu.be/1Xp-dljl4AM
Voice acoustics is an important field of research today, with great implications for realistic speech synthesis, improved healthcare, audio forensics, and more. However, relatively little research has been conducted into how components of the vocal tract produce a voice signal; biological complexity, individual variation between people, and technological limitations all contribute to this gap in our understanding of voice production. One topic in particular need of research is the acoustic impact of the highly complex nasal airways. In this project, the effect of the nasal tract on the frequency content of the voice during vowel production will be investigated by editing 3D MRI data and using cutting-edge 3D acoustic modelling techniques.
Presentation Link: youtu.be/z35TYcVJpSU
Research and reading of the relevant literature in the field of sonification showed that sonification provides a distinct perspective on the design process in sonic interaction design, complementing other perspectives such as the aesthetic or emotional qualities of sound and branding/identification aspects. It also showed that sonification currently faces issues of functionality and aesthetics, and that there is a need to create aesthetically pleasurable sonic experiences of an exploratory and artistic nature. My project will focus on Interactive Sonification from an artistic point of view and on how sonification techniques can be used to create aesthetically pleasing sonic output. More specifically, this project will use Interactive Sonification techniques to create leitmotifs for video game characters, whose varied attributes will represent different datasets. Parameter Mapping Sonification techniques will be studied and used in an attempt to improve the interactivity of the control system.
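A small sketch of parameter mapping sonification in this spirit, where numeric character attributes drive musical parameters of a short leitmotif; the attributes, mapping ranges, and pentatonic pitch set are illustrative assumptions, not the project's actual mappings.

```python
# Parameter mapping sonification sketch: attributes in [0, 1] control the
# register, tempo, and brightness of an eight-note leitmotif.
import numpy as np

SR = 22050
PENTATONIC = np.array([0, 2, 4, 7, 9])  # scale degrees in semitones

def leitmotif(health, speed, aggression):
    base_midi = 48 + 24 * health       # healthier character -> higher register
    note_dur = 0.5 - 0.35 * speed      # faster character -> faster motif
    rng = np.random.default_rng(42)    # fixed seed -> fixed motif contour
    degrees = rng.choice(PENTATONIC, size=8)
    notes = []
    for d in degrees:
        freq = 440.0 * 2 ** ((base_midi + d - 69) / 12)
        t = np.arange(0, note_dur, 1 / SR)
        # Aggression adds a brighter, harmonically richer timbre.
        tone = np.sin(2 * np.pi * freq * t) + aggression * np.sin(2 * np.pi * 2 * freq * t)
        notes.append(tone * np.exp(-t / note_dur * 3))  # simple decay envelope
    return np.concatenate(notes)

motif = leitmotif(health=0.8, speed=0.3, aggression=0.6)
```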
Presentation Link: youtu.be/TPnPd7a0Ptc
Smart devices have developed rapidly over the years, and more and more software is being ported from PC to portable devices such as tablets and phones. Musicians are also starting to look for solutions on smart devices rather than bringing a separate PC/laptop setup to performances. This presentation aims to explain the logic behind designing a synthesiser or similar app's interface on small-screen devices (iPhone).
Presentation Link: www.youtube.com/watch?v=TOIDg8txynA
Visual tools may be useful in language development for autistic children. This project hopes to produce an iOS game that encourages desirable vocalisation from non-verbal and language-delayed children using audiovisual feedback, with a focus on Autism Spectrum Disorder (ASD). Current research into language teaching methods, speech encouragement, and existing vocal-based games will be considered in designing the app, providing adequate design information to devise effective audiovisual feedback within the iOS game. An initial focus will be on vowels and their vocalised pitch, with the potential to introduce plosive and nasal detection to the project, which would aid more advanced vocal encouragement.
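As an illustration of the kind of pitch tracking such vowel feedback would need (frame size, thresholds, and search range are assumptions, not the app's implementation), a basic autocorrelation f0 estimator might look like this:

```python
# Basic autocorrelation pitch (f0) estimator over a short audio frame.
import numpy as np

def estimate_f0(frame, sr, fmin=80.0, fmax=1000.0):
    """Return an f0 estimate in Hz for one audio frame, or None if unvoiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    if hi >= len(ac):
        return None
    lag = lo + np.argmax(ac[lo:hi])
    # Crude voicing check: the peak should be a decent fraction of ac[0].
    return sr / lag if ac[lag] > 0.3 * ac[0] else None

sr = 16000
t = np.arange(0, 0.05, 1 / sr)       # 50 ms frame
frame = np.sin(2 * np.pi * 220 * t)  # stand-in for a sung 220 Hz vowel
print(estimate_f0(frame, sr))        # ~220 Hz
```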
Presentation Link: www.youtube.com/watch?v=7XtrCcxy64w
The project's goal is to improve OpenAir, a free-to-use impulse response database developed and maintained by the University of York's Audio Lab with support from the UK Arts and Humanities Research Council. An impulse response is a recording that captures a space or environment's reverberant characteristics. Information can be extrapolated from impulse responses to analyse acoustic properties, which is useful for improving spaces with poor sound quality. Impulse responses can also be used to simulate how a virtual environment may sound. This project will focus on implementing an acoustic parameter toolbox to be hosted online through a web application. There is currently a range of offline open-source acoustic parameter scripts available for MATLAB and Python, but these require some programming knowledge and access to software licences. Developing a web API toolbox will mean that acousticians can upload impulse responses and generate graphical and numerical data to analyse acoustic parameters in a free-to-use environment. The basic aims are to generate the acoustic parameters stipulated in the ISO 3382 standard, such as reverberation time, clarity values, and early decay times, for various octave bands. If time allows, the project will look to further improve the toolbox with additional acoustic parameters and audiovisual tools.
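As a minimal sketch of two of the ISO 3382 parameters such a toolbox would report, here are clarity (C80) and reverberation time (T20, via Schroeder backward integration) computed from a synthetic impulse response; the exponential-decay IR is an illustrative assumption.

```python
# Compute C80 and T20 from an impulse response.
import numpy as np

SR = 48000
rng = np.random.default_rng(0)
ir = rng.standard_normal(2 * SR) * np.exp(-np.arange(2 * SR) / (0.35 * SR))

energy = ir ** 2

# Clarity C80: early (< 80 ms) vs late energy ratio, in dB.
n80 = int(0.080 * SR)
c80 = 10 * np.log10(energy[:n80].sum() / energy[n80:].sum())

# Schroeder curve: backward-integrated energy, normalised, in dB.
schroeder = np.cumsum(energy[::-1])[::-1]
edc = 10 * np.log10(schroeder / schroeder[0])

# T20: fit the -5 to -25 dB span of the decay, extrapolate to -60 dB.
i5, i25 = np.argmax(edc <= -5), np.argmax(edc <= -25)
slope = (edc[i25] - edc[i5]) / ((i25 - i5) / SR)  # dB per second
t20 = -60.0 / slope
print(f"C80 = {c80:.1f} dB, T20 = {t20:.2f} s")
```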
Presentation Link: www.youtube.com/watch?v=u8BjCjylaLo
With a rising interest in virtual and augmented reality, it is important that we generate audio for these platforms in the most convincing way possible to enhance the overall user experience. This is not necessarily an easy feat as devices often have both limited storage space and processing power. This project investigates a possible method to reduce the number of impulse responses required to recreate the acoustics of a real-world space.
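The abstract does not name the method under investigation; purely as an illustrative assumption, one simple way to cover unmeasured positions with fewer stored IRs is to interpolate between neighbouring measurements:

```python
# Illustrative assumption only (not the project's method): crossfade two
# neighbouring measured IRs to approximate an unmeasured position between
# them, so fewer IRs need to be stored on the device.
import numpy as np

def interpolate_ir(ir_a, ir_b, alpha):
    """Linear crossfade between two equal-length IRs; alpha in [0, 1]."""
    return (1.0 - alpha) * ir_a + alpha * ir_b

SR = 48000
rng = np.random.default_rng(3)
decay = np.exp(-np.arange(SR) / (0.4 * SR))
ir_pos1 = rng.standard_normal(SR) * decay  # stand-ins for two measured IRs
ir_pos2 = rng.standard_normal(SR) * decay
ir_mid = interpolate_ir(ir_pos1, ir_pos2, alpha=0.5)
```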
Presentation Link: www.youtube.com/watch?v=jvcWDuq3Y7U
The OpenAir library is an online database containing impulse response and acoustic data for a multitude of spaces around the world. This database is an extremely useful resource for anyone looking to collect anechoic material or IRs for auralisation. However, it can be daunting to users who aren't aware of what convolution is or how to achieve an auralisation. To resolve this, an OpenAir Convolution Reverb Plugin will be created, with the intent of making the OpenAir library compatible with any Digital Audio Workstation that supports VST3. This presentation details the purpose of this project, its main aims and objectives, the work that has been completed thus far, and a preview of the work still needed to complete the project.
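At its core, the plugin performs the convolution shown below; this offline sketch (with synthetic stand-ins for a recording and an OpenAir IR) shows only the basic operation, not the real-time VST3 implementation.

```python
# Convolution reverb in one function: convolve a dry signal with a measured
# impulse response and blend it with the dry signal.
import numpy as np
from scipy.signal import fftconvolve

def auralise(dry, ir, wet=0.5):
    """Mix the dry signal with its convolution through the room IR."""
    wet_sig = fftconvolve(dry, ir)
    wet_sig /= np.abs(wet_sig).max() + 1e-12  # avoid clipping
    dry_padded = np.pad(dry, (0, len(wet_sig) - len(dry)))
    return (1.0 - wet) * dry_padded + wet * wet_sig

sr = 48000
dry = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in for a recording
ir = np.random.default_rng(0).standard_normal(sr) * np.exp(-np.arange(sr) / (0.5 * sr))
out = auralise(dry, ir)
```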
Presentation Link: youtu.be/sBjVBuITvko
What is choral blend? Is it good to use vibrato in choral singing? My research aims to investigate the role of vibrato in choral blend. The use of vibrato in an ensemble environment is still a controversial subject among choral singers and directors, demonstrating the need for more research to inform performance practice that supports healthy and optimal voice production in group singing. This presentation looks into the context of choral blend and vibrato, and highlights some previous experiments and studies related to my research topic. In addition, I will outline some ideas and plans for the development of my research.
Presentation Link: https://www.youtube.com/watch?v=f1VuHsU1K9Y
Focusing on recorded concerts at the University of York and Streetlife in Spring 2022, this presentation will explore practices of improvisation and interactive music, and how music producers and the technology they use can act as intermediaries between musicians working in these fields and the listener. Given the heightened emphasis that these practices place on communication - between musicians, technology, and audiences alike - producing recordings of this music can require the producer to interpret and respond to the often complex musical and technological languages and relationships that develop through the creation of the work. Through the presentation of these recordings and the context from which they have come, we shall explore aspects of these practices and the parts we as producers can play in these ecosystems of musicality and technology.
Presentation Link: https://www.youtube.com/watch?v=msAIR-h6OWs
The main focus of our presentation will be the technology of tape machines and their impact on recorded music since the early 1960s. We will also explain the process behind the operation of tape machines, as well as how they are cleaned and maintained, and will include videos of sound engineer and producer Roy Harrison demonstrating the different techniques. We will lastly look into Harrison's life and work in this field, considering the history and importance of his work and its impact on both the UK and international music scenes.
Presentation Link: www.youtube.com/watch?v=smaWN77NizY
Frameworks are a necessary backbone to programming. They are useful both to the self-taught and to those under the tutelage of a professional. Given their crucial role in programming, these frameworks require coherent documentation. However, the increasing depth of a framework's applications and their mercurial nature pose challenges to those producing documentation. Using a taxonomic approach, this literature review evaluates the various styles of documentation employed by programmers. 'Traditional step-by-step' documentation, a 'minimalist' approach, and 'patterns'-style documentation are analysed critically for the purpose of producing this project's deliverables: sample documentation for the AudioKit iOS framework, created using all three styles.
Presentation Link: youtu.be/mq0RzbKThrE
Although humans perceive and desire acoustically processed sounds, there is a high barrier for beginners entering the world of acoustic signal processing. This project aims to solve this problem by creating a web-based, user-friendly app that lets users experience the effects of acoustic signal processing, lowering the threshold for entering that world. The presentation starts with an introduction to and background of this aim. Then, the app's architecture and contents will be described. Lastly, as a prototype of the app has already been created, its current progress will be shown with video clips. This presentation covers many fields beyond what we learned in our course, so please email ms2676@york.ac.uk if you have any questions.
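One example of the kind of effect such an app could let beginners hear (an offline Python stand-in for the web app; the cutoff frequency and test signal are illustrative assumptions) is a simple low-pass filter applied to white noise:

```python
# Before/after demo a beginner could compare by ear: low-pass filtering noise.
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100
noise = np.random.default_rng(0).standard_normal(SR)  # 1 s of white noise

# 1 kHz second-order low-pass: removes the "bright" part of the noise.
b, a = butter(2, 1000 / (SR / 2), btype="low")
filtered = lfilter(b, a, noise)
```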