date: 2021-12-16 Thursday
time: 14:25–15:55
session title: When Music and Technology Meet
paper title: Towards a Course Framework of Music Data Science in Higher Education
author: Hsin-Ming Lin(林欣名)
department: Executive Master of Arts Administration (EMAA)
university: Tainan National University of the Arts (TNNUA)
handout: https://bit.ly/20211216nccu
Data science, today's ubiquitous technology, has brought huge changes and raised many issues. Its material, method, and product are big data, machine learning, and artificial intelligence (AI), respectively. It also plays an important role in the music industry. Departments such as electrical engineering and computer science have been offering related courses at universities abroad and at home, including MIT, NTU, NTHU, NCKU, and NCNU. By contrast, music departments in Taiwan rarely offer complementary courses relevant to data science. This situation, unfortunately, creates an unhealthy ecosystem for interdisciplinary higher education in music.
Thanks to the Talent Cultivation Project for Digital Humanities (TCDH) of the Ministry of Education (MOE), the author of this paper has had the chance to create an undergraduate-level beachhead at Tainan National University of the Arts (TNNUA), where few (if any) students have STEM (science, technology, engineering, and mathematics) majors. The author has been offering STEM courses, e.g., statistics, data science, and music technology, at TNNUA since 2019. In 2021, he also provided parallel MOOCs (massive open online courses) and off-campus workshops with other music teachers and entrepreneurs as outreach education for elementary and high school students.
Based on the author's experience during the MOE TCDH projects, this paper proposes a course framework for music data science. The workflow in this subject may consist of three levels. First, the upstream level encompasses music theory, data science theory, and data ethics. Second, the midstream level contains expert manual annotation practices, programming languages, data mining, knowledge discovery, and machine learning. Third, the downstream level includes AI services or products, AI-assisted composition, improvisation, and audio mixing (also known as intelligent music production).
The learning or teaching sequence, however, need not follow the up-mid-down order of those streams. This paper proposes a two-phase progression. The initial phase is music artificial intelligence (MAI), which starts from the upstream level and jumps to the downstream one. By circumventing the midstream level, students avoid encountering difficulty and frustration too early. After they have enjoyed AI products and appreciated AI-assisted works, the course(s) should gradually enter the next phase, music information retrieval (MIR). This phase corresponds to the midstream level, so that students accomplish the whole workflow. Each of the two phases could last a quarter, a semester, or even an academic year, depending on the depth and breadth covered in the course framework.
Collaborative teaching is recommended to support the course framework, as it may ease the instructor's burden. Nevertheless, caution should be exercised when the course(s) is offered by multiple teachers. One challenge is the overall cohesion of instruction. Another is the cost of collaborators' time and budget, especially when external guest lecturers travel to rural campuses. A third is how to track and evaluate student performance across the diverse faculty throughout the course framework.
The course framework has been extensively tested and revised in the Department of Applied Music at TNNUA over two years. Next year, the university will incorporate it into the curriculum structure of the newly established Graduate Institute of Sound Technology.
related courses: music in EE and CS
international
Studies in Western Music History: Quantitative and Computational Approaches to Music History, Department of Music and Theater Arts, Massachusetts Institute of Technology.
Music Signal Processing, Department of Electrical Engineering, Columbia University.
domestic
Music Signal Analysis and Retrieval, Graduate Institute of Networking and Multimedia, NTU
Music Information Retrieval, Department of Computer Science, NTHU
Introduction to Artificial Intelligence and Music, Institute of Information Systems and Applications, NTHU
Analysis of Digital Music Signal, Department of Computer Science and Information Engineering, NCKU
Music Programming, Department of Computer Science and Information Engineering, NCNU
my courses: STEM in music and arts
Music Data Science (1) & (2), Department of Applied Music, TNNUA
Music Artificial Intelligence (MAI)
Music Information Retrieval (MIR)
Modern Development of Music Technology, Center of General Education, TNNUA
Basic Statistics, Department of Applied Music, TNNUA
Introduction to Data Science, EMAA, TNNUA
Big Data Analysis and Interpretation, EMAA, TNNUA
Digital Marketing Tools, EMAA, TNNUA
Traveling with Music, MOOCs, TNNUA
workflow
upstream
music theory
data science theory
midstream
expert annotations
programming
data mining and knowledge discovery
machine learning
downstream
AI services and products
AI-assisted works
composition
improvisation
mixing and mastering (Intelligent Music Production)
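The midstream steps above (expert annotations, programming, machine learning) can be sketched end to end in a minimal Python example: a toy set of expert-labeled pitch-class histograms and a nearest-neighbor classifier that predicts major vs. minor mode. All data, names, and the histogram values here are illustrative assumptions for teaching, not materials from the actual courses.

```python
# Minimal sketch of the midstream workflow:
# expert annotations -> programming -> machine learning.
# All data below are hand-made toy examples, not real course data.
import math

# "Expert annotations": normalized pitch-class weights for two labeled excerpts.
# Index 0 = C, 1 = C#, ..., 11 = B.
annotated = [
    # C-major-ish profile: weight on C, E, G
    ([5, 0, 1, 0, 4, 1, 0, 4, 0, 1, 0, 1], "major"),
    # C-minor-ish profile: weight on C, Eb, G
    ([5, 0, 1, 4, 0, 1, 0, 4, 1, 0, 1, 0], "minor"),
]

def distance(a, b):
    """Euclidean distance between two pitch-class histograms."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_mode(histogram):
    """1-nearest-neighbor classification against the annotated examples."""
    return min(annotated, key=lambda item: distance(histogram, item[0]))[1]

# A new, unlabeled excerpt leaning toward Eb (the minor third above C).
query = [4, 0, 1, 5, 0, 1, 0, 3, 1, 0, 1, 0]
print(predict_mode(query))  # -> minor
```

In a real course the annotations would come from students' own expert labeling exercises, and the classifier would be replaced by a proper machine-learning library; the point of the sketch is only that each midstream component has a concrete, small-scale counterpart beginners can write themselves.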
selected examples
AIVA (Artificial Intelligence Virtual Artist)
Sony CSL Flow Machines
Orchestrations of Beethoven's Ode to Joy by Sony CSL Flow Machines
symbolic music information retrieval
sub-symbolic music information retrieval
manual vs. automatic analyses
interactive performance with pre-trained AI
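The symbolic vs. sub-symbolic distinction in the examples above can be shown in a small, self-contained Python sketch (an illustrative assumption, not code from the session): symbolic MIR operates on note events such as MIDI pitch numbers, while sub-symbolic MIR operates on the audio signal itself, here a synthesized sine wave.

```python
import math
from collections import Counter

# Symbolic MIR: analyze note events directly.
# MIDI pitches for a toy C major arpeggio (hypothetical input).
midi_notes = [60, 64, 67, 72, 67, 64, 60]  # C4 E4 G4 C5 G4 E4 C4

# Pitch-class histogram: fold pitches into 12 classes (0 = C).
pitch_classes = Counter(n % 12 for n in midi_notes)
print(pitch_classes[0], pitch_classes[4], pitch_classes[7])  # -> 3 2 2 (C, E, G)

# Sub-symbolic MIR: analyze the raw audio signal.
# Synthesize one second of a 440 Hz sine at an 8000 Hz sample rate.
sr, freq = 8000, 440.0
signal = [math.sin(2 * math.pi * freq * t / sr) for t in range(sr)]

# Zero-crossing count: a crude signal-level pitch cue.
# A 440 Hz sine crosses zero roughly 2 * 440 = 880 times per second.
zero_crossings = sum(
    1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0)
)
print(zero_crossings)
```

The symbolic branch never touches audio, and the sub-symbolic branch never sees a note name; bridging the two (e.g., automatic transcription or chroma features) is exactly where the midstream MIR phase of the framework does its work.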
While analyzing with the charts, I had a sudden realization: using a computer to analyze music is not as useless as I had previously thought. Recalling the music analysis classes I took before, we likewise went to every length to dissect, define, classify, and explain the content of the music; sometimes, by the end of an analysis, we had already drifted far from the higher, emotional layers of the music and were instead handling the arrangement logic of the most basic notes. Therefore, I feel that although computer-assisted music analysis is not essential for me at this stage, if I can make use of it, it can not only provide precise quantitative evidence but also serve as a tool that helps me think. When I look at those charts and ask why they are the way they are, I also learn something from the process of seeking answers.
https://youtu.be/rqMmqczhmDo?t=163
The whole process, from generating the piece on AIVA to finishing the arrangement in Logic Pro X, took one hour. In terms of structure, the piece is not especially interesting, but neither is it too boring, because it has three main phrases that interweave, and in the second half a climax-like passage of running sixteenth notes appears in the lead instrument (though it arrives a bit early). It also often plays the first melody together with the second, creating the effect of a main melody plus a countermelody and enriching the music. My main revisions, therefore, were changing the instrument timbres and deleting unnecessary parts, in the hope that varied timbral combinations and structural organization would make the whole piece more interesting.
incidental music and AI
future sound lab
Graduate Institute of Sound Technology (GIST), TNNUA
GIST features
an independent graduate institute dedicated to sound technology for aural arts
research and development for (and with) performance, composition, improvisation, installation, and exhibition
modern advanced topics such as immersive sound (Ambisonics 3D), music artificial intelligence, and interactive media
English as a medium of instruction (EMI)
questions
students' interests and needs
courses to offer
research to conduct or collaborate on
reminders
This site is open to the public.
Hyperlinks may be outdated.
I may update the contents anytime.