What Happens When A.I. Enters the Concert Hall – The New York Times

Dr. Schankler ultimately used R.A.V.E. in that performance of The Duke Of York, though, because its ability to augment an individual performer's sound, they said, seemed thematically resonant with the piece. For it to work, the duo needed to train it on a personalized corpus of recordings. "I sang and spoke for three hours straight," Wang recalled. "I sang every song I could think of."

Antoine Caillon developed R.A.V.E. in 2021, during his graduate studies at IRCAM, the institute founded by the composer Pierre Boulez in Paris. "R.A.V.E.'s goal is to reconstruct its input," he said. "The model compresses the audio signal it receives and tries to extract the sound's salient features in order to resynthesize it properly."
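The compress-then-resynthesize idea Caillon describes is, at its core, an autoencoder trained to reproduce its own input. The sketch below is illustrative only: a minimal PyTorch-style audio autoencoder whose layer sizes and names are assumptions for clarity, not R.A.V.E.'s actual architecture.

    import torch
    import torch.nn as nn

    class TinyAudioAutoencoder(nn.Module):
        """Illustrative only: compress audio into a small latent code, then resynthesize it."""
        def __init__(self, latent_dim=16):
            super().__init__()
            # Encoder: strided 1-D convolutions squeeze the waveform into a compact code.
            self.encoder = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=9, stride=4, padding=4), nn.ReLU(),
                nn.Conv1d(32, latent_dim, kernel_size=9, stride=4, padding=4),
            )
            # Decoder: transposed convolutions expand that code back into a waveform.
            self.decoder = nn.Sequential(
                nn.ConvTranspose1d(latent_dim, 32, kernel_size=8, stride=4, padding=2), nn.ReLU(),
                nn.ConvTranspose1d(32, 1, kernel_size=8, stride=4, padding=2), nn.Tanh(),
            )

        def forward(self, audio):
            latent = self.encoder(audio)    # "extract the sound's salient features"
            return self.decoder(latent)     # "resynthesize it properly"

    # Training pushes the reconstruction to match the original recording,
    # which is why a personalized corpus (hours of one singer's voice) matters.
    model = TinyAudioAutoencoder()
    clip = torch.randn(1, 1, 16384)         # stand-in for a mono audio excerpt
    reconstruction = model(clip)
    loss = nn.functional.mse_loss(reconstruction, clip)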

Wang felt comfortable performing with the software because, no matter the sounds it produced in the moment, she could hear herself in R.A.V.E.'s synthesized voice. "The gestures were surprising, and the textures were surprising," she said, "but the timbre was incredibly familiar." And, because R.A.V.E. is compatible with common electronic music software, Dr. Schankler was able to adjust the program in real time, they said, to create "this halo of other versions of Jen's voice around her."
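In live settings, trained R.A.V.E. models are typically exported and driven from inside common music software (for example via IRCAM's nn~ external for Max/MSP). One way a "halo of other versions" of a voice can be produced is by nudging the model's latent code several times and layering the resyntheses. The sketch below assumes a TorchScript export exposing encode/decode methods; the file name, the halo helper, and its parameters are hypothetical.

    import torch

    # Hypothetical path to a TorchScript export of a trained R.A.V.E. model;
    # the encode()/decode() methods are assumed to be exposed by the export.
    rave = torch.jit.load("jen_wang_voice.ts")

    def halo(audio, n_voices=4, spread=0.5):
        """Layer slightly shifted resyntheses of the same voice around the input (illustrative)."""
        with torch.no_grad():
            z = rave.encode(audio)                     # latent features of the live signal
            voices = []
            for _ in range(n_voices):
                jitter = spread * torch.randn_like(z)  # nudge the latent code
                voices.append(rave.decode(z + jitter)) # each nudge yields a nearby "version"
            return torch.stack(voices).mean(dim=0)     # mix the variants into one halo

    mic_block = torch.randn(1, 1, 2048)                # stand-in for a buffer of live input
    output = halo(mic_block)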

Tina Tallon, a composer and professor of A.I. and the arts at the University of Florida, said that musicians have used various A.I.-related technologies since the mid-20th century.

"There are rule-based systems, which is what artificial intelligence used to be in the '60s, '70s and '80s," she said, "and then there is machine learning, which became more popular and more practical in the '90s, and involves ingesting large amounts of data to infer how a system functions."

