Simulation of learning Brian Ferneyhough’s Lemma-Icon-Epigram for solo piano with GesTCom
Pianist Pavlos Antoniadis presents interpretational possibilities in Brian Ferneyhough’s solo piano work Lemma-Icon-Epigram in the form of a real-time simulation of the learning process. His approach, which he terms “embodied navigation” of textual complexity, is informed both by concepts from embodied cognition and by cutting-edge gesture-capture technologies. The purpose of this lecture-performance is to make palpable the tension between text and act as an aesthetic concept. He uses a prototype system named GesTCom, developed at IRCAM in collaboration with Frédéric Bevilacqua and Dominique Fober.
The presentation is based on the paper “Comparison of gestural patterning and complex notated rhythm via multimodal performance data: Brian Ferneyhough’s Lemma-Icon-Epigram for solo piano, phases 1&2”. In the preface to his piano work Lemma-Icon-Epigram, Brian Ferneyhough proposes a top-down learning strategy: its first phase consists in an “overview of gestural patterning”, while notated rhythms are to be dealt with in a second phase. We present a methodology for inferring gestural patterning from multimodal performance data (first phase), and we map our results onto the complex notated rhythms (second phase). The coupling of physical movement patterns and symbolic rhythm affirms the multilayered, embodied and enactive nature of musical forms: gesture may prove more effective than abstract understanding in dealing with musical rhythm. This work draws equally on embodied cognition, gesture modelling, performance practice and music analysis. Future perspectives include the probabilistic modelling of gesture-to-notation mappings, towards the design of interactive systems that learn along with the performer while cutting through textual complexity.

The proposed demo re-enacts Ferneyhough’s suggested top-down learning strategy based on gestural patterning, and measures varied performances against this patterning through the use of the motion follower and INScore, joined together in GesTCom (gesture cutting through textual complexity).
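The core idea of measuring varied performances against a reference gestural patterning can be illustrated in miniature. The sketch below is not IRCAM’s actual motion follower (which performs real-time, probabilistic following); it is a minimal offline comparison using dynamic time warping, a standard way to score how closely a new performance take tracks a recorded gesture template despite timing variations. All names and the sample data are hypothetical.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D motion traces.

    Lower values mean the take follows the template more closely,
    even if its timing stretches or compresses locally.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Hypothetical normalized motion traces (e.g. one sensor axis per frame):
template   = [0.0, 0.2, 0.8, 1.0, 0.6, 0.1]   # reference gestural pattern
take_close = [0.0, 0.25, 0.75, 1.0, 0.55, 0.1]  # faithful performance take
take_far   = [1.0, 0.9, 0.2, 0.0, 0.5, 1.0]     # divergent take

print(dtw_distance(template, take_close))
print(dtw_distance(template, take_far))
```

In a real setting each take would be a multidimensional sensor stream rather than a single axis, and the follower would report alignment continuously rather than as a single score, but the principle of scoring a performance against a gesture template is the same.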