Hello, welcome back to the third panel of Synesthetic Syntax: In Front of Your Ears and Eyes. In this panel we will address various perspectives: for instance, how can animation be performed under these challenging circumstances? Can the audio and the visual be combined in improvised performances? And how can live hand-scribing, music notation, coding, or drawing be used to ensure spontaneous audiovisual performance? I will briefly introduce the program. The first presentation is entitled "Audiovisual Performance Notation" by Dr. Juan Manuel Escalante and Yin Yu. Dr. Escalante is a designer and an artist working with computer code, modular synthesizers and analog drawings. His work has been shown in the United States, France, the United Kingdom, Spain, Peru, South Korea and Mexico, and featured in many festivals and exhibitions. He was a member of the National System of Art Creation and received the Corin Award of Electroacoustic Composition in 2016. Yin Yu is interested in exploring the potential of interactive multimedia environments and the relationship between architecture and sound through emerging technology in art and architectural practice. Her work ranges from interactive multimedia installations to product, furniture and interior design. Yin has won several awards and is a PhD candidate in Media Arts and Technology at the University of California, Santa Barbara. Based on the evolution of audiovisual exploration through the past decades, Escalante and Yu will raise the question: what kind of graphical notational system can we use to shape audiovisual experiences using contemporary and traditional instruments? The second presentation, "Alignment of Rhythms in Live Coding Visual Performances," will be presented by Codie, a live coding collective.
Its members are Kate Sicchio, Sarah Groff Hennigh-Palermo, and Melody Loveless. Sicchio is a choreographer and media artist based in Richmond, Virginia, specializing in algorithmic performance; her work has been shown at venues including the New York Hall of Science, Access Space and InSpace Gallery. When not live coding, Sarah Groff Hennigh-Palermo makes large-scale video art and computer programs in Berlin. She has shown works at Pioneer Works, Westbeth Gallery and NOMAS Gallery in the UK. Loveless is a musician, creative technology artist and educator based in Brooklyn. In addition to live coding, her work ranges from generative sound installations and sound sculpture to multi-channel performances. They will talk about their practice and their activities in live coding performances. The third presentation, "LiveWare: Improvisation, Interaction and Process Intensity," is held by Michael Century. He is a musician and media art historian, a professor of music and new media at Rensselaer Polytechnic Institute. And Shawn Lawson is a computational artist and researcher. He performs under the pseudonym Obi-Wan Codenobi, where he live codes real-time computer graphics with his open source software. In this presentation they will introduce LiveWare's past performances and work in progress, and they will discuss the fluid interplay between pre-composed and improvised dynamic processes in their body of work. Hi, my name is Yin Yu. And I am Juan Manuel Escalante. In the next few minutes, we will explore the notion of maps as a case study to share with you a few insights on sound and image generation. From Iannis Xenakis's multi-sensory polytopes to the groundbreaking 9 Evenings: Theatre and Engineering series of events, we have witnessed an evolution of audiovisual explorations through the past decades.
With the increasing advance of technology, highly complex performances today orchestrate sounds and images using various media at once. The assemblage of such contemporary audiovisual experiences presents an unresolved notational challenge. In the past, artists have used a variety of unconventional visual systems, such as Laszlo Moholy-Nagy's score sketch for the mechanical eccentric theater play. His performance diagram describes transformations of light, movement, film and sound. Given our current technological landscape, what kind of notational systems can we use to shape audiovisual experiences using contemporary and traditional instruments? As we mentioned earlier, we will discuss our performance, The Generation of Maps, as a case study. This project demonstrates how a diagrammatic approach can serve as an orchestrating force for algorithmic systems, using graphic notation, code-generated imagery and live electronic sounds from modular synthesizers. First, let's talk about sound and image generation, or to be more specific, generative environments. We explored audiovisual connections in different ways. Using computer code, we program visual events or cues. For example, this is a sketch of a trigger-based system. Imagine each one of these elements is traveling at a different speed, and whenever one of them crosses the timeline, it triggers a sound event. This is another sketch using the same principle, and this is a code implementation of the same idea. So how exactly do we generate the sounds? We send these signals to a modular synthesizer, to be more specific to an ES-8 module; the ES-8 converts these signals into voltages and sends them through the system. In addition, many of these graphics also react to the sounds in the room. This sound-reactive component completes the loop.
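As a rough illustration of the trigger-based system just described (our own sketch, not the authors' actual code; all names are hypothetical), several elements travel at different speeds and emit a sound event whenever they cross a fixed timeline:

```python
# Hypothetical sketch of the trigger-based system: elements travel at
# different speeds; whenever one crosses the timeline, a sound event
# fires (in performance, this would send a trigger/CV to the synth).

TIMELINE_X = 100.0  # position of the vertical trigger line

class Element:
    def __init__(self, x, speed):
        self.x = x
        self.speed = speed

    def step(self, dt):
        """Advance the element; return True if it crossed the timeline."""
        prev = self.x
        self.x += self.speed * dt
        return prev < TIMELINE_X <= self.x

def run(elements, steps, dt=1.0):
    """Collect (step, element_index) pairs for every trigger event."""
    events = []
    for t in range(steps):
        for i, e in enumerate(elements):
            if e.step(dt):
                events.append((t, i))  # here: emit a gate/trigger signal
    return events

# Three elements at different speeds reach the line at different times.
events = run([Element(0, 10), Element(50, 4), Element(90, 1)], steps=30)
```

In a sound-reactive version, an audio analysis value would in turn modulate `TIMELINE_X` or the element speeds, closing the feedback loop the authors describe.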
In other words, a generative system creates a visual result, which triggers various sounds, and then the sounds complete the feedback loop, influencing the visual component. For The Generation of Maps, we approached our live act using the programming perspective we just described. But how do we give form to these graphics? We used a series of abstract symbols, diagrammatic representations of what maps or mapping meant for us, symbols of our own creation. For example, the implicit idea of a journey, of a search, represented as a lonely particle traveling from one location to another and discovering something along the way. Or the combination of two lines that mark a specific location, a specific spot on the map. Or the notion of scanning the landscape and, as a result, the discoveries that we might find from such observations. In our performance, each one of these events triggered a sound as well. We used all of these symbols and arranged them in the projection, unfolding over time, telling our map-generation story. In a way, visual components such as this one also act as an interface between the algorithm and the viewer. Our projections offer the audience a hint of our system's composition, of its internal mechanisms and the rules governing the audiovisual experience. Approaching the visual component this way, we can also represent the sound events in different modalities. For example, in this code we represented a system of triggers in two ways: as a deconstructed grid of circles on the left, and as a row-based system on the right. These representations offer the audience a glimpse of what the computer program is doing to generate sounds and images in real time. So how do we perform this kind of graphic? Performance scores are a graphic notation subgenre: diagrams that help performers orient themselves during a live act.
They are meant to provide artists with critical time-based instructions to be read quickly, and this kind of score has been quite useful for us in the past. The Generation of Maps posed a different challenge: two performers had to work in sync and react to the different signals sent by our program. Our first version resembled a composition score. In this drawing, we can see a rough representation of the overall structure: a quiet build-up leading to a strong climax full of sonic elements, represented as clouds. This drawing also provides more information on the sound arrangement process, for example the presence of reverb near the end, or short instructions for one of our modular stations. The second version of this score added more information. We can see that the overall structure starts to have a little more detail. We can also observe more technical information, such as a pre-flight checklist, or certain sound manipulation actions such as freezing events, repetitions and so on. Still, this graphic was hard to read in a performance setting. After this second version, we proceeded to clean up and improve the overall layout. We carefully mapped our times and started to separate each one of our stations, our instruments, into rows. This is a stage where the score still saw many notes and went through many revisions. And this is how we arrived at our fourth and final version. In the central part of the score we can observe an abstract representation of each one of the scenes throughout the performance, from the opening scene to the climax and final one. Each of the vertical marks emphasizes the division between scenes. Clouds on the drawing represent the amount of audiovisual energy throughout the performance. On top, precise markers allowed us to design this diagram with accurate proportions. The score is organized by rows.
The top row represents one modular station. We can appreciate important instructions to be executed at specific times, for example connecting a patch cable, a slider movement, or pressing a specific button. In addition to the abstract drawings, the central row featured channel information for the different sound events. These numbers were a reminder of which channels were used at specific moments in time. Below the main row we have a series of knob control instructions for our mastering chain, where we had a couple of automated machines controlling feedback, reverb and compression. In the spirit of an architectural drawing, the lower portion of our score featured additional technical information. A pre-flight checklist offered the initial hardware configuration of our system. A small legend near the center described some of the icons and symbols used in the score. And in the lower right portion we can find important considerations for connecting the two modular stations and the overall system configuration. Throughout this project, our diagrammatic approach balanced precise visual representation and full abstraction, leaving enough room for poetry in between, as we like to see it. As a result, these graphics expand the listening experience and hopefully provide new insights into the audiovisual phenomena for both creators and audiences. To conclude, we will leave you with a few extracts of this performance.
Hello, we are LiveWare, an audiovisual performance ensemble consisting of myself, Michael Century, music performer and composer, and Shawn Lawson, live-coded animation. Our presentation today spans the spectrum from initial works using real-time live coding of animation, to classical and contemporary music compositions, experimental pieces using expanded instrument systems, and current work in progress that generates real-time animation from datasets trained with machine learning. Both the image and sonic event streams are processes that vary between strict predetermination and free improvisation. We introduce the term process intensity to define this continuum for spontaneous expression. Think for now of process intensity as a metric for real-time decision making, whether this is done live by one of us or carried out by algorithms we are interacting with. Process intensity was introduced in the context of computer game design, initially to designate the degree to which a game uses procedural mechanics as opposed to pre-rendered fixed media assets. The term can be usefully applied to our work in audiovisual performance as well. Music notation in the Western tradition evolved with more or less open parameters for improvisation. Initially rhythm was relatively free while pitches were precisely graphed. Over the centuries, even while notational precision became successively more prescriptive, latitude for improvised components remained a possibility, if not always exercised. Similarly, for live-coded animation, notated scores can serve as a general point of departure when interpreted by the animator or tracked automatically using machine listening methods. Piano Counterpoint uses a strictly notated 1973 score by Steve Reich. We hear six interlocking musical canons that correspond to a live-coded geometric visual score that is entirely generated by my live-coded algorithms. I use a variety of audio frequency intensities to modulate animated object parameters.
The audio enhances the liveliness of the existing movement. The following two examples are musical improvisations. In both, all of the visuals are generated and procedurally modulated, meaning very heavy in process intensity. First is a scored elegy in memory of Pauline Oliveros, the American composer who died in 2016.
This clip begins with an improvised interlude, adding percussive and tremolo effects on the accordion while the corresponding visual part is elongated before returning to faceted patterning. Audio frequency enhancement, as mentioned in prior work, is also used here. In Improvisation for Expanded Piano, I designed a program that is capable of capturing, transforming, and recombining the rhythmic density of the piano part. In the piece Small Infinities, the sounds of a modulated and spatialized digital accordion feed into Shawn's semi-controlled, semi-autonomous visual system. This system reads and overwrites images from a looping sequential image buffer, creating temporally shifting feedback. The Isle is Full of Noises evokes a scene from Shakespeare's play The Tempest. You hear an animistic soundscape inspired by speeches of the characters Caliban and Miranda. Images are generated live using an audiovisual system built with a machine learning algorithm trained on the contrasting feature films Videodrome and Planet of the Apes. During the performance, the machine learning utilizes its latent space of learnt imagery to create real-time animation. A secondary neural net was trained with audio spectra and audio feature data taken from rehearsals. During performance, this information is captured in real time and pushed through the secondary neural net, with those results used to navigate and animate through the primary image-space neural net. The audio consists of eight asynchronous loops of granular-processed human, animal, and nature sounds mixed live.
Consider for a moment process intensity in the previous and forthcoming work. Machine learning introduces an additional step of consideration to process intensity. Samples, or data intensity, are used in a process-intense method to train a neural net. The resultant neural net, an amalgamation of sample weights, could be considered a chunk of data. At performance time, the neural net is process-intense to generate images. Latent Cartographies, our current work in progress, uses a machine learning algorithm to generate real-time animation from map image training. On the fly, I manipulate an image noise generator that describes how to navigate or teleport through the latent space of the neural net. The manipulation is further modulated by attaching sound frequency intensities from Century's performance. The sound intensity samples are added to an accumulator, which acts as a continuously increasing value, like the concept of time, to move through the generative image noise. Musically, the improvised piano part is delayed, transposed, and recombined using various modulated shapes and values. These include time prolongation, various waveform shapes, and random event generators. These are features of the Expanded Instrument System designed by Pauline Oliveros and used with permission of the Oliveros Trust. Like the visual system, the semi-autonomy of the musical processor is only steered by the performer. Its processing draws on a memory buffer that is updated on the fly rather than pre-trained as in the visual system. Wrapping up, we suggest that process intensity is a useful concept to encompass the full range of possibilities in live audiovisual performance.
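The accumulator idea described above, where sound-intensity samples sum into an ever-growing value that acts like time driving a noise-based navigation of the latent space, might be sketched like this (a hypothetical illustration, not LiveWare's actual code; the class, function names, and placeholder noise function are our own):

```python
import math

class IntensityAccumulator:
    """Sums incoming sound-intensity samples into a monotonically
    increasing value, used like a time coordinate."""
    def __init__(self):
        self.value = 0.0

    def feed(self, intensity):
        # Clamp to non-negative so the accumulator only grows.
        self.value += max(0.0, intensity)
        return self.value

def noise(t):
    """Placeholder smooth noise; a real system might use Perlin or
    simplex noise to pick coordinates in the net's latent space."""
    return math.sin(t) * math.cos(2.7 * t)

# Each new intensity sample pushes navigation further "forward in time"
# through the generative noise, never backward.
acc = IntensityAccumulator()
positions = [noise(acc.feed(i)) for i in [0.2, 0.5, 0.1, 0.9]]
```

The point of the design is that louder playing advances the navigation faster, so the musician's dynamics directly pace the traversal of the image space.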
Pre-composed elements like scores, fixed media recordings, or pre-trained datasets can be interpreted or navigated separately by both performers, or they can be sampled on the fly, with the resulting components similarly steered and transformed. The degree of process intensity in a given case varies, therefore, as a fluctuating ratio between predetermination and spontaneous choice. Yeah, thank you for this great presentation. We now have the opportunity to discuss with the presenters. It's an honor to welcome Juan Manuel Escalante and Yin Yu. Also from the Codie collective, Sarah and Kate, welcome. And also from LiveWare, the duo with Michael and Shawn. So it's great to have you here. And we came up with some questions already in this panel, In Front of Your Ears and Eyes. I would like to start with some individual questions. I would also invite you to interrupt me and address questions to other presenters, and I would really appreciate it if this went more in the direction of a dialogue. We have a couple of minutes, so a lot of time to discuss. And I would like to start with Juan Manuel Escalante and Yin Yu. So unfortunately, I couldn't join this live presentation, but I was very interested in this kind of feedback to the audience. So I was wondering: it's a beautiful installation, and it looks like an animation, and I didn't see so much of this kind of live performance. Is this something that is due to this presentation? Well, thank you. It's great to be here. I think what we see also in other talks is how bringing the audience in, by providing them an understanding of the process at different levels, either through the code, through the functions, or through how the graphics behave, enhances the audience's experience.
In this case, we can remember that at the end of our performance in 2020, there was a member of the audience who came and told us: well, you know, I'm not sure that was music, but by looking at the graphics, I could understand what was going on. That was a big compliment for us, because part of our intention was also using the sound-reactive graphics as an interface and as a way to communicate to the audience how the system is behaving. I was just wondering, because on stage you are a kind of performer, and this is what I was actually thinking about: how do you actually interact with this live performance? Could it be visible for the audience? Is that something that you considered? Well, in this case, the audience can interact with the performers, but as performers, we have the graphic score that we follow. So that was our performance score. And the graphics are shaped by the sound in the room as well, so no two performances are the same. And because we are using modular synthesizers, even though some things using code are already pre-scripted, with these instruments electricity is flowing in different ways, right? So we have random modules and we have different events. So as performers, we had to react to what the system was giving us and then shape the audiovisual experience. I'm not sure if that answers the question. Maybe I can add to that. While we perform, as we turn the knobs, the sounds react to the graphics, which later creates this feedback, and then it comes back to our sounds. This kind of performing experience, as a performer and artist, is really new and refreshing. And for the audience, after we perform, they talk to us and try to digest and understand what this performance is, because the visuals and the whole room's reactions are interactive in different ways compared to performances we did before.
So it's a very interesting way to experience that. I was also thinking about, you mentioned Moholy-Nagy and some other pioneers in this field of graphical notation. Color, is this something that you don't like or don't want to use? Does it not fit your concept? Well, that might be a question for Sarah and Kate, where in their presentation we saw an amazing display of color. We are very interested in the diagrammatic language and how these abstract shapes can help us to perform and also to abstract certain aspects of the sound. So as part of this diagrammatic language, we prefer to leave color a little bit out of the picture, to enhance this abstraction idea. But certainly it's part of our to-do list as well, yeah. I was also thinking about Laban notation. Do you know it, or have you considered it as well, as you talked about performance notation? Yes, absolutely. Certainly we have a notational challenge right now, because we have so many different tools, and yes, Moholy-Nagy's scores responded to a different period in time. All the scores that we saw from Cage in the past, in the famous compilation Notations, also responded to a different time. But now we have other tools: we have computer code, we have interactive tools, as you were mentioning, that can bring the audience into the performance in different ways. So how do we notate those experiences for different purposes: for performance, for composition, or even for archival purposes, so that these pieces can be executed in the future? We believe that's a very interesting path to explore. I will pass this question over to Sarah and Kate. As we already talked about color, this is something that is important in your performance. You have this kind of tool already, so color is something that is important in your visuals. Sarah, we already had this conversation in the chat. Can you give us an insight into how this code is actually working?
So you are the coder. Is it custom-made software? It was also mentioned that this is on GitHub, so it's open source. Yeah, I mean, it's open source in the sense that people are totally welcome to look at it and fork it and do what they want. It's not open source in the sense that I promise it will ever continue to work for people and that things won't break, because that's a whole different responsibility. But yeah, basically the framework is called La Habra. It uses ClojureScript, which is a Lisp, Clojure, that compiles to JavaScript, and SVGs. So just the way that you draw images in your browser, or one of the ways you draw images in a web browser. And so, you know, that's really nice for me. Then I abstracted some of it to make it easy to write as we're performing. And the reason I chose this stack is that I don't have to maintain a lot of complex code for visuals or anything. The browsers do it for me. They continue to get faster, they continue to add features, so I can build on other people's work and just focus on the bits that I want, which is essentially a set of functions written to make it easy to add things into the SVG while it's going on, or use CSS animation. Since SVGs are vector shapes, they kind of just display as you type things, in sync to the extent that they're in sync. That's one of the differences, I think, between Codie and a lot of audiovisual work: we don't use any technical ways of staying in sync. It's just that our base timer is 250 milliseconds, which is an eighth note at 120 BPM. I think that's the math on that.
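That timer math checks out; as a small sketch (our own illustration, with hypothetical function names, not Codie's code): at a given BPM, a quarter note lasts 60000/BPM milliseconds, and an eighth note is half of that, so 250 ms corresponds to an eighth note at 120 BPM.

```python
def eighth_note_ms(bpm):
    """Duration of an eighth note in milliseconds at the given tempo."""
    quarter_ms = 60_000 / bpm  # one beat (quarter note) in milliseconds
    return quarter_ms / 2      # an eighth note is half a beat

# Codie's shared 250 ms base timer matches an eighth note at 120 BPM.
assert eighth_note_ms(120) == 250.0
```

With both the visual and musical sides quantizing their changes to multiples of this shared interval, the two outputs stay loosely aligned without any explicit clock synchronization between machines.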
And so we have that timer, and then Kate and Melody code the music, and then it's just us, you know, vibing off of each other. To go back to color: if we're stuck, or even if we just check what Sarah's doing, a new color might have emerged, and that will give us all these ideas for new sounds, or new directions to bring the sound. So color as a score ends up being part of the performance, for sure. And vice versa. There have been several times when Sarah's like, what sample are you using? Because I want dark colors now. So we definitely use color as a way of bringing the two things together a lot. Yeah, and it's a really fruitful area for us to find things while we're improvising. And when doing performances, I've done live-coded performances for other bands, and I use different color palettes for those bands. Like, this is the Codie color palette. And the last thing I will say about color is that it's also just a really nice technique. You could see it a bit in this example: it makes it really easy to play with figure and ground. And I find that to be a very appealing use of color, that it's like, this exists and now it doesn't, and it's not computationally intensive, so you can just be playing it on your laptop and it seems very complex when it's not. So, yeah. I was wondering, have you considered this coding process as a collaborative coding process as well, so that more people are actively involved in the real-time generation of the images? Because we see one code, and one person, Sarah, is probably the coder. Well, the music, I'm actually going to pass this to Kate, because Kate can answer this better. The music is coded between two people, and it's just the way that it's easy for us to record it. We don't show that, but usually when we perform, we have one or two other screens that have the music code on them.
Yeah, so simultaneously with Sarah coding, Melody and I are also coding, and that's how the sound is created. And we do code collaboratively. We've done lots of different ways of collaborating, but our current setup is that we use an environment called Troop, and we have an instance set up on a server, because most of the time these days the three of us are not in the same location. Sarah's in Berlin, I'm in Richmond, Virginia, and Melody's in Brooklyn. So we do everything through a server. And Melody and I are actually coding together in Sonic Pi in this environment called Troop, which means we usually start maybe with each writing our own sort of functions, but by the end we're all over the place and in each other's stuff. And we're not precious about it: you can delete the other person's code and it's okay. So we've grown into this really, for the sound, very collaborative method of creating it through coding. So in the performance, actually, I saw some documentation online where you also have these kinds of split screens that we have seen in your presentation, equally 50%, seeing the outcome and the code. Is this how you actually perform? And the very important question is: how do you perform now in this situation, in this pandemic situation? Yeah, when we are in front of an audience, we definitely split the screen. You'll see the music code on one side and the visual code on the other. If we can, we try to make it a little bit bigger for the visuals, because they're more exciting and beautiful and should have more screen space. For you, it's important that the audience can see what you are actually doing. That's very important. Yeah, that's very important.
It's part of this exposing of knowledge and information that we talked about in the paper. It's also steeped in this tradition of live coding: we have this thing that we say, show us your screens, which has come out of various traditions. One is that a lot of times at laptop performances people say, how do I know you were actually playing, how do I know you weren't just checking your email and pressing play? So there's this idea of transparency, but also of seeing a thought process. Because we are improvising, you see that emerge and unfold and change, and when things go wrong you see the frustrations; error messages, we don't hide these things on the screen. And I'll say I do think about trying to make it approachable for people, all the way from other live coders who can look at it and know exactly what we're doing, to somebody who might just wander in. It goes all the way through to the way we name functions, to the way synths are named, so that even if somebody doesn't know everything, maybe they can look at this one part and be like, oh, every time it says blobs three, the blobs spin around. That's really helpful to know, and it's sort of a way to help people understand that computers aren't magic; they're stupid just like people are, and you can totally understand them with a little bit of time. So I think that's really important to us: the demystification while having fun. You get to go to a bar and dance around and listen to weird music and also demystify computers a little. I would like to invite Sean to this discussion, as we have also seen code there, maybe in a slightly different way than in Codie's performance; we saw a split screen, and in your case it was kind of a layer.
So is it not that important that the audience can actually see the impact of what you are doing, rather than just seeing that you are doing something? You mean in the works presented, or my own philosophy towards this? Both. For me, I try to show as much of the code that I'm editing as I can. I mean, there are many tools in use for live coding, and there's a huge chunk that's hidden behind the scenes in order to expose the front end; that makes it much easier to make changes quickly. So most of the work I present does have code on screen. A few of the more recent works do not have code on the screen, because in some cases the code editing is minimal and not super interesting to see. There aren't that many changes, whereas in other pieces it's more frenetic and there's more of a performative aspect to it for me. In some of the less frenetic ones, it's changing values or moving around through the code and making very small edits, based on my own interpretive listening to what Michael is performing, trying to weave something together visually with the audio. So I do both, I guess. It all depends on the show; it's all for the show. And I would like to invite Michael to talk about what comes first. When you collaborate, is your music and improvisation the starting point of Sean's concept, so to speak, or is it ping-pong? You are unmuted. Every piece has a different point of origin. Sometimes, as when we began, and you probably saw the very first piece, based on an established score by Steve Reich, if it's a repertoire piece, or even earlier music by Johann Sebastian Bach or other pieces like that, we have a score that I'm interpreting, and that's the starting point.
But the middle bunch of pieces, and then the other ones that use machine learning, are much more ping-pong, where we're influencing each other. We already discussed artificial intelligence in the second panel, so this is also a question to you. As we saw at Ars Electronica, there have been a couple of site-specific art installations with artificial intelligence, and of course we know some of the tools. And this is a provocative question as well: thinking about the audience, artificial intelligence is not transparent for the audience, so you don't really understand what it is doing; and on the other hand, if you use similar tools, the effect is similar. So, a question to Liveware. Sean, you go first. Shoot, okay. Yes, to even jump into using degrees of machine learning, I feel like there's a heavy technical step up to move into the space, both on the software side and on the hardware side. So it already establishes a kind of technical divide between those who can have the hardware and have access to the knowledge of being able to program the software for that hardware. I'm fortunate to be at a research university that gives researchers access to be able to do this. But that is one underlying implication of the technology that I think has been pervasive for many years. In the way that we have been using it, we've tried a variety of different techniques, and trying to understand what machine learning is doing is not easy. There's a field called XAI, explainable AI, that is attempting to look at this and understand what is actually happening between the layers of the interconnected neural nets.
At some points we've been trying to manipulate those layers independently through live coding, so we're actually changing the weights of the layers as data is flowing through them, with interesting results. But yes, the nets can produce similar types of graphics when used again and again, and so in a way it can become and feel like a type of filtering. So in both of these pieces, one is StyleGAN and the other is StyleGAN2; there's a slight difference. StyleGAN2 has better results, but it depends on the training, and navigating through the space kind of feels like navigating through almost any StyleGAN2 space. So how does one use a type of filtering, or a type of aesthetic that looks like this, but still do something interesting with it, while also understanding that there's a data curation process: what data am I putting in to make this navigable space? Specific images may come out of it, because you can request a specific image, but everything in between in this multi-dimensional space did not exist previously. It's a huge area that I think not a lot of people have really questioned, because accessibility is very hard. I don't know, do you want to add anything to that, Michael? Well, I come at it from the musical point of view, and your initial question had to do with whether the audience knows what's going on. To address that musically: I'm not using artificial intelligence at all. What I am doing is using a tradition of live processing of input in improvised systems; there's a kind of tradition of this in computer music, and in algorithmic analog music and so forth.
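The idea of changing layer weights while data flows through them can be illustrated without any deep-learning framework. This is a toy sketch, not Liveware's actual StyleGAN code: a tiny two-layer network whose weights are rescaled between forward passes, so the same input produces a different output after the "live edit."

```python
# Toy sketch of live-coding layer weights (not Liveware's actual StyleGAN code):
# a tiny two-layer network whose weights can be rescaled between forward
# passes, changing the output while data keeps flowing through.

def forward(x, layers):
    """Run a vector through a list of weight matrices (plain matmuls)."""
    for w in layers:
        x = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]
    return x

def scale_layer(layers, index, factor):
    """The 'live edit': rescale one layer's weights in place."""
    layers[index] = [[wi * factor for wi in row] for row in layers[index]]

layers = [[[1.0, 0.0], [0.0, 1.0]],   # layer 0: identity
          [[2.0, 0.0], [0.0, 2.0]]]   # layer 1: doubles the signal

before = forward([1.0, 3.0], layers)  # [2.0, 6.0]
scale_layer(layers, 1, 0.5)           # halve layer 1 mid-performance
after = forward([1.0, 3.0], layers)   # [1.0, 3.0]
```

In a real StyleGAN the layers are convolutions with millions of parameters, but the performative gesture is the same: the edit targets one layer independently while inference keeps running.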
There's a tradition of the live performer who plays something, and then an algorithm reacts to it. Maybe it captures what you've been playing over minutes, or over the whole performance, and then gives you something back. And as far as what the audience understands: I want the audience, in a sense, to not even know; a kind of seamless play between what I can come up with, what the machine is giving me, sometimes surprising me, and then what I can actually respond to in that surprise. That's why improvisation and interaction are included in our title. As we are running out of time, I would really love to have a kind of closing note from everybody. We tackled a couple of topics, but we have not discussed process intensity, a topic that I'm really keen on; I also like the intersection with games that we have not yet talked about, and the link in Codie's piece to Henri Lefebvre. So this is a kind of last round: you can address these topics that I have forgotten, and also interact with the other presentations. I would start with Manuel and Yin Yu for a kind of closing note, and maybe, as mentioned, find some synergies between these presentations. Yes, well, it's been great to be a part of this panel and also to see our colleagues' processes. Following on from the previous question, maybe we would conclude and answer the question in a different direction, because we use a score for a performance made with pen and paper, which has been around for quite some time. So yes, we live in a heavily digitized world, where these technologies are present at so many stages of the creative process and so many stages of our lives.
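The capture-and-give-back tradition described here can be reduced to a very small sketch. This is an assumption for illustration, not Michael Century's actual system: a buffer remembers what the performer plays, then answers with a transformed version of the captured material. The note values and the reverse-and-transpose response are hypothetical choices.

```python
# Toy sketch (an assumption, not Michael Century's actual system) of the
# "capture and give back" tradition: record what the performer plays,
# then return a transformed echo of it later in the performance.

class Capture:
    def __init__(self):
        self.memory = []          # everything played so far (MIDI note numbers)

    def play(self, note):
        """Performer plays a note; the system remembers it."""
        self.memory.append(note)
        return note

    def give_back(self, transpose=0):
        """The machine responds: the captured phrase, reversed and transposed."""
        return [n + transpose for n in reversed(self.memory)]

c = Capture()
for note in [60, 64, 67]:              # performer improvises a C major arpeggio
    c.play(note)
response = c.give_back(transpose=12)   # machine answers an octave up, reversed
```

The surprise Michael describes comes from the gap between what the performer remembers playing and what the transformation hands back, which the performer can then respond to in turn.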
And I feel, for us, it was also a way to reconsider the value of analog tools, and how by mixing both digital and analog tools we can shape the results and take the outcomes in a different direction. We believe our scores add maybe a layer of warmth to the algorithmically co-generated images. I really appreciate that, and would just add that, for me, it goes back to a very analog world: the initial motivation I had to even do this kind of live playing with images was that I played in the old silent film theaters, improvising music to early cinema. This is also a topic that we tackled in the first panel: the appeal of the analog, which is very important in our society. Yeah, we'll pass over to Codie. What are your closing remarks? Yeah, so for me, I think one of the links links to Lefebvre, so maybe I can wrap all of this up a little bit. What I find interesting about Lefebvre and Codie is the various rhythms in a city, from the flows of people to the garden rhythms, where there's the daily rhythm and the cycle of seasons, a yearly rhythm, and even longer rhythms. And I always think of Codie's work in those terms: the rhythm of the visuals and the music internally within themselves, the play between them, and the play between the rhythms on a larger scale. It also makes me think of Juan and Yin Yu's work, because when you talk about analog and digital, that's another set of rhythms standing together. The reason these overlaid rhythms are interesting is that they allow us to uncover things we wouldn't notice otherwise, right? It's a step beyond the dialectic: if the dialectic has two, rhythmanalysis has three, and it's about these offset relationships. And I love everybody's work just aesthetically, too.
It's been such a pleasure to see all of it, but I love that it also helps us uncover things by placing things into different aspects with one another. Thank you, I totally agree. Perfect. Final statement, Sean. Oh, no pressure being called out, huh? I guess if I had to wrap things up under our process-intensity theme, I would say that there's a lot of process intensity in each of our own ways, whether it's through analog drawing or digital drawing, or coming up with algorithms, or live, brain-based process intensity when coming up with different code to write. I don't want to speak about our own work, but speaking for the other two, the very awe-inspiring audiovisual graphics coming out of these works have inspired me to think differently about how I want to create work in the future, with different modes of process, whether analog or digital, or mentally thinking about things. Thank you so much. We are right on time, so I will conclude with a topic on which we all totally agree. The topic is in front of your eyes and ears, and this is something that we really appreciate and love. We would love to see your presentations not on our own screens but on large screens, in public spaces. We would love to talk to each other and discuss the performances, disturb the performances, see these coincidences, and smell it and taste it. That's something on which we, of course, totally agree. Thank you so much to Liveware, Codie, and Escalante and Yin Yu. Thank you so much. Stay tuned. We will continue with the keynote in 25 minutes. Thank you.