Thank you. Is it working? Yes. It's my fault. Maybe I should start again. Good afternoon. Welcome to Expanded, to the conference. It's a privilege to kick off a new format within Expanded Animation. Expanded Animation is a format that we established 12 years ago. The 12th edition has a new format: it is a conference. So, from the symposium format to the conference. The conference focuses on expanded animation and all the things that are interactive, so it is the conference on animation and interactive art. We had this idea at the beginning of the year, we got a very positive response from academia, and we have selected a great panel for you.

The Expanded Animation event is hosted by the University of Applied Sciences Upper Austria in collaboration with Ars Electronica, and we have had many collaborators for years now. New this year is the Expanded Play exhibition at the Atelierhaus Salzamt, which kicked off two days ago, I think. Ars Electronica is also on its third day already, and we have a lot of new partners as well, like Creative Region. I cannot mention all the partners, but you can see them on the slide. Funding and partners are one thing, but we have had a huge team behind Expanded Animation for many, many years. This time I would like to start with the volunteers from Campus Hagenberg. A lot of students have been involved since the beginning of this event, supporting the symposium, doing the documentation, the website, the graphic design, the trailer, the t-shirts that just arrived, the goodie bags, et cetera. So this is really a long project where students are involved over a couple of months, and here in Austria we have holidays; even in their holidays they are working together with us, at eye level, which is very important. So this is my first and biggest thank you, to these volunteers and of course to my colleagues from Hagenberg who are teaching and leading this team. I'm also very happy that we have new collaborations, like with Masaryk University, with whom we are doing the Expanded Play exhibition.

And now I'm coming back to the conference format. I'm very happy that we have a lot of helping hands in organizing the conference. I will not introduce the track chairs, because they will be on stage in a couple of minutes, but this is my next very, very big thank you. And I'm also very grateful for the support from Hagenberg in particular. It is not a given that we can evolve this format, and next year there is another idea and another initiative that has to be supported. So, coming back to Expanded Animation: we have a great archive, a great website where you can see all the previous editions. Everything is recorded, and right now we are doing a YouTube stream, so I would also like to welcome the audience that is watching right now. And within the conference format we are starting a new archive, because the conference part has already been closed with the proceedings, which will also be introduced. Now it's my pleasure to pass over to the track chairs, and I will give the microphone to Philipp Wintersberger, who is the track chair for the art research paper track together with Victoria Szabo.

Thank you. Hello. Okay, so first of all, maybe the question is: what is the art research paper track? That's something new in Expanded, and, yeah, the name already says that this conference should expand in one direction or another. So we thought we would like to have an additional format here that would allow different types of work.
Take, for example, my capabilities: I'm not an artist, I cannot really create something, but I would know the methods and how I could evaluate something. So this would be a track for me, where I could take artworks from colleagues, or something that is out there, do a scientific evaluation of it, for example, and publish that. Or I might be knowledgeable about some theories, so I could also write a more theory-based paper and submit that one. So there are different possibilities for people who are not artists themselves to also engage with this community.

And we got 36 submissions from 75 authors from 16 different countries. Austria had 16 authors; I think we were the country with the most submissions, but we were closely followed by the U.K., the U.S., and Asian countries like China, Hong Kong, or Taiwan, and there were also some submissions from authors who were the sole representative of their country, including the Netherlands and Belgium. So we could show that already this first installment is quite international. Victoria and I organized a review process for these papers. We had a program committee with about 25 dedicated experts in the field. Each of these papers received two independent reviews and was scored according to its scientific quality, its fit to the conference program, and its presentation clarity. Based on a ranked list, we then discussed together which papers should be presented here at the conference. We decided to accept 12 papers in the end, which equates to an acceptance rate of 33.3%. I think that's already quite a high bar to get in for the first installment of this track. So I think that's all from my side. Do you want to add something, Victoria?

Sure. First of all, I want to thank you all for including me in this process. I'm new to Ars Electronica. I have been talking with Jürgen about ways to engage ACM SIGGRAPH, for which I'm the chair of the Digital Arts Community advisory group for the conferences, and we were trying to think about how to bring these different dimensions across these two institutions and to bring people from those communities together. And with this particular track, I saw a lot of overlap in community members, and I look forward to seeing more. Thank you.

Then I hand over to the art paper track, to Martin. Okay, hi everyone, I'm Martin, and I'm going to present, on behalf of Bonnie Mitchell, who is here with me, and Barbara Guglieva, the process of how we reviewed the art papers and all your great submissions. We can go to the next slide, the art papers, where we have some facts as well. But in general, before we come to the next slide, and Philipp already introduced it pretty well: what is the difference between the art research paper track and the art paper track? Putting it simply, it's less research, less data, fewer user studies, and more focus on the project per se. So it's more about the craft. This allowed authors to present their art installations, art exhibitions, and all submissions in this realm. In total we had 38 submissions. The review process was in line with the review process for the art research paper track, so each paper received two peer reviews. In total we have 15 papers accepted, resulting in an acceptance rate of 39.5%, which is also pretty high for a first edition. And yeah, this is basically the process of how we created the art papers.
And we also separated it into two categories: art research papers, which are part of the proceedings, and art papers, which are also part of the proceedings; I will get to that in a second. So we have these two submission types in general. And I will also hand over to you, Bonnie, for a couple of words.

I just want to say it was a pleasure reading all the papers. You know, sometimes it's a bit of a chore, but this was not at all; there were a lot of really interesting ideas, and I just want to commend the authors and the artists who are in this room right now, because overall it was a really exciting batch of ideas that were presented to this conference. So thank you.

Okay, and now to the proceedings, which I already mentioned, basically the heart of any conference, right, with all the great works received from the authors who submitted. The proceedings are structured as follows. I repeat it once again: we have the art research papers and the art papers, these two categories. And for us it was super important that we have something that is open and freely accessible, a digital library where we can upload the papers so that they are publicly available. Then also, as I said, open access. And one important part was that they also get a DOI, a unique identifier, so that all papers are persistently available; this is also an important part. That's why we came up with Zenodo, which actually meets all the requirements, and we uploaded all the papers there; you can access them using the QR code. As I said, we combined the art papers and the art research papers into it, and we also have a last section, Expanded Play, which is also running right now, with games and demos that are also included in the proceedings. And yeah, I think Zenodo is a good platform for it; that's why we used it. That's basically it from my side regarding the proceedings. As I said, feel free to try them out, read them, have a look. I hope, or we hope, that you'll like it. And I will hand over, as Jürgen said, to Philipp for the first session, right? Cool.

So then let's start with the conference program. We have this first session on art research papers, which is called Interactive Experiences. We have four great works to be presented here, although one author cannot participate, which was, let's say, last-minute information, so we are a bit more relaxed with the program here. Strictly speaking, we would say we have about 20 minutes per paper; this means 15 minutes for the presentation, followed by five minutes of Q&A. But as we have just three papers to present in this session, I will not be too strict about cutting you off if you speak for a couple of seconds longer. I don't want to call it minutes now, because at some point I will have to shut you down. No worries. So the first paper, which will be presented by Chi-Hung Wang, is called The Time Organ: Precision, Imperceptibility and Synchronization of Quantified Time. And I would ask the author to come on stage and do the presentation.

Hi, everyone. My name is Ji Hong Huang, we are from Taiwan, and we are glad to be here to present our paper. It is called The Time Organ: Precision, Imperceptibility and Synchronization of Quantified Time. I will be the presenter for the first part, and Junhuangling will cover the rest. Humans have long been exploring their relationship with time.
From the earliest mechanical water clocks, humans have divided time into calculable units through tools, continuously seeking the most precise frequency to measure time more accurately. However, such minuscule units of time can no longer be perceived by the human body. The philosopher Henri Bergson categorized time into two types: one perceived by the body, and the other quantified time. When there is a discrepancy between quantified time and bodily time, people tend to gravitate toward standardized, quantified time. Our understanding of time has gradually shifted from human sensation to numerical representation, and we have become increasingly insensitive in our perception of time. This change in the relationship between humans and time made us wonder if there is a way to shift people's perception of time back toward bodily sensation.

With this idea, we looked at the rubber hand illusion and out-of-body experiences: through synchronized multi-sensory stimulation, the brain can be tricked into altered perception. So can we use multi-sensory synchronization to make people aware of their most intuitive perception of time? Based on this background, the Time Organ project was conceived. In the rubber hand illusion, by synchronizing tactile and visual sensory inputs, individuals experience an illusion that transfers their sense of pain to a fake rubber hand. Inspired by the rubber hand illusion, we designed this multi-sensory experiment through the synchronization of tactile, visual, and auditory stimuli. We aim to shift individuals' perception of time from quantified time to bodily time.

The Time Organ experience lasts for seven minutes, with an extra three minutes for setup and instruction, ten minutes in total. First, participants put on a VR headset and place their hand in the dripping device's sensor area. Once they're ready, they are gradually immersed in the experience using a see-through feature. During the experience, the VR uses sounds and visuals to help participants feel their own sense of time. The dripping device tracks their heart rate, and water droplets fall onto their hand at a frequency matching their heart rate. As time passes, the VR and the dripping device's sensations slowly synchronize. At the end, participants see a recall of their historical heart rate data and their sense of time in VR, prompting them to reconsider the relationship between their body's sense of time and the quantification of time. Here is the demo video of the project. Next, we will explain the relationship between the dripping device, the VR, and the participant's heart rate.

What is time? What is time? Everything around us tells us that time is passing through its changes, but humans choose to understand these moments through numbers. Despite the increasing availability of infinitely precise units of time, is this truly time? Can you still sense the purest essence of bodily time?

As shown in the video, the dripping device is designed to resemble a medical instrument, simulating an external organ to help participants perceive their body's sense of time. First, the top of the dripping device has a water reservoir that supplies each experience; the lower part uses a stepper motor to precisely control the timing of the water drops by squeezing the water tube. Second, the heart rate area: the area where participants place their hand has a heart rate sensor that optically reads the heart rate and sends the data back to the dripping device to control the water drop frequency.
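As a rough illustration of the feedback loop just described, where the optically measured heart rate sets the drop frequency, here is a minimal Python-style sketch. The sensor, stepper, and networking calls are hypothetical placeholders; the actual device runs on an ESP32 and its firmware is not part of the talk.

```python
import time

def heart_rate_to_drop_interval(bpm: float) -> float:
    """One water drop per heartbeat: seconds between two drops."""
    return 60.0 / max(bpm, 1.0)  # guard against a zero or missing reading

def run_dripping_device(read_heart_rate, pulse_stepper, send_to_computer):
    """Hypothetical control loop matching drop frequency to the measured pulse.

    read_heart_rate():     returns the current BPM from the optical sensor.
    pulse_stepper():       squeezes the water tube once, releasing one drop.
    send_to_computer(bpm): forwards the reading to the machine running the VR.
    """
    while True:
        bpm = read_heart_rate()
        send_to_computer(bpm)        # the VR side follows the same rhythm
        pulse_stepper()              # release a single drop
        time.sleep(heart_rate_to_drop_interval(bpm))
```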
The data is also sent to the computer to manage the sensory changes throughout the experience. Third, the water drop mechanism: in the dripping device and in VR, this simple yet effective water drop mechanism helps participants feel the connection or disconnection between their sense of time and natural time. Additionally, sound changes in VR further enhance their perception.

On the screen is the design framework of the Time Organ project. The main tools we use are Unity and an ESP32 development board for wireless signal transmission. On the right side of the screen is what participants see in VR after the experience: the display shows their historical heart rate data along with the corresponding time points.

The Time Organ project was showcased this April in France at Laval Virtual's Recto VRso art and VR gallery exhibition. During the event, around 150 participants experienced the project, allowing us to gather valuable feedback and heart rate data. Below is a documentary video from the exhibition in France. Thank you.

In this exhibition, we also received a lot of feedback from the audience. First, the heart rate of most experiencers gradually decreased during the process, which may be attributed to the overall slower pace of the experience. Some experiencers suggested that using heartbeats might be more closely aligned with our concept than using heart rate. Moreover, some feedback provided us with new ideas, such as exploring the temperature of the water and the color of the visuals, which might be areas worth further research. After the exhibition, our project aims to further experiment with whether synchronization can prompt individuals to perceive time through bodily rhythm. We will experiment with combining a wider range of sensory stimuli and investigate how different sensory combinations affect human perception. Additionally, to better understand participants' bodily changes, we plan to incorporate more body-sensing devices and conduct qualitative research to gain deeper insight into their experience.

In summary, our project seeks to explore how the precision of quantified time is often far removed from our most direct bodily experience, leading to an increasing insensitivity to the passage of time. The Time Organ project takes inspiration from the rubber hand illusion, attempting to combine VR with a dripping device to design a multi-sensory synchronization experiment that investigates the disconnection between human perception and bodily time. After the exhibition, further work will focus more on qualitative research to explore whether synchronization can help individuals reconnect their sense of time to their body's natural rhythm. This concludes our report. And finally, let me make a brief announcement: both of us have new artwork on display at the Campus exhibition in POSTCITY. We warmly invite you to visit and provide your feedback. Thank you once again for your time and attention. You can have one of these.

Great, thank you so much for the presentation. There is room for questions; please raise your hands. Go ahead. Thanks so much for your presentation. I noted you said that the device is like a medical device, and I wondered if that actually created stress for any of your participants, the idea that they're being hooked up to something that feels similar to a medical device, or if you were able to counterbalance that effect. You mean the... The apparatus, yeah.
You said that heart rates were lowering, but I was thinking that with something that feels like a medical device, well, the effect of going to see your doctor is often stress. I wondered if that was a concern for the project.

Because the first time we made the device, the shape was like a needle, and everyone was scared to put their hand in. So after that we used the water tube; it feels safer to put the hand on the device. Thank you.

So, more questions? Okay, let's go there. Hi, thank you. If you can synchronize the water dripping with the heartbeat, then why do you need the VR glasses?

We use VR in this project because the dripping water gives us the tactile channel, but we think there should also be visuals and sound for the synchronization. There is research where one hand is a fake hand and one is the real hand, and when they touch the fake hand, the real hand feels it. So we think that maybe the visuals and the sensation of the water drop on the hand can combine and let people feel that their bodily time is the real time.

Okay, we have one more question here. Yeah, thanks. Sorry. Thanks so much for this very interesting talk. I would love to experience it. You said you have it here as a demo? No? I was asking because I think it could be really interesting to experience it. But yeah, in general... That's the water; I just tried a little experiment. The sense of time is different, the heart rate is faster. Yeah, it works great. I was just wondering, what exactly did the participants or users experience in VR? Because sometimes you have a kind of latency; you did not track the actual water drops, right? It could be interesting to have a real "oh yes, I feel a water drop, and what I see in the virtual world is really in sync." How did you manage to present it so that it is really in sync? Because this is what you said: I have synchrony between my vision and my tactile sense, this visual-tactile synchrony. So what did they experience in VR? Did they really see the water drop on their hand? Yes. But then the question is: how did you make sure that it's really in sync? Is there any delay?

You mean how to make them happen at the same time? Yeah, the visual and the water drop on the hand. It's not too difficult, because we know the distance from the top of the device, and we can calculate the delay over this distance. And there is also a sensor there, so when the water drop lands we get the data and let the VR drop the water. Thanks so much.

We have room for one more short question. Anyone? If not, then thanks again, and another round of applause for the authors, please. You could see from the questions we already had that this is really a scientific track, so be prepared to answer why and how you did things. In the meantime, I will hand over to the next speaker: it's Hannah Pukoya, with the paper titled Make Some Energy: Tangible and Interactive Chemical Reactions.

Thank you. So, hello, my name is Hannah. I'm from Masaryk University. I study visualization, and I specifically focus on combining art, technology, and science to basically make science more exciting and more approachable. I'm here to present our project, Make Some Energy: Tangible and Interactive Chemical Reactions.
Yeah, so just to jump into it straight away: human-computer interaction and visualization are very useful for explaining STEM subjects, so science, technology, engineering, and mathematics (I forgot that one). You can use alternative visualizations to, for example, different formulas or even diagrams to make them more understandable and exciting; a really good example is Drew Berry's animation about ATP energy. There are also different ways of explaining and understanding science, for example augmented reality applications, as well as virtual reality applications, and even tangible models.

Our motivation for picking the ATP energy chemical reaction: it is one of the most basic chemical reactions that we need to function, and it is also taught at high school level in most of Europe. This is really cool, but equally slightly difficult because, as you can see on the right side, you just have a bunch of formulas that are not exactly memorable or exciting, and a lot of the time you just need to memorize them without understanding them. And when you look at textbooks, on the left side you have an example of a very simple diagram of a lot of chemical reactions happening. Once again, I am a visual person, but that is not enough for me to understand the chemistry. One of the reasons this ATP synthesis reaction is not understood very well is also that there are a lot of molecules that are quite complex; people don't really understand the connection between biology and chemistry, or their structure.

We opted to create a 3D model of the molecules involved in this reaction, because 3D physical models have proven to be very useful before. For example, Dorothy Hodgkin created a physical model of penicillin, and the structure was then very useful for determining properties of similar molecules. Another molecule modelled in 3D, which you may know, is the DNA double helix, very popular, and we also use ball-and-stick models in high school when we're learning chemistry. 3D physical models can create emotion, and they can also inspire people to learn more, because motivation is raised in learners when the visuals are a little bit more attractive.

So we decided to create a set of experiments. We did research, basically a little more extended version of what I just said. Then we created an ATP synthase model, which is this guy right here. We tested it on experts, or rather got their feedback, improved it, then tested it again on the general public, who are not usually experts in chemistry, and we have another iteration enriched with animations and some cool floor stickers in the Salzamt Atelier House, so if you're free, we would be very happy if you came to see the third iteration. I'm going to pass this on; feel free to play with it, spin it, it's here to be touched. And so far, we have managed not to destroy it. This is the third iteration.

So we used PDB data, real-life data of real proteins from the Protein Data Bank. We created a 3D digital model in an open-source software, 3D Protein Imaging. Then I exported it to Blender, which is another open-source software, and decreased the vertex count from 4.5 million to 2.3 million, so that not only did my computer not explode, but I could actually work with the model further. The next step was dividing this model into several parts: when the model gets to you, you will see that we were trying to make it so it can actually move like it does in real life.
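As an aside on the mesh-reduction step mentioned above (from roughly 4.5 million down to 2.3 million vertices), here is a minimal sketch of how such a decimation can be done with Blender's Python API. The exact settings the team used are not stated in the talk, so the ratio below is only indicative.

```python
import bpy

# Assumes the imported ATP synthase mesh is the active object in the scene.
obj = bpy.context.active_object

# Add a Decimate modifier; a ratio of ~0.5 roughly halves the face and vertex
# count (e.g. from ~4.5M towards ~2.3M) while keeping the overall shape printable.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.5

# Apply the modifier so the reduced mesh can be exported for 3D printing.
bpy.ops.object.modifier_apply(modifier=mod.name)
```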
So the motor, which is the orange, well, yellow part there, can actually rotate, as does the axle, and the other two parts are stationary. We also used simple Boolean operations to represent the chemical reactions that happen within the ATP model: a lock-and-key for the ADP and P that create ATP energy, and also H+ protons, which are simple cubes that are later on also used in different iterations. So, very simple but very effective. And this is an animation that may or may not work. And it should work. Yeah, so basically this is how it is supposed to move, and yeah, a lot of things are happening at the same time. Oops. There we go.

So this is the final product. This was the very first iteration, before we actually spray-painted it, which is the version currently making the rounds. We tested it as a poster presentation at VIZBI, which is a conference and workshop about visualizing biological data, and basically art and science, a very beautiful event. We got feedback from a bunch of experts who told us how to improve it, which was very useful, and one of the most exciting pieces of feedback was that we don't have to have just one protein; we can have an entire electron transport chain, which is located in the mitochondria. The electron transport chain is basically from the left up until here, so as you can see, more molecules are involved. We listened, once again went to the PDB, the protein database, created more proteins that are true to life, and printed some more molecules that are simplified but can still combine with the H+ protons, so you can just insert them into the Boolean, well, the hole created with the Boolean.

We tested this at the annual Researchers' Night at Masaryk University, which is an event for the public to see and check out what research institutions, including universities, are doing. Once again, we had pretty good feedback. There were some little kids who started asking about molecules, which was really nice, like how big is this and where in the body it is, so it sparks a conversation. We had some medical students who said they would really appreciate having it in their chemistry class. So once again, we're kind of diverting from our original target group, which is high school students, but it's nice that it has so many applications. So that's great. Another iteration, or suggestion, was basically to make it more complex and include electrons; it's called the electron transport chain for a reason, it transports electrons. We decided to represent this with magnets. Unfortunately, those are not displayed here; they're back at the Salzamt house.

Overall, the discussion is that our prototype adheres to exploranation, a term describing when you put science and art together to help understand real-life scientific data. We think it would be really useful to put it into science museums, so people can actually learn, interact with it, and kind of enjoy science. There are many examples of functioning science centers, like the Visualization Center C in Sweden, the Exploratorium, or VIDA! in the Czech Republic. And the pros of these models are that they're easily reproducible: once you actually create them, you just need to 3D print them; people enjoy them; they prompt new questions from non-experts; and this can be done for other chemical processes as well. However, when you have a 3D-printed model, you can't really zoom in.
On the other hand, you have the 3D digital model that you can zoom into and, for example, take just a section that you would like to 3D print. And you can't really change the physical models, but you can just make a new one. So in the future, we would like to test this prototype on high school students, because that was our original idea. We would also like to test it on first-year medical students. We would like to generalize this approach to different chemical reactions, and maybe compare an animal cell and a plant cell, which is a comparison that appears a lot in science textbooks. This is just a little sketch of what the setup should be in the Salzamt house. It is done a little bit differently, but it's very similar to what you see right here: you have some 2D stickers on the floor, you enter the cell, go through the mitochondrion, and then you zoom in on the physical apparatus. So yeah. Thank you very much for your attention.

Thank you for the presentation, and for showing me that I'm not at the level of an average high schooler, obviously, when it comes to these reactions. So again, we have a lot of time for questions. Do we have some in the audience? Thank you for the presentation. You mentioned one disadvantage, that there is no possibility for zooming in. Any ideas for a next step to offer this as a possibility?

Hello. So, for zooming in, maybe we could make several different models, where you have a bigger and bigger model, or you basically just focus on one section of the model that you would like to see enlarged. Other than that, you can have a 3D digital display that you can turn around. Cool. Okay. More questions? Sure, you in the back. There's a microphone coming, don't worry. Yes.

Thank you for a very nice presentation. I was wondering, have you thought about a future iteration of this piece, maybe in a virtual, kind of digital, systems-based approach, where maybe this is a little game and you can interact not only with the models but also with the system itself? Thank you.

Thank you for the question. That's actually very interesting. We're currently testing it, and there were some people who saw the animation overlay and thought that if they move a specific proton, or, not proton, sorry, protein, something is going to happen with the animation, and that's what kind of gave us the idea for the next iteration: maybe something can happen. Also, there were some other people we were talking to who thought it would be really cool to see all of this in VR. So that could be another modality to test. So yeah. Thank you.

Yes, I also have a question. What would the test plan look like? How would you evaluate whether it's useful or not?

So currently we have a lot of open-ended questions and Likert scales, so we have both some quantitative and qualitative data. However, it would be really cool to maybe have some knowledge tests in the future. Once again, the high school curriculum would be a pretty appropriate thing to test it on. Approvals for testing people of high school age are a little bit difficult, so that would take a bit longer, but it could be good. Yeah, but there are also non-knowledgeable adults like me, so you can take me as a test subject. There would be room for more questions. Anyone else from the audience? If not, then we all thank Hannah again for the talk.
And we move to the concluding presentation of this session, by Weihao Qiu: Fencing Hallucinations, Increasing Artistic Control in Interactive Media Arts by Merging AI Models with Hand-Coded Programs. Please come to the stage.

Hello, everyone. I'm Weihao. It's my great honor to be here today to talk to you about my project. I want to thank the Expanded Animation committee again for selecting my work and having me present it here. My title is pretty long: Fencing Hallucinations, Increasing Artistic Control in Interactive Media Arts by Merging AI Models with Hand-Coded Programs.

Before I go to the actual project, a bit about me: I come from computer science. My specialization back then was the Internet of Things, where I dealt with a lot of sensors. And I'm interested in photography, so I wanted to use computation to generate images. That led me to my program in Media Arts and Technology at the University of California, Santa Barbara, and to my professor George Legrady, who is the second author of this paper. He works mainly as a visual artist, he also started as a photographer, and he was one of the earliest people to use computer code to generate images. He runs a lab called the Experimental Visualization Lab, where we deal with a lot of data visualization and also generative algorithms. So that's kind of the background of where we come from, and our perspective on AI is highly influenced by this mix of backgrounds.

Today I will talk in this structure: I will introduce the background of this research, then I will talk about the challenge and the solution. My method is basically the Fencing Hallucination project as a practice to implement the solutions. Then we will see some outcomes from the research, and that leads to the conclusion with my discussion part.

OK, so this is the background: all these AI videos. What you see is, I would say, great, because you can see a lot of camera movement, and it's highly photorealistic. And going from DALL·E to here only took about a year and a half, a very crazy time. These days, everyone can be a director, can be an artist; you're just writing prompts to computers, right? So that's our expectation, what we see today in AI technology. But let's take a look at how we got here. Basically, this recent rise of AI goes back to 2015, when DeepDream came out with this image, and then it developed into different GANs and now the diffusion models, which, as you can see, are moving towards more photorealism and are also, so to speak, easier to use, because you can just write prompts instead of dealing with models. And the other trend we see today, in these two images, is the news talking about how generative AI will replace coding. The Nvidia CEO said kids should not learn coding anymore, because English is going to be the next coding language. And Tesla, in their Autopilot program, they just removed all the code. So now coding seems to become something unnecessary and redundant. But we know that for creation, this might not be true.
This is our expectation: the technology seems to offer the totalizing possibility that we can skip all the work of art making, of writing, of making any number of difficult, contentious decisions, and go straight to the result. So that's kind of what we expect the AI to do.

Specifically for creating visual art: using code, using programs to generate visual art is not new, right? If we really go back in history, to the 1960s, when art and technology came together in the art-and-technology movements, we see how people did this. And what's interesting is that people viewed not only the outcome as art, but also the code itself as art. So coding is very important in these settings, because coding also determines the agency of the artist. Because this conference is about animation, I'm using, on the left, workflow and artist control as an analogy from the film industry. When you create a film, you have a workflow; you don't just go straight to the film. You write a script, you have a storyboard, you have a person handling the lighting, you create the set, and you shoot the film, right? So the entire workflow gives you a lot of room for creation. Each section can have a specialized artist working on it, and in each section you have very fine control over the results. That's what artistic control is about. Like in this anime, you have a part controlling the face in very fine detail. The same thing happens in coding: this is a typical programming interface in TouchDesigner, where you can see the workflow from left to right, and in each of the modules you can change the settings.

So basically, we are talking about two kinds of programming in computers. One is using code: you are programming with explicit instructions. With AI you are also programming, but you are programming with examples. So in code, you design with code; with AI, you gather examples. And code is changeable; an AI model is not changeable once it's trained. My proposal is to combine these two, so we can use AI together with traditional programming. I'll give you an example. A typical AI model goes from input to output. If we just add some programming at the two ends, then we can make a longer workflow. What the current trend is trying to do is make this box bigger and bigger, to cover more of this region, but that actually overlooks the benefits of traditional programming. With the programming there, first, you open up more customizability for the creation process, where you can add different controls. And second, you can also connect different kinds of input and output to the model: you can use programming to translate different things so that they can be processed by the AI model, and you can also translate the output of the AI model into a different output, so the same AI model can be used to do different things. And then you can also connect different models together using programming. Okay, so that's basically the concept of AI and code, sketched schematically below.

And what I put this into practice with is the Fencing Hallucination project, where the workflow on the left can produce a beautiful image like the one on the right. And it's in real time; it's created from people's movement. So let's take a look at 30 seconds of this project.
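To make the "code at both ends of an AI model" idea from a moment ago concrete, here is a schematic Python sketch of such a pipeline. Every name in it is an illustrative placeholder, not the project's actual code.

```python
from typing import Callable, List

def interactive_step(camera_frame,
                     parse_skeleton: Callable,  # hand-coded: frame -> joint coordinates
                     pose_model: Callable,      # AI: player pose -> opponent pose
                     image_model: Callable,     # AI: pose pair -> stylized image
                     layers: List) -> List:
    """One pass of a hand-coded workflow wrapped around AI models.

    Hand-written code translates the camera input into something the models
    can consume and translates their output into the accumulated visual; the
    AI models sit in the middle of the pipeline.
    """
    player_pose = parse_skeleton(camera_frame)       # code: input translation
    opponent_pose = pose_model(player_pose)          # AI: predicted response
    frame = image_model(player_pose, opponent_pose)  # AI: poses -> image
    layers.append(frame)                             # code: output translation, layering
    return layers
```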
So yeah, the project has an interactive part, where you are basically playing a fencing game against the AI. And the other screen dynamically translates all the moves into image layers. Finally, it uses the aesthetics of the chronophotograph, the early photography experiments from about 140 years ago. The inspiration comes from general interaction principles and also from the movement visualizations we have seen in history.

I'm just going to go over how this project was done. Because it is an interactive program, basically what we try to do is translate body movement into images, with software in the middle doing this in real time. If we divide it into steps: the body movement is first parsed as skeletons; from the skeletons we trigger a skeleton response, which is the opponent you saw; then these are translated into images; and the images become the final image. In each of the steps, what's cool is that you can choose between using AI and using traditional programming, and here I'm making this choice, so it's either-or. I'm going to talk about how the two AI models were made, but also about how I programmed them together.

The first AI model basically predicts the pose. I have this dataset on the left, which is all the videos, and I also parse them as skeletons, so these skeletons can be used to train the model. For each of the videos you just saw, the video frame is parsed into these 25-by-2 numbers: 25 joints, each with an x-y coordinate. Then they are fed into these layers of neural networks to generate another pose. And just to let you know, I also feed in the distance between the two poses as an input. What the model predicts is the gray one, and what we try to do is minimize the difference, so that in the end the two are aligned. Once the model is trained, it has learned how to map out this relationship. And then I programmed this: I used my own pose in real time with the same model, got a result, and aligned them together. So basically that's how I programmed a real-time pose feed into this AI model to create this interactive experience. On the left, that's me doing the pose, and on the right is the AI prediction. Actually, the AI is trying to beat me, so it's always trying to be more aggressive.

Okay. Once I have this, I try to generate some realistic images from it. Back then, Stable Diffusion had just come out, so I tried this experiment. I started with just vanilla Stable Diffusion, where you can use text and an image. But the result is not good, because it suffers from two issues. One is the styles: as you see there, they're not the same. And also the skeletons are not really matched. To solve them, first I use something called DreamBooth to fine-tune the model with the examples, which helps the model keep the style consistent. And second, I also have another module called ControlNet, which basically takes the skeleton input and then generates the result. So finally, with these two upgrades to the model, I can get this image. As the program runs, it generates an image every second; then you can layer them together, and finally we get this very complex image that would be very hard for an AI model to generate in one step.
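The two upgrades described here, a DreamBooth-style fine-tune for consistent style and a ControlNet conditioned on the skeleton, can be approximated with off-the-shelf tools. Below is a minimal sketch using the Hugging Face diffusers library with an OpenPose ControlNet; the model identifiers and the simple brighten-blend layering are assumptions for illustration, not the authors' exact setup.

```python
import torch
from typing import List
from PIL import Image, ImageChops
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Pose-conditioned generation: the skeleton image steers the composition,
# while a fine-tuned checkpoint (e.g. via DreamBooth) would keep the style stable.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # stand-in for a fine-tuned model
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

def generate_layer(pose_image: Image.Image, prompt: str) -> Image.Image:
    """One image per second of interaction, conditioned on the drawn skeletons."""
    return pipe(prompt, image=pose_image, num_inference_steps=20).images[0]

def accumulate(layers: List[Image.Image]) -> Image.Image:
    """Naive multiple-exposure effect: brighten-blend the successive frames."""
    result = layers[0]
    for layer in layers[1:]:
        result = ImageChops.lighter(result, layer)
    return result
```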
This simulates the multiple-exposure effect of the chronophotograph. So there is an example of how this is translated. Okay, and that leads to the outcome of this project, which is an exhibition. It was exhibited twice and attracted 800 participants. The installation has the two screens you saw before, and also a strobe: when people are very active, the strobe flashes, just to simulate the experience of a photo being taken, even though no camera is actually used. There are some live videos of people interacting. On the left, the participant is very good at fencing, so he was taking this really seriously; he tried to beat the AI. On the right, the girl was just dancing. And both of them generate different results. For me, it's very enjoyable to see people interact with my project. And there is also a curated set of images. What I try to highlight here is that all the images you saw are very different, but they also share an aesthetic style. Each of the images is pretty complex, and they differ when your movement with the AI differs.

Then I would like to talk about some future work. The current approach, what I did, is really demanding in terms of technical proficiency; you have to know a lot of tools to be able to do it. But luckily, today we have a lot of off-the-shelf tools, and I will also open-source some of the data collection tools I used, to make it easier for people to create their own AI models. The second drawback of this project is that I'm not inventing any new aesthetics here, because I'm basically remediating previous aesthetics from Marey or McLaren and their work. That is a decision I made intentionally, because I think nowadays, when people come to an AI image creator, they always ask the AI to create images that did not exist before; they try to find new aesthetics. I would say this gesture is to show that we can also look for a different way to use AI. But I also want to explore more aesthetics in the future with the same workflow.

I also want to take a look at the state of the art in AI development. We can see that controllable creative AI is already happening. On the left, you can see the ComfyUI tool for Stable Diffusion, with all these connections, actually very close to the TouchDesigner image we just saw. So people have noticed this drawback and are already making improvements. And on the right, this video-making tool from Runway ML just came out, where you are able to change the background and the foreground separately. So it's very controllable at this point. I think this trend is already going in the right direction; we just need more and more practice. And not only in the creative AI realm; we also have scientific research doing the same thing. For large language models, people came up with something called AutoGen, where basically each large language model becomes a conversable AI agent, and the agents talk to each other to complete a more complex task. So as a person, you basically program how these AI agents communicate with each other. There is also a newer term, compound AI systems, which came out earlier this year and talks about similar ideas.

That brings me to my conclusion. First, generative AI serves as a new kind of tool for content creation.
But current development trends tend to eliminate the creative process and coding. Insights from art-and-technology practice since the 1960s indicate that coding is essential for creative expression. As demonstrated in Fencing Hallucination, using traditional programming together with AI models provides greater aesthetic control and allows artists to regain their agency when using AI tools. OK, that is the end of my presentation. Thank you so much for your attention, and please feel free to ask any questions. Thank you. We already have questions.

Thank you very much. I was very curious about the different responses between your expert fencer and your novice fencer. Did they give you different feedback on what the project was like, how they perceived it?

Yes, yes. That's actually an inductive bias of the AI model, because in the training data you only see professional fencers. So if the input is a professional fencing pose, you get very fencing-like moves. But if it's a non-expert just doing random things, the AI model behaves very boringly; it just stands there, not doing anything. So that actually encouraged the audience to move more, and when they trigger the AI to do some fancy moves, they feel excited.

More questions? So I also have a question, or do we have another one already? OK, good. Yeah, I have a question. You showed fairly realistic AI generations, but have you thought about maybe doing a more abstract spin on it, like futurism or cubism?

Yes, that's a good question. Actually, I have tried it, and it works fairly well. As I said, this project is more about trying to make the AI images not so "AI". People have this general stereotype about how an AI image should look: it should have morphing, not-understandable forms. So that's kind of an intentional decision. But yes, the same technology can produce any style of image.

Okay, so you showed us your architecture, where you argued that this kind of mixture between AI and programming could be important in the future as well. Based on the prototype you showed, it felt like you needed the programming because the available AI tools just did not produce the result you wanted; for example, the skeletons were not aligned, it didn't have the visual styles you wanted, so you needed to include programming to get there. But AI engineers would then say: yeah, sure, but with better and newer models we could also resolve that, so you could make what you call programming part of a future prompt. How would you respond to that?

I think that's a very important question. As I also showed there, that's the trend, right? The box becomes bigger, you have a larger model. My belief is that a larger model won't solve everything, especially when it comes to physical things. An AI model won't build an art installation, won't build an interactive installation; it doesn't consider the experience. So for all these control decisions, you have to have some way to enter the system, and I think at this point programming is the essential way to do that. Thank you. Do we have another question from the audience maybe? Yes.
You showed some really fascinating examples when you were talking about programming, you know, Coelho's work and a number of other people. Do you see the programming actually becoming essential to the aesthetic output, instead of just a means of trying to achieve a goal? How can you bring those models, the results of those models, into a different realm aesthetically through programming?

Yeah, I think that's a good question, and I also feel that's kind of a drawback of this project: it doesn't explore new aesthetics yet. With pure programming, with no AI, you can do anything you want, and every line of code determines the final result. That's what we are used to, and that's what we like, right? But when AI comes in, because the training data is always sourced from online, it already carries an aesthetic that you cannot remove from the AI. What you can do is try to guide it towards your intention. In my case, I'm trying to respond to the general use of AI, which is very similar everywhere: it tries to generate something that looks real, looks like a photo, but is very imaginative. When I presented this in a different place, people pointed at my image and said, this is not an AI image. That's what I want. Because I presented it along with some images that were AI generated, and they could tell those were AI generated, but they said my image is not. So what I should say is: yes, with the programming, I did control the aesthetics; I just didn't steer it in a different direction. Yeah, thank you.

A quick follow-up question. Your question leads me to ask: are you more interested in the dynamic interactive experience design, or more in the chronophotograph as a final product? Is one or the other the ultimate goal or vision?

Yeah, the ultimate goal is the interactive part. I mean, this started three years ago; at that time AI was not really interactive, and you are limited by your input bandwidth, which is, yeah, it's hard. So I feel real-time interaction was really important, a breakthrough at that time. As for the chronophotograph, that's a dual decision, determined both by my interest in photography and by the fact that chronophotography, in its early age, was actually used to study motion. I think it's the first computer vision without a computer. So I think this is a very important history, and I wanted to respond to that, so I picked it.

Well, I was just thinking back to the presentations this morning and wondering whether a system like yours could be used to produce animation, in a variety of contexts, for example. Yes, yes. And now, because of the advancement of the technology, you can do this in real time as well. It has already become like a filter that turns you into a different character in real time. Yeah, thank you.

So I think we can conclude the session here. Another round of applause for all the authors who presented in this session. And yeah, as announced at the beginning, we have finished this session a bit early, so we now have nearly an hour's break until we continue with the first session on the art papers, starting at 4:15 here in the room again. In the meantime, grab a coffee, go to the authors, the presenters of the last session, engage with them, ask them your questions, and we're happy to have you all back in 55 minutes. Thanks for listening.