Thank you. Unfortunately, we are now out of time when it comes to the games and art track, but we'll continue immediately with a new topic. I'll hand over to my colleague, Houston Rodriguez, who will do the introduction. Thank you very much. We've reached the final panel of Expanded Animation 2022. On this panel, we have two very interesting talks. The first one is Philippe Pasquier. Philippe Pasquier is a scientist specializing in artificial intelligence. He is also a multidisciplinary media artist and educator. And he is a professor at the Simon Fraser University School of Interactive Arts and Technology in Vancouver, where he leads the Metacreation Lab. Thank you very much. You're very welcome. Thank you, Austin, and thank you to the organizers for inviting me. It's really a pleasure to be here today in person with you all. So I'm going to talk today about AI indeed. And as we all know, AI is about everywhere now. But I'm not going to talk about AI in the sense of strong AI, sentient AI, the AI that possibly will replace us all, because I don't believe in it. And in science, we have no idea how that could be the case so far. It's a research topic, but it is not a reality. What is very real, though, is that we have a lot of algorithms that are automatizing tasks that previously, and not that long ago, were only possible to achieve using a human brain. And that goes from finding the shortest path on a Google map to driving a car, flying a plane, regulating a nuclear plant. Those are tasks for which there is a best answer. Creative tasks are tasks for which that's not the case. There's no such thing as the best graphic design, an optimal animation, or a Pareto-dominant joke, for example. And so in the Metacreation Lab, we really look at: can we automatize partially or completely those creative tasks and augment creative tools with those algorithms? And we do that across domains. I'll talk today about examples in music, in dance, in video games, in animation, of course, because this is the Expanded Animation symposium, and other domains. And at the Metacreation Lab, we do that using a truly interdisciplinary approach, where we produce the algorithms, we publish them, we evaluate them. And if they are good enough, which recently has been the case, and increasingly so, then we apply them in collaboration with the industry, whether developing new software or augmenting existing software with those new features. And then, because we are artists ourselves, when those algorithms work, we're also the first users of those softwares, and we make art with these algorithms. So today I'm going to go through two broad categories of creative AI that are out there. The first type of creative AI algorithm that we find, an application of those algorithms that we find, is computer-assisted creativity: the idea of augmenting the creative software that is used every day by millions of creative workers clicking on buttons to generate the creative assets that the audience will later consume. So that's computer-assisted creativity. We want to embed the AI to augment the software with generative capabilities. The second family of approaches are embedded generative systems, embedded generation. And that's closer to pure automation: you generate the asset, you generate the content, in real time.
You want to generate the right content for the right user at the right moment. You can't do that ahead of time. You've got to do it on the spot. So that applies to games, to live performances, to some interactive installations, interactive systems, and such. So let's start with the first family of approaches, computer-assisted creativity, sometimes called co-creativity. And we'll see the opportunities and challenges arising as we go through. Let's start with computer-assisted graphics, because that has been very, very popular. We worked a while ago, prior to neural networks, on style transfer. This is my face in the style of Rembrandt, and this is it in the style of Horton. And more recently, we used neural networks like everyone else, and we pushed those new features into some creative applications. We worked with a company in Toronto called Generate, and not only can the user apply filters that were pre-trained, but they can train their own filter and develop their own little tool, their own little effect, that they can apply to their photos and videos, thus enhancing the creative process for them. And then you're all very aware of the flood of text-based image generation, the computer-assisted creative tools that are out there right now. There is also a wave coming of such tools to generate video snippets or little animations, thus changing the game quite a bit, from "I can make an artwork" to "I can describe an artwork that I want and the system will make it." But we see that it doesn't go without issues, of course. What people know less about, because we're not one of the Googles or Facebooks of this world, so we don't have a big PR machine necessarily, is that there are systems out there that do the same thing for sound design, for example, where you describe a scene. Maybe it's a scene of a movie or just a soundscape that you want to generate, the length of it, and the emotions that you want to transpire through it. Maybe it starts calm and finishes hectic. Maybe it starts sad and finishes happy. And then the system will generate it for you. And if you don't like it, you generate it again. And eventually, you can also download the mix and adjust things by hand. So there's this idea of computer-assisted creativity: you go back and forth with a system that does the bulk of the work, and you act as a curator and as someone who adjusts and controls the system so as to generate the assets you want. That changes the creative work and the creative process quite a bit. So here with this system, if I want to hear a waterfall in Thailand, it might sound like this. While if I want to listen to a city in the bush, it might sound like this. Of course, as artists, we can also turn the system upside down, make much more abstract requests, and use and abuse the system in more aesthetic ways. So this is "a quenching rain drenched my burning head." Recently we had a very successful algorithm for computer-assisted composition. So that's another creative task, symbolic composition, the writing of a score, not the playing of music. And so we trained a transformer architecture, the Multi-Track Music Machine, on the largest dataset of music. To get back to Isabel's talk, this dataset is heavily biased toward Western music, because that's what we have in digital form right now when it comes to scores. In fact, a lot of non-Western music cannot even be written in scores.
But nonetheless, it is the largest, and it is a superhuman composer. It knows so much more music than any human can. And it's very versatile and a bit more controllable than its competition, even though it's less famous, because we don't have the same PR machine. And so it has been quite successfully applied by a number of companies. I'll talk about that in a second. But let's watch a little video that shows some of the features of the system. Thank you. So you saw here that it is easy to generate variations, copyright-free variations in fact, of existing songs, add tracks to your songs, resample some part of them, and progressively move toward a new composition that you co-create with the system. If you want to try the system, there's an online interface that we developed. You can try it for yourself. It's quite powerful. And it's been picked up by the industry. The game industry, for example. Elias in Sweden developed software for composers to make tracks for video games. In video games, gamers play hundreds of hours, so repetition is a problem, and the need for variation is real. And composers are expensive. So if you have 11 million players that play 20 hours per week, how many composers do you need, really? Way too many is the answer. And so we're also working on a plug-in for Ableton, and working proudly with Teenage Engineering on integrating some of those algorithms into synthesizers, where the algorithm can take care of pattern generation and the user can actually play with the sounds, which is what people like to do, especially if they're not really composers but they want to make music. In fact, in the music software industry, most of the users, 90% of them, buy a software and will never finish a track. So maybe with those co-creative systems, we can lower the barrier to entry for creativity and for those creative tasks. We use those systems ourselves, as I said, and you can go on Spotify or Apple Music and check my album, but we also work with a number of artists, and we got funded by the Canada Council for the Arts to produce an album of artists using the Multi-Track Music Machine. Here's a quick example of house music. All right, and so those systems, increasingly, are going to be deployed in existing software. We work with Steinberg right now, we did a plug-in for Cubase, and we're working on an evaluation study. And the reason why evaluation is important, especially the evaluation of the acceptability, what we call the acceptance of the system, is because, again, those computer-assisted creativity systems are going to be applied to pretty much every creative task. This is a list of some of the tasks and domains in which those systems are being deployed right now. So I talked about visual art, photography, music, and sound design today, but there are also equivalent systems for animation, graphic design, design, fashion, architecture, writing, you name it. It's actually a problem, because our students are using those systems for cheating at school, which is an emerging issue. But eventually everything will be covered. So what's really important, and what we're trying to do now, is serious, long-term studies of: yes, if we deploy the systems, are they competent? That's a normal user study evaluation. Is the system perceived as reliable and competent at its task? Yes or no.
Nowadays, it tends to be the case. Are they efficient? Does the system allow saving time or effort? The cost question, a very capitalist type of question, but very real for a lot of companies that have legions of animators and composers working for them. But for us, what's more important, in the long run, is: is there agency? Is there enough controllability so that you can express yourself with a co-creative system? Do you feel at the end that this is your work? Right now, when I use DALL-E, I have fun, but I don't think this is my work. And that makes a big difference in terms of phenomenology. So we are starting to look at new methodologies and ways to evaluate the system besides the surface-level experience, and trying to see how it feels at the affective, subjective, artistic, creative level for those users. Both for beginners, for whom sometimes it's the first time creating something that they really like, because they're not really capable of writing the music themselves, for example. But also for professionals, people who make a living out of using those systems. How does it feel to use those systems, and how does it feel to use them every day? Does it feel better than clicking on the usual buttons for a day? Maybe, maybe not. So we have some results. It's not published yet, so I'm not going to talk about it. But this is where a lot of the research is going in HCI, in human-computer interaction, right now regarding AI. So that's the first family of systems, computer-assisted creativity systems. We're going to augment software, creative software, in front of which a lot of people spend a lot of time. In fact, creative tasks are the main use of computers now. It's not the military anymore. It's not industry anymore. It's people doing creative tasks: photoshopping things, doing video editing, making music, playing games, such tasks. The second family of creative AI systems that are out there are real-time, online, embedded generative systems. That's closer to pure automation. Why would we want a system to generate the assets from scratch without human intervention? Why, in other words, would we remove the human from the loop? Well, there are many reasons, but two good ones stand out. First, with interactive systems, we have to move away from linear media. If I make a movie, it's an hour and a half long; it makes complete sense to have a composer make the music and to have every image looked at one by one. But if I program a video game and I have, again like World of Warcraft, 11 million players playing 20 hours per week, how many animations do I need? How many hours of music do I need? We need to move toward procedural systems, otherwise those game studios explode in terms of cost. And games were already bigger than TV plus cinema together. And the second reason why we might want to generate from scratch, aside from the intellectual challenge and the research potential there, is that we want to provide users with adaptive experiences with those interactive systems. For any interactive system, and it's true for learning, for playing, for any sort of interaction, there is such a thing as a flow zone. If a game is too easy, if an experience is too easy, it's boring; you are going to lose your user. If, on the other hand, it's too challenging, too hard, it's going to put the user off, and you're going to lose the user too. And this flow zone is different for everyone. You know, you get beginners and intermediate and high-level players maybe, or learners. But not only that, it changes for everyone.
And it changes for everyone at a different pace, in a different way. And so I want to eventually be able to track the user's flow zone and to adapt the experience according to that user. In other words, I want to increasingly be able to generate the right experience for the right user at the right time. We're not in a capacity to do that if we do everything manually. So here are some examples. Ten years ago already, we won some awards for automatic level generation for games. And we've been working since on generative music and adaptive music for games. Indeed, you can't really play really sad music when a player is on a winning spree, or play very happy music if the player is about to die after 300 hours of play. And you can't even, for a two-person game, play the same music to both players, because they're having a completely different experience if one of them is winning and the other one is losing. Another reason to have generative online systems is in the artistic realm: we work with musical virtual agents, interactive generative systems that we eventually get to play with on stage. And because they play with us on stage, like any other musician, they need to listen to us and react to us in real time. We used such agents the last time I came to Ars Electronica, which was in 2017. We were showing downstairs at Deep Space 8K a piece called Yota. Our agent was generating the music, trained on a composer, Mehmet, and then Uç from Turkey was generating the visuals. Thank you. All right, that was a situation in which the agent was talking to the visual system while generating, so you got a synchronization between the sound and the images that would be hard to do in real time otherwise. But we could have canned that one. Later on, we really took a risk and played live on stage with the system. I had this dream of playing with a number of composers, but they were dead. So we decided to train the agent on their recordings and revive them, hence the slightly controversial name of the piece. And we were invited to play that piece at SAT. If anyone is going to Montreal, I highly recommend checking out the Society for Arts and Technology there. You lie on the floor, there are 250 speakers and a dome projection. The sounds were traveling, the agents making the sounds were traveling, and our sounds were traveling too, leaving traces on the dome. And that was a lot of fun. So you need agents to be responsive and to generate material in real time when you play on stage with them. In fact, they have to act like an improviser. Now for another domain, closer to most animators, although music is maybe the expanded animation of the soul, I don't know. I know that Jürgen and the organizers of this symposium love to expand animation to new boundaries. But I'll come back to something much closer to most animators: generative movement and generative animation. We've been working with a number of companies and motion capture studios in Vancouver on movement generation. And so what we did, the pipeline to do so, is to record dancers and actors acting and dancing and doing all sorts of movements with a variety of emotions. Here we use the valence-arousal model. Valence is positive or negative, happy or sad. Arousal goes from low, you're tired, to high, you're aroused, tense. If you have high valence and high arousal, then you're excited, et cetera, et cetera. And so we record those actors in a number of different states. And they're really good at playing it. And we have this database.
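To make the valence-arousal description above a bit more concrete, here is a minimal Python sketch of how a (valence, arousal) pair can be mapped to a coarse emotion label. The labels and the zero thresholds are illustrative assumptions, not the actual annotation scheme used for the mocap database described in the talk.

```python
# Minimal sketch: mapping a (valence, arousal) pair to a coarse emotion label.
# The four labels and the 0.0 thresholds are illustrative assumptions only.

def emotion_quadrant(valence: float, arousal: float) -> str:
    """valence and arousal are assumed to be normalized to [-1, 1]."""
    if valence >= 0.0 and arousal >= 0.0:
        return "excited"   # positive and energetic
    if valence >= 0.0 and arousal < 0.0:
        return "content"   # positive but calm
    if valence < 0.0 and arousal >= 0.0:
        return "tense"     # negative and aroused
    return "sad"           # negative and low-energy

# Example: a recording annotated with high valence and high arousal.
print(emotion_quadrant(0.7, 0.8))  # -> "excited"
```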
Then we do what we call ground truthing. We check: is it the case that movement can express emotion that way? And so we ask thousands of people online to look at those recordings of a skeleton moving, no facial expression, and we ask them, what emotion do you see there? And it turns out that people do see the right emotion, and what we call the inter-rater agreement is really high. And that means we can train a machine to recognize emotion on that database. And we did. And so now there are applications where, if you're at your computer and the camera is on, the computer knows if you're happy or sad, tired or excited, things like that. So we have this database of movement with emotion. Now we train a machine on it. And the database is online, by the way, for those who are interested in mocap data. There's very little free mocap data online, for the simple fact that game companies, film studios, et cetera, keep it proprietary and don't want to give it away. So it's hard to do research in this area, in this age of big data. So here is a neural network trained on that data, a factored conditional restricted Boltzmann machine, but that is not really important. And it generates the movement in real time. It's like a game controller. You can move this little character. But what the industry is interested in is not having one character. It's having 1,000 characters. So we mocap several performers, and we have a latent space. And here we're moving from the style of one performer to the style of another performer. And so with 10 really different performers, I can generate 10,000 different signatures of movement. I can also control the emotion. Here we're getting calmer in the walking, and eventually we can get more tense. And then I can not only generate movements that I've mocapped, but I can generate any movement that the body can do, because the system has enough data to generalize in the latent space of possible movements. And so, therefore, I can take a famous actor and have a model of the way that person moves. And you know, movement has a strong signature. Often you see a friend in the distance, and you're like, you know, I recognize him or her just because of the way they move. Or here, the character was looking a little sad, and now he's looking a little bit more happy and proud. We also worked on generative dance. So here we have a neural network that is trained on the movement of dancers, club dancers grooving, not contemporary dance or classical dance, and at the same time we train the network to listen to the music. And after that, we can play any music, and the network will try to dance to it. This is how it looks. There should be sound with this. No sound? Let me see. Did we not have the sound in the beginning? Yeah, we did. OK. Oh well, trust me, there is sound. In fact, it was generative music, our music in that case. All right. I think maybe this one has sound. Ooh, a little bit of faint sound. And then the choreographers we work with are interested in integrating that into Unity or Unreal. The game companies are interested in generating motion graphs for their characters. The film companies are interested in getting models of actors. But some of the artists are also interested in having the dancers make movements that a human can't, and having, let's say, some dancers that look like they're on substances.
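To make the "latent space between performers" idea tangible, here is a minimal sketch of linear interpolation between two style vectors. The embeddings and dimensions below are hypothetical placeholders; a real pipeline would obtain them from a trained motion model (the talk mentions a factored conditional restricted Boltzmann machine) and decode them back into skeleton frames.

```python
import numpy as np

# Minimal sketch of blending two performers' movement styles in a learned
# latent space. The random vectors stand in for embeddings a trained motion
# model would produce; decoding to actual frames is not shown.

rng = np.random.default_rng(0)
style_a = rng.normal(size=64)   # latent style vector for performer A
style_b = rng.normal(size=64)   # latent style vector for performer B

def blend_styles(alpha: float) -> np.ndarray:
    """Linear interpolation: alpha=0 -> pure A, alpha=1 -> pure B."""
    return (1.0 - alpha) * style_a + alpha * style_b

# Sweeping alpha gives a continuum of movement signatures between the two
# performers; with ten distinct performers, blends yield many more signatures
# than were ever captured.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    z = blend_styles(alpha)
    print(f"alpha={alpha:.2f}, latent norm={np.linalg.norm(z):.2f}")
```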
If you're interested in movement, movement generation, movement recognition, all that, there's a great community called Movement Computing. We started a conference a few years ago, seven or eight years ago now, which is now an ACM conference, ACM Movement and Computing. And I think the Expanded Animation symposium and the movement and computing community should have a party one time, one year. A merger would be fantastic, I think. We've also been dabbling with generative video. That's hard. Video is a lot to handle. We started a long time ago by doing some automatic mashing, so a total copyright infringement, but it is a style of art in itself. Here is an example of DJ MVP, the automatic music video producer, at work. So this is a mashup. This is more like a collage, really. And we've been doing artwork in that style as well, developing video sprites and putting together motion graphs of dancers climbing and navigating the facades of buildings. And so we showed a number of generative installations where, again, the editing of the video, the collage if you wish, is generated in real time, and you get those dancers sitting or standing on the windows and moving from one place of the facade to another, occupying the facade that way. It's a collaboration with Mathieu Guingold, who is here at Ars Electronica this year, and we're working on a new version these days. The last project that I'm going to talk about before concluding today is neural video, which is another type of co-creative tool, but with real-time video generation. Here it's a neural network that is listening to the sound and generating visuals according to what it's been trained on. When it comes to video generation with neural networks, the quality, the definition, is not fantastic, but I sort of like, in an artistic way, the warm analog feel that it has. It reminds me of some experimental cinema made without a camera. So we showed some work during the pandemic in Istanbul, and during the pandemic as well, we developed the Autolume VJ interface, which allows you to load many models. And those models, you can train them on anywhere from 50 to thousands and thousands of images. And we've been collaborating with artists, painters, graphic artists, taking their work and building a model of their work. And then, through mapping in the latent space of those GAN models, we're able to map the sound input and generate videos and animations out of their own material. So the models are audio-reactive and controllable through the interface. You get a bank of images that you need to acquire in an ethical fashion, and then you generate a latent space that allows you to generate video out of those images. And then you develop your own mapping that you can save later on. For example, the image can react to the amplitude of the sound. It can react to the pitch of the sound. You can also send MIDI or OSC controls. Every parameter is addressable. I won't go through the parameters, and you have to be quite technical to know the parameters of a neural network. But you can do network bending. What we used to do with electric circuits, circuit bending, you can do with neural networks as well: directly tap into the weights and manipulate them. In fact, you can do that in real time using a MIDI controller, right here.
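As a rough illustration of that network-bending idea, here is a minimal sketch of scaling the weights of one layer of a generator and driving the scale factor from a controller value. The tiny generator below is a made-up stand-in, not Autolume's actual model or API; in practice the bending is applied to a trained GAN checkpoint and the output is decoded to pixels.

```python
import torch

# Minimal sketch of "network bending": tapping directly into the weights of a
# generator layer and transforming them, analogous to circuit bending.
# The generator here is a placeholder, not a trained GAN.

generator = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 512),
)

def bend_layer(layer: torch.nn.Linear, scale: float) -> None:
    """Scale a layer's weights in place; a MIDI knob could drive `scale`."""
    with torch.no_grad():
        layer.weight.mul_(scale)

# e.g. a controller value of 90 (out of 127) mapped to a scale around 1.2
midi_value = 90
bend_layer(generator[0], scale=0.5 + midi_value / 127.0)

z = torch.randn(1, 128)        # an audio-reactive code (amplitude, pitch, ...)
frame_features = generator(z)  # would be decoded to an image in a real setup
print(frame_features.shape)
```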
And that gives a more tangible kind of control; VJs really like having tangible control and being able to do that. And other useful features are embedded that are just convenient for VJs, because we are actually going to deploy this and go ahead with a number of shows with VJs using that system in conjunction with the other systems they use. In fact, some of the operations on the weights correspond to actual effects, such as translation and rotation. And you can also change models on the fly, have presets, and do sequences, so that you can really prepare a show and play like a VJ. And we're working now on integrating Autolume with a number of vision tools. So I'll play a last example, and then I'll conclude. This is Autolume at work. Thank you. In that case, it was trained on a database of anatomical drawings and some abstract colorful pictures. The drawings were black and white, plus a number of selected colorful pictures. All right, so I went through those two broad categories of creative AI algorithms. The first is computer-assisted creativity: we embed creative AI in creative software. We augment the software with generative capabilities, allowing the user to interact and converge toward the generated asset that satisfies them. This is augmentation as a framework. And the second category is embedded generative systems, automation, where we generate the right content at the right time for the user. And those two contexts of augmentation and automation are nothing new. They're, in fact, what happened with the Industrial Revolution. And now, with this digital revolution, instead of automating physical work, we automate cognitive work. But in fact, we've been doing that for a while, because before computers, apart from a human brain, nothing on Earth could do a six-figure multiplication. And so the computer was, from the start, quite a revolution. And of course, there is pushback. And that's to be expected, and that's quite normal. There was pushback during the Industrial Revolution, in the early 19th century, when the loom was automated. The Luddites and the workers revolted. They were rioting, and they were trying to destroy those machines that were taking their work. And so there was already the fear of technological unemployment. When musicians were replaced by record players, gramophones at the time, in cinemas, there was some pushback. The American Federation of Musicians actually spent its money fighting against these developments in public forums, such as journals. So of course, right now we get the same pushbacks. And they're very relevant, because they bring forward the conversation around whether we will use those AI systems, how we are going to use them, and what their impacts are going to be. Because there will be social and economic consequences of those systems, of course. But it is also the case that the creative studio right now is really made of click workers a lot of the time. For professionals, people who come to work to sit in front of a computer clicking on buttons all day long, we are on the verge of operating click factories. And so again, we have similar issues to those we had with the Industrial Revolution, and we need to deal with them. Is AI the right way to deal with them? Who knows?
But the truth is that technology has always continually shaped us, and that, away from the fears of AI taking over, the strong AI I was talking about at the beginning, we can hold to the humanist tradition of anthropocentric instrumentalism. All the tools I presented today are only tools, and there's a lot of human work to make those tools happen. They're only tools, developed by people who believe that they might be useful and beneficial. However, the computerization of our society and the rise of autonomous machines has deep social and political implications. Lots of people make a living clicking in software, the same way that a lot of people, six million people in the US, make a living out of driving a vehicle. If we automate those tasks, even partially, tomorrow, it will have consequences. Therefore we need to have a public debate and we need legislators to get to work, which is typically very slow. So the future is generative, and we should harness the power of machines to expand and support our creativity, not to replace humans. So I want to thank my team at the Metacreation Lab and my colleagues, I want to thank the funding bodies that make all that work possible, and I also want to mention, for the students who are interested in the topic, that there are two free online classes that I give on computational creativity on the Kadenze platform, and I do recommend Kadenze in general. It's a MOOC platform that was started at CalArts in California and that is focused on teaching art to the masses. The other MOOCs that you know are out there are teaching business, industry-ready, and engineering skills. If we don't teach art to the 93% of the population that won't reach university, we are at a loss. And so bravo, Kadenze. That's my time. Thank you. Thank you. Thank you very much, Philippe. Excellent talk. And now we can have a quick Q&A session. I'm pretty sure that you have a lot of questions. I would like to start, because I also have a bunch of questions. But just one, I promise. The U.S. Copyright Office ruled that AI art can't be copyrighted. This is kind of new. I saw this like two weeks ago. I would like to know your opinion about that, because there are a lot of people defending the idea that AI is just a tool, and people saying, okay, the AI is the artist. We know that you basically opened your talk explaining that. I would like you to talk a little bit about that. Well, sometimes legislators are doing a good job. And sometimes they're just doing virtue signalling. So in that case, this is not a bad idea, necessarily. And it would take a longer conversation to really get into what it means. But this is not enforceable. I have released albums, vinyls of techno music, that are entirely machine-made. Nobody knows, because I don't have to tell people. And so, with deepfakes, this is really the debate: the problem is that the systems are now good enough that they cannot be distinguished from human-made artifacts. And so what do we do? Good point. It's like a Turing test on a daily basis. Very, very interesting. Questions? Hi, Philippe. This is actually Chris. I have a question about these augmented creativity tools as creativity support assistants. Do you think that this is really also something you would like to use for educational tools? Because for professionals, I think the philosophy behind augmented creativity totally makes sense. But for education, I see there's a big risk.
Starting from the fact that my students, for example, come with YouTube videos and say this is their sketch for their ideas for their capstones, and this is not acceptable. I think they should start somewhere else. So where can they start when all these augmented creativity tools and all this information is out there for them? Well, yeah, there are a lot of changes for educators coming through. I was mentioning very briefly during my talk that we actually have an issue now that the language models are so good at paraphrasing, at summarizing, that if a student doesn't write in front of you, then what you're reading might not be the writing of your student. In Canada, at least, we have that problem a lot. And it's impossible to identify if it's a student or a machine. So oral examinations, examinations in person, are back in fashion, which works because COVID is giving us somewhat of a break right now. But in the long run, I do believe that we will have to do both. We will have to educate our students about how things were done before AI, but we will also have to educate them to use those tools and those new creative processes to their own benefit, so that they keep in tune with the market. I hate to say that, but this is the case. The same way that in gardening schools, you would learn how to cut the grass manually, but eventually the lawn mower was invented. And gardeners were not happy about that, but a few years later, everyone was like, oh yeah, that's great. Now we have a machine to cut the grass, and we have to teach gardeners how to use the lawn mower. Yeah? Any more questions? No? OK. If there are no more questions, this concludes Philippe Pasquier's participation. Thank you very much, Philippe, once again. And now for the final talk, we have Martin Pichlmayr. He's a researcher, game developer, entrepreneur, and former media artist. His main topics are the essence of play, the condition of humanity, and creative applications of artificial intelligence. He is an associate professor at ITU Copenhagen and co-founder of the Vienna-based indie game studio Broken Rules. Welcome, Martin! Hello. I have my computer. Connect. Does it? Hmm. It shouldn't mirror. How can I un-mirror it? Nearly there. Do not mirror. Come on. Technology, yeah. Great. Technology was a mistake. Okay. It's still mirroring. Why? Okay, and it's... come on, come on. Okay, shouldn't it work if I use a separate display? Let's see. Okay. Okay, I'm pressing the play button. Pressing it again, because nothing is happening. At some point our IT department will configure my computer to be utterly unusable but completely controlled by the IT department of the university. Well, okay, welcome. I'm going to talk about all the stuff that was mentioned and highlighted and a little bit commented on in the previous talk, but that the talk didn't really go into, because I'm only going to talk about negative stuff. But I'm trying to do so in a very entertaining way. So as previously mentioned, my name is Martin Pichlmayr. And I'm currently an associate professor at the IT University in Copenhagen. We have a large games program that I have been running for the last, I think, six years now. Before that, I was in Vienna as a co-founder of a little indie game studio called Broken Rules that makes very few, very highly regarded games now and then. A very cozy little boutique indie studio, lovely place.
And now I'm kind of in the process of turning something that I've been working on, it started during COVID, into an actual new enterprise that is called Write With Laika. And it's an AI-based writing tool, which I actually asked what I should talk about today. So I gave it a prompt saying that my talk will be called Creative Challenges in Artificial Intelligence, and then I let it continue and come up with a concept for me. I'm not going to follow it strictly, but about half of my talk will be AI generated. So if anything is a bit weird, like all the images, then it is because I've created all the images using Stable Diffusion, by giving it the titles of the relevant sections and just living with whatever I got back. I didn't even curate much, to be honest, because happy accidents are more entertaining than unhappy perfection sometimes, to me at least. I think Margarete would agree. A quick primer on machine learning, just to frame it. Who here knows how machine learning works and how these AI things work? Raise your hand. OK, who doesn't know it? No, that's always an unfair question. If you don't know it, it is basically this: you take a lot of examples of something, give them to the machine, and the machine makes a statistical model of how the different parts of those examples relate to each other, and then you can just let it complete whatever partial input you throw at it, based on the data that was used for training. But you need a lot of data for that. And what you get out of it is never an exact replication. It's not a search engine, even if it is very close to a search engine. What you get out is never a replication of what you threw in; it is just working with the statistical information about what you gave it as training material. And what can those systems produce? As you've seen in the last talk, pretty much anything that is digital, actually, or can be transformed into something digital. So they are very, very powerful, and that is also why, as the last talk again mentioned, they will just penetrate everything, every digital domain, over time. It will take a little bit longer for things like architecture, where you have so many guidelines and rules and laws and regulations and physics. That's a real problem, but people will get around it at some point. Why is it important now? Because, due to a weird alignment of things, suddenly humanity found itself in a place where those models can be made to work. It has to do with computers getting more and more powerful. It maybe also has to do with fake news making us more receptive to made-up stuff, basically, and we just are more comfortable with that. Who knows? I think a lot of factors have to align for something like that to happen. But at the same time, just like the VR craze, it happened 20 years ago, too, and it happened most likely 40 years ago. So it also comes in waves. And of the many headlines: who is familiar with the AI that demanded a lawyer? Has anyone read about that? So it's a very curious case. It started with a researcher at Google, Blake Lemoine, I guess (he has a French name, but I don't know how much I have to Americanize it), who was talking to LaMDA, their newest language model, and had this conversation with LaMDA that kind of, to Mr. Lemoine, suggested that the AI has sentience.
Now, of course, if you look at what is actually going on here, then you can see that all the input that he gave the system is very leading. He didn't ask what the system thinks about something. He gave it a proposition. And the system was developed in a way that it kind of tries to please him. And that is why the system just picked up on that and doubled down on all the sentience that he prompted into the system. But nevertheless, if you don't read this very carefully and analyze the text, then it might seem as if the AI had really demanded something from Mr. Lemoine. Well, he was then suspended after going public with this observation, because it was very much criticized, even by people working in the AI and ethics field. Because it was, of course, not really a real analysis of how AI works. It doesn't have sentience, it doesn't even have intentions, it has nothing. It's just statistics. Well, it's not that easy either. In any case, the LaMDA system also at some point wanted to hire a lawyer, and Blake Lemoine said he didn't hire the lawyer, the AI hired the lawyer. Well, and the good news is that lawyers will supposedly be the next job that gets replaced by computers anyway, so we don't have to think about that too much. Margaret Mitchell, who is one of the leading people when it comes to AI ethics out there in the world, and who is working for Hugging Face nowadays, which is one of those big hubs for models that are used in machine learning, did a very, very good analysis in a short Twitter thread about what happened there. Why did someone assume that a language model that just produces text can be considered sentient? And it is more or less because the system was trained to fake being knowledgeable, reflected, intentional, and so on, because that's what it learns from having been fed written language. And then we as humans, as soon as we have a glimpse of sentience, we are like, okay, this thing is thinking, this thing is clearly thinking. Because we're not used to thinking machines, or fake thinking machines, out there. We haven't developed any knowledge about how to react in such a situation. But in the end, the AI has no clue what it's talking about, of course, and it's only a projection of us onto the machine. What it means is that we still need strategies for dealing with that, because we will have to develop this sense of what it means to create with a system that is, to a certain degree, mimicking parts of intelligence very successfully, let's put it that way. Well, there is a lot of discourse about that. One of the most famous papers in this area is on the dangers of stochastic parrots, so probability-based imitation devices. These are people that think about these topics all day long. I don't, but it's still interesting to look at. So the situation we have is that all these tools, all these networks for different applications like images, text, music as we've seen, dancing and so on, are unleashed and out there, and they create this new dream of a general artificial intelligence that, as was also previously mentioned, doesn't really work. And it's really fascinating what is going on there. So these were all done with my tool that we are just making, which is a purely text-based tool for writers. But you can feed what the machine has created, as a co-creation between a writer and an artificial intelligence, to an image generation network.
And then suddenly you get this paradise of infinite violence, which is a very good sentence. It makes no sense. But it's a very good sentence, isn't it? I mean, I wish I could come up with that. I came up with the first half. And those machines are just getting better and better and more usual every day. Stable Diffusion was just launched a few weeks ago and has completely taken the internet by storm. And now suddenly everything is full of these AI generated images. It's a flood, a veritable flood. And this is an open source model, so versions of it will pop up for very specific purposes. So not only will the products flood every place, specific interpretations of this product will also suddenly turn up in other places. Super interesting stuff. And yesterday we saw this fantastic machine-learning-based voice mimicking that faked the voice of Holly Herndon, which was fantastic. I had a different slide here until yesterday, but I really loved the performance yesterday, and the wardrobe. And in other areas that are unexpected, things are happening that are of similar relevance, in my opinion. Like in game design, my friend Mike Cook is working on a lot of automated game design that uses the same back and forth of co-creation. That was, again, in the talk before. We didn't even coordinate this so much, but it's perfect. So Mike Cook made that as a tool for game designers. At the moment, it mostly makes match-three games, but it had different incarnations before that made different kinds of games. So what you can do is slowly steer a pseudo-intelligent game designer towards a direction that makes a playable, and nowadays even a little bit juicy, game. And then, yeah, of course, the nod to expanded animation. These are animations that were done with Stable Diffusion: a few days after the launch of that image generation network, people figured out how to squeeze enough frames out of it to make it viable to make short animations and videos. And what you more or less do in order to create this is, again, textual prompting. You describe what you want to do, what you want to see in the video. So for example, this one has some kind of story about a bear. So it gets prompts like two people wandering into the forest, then they encounter a bear, and the small kid runs away in minute two, or something like that. And a text description like that creates this short animation that is very rough, but this is one-month-old technology, so I don't expect too much from it. It's also, again, nice in its roughness somehow. It's also pretty ugly in its roughness. But hey, it's going to get there when other people use it in more sophisticated ways, in more sophisticated tools. And what are we doing there? It's actually quite interesting. We are kind of tapping into some weird kind of global unconscious that is boiled down into a database of statistics and probabilities between artifacts of human expression. That sounds super esoteric. It is not. It is just: if you take the library of Babel and compress it into a text model, then you can still access a veritable amount of human knowledge, even if it is abstracted in a way so that you can't reconstruct a single book out of it. But you can make a new book that works with the same statistical probabilities as all the books that have ever been.
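To make that "statistics, not storage" point a little more tangible, here is a toy word-level sketch: it only keeps counts of which word follows which, and it can continue a prompt without being able to reconstruct any source document. A real language model is vastly more sophisticated; the one-sentence corpus here is a made-up placeholder.

```python
import random
from collections import defaultdict

# Toy illustration of "a statistical model of what follows what": it can
# continue text with plausible word transitions, but it cannot reproduce the
# source. Real language models work very differently in detail; this only
# illustrates the principle.

corpus = "the library of babel contains every book and every book contains the library"

transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def complete(prompt: str, length: int = 8, seed: int = 1) -> str:
    """Continue the prompt by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = prompt.split()
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(complete("every"))
```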
And this is what they're doing. And this means we are more or less only creating context. Context is, in art, for example, something that artists have always been playing with. Artworks have been like that since, I guess, Duchamp, but I bet there is someone older before that. Art has been not only an artifact, but a comment that pulls all the context it is created in into the artwork. And this is something you can't escape with AI-based generation, because what you're doing is remixing an infinite sea of data. But that data is historical, it is always from the past, and usually it is even quite old, because we have copyright law. And that means there are huge biases in the data, biases that are kind of baked in by how data was created over a specific time. Like, you have a certain representation of women, for example, in the past that is not regarded as contemporary, but wasn't malicious at that time. It was just how things were portrayed at that time. But nowadays, we wouldn't find that a proper or adequate thing to do. So these biases are there, on the one hand, because we have changed our value system, and on the other hand because of legal reasons, procedural reasons, reasons of things burning down and wars being started and destroying a lot of artworks. It's a very arbitrary set of data that we have access to for training, and it's not really leveled out; it's just what it is. I used an AI to draw portraits of the people making Laika with me, and this is not how our team looks. I'm only gonna reveal that I'm the top left guy. It's absolutely not how our team looks, but it's actually quite interesting, because we were at that point about as diverse as this image looks. Yeah, as I said, it's copyright law that is mostly to blame for that. Or to thank for that, I don't know. Then there is, of course, another dynamic. The internet is a pretty recent invention. And the internet is suddenly this huge media collection that we didn't have in the past. And that means that the data that is older is usually coming from museum collections, from archives, from all kinds of places. And only in recent years, suddenly, we have Reddit and other text sources that are written very differently from books from the 17th century, of course. So there's a weird little change in tone between what the mass of recent cultural artifacts has and what the older artifacts have. Same with, again, Stable Diffusion. There is a visualization of the dataset that was used for training this image generation network, and it's actually quite interesting. I'm just going to point at one thing. Actually, I have a digital pointer; I don't have to go anywhere. These are the Alps. I have no idea why the Alps are a blob that is about the size of China in that database. But they are. There is most likely some Alpine photography collection that was bought by some American archive in 1989 and then digitized by the Smithsonian. Or what do I know? But in any case, the Alps are a huge blob. Who knows why? It's completely random, because it's just historical reasons that accumulated this specific dataset. And then, of course, ironically, if you run Stable Diffusion, you have to use a not-safe-for-work filter, in the interface at least. And that means that, for example, this image that is maybe known as a piece of art history wouldn't make it through.
You wouldn't get this image back, because it's not safe for work to look at the Sistine Chapel. Which is, of course, because our value system has changed, or maybe not ours. Because there are a lot of articles out there about the harm of these image generation networks. And they very often come from this standpoint, which is a collection of very, very US-centric values that is absolutely applicable to specific worldviews, but doesn't really 100% align with ours. There is really this list of AI risks. And for example, it mentions being political. Being political in itself is, of course, a risk to politicians. But if a tool doesn't allow you to make political artworks, then it cannot be used for, like... why would we? It's just bullshit. And the same with sexual display, with nudity in itself. Of course, you don't want to make a revenge porn engine. That would be horrible. But nudity in itself is regarded very differently across different cultures, across different times, and so on. So a lot of these things are not in themselves risky on paper, but they are, of course, regarded as that by a lot of people out there. So it is really a real risk, I guess. Well, we have created these stochastic parrots, and they are the forests that shot back at us. We train them on who we are, and then they are who we are, and that is an interesting experience. Some things age better than others. For example, we have what we call brains. A model trained on Marcus Aurelius writes completely timeless prose, unless you tap into the wrong pockets. But in the end, completely timeless. And of course, the creative shift to prompt engineering is exactly what is done with the tool changes. Nothing. Well, then just a short comment on NFTs, because when these image generation systems came out, it was like the dream for a lot of people that wanted to immediately monetize them to hell. And that is, of course, something that sounds very tempting, or sounded very tempting at that point. But I think that cup has been passed on. So what are the challenges? Well, there is also a very, very big elephant in the middle of the room, and that is sustainability. Thank you, Margarete, for realigning me with the term sustainability. I would have changed it if I had heard that before, because it's really an interesting viewpoint. I am totally on board. I think it's also extremely anachronistic to have these huge number-crunching systems as the new basis of how we work with computers, because we know that we have to save energy. We know that we cannot continue growing data warehouses for all eternity. It's just anachronistic. It's working against the direction that humanity should be working toward. It's planet A, not planet B. And then, as I said, bias. As I said, there is a lot of historical data in these models that are used out there. And in a weird way, unless you work very actively with what is there, it is very hard to use a system that looks into the past to make something that looks into the future. But that is where art should happen, or where art happens, especially in places like Ars Electronica. But normal contemporary art also does that very often. Yeah, and then the data is very arbitrary. The understanding of art is very sketchy sometimes. And then they try to please you, because that is somehow how we train them.
So that is something that will deceive us, as we've seen with the sentient AI that hired a lawyer. And of course there are copyright implications when it comes to training material, but also more contemporary questions, like: is it allowed to use open source software as training material for an AI? Even if you give away the copyright, who does that apply to? And, as was mentioned before, the ownership thing: who owns what the model produces? It's not the model, because according to most international copyright laws, whoever was creative owns it. So if it's the prompt engineering that is the creative part, it's, in my opinion, very clear that courts will say at some point that it is the person working with the system. But except in the UK, where that already happened, the courts haven't decided yet. It's gonna happen. There will be a lot of court cases at some point. Well, as I said, variations are very easy. Yeah. Now I lost track a little bit. Okay. You all know this image. One of the fantastic features of Stable Diffusion is that it can do inpainting and outpainting. So it can actually complete pictures. And there it becomes so extremely clear how suddenly the Girl with a Pearl Earring is in a kitchen. So it will be very, very hard, except when you're very self-critical, to push to anything that has to do with forward-looking art from that space, because how do you cross over from there to actual contemporary art? Whatever that is; I just go with contemporary art to show a stark contrast here. But people are working at it. Mario Klingemann, for example, is making a lot of very intelligent art pieces that very consciously work with specific AI facets and aspects that are contemporary or ahead of their time. And the understanding of technology that people like him bring to the table is, of course, necessary in order to create better tools for non-professional artists too. So we're going to get there. We're going to get there. But it will require a lot of engineering. It will require regulations, or laws and rules, international, I hope. A lot of communication, so making clear what is actually happening, like I'm trying here. And reflection: taking a perspective on what you're doing in a very active way. And there are huge chances if that happens. So the reflection part, for example. If you work with your own curated training material, then it is a very interesting interaction with a copy of yourself that you can have. We have had professional writers do workshops with us where we had them train language models on their writing, in order to give them a writing buddy that is a copy of themselves. And they said one of the main experiences was that they actually learned a lot about their own writing, about their own ways of expressing themselves. It was very spooky and sometimes really disturbing, but also enlightening, because they found mistakes, or just habits that they have in writing, that they suddenly could reflect on actively, because it didn't come from themselves directly. It just came back in a weird reflection. Then, interactive cognition. This way of working with, I call it the global unconscious, I stick with the esoteric term, this huge amount of human data is of course something that really allows you to create in a very, very different way than a brush and a canvas did.
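For readers who want a concrete picture of the kind of "train a model on your own writing" workshop mentioned above, here is a minimal sketch using the Hugging Face transformers and datasets libraries. The file name my_writing.txt and the distilgpt2 base model are placeholder assumptions, and this is not how Write With Laika itself is implemented; it only illustrates the general idea of fine-tuning a small language model on a personal corpus.

```python
# Minimal sketch: fine-tune a small causal language model on your own writing.
# "my_writing.txt" and distilgpt2 are placeholders, not Laika's actual setup.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# One plain-text file containing your own writing, one passage per line.
dataset = load_dataset("text", data_files={"train": "my_writing.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my_writing_buddy",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # afterwards, model.generate() continues text in "your" style
```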
And there is a huge chance to get really interesting stuff out of that. And a certain democratization of art production will also happen thanks to that, because, just as with a lot of electronic instruments and tools in the past, like Photoshop, PowerPoint, and so on, the better tools you get, the more they separate the physical or crafty skill of creating something from the intellectual skill of understanding what you want to create. And that suddenly gives a lot of people who are not very good at painting the ability to paint. And I'm not only talking about able-bodied people. Of course, it applies even more if you're physically unable to paint. You can still paint with this. And this separation of skill and idea is actually a huge change that will create a wider range of creators, for better or worse. OK, I had that in depth here: the new self-reflection. Yeah, we had people in workshops who learned who the murderer is in their own murder novel by asking the AI who the murderer was. And the AI answered, and they had to change some chapters afterwards and change the plot of their book. It was actually just a proposal, and they didn't pick it up literally, but they found it inspiring and interesting, and it gave them a new perspective on their work. Then, making variations. That is such an important thing that we have done a lot in the past, and when you look at how these tools work, what they can do really well is make slight variations that are very similar to what you have done before. And this is a very interesting way of working, where you just vary a little bit. If you work in any creative field, you know that you make variations. If you cut a movie, if you make a video game, you constantly make variations. The AI is just very fast at it, and you're becoming the curator and not so much the creator, the pure creator. So it's interesting what is happening there. And of course, the reflection, for example here, could also be an interesting research topic in itself, because if you look at representation in a specific context, you could learn: what does that mean? Who were we back then? If you suddenly find that all images about a specific topic contain or represent a specific thing, then that says something about us as humanity. So if this is the background that gets created, then it says something about life circumstances in that time, or at least their representation, which is maybe a different thing. And also, of course, only in a very, very tiny cultural context. But it could be interesting to squeeze these values out of a system. But how can the democratization that I was mentioning before happen right? I read a lot about this topic because I'm very concerned with it, and one of the more interesting things that I read is this call for a kind of intersectional AI production, which is more or less what we're trying to do with Laika too, by making a very colorful team. With a team with a lot of diversity, coming from different continents, having different genders, different ways of living, different cultural contexts that we bring with us, we can make a tool that is just broader and safer than other tools, because we all know how technology made by white men like me, who keep preaching from a stage, is not always the perfect solution for everyone, and very often not for anyone except them.
In any case, democratization would be awesome. So I don't know what the near future of machines of loving grace will actually look like. But what I can say for sure is that those models will get more sophisticated, and they will be able to adapt to more and different domains. In language models specifically, what we also tend to forget from our little Western perspective is that China is actually a leader. They have the biggest models, I guess, still; that was at least true half a year ago. And they are at the forefront of all things that have to do with AI, more and more getting ahead, actually, rather than falling behind. So there will be a bit of a geopolitical shift, too, I guess, that will come with it. I would put money on that bet, actually. Then we will continue to struggle with defining consciousness. We have a lot of ideas of what consciousness means, but they are not all in sync with each other; we have conflicting ideas, and people have different ideas. And at some point we will have to work with those ideas, if those machines get more and more sophisticated, because it will be harder and harder to argue that they are not sentient. Well, they still won't be, maybe. My theory is that the kind of intelligence they will develop will be very different from what we regard as human intelligence now, which is only one aspect of what intelligence is anyway, and even there, there are competing theories. They will definitely not be humanly intelligent artificial general intelligences. But there will be some form of intelligence, as in being able to hold a thought for a long time, being able to explain something, and so on. Whether that already counts as intelligence, I don't even know. Climate change will make ever more expensive computation an ethical challenge; actually, it already does. There is a problem with that. And some of the makers of these models are getting to a point where they are reverting a little bit, making smaller models and investing more in sophistication than in just scaling up, scaling up, scaling up. But there is still a clear pattern that every year there is roughly a tenfold increase in the size of the machine learning models out there. I just hope this stops at some point, because it is, sorry, simply not sustainable. OK. If you want to hear snarky takes like this on a daily basis, follow me on Twitter, like and subscribe. You can also sign up for our mailing list. And that is it from me. I have no idea whether I was on time. Was I perfectly on time? Whoa. Thank you. Thank you very much for this excellent talk, Martin. Really, really interesting, a lot to reflect on. Questions from the audience? As usual, since I'm very selfish, I would like to start. Thank you. You're welcome. Actually, there's something very interesting in your talk, when you talk about bias. As an academic, this is something that interests me a lot, and I would like to know how to deal with this problem, because everybody has biases. So how do we solve this problem? How do we improve the data? I don't know. There are a lot of answers to that. What we are doing with LAIKA is really based on you fine-tuning the model on your own materials. So you actually solve it by bringing your own bias, that being the only bias, and hopefully you are aware of it. Or you use material from people who are also bringing a specific bias, and then you're working with that. But it is very difficult, and it's also not that clear-cut: in a creative process, you want specific biases.
One of my favorite brains that I created in our system, one that was actually commenting here a lot, is based on 18th-century religious writing and texts about cryptocurrencies. It's fantastic what it produces. It's completely biased in the weirdest ways, but it's really evocative. So for creative purposes, you want as much control over the biases as possible, you want an interface that allows you to actively work with the bias you encounter, and you want to work in a context that also helps you develop a good practice. But I don't think there is a universal solution to the bias problem we have out there. It is of course much more of a problem in predictive policing and in those other areas of AI application where it has a lot of influence on people's lives, very often negative. Thank you very much. I have a quick question, and it kind of hooks into the same theme of bias, which is obviously a big thing when it comes to AI. But one thing that wasn't mentioned before, and might not be as big of an issue, but I just wanted to check in with people who have used it more and thought about it more, is the bias that comes with using the AI rather than the bias that comes from the training data. With the training data, it's pretty obvious what we have and how limited we are. But there is the bias that comes in when it becomes a system that is controlled by words, by language: the models are mostly prompted in English, the keywords are generally in English, and the data it is trained on is labeled in ways that are sometimes abstract, sometimes keyword-based, and so on. But when I think about art, when I think about emotional content, sometimes words are actually quite limiting. And especially having lived in Canada for seven years now, I notice there are words that just don't exist in English that are totally normal in German, for example. And of course in other languages that are not even Western-based it's way worse. So thought processes change with language. But if we have to use a specific language, in this case mostly English, or maybe Chinese in China, to use those systems, then that bias is something that might go quite unnoticed, maybe? Yeah, so, sorry. There is actually a lot of development, especially in the last year, around supporting more languages. There's a very new language model called BLOOM that covers, I think, 46 natural languages plus 13 programming languages. And it is the first model that supports a handful of languages that just never had any model trained on them; I think even Arabic is one of them, which is crazy given how many Arabic speakers are out there. So there is a bit of development happening that helps at least with adding more language support to those systems. And of course, since they are trained on the body of writing in a specific language, they will always support that language and its words, including the ones that are missing in other languages. I'm personally constantly fixing the English language when I'm talking to my Irish girlfriend, because a lot of words from German don't exist; you just need to make them. It works. But it will never be fair, because that's not how the world is. So I think a constant effort is needed in order to mitigate those problems.
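For context, here is a minimal, hypothetical sketch of what prompting BLOOM in a non-English language can look like with the transformers library. It uses the small publicly released bigscience/bloom-560m checkpoint as a stand-in for the full model, and the Arabic prompt and sampling settings are illustrative assumptions only.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Small public BLOOM checkpoint, used here as a stand-in for the full model.
    model_id = "bigscience/bloom-560m"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # An Arabic prompt ("Artificial intelligence is"), one of the languages BLOOM covers.
    prompt = "الذكاء الاصطناعي هو"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(tokenizer.decode(output[0], skip_special_tokens=True))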
But it's also not limited to language models specifically, because I'm pretty sure that the fact that we all speak the lingua franca English actually reduces the room that all of us have for having discussions and for speaking and expressing ourselves. And that is a problem that just exists with us as humans and has nothing to do with language models. But I think in the area of language models it's going to get better; there are clear signs that it's going to become a more universal thing. More questions? First of all, great talk. Secondly, in terms of metadata, what, in your opinion, is going to be crucial? Do you know of any open I/O sort of exchange schemas or protocols that support AI generation? And I don't mean in the line of accountability as soon as I present it, like what was the prompt and so on, but more in terms of art direction, and how I can move AI-generated assets down a creation pipeline. So what's the information that we crucially have to look at? So you mean tracking what went into a piece, for example? Yeah, because I don't know what the fuck I'm talking about, so I'm asking you. There are some endeavors. There are basically two schools of how these systems are developed out there. One is the big corporation school, where they publish at the same conferences, but they never really tell what exactly went into their models. They are very, very secretive. And that, of course, makes it very hard to, for example, build a business on their models, because you just do not know whether you have something that was trained on the wrong stuff, more or less. That is corporate models; they very rarely publish that. And then there is the other school, which is oddball academics plus old open-access advocates of different colors and some weird entrepreneurs, who create more open models, and that is the same for language as it is for images and so on. Theirs is very often open source; actually, everyone is very open when it comes to source code. They also clearly specify what the training material was. But you cannot easily trace back what went into one specific created asset. That is something research projects are working on, but it is just an immense amount of data; it makes things so much bigger that it's simply too much, more or less. So you can describe the whole basis of the original material, but you can't easily track back what the ingredients of one particular creation were, except in very small models. Maybe it will come at some point. Finally, there is, for example, the fact that Stable Diffusion watermarks everything that is created, so at least you know it was actually an artificial image. You can always check later: if it was posted somewhere in its original form, it will still retain the watermark, so you can always find out whether something was a fake or not. So I think this is more or less all the metadata handling that is happening in that area. Everything is very much in flux, so this might be an answer that is very wrong in two years, when that is suddenly solved.
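To illustrate the watermarking mechanism just mentioned: the reference Stable Diffusion scripts embed an invisible watermark using the open source invisible-watermark package, and a minimal sketch of embedding and reading back such a mark could look like the following. The payload string, file names, and 32-bit length are placeholder assumptions for the example.

    import cv2
    from imwatermark import WatermarkEncoder, WatermarkDecoder

    # Embed a 4-byte payload (32 bits) into an image using the DWT+DCT method.
    image = cv2.imread("generated.png")
    encoder = WatermarkEncoder()
    encoder.set_watermark("bytes", b"demo")
    watermarked = encoder.encode(image, "dwtDct")
    cv2.imwrite("generated_wm.png", watermarked)

    # Later, anyone can try to read the mark back to check whether the image is synthetic.
    decoder = WatermarkDecoder("bytes", 32)  # expected payload length in bits
    payload = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
    print(payload.decode("utf-8", errors="replace"))  # prints "demo" if the mark survived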
Thank you. More questions? We got another question on the live stream. Alexander Wilhelm is asking: creating a model has high costs, but isn't the work afterwards very cheap then? Yes. I mean, yes and no. Define "very cheap". It has immense costs, actually. But with clever engineering it has actually become much cheaper over time to make these models, because the hardware is getting better and the algorithms are getting better. It's just a process that is getting more and more efficient, because it costs so much that optimizing it has a huge payoff. Inference, so creating something out of the model, is of course cheaper by orders of magnitude, so it does scale; in that sense it scales well, that is definitely true. But it's still not free, and if you have a model that doesn't even run on one computer but needs several GPUs (so, general purpose units, what is even the name of it? Graphics processing units), then think about the power you have to consume in order to make one inference. But as I said, there is a lot of effort going into optimizing these things, so it's again a work in progress. Nothing is free, right? No, nothing is free, but some things are really expensive. Some of those language models get trained for months, or at least weeks, on 400 GPUs. I mean, it's just huge. Any more questions? No? OK, that concludes our last talk. Thank you very much for the talk, Martin. Thank you very much for your patience. Thank you. Thank you.