Thank you. Good morning. This is the trailer by Lilith Sammer, a student from the Digital Arts bachelor track at Hagenberg. It's part of the event production that is done by a lot of students from the Digital Media department. It's really great to work with such professional students, who have been supporting this event for many, many years. Welcome to the second day of the Expanded Animation Conference. I mentioned several times in the last days that we are transforming from a symposium format to a conference. We kicked off the conference yesterday with two fantastic panels. We are very, very proud that we have already published the proceedings. If you are interested in the research, everything is available online on the Expanded Animation website, and since yesterday also in the Ars Electronica archive. All the presentations at the Ars Electronica Expanded Animation Symposium are also available, so you can catch up on talks that you missed, because at Ars Electronica so many things are going on. Gerfried mentioned roughly 500 events in five days, so you probably missed some events, and that's the reason why we record everything. Also hello to the crowd following us on the YouTube stream. I will briefly introduce today's program. We will kick off with artist positions. I'll give you more insight a little later, but it's fantastic to have the winners from the AI and Art category, which is a new category here, and also Honorary Mentions from the New Animation Art category. So there's always this link between the Expanded Animation Symposium Conference and the Prix Ars Electronica category New Animation Art. Then at 2 p.m. we will continue with the conference panel, and then there will be another panel until 6 o'clock.
You can listen to fantastic presentations today, and if you still have energy, at 1 o'clock, right after this panel, there is a fantastic screening downstairs at Deep Space. So you also have a program during lunchtime, if you have the energy. And now I'll pass over to my colleague Jeremiah Diephuis, who will introduce the panel in detail and guide you through this fantastic panel. Thank you.

Thank you, Juergen. This is an excellent sign. The room is packed, and after a full day of festivities and events, it's actually really good to have this many people here in the morning. So thank you all for coming and being on time. I'm really excited to introduce our first speaker, Rachel Maclean. She's an artist and filmmaker based in Glasgow, and she has had solo shows at the Tate Modern and the National Gallery in London. Her work has been shown at festivals all over the world. Today we're going to see a little bit about the production of one of her latest works, Duck, and her approach to using AI in this film. So let's give a warm welcome to Rachel Maclean.

Thank you. Hello. Oh, it's so great to be here. Yeah, I'm really loving Ars Electronica so far. How do I get my PowerPoint up onto the screen? Hey, there we go. Okay. Oh yeah, thank you so much for the invite to be here. I'm going to talk about my new film, Duck, and its use of deepfake technology. But first, I thought I'd just give you a little insight into what I do. So here is a very fast showreel. "We want data. We want data. We want data."
"Again, again, again. Next generation network system. I'm gone, I'm gone, I'm gone..." Okay. Okay. Okay, so as you can tell, I make elaborate films that use costume and makeup, and often I play the only character in the films. So this is me, but dressed up as a little girl. My films often deal with ideas of national identity, truth and illusion, and they're almost always shot against green screen. Deepfake technology appealed to me because in lots of ways it's similar to what I already do, in that I take my own image and I transform myself into somebody else, somebody I'm not. But with deepfake technology I was interested in, instead of using makeup and prosthetics as I would normally do, using technology, or AI technology, to almost do the same thing. So I'll talk a little bit about my new film, Duck. It's a 15-minute short film I made as part of a new fellowship at Newcastle University, and it's a collaboration with InSpace at Edinburgh University as well. I'll talk a bit more about the technical aspects of the process later, but I thought I'd give you a bit of a feel for what it is. I'm assuming you know what deepfake technology is, but to summarise, it's a machine learning process that uses video and audio data to simulate a person's voice or face. So in short, it's a way you can create fake videos of people.
Duck is a deepfake spy thriller starring Sean Connery and Marilyn Monroe, amongst others. All the characters are played by me, but with a deepfake face added over the top. And so here it is with a deepfake face. The film opens with a deepfake JFK reading the words: if it looks like a duck, quacks like a duck and acts like a duck, then it is most probably a duck. I'll show you a clip. "If it looks like a duck, quacks like a duck, and acts like a duck, then it is most probably a duck." There is an obvious irony to this, in that you're seeing somebody who looks and sounds like JFK saying something that JFK never actually said. So the logic of what he's saying is clearly contradicted by the context. Historically, this phrase, coined the duck test, a form of abductive reasoning, has been associated with robots and computing. Duck typing in computer programming is an application of the duck test, for example. And going back much further, in 1738 Jacques de Vaucanson, the French automaton maker, produced a mechanical duck. There was a clear irony in this as well, though, because it looked like a duck, quacked like a duck and acted like a duck, even shitting out a mixture which smelt like duck droppings. But despite all that, it clearly wasn't a living duck. In a world in which our sense of truth, and of what it is to be living or non-living, is complicated by AI and machine learning, I find this phrase an interesting starting point, not least because there's something absurdly banal about ducks. It takes the subject of artificial intelligence and deepfakes and adds a comic spin to a threat that's often treated with ominous melodrama in the mainstream media. But back to the film. After we meet JFK and are initiated into the duck reasoning, the film moves into a spy thriller narrative, and we are introduced to Sean Connery in an authentically 1960s world, collecting clues and wrong-footing assailants.
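For readers unfamiliar with the programming sense of the term, duck typing really is the duck test applied to code: an object is accepted for what it can do, not for what type it declares itself to be. A minimal Python sketch (the class and function names are purely illustrative):

```python
# Duck typing: "if it quacks like a duck, treat it as a duck."
# Suitability is judged by behaviour, not by declared type.
class Duck:
    def quack(self):
        return "Quack!"

class Decoy:
    """Not a Duck, and not related to Duck by inheritance, but it quacks."""
    def quack(self):
        return "Quack!"

def make_it_quack(thing):
    # No isinstance() check: anything with a .quack() method is accepted.
    return thing.quack()

print(make_it_quack(Duck()))   # Quack!
print(make_it_quack(Decoy()))  # Quack!
```

An object without a `quack` method would fail only at the moment `make_it_quack` is called on it, which is exactly the point: the "test" is performed by use, not by inspection.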
We see him steal this photograph out of a document wallet on Marilyn Monroe's bedside table, which is proof that we are not alone in this universe, apparently. I went down a bit of a rabbit hole looking at the various conspiracy theories associated with Marilyn Monroe, the Kennedys and UFOs, which vary from the just about plausible to the totally absurd: starting with the rumour that Marilyn and JFK had an affair, which is in itself hard to prove, through to the theory that Marilyn was killed because JFK, during their affair, told her about the government-led UFO cover-up, and she was planning to expose the truth. The film plays with stretching a sense of plausibility. There are moments when you're on Sean's side; his narrative seems coherent and logical. But then, increasingly, as the film unfolds, both Sean and the viewer are thrown into a state of questioning the plausibility of what they're told. The interesting thing about deepfake to me is that you can play with the audience's suspended disbelief. For example, at the start of the film I think it's clear to an audience that this is a deepfake Sean Connery, but you accept at some level that it's supposed to be Sean Connery. You suspend disbelief, I think: I will engage with this character as if it's Sean Connery. He's acting like Sean Connery in a Bond film, he's doing the things you expect him to do, apart from the odd awkwardness. However, this sense of security quickly unravels as things about the world don't quite add up. For example, Sean discovers there are multiple Marilyns. She exists more like a video game avatar than a film character. It seems impossible to kill her, no matter how hard you try. To add to this sense of disorientation, all the other Bonds show up: Roger Moore, Pierce Brosnan, Daniel Craig, George Lazenby and Timothy Dalton. It's clear we are no longer in the 1960s.
And further to that, the other Bonds aren't acting much like Bond. Pierce Brosnan is boyishly excitable and acts more like a 10-year-old inhabiting Pierce's body than the man himself. At this point it seems like Sean is the only one not in on the joke. Everybody else in the world seems self-conscious of their own artifice except him. And as an audience, you start to question: am I still supposed to engage with this person as if he's Sean Connery? Or am I supposed to think that this is someone else masquerading as Sean Connery? Does he know he's masquerading, or does he think he's the real deal? This kind of slipperiness is what interests me. In making the film, we tried to create a deepfake that was as believable as possible. But even with that acknowledgement, I'm fully aware that there are things that belie the artifice. And we were making this in like 2021, '22, so at that point the technology wasn't so advanced. For example, the deepfake audio is quite crunchy and robotic. We couldn't, despite some effort, make it sound like it was recorded in a studio. But to me, the robotic sound is interesting. It reminds you, at a subconscious level, of the simulated quality of what you're hearing. Similarly, there are subtleties of movement and mannerisms which I can't disguise. There's an odd sense that my mannerisms are forcing through the artifice of the deepfake. In writing the film, I got a bit too deep into making it make sense in a linear way, and it just wasn't working. The main problem, I think, was that I was watching too many Bond films. In a Bond film, he's working for the British state, and there's some kind of loose political backdrop which gives it context. Often, to be honest, there are loads of plot holes in Bond films and they don't really make sense. However, there's a semblance of a logical world around them; they are intended to make sense. I got quite deep into thinking: I need Moneypenny, I need M, I need all the Bond characters to make this work.
But the deeper I got into a kind of linear logic, the more the script was just careering off the rails and not working. To help me with the problem, I ended up doing some experiments with AI script writing. This was before ChatGPT, so really early stuff. I didn't literally use any of it, as the results were totally mental. However, it was quite inspiring. AI has a great way of creating things that make a sort of abstract sense. They hold together and you can comprehend them, but they're not logical. It is able to synthesize two totally unrelated things and make the result believable. It's something made by a being that totally understands how the world looks and sounds but doesn't have the categories to distinguish one thing from another in a logical way. I thought there was a kind of illogical logic to jamming Sean Connery and Marilyn Monroe together in the same world. They're both archetypal masculine and feminine figures. Sean Connery and Marilyn Monroe being in the same film doesn't make literal historic sense. However, it makes sense, at least to me, in an abstract sense, as in: I'm certain it's something I could persuade an audience to accept as reality. Also, I think it brings you tonally into an unstable, simulated world that's familiar from AI-generated content, and away from a more linear, logical world. Something I was conscious of in writing the film was gender, and playing with the very strictly gendered world of Bond. A lot of the coverage of deepfakes surrounds the idea or potential they have to undermine democracy. Having spent a bit of time thinking about this, my feeling is the threat is a bit overblown, and it overshadows other subjects and concerns that deepfake throws up.
A slightly lesser known fact about deepfake is that the term was originally taken from the username of a guy on Reddit who used open-source face-swapping technology to create videos in which he replaced the faces of porn actors with those of female celebrities. So deepfake, like many other technologies, started with porn, and in this case in an extremely misogynistic corner of the internet. A huge percentage of deepfake videos on the internet are pornographic videos. These videos are largely of women whose faces and bodies are swapped and manipulated without their consent. It's interesting to me that, despite taking women's images without their consent being the origin of the term deepfake and its primary application, so much of the mainstream fear and moral panic that surrounds deepfake is not concerned with these extreme forms of misogyny but instead with the potential for the technology to be used to steal the identities of powerful politicians and leaders, who are largely men. In my work I take quite a lot of pleasure in playing powerful men. I think this is because of the feeling that you're taking back some kind of power. There is an instinctive sense that your body and your identity as a woman are less solid, more able to be morphed and reinterpreted against your will. You can't pin your identity down; it's always slipping away from you. Whereas to inhabit the voice or put on the face of a powerful man is to feel a sense of stability and security, that you're unshakable or untouchable. There's a pleasure in it, because you don't often get to feel that way. It makes you feel like you'll be listened to. There's another aspect to it: it's cheeky. You're taking a man's face or voice without his consent. In that way, it feels transgressive. It inverts the expectation that women's images and voices are in some unspoken way the property of men. I thought Bond was an interesting character to take on because he's quite uniquely stable.
For all that the actors change, there's very little in the way of character arcs in the films. Bond starts out confident and cool and in control, and ends up just about the same, without having learnt too much in the process. There's something destabilising about deepfake, which I thought would be interesting to apply to such a stable character. There is in Bond a positive projection of a specifically British, white, upper-class male sense of security, an unspoken assertion that this is the way the world should be and will always be. I think much of the appeal of Bond is that it tells white men: don't worry, you're all right, your power is not under threat, everything is going to be okay. I think what's interesting about deepfake is that it allows people who are not inside this world of stability to assume it, or to fake it. To me there's something fun about playing Bond, such a traditionally misogynistic character, as a woman. In writing the script I was thinking about that a lot: what's the worst thing that could happen to Bond, from his perspective? I came upon the idea of a loss of stability, a sense the world was no longer under his command; it was no longer made for him and didn't bend to his will. In the film you see Sean Connery slowly unravel as he's unable to cope with a world that slips out of his control. Here's a clip. "You just don't die, do you?" "I could say the same about you." With Marilyn, taking her face felt a wee bit more complex, in that I was aware I was using the face of a woman whose image had, over the course of her lifetime and after her death, been exploited and used to represent other people's ideas, and ideas of her. For example, I walked past a shop in London recently that had a whole range of vibrators modelled by Marilyn beyond the grave. Contrary to what is often said about her, watching her films I think her comic acting contains a deep irony and a knowing self-awareness of her dumb-blonde sex-symbol status.
It's not unconscious; it's something she owns and plays with and satirises. However, beyond her films, in the kind of Andy Warhol territory of pure image, she's absolutely a free-floating signifier to which you can attach whatever meaning you desire. In studying Marilyn, I thought a lot about the bind women find themselves in. You have to look beautiful in a way that you're fully aware is a construct, the consequence of a lot of products and skill in their application. But simultaneously, you have to be aware that men might be deeply unnerved if they became aware of the effort that was required. There was a trend a few years ago on social media for men notionally unmasking women: the idea that you should take a woman swimming on the first date, because that way you will see her without makeup. That way she would no longer be lying to you, deceiving you, and as a consequence undermining your power. Glamour is an old Scottish word meaning magic, enchantment or a spell. I think in the context of Marilyn it's interesting: there is both the allure of glamour and the sense that it's illusory or untrustworthy. In the film, deepfake adds another dimension to Marilyn's already glamorous image, an extra layer of magic or mirage. From Sean's perspective, Marilyn is both an attractive figure and suspiciously glamorous. He suspects she's casting a spell on him, like a witch or siren, tempting him to a sticky end. Her artifice becomes the threat, in an abstract sense. Here's another clip. "You've fooled me." "No, the truth is I've never fooled anyone. I just let men fool themselves." Something I considered in writing the film was: would Marilyn be as affected as Sean by a world in which received ideas of truth were destabilised? Would falseness be less terrifying to her, because she would have been so palpably aware of it as a lived aspect of her day-to-day reality?
I thought an interesting way to approach using Marilyn's image would be to present her as somebody who knows she's just an image and is comfortable with it. She's aware she's a deepfake in the way that Sean isn't, and her infinite reproducibility and indestructibility disorientate Sean, who's still living as if he's a solid, consistent being with agency over his identity, when he's not. In the film, Marilyn seems totally in control of her artifice and appears to be teasing Sean for his perceived authenticity. "Say, what happens when a duck flies upside down?" "What?" "It quacks up. Get it?" "No." In writing the characters of Roger Moore, Pierce Brosnan and Daniel Craig, I thought it would be interesting to introduce the sense that they were playing a game. At one point there is quite a literal reference to a video game format. You relate to them less like actors or Bond characters and more like avatars from a computer game. The idea that reality is at some level just a game is interesting to me. The last however many years of British Conservative government have been littered with figures who treat life-and-death decisions as if they're part of a game for their own political future. In the film, it's not clear what the reality is. In the clip I'm just about to show you, Roger drives everyone off a cliff with a confident sense that it's not real, that there are no consequences. What's unclear is whether he's correct. Maybe they're in a video game and you die and are instantly reborn; or maybe he's deluded, and he thinks he's infallible, untouchable, that the rules don't apply to him, when they really do. In this section, I wanted it to look like a bad rear projection. Dr. No is famed for having one of the least believable rear projections in film history. I thought it would also be fun to play with a viewer's sense of reality. The whole film is shot on green screen, so nothing in the environment is real.
However, up until this point you're encouraged to believe that the characters think it's real. In this scene, there's a point where that understanding is undermined. This is a slightly longer clip from the film. Okay, never mind. You'll need to come see the film downstairs. OK. At some level, the film is totally silly, and I've had quite good fun thinking of absurd situations to put the characters into. For example, putting Pierce Brosnan and Daniel Craig in the back seat of an Aston Martin, with Sean Connery squashed awkwardly into the middle seat. It's been the funniest film I've ever made, at least for me. Partly this is intended to counter the over-seriousness of a lot of the discussions that surround deepfake technology. As I mentioned already, a lot of coverage of deepfakes surrounds the idea that they have the potential to undermine democracy by, for example, making a politician say something they never really said. My feeling is that this threat is ridiculously overblown, and that it's naive about the complexities of our understanding of images. The heady and at times ridiculous artifice of Duck is intended to bring home my feeling that what film does best is not truth but illusion. Film highlights the extent to which our sense of reality is fascinatingly slippery and complex. To treat the moving image as an index of truth is to give it a job it's not capable of fulfilling. Film can say things that are true, and that we see and perceive as truthful, but it does so with a bucketload of artifice. Something which I perceive as more of a threat than, let's say, someone deepfaking a politician to say something they never said, is the exploitation of a postmodern idea that there is no such thing as truth or objective truth. I think this is an oversimplification of postmodern theory.
However, I think contemporary fake news goes far beyond propaganda. Propaganda is trying to persuade somebody to believe a specific thing, even if that thing is untrue. Fake news, however, often operates differently. It creates an environment in which it increasingly seems like nothing is true; you can't believe anything, and therefore a false equivalence is created between all information sources. This kind of environment is ideal for politicians who want to pose as the only source of truth and render all opposition irrelevant because it's false or fabricated. However, I'm again not sure that deepfake is so much the cause of this scenario as an imagined tool for disinformation, contributing to a situation which in some factions already exists. For me, technology, and new technology in particular, is interesting in the sense that new things get invented, often to quite a sophisticated degree, without it being clear what their ultimate purpose or application is. I think with deepfake there is just as great a potential for it to be a tool by which we increase the sophistication of our understanding of truth as for it to be a tool to trick us into believing something untruthful. Here is another clip, which I hope I've included, that plays with this idea. "If it looks like a duck, quacks like a duck, and acts like a duck, then it is most probably a duck. If it looks like a duck, quacks like a duck, and acts like a duck, then it is most probably... People of the world. For too long they have held you in a false reality. I must awaken you to a shocking truth. We are not alone in this universe." Thank you. Oh, I've got the clips out of order. Okay. Ah, this is the clip I meant to show you. Okay, you can watch this first. "The greatest enemy of truth is not the lie. There is no such thing as proof. And that is a fact. Get it?" So: process.
This video shows the AI learning what Marilyn Monroe looks like and trying to recreate the image by itself, so it's a kind of sped-up version of the process. The deepfake process is best thought of not as recreating a person, but as recreating a moving image of a person. For example, you can deepfake Marilyn Monroe looking like this, with the iconic lipstick, eyeliner and beauty spot, but you can't, as of yet, deepfake Marilyn Monroe without makeup on, because there's no data of her looking like that. Although, possibly, you could do some kind of generative AI version. As a film to make and script, it was very complex. I'll just show you little making-of clips as I speak. I was writing the script simultaneously with running deepfake tests, audio and video, and I kept having to rewrite it due to the constraints of the technology. For example, you can't shoot anybody from behind, or even in profile: both eyes need to be visible to the tracking at all times, which is much harder than you'd think. Similarly, a character can't enter or leave a shot, because the tracking data will be lost. Initially, I started looking at deepfaking a Bond girl. However, the issue was that there's very little data available to recreate a Bond girl. Often they only appear in the film for a short time, and they barely say anything. Ursula Andress, one of the most famous Bond girls, didn't even use her own voice; she was post-dubbed. I gradually hit upon the problem that you can only deepfake people who have been heavily documented. So in that sense deepfake is liable to reproduce all the problems and biases of representation that already exist. And you can also only deepfake people if there's enough data of them looking consistent. For example, you can't change their hair and their makeup that much, and they have to be around the same age in all of the shots. I alighted on people who have a look that is immutable and who are in some sense iconic, or frozen in time.
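The shooting constraints described here (both eyes visible at all times, no entering or leaving the shot) can be pictured as a simple pre-flight check over per-frame tracking results. This is a hypothetical sketch, not part of any actual pipeline: the frame dictionaries and the `usable_for_training` helper are invented for illustration, standing in for whatever a real landmark detector would report.

```python
# Hypothetical pre-flight check on source footage for a deepfake shoot.
# Each frame dict stands in for one frame's landmark-detector output.
def usable_for_training(frames):
    """Return (ok, reason) for a clip, enforcing the constraints above:
    the face must never leave the shot (tracking would be lost), and
    both eyes must be visible in every frame (no profile shots)."""
    for i, f in enumerate(frames):
        if not f["face_in_shot"]:
            return False, f"frame {i}: face left the shot, tracking lost"
        if not (f["left_eye"] and f["right_eye"]):
            return False, f"frame {i}: both eyes must be visible"
    return True, "clip usable"

good = [{"left_eye": True, "right_eye": True, "face_in_shot": True}] * 3
profile = good + [{"left_eye": True, "right_eye": False, "face_in_shot": True}]
print(usable_for_training(good))     # (True, 'clip usable')
print(usable_for_training(profile))  # (False, 'frame 3: both eyes must be visible')
```

The point of the sketch is that a single bad frame invalidates a whole clip, which is why the script kept having to be rewritten around what could actually be shot.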
JFK is only remembered from the era when he was president, and similarly Marilyn Monroe: we never got to see them age. The fact that in almost all of the images we have and remember of them they are the same age, and beyond that Marilyn is wearing the exact same makeup, makes them very deepfakeable. But beyond that, it makes them very powerful, iconic images. So these are some outputs from DeepFaceLab as I was training the AI. And these are examples of me dressed up in a terrible wig and this very thick prosthetic neck, playing the various Bond characters, with the final footage above. How much time do I have left? Are we almost... bring it to a close? OK. I'll very quickly show you some... Oh, here's one more clip. No, wait a minute. I'll very quickly show you some paintings that I made. So alongside the film, I made a series of paintings which explore similar ideas. They're physically textured canvases that look for all the world like oil paintings, but in fact they're digitally printed; the paint effect is applied in Photoshop and just ameliorated afterwards with varnish, so they're formally very untrustworthy. They're displayed as pairs, one upside down and one the right way round, so effectively they're two different perspectives on the same scene. As in the classic duck-rabbit illusion, which I showed earlier, you can't see both scenes simultaneously; it's like your brain prevents you from seeing both sides of the story at the same time. I'm interested in this idea as a reflection of what I feel is the current state of politics, at least in the UK, where opinion has become so polarised that what is true looks different depending on which side of the divide you stand on. The characters' faces are mediated; for example, Marilyn Monroe's face or JFK's face is behind a newspaper. So, as with deepfake, there's a sense of there being something behind the mask, which you can almost see, but not quite.
In the paintings, the face alternates between JFK and Marilyn. The world they're in is quite classic, or classy. Marilyn and JFK present a very binary, stable world of masculinity and femininity. However, in the paintings, when you turn them upside down, the stability unravels and their gender becomes unclear. There is a sense in which complexity is straining out through the surface of binary logic. I want them to be seductive but sinister spaces that are littered with symbols which imply but don't fully explain the narrative or conspiracy. Maybe I'll tie it up there. I was going to show you loads more, but I've blabbed on for too long. So, thank you.

I certainly wouldn't say that you've blabbed on, and it's really difficult to tell you to shorten the talk, because it's just such interesting material. We could probably sit here for another hour and just go, ah, that's so cool. But we need to have at least a couple of minutes to be able to ask some questions. So if you have a question for Rachel, please raise your hand, and we'll bring a microphone around to you. Just go ahead and raise your hand. While we're doing that, maybe I can ask a question. It's one that I often get asked; I work in the field of games. Have you considered the ethical consequences of your work?

Yeah, I mean, I guess that's maybe what I was touching on a little bit in my thinking about who I deepfake and what the intention of the deepfake is. I think, ethically, I'm taking stuff from Bond films, but it's not competing with, or meaning to be, a Bond film; it's clearly a satirical, parodic take on it. And also I'm thinking quite a lot about: if you take a man's image, what does that mean? What power does he have, and what power do you wield by taking that? And if you take an image of a woman, what does that mean? So I feel like it's really a complex thing.

But you touched on that.
And that was quite interesting, what you said, in a sense: because we're only using data that is already heavily available, we're sort of reinforcing these stereotypes. You touched on that a little bit in your talk. How do you combat that? What can we do about that if we want to use these techniques? How can we avoid simply reinforcing the stereotypes that are there in the data in the first place?

I think it depends what you do with it. I mean, it's been like two years since I used deepfake, or finished making this film, really, so I feel like generative AI has moved this on a lot, where you can simulate something without necessarily taking exactly from a data set. Whereas working with deepfake, it was really precise: what you fed in was what you got out, and really you could only get out what was available. So, partly because that was a constraint of the technology, I wanted to create a film which commented on that and played on that, and took down a quite iconic figure. Sean Connery is probably the most famous Scottish person ever, so there's something intentionally quite iconoclastic about it. But that's because I'm using what the technology does best, to an extent.

Okay, thanks. Thanks very much. I think there are some questions in the audience. We've got our first microphone. Feel free to ask.

Hi. Hello. A really interesting talk, thank you. You say in your art you're really promoting, or putting out there, a lot of instability and ambiguity, exposing the audience to that. Why? I assume you have a purpose.

Yeah, I mean, I think that you get used to seeing images in the world to the extent that they become almost banal. I play a lot with imagery, maybe not so much in this film, but imagery that's very seductive and often very saccharine, and it feels familiar and comforting at some level.
But then when you engage with the work, something forces through the surface and makes you feel unstable. So my intention is that you might, for example, watch a Bond film and think slightly differently about it next time.

Do you think we have a higher capacity to deal with ambiguities and superimposed meanings than we are usually asked to have?

In the context of art?

Yeah, in film and in the environment.

Yeah, I think so. I think people are really smart, and mainstream media often doesn't take account of quite how nuanced people's appreciation of film and art can be. So yes, I think we do have the capacity to understand something in a complex way.

Thank you. Okay, if you have another question, just raise your hand and we'll bring around the microphone. There's a question up here.

Thank you. I just wanted to ask what software and what programs you used for the project, because I don't know which programs are good for working with deepfakes.

Yeah, sure. I used DeepFaceLab for the face, and I worked in my studio with Tim Dalzell, who's somebody I work with a lot, who's also my studio assistant. We spent months gathering the data, testing it, working out what it could do. That's just open-source software, DeepFaceLab. For the audio I used Uberduck, which incidentally also has "duck" in the name, and I worked with Martin Disley, a coder at Edinburgh University, who took the model and reworked it specifically for what we were doing. Again, there were months of collecting data and training the model.
And then everything else is green screen, After Effects, Premiere, all the usual stuff.

Okay, we have one question up here.

I wanted to ask: since you're acting all the characters in the movie, isn't it hard to produce as well as act all the characters? You can't see the shots, and especially the entire making process, while you're acting.

That's a good question. Because I play the characters in my films a lot, I've come up with this kind of weird process where I've got a screen where I can see myself. But yeah, it is kind of tricky. The thing about it that's not tricky is that I know what I want, so at least at that stage of the process you're not trying to communicate to somebody else what you're after. There's a degree of flexibility. It also meant that I could wear the one suit to play all the Bonds, and that saved us a lot of money.

Okay, and then we have one more question up front, and I think then we'll have to move on to our next talk.

Hi, thank you for your talk and for this great artwork, I really love it. What was the most difficult part of this project? What was the hardest thing to solve?

Yeah, I think the hardest thing was how long the deepfake took to come out. When we were training the faces, it would take about two weeks from when you set it off to when you got the output, and that would be enough to fill Sean Connery's face for two minutes, and only Sean Connery. So we had about five computers running simultaneously. And occasionally they just put out stuff that didn't make sense. It just looked shit, and we didn't know why.
So I think the thing that's sort of fascinating about deepfakes and AI is that mysterious part in between that even the people who develop the AI don't fully understand. But that lack of control over all the variables, which you would normally have with software, was, I think, the most difficult part.

And were the backgrounds also generated?

No, they're just 3D modelled. If I did it now, I would generate them, but at that point generative AI was not really at a point where I could do that; it would be better if they were.

Okay, thank you. Thank you so much for your talk and for answering all of our questions. Let's give a warm hand of applause. If we're lucky, maybe Rachel can stay until the break, and then if you have some questions, maybe we can chat a bit later today. That would be awesome. Thank you so much.

Our next speaker is Nicolas Gouraud, who is based in Paris and has a background in visual arts and visual studies. His works have been shown at museums pretty much all over the world, and also at the Ars Electronica Festival. Today he's going to be talking about his approach to critically using new media as a documentary tool. So let's give a warm applause for Nicolas Gouraud.

Hi, hello everyone. I'm Nicolas Gouraud, an artist and filmmaker. For this talk, rather than focus on the one work I'm showing at Ars Electronica, which is called Unknown Label, I'd rather show you a little bit of the previous work I've been doing, so you get an idea of the general approach I use. I have a background in visual arts and visual studies.
In my films and the multimedia projects I make, I always try to use tools that are meaningful to the project, and to find a critical or sideways way of using them. I'm going to show you different projects. The first one is an older project, an installation, which used what is by now becoming old-school technology: face recognition software, used in a strange way to identify faces in clouds. It was a way to recreate pareidolia, the human process in which we project meaning onto shapeless forms. So this is a very quick example of the kind of project where I use technologies in a different way.

After this project, I had the opportunity to collaborate with Forensic Architecture, an agency in London which I'm sure you are all familiar with, on a specific case in Cameroon. For me it was a revelation, because I discovered some of the methodology they were using: starting from testimonies and drawings from witnesses, and also images found online using the methodology called OSINT, Open Source Intelligence, in which you basically use the resources that are available online, on YouTube, Facebook, or wherever, and you try to make sense of these images with the use of 3D technology.
And that's something that was quite influential for my next project, This Means More, a narrative short film that dealt with the privatization of the space of the football stadium, using a specific case: the stadium of Liverpool Football Club. I'd like to show you a short, five-minute clip of this film, which was also a dual-screen installation.

[Clip plays. Liverpool supporters are interviewed:]

"What is the Kop? Well, to me, it's the place where the people meet. It's where you go to support Liverpool Football Club. It's a subgroup of the community."

[Portions of the clip's audio are unintelligible in the recording. The supporters describe the history of the terrace: when the Kop started it was a cinder bank, and then it became an open terrace with crush barriers arranged in a zigzag. It wasn't divided into sections or pens; you could come into the Kop on one side and come out on the other. That was how the local supporters went to the game, standing behind the goal.]

"It always swayed, it always moved. It was swaying from side to side because it was that packed. As Liverpool attacked, there'd be a wave of people falling forward, on their tiptoes, trying to see what was going on."

[They recall learning to read the crowd's movements, a surge during a 1970s match when George Best was playing, and the sense of people helping each other. The clip ends with a song: "...Because you're not you / And you know that can't be right..."]

That was a short clip that highlights some of the imagery I use in the film. The idea was to tell the story of the Liverpool stadium: how it changed between the 1970s and today, and how it was privatized in the 1990s, after a tragedy involving the supporters. Without going into too much detail about the methodology: for me it was very important to start by meeting the supporters, to go to Liverpool to meet and interview them, and also, as you see a little at the beginning of the video, to reconstruct with them how the stadium was built, from historical drawings and drawings of their own through which they could describe their experience.
What you see afterwards, the crowd simulation, is an example of how I can use a specific tool that is linked to the subject. For this project, I discovered how advertisements for football are made. Most of these commercials, at least as of a few years back, are all shot on green screen: the players come and play a little with other people, and everything else you see, the stadium and all that, is added afterwards as a fake, CG crowd. So the stadium as seen on TV or on the internet is mostly CG crowds added behind the players. The idea for the project was fairly simple: learn the tool, discover how it is used. It's a tool called Golaem, which is used by big VFX companies to make this kind of commercial, and then try to use it to tell the story of the supporters. The rule of the video was to never show the supporters' faces, apart from at the beginning, but rather to show the simulation and listen to the supporters' stories. That was the idea behind the project, and I tried to continue it with the next project, Tourba, which is also linked to football culture. It focused on one specific thing: the pitch invasion, when supporters enter the pitch after a victory, or for whatever reason. And it's something I discovered while doing research for my film This Means More.
Pitch invasions have become more and more rare, because stadiums are now packed with security. The idea for the project, which is multimedia, real-time animation, was to work with a programmer to recreate crowd simulation software. For those interested, it uses Godot, an open-source video game engine. The idea was to recreate a pitch invasion using the same tools as the commercials. The images you see on the avatars are taken from actual archives of pitch invasions. The idea behind it was a little like this Bruegel painting: what interested me was having different gestures, a catalogue of gestures of people performing rituals. That's what I tried to reproduce in the work, by looking at archives of pitch invasions from different countries and turning the archives into 3D avatars. I have a short clip here to show you the process. It was a very low-tech process, hand painting; this was really before, at the beginning of, all the AI tools that could turn a picture into 3D characters. So I worked with an assistant and we basically painted everything. A very artisanal, very old-school process. But you see the idea: to bring the archive back into the crowd simulation, as a kind of homage to the crowd. And also, this crowd is acting together; they are not fighting each other, they share the same space. That was the motivation for this work.
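A real-time crowd simulation of the kind described here can be sketched in a few lines: each avatar steers toward a shared goal on the pitch while being pushed apart from its neighbours. This is an illustrative sketch in plain Python under simplified assumptions, not the project's actual Godot code.

```python
import math

def step_crowd(positions, goal, speed=0.5, min_dist=1.0):
    """Advance a toy pitch-invasion crowd by one tick.

    Each avatar moves toward a shared goal (goal-seeking) and is
    nudged away from any neighbour closer than min_dist (separation).
    positions: list of (x, y) tuples; returns a new list.
    """
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Goal-seeking: unit vector toward the goal, scaled by speed.
        dx, dy = goal[0] - x, goal[1] - y
        dist = math.hypot(dx, dy) or 1.0
        vx, vy = speed * dx / dist, speed * dy / dist
        # Separation: push away from neighbours that are too close.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            sx, sy = x - ox, y - oy
            d = math.hypot(sx, sy)
            if 0 < d < min_dist:
                vx += (min_dist - d) * sx / d
                vy += (min_dist - d) * sy / d
        new_positions.append((x + vx, y + vy))
    return new_positions

# Avatars start near the stands and converge on a point on the pitch.
crowd = [(0.0, 0.0), (0.0, 2.0), (10.0, 0.0)]
for _ in range(5):
    crowd = step_crowd(crowd, goal=(5.0, 5.0))
```

A real engine would run a step like this every frame and skin each agent with one of the hand-painted archive avatars.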
I also made a short film connected to the project I'm presenting at Ars Electronica, called VO. This time I was more interested in how self-driving cars are trained, and I wanted to focus on the work of the people who train the cars. The film starts with an accident that happened in 2018 in the United States, the first fatal crash between a self-driving car, operated by Uber, and a pedestrian. The accident revealed the role of the vehicle operators, whom we call VOs, who were in fact safety drivers: people inside the car while it drove, but who were obscured, never spoken about. So the film is, in the same way, testimonies of these operators describing their experience. I have a five-minute clip to show you the visual treatment I chose, in terms of graphics as well.

[Clip plays:] "As a vehicle operator, you're spending, in an eight-hour workday, like seven hours in the car." "I don't know how many people were watching things on their phone, but there were a lot." "It was, you know, like having your mother drive you."

I chose this excerpt because it shows a bit of the diversity of the film's aesthetics. The idea of the film was to try to convey the experience of these people's work and the connection they had with the car, knowing that the car drove itself most of the time and that they had to take over in case of an incident. But the fact is that human beings are very bad at this, because our attention span is very low. So it creates a sense of trust in the software, or in the vehicle in this case, which is called automation complacency.
And that led to the crash. That is what I wanted the viewer to experience in the film; that's why it's a little hypnotic at times. I just wanted to point out a few things. It started with these images that were released, leaked, by Uber: footage from the car's cameras and from inside the vehicle. The woman you see here is the vehicle operator at the time of the crash. Uber released these images to put the blame on the safety operator rather than on the vehicle. The idea of the film was to keep this visual structure, a shot/reverse-shot, to see what the operator saw and what the vehicle saw. This was the kind of device that filmed the face of the vehicle operator, and on top of the car you can see the sensors they use, which are called LiDAR. I wanted to use these same LiDAR devices for the visuals of the film. That's what you see in the travelling sequences: they are data points captured by the car's LiDAR. I got in touch with a French startup that was using this kind of sensor, and I could then use them to make these travelling shots. So the idea was always to do the interviews with the vehicle operators and, at the same time, try to imagine a visual way to tell the story. In the film we are in a subjective shot inside the car; we see through the eyes of the car. And then, at the end, we have these documents, which underline a little what I said at the beginning about OSINT: documents written by the Phoenix police, showing the record of what she was playing on her phone at the time of the crash.
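The travelling sequences described here render raw LiDAR returns as points on screen. The core of that rendering step is a pinhole projection of 3D points onto an image plane; the sketch below, in plain Python, illustrates the principle and is not the film's actual pipeline.

```python
def project_points(points, focal=1.0, width=640, height=480):
    """Project 3D LiDAR points (x, y, z) onto a 2D image plane.

    Uses a simple pinhole model: the camera looks down +z, and a
    point at depth z lands at (focal * x / z, focal * y / z) in
    normalized coordinates, shifted and scaled to pixels.
    Points behind the camera are dropped.
    """
    pixels = []
    for x, y, z in points:
        if z <= 0:
            continue  # behind the camera
        u = width / 2 + focal * x / z * (width / 2)
        v = height / 2 - focal * y / z * (height / 2)
        pixels.append((u, v))
    return pixels

# A point straight ahead projects to the image centre; at the same
# lateral offset, a near point lands further from the centre than
# a distant one, which is what gives the point clouds their depth.
pts = project_points([(0.0, 0.0, 5.0), (1.0, 0.0, 2.0), (1.0, 0.0, 10.0)])
```

Drawing one small dot per projected point, frame after frame as the sensor moves, produces the kind of travelling point-cloud image the film uses.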
This is something Uber used to blame the driver, to say she was watching a series on Netflix rather than the road. So I tried to counter that in the film. It's a way of working against the company's logic, but using publicly available documents.

This work, to finish quickly, was the first step in a longer project about the training of self-driving cars. It led to the project I'm showing at Ars Electronica this year, called Unknown Label, which takes the process one step back, in that it is about the way the car is trained to recognize objects. The crash with the Uber car occurred because the car was not able to identify the person crossing. The person was crossing, but outside of the pedestrian crossing, and the car was not designed to recognize a human being outside of a pedestrian crossing. So I got interested in knowing more about this process, and I got in touch with micro-workers, basically people who work online on this kind of labelling task. I'm not going to delve too much into the details, but I got in touch with them and they also shared a lot of documents about the process, about how they can identify the different objects. Here you see a bit of the different categories: animals, human objects, and so on. I'd like to finish by talking a little about Grimm, one of the categories I discovered while doing my research. It can seem a pretty complex and extensive instruction document.
But at the same time, you have these very simplistic and potentially very harmful categories, where a human lying under a blanket or a sheet can be identified as an object if no part of the end of the leg is visible. This was something I discovered during the research and that I then introduced into the film. I'm just going to show you the screen capture of the software they use, and I will let you discover the work if you want; it's going to be screened, I think, later today at 4:30 in the Met Campus. I hope I was clear, and let me know if you have any questions. Thank you.

Thank you so much for the insight into your projects; they're incredibly inspiring in their use of different technologies in very innovative ways. We have some time for some questions, so if you have a question, we'll pass you a microphone, and I think we'll just start right with your question.

Sorry, I don't mean to monopolize the question time. Fascinating talk. I believe you have a point of view about this driverless-car training with respect to the people who are supposed to be the safety valve. Can you tell us what that point of view is?

Yeah. For the VO part of the project, I don't have the word in English, but they were like the first row of people: the first people to be sacrificed by the company, the safety drivers. They were there, in a way, to protect the company, and that was very unfair to me. And now, it's important to say, three or four years after the accident, there has been no prosecution, only because Uber paid the victim's family to settle the case.
The only person who was charged and convicted was the safety driver. She was, in the end, convicted of involuntary homicide... I've lost the word. Negligence? Yes, negligence, gross negligence. So she was there to protect the company, and I think it's a very unfair situation. The film is trying to hint at that, but not in an obvious way, because I don't want to make a manifesto or whatever; it's just showing that it's much more complex than what Uber is trying to make us believe, in a way.

Thank you very much. Okay, thanks for getting us started. We've got another question up front, right over here.

Hello, thank you for your talk. I have a question about the... how is it called? VO? Yes, the VO work. In the part where you see the data points and the car drives through the city, there's a line in the sky. It looks like the driving path, but what is it?

Yeah, actually it's funny, because it's very simply a glitch. Not quite a glitch: I was doing the recording with the LiDAR, and it's a misalignment, a part of the vehicle that appears there. But I decided to keep it, because during the pass it looks like it's guiding the vehicle. Very basically, it was a glitch in the process of making the images.

Okay. Yeah, it was really cool. It looked to me like a prediction vector.

Yeah. The data that you see then gets processed, and you get the vectors from that processing, but I didn't have access to that processed data, so I was just using the raw data. And then this line carried, for me, this idea of the path, of guiding the path, so it worked.

Okay, do we have any other questions in the audience? Right in the middle.
Thanks so much; this is fascinating work. I was thinking about your most recent project and how you got information about the labelling. You said it was from micro-task workers, or something like that, and I was surprised that you could get that information. Were they supposed to share it? Maybe we're being recorded and we shouldn't ask this question, but it seems quite incriminating that this was made accessible.

Yeah. That's also part of the documentary approach, the filmmaking approach; I didn't talk so much about that. I first needed to contact the people, so I got in touch with journalists who introduced me to them. Then you have to gain their trust. I did several interviews, and then they started to share the data. It's a game of trust, and to keep that trust, the people who shared the data are not named. I asked them, and they were okay with using their first names but not their last names. So I use all this deontology as a filmmaker to avoid exposing the people who share information with me, because it could be incriminating for them, and dangerous. That said, the platform they work for, which I'm not going to name, is going to close soon, so they don't really care so much now. But when I was making the film it was still a bit unclear. For me it's very important to protect the people, because that's part of the ethical questions I ask myself when I make a documentary. It is a documentary for me; I use computer graphics and things like that.
But what I wanted to insist on in the talk is that it's a documentary approach before anything else.

Thank you. Okay, are there any other questions from the audience? In the meantime, while we're looking for questions, I have one. In that work, near the beginning, you have probably the most extreme close-up I've ever seen on camera, which I think is of an eye. What were you going for there, and how did you realize that look?

So yeah, it's pretty dark, so I hope you could see it, and it's a bit abstract, so you have to see it in the film to get it. But it's basically a shot/reverse-shot, like I was saying: you see the eye of the safety driver, and then the camera, which is a very simple visual device to say that this is a question of perception. The whole idea of the film is summarized in a montage of two shots: the perception of the human who is trying to keep watch over the car and its automation, and the machine gazing back at that eye. For me it was a strong montage to make, and it summarized the main idea of the film. Thank you.

Okay, thanks. If there are no more questions, again, I'd invite you to take a look at the work. It's at 4 p.m., I believe, at the MedSpace, where there are a lot of films and animations. Thanks so much for your talk. Hopefully you'll stick around a little bit later on, and we'll see you around. Thank you very much.

Okay, we're coming to the end of our Artist Position panel, and our third speaker for today is Paul Trillo, the winner of the Golden Nica in the category AI in Art. His works are usually a combination of experimental techniques and technology, and they range from generative AI visualizations to drone-based smoke visualizations, even to Super Bowl commercials.
So let's give Paul Trillo a warm welcome.

All right, thank you for having me, Ars Electronica and the Expanded Animation section of the festival as well. I'm very honored, and it's been very inspiring seeing all these different perspectives as well as the presentations today. So yeah, thanks again for having me. I will flip through a bunch of my journey using AI and how it applies to traditional filmmaking, traditional VFX, traditional animation, as well as experimental and video art. And hopefully I can give people a slightly different perspective on how to use AI: that there might be more challenging ways to use it, that it isn't necessarily just a shortcut or a lazy way to get to something, and that it can actually force you to discover new ideas and reallocate your time back into the creative process.

These are some early works I did, early in terms of generative animation and generative video work, back in 2022. Not that long ago, but on an AI timeline it's a while ago. A lot of this is traditional VFX, traditional animation, even frame-by-frame work. The one in the middle is using DALL-E. I was a beta tester for DALL-E, and I was really just curious: can this tool be used for visual effects, versus just generating images that are in the training data? Can you manipulate your own imagery to create something new, and can you make animation or video out of it? At the time, there were no generative video models out there. DALL-E wasn't designed to do animation, and so it caught the attention of the OpenAI team, and took them by surprise, that their tools could actually be used for that.
And it kind of surprised me, too, that there's a new way of making imagery and a new way of bringing your imagination to screen, and so I just started to lean into more experiments to see where this was all going. This was (some of the clips aren't playing) very, very preliminary: developing new techniques where I'd shoot a video, chop it up frame by frame, upload each frame into DALL-E, erase part of the image, download it, sequence it, and then use another AI to do the frame interpolation. So yeah, it was akin to stop motion. But it was exciting because it showed this future of infinite choice, which can also be one of the problems with AI: knowing when to stop. Again, this was very much taking a machete into the jungle and just seeing what was possible. Not very controversial at the time, and so I just kept following the lead to see where it went. Here are some other works, including some 3D-scanning neural network tools. Oh, here we go. So then, later in 2022, Stable Diffusion was released open source and changed the way we use AI from that point on. That was when the tools started to be put into the hands of the public in terms of image generation, to see what could be done with that. And this was maybe just a month after Stable Diffusion went open source. I was approached by an agency, AKQA, to do this animation for GoFundMe, bringing a community mural to life. And let's see. Okay, there we go. And the original pitch was to just pan and scan around a 2D AI-generated image.
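The frame-by-frame pipeline described above can be sketched in miniature. This is an illustrative toy, not his actual toolchain: the per-frame AI edit is reduced to erasing masked pixels, the AI frame interpolator (tools like RIFE or FILM) is replaced by a plain linear blend, and a frame is just a flat list of grayscale values.

```python
# Toy sketch of the edit-then-interpolate workflow: edit every frame,
# then re-sequence the result with generated in-between frames.

def edit_frame(frame, mask, fill=0):
    """Stand-in for the per-frame AI edit: erase masked pixels."""
    return [fill if m else p for p, m in zip(frame, mask)]

def interpolate(a, b, n=1):
    """Insert n evenly spaced in-between frames between frames a and b."""
    out = []
    for i in range(1, n + 1):
        t = i / (n + 1)
        out.append([round(pa * (1 - t) + pb * t) for pa, pb in zip(a, b)])
    return out

def rebuild(frames, mask, n_between=1):
    """Edit each frame, then re-sequence with interpolated in-betweens."""
    edited = [edit_frame(f, mask) for f in frames]
    seq = []
    for a, b in zip(edited, edited[1:]):
        seq.append(a)
        seq.extend(interpolate(a, b, n_between))
    seq.append(edited[-1])
    return seq
```

In the real pipeline, `edit_frame` would be a call to an image-editing model and `interpolate` a learned interpolator; the structure of the loop is the same.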
And knowing that Stable Diffusion had just come out, I thought, oh, there's actually maybe a more complex way we could use AI in traditional animation, and that's through a process called image-to-image, where you take an image that you've made and run it through the AI to have it apply a kind of style transfer. Again, this was very early on, so it still had that AI flicker effect, but we shot real actors on a sound stage; some of the background assets are generated with AI and some are 3D assets. We composed 3D environments, and then implemented as much control in the process as we could while appreciating a bit of the chaos and that glitch effect that comes with AI, where it feels like every frame is a unique image. So yeah, it was very encouraging, and here's a little side-by-side of what that looks like. There's my dog and me. You can see how you can go from very crude compositions, run them through the AI, and it kind of composites things for you. So that's showing the layer-by-layer process, and then it's run through Stable Diffusion multiple times to give it this more painterly feel. I kept expanding on these image-to-image techniques, combining traditional production with traditional VFX, but seeing what more could be explored aesthetically with AI that is maybe unique to the tool, rather than doing things completely in 3D, which has its own aesthetics. There was something that felt almost more tangible; the texture of some of the AI rendering offered something new. And it also opened me up to trying more ambitious ideas. This project was with a French music artist named Jacques. He had his music playing at the Louvre, and so he bartered with the Louvre to use it as a backdrop, a location for a short film. We had about eight hours to shoot at the Louvre, and we really had no idea what the hell we were going to do.
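The image-to-image idea he describes (corrupt a source image with noise, then let the model re-render it) can be caricatured in a few lines. This is a toy under stated assumptions: the diffusion model is stood in by a caller-supplied `denoise` function, and the `strength` convention (0 keeps the input, 1 discards it) follows common img2img tools.

```python
import random

def img2img(pixels, strength, denoise, steps=10, seed=0):
    """Toy image-to-image loop: noise the input in proportion to
    `strength`, then iteratively pull it toward the 'styled' output
    of `denoise` (a stand-in for the diffusion model)."""
    rng = random.Random(seed)
    # Corrupt the input; strength=0 keeps it, strength=1 destroys it.
    x = [p * (1 - strength) + rng.uniform(0, 255) * strength for p in pixels]
    # Run only a fraction of the denoising steps proportional to strength,
    # so low strength stays close to the source composition.
    for _ in range(max(1, round(steps * strength))):
        target = denoise(x)
        x = [0.5 * a + 0.5 * b for a, b in zip(x, target)]
    return x
```

This is why his crude layer-by-layer composites survive the process: at moderate strength the source layout is only partly noised away, so the model restyles it without replacing it.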
I had storyboarded it, but all the effects and how we were going to accomplish them were very much TBD, to be determined. But it was exciting, because I was going after ideas I maybe would have shot down before. And again, that's maybe one of the promises of AI: you don't do as much self-editing in the creative process. Instead of saying, oh, this is going to be too difficult, too expensive, or it's not going to look good, and talking yourself out of exploring certain ideas, what was encouraging was saying, I don't know, let's try it, maybe we can pull this off. And so I ended up discovering a film that I would not have made prior to AI. So we can watch a piece of this. Let's see. [The music video plays.] So yeah, the idea came out of crying in public. I've just always been fascinated by people who cry in public, and what gets someone to the point that they're so vulnerable that they let themselves break down like that, as well as people who can feel such intense emotion from beauty or art that they break down and enter this emotional state. It's called Stendhal syndrome: you can enter a very emotional or even hallucinatory state when you experience great beauty. I found that a compelling idea, and we just started from that little kernel and kept following different tangents, as well as this idea of transformation, which is kind of intrinsic to AI: transforming and morphing from one thing to another.
And so we took those themes and ran with them. The idea is this teardrop that is peeling back the veneer, the polish, the craft, and the beauty of these historical works, and maybe showing a little more of the grotesque pain and agony that's hiding underneath beauty. And I think AI can be used to make really beautiful images, or even these hyper-real, overly aesthetic images that, for me, get to the point of the grotesque. So I found it interesting to use AI to balance beauty and the grotesque. I also thought it would just be fun to use AI in the Louvre, this beacon of art history, and use it to literally destroy art history, which a lot of people are crying about. So I thought it would be a fun, tongue-in-cheek concept in that way. Here's a little bit about the process and the variety of tools and techniques that were used. There are over 80 VFX shots in the video, which was something so ambitious I would never have pitched it prior to having these tools, but they allowed us to dream bigger. So it wasn't necessarily less effort; it just allowed us to explore more experimental effects and arrive at visuals I maybe wouldn't have gone towards without it. It still took many, many months to complete. And yeah, what I liked was what it does aesthetically that I think 3D rendering wouldn't give you, which is something that feels more true to the image. For instance, for this shot we made a practical rock and then ran it through style filters, morphed that rock, and transformed it into other objects. What the AI rendering is doing is looking at the context of an image and trying to match the aesthetic of the input. It's a process called inpainting.
So it's looking at the lighting of a shot, or seeing that we're referencing Renaissance paintings; it's seeing the camera perspective, the lighting, everything intrinsically aesthetic to the image, and the inpainting matches that. So it does a better job in some ways than a lot of 3D rendering does, and creates a more tangible feel. This VFX breakdown is also playing at the Lentos Museum, so you can check that out later. Here's a bit of the process going into it. Again, drawing a bunch of scribbles, not knowing how the hell we were going to pull this stuff off, but showing how much we did stay true to the storyboards and tried to implement as much control in the process, rather than letting the AI dominate it. And something I'd like to emphasize is trying to stay true to your vision when using these tools, because it can be very alluring to see a pleasing AI image and say, okay, great, good enough, I'm going to put that on the internet and say I made it. But there's something missing in that: you're letting the AI make a lot of the decisions for you. And so I think it's really important to, as much as possible, control your vision and not lose your voice in this technology. So this piece is for the music artist Washed Out, for his song Hardest Part, which won the Golden Nica Award. Thank you. Thank you. Which, yeah, is an immense honor, to say the very least. To be at a festival founded on computer-generated imagery and electronic art, in the first year to include AI as part of the lexicon, and to be given this award just means a lot to me. So I really appreciate that. So this video, unlike the previous one, where we shot things and there's a lot of traditional VFX being applied and a lot of control, is fully generative video, using OpenAI's Sora.
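The inpainting behavior described above, where the fill is derived from the surrounding context so it inherits the image's own lighting and palette, can be illustrated with a deliberately crude stand-in: here the "model" is just the mean of the unmasked pixels, whereas real inpainting conditions a diffusion model on them.

```python
# Toy illustration of inpainting's core idea: the masked region is filled
# using only the surrounding (unmasked) context, so the result matches
# the rest of the image rather than importing a foreign look.

def inpaint(pixels, mask):
    """Fill masked pixels with the average of the unmasked ones."""
    kept = [p for p, m in zip(pixels, mask) if not m]
    fill = sum(kept) / len(kept)
    return [fill if m else p for p, m in zip(pixels, mask)]
```

The point of the sketch is only the information flow: nothing outside the unmasked context contributes to the fill, which is why inpainted regions "match" in the way he describes.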
And it had just been too ambitious an idea to try to pull off. It's one of those many ideas where you think, ah, that's fun, that's interesting, but you put it in your back pocket or in a drawer and sort of forget about it. You build up this graveyard of ideas over time that you walk away from, and what was invigorating about this was being able to resurrect ideas from that graveyard, ideas that would never have been made without these new tools and technology. And there was something, I think, inherent in using generative video that conceptually tied into this piece, which is a trip down memory lane: the nature of memories, and how memories are inaccurate, deceptive, they can even lie to you; they're subjective, just like AI is. So there was a very clear connection between seeing something you might believe while knowing that maybe some of the details are not true. And I wanted to lean into that connection while, again, trying to tell a story in the process. I found myself drawn towards the realism of the Sora model, as much as it's kind of uncanny and disturbing. There's something interesting in that it's a mirror of our reality rather than a real depiction of it. And I think cameras and computer animation have their own biases and are maybe too perfect sometimes. When talking about dreams and memories, there are imperfections, things that are not necessarily true to reality but have an emotional truth. That was what I was trying to get at with this piece. And what I found exciting is that this is a new sort of camera to look at memories and dreams, because dream sequences or memory sequences in film and cinema can be a little too sharp sometimes.
They can be too 4K, in a way that you don't really buy it as a real dream. There's something smudgy about this that I found a little more intriguing, more mysterious, in that you can project your own ideas of what you're seeing onto it, along with some of the hallucinations that it makes, where the physics don't quite make sense and the lighting doesn't make sense. Oh, thank you. And so, yeah, there are aesthetic things that are unique to AI that I think we should embrace. These are side effects of the training data, rather than literal things in the training data: different concepts budding together and producing these glitches, these aberrations. And that is something totally unique to AI that we haven't seen before. So if there are people who are cautious about AI or skeptical of it, I think one use of it is to explore those aberrations and side effects, the things that other tools would never give you. Let's see, what else do we have? So, this is a piece I'm still finishing, but I figured I'd play it since it's part of the animation talks here. Well, we can just watch it and then I can discuss it.
[An excerpt of the film plays.] So that piece is called Ephemera. It's still a work in progress, but you got a little preview of what's coming, here exclusively. It was done with my friend Hawk and his performer partner, Erica Klein. They're two dancers in LA.
And, similar to Nick's previous talk about pareidolia, this idea of us constantly projecting humans onto nature I thought was really interesting. It's actually sort of how AI images work: in the diffusion process, you start out with a noise pattern, and the AI, projecting some sort of prompt or dream, looks within that noise and, through multiple steps, refines a bunch of black-and-white noise into imagery. It sounds complicated, but it is sort of how we see as well: we're not instantly trained to see the world when we're babies, we learn to see the world and to identify patterns, just like the Stable Diffusion process identifies patterns within an image. And so we are constantly projecting faces into trees and bodies into clouds. The piece came out of that. And here's a little of the process. We actually shot this over three different days: we had an initial shoot, realized we were missing footage, did some reshoots, and were able to seamlessly blend higher-end cameras with iPhone footage, using AI, again, as more of a rendering tool rather than letting it do all the work for us. But something I had struggled with in the past, before AI, was trying to generate realistic, believable cloud footage, or cloud people. So I was determined to use AI to create more believable cloud formations where I think 3D rendering and 2D compositing fall short. That was the motivation here. I'll give you a quick preview of this, and then I think we probably need to open it up for questions. This is another work-in-progress piece called The Most Perfect Person, featuring the YouTube artist and performer Poppy.
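The noise-to-image diffusion loop he compares to pareidolia can be caricatured in a few lines. In this sketch, the neural network that predicts the denoised image at each step is replaced by the fixed target pattern the "prompt" implies; only the iterative structure is faithful.

```python
import random

# Toy sketch of diffusion sampling: start from pure noise and, over
# multiple steps, nudge it toward the pattern the prompt implies.
# A real model predicts the denoised image with a neural network.

def sample(target, steps=10, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(0, 255) for _ in target]  # pure noise to begin with
    for _ in range(steps):
        # Each step removes a fraction of the remaining noise.
        x = [0.5 * xi + 0.5 * ti for xi, ti in zip(x, target)]
    return x
```

After ten steps the remaining noise is scaled by 2^-10, so the output is visually indistinguishable from the target, which is the "finding a face in the static" effect he describes.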
And it came out of this performance art piece she did where she trained an AI model on all of her data, all of her YouTube videos, all of her song lyrics, everything, and created an AI chatbot version of herself, a sort of digital twin. She then does these live events or live interviews where she plugs the AI into her ear and lets the AI version of herself dictate all of her words and actions. So she's relinquishing her free will to this amalgamation of herself, which I thought was a fascinating and fun idea, and so we have turned it into a short film. This is just a little bit of a preview, also showing some AI storyboarding and concept frames. At the bottom there you see the concept art, and then what made it into the final film. In the brainstorming process we generated hundreds and hundreds of images before we landed on things that worked. And then here's a little preview that shows OpenAI's Sora mixed with being able to upload your own videos and photos into Sora, using it again as a visual effects tool rather than relying blindly on text-to-video. So this is video-to-video, or video mixed with live action. This is visualizing the training process of how a large language model is built. We took phrases from Poppy's YouTube videos and represented them as these clones in this white void space. The good information gets to stay, the rejected data points get dropped into this dark void, and then some chaos ensues later. I'll skip through this project because I think we have to go to questions, I believe. Yes, thank you so much for your insight into your works, and particularly for the stuff that's not even public yet. So we're a very special audience for that. Totally. There's a lot of really interesting stuff.
Again, if you have a question, raise your hand and we'll bring a microphone. While the microphone's coming around, maybe a quick question. You mentioned before that one of the challenges of using these types of tools is knowing when to stop tweaking. What's your approach to that now? Yeah, I mean, it comes from experience directing on set. When you are given a bunch of choices, you learn to listen to your gut and your intuition. When the wardrobe designer asks, do you want the yellow dress or the blue dress or the red dress, you just say yellow, and you kind of just know, for whatever reason, that something is tied more thematically or is more true to the piece. So you need to hone your intuition over time, but it certainly gets arbitrary, where one thing can be equally as good as another, and you just have to make a distinction. I think it's really about trying to fine-tune your voice and your vision and what feels honest. Do you give yourself a time limit or something? Because on a production set you've got time limits, and you're sitting there at three in the morning working, thinking, should I make this one little tweak? Yeah, I mean, that's sort of the irony with AI: you think, oh, I can do stuff faster, but I actually have less time now than I used to, because being able to iterate eats up more of my time. So the iteration process ends up kind of blossoming, which is great, because you end up discovering things, going down routes you would never have gone down. Okay, let's see if we've got a microphone in the audience yet. There's a microphone over here. Do you have a question? Yeah? Okay, we'll start with your question. Thank you for your talk. Obviously, generative AI is a very touchy subject.
And my question is, how do you handle criticism when it comes to generative AI, especially creating videos? And has anybody ever come up to you and said, I actively hate and despise what you do? Which is not me; I loved it. OK. Not yet, so maybe after this talk that might happen, I'm not sure. Not face to face, but I have definitely received messages, and some of that criticism is valid; some of it is based in misunderstanding. I am not too affected by it, because I've been directing for 15 years, so you have to grow a thick skin. You also recognize the trends of technology: technology is relentless and never stops, and you can choose to make art with it and guide the public narrative by making more provocative work with it, rather than letting the companies decide the narrative of how these tools should be used. And so in each of these pieces I try to at least challenge the use of AI, or hack the way AI is used. I'm moving more towards bespoke models: my wife is a painter, and we've trained two models on her work and are working on an animated piece based on her paintings, moving towards things that use my own data points as a point of reference. OK. We have a microphone there. I'm just going to give my microphone... You've been waiting very patiently. Back there, please ask your question, and you can have my microphone in just a second. Hello. Thank you so much for a very interesting presentation. I'm here. Hi. Yeah. I have two technical questions, very short. One is that you mentioned Stable Diffusion, and I wonder, have you used Deforum for any of these projects? And the second question: you briefly mentioned training an AI model on your wife's paintings. Have you trained any model for the projects you have shown? Thank you. Yeah.
So, yeah, actually we skipped over a project where I did fine-tune a model. But Deforum was used for one or two sequences in that GoFundMe painted-animation piece. Right now, with the generative video models, you're beholden to the core model, which is either Sora or Gen-3. But what I'm excited about with Sora is the ability to upload your own footage and manipulate it. To me, that's a much more exciting way of using the AI tools: still being able to shoot, still having production as part of the process, and not leaning entirely on whatever is in the training data. Okay, I'll give you my mic. Thank you. Yeah, my question is, what is the time span between your inception of the ideas and the final product? And also, how are you going to reconcile the acceleration of the development of AI tools, because they'll become obsolete before you release? Sorry, the first part of the question was... Yeah, your time span between inception and the release of your artwork. Yeah, I mean, those initial experiments I showed, I was turning around in about a week or a week and a half, some in just a few days, really just to explore the limits of the tools; that was more in the 2022 era of the work. But this year I've been spending longer and longer amounts of time between the inception of an idea and delivering it. The cloud piece we shot in February, and it's still not fully finished yet. The Louvre film we shot at the beginning of 2023, and it was released in August of 2023. So we're putting the time savings, or whatever, back into the project, and I think it's really to try to future-proof the work.
So there may be things that might look dated at some point, but I'm really trying both to capture a moment in the technology and to create work that hopefully feels timeless. Okay, we have a question back there; there are lots of questions, but it was the first hand back there. Hi, thank you for the talk. First of all, I wanted to ask, how do you make sure that you get consistent imagery in generative AI? What are your best methods to make sure that the video looks consistent, like it's shot on a real camera? Yeah, I mean, a lot of it is generating a lot of material. I don't know if you're referencing the Washed Out music video, but for that video 700 clips were made, and I only used about 54 of them in the final piece. So a lot of it was eliminating things that didn't feel aesthetically aligned. I think it's also about being as detailed as possible in the writing process, so that you eliminate any kind of misunderstanding or miscommunication, and about understanding where the constraints are. Something that does bother me with a lot of AI filmmaking is this shot-to-shot look where things seem pulled from different sources; it's too random. Avoiding that takes a lot of battling and persistence, not just letting the AI models drive your voice and your vision, and really being persistent in trying to find something that feels true to the project. But also, with the cloud piece, for instance, there are four or five different looks: sunrise, noon, sort of overcast, storm, and then sunset. And we developed a model of photography for each one of those sections, as well as IP-Adapter reference images for those shots, because I had three different animators working on it, and so there was a lot of inconsistency from shot to shot.
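The generate-many-then-filter workflow he describes (700 clips generated, about 54 kept) can be sketched as a ranking pass against a reference look. Reducing a clip's "look" to its average color is a deliberate oversimplification for illustration; a real pipeline would compare embeddings from a vision model or IP-Adapter-style reference features.

```python
# Sketch of aesthetic filtering: overgenerate clips, score each against
# a reference "look", and keep only the closest matches. A clip here is
# a list of (r, g, b) pixel tuples.

def look(clip):
    """Average color of a clip, standing in for a learned embedding."""
    n = len(clip)
    return tuple(sum(px[i] for px in clip) / n for i in range(3))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select(clips, reference, keep):
    """Keep the `keep` clips whose look is closest to the reference."""
    ref = look(reference)
    ranked = sorted(clips, key=lambda c: distance(look(c), ref))
    return ranked[:keep]
```

With a better scoring function, the same structure covers both his cull of the 700 Washed Out clips and per-section look matching (sunrise, noon, overcast, storm, sunset).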
And so a lot of the work was fine-tuning each shot; each shot took about five days, to make sure there was that consistency. Thank you very much. OK, I think we're going to take just one more question, and then we'll take a lunch break. I'm certain that Paul will stick around a little longer and take your questions individually. But I saw your hand first, so I'm going to pass on the mic. Okay, so I didn't see... We'll start with you. I'll give him the mic. Two more questions, then, and again, I'm certain Paul will stick around for a little bit. Hi, Paul. We'll start with him. It's not just... Okay, finally it's on. I have a mean but playful question, in the sense that AI is not just a tool, because it brings some creativity, it adds something, more than just a tool. So how do you credit it at the end of the film? What role did it play? What title do you give it? There's also ComfyUI; I saw some of the ComfyUI tools, like the ControlNets and so on. These are made by people who are artists and technicians. So what role do they play in your production, and where do they end up in the credits? What percentage do they take in your films? Yeah, I mean, the first fully generative video I did was a film called, it's not here, but it's called Thank You for Not Answering. It was made with Gen-2 when it was still in beta. And I did say "directed by", but I put myself in quotation marks: "directed by" Paul Trillo, because it did feel like a collaboration. And I do think there is an oversimplification in just saying that AI is a tool, because it fundamentally changes the creative process. So it's a little bit more than a tool, and I think it's maybe disingenuous to define it as the same as 3D modeling or painting or something. There's another aspect to it that fundamentally changes how the work is made. So where's the line?
Because in the beginning you made a comment like, hey, you do a prompt, you create a picture, you put it on the internet, and you say you made it, but of course you didn't do that much. And you clearly do a hundred times more work than that, but where's the border? At what moment can you say, okay, this is more mine than it is the AI's? I mean, again, I credit the tools, and I do know some of the people behind ComfyUI, and they will be credited when the cloud film comes out. But I think it's just that the more hours you put into a project, the more ownership you have. I'm not just releasing AI images on the internet and saying they're mine. Actually, we should make this a little more playful, because I would love to see the credit. You used to have, for lighting, a director of photography. And if the AI now does some of the lighting, because it does a lot of that, do you credit it as "director of photography: Stable Diffusion"? Where do you put it in there? I just think it's fun to play with the role that it plays. So I don't mean to... I'm being kind of mean, but it's fun. It's a fun area, for sure. For sure. Sorry, we're a little over time, but yeah. Yeah, sorry. Quick question. Am I right that this is the first AI art video, the one that just won the Golden Nica? It's the first to win the AI in Art award at Ars Electronica. Yeah, but how is this going to influence filmmaking, for example, like what you just did? Yeah, I mean, it's going to fundamentally change the way we shoot things. I still believe film production will continue, but the number of shots, the coverage you need on set, the way you spray down a set, maybe the way you scan a set... We might start to train bespoke models per scene for a film, so if you need to do a pickup shot, or you need to change an angle on something you shot on the day, you have that malleability.
The Poppy film kind of goes into that: we took our footage and started to extend it and create new shots with it. So it basically means that the footage is infinitely malleable, and we've never really had this degree of malleability with our material before. So yeah, it's just infinite possibilities. Thank you so much, both to Paul and to the audience. It's definitely a topic that's worthy of more discussion, and you're welcome to stick around a little bit and maybe chat about it. We're going to take a bit of a lunch break. This is the end of the Artist Position panel. At 2 o'clock we'll be starting with our second art paper panel, which is about AI and speculative futures, so topically very similar. We'd ask you to join us again around 2 p.m. We're also here tomorrow as well; please take a look at the website for the extended program. Thanks for all your great questions, thanks for the great talk, and we'll see you later this afternoon. Thank you. Thank you.