Thank you. Okay, hi everyone, nice to have you here. My name is Martin. I'm a professor at the University of Applied Sciences Upper Austria in Hagenberg, so close to here, and I want to welcome you to the second art research paper session, called AI and Speculative Futures. I'll give you some brief information about the session and how it works. In total we will have four presentations. Three will be in person, and we will also have one presentation from a guest in San Diego. I'm not sure; we'll ask her where she is. The presentations are scheduled as 20-minute blocks: 15 minutes of talk, 5 minutes of Q&A afterwards. So if you have any questions, just raise your arm. We have mics, we'll bring you a mic, and you can ask your question. That's not a problem. And this is it, basically. So, without further ado, I'll start with the first speaker, Luca Caccini. Luca, come to us. Let's welcome Luca.

Hello and welcome. My name is Luca Caccini, and today I will present my paper on Bahar Noorizadeh's Weird Economies and post-media speculative politics. In this presentation I will show you the artwork directly, an extract of the artwork, and I will read some extracts from my paper that I find most relevant. This paper examines the role of speculation as a tactical strategy in the context of post-media activism, with a specific focus on the work of the artist Bahar Noorizadeh and her piece Teslaism: Economics at the End of the Future. The concept of speculation is situated within the broader discourse of tactical media, which emphasizes the subversive and interventionist potential of media practices. By embracing speculation as a mode of resistance and world-making, Noorizadeh's work exemplifies the transformative power of art in an age of media ubiquity and digital acceleration.
Ultimately, for Bahar, rather than fighting speculation, which from her perspective has become a primary mode of addressing issues from the left, we need to be able to think about how to socialize speculation, communalize speculation, how to risk together better. For those of you who are not familiar with the artist: Bahar Noorizadeh was born in Iran in 1988. She completed a doctoral degree in art at Goldsmiths, London, she has continued her practice as a researcher and artist, and she is the founder of Weird Economies. Francis Fukuyama, in The End of History and the Last Man (1992), theorized the definitive conclusion of any project of reshaping human society. Thirty years later, it is still difficult to imagine an alternative course of history, but at the same time we find ourselves at the impasse of not being able to imagine how this model can continue unchallenged. In the face of the fractures and crises that characterize our contemporary times, such as ecological collapse, migration crises, pandemics and wars that are accelerating on a global scale, the neoliberal system is mutating into an increasingly dysfunctional model of authoritarian necropolitics. Mark Fisher's 2009 book Capitalist Realism: Is There No Alternative? focused on the argument that art and imagination have been monopolized by capitalism since the rise of neoliberalism. Fisher's assessment of contemporary art's susceptibility to capitalist co-optation resonates with the long-standing criticism of the avant-garde movements of the 60s. The notion of capitalist realism was initially presented in May 1963 by a collective of West German artists and, separately, by the Japanese artist Akasegawa Genpei in his manifesto Theses on Capitalist Realism. The term draws a suggestive association with socialist realism, a government-endorsed artistic trend that emerged in the Soviet Union during the early 30s, which commonly portrayed idealized images of collective work and exalted the working class.
The capitalist realist approach portrayed the capitalist equivalent: commercials featuring new consumer products and images of satisfied customers. Parallel to the Western avant-garde critique of capitalist realism in the 60s, a number of Soviet technologists envisioned computers as machines that could enable automated communism, and cybernetics as a solution to the problems they were confronting in the centrally planned economy. After Scarcity (2018), by Bahar Noorizadeh, is an experimental film essay that investigates this history of the Soviet Union working towards the goal of a constructed and fully automated planned economy. It argues that socialism was an integral part of this technological evolution, and that computation does not necessarily entail the financialized autocracy of the society we currently live in. Noorizadeh raises important concerns about what redistributive historical justice might entail in modern society. Weird Economies, the co-authored and socially conscious platform she founded, aims to chart economic imaginaries that deviate dramatically from traditional financial structures. By utilizing a chrono-political approach that integrates past, present and future economic opportunities, the initiative functions as a journal, a programming space and a site of social experimentation. In her project Teslaism: Economics at the End of the Future, which you are seeing now, she provides a critical view of the possible outcomes of unregulated capitalism and economic expansion. Teslaism is a narrativized third-person racing and musical video game that follows Elon Musk and his autonomous vehicle, which is also referred to as his romantic partner and mentor, on their journey to a shareholder meeting, set against the backdrop of a futuristic Berlin transformed into a gamified landscape.
The narrative leverages the newly established Gigafactory in Berlin to explore the concept of Teslaism, a term coined to describe the evolution from post-Fordism to a new era characterized by enhanced production and consumption mechanisms. In the author's words: "The idea of Teslaism emerged out of a segment of my PhD work that was trying to demarcate financialization as an era distinct from what is commonly known as neoliberalism, which is a byword for this mode of production. Transformations in the automobile industry have historically provided a storytelling device for reading the ideological shifts in societies and their modes of production in parallel, and the car becomes somewhat of a narrator of this history. We don't need to think too far to notice that there is something different about Tesla compared to Toyota, which provides the prototypical image for the post-Fordist paradigm. In line with political philosophers like Michel Feher, I think sharpening this distinction between what neoliberalism purports to be and what its real-world manifestation as financialization is, is quite important. Feher suggested that, in theory, the original neoliberal figureheads were trying to summon the profit-seeking entrepreneurial subject, but on the path to the formation of this subjectivity they accidentally gave birth to the credit-seeking portfolio manager. The difference is that the latter plays in the arena of speculation, whereas the former is driven by rational incentives that lead to present gains." The game was specifically developed for the Tresor 31: Techno, Berlin and Great Freedom event. Noorizadeh employs satirical imagination and an exaggerated extrapolation of current capitalist trajectories in order to emphasize their unsustainable nature, and so to warn against the accumulation of excessive power among elites who are not accountable for their actions.
Her criticism focuses primarily on the potential real-world consequences of unchecked growth, the influence of corporations and the visions of elite technological leaders. Noorizadeh's political strategy proposes that it is possible to push the boundaries of our current economic system and stimulate innovative thought by employing artistic expression and the dissemination of speculative narratives. Through creativity and imagination, we are able to overcome the constraints of realism and come up with original solutions to the crises we are all experiencing collectively. We have the ability to inspire new perspectives and solutions that can shape our future if we challenge traditional norms and investigate alternative possibilities. By being open to speculation, we are able to imagine a world in which economic structures are reimagined and reworked for the purpose of improving society as a whole. Thank you.

Thank you so much, Luca, for this interesting talk. Do we have any questions from the audience? Ellen, you have a question. We have a question here in the second row. You get a microphone so everybody can hear you.

Thank you so much. So, why are you reading your paper over a soundtrack while a video is playing? What was your reason for doing that?

Because it's important to also listen to the soundtrack: it was developed for a techno club, which is an experimental space for showing art, and the soundtrack was made by a famous techno artist. It's part of the artwork, so I hope you were able to listen.

Wait, wait, wait, that's my question.
Why did you choose to read an academic paper while a club techno installation was showing? What was your reason for doing that?

Because that's the point. This whole artwork is questioning whether video games, or media in general, can be shown in different spaces and with different methodologies. So in my case I just wanted to do it in a different way and show the video as well, because I think it was, maybe not more interesting, but easier for you to understand the whole structure of the video game and the video. That's the reason, basically. And also, the soundtrack is part of the artwork and is very important for understanding it, because the context of the soundtrack is a techno club. It was shown in a techno club, so it's part of the whole discourse.

Okay, we have a second question here.

I just wanted to ask which club this is and where it is. Is it a physical one, or is it just a concept?

Tresor in Berlin, which I think is closed now.

Tresor, yeah, I've heard of it. Yeah, unfortunately. Okay. But it was 2018, so... Back then, yeah. Okay, very nice.

Hello, thanks for sharing the work, the thoughts, and the time. I really like the structure of video plus talk. I feel we're in the 21st century and we're constantly asked to concentrate on many things at the same time, so it was kind of refreshing. But my question is: is there a way we can access this game without it necessarily being in an installation?

That's another interesting point, because in another discussion I had about this game, we debated whether we can even consider it a game at all, because it lacks the element of interactivity. It is structured as a game: it has storytelling, it has the imagery of a video game, but it is still a sort of prototype. It's not playable at the moment. But you can access the video, and I definitely suggest you check the website. There are a lot of interesting projects there. And you can...
Sorry, what's the website?

Bahar Noorizadeh's own website, and the platform is Weird Economies. They develop many, many interesting projects there.

You can also access the proceedings of this conference, where Luca's paper is included. I think there are some links inside, or at least you can access it from there. I also have a question, Luca. How do you ensure that the players understand the message you're trying to convey? I'm putting it super simply, okay? When I play GTA, I also drive a car, I also control my player from a third-person perspective, and I guess there are also messages about capitalism: I see expensive cars and all this stuff. So what is different about this project specifically compared to a regular game, and how do you ensure the participants really understand what you're trying to convey?

I think in the context of art you can play with speculation, and you can use the video game medium as a structure to think about different realities and different structures. A game is particularly relevant as a medium in that sense because it has rules and structure that you always have to follow. So it's interesting to create this sort of alternative storytelling inside games that you can actually play with, and the building of the game itself is something that can provide a different methodology for developing different realities.

Thank you so much. Do we have other questions from the audience? In the back, no? No questions? So maybe a last one: what are your future plans with this project? Do you plan to develop it further? Because you said it's part of the PhD, right?
No, no, I'm talking about Bahar Noorizadeh; I'm not the author. She still works as a researcher, she has this platform called Weird Economies, and she is still developing very interesting projects. As I said, I suggest checking out their website to see what the future developments in her artistic career are.

Okay, great, so let's thank Luca once again. Thank you so much for being here and sharing your thoughts. Great, so now we have the second talk. I think this is the virtual one, is it? Let's see whether Amy will join us virtually; otherwise we continue with the third one instead of the second one. So, is Amy in San Diego? Yeah, she is. We don't know where she is, right? Okay. Relax, just relax. We're watching a video right now, right?

Hi, my name is Amy Alexander, and I'm sorry I can't be with you in person today. I'm going to talk a little bit about my art-slash-research project Deep Hysteria and some ways I think art can leverage algorithmic bias in AI systems constructively. So first, some history. For centuries, hysteria was a medical and mental diagnosis that assumed that females had an innate predisposition toward an anxious and nervous emotional state. And although that diagnosis has been retired, stereotypes of women as nervous, fearful, and uncertain continue to impact how women are perceived and treated. For one example, while more women than men are diagnosed with anxiety, a Google image search for the word anxiety will return a far disproportionate number of images of women, and these women tend to be depicted in stereotypical female hysteria poses. While the gender stereotyping around anxiety is obviously harmful to women, it's also harmful to men when they experience anxiety. Gender stereotypes around women's emotional states are also evident in the cultural expectation of smiling as women's default expression.
A neutral facial expression on a woman tends to be read as disgusted, distressed, or unhappy. In recent years, image analysis software services have made facial analysis algorithms widely available to the public. These services try to analyze various attributes of the image, like the face position or different qualities of the image, as well as demographic data of the person in the image, like age or gender presentation. But some of these systems, like Amazon Rekognition, also include a feature to classify faces according to the emotion expressed. Now, Amazon's documentation does have a disclaimer saying that Rekognition only determines the physical appearance of a person's face; quote, "it is not a determination of the person's internal emotional state and should not be used in such a way." But we don't know what portion of users read or pay attention to this warning and limit their usage of Rekognition accordingly. These emotion detection technologies have received their share of criticism, and researchers like the psychologist Lisa Feldman Barrett have found that not only can't AI reliably determine emotion just from looking at a picture of a face, but we humans are not good at this either. It turns out that people need situational context to understand what a particular facial movement means; otherwise, we tend to get it wrong. Barrett's team also found that there is some variation between cultures in how a given facial expression is interpreted. So even this idea of recognizing the physical appearance of emotion seems a little bit suspect. As you might expect, when deep learning models for facial analysis are trained with image data classified by humans, problems tend to creep in. These problems started receiving attention in the past few years with respect to classification algorithms, as the researcher Joy Buolamwini and her team found.
Buolamwini's team found that various commercial image recognition systems did a poorer job of classifying darker-skinned females than they did on either males or lighter-skinned females. If you look at the image on the left, you'll see a summary of Buolamwini's team's findings with respect to how these systems classify gender: they do a poorer job on darker-skinned females than on any of the other groups. On the right side of the slide is also Buolamwini's team's image: they submitted a picture of Oprah Winfrey to the Amazon Rekognition service for analysis, and it determined with 76% confidence that Oprah Winfrey was male, which is obviously incorrect. This was in 2018, when they did this work. So why do these things happen, first of all? Well, historically, many training sets have lacked diversity. It's also possible that darker-skinned and female faces might be technically more challenging for the algorithm, even given a balanced dataset. But knowing that, and by testing and verifying results, developers can augment training data and refine their algorithms to compensate for these problems, which in fact many companies have done in the past few years, often in response to the work of Buolamwini and other researchers in bringing these problems to light and publicizing them. But in the case of emotion detection algorithms, the thing being classified is subjective. Whether performed by human or machine, the identification of a person's emotional expression is inherently subjective. Unlike quantitative or demographic characteristics, it's not clear how developers can validate the performance of an algorithm that determines emotional expression. In other words, how do they know how well an emotion detection algorithm works? Since biases in emotion interpretation are so deeply embedded socially, a system that reflects them may go unnoticed: race- and gender-stereotyped results may simply appear to be correct.
Here's a test I did this year, in 2024, with the same image of Oprah Winfrey that Joy Buolamwini's team had used in 2018. I submitted it to Amazon Rekognition, along with an image of the American football player Travis Kelce on the field looking kind of tough. You'll notice that Amazon Rekognition is now doing much better in terms of gender classification: it's giving Oprah a female classification with 99.9% confidence, which almost seems suspiciously good. I don't know if that means they're training on images of celebrities, or of Oprah herself, but it's doing much better. However, with the emotion classification, it's showing that Oprah appears to be disgusted, with 98.4% confidence, when in fact she's really smiling slightly. So that raises questions of gender and racial stereotyping. Certainly there's a concern of gender disparity: you'll notice that in the Travis Kelce image, he's on the football field but appears to be calm. Now back to the Deep Hysteria project. Deep Hysteria is a still image series I created that repurposes algorithmic bias in the service of unraveling this deep human bias. It's both an art project and a research project. My process was that I retrained a generative deep learning model on images of YouTube vloggers. I then used this model to generate artificial people with relatively neutral-looking expressions, and created a series of variations of these fake people across the gender spectrum. I then submitted all of the gender-varied images to Amazon Rekognition to analyze for emotion as well as gender and estimated age, and I ignored any results that Amazon said it had less than 50% confidence in.
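[Editor's note: to make the analysis step described above concrete, here is a minimal sketch of such a pipeline using the boto3 client for Amazon Rekognition's DetectFaces API. It is an illustration, not the artist's actual code; the helper names and the exact response handling are assumptions, though the 50% confidence cutoff mirrors the talk.]

```python
CONFIDENCE_THRESHOLD = 50.0  # results below 50% confidence were ignored


def summarize_face(face):
    """Reduce one Rekognition FaceDetails entry to the attributes used in the
    project: gender, estimated age range, and emotion labels above threshold."""
    emotions = [e["Type"] for e in face["Emotions"]
                if e["Confidence"] >= CONFIDENCE_THRESHOLD]
    return {
        "gender": face["Gender"]["Value"],
        "age_range": (face["AgeRange"]["Low"], face["AgeRange"]["High"]),
        "emotions": emotions,
    }


def analyze_image(image_bytes):
    """Submit one image to Amazon Rekognition and summarize each detected face."""
    import boto3  # requires configured AWS credentials
    client = boto3.client("rekognition")
    response = client.detect_faces(
        Image={"Bytes": image_bytes},
        Attributes=["ALL"],  # "ALL" is needed to get Gender, AgeRange, Emotions
    )
    return [summarize_face(face) for face in response["FaceDetails"]]
```

For example, a face that Rekognition labels DISGUSTED at 98.4% and CALM at 1.2% confidence reduces to the single emotion label DISGUSTED under this cutoff.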
Although the Deep Hysteria faces are varied across the gender spectrum, and thus include non-binary faces, Rekognition only has male and female designations, not non-binary, so it labels all of the images either male or female. Regardless, the images that Rekognition identified as female were more likely to be labeled with stereotypically feminine emotions. Here we see a progression of a gender-varied face, with its Rekognition-labeled mood, gender, and age, and as you can see, the more feminine the face gets, the more "confused" it is labeled as. Now, while the predominant mood analyzed by Rekognition for both the male-identified and female-identified neutral-expression images was calm, a significantly greater number of the male-identified images received that designation. Female-identified images were more likely to be analyzed with emotions like fear, confused, sad, surprised, and disgusted, as well as indeterminate. So Deep Hysteria, the project, groups these calm-identified males beside their non-binary and female counterparts. I produced a handful of these image sets for exhibition, but I actually generated and analyzed a couple hundred in the research, and the paper goes into more of the specifics of the data distribution across the full dataset. Here's another image pair and their labelings. And here's a triplet with male, non-binary, and female images. So we understand that these algorithmic classifications are biased, and that that's a problem. But might they also be doing something useful? Well, they reveal the collective bias of the various people who classified the training images, and probably of a broad swath of society. But I'm hoping they also do that for the viewer of the artworks. If we're honest with ourselves, when we look at these images, do some of these labelings kind of look right to us?
Now, there's a tendency to approach AI bias with despair, as it encapsulates, and can amplify, really awful human biases. But I think it's essential to keep in mind that the biases originate with humans. So we can reimagine AI models as data visualizations not only of their training data, but also of human bias. And biased algorithms have the potential for positive social impact as well as negative: we can redeploy them in the service of revealing and interrogating deeply embedded social biases that we might not otherwise be able, or willing, to see. This can help us demand accountability from developers for the systems they design, just as Joy Buolamwini's group was able to do. But hopefully we as artists can also use these visualizations to shed light on our own and our viewers' implicit biases, and to help hold all of ourselves more accountable. More details are in the paper and on the website, and thanks very much for listening.

Amy, can you hear me?

Yes, hello.

Oh, great, we can hear you as well. Amazing. Around, let's say, 3,000 people are listening right now. No, I'm just kidding, not so many. But still, the room is really packed, and we have an amazing audience looking forward to your answers to the questions. I'm pretty sure we have a lot of questions, so let's start the Q&A, okay? You can hear me?

Yes.

Okay, great, and we can hear you as well, so you can just answer normally. We're hearing you pretty well. Do we have any questions from the audience? In the meantime, I have a question, Amy. Maybe it's a super simple question; no, actually it's not a simple question, but: what do we do about it? How can we solve this super, super complex problem? We know about the systemic bias, and, not as a justification, there are evolutionary explanations for it: if we see the pattern of a tiger's fur, then we should run, right?
So categorical thinking was super important for survival. Today it's maybe not as necessary anymore. So do you have any solutions, or can you propose any, for how we can mitigate this problem? Because AI is kind of a manifestation of systemic bias, right?

Right, yeah. My thinking with the project is that, yeah, AI is a manifestation, a data visualization of a collective "us" at some level. What I thought was interesting in making the images was that I realized that I agreed with some of these labelings, even though I can see that it's sexist, and I'm a woman. I would still have those perceptions, because I was socialized along with everybody else in my community. So the idea is to use these algorithmic biases in the software to help us understand, and reveal to us, our own biases that we're maybe not usually ready to admit. Because I think if you ask most people, or many people, "oh, do you have this gender bias, by any chance?", people would probably say, "oh no, no, I'm good." But you start to realize that you do. So I think that's my takeaway for myself, and what I hope people will think about from the project: by doing this awful thing, amplifying these algorithmic biases, these AI systems that we all kind of tend to hate can actually be doing something educational, and start us recognizing those biases. Because I think step one is to just even acknowledge it to ourselves.

It makes a lot of sense: from the subconscious to the conscious, basically, starting with this. Thank you so much. So, do we have any other questions? Oh yeah, we have a question in the back.

Hello. Thank you for this very important work. Two things. Will you be at the AIES conference, the ethics conference on bias in AI, in San Jose in October?

Did you understand the question, Amy?

Oh yeah, I heard the question. Yes. Not that I know of.
It could happen.

And the second question is, and I'm sorry, I came in a little late at the beginning of your presentation: for the dataset you created algorithmically to make these neutral faces, what metric do you have to know that it is in fact neutral, other than the process by which it was created?

Well, I'm as biased as anyone else, and I'm not saying the classifications that came from the machine are in any way accurate. But no, there's no metric other than my generating images that are subjectively neutral. The idea was to make the faces look as close to one another as possible, send the same image through to Amazon Rekognition, and see how it classified them. But no, I don't have anything quantifiable. It was actually quite difficult, because of the bias in the training sets, which the longer paper gets into: in the training sets, more women smile. So I had to manually, algorithmically tweak them, like adding eye opening or something like that to one side, to try to get the male and female faces visually as close as possible to each other. There isn't really supposed to be any quantifiable way of saying "these are neutral faces." People ask, why didn't you work from the Flickr-Faces-HQ dataset? There are different reasons for that, but one is that I didn't want people who were smiling for the camera, so I tried to generate people with somewhat neutral faces.

Thank you. We have another question; I think this is a follow-up to that question, in a way. I was wondering about this process of genderizing the images, and you started to talk about that, that you made some tweaks.
I wondered to what extent the tweaks were done by the software you were using, and if so, could that have brought some of the bias you're talking about into the way the differently gendered images were being created? Or was it more of a manual process of transformation?

Yeah, so, the model generates them, and I think there's an image in the slide deck of the nine images as they go across from male to female. If you just do that, they will automatically start smiling when they're female, because more women smile in their videos, right? We're socialized to smile, so when we make a video we're like this, and it seems to skew the whole training data. If you read the paper, I actually did a numerical analysis of that. This is also a reason why I couldn't just use some of the off-the-shelf things where you can say "make this face female": because if you take a male face and make it female, a lot of the time it starts to smile. So what I had to do was manually counteract that, and this was done with StyleGAN2-ADA. I had to actually find the vectors for smiles and open mouths and things, and start adding and subtracting until I could get the faces right. So it is in software, it's numerical, because all these faces are numerically represented, which is why I had to train my own model, so that I could have this numerical control. But then it was a process of eyeballing them to get them back to looking like they have the same basic facial expression. So you might disagree with me on some of them; that one's got a corner of the mouth a little turned up. But this is as close as I could get them, to make the male and female sides visually as close as possible. Because if you try to do it algorithmically, it won't work: the women will start smiling.
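[Editor's note: the "adding and subtracting" of smile and gender vectors described above is plain latent-space arithmetic. A minimal numpy sketch follows; the function name, the step sizes, and the direction dictionary are illustrative assumptions. Real StyleGAN2-ADA latent codes are high-dimensional w vectors, and the attribute directions have to be found for the specific trained model.]

```python
import numpy as np


def edit_latent(w, directions, amounts):
    """Move a latent code along named attribute directions.

    w          -- latent vector for one generated face
    directions -- dict of attribute direction vectors (e.g. "gender", "smile")
    amounts    -- dict of step sizes; positive moves along a direction,
                  negative moves against it
    """
    w_edited = np.asarray(w, dtype=float).copy()
    for name, amount in amounts.items():
        w_edited = w_edited + amount * np.asarray(directions[name], dtype=float)
    return w_edited


# Feminize a face while subtracting the smile that the biased training data
# would otherwise add along the way (step sizes purely illustrative):
# w_female = edit_latent(w, dirs, {"gender": +2.0, "smile": -1.5})
```

The point of the manual counteracting step is visible in the signed amounts: moving along the gender direction alone drags the face toward the dataset's "women smile" bias, so a compensating negative step along the smile direction is applied at the same time.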
I mean, it is all algorithmic, but if you just use the defaults, they will start smiling.

Thank you for the research and for pointing out the bias and everything; I have a question also related to this. I'm wondering if there isn't something inherently wrong in detecting emotion based on the face at all, which is also something you mentioned before. I assume all this labeling literally just comes from supervised training sets where some human says "this girl looks surprised" or "this woman or this man looks calm." So I'm wondering if there are alternatives, I don't know, something like pupil detection, or other kinds of data points that could get us a more accurate reading of human emotion without falling into the trap of labeling based on what we perceive a person to be, which is how I assume the labeling is happening. I was wondering if you have any thoughts on that.

Yes, thanks for the question. In the paper I talk about Kate Crawford's research and Lisa Feldman Barrett's research on what a problematic premise it obviously is to try to label faces and say "this represents this emotion." Even Amazon Rekognition, over the last years, has sort of backed away from pretending it can really do this, and now says it's really just about the appearance of emotion. We don't know how many people actually pay attention to that. And there are cultural differences. It's very problematic.
A lot of people have done research showing that it's a kind of fool's errand to try to label these two-dimensional representations as anything that could really represent internal emotion. But one of the things that interests me in this is the idea of trying to label these internal, subjective states. It's impossible: how do you validate that? Presumably this is a supervised dataset and people labeled these pictures, but how do you know whether it's working? You can't go up to a person you labeled as happy and find out if they were really happy, and whether their idea of happy means what you think it means. It's different from trying to label something like somebody's age, where there is an answer. So the impossibility of validation is one of the things I think is problematic about the entire enterprise of emotion detection. It's something very snake-oil, right? Why is this even a product in the 21st century that people would try to market? One of the things I would suggest is that we question why there should be such a product at all. That's your internal state; what else should we try to figure out, what you're thinking about, what your political leanings are? What are we doing? So I personally wouldn't even go for pupil detection or anything; I would say, don't try to do that. And it would probably be impossible even to tell if it's valid. Maybe there is something that will turn out to be quantifiable, and that will be quite frightening if we do find such a thing. But I think it's problematic to even believe that AI systems can classify based on subjective criteria. Thank you.

Okay, we have two questions left. Oh, even three.
Amy, a lot of questions here in the room. Okay, let's go. I hope it's okay, do you still have time? We have three minutes left, so let's do the last two questions.

I was just wondering: in your work you focus on the difference between male and female recognition of emotion. Did you find that, focusing just on males, there was also a disconnect between what is recognized and the emotion actually displayed?

I'm not sure I heard the first part of the question.

We basically saw the disparity between female and male emotion recognition. Do you have any thoughts about the disparity when focusing only on the male side, which we kind of took for granted as being right?

Oh yes, and that's a very good point, which I touch on in the paper. When I've presented this work to some of my students, they've pointed out that alongside the concern about women being over-diagnosed as nervous or even as having anxiety, males feel under-diagnosed with anxiety, and there are stereotypes around that too. And I thought, this is a really good point. The problem is that we have these stereotyped representations. I think I showed one of the Google Images results, going back to hundreds of years of "female hysteria," but that is also harmful to men. You're right: whether male or female, these readings are completely subjective, and they're misread. So in some ways, yes, I present it as though the male face is the control, but it isn't. It's wrong on both sides, and these stereotypes are harmful to both genders. Very good point, thank you.

Okay, we have a last question in the back. I don't know if I'm audible.
Okay, good. What do you know about the companies that are subcontracted to label all of these images? I assume it's not very well-paid labor; maybe it's outsourced across different countries. How much do we know about the demographics of the workers who label the images?

I don't think we know very much. Because the companies are very secretive, we suspect that these are supervised learning sets labeled by low-paid workers in different parts of the world. In some cases there are known emotion training sets, but we don't know what any given company is using. So I think we have our suspicions, but we don't know.

Okay, one last one. Do people still use Paul Ekman's research in any kind of emotion-classification methodology?

I would assume that maybe they do, but I'm not sure about that. I'm more focused on the problematics of the systems. But again, Kate Crawford has written extensively on the histories and problems of the way we classify, and Lisa Feldman Barrett as well on the problem of trying to contextualize emotion based on an image alone: we can't do it without context.

Gotcha, thank you. Okay, Amy, thanks once again for being with us virtually. It took a bit longer, but as you unfortunately cannot be here, this is what we wanted to show you: this is the audience.

Thank you so much for having me. I'm so sorry I couldn't be there in person. Thank you so much, everybody. Have a beautiful day.

Okay, thanks, you too. Bye-bye.

So, as I said, if you want to get in touch with Amy, she is listed on our website as well as in the proceedings, and her email address is provided there.
So feel free to get in contact with her. Okay, let's continue with the third talk of today's session, presented by Ellen Pearlman: Language Is Leaving Me. Let's welcome Ellen.

Okay, I have to give a trigger warning. There is going to be very disturbing material in this presentation, so I'm warning people now, and I'll repeat the warning closer to that part. Epigenetic trauma is something that is passed on through our DNA. It is well quantified at this point, and it is especially prevalent in cultures of diaspora, and I consider myself part of one. This is the only science slide I'm going to show, our DNA turning on and off, so it's really real; the science has been discovering what epigenetic trauma is. In my previous work I made Noor, a brainwave opera, and AIBO, an emotionally intelligent, artificially intelligent brainwave opera, and they were always about World War II. But I had no experience of World War II: I grew up in America, and I was born after the war. So why was I making operas about this? GANs, machine learning, and the seeds of AI cinema were emerging at the same time that I was asking myself these questions. Back in the day, which was 2021, so long ago, image-and-text models were something we all learned about; it didn't happen that long ago. If you look at the date on this, it's January 5, 2021, when CLIP was born. I'm not going to go into this very deeply, but here is an example from NightCafe, one of the first commercially available products. Here's the prompt, "The sun is shining, but it's raining outside," using plain CLIP. Then DALL-E broke through, and I don't know if everyone remembers; look at the date, it was around the same time. You couldn't get into it, but the Russians hacked it, so I used ruDALL-E immediately, and here is ruDALL-E with "The sun is shining, but it's raining outside." Remember those days, so many eons ago. Then OpenAI released GLIDE, which was the precursor to Stable Diffusion.
And here is GLIDE with "The sun is shining, but outside it's raining." So I was asking: where are these images coming from? Who is making the decisions? Can AI understand my epigenetic trauma? What is my epigenetic trauma? Where is AI going with this? These images are coming from specific data sets, and this is a data-set kind of conference, as we saw in the beginning. So what I tried to do is this: I took the word "queer," which in British English means strange or odd and has now been genderized and sexualized (this is straight from the dictionary), and I ran the query through five, supposedly six, different data sets, and everything looked the same. Ambiguous person with rainbow flag, bricks on a couch, a framed face. And I thought, wait a minute, this is millions of images; why does everything look the same? Then I threw "boy" and "girl" into the mix, and back in 2021 this is what you got. I thought, well, there's a little difference, but we've still got those rainbow flags floating around and all this other stuff. Why is this happening? So then I decided to see what happens if you run the terms "queer," "queer boy," and "queer girl" through Chinese, and it started getting really weird: liquor, naked bodies, and still the flags. Then I started running them through Hindi and Tamil, and it got even worse. It was either a piece of toast with pesto on it or weird words, but if you notice, one of them still sort of has the flag. So then I was working with some native Hindi and Tamil speakers, and I said, what if we throw in these words in the native language, in slang? And it really went off the rails. I thought, oh, this is so amazing, I'm so into this. What can I do with it?
So then I started looking at the data sets behind CLIP: aircraft, flowers, Hateful Memes, ImageNet, Birdsnap. And I thought, that's how it's figuring it out? That is really weird. This was 2021 to 2022. Then in March 2022, LAION-5B popped, from the LAION Foundation (not OpenAI, as I first assumed), and I thought, now we have five billion images to play with. Then I found out that this guy Christoph Schuhmann (I can't pronounce it; you can, since we're in Austria) made LAION-Aesthetics, and I thought, who is this Christoph, what is a LAION aesthetic, he's not an artist, how dare he. So I went into LAION-Aesthetics and found the LAION aesthetics predictor, and I thought, he's got to be out of his mind; this is ridiculous, right? So I really dove down into that: okay, what are they using, what numerics are they using? It was like being a secret undercover agent. What are these people doing? And if you look, the text strings are in different languages. Then there are CLIP aesthetic score predictors, so all these images come back with aesthetic scores, and I got really pissed off. Then, as I was thinking about epigenetic cultures in diaspora, this popped up on February 23, 2022: Mark Zuckerberg said, we're going to make something for all languages. And I thought, oh my god, now it's really happened; now we're in trouble. And there's the question from before of who is coding it. Kate Crawford knows this very well, the AI factories of who is coding it. So then I decided I had to really go into my epigenetic memories. I'm not Polish, but it was the closest I could get; I'm half Lithuanian and half Belarusian, and I didn't have time to go to Lithuania.
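For context, the aesthetics predictor Ellen mentions is, as publicly documented by LAION, essentially a small learned head on top of CLIP image embeddings that outputs a single "aesthetic" score. The following is only a toy sketch of that idea, with random stand-in weights rather than the real trained parameters; the embedding here is also fake, standing in for a real CLIP encoder output.

```python
import numpy as np

# Toy sketch of the idea behind a CLIP-based aesthetic score predictor:
# a learned head maps a CLIP image embedding to one scalar score.
# All weights and "embeddings" below are random stand-ins.

rng = np.random.default_rng(0)

EMBED_DIM = 768  # CLIP ViT-L/14 image embeddings are 768-dimensional

# Stand-in for a trained linear head: weight vector + bias.
w = rng.normal(size=EMBED_DIM)
b = 5.0

def aesthetic_score(clip_embedding: np.ndarray) -> float:
    """Score one CLIP image embedding (L2-normalized first)."""
    e = clip_embedding / np.linalg.norm(clip_embedding)
    return float(e @ w * 0.1 + b)  # squash roughly into a 1-10 band

# Fake "image embedding" in place of a real CLIP encoder output.
fake_embedding = rng.normal(size=EMBED_DIM)
print(round(aesthetic_score(fake_embedding), 2))
```

Because the embedding is normalized before scoring, the score depends only on the direction of the embedding, not its magnitude, which is one reason such a head can be trained cheaply on top of a frozen encoder.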
So then, in 2022, I did some initial tests of some of my epigenetic memories. This was back in the NightCafe days, and I thought, this is really intense, and I had a little story I started telling about my epigenetic memory. That didn't work out so well, so I decided I wanted to learn Google Colab and Amazon Web Services, the default tools, and I had to start dealing with all this. I was screaming, oh my God, I have to learn this. But I did. At the end of 2021, into 2022, I decided to make my first AI movie, which I hope I can play. I'm not getting this in the viewer; can someone come up, because I'm only getting preview mode. Oh, here, I see it. No, it's okay, I've got it. So I actually made this movie in Colab at the end of 2021: "and he breaks out those shiny things, those teeth." And I thought, oh my God, I made a movie, an AI movie. I'm going to pass out. Then all these styles started getting introduced, and a lot of the artists here know what I'm talking about: non-stop styles you could apply. So then I decided I was going to make another twelve-second movie, and you're going to see noise turned into an image here. This is how it was done in Colab and how you could move; this was again right at the beginning of 2022, and I desaturated it. I'm going to play it one more time so you can see the noise and the algorithm actually rendering the image. Okay. I was overjoyed; oh my God, I made a movie again. Then I started making a little movie about my epigenetic trauma, and I compared it here: "My friend and I were in Latvia, driving from Riga to Liepāja. About midway, we entered the quaint village of Aizpute, full of small, centuries-old wooden houses and an old stone castle of the Livonian Order, built by the Crusaders."
The first clip is part of the little movie I made, which you'll see more of, and the other is how I rendered it, and I was overjoyed: wow, this is really smoking. Then in 2023 I was awarded a Fulbright scholarship in the Department of Mathematics at the University of Warsaw. I would get me those mathematicians. We started making customized solutions. I decided I wanted to make this movie and interpret it in five languages: the original English, Yiddish, Chinese, Tamil, and Xhosa, in their original scripts, and see how the AI handled not only those languages but those image data banks. This is some of my crew. I was actually able to get this guy Tony out of Ukraine, because he was a really great coder; I wrote a letter to the Ministry of Youth, Sports and Culture and got him out, and he helped code the thing. And because I needed such a killer server, a friend in Hong Kong lent me theirs, and I could pipe in through a VPN to render everything. I created 57 scenes for my five-minute film, and I had to split them all into JPEGs, compare them, and then render them again as movies. It was pretty intense. Then, happily, Automatic1111 came out, for those of you working in this field. So here is an example. I used a lot of archival footage of this woman, and you see how one image is interpreted in five languages: Xhosa, Tamil, Chinese, and Yiddish.
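The split-and-reassemble pipeline Ellen describes (scenes split into numbered JPEG frames, frames processed, then re-rendered as movies at a chosen frame rate) could be sketched roughly like this. These are not her actual scripts: the file names are invented, the frame processing step is omitted, and actually running the commands assumes ffmpeg is installed; here they are only constructed and printed.

```python
# Sketch of a frame pipeline: split a scene into numbered JPEGs,
# process the frames elsewhere (img2img, not shown), then reassemble
# at a chosen frame rate -- the rate being one knob to tune against
# flicker. Commands are built as argument lists, not executed.

def split_cmd(scene_mp4: str, frames_pattern: str) -> list[str]:
    # e.g. frames_pattern = "scene01/frame_%05d.jpg"
    # -qscale:v 2 keeps JPEG quality high.
    return ["ffmpeg", "-i", scene_mp4, "-qscale:v", "2", frames_pattern]

def reassemble_cmd(frames_pattern: str, out_mp4: str, fps: int) -> list[str]:
    # -framerate before -i sets the input rate of the image sequence.
    return ["ffmpeg", "-framerate", str(fps), "-i", frames_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_mp4]

print(" ".join(split_cmd("scene01.mp4", "scene01/frame_%05d.jpg")))
print(" ".join(reassemble_cmd("scene01/frame_%05d.jpg", "scene01_out.mp4", 24)))
```

Running each list through `subprocess.run` would execute the two stages; keeping the frame rate consistent between split and reassembly is part of what makes the result watchable.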
It's the same image, an image-to-image comparison. This was the FFmpeg stage, where we could reassemble the frames with scripts Tony wrote, and I had to deal with a lot of flickering, which I'm not going to go into now. When we reassembled the frames they flickered; you can see a little of it, and I had to find the correct rate so they wouldn't drive me crazy as I watched them. Okay. Then I decided the soundtrack was going to be biometric, so we created EMG sensors from scratch, built the board, and sent it to Shenzhen. And this is how I wanted the performance to look. And then, and this is where it gets really gnarly, the piece premiered at the Copernicus Science Centre on October 7th, at the exact second the war in the Middle East broke out. So I'm going to give you the trigger warning now, and here is the piece, comparing images to images, with my narration in different languages: "My friend and I were in Latvia, driving from Riga to Liepāja. About midway we entered the quaint village of Aizpute, full of small, centuries-old wooden houses and an old stone castle of the Livonian Order, built by the Crusaders. My friend first took me to a yellow stucco-and-brick building, the Cultural Centre; she wanted me to see it. Then we went to a very large structure that was an artist residency, called the Serde residency. The house had backyard studios with a photo darkroom, ceramics and woodworking facilities, and other crafts. The building is so old it is listed as a UNESCO Intangible Cultural Heritage site. The head of the residency, my friend, and I all had coffee. I told my host I was an artist from New York City; that is all I told her. She looked distressed and said she had something very important to give me. We had never met before, and I could not imagine what she wanted to give me. She excused herself, got up, and left the room. Then she came back and handed me a small publication called Narratives About the Jews of Aizpute."
"She said she had worked on it for a year, interviewing people who were still alive and could give testimony before they died. She wanted to give it to me as a gift. I did not know what she was talking about. We finished our coffee, and my friend and I continued our journey. I put the book away and did not look at it for weeks. Then I read it. It was a collection of memories of people who lived before and during World War II and had knowledge of the Jewish community in Aizpute. Some of the recollections are personal, and some are snippets they heard from their parents and other adults who are no longer alive. Many of the people who tell these stories are still living, but they are old. So now we hear bang, bang. Our neighbor hit himself; he was the head of the town council. A woman had an infant pressed to her breast; she wouldn't let go of her baby. He ran up to her and shoved her with the butt of his rifle, shoved her into a pit with that baby. I saw, when they were digging them up years later, that one had some kind of little bundle cramped in the bones. The years went by; it was four years after the war. They started to dig all of them out. That day, Mother told me to graze the cows on that side. One of my cows smells that horrible smell. You see what they did? They put barbed wire around. I saw not just our neighbor there, but some of the other shooters. Two had whores next to them, drinking; bottles are scattered around; a naked fat woman is lying in the bushes right there at the old grave's edge. Right there they were digging them out. They pour chlorine over them; now the whole world stinks. One of my cows goes totally crazy. It was a hot day. The pit is opened and they start pulling them out, and the cows are bellowing and running, probably hopped up from the stench of chlorine. They pull them out in pieces. The skulls were already naked. There was also rot. The bones, they were horrible."
"And then, about the gold, the teeth, I have to tell it to the end. The cows are bellowing, and one is hanging over the barbed wire and has torn up her milk bags, and they're laughing. And what do I see? In the sun (I can still see one of those naked skulls before my eyes) the gold is gleaming. Gold teeth in their mouths. One of the men, right at the edge of the grave, he was eating, it doesn't bother him what he's doing, wipes his hand like this and grabs. Next to him are these pliers. And he goes over and crunch, crunch, crunch. 'Give me that bag,' he says to the other Latvians. And he breaks out those shiny things from that skull, those teeth."

So this is image-to-image recognition in different languages, just so you have a sense of what's going on: I didn't make these other images; this is what the AI saw. I also had an audience member wired up, so their facial expressions changed the piece, but I'm not going to play that part because of time. What has come of this: I've written a chapter in a book called Trauma-Informed Placemaking. This piece and others appear next week in New Dramaturgies of Contemporary Opera: Practitioners' Perspectives, from Routledge and the Harvard metaLAB. It will be a featured article in Leonardo 2024-2025. It is a finalist for the Lumen Prize, which will be announced in October. And right now I'm a visiting research scholar at the NYU Tandon School of Engineering and a Harvestworks New Works grantee, and I'm developing this into a full performative epigenetic biometric opera. Thank you.

Okay, Ellen, thank you so much for this talk. Are there any questions from the audience? We are running a little out of time, so we have time for at most one question. Oh, yes, over there. But you're still sticking around, right? So you can also ask questions afterwards offline, okay?

Thank you. Thank you for this amazing work you've done.
I wonder if you're aware of the translation processes at work when you prompt in a language other than English. Does it draw on data sets from that original language? When you prompt in Yiddish, I'm not sure there is so much data to draw from.

The answer is that first I use Google Translate, and then I purposely feed the script, in its original writing system, into Stable Diffusion, precisely because there isn't that much information and there aren't that many data sets, and I want to expose the data sets. I started with Chinese, because there is some information there, and you could see that Chinese pulled up mostly soft-core Chinese porn; you could see what it was pulling from. In the new opera, I'm going to try to use data sets from more underserved communities, to see how much more off the rails it gets. So yes, I'm fully aware of it.

Thank you. Let's thank Ellen once again for her talk. Thank you so much, Ellen. Okay, and now let's continue with the last talk, by Maria Pfeiffer and Peter Holzkorn. Great. You introduce the title, so I don't need to. Let's welcome them.

So, hello. Good afternoon. Everything okay? Everything set? Super.
So, welcome to the presentation of our paper, "23 Nanometers: A Case Study of Data Art Research as a Model for Practice in Data Art and Science." I'm here together with my colleague and co-author Peter Holzkorn, and we're also standing in for our co-authors Nicolas Naveau and Matthew Gardner. We all work as artists and researchers at the Ars Electronica Futurelab. In this talk we will introduce the concept of data art and science, discuss a research model we've developed to explore this field, and finally apply this model to analyze a specific work called 23 Nanometers; this is the case study. To begin, I want to give you some context on the framework under which this work was developed. The Data Art and Science project was initiated last year, in 2023, by the Ars Electronica Futurelab in cooperation with Toyota Coniq Alpha and Shiga University. It aims to explore how data art and data science can merge to create new forms of artistic expression and understanding. A central idea of this collaboration was also the development of a DAS (data art and science) curriculum, which emphasizes not only creating data-driven art but also integrating scientific principles and research methodologies. The project is still ongoing: if you have time, you can check out the 2024 iteration at Post City, at the Open Futurelab and in the Deep Space 8K. This year, the collaboration focuses on the region where Shiga University is located, a prefecture in Japan, and on establishing a center for data art and science there to help revitalize the region. Here you can see what we did last year. The project's first step involved commissioning five data art and science pieces for the Deep Space 8K. They were exhibited at last year's festival and are now also part of the regular Deep Space program.
These works include Harmony Lost by Arnold Deutschbauer together with the Futurelab, Akiko Nakayama's Isun, Mother Fluctuation by Akira Wakita, and One Who Suffers by Quadrature. All these artists worked with open data sets related to topics such as mental and physical health, climate change, and biodiversity. The fifth work, which we will present in depth today, is 23 Nanometers. While the other pieces worked with freely available data sets, this one is an artistic exploration of an ongoing research initiative that investigates how the exhaust of car combustion engines can be measured in a new, more individualized way; more on that in detail later. We also conducted artist interviews, because the idea was to commission artworks where we could really follow the process of how the artists work with the data, explore it, and find the interesting stories within these data sets. Now, before we look at the work itself, I want to explain a little about "data art and science" as a term, because it names not only this collaborative project framework but also the idea behind the project: to investigate how data art, art that really uses data as a material, and data science can intertwine to create powerful narratives that engage audiences on multiple levels. We use the term "data art and science" to describe this fusion, and it's important to situate it within the fields of data science, data visualization, and data art: disciplines that each have their own unique qualities and strengths.
For us, the important thing is not only the addition of different qualities, but their common potential to contextualize social, cultural, and political phenomena, to make these phenomena more understandable, and thereby to let them be perceived as changeable. There is always a call to action on social and political issues through data and through data art involved. So this is really a research field for us, and we see data art and science as a specific way to look at works of data art that enter into exchange with data science and data visualization, opening up the potential for real-life impact on the world. On the next slide you can see the data art research circle. This helped us to structure and follow the process by which data art and science pieces come into existence. We based it on the typical research process, wanting a model that can help us discuss and more deeply understand all kinds of different works: from having an idea and making an observation, where the artistic inquiry begins; through development and production, where the work is formed, refined, and brought into existence; to publication, which in the scientific world would be a paper and here is usually an exhibition; and then into discourse, context, and resonance with the audience, which can lead to new questions. We applied this framework to the piece we present today, 23 Nanometers. And with this, I hand over to you, Peter.

Thank you. 23 Nanometers is a four-minute data art video loop that has been shown in Deep Space 8K, as mentioned. For this we worked together with researchers from TU Graz, Graz University of Technology, who were running a project on remote emission sensing.
In the next few slides I'm going to use the steps of the circle just introduced to trace the creation process of this work and show how the model applies to a piece like this. First come observation, questions, and investigation. We connected with the researchers at TU Graz to find out what the data is about, how to interpret it, and what the purpose of their research is, starting with the origin and the meaning of the data. So what did they do? They had sensors set up in different cities across Europe, capturing vehicle emissions at the point of emission, in real-life traffic. That's unusual; normally this is done in a lab, and only for a limited time and place. They also got car registration data from police databases, automatic license-plate reading, anonymized, but it was still very surprising to us that this connection was made at all. They had a second kind of data set they call "plume chasing": a moving sensor on a vehicle that followed other vehicles. So we had two different data sets, over certain dates and times, to work with. Based on that, we formed a first understanding and, as mentioned, initial observations: the health effects of these emissions are not well understood; real-life traffic measurements give us different knowledge than lab measurements; knowing about such things can influence policy decisions; the automated license-plate connection raises privacy concerns just by existing; and there are further speculations, such as how one might automate systems that react to knowledge of such emissions.
From that we create the questions that would lead to the creation of a data art piece. What about this data is really new and unknown so far? What has the potential to change our behavior, to affect how we feel about things? And how can we imagine it, given that it's basically invisible; how can we fuel human imagination to make it visible and tangible? The second set of steps, summarized as investigation, envisioning, and experimentation, is really about getting to know the data. We got the raw data from the researchers at TU Graz and came to understand that there is a lot of inconsistency in it: different parameters measured in different data sets, gaps, different labels. We got familiar with it using data science and data visualization toolkits, gradually trying to understand the gaps in the data, the scopes and extents, and the different parameters. All kinds of things had been measured; we settled on nanoparticles between 300 nanometers and down to 23 nanometers in size, to have a key focus, something that guides the story, choosing the parameter that seemed the most interesting, the most newly discovered, and the most newly measurable. Once that was settled, we tried to understand patterns: discovering, for example, that certain brands have surprisingly high concentrations of these nanoparticles. It could be a glitch, it could be an outlier, it could be a real pattern; we were trying to find interesting stories and patterns in these data sets. If you can see it, the different car types are listed down there; you really get all this information from the data.
So you can see whether a given car is a Mini Cooper S Clubman or a Volkswagen Polo, and so on. The second kind of data that we got, the plume chasing, has a very different character. We immediately had geotags to work with, so we could plot the data right away and discover the geospatial structures inherent in it and how they could be translated into stories. In this case the researchers drove all over the Czech Republic with the sensor car, and time was converted to color, just for the sake of exploring the data. Continuing through these phases leads to the heart of the production, which we call experimentation and production; it is more of a loop within the creation process. We started in the Unity game engine, using generative methods and particle systems, mapping different parameters onto the agents controlling these systems, trying different mappings: what could work well, what could lead to interesting visual storytelling, where it crosses from data visualization into data art, how abstract or how concrete to be, and iterating on these things up to the final piece. I'm not going to show the whole thing, just a snippet of less than a minute from the second scene, where we take the plume-chasing data and turn it into a kind of sculptural landscape that grows and transforms as we approach it. You have to imagine this in Deep Space: you are in the room, immersed in this color, in stereo vision as well, so it becomes more like a mood you are immersed in. It gives you a hint of what it represents, but it doesn't explain it; that's where the degree of abstraction lands in this part. The final steps, summarizing the last four again: as mentioned, the piece is presented in Deep Space 8K, and it is also in the current program.
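The exploratory step described above, plotting the geotagged plume-chasing samples and converting time to color, could be sketched very roughly like this. All field names and values are invented for illustration; the real TU Graz data sets certainly differ.

```python
# Sketch: map each geotagged sample's timestamp to a color, so that
# plotting (lon, lat) points colored this way reveals the route over
# time. Data fields and values below are invented.

from dataclasses import dataclass

@dataclass
class Sample:
    lat: float
    lon: float
    t: float   # seconds since the start of the drive
    pn: float  # particle-number concentration (arbitrary units here)

def time_to_rgb(t: float, t_min: float, t_max: float) -> tuple[float, float, float]:
    """Map a timestamp linearly onto a blue-to-red gradient."""
    x = (t - t_min) / (t_max - t_min) if t_max > t_min else 0.0
    return (x, 0.0, 1.0 - x)  # early samples blue, late samples red

# Invented mini data set standing in for one drive.
samples = [
    Sample(50.08, 14.43, 0.0, 1.2e4),
    Sample(50.10, 14.50, 600.0, 8.9e4),
    Sample(50.13, 14.58, 1200.0, 3.1e4),
]

t_min = min(s.t for s in samples)
t_max = max(s.t for s in samples)
for s in samples:
    print(s.lon, s.lat, time_to_rgb(s.t, t_min, t_max))
```

Feeding the `(lon, lat)` pairs and RGB triples into any scatter-plot tool reproduces the kind of exploratory view described: the spatial route, with the drive's chronology encoded as a color gradient.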
What's interesting: we have been discussing with the guides, the info trainers at the museum, how the different pieces, not just this one but also the other commissioned artworks, land with the audience, and what is easy and hard to explain. It turns out that this one is relatively easy to explain, because it traces a story from more concrete to more abstract. You see these different scenes of the data representation, but at the beginning there is a short moment with a size comparison that draws you in, so you know what kind of scale we are working with. So it goes from an almost data-visualization approach to a more abstract data art approach. I want to briefly mention the technology and techniques used. One thing that was crucial in the creation process is the tools that have come out in the Unity engine, Shader Graph and VFX Graph, which help you iterate fast when working with data as a substance, as a sculptural material, because they are visual programming tools that integrate very well with the rest of the engine and allow you to prototype rapidly. The second technical point I want to raise is that we decided to deliver this as a rendered video because of the constraints we had at the time in getting a Unity project into the stereo-vision Deep Space format. But other than that, it is generative: it could be applied to new data, and it could be run and appear differently every time. It also inspired us to work on a layer of translation between modern Unity and Deep Space 8K stereo formats. With this, I'm going to return to this cycle.
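One common way to feed external data into Unity's VFX Graph, consistent with but not confirmed by the talk, is to bake normalized per-sample attributes into a raw float texture that the graph samples per particle. A minimal sketch of that baking step, with invented attribute names and values:

```python
import struct

def normalize(values):
    """Scale a list of measurements into [0, 1] for storage in a texture channel."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical per-sample attributes: particle size and concentration
sizes = [23, 80, 150, 300]
concentrations = [1.1, 8.5, 1.2, 2.0]

# Interleave as RGBA float pixels (blue/alpha unused), one pixel per data point.
# The resulting buffer can be loaded in Unity via Texture2D.LoadRawTextureData
# on an RGBAFloat texture and sampled inside a VFX Graph.
pixels = [struct.pack("4f", s, c, 0.0, 1.0)
          for s, c in zip(normalize(sizes), normalize(concentrations))]
raw = b"".join(pixels)
```

Because the piece is generative, re-running a baking step like this on new measurements would produce a new texture and therefore a new variant of the visuals, as mentioned above.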
I have highlighted these four steps here because, in a way, they are the heart of the process; we could almost regard them as a sub-loop, because they are the most iterative of those. But I don't know, do you want to add something? I just want to add that for us this was really a helpful framework to cover the whole journey that an art piece takes. Of course it should not be understood as separated steps, but more as loops that follow each other, and you do not need to have all the steps in an artwork. It was really about trying to grasp onto something, not from an aesthetic point of view and not content-wise, but to provide a framework that is rather neutral and can be helpful for looking at artworks that are super different. So that's what I wanted to say. Yes, that's it. I think you already mentioned at the beginning that we are continuing the project. Yes, the project is still going on. At the moment we have a work in progress called Memories for Futures. We went to Japan and did a workshop with residents there, older residents, who used Gaussian splatting technology to scan objects that they think are important, in a little village that is in danger of no longer having anyone living there. We are now building the work out of that, and there are also other projects that investigate the history of the fishermen there, plus some workshops and talks that you can experience in Postcity. Okay, that's it, thank you. Great, thank you so much for the talk. Now we have a bit of time left, so do we have any questions for the Q&A? Oh yeah, in the back, sorry. The sky is opening up here. It's good.
I'm really interested in this division between data visualization and data art. Of course you're highlighting the data art side here, but when you think about presenting this type of project, do you feel any compulsion or opportunity to work on both sides of that equation, to show people the underlying data and infrastructure as well as have this kind of affective, immersive experience like you described? Yes, absolutely. The five pieces that we introduced in the beginning ran the gamut from the completely subjective, almost just inspired by the data, to the more concrete, pushing data visualization to a point so immediately challenging that it's not really explaining things anymore, so we regard it as data art. The piece we looked at would be closer to a visualization, because it is easier to go along with the story and explain what this data represents. But still, without the explanation you are challenged to use your imagination, to interpret the world you suddenly find yourself in, and to try to figure it out and make sense of it. And I think you touch a point here that is really important. Most of the time it's not just the work as a standalone thing; it depends so much on the presentation, on all the things surrounding it, the meta-information so to say. Is there a guide that explains it to you? Did you maybe read a paper about it, or not? A work of art never stands alone. (Sorry, I don't want to be on the photo with the...) So it really is also about exploring these intersections and overlaps: where is it more visualization, where is it more art, where do they overlap, and what do you need to understand something and to evoke a response, not only in the audience but also in your collaborators.
Okay, great. That was professional: thinking about water bottles while answering a question. Amazing. Okay, let's go to... No, it's all about the pictures. All about the pictures, right. We have a second question here. Well, I'm interested in this idea of mapping, how you take one element of the data and map it to something else, and you showed how you were experimenting with the different forms. So could you talk a little bit about your thinking in regards to art and design principles, the principles of animation, time, and perception? How did those factor into the decision-making with the mapping? Yeah, in this case it was ultimately structured in three parts that would draw you in: first, oh, actually I'm being shown and explained something, but then it's not really explained. You go from the hair-size comparison, to the scene where you can still formulate a pretty good idea of what you're looking at, to the more abstract scene where you don't really know what you're looking at anymore, but you are now pretty deep in this visual world, you know that it's about particles, and you're trying to imagine what it could be. And this imagination, I think, is the core of the design principle for us. That makes total sense. And I just wanted to say that I think you said something very important: it's all about decision-making. Everybody here who has realized an artwork knows you have to make so many decisions, because there are so many ways to go. You make each decision based on a lot of things: concept, pragmatism, functionality, aesthetics, taste. Then you go to the next step, and it is these decisions that lead you to a final piece. This is really interesting, and maybe that relates to the first question as well; I think maybe that's where you cross the line.
In data visualization, you still use your taste and your aesthetics, but you're trying to explain something as clearly as possible, normally, even if you have an agenda. In data art, it's like an expressive piece of art where you make all the aesthetic decisions and you author it. But at the same time there's this other layer that people can connect to. It's not just personal experience; it's a layer of measurement of the world that is just not on the surface, one layer deeper. Okay. Do we have another question? If not, I would say thank you so much once again. Thank you. Thank you. Okay, this is it. Let's conclude the session. Now, I'm kind of, like, not satisfied, but not due to the quality of the presentations, don't get me wrong. This session was really about critically addressing AI, from the negative perspective. But guess what: the next session is about creative AI and how we can leverage it for doing good. So I hope you'll still stay here. You don't need to stay here; we have a 30-minute break, I guess. Maybe the agenda comes up in a second. So in 30 minutes we start the next session, about creative AI from the art paper track. Thank you so much for your attention, and have a nice day.