Hi everyone, my name is Sarah Groff Hennigh-Palermo, Sarah GHP, and I'm here today to talk about a sort of aesthetic resistance to the algorithmic changes that are being foisted upon us, which we've all discussed. A brief note: some of the video slides do have kind of flashy images, so if those are not good for you, this may not be the best talk. They're not super flashy, I just, you know, don't want anyone to get hurt. Okay, so this is in a lot of ways a follow-up to a talk that I did at a symposium organized a few years ago by Cervus, so I was really excited to come back. In that talk I discussed AI art as smooth art, and the way its aesthetic characteristics reflect the repressive nature of algorithmic logic: gross not just to look at but inside. And basically, in the years since I gave that talk, everything has just gotten worse, right? When I spoke before, there was no ChatGPT, there was no Stable Diffusion, there was no Copilot. OpenAI was not a household name, and normie journalists were not generating images to post as their article illustrations. The dangers of algorithmic art were less the concern of Hollywood and more just of normal new-media grad students. And very few people's parents knew about neural networks. But of course, since then, instances of AI have metastasized. Kids are using ChatGPT for their homework. Artists are using it to write their grants. Some people gave a chatbot my name, which I find very rude of them. And very few of the folks, outside maybe this conference, who are embracing this brave new future, or maybe even some people at this conference who think they can domesticate AI and make it their friend, are thinking really deeply about the ideology or, as we like to say in the 21st century, the logics that lurk beneath it: how long it's been under construction and what an unconscious embrace of it will bring.
And so, in an attempt to cheer myself up in the face of this ongoing temptation to doom, and to sort of up the stakes of my last talk, which was just to say, here's how this art is bad, I'm here to enlist all of you in my sort of aesthetic resistance, which is to say: messy, overwhelming, emotional. There's certainly no lack of reasoned objections to AI, right? We've seen so many of them. But I think there's a lot to be said for looking at it aesthetically, and for taking it on on a cultural front that's maybe a little less direct. Sometimes I do worry that focusing on the aesthetic is too roundabout in terms of change. But every time I get concerned, I'm reminded that ideas and the power to execute them are already woven together, so taking on the execution and making the idea implicit might not be the worst thing ever. And I came across a very interesting demonstration of that fact, I thought, when I was preparing for this talk, which is that the journalist Moira Weigel took a really deep dive into Alex Karp's PhD thesis. Alex Karp, we've talked about Musk and Peter Thiel and so on, but Alex Karp is, I don't know, the hipster Elon Musk. He's the founder of Palantir and, you know, a friend of noted techno-fascist Peter Thiel. But before any of that, he was actually a PhD student in Frankfurt in the 1990s, writing a thesis about aggression in Adorno and Freud, amid Germany's sort of history battles. And what really stands out in that work, for this discussion, is the way Karp's cultural-historical conception of unconscious truth expressed in aggressive speech acts matches this concept of a hidden identity and truths hidden in large-scale data, both of them yearning for release through this kind of analysis.
And in his latent yearning for latent yearning, I think Karp reveals his truest latent yearnings, which is to say: these are very deep ideological expressions as well as logical ones, and for understanding them and countering them, the cultural front is a useful one. You have to get people in their unconscious, right? Not to be too Freudian about it. And even more: if this algorithmic turn is a movement whose logic pulls together the textual, the historical, and the concept of unconscious drives, then maybe it's good for us to turn our eyes to the visual with a sort of contextual and conscious attention. I have some slides here to set the tone about why AI sucks. This is Kate Crawford's breakdown of a series of lenses on AI that I find useful; the last one is mine. I don't need to go into these super deeply, but right, there are the lenses of earth and labor, which are about the extractionism that we've talked about, but also, in terms of labor, the disciplining that is made possible with technology: as the resolutions of technologies become finer and finer, you move from the stopwatch that Taylor used to decide how people should move into, you know, ongoing computer surveillance. The workers under these systems are more and more disciplined, and that's a useful lens to look through. The next two lenses, data and classification, dig into the function of AI socially and technically. Data is hard to get, and the unending quest for it has led to inappropriate and decontextualized data sets. So you have everything from mug shots to word corpuses that ingrain biases from 1965. Pretty much every image classifier is based on ImageNet, which is based on WordNet, which is just a corpus that some dudes at Princeton put together in the 1980s, and that has kind of structured all the knowledge people believe computers possess now, which is bizarre.
And when this data is classified, it's ossified, which is to say removed from what makes real sense and pinned into false sense, then presented back to us as truth. I think Kate Crawford summarizes it really well when she says that this is fundamentally an act of world-making and containment. The affordances of the tool then become the horizon of truth. Affect is a cool lens that you should all read about; it's kind of a case study. But it ties the lower-level lenses she uses, data, classification, et cetera, into the last two, which is to say state power, right? AI is a product of the military and state power, a sort of military deployment into civilian life, which is why you can't domesticate it. And in parallel, private companies release these tools that help people falsify truth. So you create a world where there's all of this state power and no claim to truth with which to fight it, and that makes it a very difficult battle, a very difficult thing to oppose using logical means. And you also shouldn't forget that the art is just very ugly. All of those other things are reasons we shouldn't use it, and the ecological argument, in a sane world, would be sufficient on its own, but it's also really ugly, and I think maybe that's an even better argument, right? Because maybe this ugliness is AI's portrait of Dorian Gray, making manifest all of the corruption that lives inside it, where we can see it. So then the question becomes: can we develop a counter-aesthetic? What is the counter-aesthetic to that? And of course we can develop a counter-aesthetic. I'm going to get a sip of water. But I want you to think about, you know, these positive things while I do that. But then the question becomes, what does a counter-aesthetic look like?
So if the more conceptual harms of AI can be seen mostly in terms of, I would say, the violence of classification, which people often consider as context removal, then it can seem natural to want to solve this by putting context back. And that often leads back into doing more identity-driven or politics-first work. But I think that's a false temptation, because if you start confusing context and identity, you start reifying the very kinds of categories that AI uses. You validate the fundamental idea, and your arguments become arguments of degree and not of kind. So instead of going backwards: what does pushing forwards make easy? This is what I fumble with in my practice. I started asking, what does algorithmic art make easy? What does algorithmic art make possible? And what fits in between the two is a little place to start working. The answer I've mostly come to is that algorithmic art, AI and otherwise, makes ease easy and makes resolution easy, and together those characterize smooth art. But then what is the counterpoint to smooth art, right? It would be something that emphasizes unresolvability. And one way to do that is simply by visually refusing to resolve, right, embracing this sense of frustrated expectation. So in this excerpt, you can see a number of different ways in which this base image, which is generated using an SVG library I wrote, starts off very hard-edged and very distinct, but the sketch refuses to resolve it. It will not stay still, using various kinds of masking and positioning transformations, right, feedback modules, and you can't grasp it. That makes it unresolvable visually, but this video itself is also part of an unresolved body of work: sketches, investigations, improvisations, performances. It's something I struggled with at first, being like, is my output just sketches? But I think there's something really valuable in refusing to resolve.
And so when people see this and interact with it, I want them to feel some frustration and some longing, and to find a way toward a resolution that doesn't pin the work down but can be enjoyable and graspable in a slippery way, maybe. That's very different from the sort of trapped, smooth answers to conventional desire that you are usually presented with. So I want people to see and experience this sort of stuttering, because, and it sounds a little silly, I feel like there's a real beauty to watching a computer fail at what it's supposed to be good at. Being able to uncover that and enjoy it and go along with it, I think, is really valuable. You get these emotions of tension and resolution. And it's not technical awe, it's not desire; I'm not engaging with the audience as a user or a consumer, right? So the unresolvability, I think, is a very important part of the aesthetic pushback. That's an emotional answer, but we can also look at this from a totally technical perspective. Something that sits squarely between easy and possible is feedback, right? And feedback gives us a handle. I don't know if everyone remembers the creepy DeepDream squirrels from, it's almost 10 years ago now, which is terrifying, but they worked to give a handle on how that system operates. You see all the creepy eyeballs because the system was looking for eyeballs. And that is feedback. Feedback shows you the grain of the system itself. So there are other opportunities to bring feedback and a glitch aesthetic into the work, and that to me is probably one of the most important parts of the resistance aesthetic. This one uses a frame buffer emulator, and then I've rearranged the frames semi-randomly.
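To make the frame-buffer feedback idea concrete, here is a minimal sketch in Python with NumPy. This is my own illustration, not the emulator from the talk: the function name, shift amounts, and decay factor are all invented for demonstration. The core move is simply that the buffer's own past output gets displaced and mixed back in with the incoming frame.

```python
import numpy as np

def feedback(frame, steps=8, decay=0.9):
    """Minimal video-feedback sketch: feed the frame buffer back into
    itself with a small spatial shift and a decay factor, the way a
    camera pointed at its own monitor re-ingests its output."""
    buf = frame.astype(np.float32)
    for _ in range(steps):
        # Displace the buffer's previous contents slightly...
        shifted = np.roll(buf, shift=(2, 2), axis=(0, 1))
        # ...and blend the live input with its own displaced past.
        buf = frame * (1 - decay) + shifted * decay
    return np.clip(buf, 0, 255).astype(np.uint8)
```

Because each iteration re-ingests the displaced buffer, edges in the input smear into trails, which is the "grain of the system" becoming visible.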
But I also really love how the bright colors combine with the grunginess of feedback and uneven resolution to make a joyful dirtiness, right? Again, if the future is smooth and perfect and promising, then engaging with grunge and making it beautiful is something I find really useful. You can also just make it fun, right? This is some other frame buffer work using feedback to create destabilizing sensations and a sort of disintegrating fun. Feedback can also be a material in itself. This is an excerpt from a piece I made using the Fairlight CVI, a video synth from the 1980s. It's very interesting because it's, I think, the first video synth to combine the analog and the digital: you have analog input and a digital frame buffer, and it lets you composite these effects. So this piece is made by taking a synthesizer that is supposed to make 1980s music videos for you and pushing the sliders and everything else far beyond what they're supposed to do, until you have this feedback that's a material in and of itself, one you can work with in an improvisational sense. It's not, you know, the Game of Life. It's not a generative technique or a mathematical approximation. It's just taking the electricity and seeing what you can force it to make for you. Another tool for interesting glitch art that I've used alongside feedback techniques is a datamosher. This is based on a glitch technique that exploits video compression: a compressed digital video does not store every frame in full. There are keyframes that hold the whole image, and the subsequent frames carry only the pixels that change. Datamoshing is basically what happens if you delete the keyframes and keep only the changes. And this is interesting too because my tool is an emulator of that.
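As a rough sketch of the principle, not of the actual emulator used in the talk, datamoshing can be imitated in a few lines of Python with NumPy: take the frame-to-frame deltas from one clip (what predicted frames encode) and apply them to an unrelated base image, the way a decoder smears changes over stale pixels once the keyframe is gone. The names and the simple pixel-difference model here are my own illustrative choices; real codecs encode motion vectors and residuals, not raw subtractions.

```python
import numpy as np

def datamosh(base_frame, donor_frames):
    """Emulate datamoshing: apply a donor clip's frame-to-frame
    changes onto an unrelated base image, as a decoder does when
    the keyframe has been deleted."""
    canvas = base_frame.astype(np.int16)
    moshed = []
    for prev, cur in zip(donor_frames, donor_frames[1:]):
        # The "changes only" a delta frame would carry...
        delta = cur.astype(np.int16) - prev.astype(np.int16)
        # ...smeared onto whatever the decoder last drew.
        canvas = canvas + delta
        moshed.append(np.clip(canvas, 0, 255).astype(np.uint8))
    return moshed
```

The motion of the donor clip ghosts through the colors and textures of the base frame, which is the characteristic datamosh smear.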
So it's not the kind of glitch art where part of the work is understanding the machine itself, which is an interesting practice in its own right; instead you can take that aesthetic and continue working with the idea of destroying the machine without having to do it yourself, which is, I think, an opportunity to go further. Playing with feedback also reveals something about the nature of the digital versus a resistance aesthetic that I really like, and it actually calls back to something in Anna's piece. I think of it as the tension between the binary and the range, right? There is zero or one, and there is zero to one. The digital is almost entirely binary, and the range is much more analog: you have a signal that goes up and down and can therefore hold middle positions. And I think about this when I go back, right, that a data set comes into being when it's classified; segments have to be invented and applied, because that's how AI predictions work. Without classification, there are no categories. Without the binary, there's no computing. So you can start playing with resolution and binaries, creating this grain by once again breaking the computer, in this case by messing with resolution. And if we go back and think about the point of resolution in terms of labor and control, fucking with resolution and drawing out the feedback or the other visual indications of it is, I think, a way to undo the process of classification itself. You can bring continuity into aesthetic resistance work in a lot of ways. This is another example, again with the Fairlight, combining the analog and the digital, and I really enjoy the way the analog pass shows you how the digital has a harder edge and the analog a softer one. Here are some other sketches, including this one.
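The binary-versus-range tension described above fits in a few lines of code: a continuous signal holds every middle position until it is quantized, after which only invented segments remain. This is my own minimal Python sketch of that classifying move; the function name and level counts are illustrative, not anything from the talk's tools.

```python
import numpy as np

def quantize(signal, levels):
    """Collapse a continuous 0-to-1 range into a fixed set of bins:
    the classifying move, where segments are invented and applied."""
    bins = np.floor(np.asarray(signal, dtype=float) * levels)
    return np.clip(bins, 0, levels - 1) / (levels - 1)

# An analog-style ramp holds all the middle positions...
ramp = np.linspace(0.0, 1.0, 9)
# ...until quantizing to two levels leaves only zero or one: the binary.
print(quantize(ramp, 2))
```

Lowering `levels` is, in miniature, the act of messing with resolution: the fewer the bins, the more the range disappears into the categories imposed on it.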
You can actually see, as different rectangles come up, how they get moved from the analog over and drawn back into the digital, into the frame buffer in the back. I also do this, as you saw yesterday with Cable Knit Sweater: you can bring the analog in literally, through signals, through performing with people. You can also bring in the range, or continuity, through layers; that's more of a temporal continuity, right? This is another frame buffer sketch, and here's another, more deeply layered version. There are versions of all of these online, too, because, I mean, obviously, I think they're worth watching in their entirety, but these are different ways to look at that. And then finally, I think continuity can be brought into our practices through improvisational composition methods, whether in a performance or in the studio, right? Creating a video through continuous interaction with the computer is another way of looking at continuity. Doing it in performance gives us opportunities to be part of DIY spaces like this one, which means, and I probably don't have to tell people here, an opportunity to build communities that are outside of the machine and therefore outside of the surveillance. And one other thing: if you like the idea of doing this sort of AI resistance and you're not ready to start doing all the feedback just yet, my last call is to tell you that you can resist AI by writing memes. Dasha and I are going to be doing that tomorrow, so I thought I'd make a plug for it here. Until then, thank you so much, and get in touch, look at stuff.