I'm happy to introduce, to have here on the stage, Adio Dinika, who is, among other things, also part of DAIR, the Distributed AI Research Institute. The presentation that Adio is giving today is entitled Invisible Labor: Unveiling the Sub-Saharan Data Workers Powering AI. I think it's a very valuable contribution, and I'm really curious to hear what you have to present. This is a topic that, as I said before, we have been mentioning, data workers, click workers, in various program points of the festival, from the keynotes to the different exhibitions and the workshops. And you bring us a very interesting case study. Maybe there is also a chance later to get a couple of words on what DAIR is in general, how the institute works; I would be very curious to hear a few words on that too. So, thank you. The stage is yours. And have fun, dear audience.

Thank you. Thank you very much, Davide. So, where to begin? Okay, I believe I'm preaching to the choir here when I talk about artificial intelligence, right? At least that's my assumption, so I'll try to work with that assumption. Everyone talks about AI nowadays, the buzzword, you know, and everyone is familiar with everyone's favorite toy, ChatGPT; it's the new shiny toy in the AI playground. But when we talk about all the different types of AI that are there, when we hear people talking about AI becoming sentient, which I believe is rubbish, we all hear this talk about what AI can do for us and everything else. And I think yesterday there were many presentations talking about longtermism, cosmism, and other people who have all these wild, interesting ideas about what AI can do for us. One of my main criticisms of these people is what my colleague Timnit wrote about in one of her papers on the TESCREAL bundle. My problem with them is that by talking about longtermism, you know, about the future of mankind and future problems, we neglect the now. We forget about the problems that are happening now. And the focus of my presentation, which is part of a project that we're working on at DAIR, is to try and focus on the problems, on the issues that are in the AI industry in the here and now.

So when you see these faces and someone says AI, does that ring a bell in your head? Do you think there's any connection between these faces here and AI? Would anyone think, yeah, okay, AI, that makes sense, if you see these faces? Most likely not. But when we talk about AI, I think it is essential that we bring it down and stop talking about it as artificial intelligence. Because when we get really serious, is it really artificial? And is it really intelligent? I think "predictive algorithm" is a more accurate term than talking about it as artificial intelligence. And of course, I had to put myself in there for control, just to make sure that I was actually there, you know, that I didn't just download pictures from the internet. This is a picture from Nairobi in Kenya.

So, to really get going, I'll start by saying this: machines are dumb, right? Or dumm, depending on which language you want to use. So basically, machines are stupid.
So therefore, when we talk about artificial intelligence, we first have to talk about it in this light: machines are dumb. And since they are dumb, they need what? They need people. No matter how sophisticated a system gets, remember that at any given time it always needs a human babysitter. A very good example is Amazon. They created this amazing AI recruitment tool. To those who know, this tool turned out to be amazingly sexist: it was not hiring any women whatsoever, it was only hiring men. And we've already seen downstairs in the exhibition how many different types of AI tools are being developed, and all of these have bias. So they always need a human being somewhere there to kind of fix things.

But what do I really mean when I say machines are dumb? And where do human beings come into the picture? We all know that for an AI system to be developed, to put it in very simple terms, people mainly talk about two groups of people. There are the engineers, right, the coders, the geeks from Silicon Valley. Everyone thinks about them; when you think of AI, you think of these people. We've seen the mavericks, Elon Musk, et cetera, et cetera, and all those other popular people. So okay, these guys write code, these guys provide the money. But we all know that for any AI system to work, it needs data. And number one, where does the data come from? Everyone right now, as we are here, we are creating data, right? But this unstructured data needs someone who cleans it up, someone who organizes it. And as I said earlier, machines are what? Dumb. If you just take all the data in this room and give it to a machine, it has no clue what to do with it. So we need human beings. And we have different platforms where this labor is sourced. I will not read this, because I am very confident in your ability to read, so I won't do it for you.

To get a bit technical, not too technical: this amazing graph here, by Tubaro, Coville, and Casilli, basically takes us through the whole process. We have data generation: as we are doing here, we're generating data, right? Someone has to clean that data. Then the annotation process comes and arranges it; this data is in different forms, audio, visual, textual, et cetera. Then the modeling happens, and then we have output. 90% may be accurate, 10% may be wrong. But remember, that 10% inaccurate is the distance between me being placed at the scene of a crime and me being left to mind my own business. So whenever we talk about AI saying, yeah, okay, it's very intelligent, it's almost good, right? That "almost" means someone being discriminated against, being harmed.

So, moving on with the whole idea of machines being dumb, let me see if I can... I'm not sure how many of us here are really familiar with what goes on behind the scenes of different AI tools. So I will play this very short clip, which tries to explain what really goes on. A machine does not know that this is a car, or whether this car is parked correctly. Someone somewhere has to go and train the machine, has to feed this into the machine, and this work is done by human beings in different parts of the world.
Eight hours a day they are doing this, drawing these little boxes around cars, because a machine on its own has no idea that this is a car, where this car stops, where this car goes. So someone somewhere has to do this work, has to train this system, and this work is not automated. This is an actual person with a mouse, drawing this, every single day. Now imagine if this was your job, drawing these things every single day.

Because, as human beings, we are able to learn. When I hear people saying that machines can learn the same way as human beings can, I'm left with a feeling I can't really describe, whether it's rage or disappointment, because I'm like: no. As a human being, I am able to learn in a very different, very organic way. If any of you have ever raised a child or seen a child grow up, you know how humans learn. And machines don't learn that way.

So basically, this is how data annotation in autonomous vehicle training works. There are people who have to be involved in this. I'm not sure how many of us are familiar with the case where a self-driving car ended up hitting a person because the person was pushing a trolley, and in its training data the car had no idea what a trolley was. But there is no human being who will see a person pushing a trolley and run into them because they've never seen a trolley, right? A human being will say, okay, I don't know what they are pushing, but I should not hit them. But a machine will say: well, in my training data, I have no idea what that is. I know how to avoid a human being, but that thing, I have no idea. So machines are dumb. And here, for facial recognition, there are human beings who need to be plotting points, teaching a machine that these are eyes, this is a chin, this is everything that we have. So when we talk about AI systems, we always have to think about these processes. These are the processes that go on behind the scenes for us to get wherever we are.

And my next point is: so, machines are dumb, stupid; human beings are sick. Now, I'm not talking about malaria, headaches, stomach aches, or anything like that. I mean human beings are, I don't know if it's "sickos" or "psychos," whichever term you prefer to use. Whatever depravity you can think of, whatever terrible thing you can think of, there's a human being somewhere in this world who's taking videos of it, who's writing text about it, and uploading this nonsense somewhere on the internet. Many of us, when we go on Twitter, or X, I still prefer calling it Twitter, see something like that thing on the far right there saying "sensitive content." And we then have the option to click and see what that sensitive stuff is, or to avoid it totally. But for it to be labeled sensitive content, someone somewhere has had to see this content before you and I did, to label it as sensitive. So who are these people?

When we talk about these systems, many times we think, okay, it's an algorithm deciding this. But what if the content is in Farsi and the algorithm is trained in English? Is it going to pick this up in Farsi? Is it going to know that this is harmful content in Farsi? Definitely not. So what do we need? A Farsi-speaking person. Some of the workers that I spoke to in Kenya were refugees from the genocide in Tigray.
And they were hired specifically because they speak Amharic and Tigrinya. Large companies who train AI systems, companies like Facebook, for example, needed Amharic- and Tigrinya-speaking people to moderate content about the ongoing war. Now imagine you're a refugee running away from a war. There's a blockade; you have no idea, like colleagues here were saying about having no connection to what's happening in Iran, what is happening back home. Now you are hired to moderate content on that war, and you start seeing your family pop up. One person in particular was moderating content on Facebook and saw, in a group, people plotting to kill his father. And he alerted those above him to say: hey, this is my father they're plotting to kill here, shut down this group. But of course, you know, these things take time, as they say, which I believe is nonsense. And what happened is, the next time he was moderating content, a picture popped up of a dead body. Need I explain whose body it was? And it was him who saw this picture first.

So when I asked, when you see these faces, do you think of AI: these are not faces of friends I took and said, ah yeah, pose for a photo. These are actual people who are involved behind the scenes in AI.

The question, of course, is where all of this leaves us. Machines are dumb, and some of our fellow human beings somewhere else are out of their minds, you know, and not in a good way. So we need people who train these machines, and we need people who regulate the excesses of some of our fellow beings. And where does this labor come from? Of course: sub-Saharan Africa. Why Africa? Because, I mean, the "dark continent," you know, so, well, whatever happens there, it happens, right? And unfortunately, this is what happens with most of these companies. They go and operate in sub-Saharan African countries because these are places where they can get away with murder. If they open an office in Linz and hire people and don't offer them any form of psychological counseling for doing this job, the authorities will be on them like a ton of bricks, right? But we are looking at a continent like Africa. Of course, I'm the only African in this room, so I can speak of Africa as a country; if any of you do it, I will scream racism. But I can, right? I can say, yeah, in Africa, even though I'm talking of several countries here. At least I hope no one will ask me if I know Colossae from some African country. I don't.

Anyway, because Africans need employment, these companies often come to us and say, of course, we are creating employment in Africa. And of course they are. But what kind of employment is this? What you are seeing here is a timesheet from someone who was doing data annotation for a company called Sama, an outsourcing company, a Californian company, but operating in Kenya. This is not something that I made; I know people can be creative with, what is it called, CorelDRAW, or these new tools, Stable Diffusion, I mean, I don't know. But this is an actual log sheet from someone who was working this job. And as you can see, this person was working for how many hours? 11 hours, 35 minutes. That's one shift. Now imagine you are doing that clicking, what we were seeing over there, drawing little boxes, for 11 straight hours. Or you're a content moderator, seeing this content every single day. So when I say that humans are sick, and I say imagine the most depraved thing you can think of: someone is doing it.
So, one of the workers I spoke with in Kenya. Imagine, as a researcher, and I'm a social scientist, right, sitting across from someone in his mid-20s who says to you: I am no longer sexually active. My wife left me because of the stuff that I saw doing this job. And I got fired by the company because I could no longer work. Because, and I'm using his exact words, he said to me: imagine seeing a video of someone raping a child and seeing the child's pussy get torn. These are his words, not mine. And you are seeing this. So every time he goes home and tries to get intimate with his wife, you know what comes into his head.

And for this, how much was he getting paid? 80 cents an hour. This is how much they get paid. There was a TIME article which said they were getting paid less than $2 an hour. I went there and I spoke to the people, I saw their contracts, and we are talking of 80 cents an hour here. And the length of the contracts: the other contract over there runs from April 1, 2024 to April 5. So it's a four-day contract. When those four days are over: okay, we'll tell you whether we'll call you or not, sorry. So it's not like a contract where you know what happens after the contract. No. They'll tell you.

So, when I began, I asked a question: when you see these faces, do you think of AI? Do these faces make you think of AI when you see them? Or do you only think of the code, only think of the amazing things you can do with AI? This is the invisible labor that is involved in the AI systems we have today. Without people like these, we would not have AI systems, because machines are stupid. They need people to train them. They need people to regulate what is homophobic, what is racist, what is transphobic, what is antisemitic, what is Islamophobic. On their own, machines cannot tell this stuff. They need people. And these people get to see the worst of the worst before all of us do. And they get paid 80 cents an hour, and get contracts where the maximum is sometimes six months; after that, oopsie, it happens. And what these big companies do, these big tech bros who speak of longtermism, who speak of conquering the planet, who speak of human beings living to 100 years, who speak of AI being sentient, it is all on the backs of these workers.

So, I know for a fact many of us have clicked CAPTCHAs, have identified stairs, have identified all boxes containing bicycles. What are we doing? A colleague of mine from DAIR, Adrienne Williams, has an article out about people like you and me who do this work. And yeah, it's work: you are not just clicking something to identify yourself as a human being, no. You are training someone's system somewhere. You are actually actively involved in machine learning, but you are doing it for free. So she calls us zombie workers, because we are just training these systems for free. Every time you click a CAPTCHA or reCAPTCHA, no, you are not just proving your humanity. You are training some system somewhere, and you're doing it for free. Every time we do the different things that we do, we are providing free labor to AI. So what can we do about it? Should we just accept it, or should we take a stand?

So, on the 20th of June, please keep this date. You can go on our website, on the events page.
That is where we officially launch the project I've been presenting today. The part I was presenting today was basically focused on sub-Saharan Africa, but our project is much bigger than that. It involves workers from different parts of the world: we have workers from Syria, from Lebanon, from Venezuela, from Brazil, and, surprise, surprise, from Germany. So this exploitation I'm talking about is happening globally. And on the 20th of June, we are going to officially launch a project where we have engaged more than 15 workers from different parts of the world who have been doing this work. So, having said all of this, and hopefully not having traumatized you, or maybe hopefully having done exactly that, I say thank you very much for your attention. Thank you.

Thank you so much, Adio, for the presentation. And as I said before, we prepared this question, so it's totally fake, no? Can you give us a couple of hints already, additionally, on DAIR and how the project works? Because I think the question that we all have, and I speak for everybody in the room, is: okay, what can we do? Or can we do something on the 20th of June? Tell us everything.

Okay, maybe. Okay, good. Can I get the screen? Yes, thank you very much. So, I am a researcher at this organization, DAIR, the Distributed AI Research Institute, where our main focus is to try and mitigate as much as possible the harms of AI, and also to try as much as possible to encourage the development of ethical AI. So we're not only shouting "stop it"; we're also trying to say what can we do, what can be done. The project is called the Data Workers' Inquiry, and it came about because we wanted to understand. Many times, when we are based here in the West, right, or in the global North, we purport to speak for people who are in the global South. But one of the things we focus on at DAIR is the co-creation of knowledge with communities. So we decided that in this project we don't want to speak for workers; rather, we want them to speak for themselves. We are simply providing them a platform where they can speak. The project therefore involved recruiting co-researchers; we call them co-researchers because they are actually compensated for their time. We are not helicopter researchers who jet in to Kenya, take a few nice pictures like the one I had, and leave. Of course, I took pictures. But beyond that, we also empower these local communities in creating their own stories.

So on the 20th of June, we're going to have these workers. And mind you, these workers are not what you typically picture, because what many of these companies purport to show us is that these workers are uneducated, that this is simple work. I mean, come on, who can fail to click around a box? But the workers I'm talking about, from different parts of the world, are people who have degrees. For example, the workers from Syria and from Lebanon have master's degrees. But some of them are refugees from Syria living in Lebanon; they're not allowed to work any other job, and this is the only work left for them. And these companies know it. And for this work, they get to annotate, to label, 10,000 pictures and get paid 140 euros. 10,000 pictures, clicking. You can imagine how many hours that's going to take you.
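To put rough numbers on that, here is a back-of-the-envelope sketch; the per-image timing is an illustrative assumption of mine, not a figure from the talk:

```python
# Rough arithmetic for the rate described above: 10,000 images for 140 euros.
# Assumption (illustrative only): ~10 seconds per image, which is fast for
# careful labeling; slower, more realistic paces push the hourly rate lower.
images = 10_000
pay_eur = 140
seconds_per_image = 10

hours = images * seconds_per_image / 3600
print(f"{100 * pay_eur / images:.1f} cents per image")  # 1.4 cents per image
print(f"{hours:.0f} hours of clicking")                 # ~28 hours
print(f"{pay_eur / hours:.2f} euros per hour")          # ~5 euros/hour at best
```

At a more realistic 30 seconds per image, that is over 80 hours of work, and well under 2 euros an hour.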
And sometimes a client may decide, in the middle of the project, to say: well, I'm no longer interested in the project. And that's it. You have maybe clicked 5,000 pictures; yeah, okay, sorry, wait for the next project. So this is what we decided to do in this project: let these workers speak for themselves and explain what's going on. And they're doing this in very different ways. Some of them are artists, so they're drawing what they went through; some are creating animations; some have video documentaries; some have podcasts in different languages. Mainly we try to keep it in English, but we also have a podcast in German, and we also have some writings in Arabic, because the workers are from different parts of the world. So that is what the project is all about. On the 20th of June, that's when we officially launch the project. For those of you who would like to join us, it will be fully online, and if you go on the DAIR website, on our events page, you'll get all the information that you need about this particular project.

But zooming back out to DAIR more broadly: we are an interdisciplinary team made up of researchers as well as workers. Not all of us are academics; some of us are actual workers. For example, we have Krystal, who is the lead of Turkopticon, an organization which stands up for the workers who work on Amazon Mechanical Turk. We also have people like Asmelash, who is working on developing a natural language processing tool, a translation tool, which, according to independent reviewers, and I'm not saying this because he's my colleague, but based on independent reviewers, his translation tool for Tigrinya and Amharic, Ethiopian languages, is way better than Google Translate. So we have different workers. Adrienne, a former Amazon delivery driver, is also part of our team, and she leads a research project that looks at the theft of wages perpetrated by Amazon. And our founding director is Timnit Gebru; I think some of you might be familiar with her work. Alex Hanna is also part of that team.

So we have different focuses. For example, one of the projects is focusing on ImageNet, trying to audit the datasets of ImageNet: how were they created, who created them. This is basically what we do at DAIR, and I think our website has a lot of information about our research philosophy and how you can also join us on this path. Because, unfortunately, going against big tech is often a very dangerous thing to do; you get cancelled at every turn. If you come out there and say Nick Bostrom should be shut down because he speaks nonsense, you will definitely not get many fans. So we were very happy when he was shut down; we were like, yes, we've been saying this for years. So that's all about the Distributed AI Research Institute, and I'll be more than happy to answer any and all questions.

We have one here. We start here. As you said, there's a lot of risk. I wanted to ask you: in your group, has there been anyone who actually faced consequences? For example, after driving for Amazon and then having this information and sharing it, would the consequence be them getting fired from Amazon?
So, from our team, like I said, we are very diverse workers. Of course, while Adrienne is no longer driving for Amazon, I'm very sure that if she were to apply again, with her name or not, she would not be among the most eligible candidates. But we also have different people there. For example, Meron Estefanos, whose work involves AI as it deals with refugees, because she's a champion for refugees and migrants. She is personally responsible for more than 100 rescues of refugees who had been kidnapped and held for ransom. And she has a 30-year jail sentence waiting for her in Eritrea; if she ever sets foot in Eritrea, they will send her to jail for 30 years. So this is one of the people we have on our team. A bunch of our team members often find it very difficult to operate in normal spaces, because these companies sometimes send people after you. And employment-wise: Timnit, of course, was fired from Google before she founded DAIR, and she's not getting employed anywhere, anyway.

Yeah, that's also a thing we have in our group. For example, at the beginning, we were really thinking about whether we should be honest with our names, with what we're going to do. But somehow, it's still the internet, so we were connected anyway, and this had the result that we are not able to go back to Iran. So I can feel it, for your colleagues, probably. And there was also the question: for the people who have to do this work, is the payment also dependent on, for example, having to click 10,000 pictures? And if they don't click the 10,000 pictures, do they get a decrease or something in the money?

So, what many of these companies do... okay, of course, first, I totally sympathize and empathize with you, because through working with my colleagues I've come to understand these situations where you can't go back to your country for standing up for what's right. And our stance as an organization has been very clear on different events happening, for example, what's happening in Gaza. Our stance is very clear, and we have not gained many fans for our very clear stance. So I totally empathize with you, because unfortunately we live in a world where, you know, things happen. But to answer your... sorry, can you repeat your question?

Sorry, the question was: for example, they work 11 and a half hours, and they have to click 10,000 pictures. Does it have consequences if they are maybe only able to click half of it, like 5,000 pictures? Are they getting fired, or do they get less money? Do you have any insight into this?

Yes, definitely. So, the worker who gave me the 11-hour timesheet only got paid for eight of those 11 hours, because eight hours was on the contract. You get paid for what's on the contract, which is eight hours. How do they get to 11? You get to 11 because the company comes to you and says: hey, by the way, the client says you need to deliver the work by this date. So if you work for eight hours, you won't deliver the work; if you don't deliver the work, the client will be angry and cancel the contract. So you guys have to work and finish the work. So you work 11 hours. And then, when it comes time to get paid, they're like: well, your contract says eight hours, so we pay you what's on your contract.
And then, if you click 5,000 pictures when you are paid per 10,000 pictures: sometimes, of course, you're lucky and you get paid for what you have done, but most times you only get paid at the end of the project. So if you click 7,000 and they said 10,000, you're not done, so you don't get paid. This is what often happens. And this is deliberate, by the way. They know that you get tired, you get exhausted, you get mentally anguished, and they're like: okay, cool, quit. We'll get someone else.

Before I became a researcher, I also worked on one of these platforms, Fiverr. I'm a writer, so I was writing for different people on these platforms. And I had cases where I finished writing a piece of work, I submitted it, and the client said, okay, do a revision. And then, when I went back to submit the revision, the client was gone. And that's it. How do you track them? I don't get paid, and I move on to the next client. This is something that happens often. I am lucky in the sense that I'm an academic, so I pursued academia and I could get out. But there are people who cannot get out.

Sorry, another question. There is a researcher at MIT, I hope I say the name right, Joy Buolamwini, and she's the author of Unmasking AI. I thought it was very interesting, because her research is based on the fact that AIs can often only detect white people and white faces, because mostly the coders are white men, who don't feed the machines information about female persons, or what they look like, or people that are not white. And I just wanted to say, maybe for anybody who's interested, Unmasking AI is a very interesting book, and there is also a documentary about it.

Yes, definitely. Joy's book is amazing. And yes, there are different cases of that happening. For example, in the US, in the last three months, and I think I spoke about this yesterday when I asked a question, for those who were present: there was a lady who was eight months pregnant. She was identified by facial recognition software as being at the scene of a crime. The guy who reported the crime had had sex with this woman, and then, supposedly, she had carjacked him. So she was arrested, because facial recognition said she was the person. And only 11 hours later, in a jail cell, did they realize: no, she was the wrong person. That is what I mean when I say that machines are stupid. And as human beings, the moment we give our agency to machines, we also become stupid. Because in this instance, all that was needed was to say: excuse me, sir, the woman who robbed you, was she eight months pregnant? The guy would have said no, and this poor lady would never have been arrested.

Another case: another guy, I think it was in Texas, was also arrested, a Black guy as well, because facial recognition software had placed him at the scene of a jewelry burglary. And at that particular time, the guy was serving time in jail in a different state. So he was told, oh yeah, you robbed a jewelry store, and he was in prison during that time. So yes, many of these systems are developed mainly for middle-aged white men, for white people, and therefore they have trouble identifying people of darker skin tones. Which means... I think the exhibition downstairs talks about these different biases, so I don't need to get into detail on them. But this exists.
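As an aside on the mechanics: one reason these misidentifications happen is that one-to-many face search simply returns whoever in the database looks closest to the probe image. A minimal sketch, my own illustration and not any vendor's actual system:

```python
import numpy as np

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6):
    """Toy one-to-many face identification over precomputed embeddings."""
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        # Cosine similarity between the probe face and each enrolled face.
        score = float(probe @ emb / (np.linalg.norm(probe) * np.linalg.norm(emb)))
        if score > best_score:
            best_name, best_score = name, score
    # If the real culprit was never enrolled, the nearest-looking stranger
    # "wins" whenever they clear the threshold. Nothing here asks the
    # common-sense questions a human would (was the robber pregnant?).
    return (best_name, best_score) if best_score >= threshold else (None, best_score)
```

And if the underlying model represents darker-skinned faces less distinctly, as the audits mentioned above found, false matches above the threshold become more likely for exactly the people in these stories.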
So, to be honest, as a Black person, whenever I see video cameras, I get very agitated. Because that might just mean I might have to answer for a crime I have no idea about.

I wanted to add that in the keynote from Selena Savic on the first night, there was also a very nice perspective on the fact that not only AI, but even photography, also film, performed very poorly in representing darker skin tones. And this is something that we have been carrying with us for dozens and dozens of years. When you see that timescale, it's horrifying to imagine how many of these cases have been happening over and over. So yeah, Valentina, you had a question?

Is it on? Yeah. So my question, my next question, is... I'm fully aware that pressure points are necessary on all levels to deal with the now, which has to do with the past as well. But I was wondering, from your perspective and from DAIR's, which role unions can play, the potential of unions, and not to put the work on them; that's not the intention.

Thank you very much for the question. So, definitely, part of our work is also inspired, or in some way influenced, by the work of Marx; the workers' inquiry which Marx laid out is what our project builds on. Basically, unions have immense power in terms of being able to bring workers together and speak with one voice. For example, the workers in Kenya, some of whom were in the picture, if not all of them, are actually actively involved in trying to form a union that works, that tries to bring together African workers who are working behind the scenes in AI, to fight for their rights. And we've seen that Amazon has been trying for years to bust unions, because they know the power that workers have when they get together. So I definitely believe in giving people one voice. Because what happens with this work is that many times people work in very different corners. Take the workers we have in Syria and Lebanon: each is one person working in their own office or room, doing this work, and in the end, if they speak out, they are fired, and the case is closed. But if they can get together in a union, then they can speak together with one voice. As individuals, it's very easy for these companies to fire them; but if they get into a union... So yes, unions have a very big role to play in this.

And, you know, speaking to what you were mentioning, Davide, about even cameras: when I first began my PhD in Bremen, we were invited for a group photo shoot, a professional photo shoot by a professional photographer. And I never used any of the photos he took, because whoever was in those pictures was not me. So I took the pictures I had taken when I was in Zimbabwe; those are the ones I use now for my social media and everything. And this guy meant no harm; he thought he was doing his best job. But when I looked at the picture, I was as black as coal. Now, I'm very happy with my skin, but I was like, yeah, I'm not this black. And definitely, if you take that picture and try to match it with the actual me, of course the system will say: no, this is a different person. So it's not necessarily because I was that black; it's because that person was not me. And if any facial recognition software were to be used, chances are it would say: different person here. Thank you.

You have a question? I have to turn it on. Hello, hello. I'm sorry.
Okay, yeah. I have a question, and I don't know if you can answer it as an individual; it's a bit of a big question. But I've been thinking a lot about it for, I think, a year now. It's about this history of the camera and the, you know, the colonial, I don't want to say tone to it, but the colonial appropriation of the camera, which is used to observe and objectify what is seen through it; and the way we consume, you know, consume suffering through our feeds. In the moment where we can swipe through that suffering, it gets objectified, it gets consumable, it becomes something that somehow gives us dopamine, because we're addicted to it. So we're kind of in a reflection hole, a case of this identification with the things we consume through social media, and always being in the position of the Täter... what is "Täter"? Perpetrator. Being in the position of the perpetrator, because we're the observer. Do you know what I mean? When we look through this history... for example, if you look at Nazi documentaries or something, a lot of the footage that is used was made by actual Nazis. So when we watch these documentaries, we take, even if we don't want to, the position of these perpetrators, of the Nazis, in order to look at something educational. And I think with the suffering that we consume, the position we take when we look at these pictures is the same; whether we want it or not, we're put in this position. And also, what is the material of a smartphone? I mean, that's also exploitation. So, I don't know, do you have anything to say? What should we do? Should we boycott phones? Should we destroy our phones? What should we do?

Thank you very much for the question. So, one of my favorite proverbs from the African continent says that unless the lion has its own storytellers, the story of the hunt will always glorify the hunter. We consume media produced by other people about us. For example, when you think of the history of Africa, it begins with the arrival of the Europeans, not with the arrows of the natives. That already is wrong. So my encouragement, which is also part of what you'll see in this project, is to get people to tell their own stories. Because that is one of the first ways we can get to the bottom of this: are we telling our own stories? You are here telling your own stories. But if someone else were to tell your stories for you, they would frame them in a very different way.

You asked if we should boycott phones. I don't think we should boycott phones per se. But I think we need to be careful about our consumption of certain things. I personally think there are certain things we give attention to which deserve zero attention. And then there are things where, for example, when we look at phones, we should look at their production; perhaps let's look behind the scenes. There's the Fairphone, for example; I'm not sure if people here are familiar with this. Maybe let's be more conscious and say: okay, if I'm using this laptop, where was it made? By whom? We have seen different products, cotton, for example, or diamonds, which are certified: these are blood-free diamonds, and those are blood diamonds and you can't use them.
I think that's maybe where we can begin: being more conscious of our consumption. For example, when we were working on this project, one of the co-researchers came up with the idea of making an animation, and they had found a tool online which they could use to make the animations. And we said, okay, give us a few days. We went behind the scenes to find out who is behind this animation software. Who made it? What data is it using? Where did they get it? How did they get it? And, in the end, what are their terms and conditions? At the end, we were like: ah, we're sorry, we cannot use this particular software; look for another one. So that's a conscious decision in terms of what we do in our consumption every day: what tools do we use? For example, as an organization, we've stopped using Zoom, because Zoom updated its user guidelines, or whatever, these terms and conditions, and now says that it can take our data when we have meetings to train its AI tools, without our permission, because we're using their system. So we read this and we were like: oh, hold it right there, let's jump to alternatives. This is one of the things we can do when we see that certain companies are exploitative in nature. Because it is my dream that maybe someday all these AI tools and software will come with a tag that says: this was certified exploitation-free. And if it has no certification, then we don't use it. Everything else has standards; why doesn't AI have standards? Because this whole Zuckerberg "move fast and break things" mantra, for me, is absolute rubbish. Yeah, okay, move fast, break things; but those "things" are people in Syria, in Kenya, in Lebanon, in Venezuela. So when we say move fast and break things: people are not things.

What do you use as a platform to make calls? The question was: what do you use as a platform to make calls instead of Zoom? We actually use Google. Ah, Google. Yes. But if I may step in, there are many other platforms that can also be tested. Sure, for the documentation, I push advertisement, hashtag, now. But this is also what not only servus.at, but many other associations and individuals are doing: trying to provide tools that are not connected to Big Tech and to these kinds of dynamics. Of course, there is always a negotiation, and we are also critical towards a purist approach of, ah, yeah, you use Google. Of course it's a process, and everybody is in their own position. And, connected to what you were saying: being aware of the limits of that, being aware that all these systems are kind of automated feeds of content that require and catch our attention. They are also a wonderful way of generating data that is then given to somebody to train on. So it's a whole system that is connected to our bodies to strip away data, which is then cleaned and processed and in the end lands in the pocket of, let's say, Jeff Bezos and other peers of theirs. Of course, we can't be in a completely isolated space. Everybody needs to find the limits of where they are; and due to, of course, practical things, we can't live in our own bubble, we have to interact with others. So technologies, they do that, but it's a process, no?
So I think it's very... and I'm really glad we are having this conversation here, because that's actually the context the AMRO festival comes from: really trying to get better and to exchange this information. What are the tools? How do we use them? What can we do better? I would be very, very interested, maybe you have it somewhere on the website, if there is a sort of list of tools, I don't know, DAIR-approved tools; that would be super, super nice to know. Or also Niharika, who is sitting in the back, from the Free Software Foundation Europe, might have some tips. Not now, or maybe yes; I'm just trying to open up the discussion, but I think there are many, many layers on which to get active, and talking about this is really important. Yeah, Gina, you had a question?

Yeah, thank you so much for this wonderful presentation; I found it really inspiring. I actually had a similar question about unions, but now I want to ask if there are any talks in your community that are dealing with cooperativism and platforms. Are there any attempts? And I understand maybe there are not enough resources and energy among workers; I also used to organize a digital workers' conference. But are there any talks or ideas, among the workers themselves or in the broader community, that try to deal with cooperativism? Are there ways we can build another platform that is built on cooperative principles, where workers own the platform? Or are there any directions that are trying to look into this alternative infrastructure, let's say, for how we can live and survive? Thank you.

Thank you very much for that question. So, at DAIR specifically, we don't have a project dealing with that. However, I know for a fact that there are different colleagues in the field who are exploring things like platform cooperativism. So, yeah, I think that's the answer I can give you. And I would be happy to talk more about that, because I personally believe this is one way we could get to that point, even though evidence from my research has shown that perhaps maybe not. But there is hope that platform cooperativism can get us to that stage, because I think this is what Marx said, right? When the workers own the means of production.

Hello. Thank you for the talk. I agree with all the tenets that you made, but there is a problem I encounter when I discuss AI, or when I criticize the material conditions the workers behind AI have to deal with. What would you say to those true AI believers who say: yes, but this is only a short phase, until the machine is intelligent enough? Unfortunately, yes, maybe it is traumatizing a little bit, but it's only a short time until the AI is intelligent enough to learn on its own, and so on. How would you counter the argument that this is just a terrible but necessary step in the evolution?

When I was younger and a political activist in Zimbabwe, some colleagues had a show which was called the Minister of Impending Projects. This minister was in charge of impending projects: projects which are always in the pipeline but never actually materialize. Why do I talk about this? I think yesterday there was a presenter who, amazingly, showed us videos of Elon Musk talking about how they're just a few months away from autonomous vehicles. And then he was like, oh, just a year, I'm very confident. And he has been confident of this happening "in just a year" for the last five years.
My point, therefore, is that this talk of AI being intelligent enough is, for lack of a better term, absolute rubbish. We're very far from that, if we ever get there at all. That's the first thing I'll say. And the second thing I'll say is this: when these people say it's only a short time. Is "only a short time" the content moderator committing suicide in front of his laptop in the Philippines because of the content he's seeing? Is "only a short time" the young man in his 20s who is now sexually inactive because of the content he's gone through? Is "only a short time" the traumatized Ethiopian refugee who can no longer sleep except after taking heavy drugs because of the stuff they have seen? Is "only a short time" the girl from South Africa, a devout Christian, almost to the point of zealotry, who now drinks alcohol more than probably anyone in this room and can only function in an alcoholic stupor? So before we talk about "only a short time," let's remember that this "only a short time" means actual people are involved in this.

Will AI ever be smart enough? I don't think so. Already we have seen attempts at, you know, generating synthetic data to then train on, and it doesn't work. We will always need human beings. For example, I'm not sure how many of us are familiar with the fact that many of these companies were touting AI tools, like facial recognition, for theft detection in shops. It emerged a few weeks ago that actually there were dozens of workers in Bangladesh and India who were behind the cameras, you know, observing. And when they saw someone being suspicious, they called the shop and said: hey, check the guy in the red jacket. And the manager went, and this was being sold as an AI tool. But there were people in India, in Bangladesh. So as long as we have people doing this work under the guise of AI... I don't know whether to use the word "annoyed," because annoyed would mean I don't want to talk about it, and no, I want to talk about it. But this is how I feel when people say AI will soon be able to do all this work. I don't think so. Because I've gone behind the scenes, I've spoken to the actual workers who are doing this work, and this is not work a machine will be able to do anytime soon. And we have seen this already.

Because the thing is, the machine has to learn every single thing. If you don't teach it, it can't extrapolate. I can. If I see a fruit and I'm like, okay, what kind of fruit is this? I don't know it, but I'll look around: are birds eating it? Okay, so it's edible; I won't die. A computer won't be able to make this connection. It will say: okay, in my training dataset I was given the list of edible fruits; this one is not on it; so off with it. I give an example of fruit here, but think of other life-threatening situations. I hope I've answered you. Thank you for the question.

I think the next question is from Linda, whose work, as KairUs, I just recall; it is entitled Suspicious Behavior and Ideal Behavior. So I'm sure there is an expert question coming.

Yeah, I've been working the past years with these topics, but through art, and how art debunks the capabilities of AI. I have also followed your work, and Timnit Gebru's and Alex Hanna's; I know their work, and it's just amazing, the things that you are doing.
I'm very much looking forward to the 24th, and thank you for being here today. Because I have been reading up a lot on this, I have a very specific question. I'm wondering: is there somebody doing research on how these platforms, like Appen and so on, are marketed to the click workers? Is there something that could be done there, also in terms of debunking the marketing mechanisms, what these platforms promise the workers?

Short answer: yes. On the 20th, when we launch our project, we have some workers who talk about how they were recruited. So I think that, to an extent, speaks to that. But the broader, more focused study of how they all do their marketing, we don't really have. Still, when you look at the couple of projects we are launching, they talk about how these workers got into this job, because I've always been asked this question: why are these workers doing this job? Why? For example, when you join us and read one of the booklets, written by one of the workers who is from South Africa but was working in Kenya, she talks about how she was convinced to leave South Africa to go to Kenya, that's a couple of countries in between, leaving her daughter behind, because the promises she was given were just out of this world. She arrived in Kenya, and things went south.

Yeah, and it's also striking that there's actual migration happening here, because, as far as I have understood, it's the same thing between Venezuela and Colombia, that a lot of workers move. And also, like we talked about earlier, people are using VPNs from different countries to try to hack this situation and get better paid, because you get paid differently in different countries and so on. So yeah, I'm looking forward to hearing more about it.

We have another worker as well, who is launching his own report, with audio of actual people talking, who was working on the platform Remotasks. It recently pulled out of Africa, Africa as a whole, and all the workers who had their money in the system lost everything. The company pulled out without any warning; no "okay, in two days' time, please take out your money" or anything. You just wake up and, oh, you can't log on. He also talks about what you are describing: how they were talking with people in the U.S. who would register accounts for them with a U.S. address. Then they used a VPN, and they would get ten times more money than if they were using a Kenyan IP address. So these are things that are happening. If you are doing a task and you're based in Kenya, you get paid; and someone doing the same task, the exact same task, but registered in the U.S., gets paid ten times more. Why is this? So these workers were using VPNs, and they talked to people in the U.S. who gave them addresses and opened bank accounts for them, and PayPal, whatever, with U.S. addresses, and they were getting paid. And then Remotasks found out. They first suspended Kenya, and then they realized: oh, it's not just Kenya, every worker is doing this. So can we blame these workers? On the 20th, there is another worker from Kenya who talks specifically about this situation.
I'll use this hook to also advertise the exhibition in bb15, which is Unknown Label by Nicolas Gourault, who was also doing a similar process of interviewing click workers. And there is this exact same situation: some workers describe how they use VPNs, and how they have a parallel communication infrastructure between the workers to solidarize and actually distribute the tips among each other, which is super important for them, to know how to circumvent the systems. But then, of course, there is also the other infrastructure, of the workers from the company, who try to sneak into these private chats and these groups to know what the workers know and build the systems accordingly. I'm highly interested in that. Make sure you also visit the exhibition; it's open today from 3 to 6. I think it's a good contribution to the discussion of today.

Are there any other questions or comments? No, I don't see any. So I will say thanks a lot, Adio, for your wonderful presentation and the extended Q&A, and a big applause. Vielen Dank.