So, we're open to taking questions now, are we? Yeah. Who wants to go first? I mean, it's been a while since we heard about the open source licenses, so maybe you have to dig that out of your brain.

First, a question on the IKEA thing. I guess there's probably not really a way to know, but do you suspect that they also keep some kind of record of you having applied before? Or is it a fresh start with every application, or do you think they factor in previous applications?

I'm sure they have some kind of record of us applying for different jobs there before, but I don't know; I haven't had access to the systems. What we still plan to do is to be in contact with them. They have an AI ethics department in Sweden, so we are planning to get in contact with them and ask some questions. There's also an email address where you, as a job seeker, can ask questions. So we are also going to try to contact IKEA and ask the kinds of questions that are being raised now. I don't know if the people answering those questions will have access to see that we have been applying for a long time. But at some point we will probably also try to get in contact with them, first as job seekers, and then maybe at some point we'll say that we are doing this art research project and would like to see if we can get some other kind of contact or answers. That's still the plan.

I have sort of a follow-up question on the IKEA project. I'm sorry if I missed this, but are you planning on applying at some point with a fake CV?

This is actually something that a lot of people have asked us, and we are still keeping it open. But in a way, we want to do it as ourselves, because the whole point is trying to be this ideal person, the ideal applicant. How much can we polish ourselves and still be ourselves? Still be our own selves. But maybe, with what we learn on the way, a kind of last step would be to try to apply as the ideal, or what we think would be the ideal for IKEA, for a specific position. I mean, we have been applying a little bit for different kinds of positions, whatever has been available: there's been a 10-hour position at the checkout, or a sales agent at specific departments. So it's a little bit of different things. There's also one in logistics where there are actually additional questions, like whether you think you are able to carry 10 kilos. So we don't want to give the feeling that, because we have PhDs, we would actually be suitable for these jobs. But that's also the interesting and funny thing for us to do: to try to be that, to make our CVs look like we would be suitable, to think about these things even if we are actually not suitable, even if there are people who actually deserve these jobs more than we do.

But in your quest to become the ultimate applicant, I'm assuming they don't give you any feedback on your application, right? So you don't actually know what it is they're looking for? Is it mainly just from researching YouTube videos and other people's experiences?
Yeah, I mean, we are looking a lot at how people have applied for these jobs, internationally also, so you have very different working cultures in different countries. The privacy statements are pretty similar for Finland and Austria; because we do the residency in Finland we are applying there, and we are also applying in Austria. But in the States, the legal framework for applying for a job is different.

Did you also try to do an information request on how IKEA is processing your application?

No, we haven't done that, so that could be a good thing to look into, because we could get more information. That's why it's nice that there are people with a legal background here, or maybe a journalist background.

Just to add to it: as a data subject, meaning someone whose personal data is being processed, you always have the right to ascertain from the organization what data they collect, for what purpose, and how long they store it. These rights are available to you under the GDPR, and they are also modeled into the local laws, so Austria and Finland, in this case, would provide the same. So that's always an option for anybody who has sent a CV and whose personal data has been processed.

Yeah, quite a lot you can get to know. We were making this diagram from the privacy notice, and there they already tell you where your data is stored and so on. So I suppose there would be overlaps, but maybe there would also be more information.

I just had a question. What you presented was about being hired by IKEA. And we know that labor conditions have changed to the point where performance review is an ongoing part of being an employee. Was part of your research, or is part of your research, also looking at how to get fired by IKEA?

Sorry, what was the question?

I guess: how would AI play into this performance review cycle, or into evaluating whether you remain a good employee?

Yeah, that would be a next step, for some kind of behavior. Maybe that's the third artwork in the trilogy.

I wanted to go back to Niharika's comment, because, I mean, GDPR and data protection: how do they actually work, or how will they work, with AI systems? Because, as you were saying before, and also in the days before, the obfuscation of data is massive, so we actually don't know. How does that fit into this legal framework? And how can maybe a group of artists doing this kind of research poke at it and actually get this information?

So first of all, any organization using completely automated means of processing your personal data should disclose this: that we are using completely automated means of processing personal data. And if it creates significant effects on the life of the data subject, the person whose personal data is being processed, significant financial or health-related effects, then one has the right, as a data subject, to ask for a human in the loop. So essentially, you have a right to have a human being involved in the entire process of this automated processing. You always have that right. You also have the right to object to this processing, and to get any of the personal data rectified. So all of these rights are available; it's Article 22 under the GDPR that provides for this.
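As an aside for the technically minded, here is what "a human in the loop" can mean mechanically: the automated pipeline is not allowed to finalize a significant decision on its own. This is a minimal sketch in Python; the Application fields, the scoring model, the threshold, and the review queue are all invented for illustration and do not reflect IKEA's or any real recruiting system.

```python
# Hypothetical sketch of a human-in-the-loop gate for automated CV
# screening. Every name and value here is an assumption for
# illustration only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Application:
    applicant_id: str
    score: float  # output of some assumed automated screening model


@dataclass
class ReviewQueue:
    pending: List[Application] = field(default_factory=list)


def automated_decision(app: Application) -> str:
    # The purely automated step: a bare threshold on the model score.
    return "advance" if app.score >= 0.5 else "reject"


def screen(app: Application, queue: ReviewQueue) -> str:
    decision = automated_decision(app)
    if decision == "reject":
        # A rejection has significant effects on the applicant, so it
        # is routed to a person who can overrule the model, rather than
        # being issued solely by automated means (cf. the Article 22
        # GDPR discussion above). The human is a decision-maker here,
        # not just someone watching a dashboard.
        queue.pending.append(app)
        return "pending human review"
    return decision


queue = ReviewQueue()
print(screen(Application("a-001", 0.31), queue))  # -> pending human review
print(len(queue.pending))                         # -> 1
```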
And yeah, this is with regard to simply automated means of processing. Then, when it comes to AI as such, it's not just the GDPR that applies, but also the AI Act. And it has a snowballing effect here, because there are lots of overlaps. We don't have the AI Act in force right now, but it's going to provide more rights to data subjects, to people whose data is being processed. Transparency is one requirement, especially for those AI systems that are marked as high-risk, as Linda had also mentioned: systems used for recruiting activities, et cetera, for employment purposes. But yeah, these are the rights that you have. You can always object to processing; you can always demand to have a human in the loop.

But what does it actually mean to have the human in the loop? Is it that somebody looks at the dashboard?

Exactly, exactly: to have a human presence in the process of processing personal data. And then, as an organization, you also need to deploy certain technical and organizational measures, because in this case employment data, or even your health data, is sensitive data. So you need to deploy measures that can range from, say, pseudonymizing the data, to having encryption in place for data in transit and at rest, to having a robust retention schedule in place. At the end of the retention cycle your data needs to be destroyed: deleted or anonymized. And once it's anonymized, any automated processing is fine, because it's not personal data anymore. All those attributes of personal data attached to the data in hand get removed, and as soon as they are removed, it's no longer within the purview of the GDPR. Hence, if an organization needs to derive, say, some sort of analytics from it, it can. But one condition is that this must be irreversible anonymization: it shouldn't be that you still have some data in a backup, you say you've anonymized, and later you can relink it to the data you have stored, because then that's not anonymization. So there are different ways companies can derive analytics from the personal data they hold of their users, but you need to deploy certain safeguards.

So, practically: say, hypothetically, that IKEA is building an AI model on its applicants' data. They will have some kind of population, some kind of data that they compare against to get statistics of who is the ideal candidate. So if they strip my identity, if it's not possible to connect the data to me as a person, then they can use my data for training their models?

Exactly, under the GDPR, yes.
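To make the pseudonymization-versus-anonymization distinction discussed here concrete: pseudonymized records can still be relinked to a person by whoever holds the key, so they remain personal data; irreversibly anonymized records cannot, and fall outside the GDPR. A minimal sketch in Python follows, with made-up field names, a keyed-hash token, and key handling that are purely illustrative assumptions, not taken from any actual applicant system.

```python
# Illustrative only: the record layout, the HMAC-based token, and the
# key handling are assumptions, not any real system's design.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-controller"

applicant = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "position": "sales agent",
    "score": 0.72,
}


def pseudonymize(record: dict, key: bytes) -> dict:
    """Replace direct identifiers with a keyed hash.

    Still personal data: whoever holds the key (or a lookup table)
    can relink the record to the person, so the GDPR still applies."""
    token = hmac.new(key, record["email"].encode(), hashlib.sha256).hexdigest()
    return {"applicant_id": token, "position": record["position"], "score": record["score"]}


def anonymize(record: dict) -> dict:
    """Drop the identifiers entirely.

    Only if this is irreversible -- no key, backup, or side table that
    could relink the record -- does the data leave the scope of the
    GDPR and become usable for, say, aggregate statistics or training."""
    return {"position": record["position"], "score": record["score"]}


print(pseudonymize(applicant, SECRET_KEY))
print(anonymize(applicant))
```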
Also, under the AI Act, once it's in force, you will see that there are certain requirements for AI systems like these, which are counted as high-risk AI systems: they have to meet certain transparency requirements, and they have to follow certain measures in order to be deployed. So, for example, you need to carry out certain certification, you need to have certain codes of conduct, and all of these are measures provided by the AI Act for any organization to deploy in case it wants to use these high-risk AI systems.

But most of these companies where we are applying will actually use third-party platforms that provide these kinds of application tools. So it's actually those people who have to show that their models are... They are the ones who are, kind of...

The providers. Yeah, yeah. There are obligations for the providers as well under the AI Act. And if I'm talking about purely the GDPR: when you have the personal data in its raw form, there are also agreements between the controller and the processor, the controller being IKEA in this case, and the processor being any third party it outsources this automation to. So even they are obliged under the GDPR to follow the mandate of the law and to have those organizational measures in place, and even then the data subjects, the users, can enforce their rights: again, to have a human in the loop, to object to such processing. So there are safeguards in place to ensure that personal data is processed in accordance with the mandate of the law. And the same is maintained in the AI Act as well, though we'll see its practical application when it is really deployed and comes into force. But as of now, those are the safeguards provided under both the AI Act and the GDPR.

Yeah, I would like to ask you a question. It's interesting, your interpretation, which conforms, I would say, to the maximum of respect for what is offered to us and for where we can navigate. My question is how we deal with the normalization that this implies. I'm sure there are burdens here that the other side does not have: they don't have to go through all these processes in order to exercise their rights, and they can get around them quite successfully. That's what interests me. If you know all these things, I'm sure you also see the openings, the ways in which those who have the power, the legislation, the mechanisms, can circumvent all of it. So my question is whether, from your estimation, in a utopian or political way, you can share with us whether you see points that are open and that we should maybe take to our advantage.

Sure. I mean, first off, I think it borders on a very philosophical discussion as well, this utopian paradise of developers vis-à-vis the users. And if you're specifically asking about how, in the past, organizations and companies have used this to their advantage, if I'm getting it right?

Yes.

So, okay, let's take the example of free software here, open source and free software. Through my talk, the idea that I've tried to convey is that there's this concerning trend of the popularization of openness in AI. And you might ask why this is so. Why is there this concerning trend?
Why are people actually rushing after having these open systems and labeling themselves as open? Because this is the first stop of this pernicious activity: first you get this label of being free and open source, and then you move on to harvesting what you've gained out of it. So for example, we now have a lot of legislation in place, or at least in the pipeline, that has this requirement to provide transparency, for example the AI Act. There have been a lot of guidelines in the past: the ethics guidelines for trustworthy AI by the High-Level Expert Group of the European Commission, and the Montreal Declaration for trustworthy AI. All of these were guidelines to bring in this transparency, so they weren't binding. But now we have the regulation. So first off, this need for being open stems from these guidelines, or rather now from this regulation, and that is why there's this concerning trend: everybody wants to be open. They want to follow the law, and so they're being open.

But on the flip side, there's also this rights-ratchet model. I don't know if you're aware of this term, but this is the pernicious side of being labeled as free and open source software or AI systems. What this rights-ratchet model is about is a very common and, so to say, very pernicious practice. You first place yourself in a position where you claim that your AI system, what your company or organization is producing, is completely free and open source. You do this by way of a contributor agreement: you invite the community to contribute to your code, and then you harvest their copyright. And we have a lot of examples out here. For example, OpenAI: it first started as a non-profit organization that wanted to bring trustworthiness into AI systems, et cetera, and then they became capped-profit. So this is the pernicious side of it. Increasingly, the companies are motivated by the regulation: in order to be transparent, at least under the AI Act, because the fines are really hefty, they try to portray themselves as open, but they're not, as I've tried to demonstrate today. But there's also this other pernicious side of labeling themselves as open: because they have a commercial interest at the end.

Thank you very much. I have a question for the IKEA project. You addressed it a bit already. You both have PhDs, as you said, so you have spent a lot of time in the educational apparatus, and it's full of ideas coming from what is called psychotechnics, labor psychology, all these competencies. The idea of competencies comes from an OECD paper and, by 2005 I think, from a European Union regulation, and it's now in all the universities: this idea of making it more transparent how everyone is educated for the job market. Did you also think of putting your project in this bigger picture? It's like... [unclear]. Yeah, sorry.
Did you also think of putting your project in this bigger picture of the educational apparatus, with all the competencies that go back to labor psychology, to get the bigger, clearer picture of what all these personal abilities are and how they are used specifically, in the assembly chain and so on?

Yeah, I mean, this has been kind of a patchwork of a project, and I would really like to go deeper into the histories of this kind of categorization: what are these assumptions based on? I haven't done that research yet. But yeah, of course, it's much wider than employment. A lot of these tests are also used in education, for qualifying for education and so on. And the companies that are now building the AI are just adding it on top of these taxonomies that have already been developed. This, of course, comes to the core of the matter: how we classify people in a society. We are not quite sure how, but it will somehow be reflected in the artwork, or at least in some reflections in future presentations about the artwork.

A question on the free and open source licensing stuff. I might make some wrong assumptions, so please correct me if that is so. What I understood is that the position of the Free Software Foundation Europe is basically that for AI systems the licensing has to be pure, in the sense of open source purity, in the sense of the four freedoms, and that basically no extra conditions can be applied that make it, for instance, ethical. And I'm wondering, sort of reversing that, whether that also means that the position of the Free Software Foundation Europe on non-AI systems is that you would also completely advise against ever having any ethical clauses?

On the ethical aspect of it: we're not saying that. Why we're lobbying against it is not because of these ethical considerations, but rather because of anything that denudes the meaning of free and open source. If you have a license, even if it's not based on ethical considerations, maybe it's commercial, but it has any of those restrictive terms that are not aligned with the four freedoms in the case of free software, or with the components that I mentioned for open source software, then it's not that. Then it's not a free and open source license. I've given you examples of these restrictive practices in terms of ethical considerations, but as such, any prohibitory terms and conditions in any license that do not align with these four freedoms, while the license still calls itself free and open source, are a contravention. So that is the stance of Free Software Foundation Europe for all the AI systems now. We're not saying that you shouldn't be using Meta's Llama 2 license, for example. If you want to use it, there are certain restrictive practices in place; but for Meta, it is incorrect to say that it is a free and open source license. As I said, you could say it's a responsible license, it's something else, but it's not this, because this carries a certain definition. It has a definition; it carries a meaning. And if you don't follow that definition, then suddenly your legal department, as an organization, will be inundated with questions about the legal interoperability between these licenses with restrictive practices and the existing free and open source licenses.
It will be extremely hard, I can imagine, for any company out there to have them both aligned in their AI systems. So that's the crux of it all: these licenses could be any XYZ license, I don't want to put any label on it, but they are not free and open source licenses, simply because they don't provide the freedoms that come with a free software license.

So I got it correctly, basically: the recommendation is not so much that people should not put these clauses into their licenses, but that it's a problem that they then get labeled, very vaguely, as open. This is what you object to.

Yes. By virtue of having these vague licensing terms and conditions, they're not free and open source.

We seem to have one last, very short question we can take. Okay, that's the last, please. I hope it's going to be short.

I have a question about what you mentioned about putting a human in the loop. How much do you think that really makes a difference? Because my understanding is that a human is very likely to just trust what the machine is saying, no?

Well, no, not really. And why I say so is because I see a lot of garbage on ChatGPT as well. It's one thing to see it, and it's another to actually believe and follow it. Which is why it's a safeguard provided under the legislation: only because a human has the consciousness to understand and differentiate between what's garbage and what's not. And this is a principle for any AI system; it's called garbage in, garbage out. If you feed garbage, you will get garbage. Which is why you need a human to distinguish between what's pure bullshit and what's not. And yeah, which is why it's an important safeguard. At the outset, any such automated processing of sensitive information is prohibited; under certain exceptions, certain legal grounds, it is still permissible, but only on the condition that you have this human in the loop.

But that also means that you can't really say these technologies are in any way neutral or de-biased, because in that sense there is a person in the power position who decides what is garbage and what is not. And that's then politics.

Thank you, that's a very good last point. And with that, I would like to thank both of you, Niharika and Linda, for your really interesting and diverse approaches to this ethical AI topic. And I would like to announce that this is the end of today's sessions here in AFO. We will continue next in BB15: there will be an exhibition opening, or a tour through the exhibition, starting at 5 p.m. It's Unlabeled. Unknown Label. Unknown Label, yeah, sorry. And then afterwards there will also be dinner at DH5 today. So thanks a lot for listening, for all the interesting questions, and have a nice evening. Thank you so much.