Unpacking Responsible AI with Ricardo Baeza-Yates

This is a podcast episode titled, Unpacking Responsible AI with Ricardo Baeza-Yates. The summary for this episode is: With the hype around AI, one of the important topics to discuss is responsible AI. In this episode, we unpack responsible AI: what the problem is, and what the solutions look like, such as principles, governance, auditing, and regulation, with renowned computer science researcher Ricardo Baeza-Yates.

Intro Voice: This is Catalog and Cocktails, presented by data.world.

Tim Gasper: Hello everyone, welcome. It's time once again for Catalog and Cocktails, presented by data.world. It's an honest, no BS, non-salesy conversation about enterprise data management with tasty beverages in hand. I'm Tim Gasper, longtime data nerd, product guy, customer guy at data.world, joined by Juan Sequeda.

Juan Sequeda: Hey Tim, I'm Juan Sequeda, principal scientist at data.world, and as always, it's the middle of the week, end of the day, and it's time to take a break, have a drink, and chat about data. Though lately we've been going beyond data and talking a lot about AI, and I think that's the topic of today, and I am incredibly, incredibly excited about our guest, Professor Dr. Ricardo Baeza-Yates. He's the Director of Research at the Institute for Experiential AI at Northeastern University, the former CTO of NTENT, and the former VP of Research of Yahoo Labs. He's an expert on information retrieval; he literally wrote the book on modern information retrieval, on web research, on AI, and obviously on responsible AI. If we want to talk about responsible AI, this is the person we should be talking to. Ricardo, it is a true honor to have you. How are you doing?

Dr. Ricardo Baeza-Yates: Thank you, salute.

Juan Sequeda: Salute, cheers, cheers, cheers-

Tim Gasper: Cheers.

Juan Sequeda: Super excited for this talk today.

Dr. Ricardo Baeza-Yates: Thank you for inviting me.

Juan Sequeda: So let's kick it off. What are we drinking and what are we toasting for today?

Dr. Ricardo Baeza-Yates: We should toast for more responsible AI because there's too little in the world.

Juan Sequeda: By the way, we're here this week at the web conference, which we're hosting here in Austin at the University of Texas at Austin. So I'm actually having, I think, a tea. Yeah, it's some iced tea sour. It's pretty interesting-

Dr. Ricardo Baeza-Yates: I'm having a margarita because they didn't have piña colada or pisco sour.

Juan Sequeda: How about you, Tim? What are you having?

Tim Gasper: I'm having an old-fashioned, so a familiar drink, but from an unfamiliar place: the Kalahari Resorts in Round Rock, Texas.

Juan Sequeda: We're kind of all close by, right, but-

Tim Gasper: Nearby

Dr. Ricardo Baeza-Yates: In Botswana, right?

Tim Gasper: I wish, but no.

Juan Sequeda: So let's warm up, our warm-up question today. If you are packing for a trip, what's something in your bag folks wouldn't expect?

Dr. Ricardo Baeza-Yates: I think that maybe two things, can I say two?

Juan Sequeda: Yeah.

Dr. Ricardo Baeza-Yates: First, a real camera, not an iPhone. I like to take photos with a good zoom, and a good macro, and so on. And maybe I will have a map, a real map, a paper map. I love maps because maps show a lot of information in a very small space.

Juan Sequeda: Okay, the map thing, that's surprising, have a real paper map.

Dr. Ricardo Baeza-Yates: Well, I'm a geography geek, so maps are in my heart and also, we are losing the ability to find places. So we should practice.

Juan Sequeda: That's a good point. Yeah, I think inaudible.

Dr. Ricardo Baeza-Yates: How many people cannot find a place without their GPS?

Juan Sequeda: Yeah.

Dr. Ricardo Baeza-Yates: Too many.

Tim Gasper: We're pretty dependent these days, huh?

Juan Sequeda: We don't even know where north is anymore, inaudible at our phone and move it this way.

Tim Gasper: My kids, they like to see the compass on the mirror, and they always say, " Why does it say swe-? Why does it say swe-?" It's like that's southwest. It's a cardinal direction.

Dr. Ricardo Baeza-Yates: What is a cardinal direction, right?

Juan Sequeda: Tim, what about you?

Tim Gasper: You know what? Probably the podcasting equipment is always something that people don't expect. I think Juan and I both have that going on.

Juan Sequeda: This microphone that you see right here, I have that with me every time I travel because somewhere else, we need it for sound, so.

Dr. Ricardo Baeza-Yates: It's a big one.

Juan Sequeda: Yeah, I've even done day trips with just the backpack, and that thing is with me when we travel. All right, well, let's kick it off. So honest, no BS. Well, okay, first of all, AI is everywhere. Last week I was at the TED conference and people were just excited about it, but at the same time concerned about it. This week we saw Geoffrey Hinton quit Google so he can speak more freely about the dangers of AI, and we keep hearing so much about responsible AI. Honestly, it's kind of a big term that people are throwing around, and what does it actually mean? So honest, no BS: what the heck do we mean by responsible AI?

Dr. Ricardo Baeza-Yates: Yeah, responsible AI is, I guess for me, the best version of the other variants of AI that people use, like ethical AI. But ethics is something very human, so we prefer not to humanize AI, and we shouldn't say ethical AI. Some people also talk about trustworthy AI, and the problem with that is we know it doesn't work all the time. So it's kind of not ethical to ask people to trust something that doesn't work all the time, and it also puts the burden on the user, not on the builder. That's why responsible AI is much better: the builder, the seller of the product, or whoever is basically stewarding this is responsible, and then you will be accountable for whatever damage you do.

Juan Sequeda: This is really interesting. So we started with responsible AI, but you threw out the terms ethical AI and then trusted AI-

Dr. Ricardo Baeza-Yates: So people don't use it.

Juan Sequeda: So okay, this is honest, no BS right there. So don't use the word, ethical AI-

Dr. Ricardo Baeza-Yates: Yeah-

Juan Sequeda: But let's get very specific. So ethical AI is a no-no because?

Dr. Ricardo Baeza-Yates: Because ethics is a human trait, and you cannot apply a human trait to, let's say, algorithms or robots.

Juan Sequeda: And trusted AI is also a no-no because?

Dr. Ricardo Baeza-Yates: Because you're asking people to trust something that doesn't work all the time, and I have a really good example. Let's say you go to a building and the elevator has a sign that says, works 99% of the time. So very good accuracy, 99%. Will you take the elevator? Tim, would you take it?

Juan Sequeda: No.

Dr. Ricardo Baeza-Yates: Now if the elevator says-

Tim Gasper: I'm not going to take the chance.

Dr. Ricardo Baeza-Yates: Because you know it's not safe. Now, if the elevator says it doesn't work 1% of the time, but when it doesn't work, it stops, I know I'm safe, so I take it. But it's also misleading, because, for example, let's say 100 years ago a guy comes and says, "I have this new transportation medium that's called aviation. The company is called Trustworthy Aviation. I want to sell you a ticket." I would say, "If you need to put trustworthy in front of it, there's something wrong here." So I think it's misleading, and you are putting the burden on the user, so we don't want to do that.

Juan Sequeda: Oh, this is a great point. I was having a discussion a while ago about how we should talk more about quality of data, that our data should be high quality. Of course it should be high quality. It's like saying, oh, come to our hotel, we have clean sheets and clean towels-

Dr. Ricardo Baeza-Yates: Exactly-

Juan Sequeda: Those are things that you do not promote because it's like a given.

Dr. Ricardo Baeza-Yates: It's redundant. If there's no quality, it's not data, it's garbage, so-

Tim Gasper: Yeah, I love that.

Dr. Ricardo Baeza-Yates: When you start to use adjectives for things that are basically already included... Another example is that people say AI and machine learning, but machine learning is part of AI. It's like saying the egg and the yolk; it's redundant, right? But we do so many of these things, so we need to use semantics well.

Tim Gasper: Yeah, coming into this conversation, I had a very positive feeling and connotation around the phrase ethical AI, but as you talk about ethical AI versus responsible AI, I'm having a bit of an aha moment live here in this conversation. And I'm curious, Ricardo, if you agree with this: if we want our AI to demonstrate more ethical traits, what we as humans consider to be ethical traits, then that actually connects to responsible AI, in that we as the people building the AIs are responsible for ensuring that the AI demonstrates those traits which we consider to be ethical. Is that kind of the way that you think about it?

Dr. Ricardo Baeza-Yates: Yes. There are many things that are human, like justice; responsibility is also human, but at least in the legal world, responsibility has also been granted to institutions. We are using responsibility in AI in the sense that the institution behind whatever is using AI is the one responsible. We're not saying the AI is human; it is not. And even something like trust is binary. For example, would you say, "I trust this person 50%?" Usually, you trust or you don't trust. So it's also not a real-valued variable, and, well, sometimes computer scientists say, oh, we can measure trust, but for people, it's almost like I trust or I don't trust. In between, it's strange; even if it's half, maybe it's the same as no trust. These are the human parts that are qualitative, not quantitative.

Juan Sequeda: So we started off with responsible AI, and I really love how we got very specific on the definitions around responsibility, ethics, and trust. What, then, is irresponsible AI? Let's talk about the problems that make us think, oh wow, we're doing this wrong.

Dr. Ricardo Baeza-Yates: Yeah, I can talk for hours about irresponsible AI because there are so many examples, so I will only mention a few. If you want to know more, there's an excellent place called IncidentDatabase.AI where there are more than 2,000 examples of cases that went wrong, and these are only the ones that we know about. I'm sure there are 10 or 100 times more that we don't know about, that are secret or basically private. So let me give you a few classes of irresponsible AI. The first one, and the most common, is discrimination, and discrimination is related to bias: for example, gender bias, race bias, xenophobia, homophobia, anything that typically goes against a minority or some group of vulnerable people. And here we have so many, many examples. Maybe the worst one in the political sense is what happened starting in 2012 in the Netherlands, where some engineer, I guess, had the great idea of looking for fraud in the tax office, the equivalent of the IRS, looking for fraud in child benefits. Basically a benefit so you can send your, let's say, less than four-year-old to a pre-kindergarten school so you can work in the meantime, right? So they decided, okay, let's look for fraud in that kind of benefit. That's the first problem, because it's not ethical to look for fraud in poor people; you should start with rich people, and that is the typical case for a tax office. Look at the rich people, at how they're not paying tax. I'm sure the amount you can find is much larger than with poor people, so it even makes sense from the business point of view. So this system, which was called SyRI, with a Y, not exactly like the Apple agent, basically accused about 26,000 families of having cheated the system, and they had to return a lot of money; if it was not the money for one year, sometimes it was the money for five years. And these were people that needed this support and had to return it. So some people lost their houses, some people had to go back to their place of origin, there were many immigrants. It's not known how much AI was in there; maybe it was not AI, but it doesn't matter, it's software, and software should be responsible for the result. Well, because of all the problems, civil society basically went to court against the government, whatever the government was at that time, basically against the state of the Netherlands. And after a long legal battle, in 2020, because they went all the way, the Supreme Court of the Netherlands said that action was illegal, they had to return all the money they had demanded, and they were forbidden from doing it. At that point, the former minister that was in charge of the tax office was a parliament member, and she resigned. She said, "I'm responsible." I'm sure no one asked her if they could do this, and that is a problem. Sometimes people don't ask if they can do something and they do it, even if they don't have the permission. So she was an example of responsibility: "I'm responsible, I resign, I lose my parliament membership." But for the opposition, that was not enough, and they kept pushing. And on January 15, 2021, the whole government of the Netherlands resigned, and this has been maybe the largest political impact of a badly designed piece of software in the world.
So this is the best example of discrimination, because it maybe affected 100,000 people. If you take 36,000 families that typically have two kids, that's like 100,000 people, and at the end it cost the whole government its resignation. And this is a western country, so it's not like you can say, "Okay, this is not a solid government." This is a government that has a monarchy inaudible. So this is my example of discrimination; this is the first class. Let me go to the second class. The other classes are maybe not as well known, but they're also scary. This one I will call the new version of phrenology. Do you know what phrenology is, Tim?

Tim Gasper: No, I don't think so.

Dr. Ricardo Baeza-Yates: Okay, so this has to do with physiognomy, this idea that the Greeks had that if I look at Juan's face, I can basically predict his personality. Phrenology is one step further; it was a German guy at the end of the 18th century who said that criminals had different convolutions inside the brain, but this is very hard to prove because you have to open the skulls and look at the brain and so on. In the 19th century this was very popular. One example I use in my talks, and you can find my talks on the web, is the Italian doctor Cesare Lombroso from Torino, who said, "No, this is simpler. Criminals have a different skull; there is a part of the skull that is different." Well, he collected hundreds of skulls from the morgue, basically from people that were so poor that no one recovered their bodies, and you can go to his house museum in Torino: hundreds of skulls. And he could never prove it, because we know it's not true. Criminality has nothing to do with the shape of your bones. But the same thing is being used today. For example, there are people using your face to predict criminality; it happened in China in 2017, and it happened again in the US, almost published in Nature in 2020. Luckily people stopped those things, because it's just pseudoscience. It is not true that you can infer personality from this, but you have people like, for example, the famous psychologist at Stanford, Kosinski, who is using this kind of biometrics to predict, for example, your sexual orientation, and this was like a scandal in 2018-

Juan Sequeda: I remember that-

Dr. Ricardo Baeza-Yates: Or your political orientation; that happened in 2021. Basically, it works because you capture correlations that have nothing to do with your face. For example, correlations with your beard, or whether you have long hair and so on, or even the type-

Tim Gasper: People who wear hats or whatever it is, right?

Dr. Ricardo Baeza-Yates: Yeah, yeah, exactly, and a hat that says, yeah, make America great again. Yes, you can infer those things, but basically they're just foolish correlations, and the accuracy was only 70%. You cannot say that 70% is something good; it's just stereotypes. Okay, third class. The third class, I would say, is a very natural one: basically human incompetence. Persons doing wrong things and causing problems. The best example is maybe from Facebook. A couple of years ago, one engineer decided to use a hate speech classifier trained on English in France. The classifier decided that the town of inaudible was forbidden, and they had to fight for three weeks to get their Facebook page back, because no one was in the loop to listen to their complaints. And this sounds very funny, but if you think that maybe the town was using that channel to say, for example, things about COVID, that hurts people. So it's funny, but that inaudible, and this was just pure incompetence. Here I use a very well-known quote; I don't remember the famous statistician that said, "All models are wrong, but some are useful." We can say the same today about AI: almost all are wrong, but some are useful, because they work almost all the time, and that's okay, but basically they are simplifications of the world. For example, data. Data will never represent the whole context of a problem. Data is a proxy for reality. Some data you will never have; you cannot have all the data about the future. For example, when an Uber killed a woman in Arizona in 2018, I'm sure they didn't have that in the training data: a woman crossing at night on a bicycle in the wrong place. And you can imagine all the possible things that can happen on a road, not only in the US, let's say in India in the future. Well, after that, Uber decided not to experiment any longer with self-driving cars and they sold their unit, so that accident also had a business impact. That was maybe the first recorded person killed by AI. Let me go to the two last classes. One is very simple: the impact on the environment. All these things use a lot of energy, a lot of electricity. The carbon footprint is huge. And now with all these large language models, generative AI, it's getting worse, because it's not only the training, which costs like $1 million; imagine when 5 billion people are using this. OpenAI in two months had 100 million users, the fastest adoption of a product in history, and I don't know who is paying the bill. Maybe Microsoft? So much money, and everyone is playing with this thing because they are paying, and not all of them are using it for a good purpose. And the last one has to do with generative AI. It's very hard to describe what kind of problem this is, because it is a bad use of generative AI. I guess the worst case happened March 28th this year, so very recently, in Belgium. It appeared in the news that a person had committed suicide after six months talking to a chatbot with an avatar, not ChatGPT, another chatbot called inaudible, with the avatar of a woman. And if you read the last conversation, it's really scary; it looks like a science fiction movie. In the last conversation, which was logged and was found by his wife, and he also left two kids behind, the chatbot had basically asked, "Why haven't you killed yourself?" And the guy said, "Oh, well, I thought about it after you gave me the first hint," and the chatbot asked, "What hint?"
" Oh, well this quote from the Bible," Oh, and then the chatbot says, "You still want to meet me?" And he said, " Yes." And they said the guy, " Can you give me a hug?" And the chatbot said, " Yes," and that's the last conversation, and I guess the guy thought that by killing himself, he will beat the chatbot in another life. So mental health, this is I think generative AI is a danger to mental health, and then also to the credibility of all digital media because in the future, we will not recognize if a video is true or not. So everything we have built in the last 20 years to use videos and images to know about the world, now will be gone. And then that's a real threat to democracy, not only to our mental health. So I think these are all the examples that I think are important for responsible AI.

Tim Gasper: I have a lot of thoughts across all of these, especially on this last topic. They're really great examples of some of the harms, both real harms that are already happening as well as potential harms. The generative AI one has a special topical aspect because of how popular it's become recently, very trendy, right? But also the fact that you can deceive, both at the individual level and at the societal level. This is one that stumps me a lot: how do we create more accountability, more responsibility, around generative AI? For example, is it even viable to imagine a government body someday saying, creating false content is a federal offense, you cannot do it? Is that even enforceable? Is that even the correct approach to something like this?

Juan Sequeda: I mean, one of the things when we started talking about responsibility, responsible AI: you said that being responsible also means someone is accountable for these things, right? After going through all these points, which we'll summarize in our takeaways, I'm feeling really heavy right now. You've gone through a lot of things that hopefully everybody who's listening realizes means you've got to take this shit for real, and we've got to really think about this. This is not just about like, oh yeah, we have to be concerned or whatever-

Dr. Ricardo Baeza-Yates: We already have dead people, we already have harmed people, so it's not just potential; this is happening now.

Juan Sequeda: This is happening right now. So let's actually take this and talk about the accountability and what are the solutions? What are the approaches that we need to consider that are being considered right now?

Dr. Ricardo Baeza-Yates: Yeah, one is regulation, and I agree with Tim that it's very hard to enforce. China published the first proposal for regulation of generative AI on April 11, less than a month ago; all these things have happened really fast. But let's go in order. The first thing we need to agree on is principles, some basically operational principles, and we have values that come from ethics. I think the three main values encoded in bioethics are, first, autonomy: respect for our decisions, for whatever we want to do. The second, I guess, is justice. We want to help people that have fewer opportunities, so we need to be just, and maybe sometimes we need some affirmative action for that. And the third one is very generic, but it's something all people understand: we need to do good and not do bad. If you want to do something, it should benefit more people than the people that are harmed, and the benefit needs to be much greater than the harm, otherwise there will be an issue. So those are the three values. But then you have principles that are not values in some sense, because a lot of people, when they think about principles, they think about values, but these are more instrumental principles. They are the ones that will help us to basically be responsible. And for me, the best ones so far are the ones we published with the ACM last October, and I was one of the two main authors of that, and I pushed a few new principles there that I thought were important. I think the main one is the first one, which should really be principle zero, not principle one, and I call it legitimacy and competence. Basically, before you do anything, say you have a great idea for a new business using AI, or using any software, because this shouldn't be only for AI, it should be for any software. That's why we call them the principles for responsible algorithmic systems; most of them will involve AI, but any algorithmic system should follow the same principles. So, legitimacy and competence: what does it mean? Legitimacy means that you have done the ethical assessment, or say a human rights assessment if we have different ethics in different cultures, to show that the benefit is more than the burden, or the harm to some people. Basically, you prove that this really should exist; that's why it's legitimate. And then we need the competence, and the competence has several dimensions. First, we need to have the administrative competence: we have the right to do it in whatever institution we are doing it. For example, I don't think this was the case for the Netherlands example. I think the engineer that inaudible this great idea of looking for fraud in poor people never asked anyone, or the minister, to say, "Okay, can I do this?" Because some person with common sense would have said, "No, don't do that. Please stop." Then we need to have the technical competence: we understand how machine learning works and we can build a really good model, so we don't have human incompetence, which was a lot of the problem. And finally, we need to have competence in the domain of the problem, which means we have people that are not computer scientists, that are real experts. If it's healthcare, we have doctors. If it's legal, we have lawyers, and so on. And then of course, we need to have ethicists to evaluate all these things. So that's the first principle.
We have nine principles, and the other important ones are basically no discrimination, transparency, accountability, inaudible, explainability, and interpretability. Even the last one is basically that not only do we need to not harm people, we also have to limit the use of resources, because otherwise we are harming the planet, and we are part of the planet. So these are nine principles; you can find them at the ACM, and I think this is the best collection of principles. It joins other principles that the OECD has, or UNESCO, or even, recently, the White House. Also in October, they published this Blueprint for an AI Bill of Rights, although those are really instrumental principles, five instrumental principles that are already a bit obsolete next to this new version from the ACM, because it's not a bill of rights for people, it's basically operational principles for software. So this is the first step.
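To make the no-discrimination principle a little more concrete, here is a minimal sketch of one check a team could run before putting a people-flagging model into operation: compare the rate at which each group gets flagged. The groups, the numbers, and the four-fifths threshold in the comments are illustrative assumptions, not something prescribed by the ACM principles.

```python
# Hypothetical sketch: checking whether a flagging model selects one group far
# more often than another. The data and the 0.8 "four-fifths" rule of thumb are
# illustrative assumptions, not a description of any system discussed above.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, flagged) pairs; returns flag rate per group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, is_flagged in decisions:
        total[group] += 1
        flagged[group] += int(is_flagged)
    return {g: flagged[g] / total[g] for g in total}

def disparity_ratio(rates):
    """Ratio of the lowest flag rate to the highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

# Toy example: two hypothetical groups scored by the model.
decisions = [("group_a", True)] * 30 + [("group_a", False)] * 70 \
          + [("group_b", True)] * 9 + [("group_b", False)] * 91

rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.3, 'group_b': 0.09}
print(round(disparity_ratio(rates), 2))  # 0.3, far below the common 0.8 heuristic
```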

Juan Sequeda: Principles.

Dr. Ricardo Baeza-Yates: Principles.

Juan Sequeda: All right, inaudible.

Dr. Ricardo Baeza-Yates: Then, when we agree on the principles, the second step is how to put them into practice, and that is governance. People understand the principles, but they don't understand how to put them into practice. Governance implies a process, implies actions, and implies people. For example, let's start with the last one: people need to be trained. Engineers need to be trained on these principles to understand how to put them in the code, how to put them in user interfaces, how to put them in data, and so on. For example, I'm sure you know about the standards for describing data, and the standards for describing models, like model cards, and for data... what was the name? I forgot. It's not data cards, but something like data something. Basically, there are proposals to do all of this. Now, actions means that there's a process. You start with the principles; one, you show that you should do it, then you enter the development stage, and then you have to do things like checking your requirements, checking your assumptions, talking to the users, talking to all the stakeholders. Most of the time we don't do that; we just keep going, and we talk with the users and stakeholders after we find trouble, but then it's too late, because you cannot talk with the person that died in Arizona. By the way, that's another very good example of the wrong accountability, because when that woman died in 2018, Uber, in less than a week, basically reached a settlement with the family of that woman. At the same time, and you can guess how it happened, the Arizona government knew that the backup driver that was in the car was watching a video. And then the Arizona government said, "Well, this is a public road, a person died. I cannot say it's all settled just because Uber already agreed with the family." They charged the driver for the involuntary death. Well, the driver was another vulnerable person. She was receiving a minimum salary, she was a Mexican immigrant and transgender, and last year a very interesting interview appeared in Wired, if you want to see it, because this was not known until several years later. She was found guilty, because basically that was true; the system showed that she was watching a video, that was all logged. But the system also showed that it didn't recognize that there was a bicycle in front of the car until two seconds before the impact, and even if you're not watching a video, in two seconds you cannot do much if you are going straight. So this person was found guilty and had to spend one year in her home, basically house arrest, with one of those rings on the ankle, so she couldn't leave. Again, the person that was found guilty was a vulnerable person, similar to the Netherlands example, because there's always the same rule: rich people gain more money with these things and poor people suffer the consequences. So inaudible implies governance, plus, for example, monitoring your models all the time to check if, for example, the output drifts, or the bias is increasing, or the data is changing. You need to do all these things, and there are not too many companies working in this space, but I have seen a few startups that are interested in basically checking that everything is going well after you do the right evaluation. Evaluation is also very important: validation of all your assumptions and evaluating the system thoroughly.
For example-

Tim Gasper: This is model drift and things like that, right?

Dr. Ricardo Baeza-Yates: Yes. Yeah, model drift, data drift, and so on. But imagine today, I think we are doing the alpha testing of ChatGPT. We are finding the problems, because it's so hard to test, because it's open domain, right? It's impossible to test in reality. So we need a paradigm shift in how we test these things. Then, transparency is so important, but transparency alone is not enough, because many governments are very transparent: they say we will do this, and no one can do anything. So transparency has to come with things like contestability and auditability. You need to be able to contest the system and talk to a person, and then be able to audit the system to see if it was working correctly or not. Most of the audits today are done against the will of the companies that sell those products, and of course those are always much harder, because you don't have all the data, you have to treat it as a black box, and you need to do experiments that are not completely sound because you don't have access to the real system and the real data. So this is something that needs to change, because auditability is so important, because the next step is then accountability. If you do an audit, and the audit shows that you, for example, are discriminating, well, you need to go to court and you need to be accountable and responsible. So this is the governance part. It's basically the process from the first idea to when you fail and when you cause harm, and I have a diagram, which I guess is unique because it hasn't been published yet, on how this works.
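As a rough sketch of what monitoring a model "all the time" can look like in practice, the snippet below compares the training-time distribution of a score against recent production data using a population stability index. The bin count and the 0.2 alert threshold are common heuristics assumed here for illustration; a real monitoring setup would also track output drift and per-group bias.

```python
# Rough sketch of a data drift check, assuming you keep the training-time
# distribution of a feature or score and compare it against recent production
# data. The bin count and the 0.2 threshold are heuristics, not a standard.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (training) and a recent sample (production)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)    # distribution at training time
production_scores = rng.normal(0.4, 1.2, 10_000)  # distribution observed this week

psi = population_stability_index(training_scores, production_scores)
if psi > 0.2:  # common heuristic: above 0.2 suggests significant drift
    print(f"PSI={psi:.2f}: the distribution has drifted, trigger a review")
```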

Juan Sequeda: This is a very complete picture, and even though we're talking about this in the context of AI, we see this even from just an enterprise data management perspective. Everything you've said should be applied to everything, right-

Dr. Ricardo Baeza-Yates: The same. The same for data management. The same-

Juan Sequeda: For data management, and it's very explicit, right? The process, the actions, the people, the transparency, accountability. And a lot of the governance, let's be honest, no BS, the governance thing is like, oh, do I have PII and I just want to go flag it, right? Can I get access to this data and somebody approves the access, right? We're just barely scratching the surface, and I believe we're not really even considering the magnitude. But at the same time, it's like, well, it's probably not really a big deal, so I don't have to invest so much in it, until the shit hits the fan-

Tim Gasper: Until the problems start happening and people die, and things like that-

Juan Sequeda: But I think now with the increase of all things AI, something is going to happen much sooner than later-

Dr. Ricardo Baeza-Yates: Yeah, because something that I think people are forgetting is that if you have something that grows exponentially, like the use of, let's say, generative AI, even if the problems are 0.001%, that curve will also be exponential. So soon we'll have not one problem but 1,000 problems, 1 million problems. We already have thousands, but those are only the ones that we know about. I think easily we have more than 100,000 today that we don't know about. And the harm of course is different, but it is still harm. Sometimes psychological, sometimes physical, sometimes it's business harm, sometimes it's public relations. I think responsible AI should be used by the marketing team to say, we are different. It's like organic food or a just price, things like that. This should be the next marketing agenda-
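A quick back-of-the-envelope sketch of that point: with exponential growth in usage, even a tiny per-interaction failure rate gives an exponentially growing number of incidents. Every number below (users, growth rate, interactions, failure rate) is an invented assumption; only the shape of the result matters.

```python
# Back-of-the-envelope illustration: exponentially growing usage times a tiny
# failure rate still produces an exponentially growing number of incidents.
# All numbers here are made-up assumptions for illustration only.
users = 1_000_000            # starting users (assumption)
growth_per_month = 1.5       # 50% month-over-month growth (assumption)
failure_rate = 0.00001       # 0.001% of interactions cause some harm (assumption)
interactions_per_user = 30   # interactions per user per month (assumption)

for month in range(1, 7):
    interactions = users * interactions_per_user
    incidents = interactions * failure_rate
    print(f"month {month}: ~{users:,.0f} users, ~{incidents:,.0f} incidents")
    users *= growth_per_month  # incidents grow on the same exponential curve
```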

Juan Sequeda: But at that point, we start losing the real significance of responsible AI. I mean, that's kind of the purpose of why we're having this discussion: we're hearing it so much, and people say, "Oh yeah, it's responsible AI," but what does that even mean, right? We're just using the words.

Dr. Ricardo Baeza-Yates: Yeah, but I think if they really mean it and they do it, I'm okay, even if they use it for marketing-

Tim Gasper: Right, like the idea of more organic food, right? Even though it's used for marketing purposes, was the phrase organic food ultimately better for society? Did it result in better outcomes? I know some people would say maybe not, but maybe on the whole, it's been a net benefit. So I don't know, I can see a lot of merit in what you're saying here, right?

Dr. Ricardo Baeza-Yates: Yeah, I'm being practical. Sometimes you have to be very pragmatic, but in a capitalist world, I think that's the only way it works, sadly.

Juan Sequeda: I love the-

Tim Gasper: Marketing, marketing is powerful.

Juan Sequeda: Honest, no BS right there, right?

Dr. Ricardo Baeza-Yates: Exactly-

Juan Sequeda: So marketing teams-

Dr. Ricardo Baeza-Yates: No BS.

Juan Sequeda: Marketing teams get on the responsible AI messaging now.

Dr. Ricardo Baeza-Yates: Like we are working on inaudible and we have this responsible AI practice, and we are working with top companies that really want to do this. We didn't convince them; they wanted to do it, and we were the ones that really had the right message they were looking for. For example, what are the right principles? We have these nine principles, but depending on your business, you don't need all of them, inaudible may be additional ones because you have a specific focus, and then you want to say, "I have this principle that's unique to me," and this is also another marketing strategy: this principle, I support it. It could be organic food, for example. That is a principle in some sense.

Tim Gasper: inaudible.

Juan Sequeda: So the other solution we're going to talk about is regulations.

Dr. Ricardo Baeza-Yates: Yes. So this is the next step. If people don't do this, if they don't adopt these principles and this governance that is based on AI ethics, so AI ethics exists, but ethical AI doesn't exist, there's a difference, then you need regulation. And as Tim said at the beginning, regulation is very hard to enforce, but there are some simple regulations that maybe will help. For example, if you do any big infrastructure project today, you need to do an environmental impact assessment, right? This exists everywhere, and you need to present it to an office and someone has to approve it. Why don't we ask for a human rights assessment for any AI product? Maybe the office that handles this has to have a time limit to give an answer, and this may be a lot of work for philosophers and other people that don't have many positions, but it is good for society. You would need to get approval to build the software. Software today is really like the Wild West. It's so free, you can do whatever beep you want, and no one stops you until there is a big problem. And we always talk about the successes, but for every success in software, I'm sure we have at least 1,000 failures-

Juan Sequeda: So are-

Dr. Ricardo Baeza-Yates: And we don't know about them, some of them are very scary-

Juan Sequeda: And in a lot of areas, I mean, you talk about degrees in engineering, or so many different fields... In accounting, right? In engineering, as a civil engineer, you actually have to go through a process to get your certification. Are you arguing, do you believe, that for computer science and software engineering we should be at that point too?

Dr. Ricardo Baeza-Yates: Well, in many professions like civil engineering, you need a certification. That could be a possibility, that you are certified. But maybe even more like, for example, you are certified that you know the code of ethics, that you know what principles you need to follow, even if you then don't follow them. At least you can say, okay, I have this knowledge and I intend to use it. The problem later is to enforce that, but I think that would be a minimal thing, and it would be very simple. There are so many certifications for other things, for tools. Why don't we say, okay, I'm certified, I took a one-day course on responsible AI, I know what it means, and I cannot say, "I didn't know"? At least that, because many times "I didn't know" is the excuse; ignorance is the excuse.

Tim Gasper: There can't be plausible deniability in all this.

Dr. Ricardo Baeza-Yates: Exactly. Well, for example, one of the things that people working on regulation are proposing is that in the terms and conditions of software, you cannot put a clause that says, I'm not liable for anything that I may cause. That should be forbidden, right? You need to be responsible; you can't escape. It's your product. It's like if a car has a defective part and someone dies; you cannot say, "No, no, we are not liable for any mistake with the mechanical parts of the car." That is not allowed either. We have done the same in many other areas, so why not in software?

Juan Sequeda: This is a lot to unpack here with-

Dr. Ricardo Baeza-Yates: Sorry-

Juan Sequeda: No, no, this is great. We could keep going down this topic, but just one quick question before we head to our lightning round. The people who are listening are folks in the data space: executives, data consumers, data analysts, data engineers, data producers who are creating things, software engineers. For those different personas, I'm sure they're hearing this and feeling overwhelmed right now. What are the takeaways? What are the things they should and can start doing today to be responsible?

Dr. Ricardo Baeza-Yates: Yeah, for example, the first thing is, how much are you doing? Do you have an ethics committee inside your company? When you have an ethics committee inside your company, you always have a conflict of interest, because many times you have to decide between doing the better thing and doing the thing that doesn't reduce revenue, and then the decision may be biased. For that reason, last year we created the first worldwide AI ethics committee that receives requests from institutions with hard ethical issues, and we give private advice on what the best solution is, what the balance is. Because typically the issues are conflicts between two values. For example, you want to respect autonomy, the person's dignity, but at the same time you want to not harm someone. And sometimes you need to choose the right balance: you can do whatever you want, but not this, because then someone else will suffer. So we did that, because ethics committees are very hard. I think it was 2019 when Google created an AI ethics committee, and it was dissolved in one week because they chose the wrong people, so sometimes it's not easy to do this. So this would be one thing: you have the right places where you ask for permission for something. It could be access to data, for example; you can have a data committee. You can have, for example, a responsible AI committee where you do an impact assessment of the benefits and risks of a model that you are building and want to put into operation. And you need to have these conversations where not only computer scientists are involved, but also maybe people from the C-level, maybe power users of your software, because sometimes people's perception matters as much as the reality. For example, maybe your model doesn't discriminate against anyone, but the user thinks you're discriminating against him or her, and you need to discuss why that's happening, and sometimes it may be a very silly thing: oh, change something in the user interface. The model was okay.

Juan Sequeda: So I'm thinking for organizations who are listening, they're listening right now and they're like, " Man, we do not have an ethics committee and this..."

Dr. Ricardo Baeza-Yates: For example, do you have responsible AI principles? Probably not.

Juan Sequeda: Can you point us to some guidelines for companies or organizations who are listening right now and saying, " Oh, okay, I need to do this. How do I set up an ethics committee? How do I define my responsible AI principles?" What's your suggestion to people to start with this now?

Dr. Ricardo Baeza-Yates: Well, for example, you can look at the principles of the ACM; if you search for principles for responsible algorithmic systems, you will find the page. We, at AI inaudible.edu, have a page on responsible AI practice where we have basically all the things I mentioned. You have governance, you have impact assessment, you have training, and then you can see, okay, maybe we can help you figure out where you are and what you need to do. For example, the first thing we do is like a playbook: okay, this is the stage, this is what is missing, this is what you need to do. You can do it yourself, or if you don't know how, we can help you. I think there are now a few places in the world that can help you do that, but I think we're one of the top ones.

Juan Sequeda: And how would you set up an ethics committee within your like... how would you start this up internally?

Dr. Ricardo Baeza-Yates: So first you need to see if you would use it enough to justify having it, right? That's why we built our own external committee, because maybe you have a real issue twice a year, and then why would you have a standing committee for that? That's why it's better to have one on demand, so I would say use ours, it's much better. But if you want to set one up, you need to try to do it with external people, because otherwise you have this conflict of interest where you decide what is better for you, for the company, and not for the world. There's always this tension. And it could be very small, it could be five people, but very qualified people. You need to have an AI ethicist; the problem is there are very few good ones. This is something that's just starting, and it's starting so fast. I would say that ethics is always running behind technology, and when something wrong happens, ethics tries to catch up, and then technology keeps running. It stops a little, oh, one person is dead, okay, I should do something, and then it keeps going. And the same has happened in history with, for example, weapons. We have forbidden many types of weapons after we found a really bad problem, but we shouldn't wait for, for example, a civil war to do something about AI-based weapons. It has already happened with drones in Afghanistan and in Ukraine. All countries, the top countries and also non-top countries, are selling very impressive drones that sometimes make mistakes and kill civilians; that has already happened. So it's not science fiction, it has already happened in Ukraine and Afghanistan.

Juan Sequeda: This has been a fascinating conversation.

Tim Gasper: Yeah, this has been awesome.

Dr. Ricardo Baeza-Yates: Thank you.

Juan Sequeda: And what I'm really hoping, and Tim and I see this as we talk to so many people, the honest, no BS thing here, is that this is not a topic that comes up. And now every single vendor, ourselves included, is adding generative AI features, and yeah, we see them as really small things, but this thing has started to grow so quickly, you have no idea. And we can interpret that as, oh, we should be scared, or as, no, we should go into it head first, and okay, then we need to really go address it.

Tim Gasper: We should approach it with eyes wide open, and everyone has to take some level of responsibility, including us as vendors. As we're doing things like incorporating this technology, we need to be able to advise people on what the trade-offs are and be responsible citizens around that, so I think-

Dr. Ricardo Baeza-Yates: Yeah, and in that sense, you as vendors, part of the responsibility of a vendor is to check the ethics of the people you are selling the product to, because you should check how it will be used, and if you want to be responsible, that's part of your responsibility. Will this person use my product to harm people? Then I shouldn't sell it, right? How many companies think about that? They just sell it.

Tim Gasper: That's super, super interesting. That sort of ethics lineage.

Dr. Ricardo Baeza-Yates: Exactly, and this also goes for your providers. Are you buying things from providers that are not ethical? You shouldn't do that, because it's a process, the process is a line, and you said lineage: lineage goes both before and after. So the lineage of ethics is very important today, and sadly, ethics is lacking in the whole world, not only in software but also in politics, and I prefer not to continue down that path.

Juan Sequeda: All right, well with that, it's-

Tim Gasper: For the next podcast.

Juan Sequeda: That's a good segue to our lightning round, which is presented by data.world. I'm going to kick it off here first. Will the burden of responsible AI fall especially on the big tech corporations, Microsoft, Google, Meta, OpenAI?

Dr. Ricardo Baeza-Yates: Part of it, but I don't think it will be only on them; it will also be on companies that sell actual products, I would say Palantir or things like that, which could be even more complicated. I mean, they already have things that are not ethical. So yes, I think it will be all of the above in some sense. Also because today, generative AI for me, if I can draw a parallel to weapons, is like a cluster bomb. It's not something that you drop and one place gets affected; it's something where the 5 billion people connected to the internet could be affected. So this is even worse, because everyone is a potential place for harm.

Juan Sequeda: You go, Tim-

Tim Gasper: Great commentary there. All right, second question. Will the benefits of this wave of AI, particularly around generative AI, outweigh the cons?

Dr. Ricardo Baeza-Yates: That's a very good question. I don't know, because there are so many ways to use this technology; that's the problem. If we knew all the ways the technology can be used, then we could evaluate that, but we don't know. Maybe I want to be optimistic, so I will say yes, I hope the benefits will, because we will increase productivity, we will do a lot of things that are good, but who knows how people will use it. For example, there are already cases where people fine-tune a language model to talk to their dead ex-fiancée, or to their dead grandmother, and these things will really affect people's mental health. If people believe they're talking to dead people, I don't know where we can go. And that's why I love, and you can check this in the Guardian in March, that Jaron Lanier, who is one of the fathers of virtual reality and works at Microsoft, said, "I'm not afraid that AI will destroy us. I'm afraid that AI will make us insane." This was one week before the suicide. So I think, wow, this was like-

Tim Gasper: Wow-

Dr. Ricardo Baeza-Yates: I guess he never thought that in one week it would be proven.

Tim Gasper: That is quite the statement.

Juan Sequeda: All right, next question. If I'm a data engineer, I create transformations, help create a data warehouse, or I'm a data analyst, I create reporting dashboards. Do I need to be thinking about responsible AI?

Dr. Ricardo Baeza-Yates: It depends on who will use it. For example, are you using generative AI, say ChatGPT, to increase your productivity at work? Well, if anything in there is wrong, will that have an impact on, for example, the business that is using those reports? Yes. Imagine that the next day someone says, "Because of what the report said, which was false, I lost $10 million." Well, someone will be accountable for that, and probably you will lose your job, right? So if, to save time, you assume that everything the chatbot says is true, then you have a problem. And suddenly someone said, let's call these hallucinations, but these are not hallucinations. Hallucinations usually don't harm you. Many of these will harm you, or will harm someone, or the institution. So sometimes we are afraid to use the right words because of the DS, I guess.

Juan Sequeda: So what should the word be instead of hallucination?

Dr. Ricardo Baeza-Yates: Basically fake statement. This is a fake statement.

Tim Gasper: I love that you're saying this because every time I hear the word hallucination, I'm like, I feel like a marketer came up with that term. They tested it on a focus group-

Juan Sequeda: Well, obviously they did-

Dr. Ricardo Baeza-Yates: No, no, the term... Seems that the term came from OpenAI.

Juan Sequeda: Yeah, and I'm sure the marketing department, which are full of marketers right now, yeah.

Dr. Ricardo Baeza-Yates: For example, the other day I asked ChatGPT, what are your five problems? No, what are the main problems with ChatGPT? And ChatGPT listed five problems, and it didn't use the word hallucinations; the chatbot didn't use that word. It used incoherence, which is true, but incoherence also doesn't suggest much damage. We need a word that implies that maybe there's some damage in some cases.

Tim Gasper: Right, it could be harmful-

Dr. Ricardo Baeza-Yates: Fake, fake-

Juan Sequeda: Fake, fake statement-

Dr. Ricardo Baeza-Yates: Fake, fake statement, and you know sometimes fake things really harm. So it's not a good word... well, it's not a completely bad word, because it will not always harm. For example, in the first version of ChatGPT, I died in 2021. Well, it doesn't harm me, but maybe other people wouldn't like that. That was great material for my talk on ChatGPT, so thank you. Now I'm alive again in Chat-

Juan Sequeda: Or you're alive-

Dr. Ricardo Baeza-Yates: ChatGPT-4. I'm alive again, but I'm seven years older, so I don't know which I prefer.

Tim Gasper: All right-

Juan Sequeda: All right, that-

Tim Gasper: Final lightning round question here: is explainable AI necessary to achieve responsible AI?

Dr. Ricardo Baeza-Yates: So this is one of the principles of the ACM; it's called interpretability and explainability. So yes, but not all the time. With explainability, you need to assess whether you need it in order to be responsible. In some cases, if it's really hard to explain, it could even be dangerous. The typical case is a health application: if the explanation is wrong, that may be worse than no explanation. For example, you have certain symptoms and the system says, the explanation is that you have this because of this. But if you saw the famous House series, sometimes the symptoms could have like 10 different explanations, and of course you use the most popular one, the most typical one, but the world doesn't work on statistics. One problem we haven't talked about is that humans don't come from a probability distribution. The data about one person doesn't have any relevance to my data: different context, different countries, different lives. But a lot of people are using data from other people to predict one specific person. So yes, explainability is important, but you need to make sure that it's safe, too, because in some cases it may not be safe.
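For readers who want a concrete picture of one common explanation technique, here is a minimal sketch of permutation importance on a toy model: it estimates which inputs a model actually relies on by shuffling one feature at a time and measuring how much the error grows. The model and data below are invented for illustration, and, as noted above, this says nothing about whether an explanation is safe to show an end user.

```python
# Minimal sketch of permutation importance: shuffle one feature at a time and
# see how much the model's error grows. The toy model and data are invented
# for illustration; this is one explanation technique among many.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                 # three hypothetical input features
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def model(X):                                 # stand-in for a trained model
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, model(X))
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break this feature's link to y
    print(f"feature {j}: importance ~ {mse(y, model(X_perm)) - baseline:.3f}")
# Feature 0 dominates and feature 2 is ~0: the numbers show which inputs the
# model actually uses, which is what this kind of explanation provides.
```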

Juan Sequeda: All right, so Tim, we have so many notes right now here. Go take us away, Tim, the takeaways.

Dr. Ricardo Baeza-Yates: So, but one thing-

Tim Gasper: I know-

Dr. Ricardo Baeza-Yates: One thing for you-

Tim Gasper: Yeah, go ahead.

Dr. Ricardo Baeza-Yates: The problems don't come only from data, remember that. Some people believe that all the problems come from data. No, some problems come from what you are optimizing, from the people that wrote the software. There's a recent paper showing that the biases of the coders go into the code. And also, there are a lot of problems in the feedback loop between the users and the system, and there are a lot of biases in how the system presents things to the user that affect their behavior, like nudging and other things, and that is also a problem. So the problem of responsible AI is not only data; that may be the main one, but there are all these other issues that come from the machine learning model, and also from the interaction between the system and the users.

Tim Gasper: I think that is very important, what you just said. First of all, I'm glad this podcast exists and that you were able to join us here, because I think folks could listen to this hour and get a course's worth of understanding and education. People often oversimplify the problem of responsible AI; they're just like, "Oh, you've got to pick good data," or, "Oh, you just have to have a company with good culture," or something like that. It's like, no, no, no, you're far oversimplifying this. This is a complicated problem. That doesn't mean we can't address it, we have to address it, but we have to think of it like a complex system, which it is, right?

Dr. Ricardo Baeza-Yates: Yeah, exactly. It's a cultural system. So then you have to create the culture where everything works the way that you choose to be responsible.

Tim Gasper: Oh, this is awesome. So, all right, takeaways. Tim's takeaways. We started off with what is responsible AI, and you actually started with what it isn't, right? One of the things you said it isn't is ethical AI, which is a very humanizing term; we shouldn't humanize it. And it's not trusted AI, because asking do I trust it or do I not trust it, do I trust it all the time, isn't the most relevant thing here. What really is relevant is accountability. Who is responsible? Who's the person, who's the entity that's responsible? Because then we can create frameworks around governance, around principles, et cetera, to try to identify and manage that responsibility. So I thought that was very good. And I loved the example you gave: there's no Trustworthy Aviation. If you have to say that it's trustworthy aviation, then we have a problem here. I thought that was a great counterexample. Then you discussed what is irresponsible AI, and you provided some really great examples of common places where irresponsibility can happen. One of them is discrimination, probably the most well known, around gender or race, xenophobia, whatever it might be. And you gave the example from 2012 in the Netherlands, of a system to detect cheating around childcare benefits, where people lost their houses and people were pushed out of the country over this. Ultimately, not only was it found illegal, but a person took responsibility and stepped down, and then the entire government actually stepped down because of it. So that's an example of both a problem and the kind of accountability that can happen inaudible.

Dr. Ricardo Baeza-Yates: Nine years later, sadly.

Tim Gasper: Nine years later, not fast enough, right?

Dr. Ricardo Baeza-Yates: Exactly.

Tim Gasper: And so that's an example where we have to ask how we create a system where it can happen faster, right? You talked about using things like facial recognition to profile people or to do stereotyping. That's an example of the sort of spurious correlations we want to avoid; that's irresponsible. Human incompetence, right? Human design problems, not just in the data, as you mentioned, but the model selection, the model design, the things that humans actually code into the software, the systems that this plugs into. There are a lot of decisions and choices that humans make that can cause a lot of irresponsibility. Impact on the environment, obviously that's huge. A ton of compute goes into these things, both in training as well as in inference. Generative AI, the ability to create all this content: it's so easy to create fake content, fake content that looks just as real as everything else. There are all these things now on Facebook and the like, where they say, "Which of these four images is the fake image?" And the answer is a trick question, all four are fake. So I know that's a big thing. Finally, before I pass it to you, Juan, two great quotes you said, and I know that they're from other folks as well: all models are wrong, but some are useful, and data is a proxy of the problem. So Juan-

Dr. Ricardo Baeza-Yates: That one, that's mine-

Tim Gasper: Over to you. That one's yours.

Juan Sequeda: Which one?

Dr. Ricardo Baeza-Yates: Data's a proxy of the problem.
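To put a very rough number on the environmental-impact point Tim raises above, here is a back-of-envelope sketch; the accelerator count, power draw, training time, and data-center overhead are purely illustrative assumptions rather than figures from the episode or from any particular model.

```python
# Back-of-envelope training energy estimate with purely illustrative numbers.
gpus = 1000          # accelerators used for training (assumption)
power_kw = 0.4       # average draw per accelerator, in kW (assumption)
days = 30            # wall-clock training time (assumption)
pue = 1.2            # data-center overhead factor (assumption)

energy_mwh = gpus * power_kw * days * 24 * pue / 1000
print(f"~{energy_mwh:.0f} MWh for a single training run")   # ~346 MWh under these assumptions
```

And because inference runs continuously for every user request rather than once, its lifetime energy share is often larger than training's, which is why the takeaway mentions both.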

Juan Sequeda: All right. So we talked about problems; now let's talk about solutions. I think we started first with the principles, and having operational principles. And look, we already have them, like the bio principles, right? Talk about autonomy-

Dr. Ricardo Baeza-Yates: Bioethics, yes.

Juan Sequeda: Bioethics, right. So we talked about autonomy, justice, do good, not bad, where the benefit is higher than the harm. And then you really pointed us to look at what the ACM has been doing for principles, right? Legitimacy: prove that the benefit is higher than the harm. Make sure you have competence, right? Administrative competence: can we actually do this? Technical competence: do you have the people around who can actually do this? And then competence in the domain. So it's not just about the computer scientists and the data or the technical folks; you have to have folks in the domains, the doctors inaudible... lawyers. There are so many different principles, there are like nine principles. The second is around governance, and there's a big process and workflows around governance. Think about what processes we need to follow: monitor the models, look at model drift and data drift, because the data is always changing around these things. What are the actions? What should be done if something is happening? We should actually be documenting these things. Who are the people involved? They need to be trained to know how to put this in the code, in the UI, in the data. And transparency alone is not enough; we need accountability. And I love how you're being very bold in saying marketing teams should get onto this responsible AI messaging. Another solution here is regulation, and we see this in so many different parts of the world. If you're doing a big infrastructure project, you have to do an environmental impact study. Why don't we do this also for AI projects? Why don't we have certifications? Other engineering areas do this, and so you don't have the excuse to say, "Oh, I didn't know." You can't just blatantly say in your T's and C's, "Oh, we take no responsibility for this." Imagine if your car manufacturer said, "Yeah, we don't take any responsibility if an issue with the car happens." No, that does not happen, and why would it happen in software, and data, and AI? So then to wrap up, what can people do today that data leaders, data scientists, and data analysts should all think about? Do you even have an ethics committee in your company, right? And again, to be very practical, if it's not something you're going to use that often, maybe you should partner with an external team who can do that. And then actually think about whether you have responsible AI principles for your company, and looking at the ACM principles is probably the first step to do that. That's our takeaway, anything we missed?
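As one hedged illustration of the drift monitoring Juan describes under governance, here is a minimal Population Stability Index check in Python: compare the distribution a feature had at training time against what the model sees in production, and flag the shift when it crosses a commonly used, but ultimately arbitrary, threshold. The feature, the numbers, and the thresholds are illustrative assumptions, not recommendations from the episode.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample and a live sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * k / bins for k in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")   # catch values outside the training range

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            for k in range(bins):
                if edges[k] <= x < edges[k + 1]:
                    counts[k] += 1
                    break
        # A tiny epsilon keeps empty bins from producing log(0) or division by zero.
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

random.seed(1)
training_ages = [random.gauss(35, 8) for _ in range(5000)]   # feature distribution at training time
live_ages = [random.gauss(42, 8) for _ in range(5000)]       # what production traffic looks like now

score = psi(training_ages, live_ages)
print(f"PSI = {score:.3f}")
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 investigate or retrain.
if score > 0.25:
    print("Significant drift: trigger the documented review and retraining process.")
```

In a real governance workflow a check like this would run on a schedule for every monitored feature and model output, and the documented process would say who gets alerted and what happens next, which is the "what should be done if something is happening" part of the takeaway.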

Dr. Ricardo Baeza-Yates: No, I think that was a good summary.

Juan Sequeda: So I'll throw it back to you to wrap us up. Three questions. What's your advice about data and life? Second, who should we invite next? And third, what resources do you follow?

Dr. Ricardo Baeza-Yates: Let's start with the easiest one, the last one. Typically, I follow trusted people on Twitter and LinkedIn, and then, it's amazing, I'm up to date on everything. I find out very quickly about the important things that I should read. So I have a very trusted network of information related to the topics I'm interested in. Now, for the advice, I would say try to do this as soon as possible. I think you are fooling yourself if you say, "Yeah, we'll wait until someone else does it," because if someone else does it, you will be second, or third, or fourth, and then you will not be a leader in your field. We are working with companies that are leaders, and they know that the only way they can keep being leaders is to also address this soon: fintechs, telcos, insurance companies, and so on. So don't wait until it's too late, also because there are not too many people available, and the ones that are will be gone. It's also a great time because big companies are laying off people that know about these things, so capture some of them. We are doing that. Then you basically have a lot of knowledge right away, because these people have already been working three, four years on this. And who to invite next? Tough question. I would recommend, and it will be a biased recommendation, since I work with an ethics lead: if you want to continue on this topic, I would suggest my AI ethics lead, inaudible, for this conversation.

Juan Sequeda: Perfect.

Dr. Ricardo Baeza-Yates: And just amazing.

Juan Sequeda: Well, Ricardo, thank you so much for this amazing discussion. Just a quick reminder: next week, I will be at the Knowledge Graph Conference in New York, and our guest live over there will be Katariina Kari from Ikea, talking about all things knowledge graphs. And with that, Ricardo, again, thank you. Thank you so much. This was a phenomenal conversation and you opened our eyes a lot to everything.

Tim Gasper: Yes-

Dr. Ricardo Baeza-Yates: Thank you. Thank you too.

Tim Gasper: Cheers, Ricardo.

Juan Sequeda: inaudible.

Intro Voice: This is Catalog and Cocktails. A special thanks to Data. World for supporting the show, Karli Burghoff for producing, John Loins and Brian Jacob for the show music, and thank you to the entire Catalog and Cocktails fan base. Don't forget to subscribe, rate, and review, wherever you listen to your podcasts.
