Intangiblia™ in Spanish

Rosa Celeste - Governing AI from the human side: privacy, power, and accountability

Leticia Caminero Season 6 Episode 5


An automated decision can change your life in seconds: denying a loan, filtering out your résumé, or replicating your voice in a deepfake. In this conversation with Rosa Celeste we dive into how to build responsible AI that optimizes processes without turning privacy into collateral damage. We speak first-hand about what actually works: clear governance, real transparency, and interdisciplinary teams able to unite law, business, and technology to reduce risk and move fast with a level head.

We dismantle the myth that "anonymizing" is enough. We explain why re-identification is a real risk and how to counter it with data minimization, impact assessments, access controls, limited retention, and independent audits. We also tackle today's red zones: voice cloning, synthetic identity, and disinformation amplified by models that know no limits. We discuss algorithmic accountability and explainability: who answers for a decision, how to document the decisive variables, and why meaningful human review is not optional, above all in credit and recruiting, where bias hits hardest.

We pause on the regulatory landscape and on user rights: requesting explanations, objecting to automated decisions, and demanding limits on the use of one's image, voice, and sensitive data. We walk through practical tools for leaders: risk metrics, fairness testing, usage controls, robust watermarking, traceability, and training teams in actionable ethical principles. The central idea is simple and powerful: AI can be your best ally if you design it with clear rules from day one, aligned with your organization's culture and with full respect for fundamental rights.

Hit play, share this episode with your network, and tell us: what control would you like to demand of any algorithm that evaluates your life? Subscribe so you don't miss new conversations, and leave a review; your feedback helps more people find this content.

Discover Protección para la Mente Inventiva, now available on Amazon in print and Kindle formats.


The opinions expressed by the host and guests on this podcast are solely their own and do not necessarily reflect the official policy or position of any entity they may be affiliated with. This podcast should not be construed as an endorsement or a criticism of any government policy, institutional position, private interest, or commercial entity. All content is presented for informational and educational purposes.

SPEAKER_03:

A large international e-commerce company was, for example, using artificial intelligence to screen résumés. And the system analyzed those résumés, rejected women's profiles, and prioritized men's.

SPEAKER_02:

Oh, wow.

SPEAKER_03:

And they used it for some time before it finally came to light and was discontinued.

SPEAKER_04:

I'm very pleased to have a compatriot with us. She is brilliant, and she moves with ease through privacy, audits, algorithms, and ethical dilemmas on a global scale. Rosa Celeste has advised in Europe, has driven responsible artificial intelligence policies, and defends privacy as a personal and professional standard. Preparing this conversation, I found a powerful profile: legal, technological, and with a clear vision of the digital future.

SPEAKER_03:

[inaudible]

SPEAKER_04:

Thank you so much. This is your home, and here you have a compatriot to embrace, always present. Now, to the matter at hand: what would you say is the biggest misunderstanding organizations face today when adopting artificial intelligence without compromising privacy?

SPEAKER_03:

I wouldn't limit it to a single misunderstanding. First, most organizations think that implementing anonymization is enough: I remove a few fields that can identify people and that's it. The reality is that those records can be re-identified by combining them with other databases. So anonymization alone is not sufficient; you need additional technical, organizational, and policy controls over personal data. That's one. The second is transparency: what data is used, when, for what purpose, and with whom it is shared. All of that has to be communicated to the person. Done well, it's an opportunity to optimize processes at the same time. And when an organization says, "no, this is a brake, a barrier," in reality it's an opportunity.
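Her point about re-identification through combined databases can be sketched in code. This is a toy illustration, not from the episode: the dataset, the registry, and every name in it are invented, and real linkage attacks exploit far richer quasi-identifiers.

```python
# Toy linkage attack: joining an "anonymized" table with a public
# registry on quasi-identifiers (ZIP, birth year, gender) re-identifies
# individuals. All records below are invented for illustration.

anonymized = [  # direct identifiers removed, quasi-identifiers kept
    {"zip": "10001", "birth_year": 1980, "gender": "F", "diagnosis": "asthma"},
    {"zip": "10001", "birth_year": 1992, "gender": "M", "diagnosis": "diabetes"},
]

public_registry = [  # e.g. a voter roll that still carries names
    {"name": "Ana Pérez", "zip": "10001", "birth_year": 1980, "gender": "F"},
    {"name": "Luis Gómez", "zip": "10001", "birth_year": 1992, "gender": "M"},
]

def reidentify(anon_rows, registry):
    """Return (name, sensitive value) for rows whose quasi-identifier
    combination matches exactly one person in the registry."""
    hits = []
    for row in anon_rows:
        key = (row["zip"], row["birth_year"], row["gender"])
        matches = [p for p in registry
                   if (p["zip"], p["birth_year"], p["gender"]) == key]
        if len(matches) == 1:  # unique match means re-identification
            hits.append((matches[0]["name"], row["diagnosis"]))
    return hits

print(reidentify(anonymized, public_registry))
```

Both "anonymous" patients come back with a name attached, which is why she argues that removing direct identifiers is not, on its own, a privacy control.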

SPEAKER_04:

Totally. And the second point is exactly that: it's an opportunity. An opportunity to do things right, with adequate processes, for any company or organization. Let me add my positive side too; I have no other way to present it. AI has made surprising advances: it can solve problems in health, productivity, and public services, but it can also enable surveillance and abuse. From your perspective, how do we keep technological innovation from becoming an excuse to invade our private lives?

SPEAKER_03:

Of course, that's important. Part of it comes down to internal decisions: there are boards and organizations deciding what data to use, and not every organization operates within a regulatory framework. I don't want to generalize to the whole world, but at least in Europe there are authorities exercising control, and there is work on digital literacy and education. Without oversight and education the dangers multiply, and it becomes very hard to maintain that balance of respecting people's personal data.

SPEAKER_04:

Otherwise it's dead letter, because even if the norm is created, if nobody enforces it, it's dead letter. But in this case there is a government authority watching, so organizations have the ethical motive to comply, and also a practical reason: they are being monitored, and if they don't comply, there are consequences. Exactly.

SPEAKER_03:

And unfortunately, without those sanctions, without that balance, no company develops compliance at the level that is required.

SPEAKER_04:

And many organizations say they use artificial intelligence responsibly, but when it comes down to it, the compliance looks like formality rather than conviction.

SPEAKER_03:

It takes internal governance that is sufficiently strong and structured, with defined roles, with metrics and risk indicators. And for that, you need interdisciplinary teams to control or regulate artificial intelligence internally: the legal team, the marketing team, the programmers, technical security. In reality, it is impossible to design an artificial intelligence that uses personal data responsibly without an interdisciplinary team. At the same time, that lets you build greater trust and legitimacy around personal data: how this artificial intelligence is developed, how it is used, what the brakes are, and what the ethical requirements are in the design and use of artificial intelligence in relation to personal data.

SPEAKER_04:

Okay, so it's a matter of inclusivity: in the team, in the process, in the system, to create something that meets the standards.

SPEAKER_03:

Exactly, and in line with the organization's objectives, because every organization has its own objectives and culture. So you have to design a governance structure that gives you that control and, at the same time, accountability for how the technology is used. Yes, of course, today we know that most of the artificial intelligence used for voice cloning or synthetic identity doesn't give me the right to ask or demand: hey, I don't want you to clone me digitally, I don't want you to use my voice to create deepfakes, to create fictitious situations that cannot be controlled today. The technology has advanced so far that we no longer know what is real and what is fake or fictitious. Ideally, a priori, companies and organizations would commit to not using people's images or voices. But many artificial intelligence systems don't have those technical limits; most are autonomous in that sense, and then we arrive at algorithmic accountability. This is a difficult point: we have to define who is responsible for what. Is it the designer, the end user, or simply whoever deploys it? In most decisions and cases it is impossible, or very hard, to draw a straight line: someone analyzed the data, someone designed the system, someone decided how it reaches these decisions, how this autonomous control works. So today, that part, algorithmic accountability, is failing.

SPEAKER_04:

Pinning down who is at fault is harder. You have to trace it back to the responsible party.

SPEAKER_03:

And it does get complicated; laws in general are slow. But it matters, because you can bring in an expert who goes in and analyzes the system, so that it isn't left to its own limits alone.

SPEAKER_04:

And technically, today's reality looks nothing like tomorrow's.

SPEAKER_03:

Absolutely. Oh my God, it can be fatal.

SPEAKER_04:

And if you have a person who is fragile or vulnerable, they are much easier to manipulate. Totally.

SPEAKER_03:

These are the points: my identity being used to create a fictitious image or a fictitious video, a synthetic identity. All of these points need to be controlled. Exactly. There are regulations seeking a balance around data privacy, but much of it acts a posteriori rather than a priori.

SPEAKER_04:

Once the image or the video or the voice is out there, it's gone. Exactly. Okay. And it basically happened with me. Sure, there are things that aren't perfect: the way it pronounces certain words, expressions you can tell are synthetic. I might distinguish the artificial from the real, but my mother? No. And that's the thing. Totally. Because they imitate so well. And that is what sets off the alarm.

SPEAKER_03:

I also see the opportunity. When, for example, it can support you professionally, in the different projects you have, the innovation, the technology, it's marvelous. So it's a balance: being conscious of how you use it and guarding the ethical side at the same time. But the usefulness of AI is marvelous; you use it all the time.

SPEAKER_04:

Consent: if it's given, okay, perfect, but I should also be able to refuse permission. Beforehand. Always beforehand. Then there is differential privacy, and making artificial intelligence explainable and ethical, without overloading it so much that the risk grows.
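Differential privacy, mentioned here in passing, can be sketched with the classic Laplace mechanism: noise calibrated to a query's sensitivity is added before an aggregate is released. A minimal illustration, not from the episode; production systems track cumulative privacy budgets and use vetted libraries rather than hand-rolled samplers.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise by inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Invented toy data: ages of six people; release a noisy count of 40+.
ages = [34, 41, 29, 52, 47, 38]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
print(f"noisy count of people 40 or older: {noisy:.1f}")
```

Smaller epsilon means more noise and stronger privacy; the analyst sees a useful aggregate without learning whether any single person is in the data.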

SPEAKER_03:

Try to use only the data strictly necessary for the objective the artificial intelligence and the technology serve. You use the data strictly necessary, nothing more; you don't hide behind anonymization and collect everything else on top. In that way you implement the principle of minimization, and you maintain independent audits that provide continuous control over personal data and privacy in general, in relation to these technologies and artificial intelligence. It is basically privacy by design: respecting the basic principles across the life cycle of personal data and, at the same time, of the technology we are designing.
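The minimization, retention, and audit principles she lists can be made concrete in code. A minimal sketch, assuming illustrative field names and a 90-day retention window that are not from the episode:

```python
from datetime import date, timedelta

# Fields the stated purpose actually needs; everything else is dropped
# at ingestion (data minimization). Names and window are illustrative.
ALLOWED_FIELDS = {"user_id", "consent_date", "country"}
RETENTION = timedelta(days=90)  # assumed policy window

def minimize(record: dict) -> dict:
    """Keep only the allow-listed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def enforce_retention(records: list[dict], today: date) -> list[dict]:
    """Drop records older than the retention window."""
    return [r for r in records if today - r["consent_date"] <= RETENTION]

raw = [
    {"user_id": 1, "consent_date": date(2024, 1, 5), "country": "DO",
     "full_name": "...", "voice_sample": "..."},  # excess fields collected
    {"user_id": 2, "consent_date": date(2024, 5, 1), "country": "CH"},
]
kept = enforce_retention([minimize(r) for r in raw], today=date(2024, 5, 20))
print(kept)  # only user 2 survives, stripped of the excess fields
```

Running both filters at ingestion, and logging what they drop, is the kind of concrete control an independent auditor can then verify.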

SPEAKER_04:

It's a problem: if you don't need it, don't keep it. But there's a lot of resistance to that, because "no, just in case."

SPEAKER_03:

"Just in case," exactly. But then we end up amassing a database, storing data for an indefinite time, far beyond any period I can legitimately keep it. And no, "just in case" doesn't hold, because you have to justify it before the supervisory authority. If you can't justify it, the organization ends up paying the sanctions in the end. And then there is the reputational risk. Absolutely.

SPEAKER_04:

Exactly.

SPEAKER_03:

For example, public supervision is absolutely necessary. It takes resources, it takes experts in the field, and it carries real responsibility, but it is absolutely necessary. The reality is that external control cannot be replaced by an organization's internal compliance. So that balance has to be present. I insist that the regulatory side matters as much as the design of the controls. And digital rights, that is necessary too. In Europe this control exists: we have to be informed when we are interacting with an artificial intelligence. In the United States it exists at the level of consumer rights in some places. But elsewhere, at a general national level, no.

SPEAKER_04:

And to have those conversations and disclose information about what they are doing.

SPEAKER_03:

And individual control is necessary. People should be able to control how their personal data is used, including the right not to be subject to a decision made automatically by an artificial intelligence. In other places, no. And consider how many automated decisions are being made: for example, when collecting résumés, the résumés of real people, systems automatically reject profiles.

SPEAKER_04:

And no one reviews them.

SPEAKER_03:

Exactly. It's the same with credit. There are banks around the world using artificial intelligence to score applicants and say, "No, this person doesn't qualify," and nobody is reviewing it. Okay, but based on what?

SPEAKER_04:

Because it will be trained, as we've seen on various occasions, on historical data that is discriminatory. Absolutely. Because people living in a certain area, with characteristics that place them in a specific economic or sociocultural group, were denied access to credit. Exactly. And they never get a hearing; all they get is a piece of paper, which is horrible.

SPEAKER_03:

And this is one of the consequences of missing ethics in the design and use: where there was no balance and no control, discrimination persisted. The system analyzed those résumés, rejected women's profiles, and prioritized men's, without just cause. Exactly. So, always that point of keeping the balance between ethics, privacy, and objectives. Of course, there are business objectives behind it, that's understood, but you have to keep that balance and always respect the ethics. And equally, try to demand the necessary explanations of how the artificial intelligence works, how it reached a given decision. In Europe, for example, you can demand: why this automated decision, what parameters or variables did you use, how did you arrive at it? But I wouldn't say that exists in other parts of the world. And many people are discriminated against; there is discrimination via artificial intelligence that today goes uncontrolled.
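The explanation right she describes (which variables, with what weight, drove an automated decision) can be illustrated with a per-feature contribution breakdown for a toy linear credit score. The model, weights, features, and threshold are all invented for this sketch; real credit systems require far more rigorous explainability methods and meaningful human review.

```python
# Hypothetical linear credit-score model. Explaining a decision means
# reporting each variable's contribution. Everything here is invented.
WEIGHTS = {"income_k": 0.8, "debt_ratio": -45.0, "late_payments": -12.0}
BIAS = 10.0
THRESHOLD = 50.0  # approve if score >= threshold

def explain_decision(applicant: dict) -> dict:
    """Score one applicant and break the score down per variable."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return {
        "score": round(score, 1),
        "approved": score >= THRESHOLD,
        "contributions": {f: round(c, 1) for f, c in contributions.items()},
    }

print(explain_decision({"income_k": 60, "debt_ratio": 0.4, "late_payments": 2}))
```

The output shows not just the rejection but that the debt ratio and late payments drove it, which is the kind of answer a regulator or applicant could actually demand.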

SPEAKER_04:

"These are the people you will prioritize, because historically..." But all of that has a cause. And the thing about artificial intelligence is that it magnifies it. Where a person at a bank processes one application in a day, the artificial intelligence processes applications by the minute. So it magnifies the harm. How terrible.

SPEAKER_03:

And if a superintelligence ever arrives, it will need this control all the more; imagine the victims of the technology otherwise. Let me speak from my experience: you need solid fundamentals in three areas, the legal, the regulatory, and the technical. I have to keep that connection and understand, at a basic level, the technology we implement: what kinds of models exist, what an algorithm, machine learning, an artificial intelligence model actually is. Not to build them myself, but to have that technical connection. And if I'm in a multidisciplinary team, I have to communicate; I have to adapt my discourse for each audience, always around artificial intelligence and the protection of personal data. That is fundamental. And people expect you to make it easy to understand what the guidelines and rules are to get into compliance. That balance is absolutely necessary. And finally, you know there is also a geopolitical dimension to all of this. Exactly. And really, soft skills, something people say can't be taught but which you can cultivate: I'd call it professional empathy. In international work, whether public or private, I try to apply that empathy and adapt to the needs of each interlocutor.

SPEAKER_04:

Exactly.

SPEAKER_03:

Here the laws are more restrictive. In the United States, it starts at the state level, not the federal level. So the discourse changes, we have to try to understand that, and the requirements differ accordingly.

SPEAKER_04:

Exactly. But beyond that, you have to be a good communicator, adapt to your audience, and be sensitive to people's reality.

SPEAKER_03:

And the technical side I built through master's programs and certifications; without that technical connection it would be impossible to work in this field. Exactly. It would cost more, and you wouldn't have the balance you need. So those three pillars have to coexist: the conceptual, the technical, and the adaptation to the organization, which is really about the politics and the culture of the company or the organization.

SPEAKER_04:

Because context is everything. Exactly. And many things that are technically possible are insensitive to the people they affect. Now, the flash session. Pick one option and don't overthink it; just go with the first one that tempts you. Ready? Total privacy or innovation without barriers? Total privacy. Predictive health in exchange for your medical data, or generic health care to keep them secret?

SPEAKER_03:

That one is hard. That's the idea. Even though it's hard, I'll take the first option. Okay, okay. Predictive health.

SPEAKER_04:

Yes, predictive health. Eternal consent given once, or having to confirm it every week?

SPEAKER_03:

Oof, no. Confirm it every week. People will get tired of it, but confirm it every week. Because things change from one day to the next. Yes, things change. You can't give eternal consent. No, no. No, no, no. Too much risk. Yes, totally.

SPEAKER_04:

An artificial intelligence that predicts your decisions, or one that keeps your secrets?

SPEAKER_03:

Ah, the one that keeps my secrets.

SPEAKER_04:

The confidante. Yes, totally. Absolute right to be forgotten, or a transparent digital history forever?

SPEAKER_03:

No: absolute right to be forgotten. Absolutely. Okay.

SPEAKER_04:

A personal assistant based on artificial intelligence that manages your tasks, or managing them on your own?

SPEAKER_03:

No, no, no. The first option, the artificial intelligence as my assistant. Best thing that could happen to me.

SPEAKER_04:

Me too; artificial intelligence is also a very good friend of mine. It helps make so many things easier. Yes. And, Dominican to Dominican, you could bake a bizcocho and pass me a slice. Me, I'm a terrible cook; I mean, I cook to survive. Oh no, I'm good at that.

SPEAKER_03:

I do cook well.

SPEAKER_04:

I'm going to invite you to my house.

SPEAKER_03:

It's true, I cook well. And I enjoy it, I enjoy it. Yes, yes, yes.

SPEAKER_04:

And now you've given me a craving; it brought me back to us Dominicans. But it's hard to describe. It's difficult. And it's the best, for me the best. Yes, because nothing else tastes like it. Yes, yes, yes, the best.

SPEAKER_03:

When I told you I was over there: I came back about five kilos heavier.

SPEAKER_04:

And happiness, pure happiness. You know how much I miss my mom's cooking. Oh, totally.

SPEAKER_03:

Something incredible. Oh, that, there's no comparison. It just doesn't taste the same; I try, but it doesn't taste the same here. No, it's just not the same ingredients.

SPEAKER_04:

Have you tried a mango from here?

SPEAKER_03:

No, I haven't even bothered. I mean, yes, I've tried them, but they don't taste the same. It gives you a sadness in the heart; it's pure water. Imagine, in San Cristóbal, at home I grew up with four mango trees. My dad was crazy about them. So we had four trees, real Dominican mango trees, four of every kind. And when the season came, they just kept arriving. The truth is, I'm not a big fan of mango. And I thank God for that, because I was raised on pure mango. But still, it's the same with avocado. The avocado, that's what hurts me here, the avocado. Oh God, those tiny avocados. People don't understand that a Dominican avocado can stand in for meat, for dinner, for protein.

SPEAKER_04:

Here I add salt, pepper, balsamic, olive oil.

SPEAKER_03:

And I say, that's the Dominican avocado, the Caribbean one in general. Because a Cuban friend once brought me a Dominican avocado and told me, "It's the only thing I could bring you; I know you'd want more, but this is what was possible." You touched my heart, I told her. Oh, how sweet. You touched my heart. I hadn't asked her for anything, imagine.

SPEAKER_04:

No, but something like that brings such joy.

SPEAKER_03:

I remember I was living with my sister. My sister took it and wouldn't let it go, because that's like gold here. Yes, it's something incredible.

SPEAKER_04:

For me, that's what I miss most when I live abroad: the food. Same, same, same. Mango, avocado, guava.

SPEAKER_03:

You can find them here, but they don't taste the same. They're not as good. The dulce de guayaba here... ay, the dulce de guayaba. It just doesn't taste the same, not at all. No, not at all. That really is the hardest part of being away from my country. Because my dad and my mom, I can bring them over. But the food. The food.

SPEAKER_04:

My mom always brings me coffee when she visits, because it's the easiest thing to bring.

SPEAKER_03:

No, with coffee I've adapted to the Italian kind.

SPEAKER_04:

But the aroma of Dominican coffee is different. You know what's beautiful, the memories of growing up in my house: the first thing in the morning was the coffee. It's something irreplicable. You carry it in your heart. It's hard.

SPEAKER_03:

And people become dehumanized little by little when it's hard like that. Yes, because it reaches the heart. I know. And they say the second brain is there, in the gut. If you don't eat something you love, you won't be happy. No, totally, totally. And you can see it. It shows. When people eat well, people are happy. It shows, you can feel it. No, it's true. Ah, yes. For me, food is love.

SPEAKER_04:

It's love. Well, anyway. Closing the Dominican parenthesis: the last flash question. A world where your data is a currency, or one where there is simply no data that can be collected about you?

SPEAKER_03:

How abstract. Data as a currency... well, it depends. It could be a currency, but no one accesses my data unless I allow it. Ah no, never. Then the second option: no, never, never. No, no, no, no.

SPEAKER_04:

Privacy is not negotiable. No, no, no. With rights, no, no, no, no. Fundamental rights, no. Okay, okay. So grab the paddle. Here we have two options. "Futurist" means it's false, something for a distant future a hundred years away. And "true" means it's happening, about to happen, or could happen at any moment. Consent as a legal basis will disappear and be replaced by automated trust systems. Neither, I hope. Okay. That sounds awful. Platforms will offer you a total private mode, but for a monthly fee. True. A person will be able to demand that no artificial intelligence imitate their voice, face, or digital style.

SPEAKER_03:

I think some already exist. Yes, yes, it exists already. Thank God. We're not left unprotected.

SPEAKER_04:

But they exist, they do. There will be a parallel digital network with no cookies, no tracking, and no advertising. Futurist. We will live in a world without privacy and it will seem normal to us.

SPEAKER_03:

I hope not. I'm going to say no. No. No, no, no. Rejected. I reject it. No, no, no, no.

SPEAKER_04:

People will prefer to have a digital clone that represents them on social networks instead of appearing publicly themselves. Okay. Exactly. Yes, of course, because it's easier. Exact. People will start renting out their DNA to train personalized artificial intelligence models.

SPEAKER_03:

And that exists, unfortunately.

SPEAKER_04:

Very sad. Your search history will be used by banks to define your financial risk level.

SPEAKER_03:

That already exists.

SPEAKER_04:

Careful what you google! Totally. And finally: an activist movement will be born to defend the right to be forgotten.

SPEAKER_03:

It already exists.

SPEAKER_04:

It exists. But is it an absolute right? Yes?

SPEAKER_03:

The right to be forgotten. Yes, yes, yes, exactly, exactly. Here it exists; in Europe it exists. Elsewhere, in other forms. Exactly. But there are places trying. It can serve. It's all about balance, from the perspective of the law, artificial intelligence, and technology. Everything depends.

SPEAKER_04:

Yes, because if there is information that is in the public interest, say there was criminal activity or anything of that kind, there is an interest in it being remembered, in it not falling into oblivion.

SPEAKER_03:

Right: if it is of public utility, it isn't removed from the internet entirely. So this exists, and within that framework you can demand the right to be forgotten in general, but for certain categories, public-interest information for example, the right doesn't apply.

SPEAKER_04:

Perfect. And we come to the final question. What would you say to those who are creating technology and making decisions about artificial intelligence, so that they never forget that, in the end, the point is to protect humanity?

SPEAKER_03:

In the sense that, behind all the software, we are still human.

SPEAKER_04:

No, nothing like the coffee.

SPEAKER_03:

And it has been a pleasure to meet you as well.

SPEAKER_04:

Artificial intelligence can predict, optimize decisions, and automate processes, but it cannot teach us our principles. That part is human. And fortunately, there are people like Rosa who remind us of that.

SPEAKER_00:

Talking plainly about intellectual property. Did you like what we discussed today? Please share it with your network. Want to learn more about intellectual property? Subscribe now in your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn, and Twitter. Visit our website www.intangibilia.com. Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for informational purposes only and should not be considered legal advice or opinion.