Gemma Galdón, algorithm auditor: “Artificial intelligence is of very poor quality”

The founder of Éticas advises international organizations on avoiding discrimination. She is wary of the sector's inflated expectations.

Gemma Galdón, algorithm consultant and expert in ethics and artificial intelligence, in Madrid. MOEH ATITAR

“To propose that a data system is going to make a leap into consciousness is a hallucination”

Artificial intelligence is not just for engineers. You can come from the humanities and, at the same time, be a reference in the global debate about the social and ethical repercussions of these systems. Gemma Galdón (Mataró, Barcelona, 47 years old) graduated in Contemporary History and holds a doctorate in technology public policy. She is the founder and CEO of Éticas Research and Consulting, a company that examines algorithms to ensure their responsible use. "Being aware of how society has solved old problems gives me a useful perspective for working on new ones," she says in a coffee shop in Madrid. "Twelve years ago, when I got my doctorate, there were very few people in the social sciences working with technology." Her company advises European and American organizations. Her suitcase is packed: she returns shortly to New York, where she lives and where on Thursday she received one of the Hispanic Star Awards, given to agents of change in the Spanish-speaking community, at an event at the United Nations. She had to move, she says, because in the US "the market is more receptive to responsible AI."

Ask. What is it like to audit algorithms?

Answer. Well, it means inspecting artificial intelligence systems to see how they work, but above all to ensure that their impact on society is fair, that there is no discrimination. And, furthermore, that systems do what they say they do.

Q. And what problems do you encounter?

A. At first these systems are as discriminatory as society, but after a very short time they become much more discriminatory than society. What AI does is take a lot of training data and look for a pattern. And the pattern is always the white man with a stable job; in the case of banks, he will be the ideal client. Any profile that is a minority or anecdotal is eliminated from the sample. So a woman has much less chance of being diagnosed with endometriosis through AI, because historically we have not diagnosed endometriosis.

Q. There are those who say that AI cannot be thoroughly examined because not even its creators fully understand how it works, but rather that it learns on its own.

A. False. That idea of the black box is a myth, pure marketing. I think there is a certain interest on the part of the AI sector in presenting it as something magical, in making us believe it is something we cannot understand, taking away our ability to intervene. What we have seen is that we can audit when a client hires us and shows us practically everything, but also, from the outside, we can reverse engineer a system and see how it works based on its impacts.

Q. You have advised political institutions to regulate AI. What do they want?

A. What has happened in recent years is that legislators, with very good intentions, have produced very abstract, principles-based regulation, and the industry has complained about the lack of concrete practices. We have an industry born in the shadow of Silicon Valley, accustomed to the idea of "move fast and break things," without being aware that what it could break are fundamental rights or laws. Sometimes there is a certain obsession with asking for the code or the foundational models. They have never been useful to me. We are demanding a level of transparency that is not useful for auditing, for inspecting impacts. If you know there is a moment of inspection in which certain metrics are evaluated, you have to start making changes. That way we change the incentives of the technology industry so that it takes into account impact, bias, and any kind of dysfunction.

Q. Are you disappointed or satisfied with the AI law that the European Union has agreed upon?

A. It seems to me a giant step in regulation: it is the first law on these issues in the West. What disappoints me is Europe's role in going further, in creating a market linked to responsible AI. The United States and Asia, China included, are getting their act together.

General Artificial Intelligence is as close as when Plato spoke about the possibilities of other types of worlds

Q. Is everything that is presented as artificial intelligence really AI?

A. We are surrounded by very poor quality artificial intelligence. It is no longer a matter of bias; it does not do what it claims to do, and it makes decisions a human would never make. An example is the system implemented to evaluate teacher performance in the school systems of several US states. Some teachers who saw how their ratings differed between the manual and the algorithmic evaluation took it to court. The court ordered an audit, which found that the only inputs taken into account to decide whether you are a good teacher are your students' results in mathematics and language. It's a glorified Excel. If the principals of those schools had been offered this as a spreadsheet that records results in mathematics and language, they would never have bought it.

Q. Will responsible AI prevail?

A. I am optimistic. When we audit, we find biased systems that also perform poorly. Artificial intelligence is of very poor quality, and at some point the industry will have to do better. These systems were born from entertainment tools like Netflix, which can afford a high margin of error. If the movie Netflix recommends is not the one you want to watch next, nothing happens. But if AI is to work in medicine, recommending a treatment; or in personnel selection, deciding whom we hire or fire; or in the allocation of public resources, it has to work well. Right now, the AI we accept is not only biased, it also doesn't work well. The good news is that both problems are solved at the same time: when the problem of bias is addressed, the other inefficiencies are addressed too.

Gemma Galdón, on November 27 in Madrid. MOEH ATITAR

Q. The departure and reinstatement of Sam Altman as CEO of OpenAI has been linked to an alleged spectacular advance toward Artificial General Intelligence (AGI), or superintelligence, something that would threaten humanity. Do you believe it?

A. Artificial general intelligence is as close as when Plato spoke about the possibilities of other kinds of worlds and lives. Humanity has always dreamed of automatically reproducing human consciousness. We have always been able to dream up science fiction futures. The current debate about AGI has nothing to do with today's technological capabilities.

Q. Aren't machines going to surpass humans?

A. The way we humans think, creativity, the new, has nothing to do with AI. A very simple exercise: if we give a system all of Picasso's work before 1937 and ask it what Picasso's next painting will be, it will produce anything at all. And in 1937 he painted Guernica. People evolve in the way we express ourselves, love, work, write, create. To propose that at some point a statistical, mathematical data system will make a leap into consciousness is a hallucination.

Q. When ChatGPT invents answers to questions, that is also called a hallucination. It's unreliable, right?

A. There is the case of a lawyer who works defending victims of pedophilia, and ChatGPT writes a biography describing him as a pedophile. Why? Because his name appears alongside those words most of the time, or more often with those words than with any others, so the system associates the words with him, and that's it.

At some point we have to consider removing polluting technologies from circulation, such as cryptocurrencies.

Q. You study the social impact of AI. What about the ecological impact? Data centers have become major guzzlers of water and energy.

A. It makes no sense that, right now, when an environmental audit is done at your company, they come to see what kind of light bulbs you have but don't look at where the servers are and how far the information has to travel. There has been no will to quantify the environmental impact of data processes, nor to encourage the industry to keep servers close to where the information is produced. It is a debate we have not had yet. In the era of climate change, it makes no sense that almost everyone talks about technology as the solution and not as one of the problems.

Q. So let's not even talk about cryptocurrencies, with everything they consume.

A. Just as we are removing polluting cars from the streets, at some point we have to consider removing polluting technologies from circulation. We will have to start banning blockchain architectures when no social value is perceived in them. What cryptocurrencies provide is a speculative tool, an investment mechanism closer to a pyramid scheme... If they were saving lives, I would say: look, it's still justified.

Ricardo de Querol

He is deputy director of EL PAÍS. He has been director of 'Cinco Días' and 'Tribuna de Salamanca'. A graduate in Information Sciences, he has practiced journalism since 1988. He worked at 'Ya' and 'Diario 16'. At EL PAÍS he has been editor-in-chief of Sociedad, 'Babelia' and the digital desk, as well as a columnist. Author of 'La gran fragmentación' (Arpa).