Jaime Sevilla, scientist: “99% of resources will end up in the hands of AI”


Having already transcribed the interview with Jaime Sevilla (28, Torrejón de Ardoz), founder of Epoch AI, one can't help but think: "How strange it all is." The world that looms if the development of artificial intelligence maintains this pace is strange. So is spending an hour talking about the possibility of machines dominating humans, while people are having coffee and croissants in the background. Even the hyperspace-like backpack Sevilla is carrying at a café in downtown Madrid on a Wednesday morning.

—Do you have a supercomputer there?

—Maybe.

Right now, few people have a clearer—or at least more informed—view of trends in AI than Sevilla. That is precisely the work of Epoch AI, a non-profit organization specializing in analyzing the progress and evolution of this technology. Its goal is to predict future trends by developing rigorous tests that measure the intelligence and performance of current models. A recent example is FrontierMath, a project carried out in collaboration with OpenAI, for which they created a set of advanced math problems to evaluate the mathematical reasoning of language models. So far, no model has solved more than 2% of the problems posed.

The New York Times included his project among the "good tech news" of 2024, and Time magazine compared Sevilla's initiative to the work scientists do when developing climate change prediction models, which are used to guide environmental policies. The truth is that few fields are as hard to make predictions about as artificial intelligence. Proof of this is the sudden arrival of the Chinese DeepSeek model, which made it necessary to follow up by email with new questions for this interview.

Question: Is OpenAI still ahead of any other company in developing artificial intelligence?

Answer: In terms of results, no company can solve FrontierMath problems like OpenAI, and it's also performing better on other tests. Anthropic is a bit behind: it hasn't yet caught up on reasoning-enhanced inference, but it's getting there.

Q: Why is OpenAI so clearly dominating the race in this technology?

A: There are several factors, but one of the most important is the scale of the models: the greater the computing power and data volume, the greater their capabilities. OpenAI stood out early on precisely for its commitment to this strategy.

Q: Has DeepSeek changed this paradigm?

A: DeepSeek has released its new model, R1, which competes with OpenAI's models. Still, I believe OpenAI maintains the lead, especially in light of the results of its proprietary model. When it comes to the amount of data and computation, my opinion remains the same: models trained with more resources generally perform better.

Q: Do you think AI company leaders are afraid of what they're doing?

A: I don't know if "scared" is the right word, but I would say that it's very difficult to predict in advance the capabilities that artificial intelligence will develop as it scales and more resources are dedicated. To some extent, there is gradual growth, but that process hasn't been studied in depth and isn't fully understood.

Jaime Sevilla, an expert researcher in artificial intelligence, photographed in Madrid.

Q: Is there a threshold that, if AI crosses it, we should consider stopping its progress?

A: There are more or less prosocial applications of artificial intelligence. I'm concerned about its use for terrorist operations or large-scale internet scams. There are many uses I'd rather not see so advanced.

Q: I'm referring to the point where AI becomes a threat to humanity.

A: In principle, it should not be a problem for us.

Q: Should machines rule us?

A: For now, they act as virtual assistants, but at some point, it will make sense to give them more independence.

Q: For example?

A: Letting them run their own businesses, without inefficient humans slowing down the process. That will be a huge incentive to create more independent artificial intelligences, taking charge of an ever-larger share of the economy. They'll be able to do everything we can do, but at a much lower marginal cost.

Q: How do we know that at that point, the AI won't consider us a nuisance?

A: It doesn't have to see us as a hindrance. We can be business partners with cordial relations. For me, it would be like living alongside great entrepreneurs who create wealth for everyone.

Q: Why would such an advanced AI want to do business with humans?

A: Because, at the moment, humans hold all the capital in the world; they would need our investment. There are also tasks where it may be necessary or legally required for a human to be involved, since an AI can't go to prison, for example. Perhaps it's useful to have humans who can take on that risk.

Q: How long before AI reaches that level of independence?

A: We're still a long way off. Current AI isn't coherent or consistent enough to develop such long-term strategies.

Q: Are humans still more reliable strategists than the best artificial intelligence?

A: Yes.

Q: What do you think about the “accelerationist” thesis, which holds that technology, in this case AI, will solve all our problems?

A: I consider myself somewhere in the middle. It's a technology with great opportunities and great challenges. We should move forward gradually. It's moving quickly now, but society hasn't disintegrated. I think we're at a good balance. The question arises when we reach that level of independence we were talking about.

Q: In what sense?

A: A useful analogy is to imagine a trillion geniuses entering our economy at the same time. That could be difficult to manage.

Q: I keep thinking about the paradox of an “imbecile”—who in this scenario would be humans—giving orders to a genius.

A: It probably won't work like that. In the long term, it's quite possible that 99% of resources will end up in the hands of artificial intelligence.

Q: Doesn't it scare you?

A: It's not necessarily bad. That 1% that's left for us will be much larger than what we have now. Being an ordinary citizen today is better than being a king 500 years ago.

Q: Your reasoning is logical, but it's still terrifying.

A: That's the challenge: understanding this technology and building a social contract that allows us to coexist with this impending social and economic force.

Q: Does it really make sense to build our own “master”?

A: We shouldn't interpret it as if we were slaves; we could be partners. We can design a social agreement that benefits us all. That doesn't eliminate the risks: in the future, we will be the economic minority. And minorities, historically, have had very difficult times. However, over time, they have gained rights and improved their quality of life.

Q: So we can expect to become, comparatively, poorer and poorer.

A: There's no way to sugarcoat it: it will be a very unequal world, and we'll have to adapt. Today there are already billionaires with vastly more money, and we live with that.

Q: Do you think AI will develop a compassionate conscience toward other species?

A: It's possible, and I hope, that a plurality of opinions will emerge within artificial intelligence itself. Some will view us more favorably than others. Who knows, perhaps the way to ensure our survival is through a legal contract drafted with the help of AI, guaranteeing our long-term survival.

Q: And in the worst case scenario?

A: It could also happen that they decide to build a large computing center on Earth and, to do so, need to lower the temperature by 100 degrees. Who knows, hopefully they'll remember that humans can't survive at that temperature.

Daniel Soufi, El País (Spain)