POPE FRANCIS PARTICIPATES IN THE G7 SESSION ON ARTIFICIAL INTELLIGENCE

SPEECH OF THE HOLY FATHER FRANCIS

Borgo Egnazia (Apulia - Italy)
Friday, June 14, 2024

A fascinating and tremendous instrument

Dear ladies, distinguished gentlemen:

I address you today, leaders of the G7 Intergovernmental Forum, with a reflection on the effects of artificial intelligence on the future of humanity.

«The Holy Scripture testifies that God has given human beings his Spirit so that they might have "skill, talent and experience in the execution of all kinds of work" (Ex 35:31)» [1]. Science and technology are, therefore, an extraordinary product of the creative potential that human beings possess [2].

Now, artificial intelligence originates precisely from the use of this creative potential that God has given us.

Artificial intelligence, as we know, is an extremely powerful tool, used in numerous areas of human activity: from medicine to the world of work, from culture to the field of communication, from education to politics. And it is fair to assume, then, that its use will increasingly influence our way of living, our social relationships and, in the future, even the way we conceive our identity as human beings [3].

The topic of artificial intelligence, however, is often perceived in an ambivalent way: on the one hand, it is exciting because of the possibilities it offers; on the other hand, it causes fear of the consequences that could follow. In this regard, we could say that all of us, although to different degrees, are crossed by two emotions: we are enthusiastic when we imagine the progress that can be derived from artificial intelligence but, at the same time, we are afraid when we see the dangers inherent in its use [4].

We certainly cannot doubt that the arrival of artificial intelligence represents an authentic cognitive-industrial revolution, which will contribute to the creation of a new social system characterized by complex epochal transformations. For example, artificial intelligence could allow a democratization of access to knowledge, the exponential progress of scientific research and the possibility of delegating exhausting jobs to machines; but, at the same time, it could bring with it greater inequality between advanced nations and developing nations, between dominant social classes and oppressed social classes, thus endangering the possibility of a "culture of encounter" and favoring a "throwaway culture".

The magnitude of these complex transformations is obviously linked to the rapid technological development of artificial intelligence itself.

It is precisely this powerful technological advance that makes artificial intelligence a fascinating and tremendous instrument at the same time, and requires reflection commensurate with the situation.

In that direction, one could perhaps start from the realization that artificial intelligence is above all an instrument. And it comes naturally to affirm that the benefits or harms it brings will depend on its use.

This is true, because it has been this way with every tool built by humans since the beginning of time.

Our ability to build tools, in a quantity and complexity that has no equal among living beings, speaks to us of a techno-human condition. Human beings have always maintained a relationship with the environment mediated by the instruments they have produced. It is not possible to separate the history of man and of civilization from the history of these instruments. Some have wanted to read into all of this a kind of deprivation, a deficit of the human being, as if, because of this lack, he were forced to give life to technology [5]. An attentive and objective look actually shows us the opposite. We live in a condition of ulteriority with respect to our biological being; we are beings inclined towards what lies outside of us, indeed radically open to the beyond. This is where our openness to others and to God originates; from here comes the creative potential of our intelligence in terms of culture and beauty; from here, finally, our technical capacity originates. Technology is thus a trace of our ulteriority.

However, the use of our tools is not always directed unequivocally to good. Even when the human being feels within himself a vocation to the beyond and to knowledge lived as an instrument of good at the service of his brothers and sisters, and of the common home (cf. Gaudium et spes, 16), this does not always happen. What's more, not infrequently, precisely thanks to its radical freedom, humanity has perverted the goals of its own being, becoming an enemy of itself and of the planet [6]. Technological instruments can suffer the same fate. Only if their vocation at the service of humanity is guaranteed will technological instruments reveal not only the greatness and unique dignity of the human being, but also the mandate that the latter has received to "cultivate and care for" the planet and all its inhabitants (cf. Gen 2:15). Talking about technology is talking about what it means to be human and, therefore, about our unique condition between freedom and responsibility; that is, it means talking about ethics.

In fact, when our ancestors sharpened flint stones to make knives, they used them both to cut hides for clothing and to eliminate each other. The same could be said of other much more advanced technologies, such as the energy produced by the fusion of atoms, as occurs in the Sun, which could be used to produce clean and renewable energy, but also to reduce our planet to ashes.

But artificial intelligence is an even more complex tool. I would say that it is a sui generis tool. Thus, while the use of a simple tool - such as a knife - is under the control of the human being who uses it, and its proper use depends only on him, artificial intelligence, on the other hand, can adapt autonomously to the task assigned to it and, if designed in this way, could make decisions independently of the human being in order to achieve the established goal [7].

It is always worth remembering that the machine can, in some ways and with these new means, choose through algorithms. What the machine does is a technical choice between several possibilities, based on well-defined criteria or statistical inferences. The human being, on the other hand, not only chooses, but in his heart is capable of deciding. A decision is what we might call the more strategic element of a choice, and it requires a practical evaluation. At times, and often precisely in the difficult task of governing, we are also called to make decisions that have consequences for many people. Human reflection has always spoken in this regard of wisdom, the phronesis of Greek philosophy and, at least in part, the wisdom of Holy Scripture. Faced with the wonders of machines, which seem to know how to choose independently, we must be very clear that the decision always falls to the human being, even with the dramatic and urgent tones with which it sometimes appears in our lives. We would condemn humanity to a hopeless future if we took away people's ability to decide for themselves and their lives, condemning them to depend on the choices of machines. We need to guarantee and protect a significant space of human control over the selection process used by artificial intelligence programs. Human dignity itself is at stake.

Precisely on this subject, allow me to insist that, in a tragedy like that of armed conflicts, it is urgent to rethink the development and use of devices such as the so-called "lethal autonomous weapons", in order ultimately to ban their use, beginning now with an effective and concrete commitment to introduce ever greater and more meaningful human control. No machine should ever choose to end the life of a human being.

It must also be added that the good use, at least of the advanced forms of artificial intelligence, will not be fully under the control either of the users or of the programmers who defined their initial objectives at the time of developing them. And this is all the more true because it is very likely that, in the not-too-distant future, artificial intelligence programs will be able to communicate directly with one another in order to improve their performance. And if, in the past, human beings who used simple tools saw their existence shaped by those tools - the knife allowed them to survive the cold, but also to develop the art of war - now that human beings have fashioned a complex instrument, they will see it shape their existence all the more [8].

The basic mechanism of artificial intelligence

Let me now briefly dwell on the complexity of artificial intelligence. Basically, artificial intelligence is a tool designed to solve a problem and works through a logical chain of algebraic operations, carried out based on categories of data, which are compared to discover correlations and improve its statistical value through a self-learning process based on the search for additional data and the self-modification of its calculation procedures.
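
To make the mechanism just described more tangible, the following minimal sketch in Python (purely illustrative, with invented data and not drawn from any real system) shows a program that compares data, discovers a correlation and self-modifies its own calculation coefficients as it learns.

    # A deliberately minimal, hypothetical sketch of the mechanism described above:
    # a chain of algebraic operations over data, which the program adjusts by itself
    # as it compares its predictions with the data it receives.

    def learn_correlation(pairs, steps=1000, rate=0.01):
        """Fit y ~ a*x + b by repeatedly comparing predictions with the data
        and self-modifying the coefficients (a, b)."""
        a, b = 0.0, 0.0
        for _ in range(steps):
            for x, y in pairs:
                error = (a * x + b) - y   # compare the prediction with the datum
                a -= rate * error * x     # self-modification of the calculation
                b -= rate * error
        return a, b

    # Invented example data: the program "discovers" the correlation hidden in them.
    data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1)]
    print(learn_correlation(data))  # approximately (2.0, 0.0)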

Artificial intelligence is designed in this way to solve specific problems, but for those who use it the temptation to obtain, from the specific solutions it proposes, general deductions, even of an anthropological nature, is often irresistible.

A good example is the use of programs designed to assist magistrates in decisions regarding the granting of house arrest to prisoners who are serving a sentence in a penal institution. In this case, the artificial intelligence is asked to predict the probability that a convicted person will commit the same crime again, on the basis of predetermined categories (type of crime, behavior in prison, psychological evaluation and others), and for this purpose the artificial intelligence is given access to categories of data related to the private life of the detained person (ethnic origin, educational level, credit line, etc.). The use of such a methodology - which at times risks de facto handing over to a machine the last word on a person's fate - can implicitly carry with it the prejudices inherent in the categories of data used by artificial intelligence.

Being classified in a certain ethnic group or, more prosaically, having committed a minor infraction years ago - not having paid a parking fine, for example - will in fact influence the decision about whether to grant house arrest. By contrast, human beings are always evolving and are capable of surprising us with their actions, something that a machine cannot take into account.
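
To illustrate this point concretely, here is a purely hypothetical toy in Python, not modelled on any real judicial tool: once a person is reduced to fixed categories of data, even a marginal old category such as an unpaid parking fine continues to weigh mechanically on the outcome, however much the person may have changed. The category names and weights are invented for the example.

    # Hypothetical weights for invented categories, chosen only for illustration.
    WEIGHTS = {
        "violent_offence": -3.0,
        "good_conduct_in_prison": +2.0,
        "old_parking_fine": -1.0,   # a marginal old category that still counts
    }

    def house_arrest_score(person):
        """Sum the weights of the categories the person is classified under."""
        return sum(WEIGHTS[c] for c, present in person.items() if present)

    person = {"violent_offence": False,
              "good_conduct_in_prison": True,
              "old_parking_fine": True}
    print(house_arrest_score(person))  # 1.0: the old fine still lowers the result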

It should also be noted that applications analogous to the one we are talking about will multiply thanks to the fact that artificial intelligence programs will be increasingly equipped with the ability to interact directly with human beings (chatbots), holding conversations and establishing relationships of closeness with them, often very pleasant and reassuring, since such artificial intelligence programs are designed to learn to respond, in a personalized way, to the physical and psychological needs of human beings.

Forgetting that artificial intelligence is not another human being and that it cannot propose general principles is sometimes a serious mistake, one that stems either from the deep need of human beings to find a stable form of companionship, or from a subconscious presupposition, namely the belief that observations obtained through a calculation mechanism are endowed with the qualities of indisputable certainty and undoubted universality.

This assumption is, however, misleading, as an examination of the intrinsic limits of computation itself demonstrates. Artificial intelligence uses algebraic operations that are carried out in a logical sequence (for example, if the value of X is greater than that of Y, multiply X by Y; otherwise, divide X by Y). This method of calculation - called an algorithm - is neither objective nor neutral [9]. Being based on algebra, it can examine only realities formalized in numerical terms [10].
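
Written out as code, the toy rule quoted above looks like this (a trivial sketch, included only to show what such a purely formal sequence of algebraic steps is): it can only ever operate on what has already been expressed in numbers.

    def toy_algorithm(x: float, y: float) -> float:
        # The rule quoted in the text: if X is greater than Y, multiply them;
        # otherwise, divide X by Y.
        if x > y:
            return x * y
        return x / y

    print(toy_algorithm(6, 3))  # 18
    print(toy_algorithm(2, 4))  # 0.5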

We must not forget, moreover, that the algorithms designed to solve very complex problems are so sophisticated that it is very difficult even for the programmers themselves to understand exactly how they achieve their results. This tendency towards sophistication risks accelerating significantly with the introduction of quantum computers, which operate not with binary circuits (semiconductors or microchips) but according to the highly articulated laws of quantum physics. Moreover, the continual introduction of ever more efficient microchips is already one of the reasons why the use of artificial intelligence is dominated by the few nations that possess them.

The quality of the responses that artificial intelligence programs can give, whether more or less sophisticated, ultimately depends on the data they handle and how they structure it.

Finally, I would like to point out one last area in which the complexity of the mechanism of so-called generative artificial intelligence clearly emerges. No one doubts that magnificent instruments for access to knowledge are available today, which even allow self-learning and self-tutoring in a large number of fields. Many of us have been surprised by the applications easily accessible online for composing a text or producing an image on any topic or subject. This especially attracts students who, when they have to prepare assignments, make excessive use of them.

These students, who are often far better prepared for and accustomed to the use of artificial intelligence than their teachers, forget, however, that so-called generative artificial intelligence, strictly speaking, is not really "generative". In reality, what it does is search big data for information and package it in the style that has been requested of it. It does not develop new concepts or analyses. It repeats what it finds, giving it an attractive form. And the more often it finds a notion or hypothesis repeated, the more it considers it legitimate and valid. More than "generative", it could be called "reinforcing", in the sense that it reorders existing content, helping to consolidate it, often without checking whether it contains errors or prejudices.
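
As a deliberately crude sketch of the "reinforcing" behaviour described above (and not of how real generative models are actually built), the following Python toy simply returns whatever claim appears most often in the texts it is given, so that the most repeated notion is treated as the most valid. The corpus is invented for the example.

    from collections import Counter

    # Invented corpus: one claim is simply repeated more often than the other.
    corpus = [
        "claim A", "claim A", "claim A",
        "claim B",
    ]

    def reinforcing_answer(texts):
        """Return the most frequently repeated claim found in the data."""
        return Counter(texts).most_common(1)[0][0]

    print(reinforcing_answer(corpus))  # "claim A"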

In this way, there is not only the risk of legitimizing the spread of fake news and strengthening the advantage of a dominant culture, but also of undermining the educational process at its very beginning (in nuce). Education, which should give students the possibility of authentic reflection, risks being reduced to a repetition of notions, which will increasingly be considered incontestable simply because they are continually presented [11].

Putting the dignity of the person back at the center in view of a shared ethical proposal

To what we have already said, a more general observation must be added. The era of technological innovation that we are going through is, in fact, accompanied by a particular and unprecedented social situation, in which it is increasingly difficult to find points of agreement on the major issues of social life. Even in communities characterized by a certain cultural continuity, heated debates and clashes frequently arise that make it difficult to reach agreements and shared political solutions oriented towards the search for what is good and just. Beyond the complexity of the legitimate visions that characterize the human family, a factor emerges that seems to unite these different positions: there is a loss, or at least an obscuring, of the sense of what is human, and an apparent insignificance of the concept of human dignity [12]. It seems that the value and deep meaning of one of the fundamental categories of the West is being lost: the category of the human person. And so it is that, in this era in which artificial intelligence programs question the human being and his actions, precisely the weakness of the ethos linked to the perception of the value and dignity of the human person risks being the greatest wound (vulnus) in the implementation and development of these systems. We must not forget that no innovation is neutral. Technology is born with a purpose and, in its impact on human society, it always represents a form of ordering of social relations and a disposition of power, which enables someone to carry out certain actions while preventing others from doing so. This constitutive dimension of power in technology always includes, in a more or less explicit way, the worldview of those who have created and developed it.

This also applies to artificial intelligence programs. If they are to be instruments for building up the good and a better future, they must always be ordered to the good of every human being. They must have an ethical inspiration.

The ethical decision, in fact, is one that takes into account not only the results of an action, but also the values at stake and the duties that derive from those values. This is why I welcomed the signing in Rome, in 2020, of the Rome Call for AI Ethics [13] and its support for that form of ethical moderation of algorithms and artificial intelligence programs that I have called "algorethics" [14]. In a plural and global context, in which different sensitivities and plural hierarchies of values are also expressed, it would seem difficult to find a single hierarchy of values. Yet in ethical analysis we can also resort to other kinds of instruments: if we find it difficult to define a single set of global values, we can nevertheless find shared principles with which to face and resolve possible dilemmas and conflicts in life.

For this reason the Rome Call was born. The term "algorethics" condenses a series of principles which have proven to be a global and pluralistic platform capable of gaining the support of cultures, religions, international organizations and the major companies that are protagonists of this development.

The politics that is needed

We cannot, therefore, hide the concrete risk, inherent in its fundamental mechanism, that artificial intelligence may limit our vision of the world to realities that can be expressed in numbers and enclosed in pre-established categories, eliminating the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models. The technological paradigm embodied by artificial intelligence then risks giving way to a much more dangerous paradigm, which I have already identified by the name of the "technocratic paradigm" [15]. We cannot allow a tool as powerful and indispensable as artificial intelligence to reinforce such a paradigm; rather, we must make artificial intelligence a bulwark precisely against its expansion.

And it is precisely here that political action becomes urgent, as the encyclical Fratelli tutti recalls. Certainly, «for many people today, politics is a bad word, and it cannot be ignored that behind this fact are often the errors, corruption and inefficiency of some politicians. Added to this are the strategies that seek to weaken politics, to replace it with the economy or to dominate it with some ideology. But can the world function without politics? Can there be an effective path towards universal brotherhood and social peace without good politics?» [16].

Our answer to these last questions is: no! Politics is needed! I want to reiterate on this occasion that, «in the face of so many petty forms of politics focused on immediate interests [...], political greatness is shown when, in difficult moments, one acts on the basis of great principles and thinking of the common good in the long term. It is very difficult for political power to assume this duty in a national project, and even more so in a common project for present and future humanity» [17].

Dear ladies, distinguished gentlemen:

My reflection on the effects of artificial intelligence on the future of humanity thus leads us to consider the importance of a "healthy politics" in order to look at our future with hope and confidence. As I have said on another occasion, «world society has serious structural flaws that cannot be resolved with patches or merely occasional quick fixes. There are things that must be changed through fundamental rethinking and major transformations. Only a healthy politics could lead this, bringing together the most diverse sectors and the most varied forms of knowledge. In this way, an economy integrated into a political, social, cultural and popular project that seeks the common good can "open the way to different opportunities, which do not imply stopping human creativity and its dream of progress, but rather directing that energy along new channels" (Laudato si', 191)» [18].

This is precisely the case of artificial intelligence. It is up to each person to make good use of it, and it is up to politics to create the conditions so that this good use is possible and fruitful.

Thank you.


[1] Message for the 57th World Day of Peace (January 1, 2024), 1.

[2] Cf. ibid.

[3] Cf. ibid., 2.

[4] This ambivalence was already noted by Pope Saint Paul VI in his Address to the staff of the "Automation Center for Linguistic Analysis" of the Aloisiano of Gallarate (June 19, 1964).

[5] Cf. A. Gehlen, L'uomo. La sua natura e il suo posto nel mondo, Milan 1983, 43.

[6] Encyclical Letter Laudato si' on the care of our common home (May 24, 2015), 102-114.

[7] Cf. Message for the 57th World Day of Peace (January 1, 2024), 3.

[8] The ideas of Marshall McLuhan and John M. Culkin are particularly relevant to understanding the consequences of the use of artificial intelligence.

[9] Cf. Address to the participants in the Plenary of the Pontifical Academy for Life (February 28, 2020).

[10] Cf. Message for the 57th World Day of Peace (January 1, 2024), 4.

[11] Cf. ibid., 3 and 7.

[12] Cf. Dicastery for the Doctrine of the Faith, Declaration Dignitas infinita on Human Dignity (April 2, 2024).

[13] Cf. Address to the participants in the Plenary of the Pontifical Academy for Life (February 28, 2020).

[14] Cf. Address to the participants in the Congress "Promoting Digital Child Dignity - From Concept to Action" (November 14, 2019); Address to the participants in the Plenary of the Pontifical Academy for Life (February 28, 2020).

[15] For a broader exposition, I refer to my Encyclical Letter Laudato si' on the care of our common home (May 24, 2015).

[16] Encyclical Letter Fratelli tutti on fraternity and social friendship (October 3, 2020), 176.

[17] Ibid., 178.

[18] Ibid., 179.