Bishop Tighe: ‘Antiqua et Nova’ offers guidance on ethical development of AI


As the Holy See releases a document on artificial intelligence, the Secretary of the Dicastery for Culture and Education tells Vatican News about AI’s extraordinary potential and the need for humanity to guide its development with collective responsibility, so that it may be a blessing for all people.

The Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education released a document on Tuesday, January 28, entitled “Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence.”

The document seeks to offer guidance for Catholic institutions and humanity as a whole regarding the ethical development and use of AI, according to the Secretary of the Dicastery for Culture and Education.

Speaking to Vatican News, Bishop Paul Tighe said Antiqua et Nova is not the final word on AI but rather hopes to contribute to the debate by providing points for consideration.

“There is a broader understanding of intelligence, which is about our human capacity to find purpose and meaning in life,” he said. “And that is a form of intelligence, which machines can’t really replace.”

Here is the full transcript of the interview with Bishop Tighe:

Q: The Holy See has just released a document entitled “Antiqua et Nova: Note on the Relationship Between Artificial Intelligence and Human Intelligence.” What would you say is ‘new’ in this document and what does it hope to tell the world, especially the Church?

This document is bringing together a lot of reflections that have been developing organically over the last number of years. AI has been on the agenda for about ten years. It’s been around for longer, and it’s been discussed for longer, but it’s hit the public consciousness over the last ten years and very particularly in the last year or so with the emergence of ChatGPT, which put AI tools in the hands of ordinary users.

What we’re trying to do at the moment is to bring together the reflections that have been emerging from the Church, from various Church organizations. Here at the Vatican, we have messages for the World Day of Peace and a Message for the World Day of Communications. The Pontifical Academy for Life has been working on this issue with the Rome Call, the Pontifical Academies have been convening scientists to talk about AI, and we’ve been dealing with the question of education and AI. The idea is to bring all of this together into a synthesis that unites the different perspectives that have been emerging organically, and to put them in one place.

It’s also not the final word; that’s the first thing to be said very clearly. This is something that we’re going to be living with, that’s going to be emerging. But what it is trying to do is to offer people some perspectives from which they can begin to think critically about AI and its potential benefits for society, and then to alert people somewhat to what we need to think about to ensure that we don’t inadvertently create something, or allow something to be created, that could be damaging to humanity and to society.

I would say there’s a certain cautionary element here. Many of us at the beginning of social media were very quick to embrace its extraordinary potential. We didn’t necessarily see the side effects that emerged in terms of polarization, fake news, and other issues.

We want to welcome something that has great potential for human beings. We want to see that potential, and at the same time be attentive to the possible downsides. I think that’s what we’re trying to do here. One day you read headlines in the newspapers that AI is going to be the salvation of us all. The next day we’re reading that it’s going to be the annihilation and the end of the world.

We’re trying to offer people a more balanced approach. The document focuses on a number of things. There are the headline issues that everybody has thought about: issues about the future of work, about war, about deep fakes, about inequality. And there are ethical issues and societal issues that we want to look at.

But in addressing those, we’re also trying to focus on a more basic question about what it means to be human, the anthropological issue of what it means to be human. What is it that gives human life value, purpose, and meaning? We recognize that AI systems can enhance and augment certain parts of our humanity, that is, our ability to reason, to process, to discern, to discover, to see patterns, to make innovations. It can certainly enhance that.

We also want to say that that type of intelligence is not the only type of intelligence. There is a broader understanding of intelligence, which is about our human capacity to find purpose and meaning in life. It’s interesting that many of the people working in AI are very clear that they want to put AI at the service of human good, that they want to have person-centered AI; they want AI for humanity. All these titles are there.

Part of the question we have to ask is: what is it that is good for humanity? What is it that promotes human well-being? And that is a form of intelligence, which machines can’t really replace. We have to understand that in the Catholic tradition, which is rooted in our own philosophical traditions, not just in Catholicism, our understanding of intelligence is more than simply reasoning, calculation, and processing, but includes also that capacity to look for purpose, meaning, and direction in our lives.

The document tries to open up that wider understanding of intelligence in terms of a number of categories. One, it says, is going beyond pure rationality and moving on to issues, like the fact that a lot of the way we grow as human beings is in dialogue and debate with others. Relationality becomes a key part of what it is to have human intelligence: our ability to learn from others. It’s also about embodiment. We’re learning more and more that our minds are not separate from our bodies. They are not something that can simply be uplifted and put onto a computer. They’re organic. We learn through doing. We learn through our emotions. We learn through our intuitions.

These are important for the human wisdom that grows out of all of that. Calculation is a part of that, but it’s not the whole story. And finally, I think what we’re concerned with always is searching for ultimate truths, for what it is that gives shape, purpose, and meaning in life. AI may be able to assist us with certain elements of that, but in the ultimate analysis, it’s a type of intellectual commitment that goes beyond something that can be done simply by a machine.

Q: AI development is evolving at a rapid pace. Why has the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education decided to release this document at this moment?

The Vatican has been attentive to this, and not just the Holy See but the Catholic Church more broadly, and many Catholic universities have been leading reflection on AI and its importance. If we’re honest, though, what prompted the document is the increased public attention to AI in the last year and a half, with the advent of ChatGPT and other easily used artificial intelligence systems, which has given the question a new urgency.

Certainly from our perspective, within the world of education, all educators are asking questions about the potential for AI to help in education and the risks if it somehow de-personalizes the nature of education. We’re also responding to questions put to us during ad Limina visits, since the bishops want some orientation.

This document draws together many of those other initiatives and gives them a unity of vision, relating the ethical issues to that more fundamental anthropological vision of what it is that makes us human.

It was interesting that the United Nations has been trying to work on overall systems for the governance of AI. At one stage they said, in effect, ‘These are obviously questions about the future of humanity, but we can’t really address them, because there are too many different views about it.’

Also, UNESCO said that AI—and this was the one that struck me very strongly—is leading to what they call an anthropological disruption. Silicon Valley loves the language of disruption, of breaking down to reinvent. But here we’re talking about the nature of what it is to be human and what it is that makes human life satisfactory, so it becomes very important that we reflect critically on that, and that we don’t bypass the question of the ultimate meaning of life.

That’s where I think issues that emerge strongly in this document are ones about the risk of increased inequality with AI. You can see this, generally, in what has happened with digitalization, which has concentrated extraordinary wealth and power in the hands of a very small number of people, for which they are not necessarily accountable to other institutions. So, how do we make sure that AI doesn’t serve to fracture the unity of the human family—a unity that is economic, but is also about access to power and information?

One of the areas where there is extraordinary potential for AI is healthcare. But we know that healthcare already tends not to be very fairly distributed. Will this lead to further inequalities in that area? A lot of our reflection, and the timing of this document, comes from the need to have something in place to address the debate.

This is not the final word. It can’t be the final word, because this is an emerging area. But we’re also trying to make sure that we’re putting down some markers, some points that people who want to engage with the debate can grasp and work with.

It is written for the Church and for Catholic institutions, but it is also offered to all people, to say that this is something that is going to have a huge impact on the future of humanity. Let’s think about it; let’s add our voices to it. And let’s not feel that, because it’s technologically quite complicated, we should somehow hand over competence for the bigger questions, which are about our future as human beings.

Q: In the document overall, there seems to be a recognition of AI’s potential, accompanied by an undertone of caution about its misuse. Isaac Asimov’s Robot series of novels comes to mind when thinking about humanity’s ultimate relationship with AI. Would you say that the document takes a more embracing or a more cautionary view on AI?

I hope it takes a middle ground, not embracing any of the apocalyptic visions. Neither is it trying to imagine that this is going, of itself, to resolve all human problems. It’s trying to see the potential and celebrate the extraordinary achievement that AI is. It’s a reflection on humanity’s capacity to learn, to innovate, to develop, which is a God-given capacity.

We want to celebrate that. But at the same time, it’s saying: we know from past experience that so many wonderful innovations with great potential also became problematic, for a number of reasons. Problematic because maybe there were inherent flaws within the systems themselves. Problematic because people could use the same technology for very good or very bad purposes. Problematic, at times, because the systems—and we’re thinking of AI here—have been developed within a particular commercial and political environment and may already be marked by the values of those environments.

We want to think critically about ensuring that AI will ultimately be harnessed by humanity, used by humanity in a way that ensures that it realizes its potential to be good for all human beings.

We had a speaker here recently, Carlo Ratti, an architect. He was talking about technology, and he quoted an American philosopher and architect, Buckminster Fuller, who said of all technology: ‘We have the choice either to be architects or victims.’ In a sense, this document is inviting people to make sure that we hold people responsible, so that we are effectively architects—planning and determining that AI will be used for good, not just leaving it to random factors, to commercial considerations, to political advantage. Humanity needs to have ownership of the processes, and to be attentive to ensuring that there is a sense of responsibility.

And that’s where Asimov’s Robot series comes in. Where will the responsibility lie? AI machines will do extraordinary things. We won’t always be able to understand how they’re doing them. They’re developing a capacity to reprogram themselves and advance forward. So, what we have to do is ask: where is the responsibility? Many people in the industry now talk about AI being ‘ethical by design’—that you should think from the beginning: what are the problems? What are the difficulties? How do we plan in a way that avoids problems? That means asking how we make it secure, so that it works well and doesn’t malfunction. How do we ensure that it’s not easily exploited by people who would use it for bad purposes? How do we ensure that the databases which are conditioning AI are actually reflective of the whole of human experience, not just what has already been digitalized? How do we ensure that it is something that reflects the best of us as humans?

Therefore, we always try to hold responsible those who are designing, those who are planning, those who are developing, but also those who are using AI. This is the area of layering out responsibility. In the AI field, one very interesting development is that some professional associations of engineers, and others working in the area, are creating their own codes of ethics. They have the technical competence to develop the technology, but they’re asking the questions: what is it going to be used for, how will it be used, and how do we ensure that it is held accountable to the broader human community?

Q: If you could highlight one aspect of the document, what would it be?

I’m not sure it’s an aspect of the document I would want to highlight, but what I would want to say to people who are likely to read this text, whether they’re Catholic or not, is: try to be as informed as you can about what’s happening here, and don’t feel disempowered or sidelined. I say this as somebody who is older, and I say it to my own generation: we shouldn’t feel that we can simply cop out.

One thing I would say to people is to begin using the technologies, explore them, see how extraordinary they are, but also begin to be critical of them, to learn how to be able to evaluate them and think about them. So, what I would be taking from this is the importance of responsibility.

Each and every person should think about the level of his or her own responsibility, and that layers up from the user. Am I going to start sharing content that I know is dubious, that I know is there to provoke hate? I have to take personal responsibility for how I use AI and what I do with it. Then local communities in many parts of the world are asking questions, such as: this technology consumes huge amounts of energy—will it be sustainable? How do we think about that in our local communities?

Another area that is highlighted in the document—and maybe it’s a parochial interest for us here—is the extraordinary contribution of Catholic universities. They have a wonderful mixture of people who are skilled in the humanities, philosophy, and theology, alongside people who have scientific backgrounds. The hope is that we can make those universities incubators of thought, drawing on the interdisciplinarity and transdisciplinarity found there, where conversations between the humanities and the sciences can begin, to ensure that we think about and reflect on the responsible development and uses of AI.

Source: vaticannews.va