Additional Resources on AI

Below you will find summaries of two documents: The Rome Call for AI Ethics and The Asilomar AI Principles. These are statements we support as offering worldwide direction for the development and governance of artificial intelligence.

The Rome Call for AI Ethics was initiated by the Pontifical Academy for Life in 2020. It has been signed by more than 100 individuals, corporations, and institutions, including Microsoft, IBM, and Cisco, as well as the Archbishop of Canterbury. The document promotes an ethical approach to the development and application of artificial intelligence (AI). Here are the key points:

  1. Collaborative Effort: The document emphasizes the need for cooperation, solidarity, and joint work among various stakeholders, including international organizations, governments, the private sector, and religious leaders, to ensure AI systems and products are not only technically advanced but also morally sound.
  2. Human Dignity: A core principle of the Rome Call is the respect for the dignity of the human person. It stresses that technological progress, especially in AI, should serve humanity and uphold human rights.
  3. Ethical Governance: The call advocates for the integration of ethical considerations from the design phase of AI technologies. This concept, referred to as “algorethics,” is crucial for ensuring that AI is used responsibly and ethically.
  4. Interfaith Cooperation: The Rome Call brings together major world religions to underscore their role in shaping a society where technological advancements align with ethical and moral values. Religious leaders from various faiths have committed to promoting these principles.
  5. Symbolic Venue: An event highlighting the Rome Call was held in Hiroshima, a city symbolizing the consequences of destructive technology and the enduring quest for peace. This underscores the importance of using AI as a force for good.
  6. Global Impact: The document has been endorsed by several key players in technology, including Microsoft, IBM, and Cisco, as well as international bodies like the UN Food and Agriculture Organization (FAO) and the Italian Government. These endorsements highlight a broad commitment to ethical AI development.
  7. Practical Applications: The Rome Call highlights the potential of AI to address global challenges, such as advancing medical research, enhancing educational access, and improving agricultural practices. The focus is on ensuring these benefits are distributed equitably.
  8. Ongoing Commitment: The initiative calls for a sustained commitment to ethical AI governance, requiring continuous dialogue and cooperation among all stakeholders to ensure AI serves the common good.

Together, these points reflect the Rome Call’s comprehensive approach to fostering an ethical AI landscape, one that prioritizes human dignity, ethical governance, and collaboration across sectors and religious communities.

The Asilomar AI Principles, developed at the Beneficial AI 2017 conference organized by the Future of Life Institute, are a set of 23 guidelines designed to ensure the safe and ethical development of artificial intelligence. These principles are grouped into three main categories: Research Issues, Ethics and Values, and Longer-Term Issues. Here’s a summary of the key points:

Research Issues

  1. Research Goal: AI research should focus on creating beneficial intelligence, not undirected intelligence.
  2. Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use.
  3. Science-Policy Link: There should be a healthy exchange between AI researchers and policymakers.
  4. Research Culture: Foster a culture of cooperation, trust, and transparency among AI researchers.
  5. Race Avoidance: Developers should cooperate to avoid cutting corners on safety standards.

Ethics and Values

  6. Safety: AI systems should be safe and secure throughout their operational lifetime.
  7. Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  8. Judicial Transparency: Autonomous systems involved in judicial decision-making should provide explanations auditable by a competent human authority.
  9. Responsibility: Designers and builders of AI systems share responsibility for the moral implications of their use.
  10. Value Alignment: AI systems should be designed so that their goals and behaviors align with human values.
  11. Human Values: AI should respect ideals of human dignity, rights, and cultural diversity.
  12. Personal Privacy: Individuals should have the right to access, manage, and control their personal data.
  13. Liberty and Privacy: The application of AI to personal data should not unreasonably curtail people’s liberty.
  14. Shared Benefit: AI technologies should benefit and empower as many people as possible.
  15. Shared Prosperity: Economic prosperity created by AI should be shared broadly.
  16. Human Control: Humans should choose how and whether to delegate decisions to AI systems.
  17. Non-subversion: The power conferred by control of AI should respect and improve, rather than subvert, social and civic processes.
  18. AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Longer-Term Issues

  19. Capability Caution: Avoid making strong assumptions about the upper limits of future AI capabilities.
  20. Importance: Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care.
  21. Risks: Risks posed by AI systems, especially catastrophic or existential risks, should be subject to planning and mitigation efforts.
  22. Recursive Self-Improvement: AI systems designed to recursively self-improve should be subject to strict safety and control measures.
  23. Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity.

These principles aim to foster a responsible AI development community by promoting transparency, accountability, and a commitment to human-centered values.

Ask Cathy

Cathy is an AI-based tool, developed by the TryTank Research Institute and the Innovative Ministry Centre, that lets you engage in natural conversation to find information about the Episcopal Church.