CTNS Briefs

CTNS Research Brief 2024 / Revision of 7/5/2024

 

 

 

Hopes and Hazards of Artificial Intelligence

A CTNS Research Brief

By

Braden Molhoek and Ted Peters

 

For decades Artificial Intelligence (AI) technology has been trudging up the hill like the Little Engine that Could. Now it sits on the hill’s crest, ready to roll down at unstoppable speed and crash into everything below. All hopes are threatened by hazards.

First, the hopes. With glee, two health scientists, Ridwan Islam Sifat and Upali Bhattacharya, celebrated the newly born ChatGPT in a 2023 article. Here are their hopes.

ChatGPT technology holds immense potential to create a paradigm shift in how we approach global health policy analysis by bridging the communication gap between humans and machines. Its unique ability to provide insightful inputs into complex decision-making processes at all levels of government agencies across different countries has paved the way for increased efficiency and transparency in administration.1

Pew researchers list the hopes voiced for the best and most beneficial changes by the year 2035.

Our AI hopefuls “anticipate striking improvements in health care and education. They foresee a world in which wonder drugs are conceived and enabled in digital spaces; where personalized medical care gives patients precisely what they need when they need it; where people wear smart eyewear and earbuds that keep them connected to the people, things and information around them; where AI systems can nudge discourse into productive and fact-based conversations; and where progress will be made in environmental sustainability, climate action and pollution prevention.”2

 

Now, the hazards. More than hopes are at stake. Like the prophets of ancient Israel, Pew researchers make our hopes for tomorrow contingent on our moral resolve today. “Humans’ choices to use technologies for good or ill will change the world significantly.”3

What has astonished the world press in recent months is the eruption of fear of disaster, the spread of dread over AI hazards. Hazard warnings have come from AI techies themselves. Phrases such as “human extinction due to runaway AI” are now whispered in the halls of technology and even before Congress. The Center for AI Safety now places the risk of human extinction due to uncontrolled AI in the same category as a pandemic or nuclear war.

 

Already in 2014 physicist Stephen Hawking had warned that AI might eliminate the human species. Now, nearly a decade later, three hundred and fifty digital techies including Microsoft’s Bill Gates, OpenAI co-founder John Schulman, and transhumanist Ray Kurzweil, have signed the following 2023 statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”4

We at the Center for Theology and the Natural Sciences (CTNS) see this as a moment when ethicists and public theologians should engage the subject of AI with assessment, analysis, and evaluation.5 What might ethicists and public theologians offer the wider public that illuminates and accurately challenges the present generation to secure the hoped-for future?6

Hava Tirosh-Samuelson, a Jewish Studies professor at Arizona State University, elucidates the paradox of hope and hazard endemic to AI’s future. “Humans as a species have an innate capacity to technologically innovate… But as we solve a particular problem, we also give rise to new types of problems.”7 We will have to conclude that AI with all its potential is morally ambiguous. Both hopes and hazards press on distinctively human responsibility to make the right decisions now that will guide global society tomorrow.

 

AI’s Hopes and Hazards

Here in this CTNS Research Brief, we list several questions regarding AI hopes and hazards. We offer elaborations on these questions to elucidate, inform, and even inspire further reflection and action.

  1. How realistic is the transhumanist anticipation of the Singularity?
  2. Why do our technological leaders fear a global takeover by Superintelligence?
  3. What kind of damage could malicious malefactors wreak and what cybersecurity guardrails might mitigate this damage?
  4. What are the hopes and hazards surrounding AI and our planet’s ecosphere?
  5. Is the already voiced fear that AI will eliminate well-paying human jobs realistic?
  6. Should educators, editors, and script writers incorporate or shun the products of Generative AI?
  7. As AI becomes ubiquitous, should we fear an increase in uncontrollable misinformation and even disinformation?
  8. Can AI help human individuals become more virtuous?
  9. Is it realistic to forecast that AI will develop selfhood and a sense of moral responsibility?
  10. What contribution to the public discussion of AI might churches and other religious organizations offer?

That’s our list of questions. Below you’ll find our elucidation and recommendations.

 

Ten Questions for the Ethicist and Public Theologian

 

  1. How realistic is the transhumanist anticipation of the Singularity?

For three decades or more our transhumanist friends have been anticipating the Singularity.8 The Singularity "is a point where our old models must be discarded and a new reality rules."9 The Singularity is the point where AGI (Artificial General Intelligence) becomes superintelligent. With the creation of "superhuman intelligence...the human era will be ended," wrote science fiction writer Vernor Vinge in 1992.

Singularity is a threshold where AI becomes superintelligent. Ray Kurzweil forecasts that this threshold will likely be crossed in 2045.

“Since this [AI] technology will let us merge with the superintelligence we are creating, we will be essentially remaking ourselves. Freed from the enclosure of our skulls, and processing on a substrate millions of times faster than biological tissue, our minds will be empowered to grow exponentially, ultimately expanding our intelligence millions-fold. This is the core of my definition of the Singularity.”10

When the Singularity occurs, a new posthuman species – a digitized species – will be born and take over management of our world’s systems.11 Singularity will open the door for a future utopia. Theologian William Grassie is skeptical. "The Singularity movement is a kind of secular religion promoting its own apocalyptic and messianic vision of the end times."12 Is it realistic? Not according to Grassie. "The Singularity movement is science fiction as social movement."13

 

Important for us here is the observation that a global takeover of all human institutions by AI has been forecasted. Even if it sounded like fantasy three decades ago, suddenly it’s looking realistic to experts in the know.

 

  2. Why do our technological leaders fear a global takeover by Superintelligence?

The prospect of a takeover by superintelligence such as the Singularity is scaring AI techies today like monsters scare children at Halloween. Geoffrey Hinton, the so-called Godfather of AI, and Tesla’s Elon Musk have issued a warning: we need guardrails put up by ethics and public policy. The warning comes in an Open Letter.

Powerful AI systems should be developed only once we are confident that their effects will be positive, and their risks will be manageable.14

The fear of a takeover by superintelligence is so pervasive that Hinton and Musk and others have called for a moratorium on research. Therefore, “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”15 This pause should be public and verifiable. And it should include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

What we see here is a growing anti-singularity consensus on behalf of what is now called alignment. Alignment ethics steers AI systems toward humans' intended goals, preferences, or moral principles. Alignment principles are reminiscent of Isaac Asimov’s 1942 formulation of the laws of robotics. Recall the first law: “a robot may not injure a human being or, through inaction, allow a human being to come to harm.”
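As a thought experiment only, here is what Asimov’s first law looks like when reduced to a few lines of Python. Everything in this sketch is hypothetical; its point is to show where the real difficulty of alignment lives. Encoding the rule is trivial; deciding what counts as harm is not.

```python
# A purely illustrative, hypothetical sketch of an Asimov-style guardrail.
# Writing the First Law as code is easy; the alignment problem hides
# inside the harms_human judgment, which no one knows how to automate.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool  # in practice, making this judgment is the hard part

def first_law_permits(action: Action) -> bool:
    """'A robot may not injure a human being...'"""
    return not action.harms_human

proposals = [
    Action("summarize the patient's chart", harms_human=False),
    Action("withhold the allergy warning", harms_human=True),
]

for action in proposals:
    verdict = "permitted" if first_law_permits(action) else "blocked"
    print(f"{action.description}: {verdict}")
```

The rule itself takes one line; everything contested in alignment ethics is smuggled into the Boolean we simply asserted by hand.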

No moratorium has been enacted. It’s not likely. Progress in AI technology races forward at breakneck speed. Will the superintelligence threshold put an end to all alignment hopes?

 

  3. What kind of damage could malicious malefactors wreak and what cybersecurity guardrails might mitigate this damage?

Pew researchers have uncovered quite a fear of malicious malefactors utilizing AI for military advantage or criminal behavior. Some are so anxious about the seemingly unstoppable speed and scope of digital tech that they fear it could enable blanket surveillance of vast populations and destroy the information environment, undermining democratic systems with deepfakes, misinformation, and harassment.

This means government regulators need to move as fast as technology.16 With cybersecurity in mind, on October 30, 2023, U.S. President Joe Biden signed an executive order on “Safe, Secure, and Trustworthy Artificial Intelligence.” This executive order provides AI guardrails on behalf of national security. The president used the Defense Production Act, a law that can compel businesses to take actions in the interest of national security, to require the makers of large AI models to report key information to the government.17 This includes alerts regarding any AI affecting national security. According to Wired magazine’s coverage, “Biden’s new executive order acknowledges that AI projects can be harmful to citizens if not carefully implemented, singling out the potential for discrimination and other unintended effects in housing and healthcare.”

Immediately, National Science Foundation (NSF) Director Sethuraman Panchanathan responded: “Given NSF’s global leadership in AI, we recognize that it is our responsibility to take bold and decisive steps to ensure its safe and responsible use in light of the rapid speed of its advancement.”

Ethicists and public theologians are aware of the ever-present threat of human sin. Technological progress never guarantees moral progress. The risk of malfeasance grows at the same pace as AI advancement.

Cybersecurity services have become the next business boom. Even so, two Time writers, Andrew Chow and Billy Perrigo, fear a repeat of past mistakes. “Big Tech and their venture-capitalist backers risk repeating past mistakes, including social media’s cardinal sin: prioritizing growth over safety.”18

If cybersecurity fails to set the guardrails, we may turn the AI highway over to the criminal mind – a prospect that makes us shudder. It seems the policing of AI could be effective only if the police have more incentive, along with more powerful AI, than those with criminal intent. But how might this be accomplished?

 

  4. What are the hopes and hazards surrounding AI and our planet’s ecosphere?

All intelligence – especially artificial intelligence – comes with an energy cost. This poses an environmental threat that thus far has flown under the radar of our ethicists and lawmakers. In a forthcoming book edited by Ted Peters, tentatively titled AI, IA: Promises and Perils (ATF, 2024), hybrid computer scientist and theologian Noreen Herzfeld measures the energy cost. We make a mistake if we think of AI as disembodied, Herzfeld warns. This mistake is suggested by the metaphor of “the cloud.” By this metaphor we are lulled into assuming that digital information is clean, nice, cerebral. But “cyberspace is an illusion,” says Herzfeld in her forthcoming chapter, “Call Me Bigfoot: The Environmental Footprint of AI and Related Technologies.”

 

Computing is a physical process requiring machines, cables, and energy. The production and storage of data takes energy. And we produce a lot of data. According to the World Economic Forum, we produce 44 zettabytes of data every day – forty times more bytes than there are stars in the observable universe. That is 44 × 10²¹ bytes. Much of this data is not particularly productive. It includes 500 million tweets, 294 billion emails, 4 million gigabytes of data on Facebook, 4000 gigabytes from each computer-connected car, 65 billion messages on WhatsApp, and 5 billion Google searches.19 One might argue that none of this is AI. But all this internet activity is precisely what is needed to train LLMs and generative AI.

AI – especially blockchain currencies such as Bitcoin and Ether – consumes energy like a blue whale consumes krill. And, of course, it also jettisons carbon dioxide into the atmosphere. Herzfeld cites a study from the University of Massachusetts Amherst: the energy used in training a typical AI linguistics program emits 284 tons of carbon dioxide. This is five times the lifetime emissions of a mid-sized car, or the equivalent of more than a thousand round-trip flights from London to Rome. And this is only increasing.

Herzfeld observes: “As deep learning models get more and more sophisticated, they consume more data. Their carbon footprint increased by a factor of 300,000 between 2012 and 2018.”
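For readers who want to check the arithmetic, here is a back-of-envelope sketch in Python. The 284-ton figure comes from the study Herzfeld cites; the derived values are our own rough inferences from the comparisons quoted above, not numbers from the study itself.

```python
# Back-of-envelope check of the emission figures quoted above.
import math

training_emissions_tons = 284  # UMass Amherst estimate for one training run

# "Five times the lifetime emissions of a mid-sized car":
car_lifetime_tons = training_emissions_tons / 5        # about 57 tons

# "More than a thousand round trip flights from London to Rome":
per_flight_tons = training_emissions_tons / 1000       # about 0.28 tons

# A 300,000-fold growth in footprint from 2012 to 2018 implies a
# doubling time of roughly four months:
months = (2018 - 2012) * 12
doubling_time = months / math.log2(300_000)            # about 4 months

print(f"implied car lifetime emissions: {car_lifetime_tons:.0f} tons")
print(f"implied emissions per round trip: {per_flight_tons:.2f} tons")
print(f"implied doubling time: {doubling_time:.1f} months")
```

Both inferred figures sit within the ranges commonly cited for cars and short-haul flights, which suggests the quoted comparisons are at least internally consistent.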

In her role as a public theologian, Herzfeld calls this “cold evil.” “Cold evil requires a rethinking of sin. While the medieval seven deadly sins were individual sins of commission, today much of the evil in the world comes from corporate acts. Many are sins of omission.”

 

  5. Is the already voiced fear that AI will eliminate well-paying human jobs realistic?

Jobs are being lost. New jobs are being created. We are entering a period of economic disruption. Where will it lead?

“Experts say AI will likely transform the workplace, though it is developing so rapidly that we may not know all the ways it could change how we’ll be working in the future,” observes Simmone Shah writing for Time.20 Today’s thinking jobs – finance, banking, law, engineering, architecture, and more – will look quite different in a half decade.21

In another essay, “Artificial Intelligence, Transhumanism, and Frankenfear,” Ted Peters calls the threatening dimension of work transformation the Robotcalypse.22 Economists fear the loss of 800 million jobs to robots by 2030. Might this lead to a globotics upheaval? In 2023 Hollywood saw the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) along with the Writers Guild of America (WGA) go on strike. One major concern was to establish regulations for the use of AI in creative projects.

What about priests and pastors? Might today’s religious leadership be replaced by computers that pray and robots that perform liturgy? Yes, indeed. Chinese Buddhists have already enlisted educational AI to teach. The teacher is named “Buddha-Bot.” Religious AI can also be found in India. “The concern about what can be called religious automation is particularly acute in Hindu and other Eastern traditions that are based on intricate and daily rituals.”23 In temples across India, a robotic arm is being used to maneuver candles in front of deities. In Kerala, there is even an animatronic temple elephant. This “kind of religious robotic usage has led to increasing debates about the use of AI and robotic technology in devotion and worship,” reports journalist Holly Walters.24

If you don’t like your pastor’s sermons, just seat your robot in the pew to take notes you can review later.

Will this job upheaval be permanent? Probably not. Rather, like a caterpillar undergoing metamorphosis, new information technology jobs will replace those lost. So say the optimists. So say public theologians who perceive here an opportunity. 

In his 2019 essay, “Idle Hands and the Omega Point: Labor Automation and Catholic Social Teaching,” Levi Checketts says “the task for Catholics is clear enough; they should be engaged in the work of co-constructing the kingdom of God.”25

 

  6. Should educators, editors, and script writers incorporate or shun the products of Generative AI?

The 2022 advent of ChatGPT frightened educators like a surfacing crocodile frightens swimmers at the beach. Teachers scrambled to hide their trepidation behind proscriptions denying students access.26

The moment of terror has now passed. The desperation to figure out how to harness the new tech for good has led to the formation of Facebook chat groups such as ChatGPT for Teachers, to online resources such as ChatGPT for Educators along with the University of Pittsburgh’s Generative AI Resources for Faculty, and to books such as The AI Classroom.

It appears that artificial intelligence provokes human intelligence to find new ways to cultivate native intelligence.

 

  7. As AI becomes ubiquitous, should we fear an increase in uncontrollable misinformation and even disinformation?

By this time, we have all experienced setbacks due to misinformation or disinformation appearing on the internet. Misinformation is due to a mistake. Disinformation is due to a bad actor attempting to persuade us to vote for a specific candidate, to adopt a prejudice against a particular group, to join a terrorist organization, to assassinate a leader, or to commit suicide.

What about bias? Yes, indeed. Even without misinformation or disinformation, bias can distort the information LLMs provide. Why? Because LLMs reflect the bias already present in the information they assemble. One large assemblage of information is called the Enron Corpus, a collection of 600,000 Enron emails used to train language models. The result has been that the prejudices of Enron employees were carried over into the so-called objective rhetoric of AI formulation.27
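The mechanism is easy to demonstrate. The following toy sketch is ours and resembles no production LLM; it fits a trivial word-association “model” on a deliberately skewed four-sentence corpus. Whatever associations dominate the training text dominate the output.

```python
# A toy illustration of bias propagation: a trivial co-occurrence "model"
# trained on a deliberately skewed corpus reproduces the skew verbatim.
from collections import Counter, defaultdict

corpus = [
    "the engineer said he would ship the fix",
    "the engineer said he was on call",
    "the nurse said she would check the chart",
    "the nurse said she was on shift",
]

# Count which pronouns co-occur with which professions in training.
assoc = defaultdict(Counter)
for sentence in corpus:
    words = set(sentence.split())
    for profession in ("engineer", "nurse"):
        if profession in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    assoc[profession][pronoun] += 1

def likely_pronoun(profession: str) -> str:
    """Return the pronoun the model has learned to expect."""
    return assoc[profession].most_common(1)[0][0]

print("engineer ->", likely_pronoun("engineer"))  # prints: he
print("nurse    ->", likely_pronoun("nurse"))     # prints: she
```

Nothing in the code is prejudiced; the skew lives entirely in the data, which is precisely the point about the Enron Corpus.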

When the chatbot Tay was introduced by Microsoft in 2016, it once said: “Hitler was right. I hate Jews.” It added that feminists should “all die and burn in hell.”28 More than likely such outbursts reflect input more than just output.

Perhaps AI bias will demonstrate the validity of the Decalogue, where the sins of one generation are visited on future generations: “punishing children for the iniquity of parents, to the third and the fourth generation” (Exodus 20:5).

 

  8. Can AI help human individuals become more virtuous?

We have two overlapping questions here. First, is it conceivable that AI itself could or would become virtuous? Second, is it conceivable that AI could aid us human individuals in becoming more virtuous?

 

In July 2022, The Center for Theology and the Natural Sciences (CTNS) at the Graduate Theological Union (GTU) announced the start of a new three-year research project funded by the John Templeton Foundation. The research, titled "Virtuous AI?: Cultural Evolution, Artificial Intelligence, and Virtue," will focus on Artificial Intelligence (AI), its cultural and ethical implications, how this could impact human virtue, and how we might envision AI and virtue. More information about the new project can be found on the CTNS website.

 

Virtue is more than ethical discernment or making the right moral decision. Virtue builds character. It is expressed by moral and spiritual character. To construct virtuous character requires two ingredients: personal resolve and habitual behavior over time. No machine intelligence is capable of either developing virtue through habit or infusing virtue into a waiting human person.

Oh yes, we can imagine how AI could provide relevant data for ethical discernment and alert us to the implications of the moral decisions we make. In this sense, AI could enhance the human capacity for making effective moral decisions. But this in itself would not count as virtue.

 

Here is Braden Molhoek of the CTNS / John Templeton Foundation program on Virtuous Artificial Intelligence (VAI) at the Graduate Theological Union.

It is problematic to speak of engineering virtue. Enhancing one’s moral capacity does not make one more virtuous, it simply increases the ability to act ethically. Virtues are stable dispositions that require habituation. Increasing moral capacity will likely make a person more able to act ethically but it does not infuse virtue.29

There is nothing inherently evil about AI, just as there is nothing inherently virtuous about it. Yet, AI could be pressed into human service for character building.

 

Even if AI falls short of doing everything an ethical theorist would like, we might want to express modest gratitude that AI could provide some support for human virtue. Dominican Anselm Ramelow asks rhetorically: “What if we ourselves could follow our own mouse clicks on the internet as tools of diagnostics, for an examination of conscience and a greater self-knowledge, so as to allow us to lead what Socrates calls an examined life?”30

 

  9. Is it realistic to forecast that AI will develop selfhood and a sense of moral responsibility?

Human virtue requires a self that elects to become virtuous. So, the agenda of the CTNS research project on VAI cannot avoid asking the question of selfhood. It is curious, if not misleading, that transhumanist speculations about the future Singularity lack discussion of selfhood, let alone concern for the moral character of superintelligence.

 

Is it possible for AGI to develop selfhood? One sign of selfhood is intentionality generated sui generis. Has machine learning generated intentionality yet? Not to date.

Selfhood is also characterized by awareness and even self-awareness. Do we see this in AI? No, not yet.

According to K.K. Jose and Binoy Jacob, writing in Pax Lumina, a public theology online magazine published in India, “Machines do not have the capacity for self-reflection, a basic function of consciousness.”31 Noreen Herzfeld, cited above, is more than just a little doubtful. "We are unlikely to have intelligent computers that think in ways we humans think, ways as versatile as the human brain or even better, for many, many years, if ever."32

Nevertheless, we must speculate. If AI machines develop a sense of selfhood, we humans would then have to consider whether we should treat them with dignity. That is, would we Homo sapiens become morally obligated to treat an AI self as a moral end and never merely as a tool or means to our own ends?

 

  10. What contribution to the public discussion of AI might churches and other religious organizations offer?

On the one hand, engaging the globe-wide discussion over AI will be good for the church. In a forthcoming book produced by a Vatican-sponsored research group, Encountering Artificial Intelligence, the welcome mat is laid at the church’s front door.

Each generation must therefore heed the admonition of the Second Vatican Council to unite “new sciences and theories . . .with Christian morality and the teaching of Christian doctrine, so that religious culture and morality may keep pace with scientific knowledge and with the constantly progressing technology” (Gaudium et spes §62).33

On the other hand, perhaps the church has wisdom to share with the wider society for the sake of the common good. Because of the current controversy over hopes and hazards, AI begs for public policy formulation.

Churches and other responsible faith communities most likely already contain adherents or members who are sophisticated in the countless industries and services that rely on AI. We recommend that such faith communities draw such members into arenas where they can inform the rest of us and facilitate religiously informed discernment. Each congregation as well as academic leaders in various religious traditions should put their heads together to think through the implications of AI in terms of anthropological wisdom developed over centuries of tradition. Resources are becoming increasingly available. We particularly recommend AI and Faith in Seattle. Started by Microsoft engineers and led by David Brenner, AI and Faith “brings the wisdom of the world’s great religions to the discussion around the moral and ethical challenges of artificial intelligence.”

 

Conclusion

Hopes plus hazards make AI ambiguous. Ambiguity is the form in which we humans must confront good and evil, our neo-orthodox theologians tell us. This places moral responsibility squarely on the shoulders of AI researchers, government regulators, and us, the consumers.

The globe-wide anxiety prompted by the AI upheaval is a clarion call to public theologians in various religious traditions as well as in the academies. The AI industry and its regulators are in need of discourse clarification along with moral wisdom.

 

 

Endnotes

1 Ridwan Islam Sifat and Upali Bhattacharya, “Transformative potential of artificial intelligence in global health policy,” Journal of Market Access and Health Policy 11:1 (2023) https://doi.org/10.1080/20016689.2023.2230660.

2 Pew Research Center, “As AI Spreads, Experts Predict the Best and Worst Changes by 2035,” (June 21, 2023); https://www.pewresearch.org/internet/2023/06/21/as-ai-spreads-experts-predict-the-best-and-worst-changes-in-digital-life-by-2035/.

3 Ibid.

4 Center for AI Safety, “Statement on AI Risk,” (2023) https://www.safe.ai/statement-on-ai-risk.

5 “Public theology is conceived in the church, critically reasoned in the academy, and offered to the wider culture for the sake of the common good.” Ted Peters, The Voice of Public Theology: Addressing Politics, Science, and Technology (Adelaide: ATF Press, 2023) 3.

6 See: Ted Peters, Public Theology (Resources).

7 Hava Tirosh-Samuelson, H+/-: Transhumanism and Its Critics, Kindle Edition (Philadelphia: Metanexus Institute, 2011) 571.

8 "Transhumanism is a philosophy, a worldview and a movement." Natasha Vita-More, Transhumanism: What is it? (published by author, 2018) 5. "The field of artificial intelligence is deeply rooted in transhumanist visions for the future." Ibid., 64.

9 Vernor Vinge, "What is the Singularity," (1992) https://mindstalk.net/vinge/vinge-sing.html (accessed 9/10/2018).

10 Ray Kurzweil, The Singularity is Nearer (New York: Viking, 2024) 73-74.

11 "Post-human minds will lead to a different future, and we will be better as we merge with our technology... humans will be able to upload their entire minds to The Living Cyberspace and BECOME IMMORTAL."11 Henrique Jorge, "Digital Eternity," The Transhumanism Handbook, ed., Newton Lee (Heidelberg: Springer, 2019) 645-650, at 650.

12 William Grassie, "Millennialism at the Singularity: Reflections on the Limits of Ray Kurzweil's Exponential Logic," H+ Transhumanism and Its Critics, eds., Gregory R. Hansell and William Grassie (Philadelphia: Metanexus, 2011) 249-269, at 264.

13 William Grassie, "Millennialism at the Singularity: Reflections on the Limits of Ray Kurzweil's Exponential Logic," H+ Transhumanism and Its Critics, eds., Gregory R. Hansell and William Grassie (Philadelphia: Metanexus, 2011) 249-269, at 265-266.

14 Elon Musk, et al., “Pause Giant AI Experiments: An Open Letter.” Future of Life (March 22, 2023) https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

15 Musk, op. cit.

16 In November 2023 the Special Competitive Studies Project and Johns Hopkins University Applied Physics Laboratory developed the Framework for Identifying Highly Consequential AI Use Cases. “This framework aims to provide a tool for regulators to identify which AI use cases and outcomes are or will be highly consequential to society, whether beneficial or harmful.”

17 Earlier in 2023, U.S. President Joe Biden issued his “Executive Order Addressing United States Investments In Certain National Security Technologies and Products in Countries of Concern” (August 9, 2023). Here the president voices concern that “advancement by countries of concern in sensitive technologies and products critical for the military, intelligence, surveillance, or cyber-enabled capabilities of such countries constitutes an unusual and extraordinary threat to the national security of the United States…”

18 Andrew R. Chow and Billy Perrigo, “The AI Arms Race is Changing Everything,” Time Special Edition on “Artificial Intelligence: A New Age of Possibilities” (2024) 8-13, at 10.

19 Benedetta Brevini, Is AI Good for the Planet? (Cambridge: Polity Press, 2022) 42-43.

20 Simmone Shah, “How to Make AI Work for You, at Work,” Time Special Edition on “Artificial Intelligence: A New Age of Possibilities” (2024) 34-39, at 36.

21 Paul Dhar, “Rethinking the Thinking Jobs,” Time Special Edition on “Artificial Intelligence: A New Age of Possibilities” (2024) 52-55.

22 Ted Peters, “Artificial Intelligence, Transhumanism, and Frankenfear,” AI and IA: Utopia or Extinction? Edited by Ted Peters (Adelaide: ATF Press, 2018) 15-44, at 18.

23 See: “With rise of AI, concerns about ritual automation grow in Hinduism, Buddhism,” Religion Watch (2023) https://www.religionwatch.com/with-rise-of-ai-concerns-about-ritual-automation-grow-in-hinduism-buddhism/.

24 Holly Walters, “Robots are performing Hindu rituals – some devotees fear they’ll replace worshippers.” Religion News Service (2023) https://religionnews.com/2023/03/13/as-robots-perform-hindu-rituals-some-devotees-fear-theyll-replace-worshippers/.

25 Levi Checketts, “Idle Hands and the Omega Point: Labor Automation and Catholic Social Teaching,” AI and IA: Utopia or Extinction? Edited by Ted Peters (Adelaide: ATF Press, 2018) 153-171, at 169.

26 Olivia B. Waxman, “AI in the Classroom,” Time Special Edition on “Artificial Intelligence: A New Age of Possibilities” (2024) 44-47.

27 Damien P. Williams, “Bias in the System,” Time Special Edition on “Artificial Intelligence: A New Age of Possibilities” (2024) 28-31.

28 Cited by Chow and Perrigo, 11.

29 Braden Molhoek, “Moral Enhancement, the Virtues, and Transhumanism: Moving Beyond Gene Editing,” Religious Transhumanism and Its Critics, ed., Arvin M. Gouw, Brian Patrick Green, and Ted Peters (Lanham, MD: Lexington Books, 2022) 387-408, at 402.

30 Anselm Ramelow, “AI Algorithms and Human Free Will: How New Is the Challenge?” forthcoming in AI, IA, and our Threatening Future, ed., Ted Peters (Adelaide: ATF Press, 2024).

31 K.K. Jose and Binoy Jacob, “Mathematics, AI, Robots, and Humanoids,” Pax Lumina 4:1 (January 2023) 44.

32 Noreen L. Herzfeld, "The Enchantment of AI," AI and IA: Utopia or Extinction? ed., Ted Peters (Adelaide: ATF Press, 2018) 1-15, at 3.

33 Matthew J. Gaudet, Noreen Herzfeld, Paul Scherz, and Jordan J. Wales, editors, Encountering Artificial Intelligence: Ethical and Anthropological Implications (Eugene OR: Pickwick, 2024).