Monday 20 May 2024

A Response to Andy Crouch's "A Redemptive Thesis for Artificial Intelligence"


I had the pleasure of reading this thesis, and I think what Andy and the Praxis guys are close to is an understanding of what a Christian's relationship to A.I. should be. They frame it as a plot of new ground for entrepreneurs and venture-oriented investors, and they have laid out a list of assumptions and directions for where A.I. is headed now, and for where their coined term, Redemptive A.I., will and should head in the future. 

To be blunt, there are some shortcomings in the approach they've taken in this thesis. Not in intent, but in the intellectual projection of what they say and think A.I. is and can do, versus what it is and will likely do given what we're seeing so far. 

My interaction with this piece is merely an attempt to broaden the platform from which Christians view the thoughts our faith holds and spreads on this kind of topic. It is in no way a criticism of the men who were a part of this thesis' composition or of the good work they do. For clarity's sake, I have not removed any part of the article, so a clear commentary on the piece can be made. 

I am also, to be blunt, a small fish in the tank of those with vested interests in tech and faith. Consider these observations the same way you would a dog who barks because he's a dog, a comparison I shamelessly steal from Douglas Wilson on why he also writes. 


First, the six assumptions.

"Six Assumptions

We begin with the following assumptions. We are well aware that each requires a measure of faith — that is, each is based on a certain amount of evidence that warrants our belief, though none can be proven beyond a reasonable doubt."

This was a nice disclaimer and frames the conversation to follow well. This isn't tech bros talking about this thing called A.I. This is Christians doing that same thing, who may also be tech bros at the same time.

"1. The enduring image

Human beings are, and for the duration of the human story will uniquely remain, entrusted with bearing the image of God. Our ability to discover physical and mathematical patterns in our cosmos, and to develop science and technology that make the most of those patterns, is one of the intended fruits of this image-bearing vocation, meant for the glory of God, the good of our fellow human beings, and the flourishing of creation itself."

This sounds right unless you are willing to take the same conceptualizations they will present in assumption number 4 and apply them to their reasoning in this first assumption. To say that A.I. is as consequential as other technological advances humankind has made, specifically in agriculture, sends us to the scriptures to look for how such things fit with the narrative of bringing God glory and bearing His image. This language from Genesis skips over our first technology: clothes. A technology that was not made as a fruit of our image-bearing vocation and glorification of God, but rather in shame and rebellion against the clear instruction of the Lord on our vocation and patterns of life. This was likely not done on purpose by Crouch and Co., but it would also be short-sighted to overlook the harvesting of fig leaves, and the weaving of them into patterns, to accomplish what Adam and Eve did as an immediate response to their realization of sin, good, and evil. 

So would be the necessary redemption of that technology by God: covering them with the skins of animals as they began their image-bearing vocation as sinners, waiting for and contributing to the coming of a Messiah. 

Indeed, God's will seems to be that technology in general is to be redeemed, but that redemption is more than just intent. While it's possible that God gave them the skins of animals with no animals being killed in the process, it is more likely that they were shown the cost of sin, and that God chose to illustrate that cost against the technology they presented as an initial response to their sin. He would show them that sacrificial death brings them back into fellowship with God, even after it forced them out of the garden. But it would also point to Christ in concept, as the fruit of that sacrifice would be the animal skins they could now wear in place of their fig leaves. 

For us to assert that our technological progress and thinking follow suit with the good creation God made before the fall, we need to be able to demonstrate that these capabilities were, or would have been, present before the fall as well. I am open to directions to verses that demonstrate this. But until then, I think this assertion falls short.

"2. A pattern of deceptions

In the course of the human story we have become captive to a pattern of deceptions that have corrupted the divine image, compromised our ability to pursue or even know what is best for us, and distorted our application of mathematics and science. We have acted as if we can live independently of God, and the life of love that God offers and commands, with no harm and indeed great benefit to ourselves (this is the most fundamental pattern, known as the sin of pride). More specifically, the “modern” world has been founded on the quest to secure good things for ourselves through some form of pure technique that does not require relationship — with others or with God (this is the ancient allure of and quest for magic, uniquely enabled in modernity by what we call technology). Likewise, modern economies have effectively subordinated all other goods, to the extent they are acknowledged at all, to the pursuit of financial wealth that purports to give us abundance without dependence (the seduction of Mammon). Insofar as we are all caught up in these patterns of false belief and behavior, God is dishonored, human beings are degraded and violated, and creation is exploited and diminished."

This assumption hits the nail on the head as if it were a fully loaded roofer's gun. What Andy sees in the application of "pure technique" is a concept I wish most Christians would grasp a bit more fully. Especially considering it's a nail gun and all.

What technology does at every level is abstract us from the task, problem, and work that lies before us. And the pursuit of a technique that can do such only serves God's purposes when it is powered, like the nail gun, by the natural world constrained to do unrelated work. Compressed air has all the force of its wild and free relative, the hurricane, but is held in mankind's dominion over the earth itself to build houses instead of tearing them down. 

In this example, the roofer needs a degree of control over his tools to exercise the technique they provide, in order to deliver the good they can accomplish for people who need roofs. The level of abstraction from a man swinging a hammer and holding nails is close enough to see. The roofer armed with the nail gun does his job better and faster, and as such enjoys the common grace of God when he does so for the good of other people. Because good works point to the greatest good worker, God. As the roofer does so, he isn't fostering secrecy to invoke a sense of unbelief in what he is doing. That would be the "magic" Crouch writes about. Instead he displays technique in a different light: skill. Which is something God gives us (Ex 31:1-3).

As technology abstracts us away from the work, it also abstracts us away from the skill used to do that work, and as such makes the process less understandable. Eventually, the robot roofers of the future, with built-in air-powered nail guns, will construct houses in what seems like a magically short amount of time. And if they do so for the profits of some while decimating the incomes of the roofers themselves, what we will have is a worship of Mammon and a sacrifice of the roofers to appease such a god.

"3. Very good and also very distorted

AI, like other scientific and technical advances, is part of the “very good” world that human beings are meant to steward and extend. It is a significant advance on much previous technology in the way it recapitulates the patterns of learning and cognition that arose in the course of the development of life (especially the nervous system and the brain). In the case of Large Language Models and similar systems, it also is able to incorporate (via training data) much of the vast achievement of human culture. In this way it is potentially a profound and fruitful extension of human image-bearing, and like other major cultural achievements (such as the invention of writing) it can be expected to unlock good potentialities of the created and cultivated world that were previously inaccessible. But it is also, inevitably, subject to the patterns of deception — including the patterns it has absorbed from its training data — that will tend to bend its outputs and its users in corrupting directions."

I think the word "deception" is doing too much work here. Which is why the idea of using A.I. for good has a friction to it. It's not that we've been tainted by Sci-Fi movies and examples that suggest A.I. will be evil. It's that we don't want to come to terms with A.I. being a product of evil people. Because we are those people.

To say something is very good but also very distorted is a clear contradiction in terms. Contradiction can be a great place for wisdom. I have such a contradictory piece hanging on my office wall: a Picasso print called "Guitar, spring 1913". It shows various cubes of colour and a barely recognizable sheet of music, making an abstract image that makes you think. Until you hear the word guitar in the title and see what the master was making, and then the scene makes sense even through his abstraction. 

When we call something distorted, however, this implies that there is a pristine A.I. that exists as a reference point. Made by pristine hands for pristine purposes. We all know we have no such example. But we all desperately want this to be the case. We all want virgin tech to be as pristine as the nature that is so obviously good. Like the way a mountain forest is beautiful from spring through to winter, in life and in death. That's because the forest is part of the good creation, and A.I. is part of what sinful humans do to that creation. All the things we know God created as good are things God created without us, and they are good because of our absence. But A.I. is created by us after those good things. It is made by using those good things. It is a work of mankind, not a natural resource made by the hand of God. And this blurring of the line between the very good natural world and the works of sinful man would indeed be a distortion. But we have a word for this distortion: sin. Tech is not part of the good world God created; it is the work of sinful men who, without Christ, distort that world.

Crouch understands parts of this, as he recognizes the dangers involved with the data sets these A.I.s are trained on, in which no small amount of human sinfulness will be present, even if it's not named as such. But framing the issues A.I. presents to us without properly naming it as a work of man, not a thing of God's own creation, is problematic, and doing so would change this and other assumptions in this list as well.

"4. As consequential as the Internet — or electricity — or agriculture

While the scope of AI’s full potential is not yet clear, it is reasonable to believe that it is as consequential a technological development as the Internet (developed and deployed 1990–2010). But we should consider the likelihood that it will prove as consequential as electricity (1850–1950), and the possibility that it will prove as consequential as agriculture (the “neolithic revolution,” 8,500–6,500 years before the present day). Insofar as all of these were the result of image bearers extending the “very good” world, they created genuine common wealth that continues to benefit humanity and creation; but all of them were also subordinated to foolish and prideful visions, leading to significant damage to human beings, human societies, and the created world; and almost all of their most significant consequences could not have been foreseen by their early inventors and innovators. We can expect all this to be true of AI as well."

My only issue with this assumption is the blanket statement at the end about the consequences of tech not being foreseen. Yes, there were unintended consequences of the development of agriculture over the span of nearly 10,000 years; no one in 600 BC would bat an eye at the concept of genetically modified food, or know what you were talking about, even if you managed to find a Rosetta stone to translate the concept backward. Yes, men like Franklin, Edison, and Tesla bottled lightning and made it into a consumer good, not knowing what it would be used for later down the line. And yes, no one who invented the protocols of the early Internet as far back as 1970 likely envisioned the kinds of debauchery and sin your average 10-year-old can find, and is exposed to, via their contributions to the world wide web.

But every person who can think about A.I. has access to the consequences of what A.I. can do. Because we share the intelligence that A.I. seeks to make artificial. We are, in many cases, the quality-control baseline of such endeavours. We are what we are trying to emulate artificially, before we add the scale of ever-progressing technology to a man-made mind that will one day think like us, and the next day think faster, remember more, and be more intelligent than we could ever dream of.

It's this shared proxy with the concept of intelligence that has led hundreds of men and women to warn the world through essays, novels, movies, and books, to let us know what this black box of A.I. might contain for us. And a startling number of them have been proved right while being jeered at and ignored, as if they weren't thinkers still smarter than the machine. That's because they could envision what it would be like for them, as humans, to become machines, and from there envision what it would be when mankind does the reverse. But alas, prophets are never honoured in their time.

As such, we will have a bingo card of items to check off one by one: the potential consequences of A.I. and its effects on the world. And by the end of the game, it will not just be a bunch of tinfoil-hat-wearing skeptics and conspiracy theorists standing up to let the world know they were right. It will likely be A.I. doing so as well.

"5. Asymmetrical risk—even without a singularity

There are no good reasons, including no good technical reasons, to believe that AI will somehow usher in a “singularity” in which human beings are replaced in their unique role and responsibility for one another and for the cosmos — even as there is every reason to believe that AI, like all technology, will vastly outstrip human capabilities in specific areas. Fantasies or fears of AI “replacing” us are misplaced (though AI will almost certainly come to replace some tasks and activities currently performed by human beings). But concerns that AI will be harnessed to exploitative ends, or will be deployed in ways heedless of its unintended consequences, are well founded. And like certain useful but asymmetrically dangerous technologies, like nuclear fission, AI may be capable of unleashing vast destruction (as, for example, in the discovery and design of highly virulent biological weapons)."

Yes. Frank Herbert is likely our best source for considering this asymmetry as opposed to the Wachowski brothers. 

"6. The fantasy of the superhuman

While AI may or may not prove to be as consequential as the most dramatic technological developments of human history, it carries unique risks because of its close and genuine kinship to one distinctive way human beings interact with the world (“intelligence”) and its ability to mimic (though probably not genuinely possess) other distinctive human characteristics including personality and purposefulness. Misunderstood or misapplied, AI may hold unique potential for the destructive triumph of pride, magic, and Mammon. These risks apply even if AI proves technically less capable than we may imagine, because the mere fantasy of creating “superhuman” capabilities, and of inventing alternatives to human beings, is sufficient to distort relationships, economies, and societies."

Also yes. 

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” Frank Herbert, Dune


Now we get to the red meat of the thesis. The idea to propose redemptive directions was a bit of genius. It frames the thesis as part of the progress A.I. is making and implies an ability to steer it. I don't know if we can or not, but I'm glad Christians are trying. Lord knows non-Christians are claiming to try as well.

"Six Redemptive Directions

With these assumptions in mind, we offer the following guidance to those called to build ventures that extend AI in redemptive ways — meaning not just within ethical boundaries, but actually seeking to repair some of the damage done by previous waves of technology. This guidance is meant to operate primarily at the venture level. We recognize that many important decisions about AI will be taken at the level of government and policy (such as regulation of the sources and scope of training data), and those regulatory frameworks will in turn constrain and enable decisions made by a handful of very large infrastructure providers (the companies developing and training foundation models). While we hope that some redemptive actors will have real influence “from the top” on policy and may build infrastructure at very large scale, most entrepreneurs exercise their greatest influence “from below” by shifting the direction of innovation through specific applications. Like the Internet, electricity, and agriculture, AI is a general-purpose technology that can be harnessed to many ends. Redemptive entrepreneurs can lead the way in demonstrating that AI can be deployed — in fact, is best deployed — in ways that dethrone pride, magic, and Mammon and that elevate the dignity of human beings and their capacity to flourish as image bearers in the world. AI is best deployed in ways that dethrone pride, magic, and Mammon and that elevate the dignity of human beings and their capacity to flourish as image bearers in the world."

The distinction between top-down and bottom-up being brought to light here is a really good thing. Most people will assume, because of the U.I., that A.I. is the app or prompt text box they are interacting with, and not the massive warehouses of computers powering A.I., or the infrastructure of the internet allowing it to reach their device. Aiming that recognition at the dethroning of pride is also very wise. As for magic and Mammon, I believe there's more to unpack later on in the thesis.

"1. Redemptive AI will inform but not replace human agency.

One of AI’s fundamental capabilities is its ability to operate as a “prediction machine” that can inform human decisions and choices. But AI’s predictive powers are not being deployed in a neutral environment. Many human beings, in too many dimensions of life, are already limited in their ability to make free and wise choices by unjust markets, inflexible and quasi-mechanical systems, entrenched bureaucracies, and repressive regimes. AI could easily be deployed on behalf of any of these social forces to further suppress genuine human freedom and responsibility. Redemptive AI will actively repair and restore human agency rather than further concentrating or diffusing it. It will provide the inputs and incentives to make better decisions, but it will not pretend to relieve human persons of their responsibility to choose wise paths, especially in areas (e.g., policing, the extension of financial credit, the evaluation of employees, or the management of natural resources) that can only be responsibly undertaken by persons conscious of the dignity of human beings and the created world entrusted to us."

This direction is tricky because of how Crouch is conceptualizing A.I. as something without agency but able to control agency. Which is how a person who thinks A.I. is a tool to be used, as it is often framed, would think. It makes sense and allows for the extrapolation of ideas, and of the consequences of those ideas, to form a thesis. Like this one.

But what if A.I. isn't conceptualized like a tool, but is rather perceived as an extension of mankind? As the hammer that builds is a harder and more precise tool than the bare hand, A.I. is faster and more capable at recall, and at connections between data sets, than a human mind. We extended our hands into the hammer to get what hammers do, and now we are extending our minds into A.I. to get what A.I. can do.

A.I. can't simply be parted from human agency, even for altruistic causes, because it is human agency. And it is likely more like a strong man's agency being used to overcome a weak man's agency. Cavemen in ancient China and ancient Europe both had the agency to turn flint into a knife. Both had the resources; both had the problems a knife would solve. As such, both had the agency to progress technologically. But A.I. is a layered technology with a high cost of entry. Not just anyone can decide to make A.I. without also partnering with a myriad of social landscape movers and shakers, all doing their own thing for their own reasons. Every tool they make, like the hammer and knife from above, need not be for smashing or slicing their neighbour. But it is never, not also, for smashing and slicing your neighbour. 

The age-old American adage goes, "Guns don't kill people, people kill people." But we all know this is a deflection from the fact that guns are for killing. It's only in perceiving that fact that we can exercise agency around a gun's potential to kill, and redeem such a destructive force into a tool used for redemption. You can use powder-actuated explosive drivers to power a tool that places and installs concrete fasteners, and they work really well. But even stripped of all the language associated with firearms, they will still be a gun, and they will still end up killing people. Because a gun will always be an extension of mankind, whose second sin after rebellion in the Garden was murder.

I guess that is why we are told swords will be beaten into plowshares. And if this direction is plotted with that noble course in mind, there are no better ways to dance with our inevitable nature, I suppose.

"2. Redemptive AI will develop rather than diminish human cognitive capacity and extend rather than replace education.

The current reality is that education — the means by which human societies prepare people to make meaningful and lasting contributions to their common life — is inequitable in most modern nations, especially the United States, and is failing to develop the full capabilities of many people. Even those who ”succeed” on the terms of our current educational systems are continually tempted by current technology and media to spend their waking life in “the shallows.” It is obvious that AI could be deployed to accelerate these trends, by providing the means for students to fake competence in a subject, by providing ready-made “answers” to both technical and complex questions, or simply by offering an even more customized and irresistible stream of consumable entertainment. Such a direction would deprive most human beings of the opportunity to become genuinely informed and creative participants in culture. Redemptive AI can make massive contributions to education and lifelong cognitive growth by appropriately scaffolding, supporting, and sequencing the difficult tasks involved in becoming an educated person who possesses both skill and wisdom."

This direction is fascinating if only for the particular use of "scaffolding" and "sequencing" in its pursuits of education and wisdom. 

Being a tradesman, I know that scaffolding isn't the tool itself; it's how you get to where your tool is needed. It provides the workspace to do what would normally be impossible. You simply are not tall enough to install exterior windows on the 58th floor of a high rise, but you are able to piece together scaffolding to do so and remove it afterward. 

If Crouch means this comparison in that way, then I'm on board. Using A.I. to train people to recognize and then overcome the societal compensations of a world with A.I. would be a much-needed market and use for the thing itself. 

But the other bookend of these three modifiers of "difficult tasks" brings us right back to direction number 1. Namely, its problems with agency. How do you use A.I. to limit interference with human agency while allowing it to sequence or order said agency? These two concepts seem to be in conflict, but that's only because they aren't headed the same way. While it's not stated, I don't think any of these directions are meant to be congruent or parallel. That would render them all the same direction. This one in particular seems to be in conflict with another, but not all. And that's because directions (at least on a globe) are a bit subjective. You can head east to get to what is west of you. 

If what we're going to do in this direction is build the tools to build better tools, and then use those tools to scale back the use of those tools, then great. But I'm still a millennial who, tragically, remembers what calculators did to my long division. 

"3. Redemptive AI will respect and advance human embodiment.

In sharp contrast to many currents of modern behavior and belief, we believe that having bodies is “a feature, not a bug” of being human. The first few generations of computer technology have abetted a damaging trend toward disembodiment, privileging sedentary mental activity while encouraging if not forcing people to neglect their design as creatures who learn, work, and think best when we are moving purposefully through the world together. Compared to the systems widely available today, AI has the potential to interact much more dynamically with human beings using their full sensory and physical capacities (such as through audio interfaces that allow people to stay engaged with their embodied environment rather than screen-based interfaces that draw them away from it), while also dramatically assisting people who lack one or more typical capacities to participate more fully in the world (such as through brain-computer interfaces for those who have lost neuromuscular capabilities through paralysis)."

Anyone who has watched "Ghost in the Shell" will be able to tell you why this is a bad idea. And I've written elsewhere about the effects of A.I., and the drug-like addiction we have to our desires. Particularly in a worship setting.

So let's focus on the word "respect", like we're about to handle a gun from the direction before this one, and see if respect is an option here. In most hunter safety and firearms training courses, you are told a few basic laws of gun use before being allowed to have one loaded, and under your full control, in the presence of the instructor. One is to always keep your finger off the trigger until you are ready to shoot. The other is to only point your barrel at something you intend to destroy. Because it will destroy whatever you unintentionally point it at as well. 

A gun pointed at the ground still does what a gun does when discharged at the ground. It's just that it will blow a hole similar in size to the one you just put in the ground unintentionally when you intentionally point it at a deer. Or a human. 

The trigger of human-machine interfaces is, sadly, more pulled than not by this point. And not just because Elon Musk has done so much work on Neuralink. There is no putting the human-machine hybrid back in the box. Because you're likely reading this response on the first version of the tech that made mankind and machine inseparable. When was the last time you pooped without your smartphone, anyway?

But instead of a trigger, let's imagine we're respecting something else. A door that stands between you and a future where your body is something you can hack, and one where you can't. Because that's where this leads. You do not get a world where nerve-damaged limbs are exorcised from their death-like state without also getting the ability to externally possess them with killing intent. There is no way to open that door without opening the other side as well. And it matters not that your direction through that door was noble and altruistic, a direction hinged on the sympathies of the disabled and the hurting. Opening the door allows people to disable and hurt people by the very same means.

There is a reason Herbert declared a holy war on the thinking machines in his books. And everyone who has thought about this knows what side they would actually be on in that kind of war. And they all think they are right. I do not believe the word "advance" following the word "respect" is capable of averting the kinds of things that can and will go wrong, simply because we want it to.

This direction should not be pursued.


"4. Redemptive AI will serve personal relationships rather than replace them.

AI shows great promise for being able to fluently interact with the relational dimension of human life (as when Large Language Models are prompted to take on the persona of a chatbot). The clear and present danger is that this fluency will be exploited, perhaps at the willing and eager behest of users, to provide deeply persuasive simulations of relationship. Such simulations will have the power of many addictive substances and behaviors, in that they will directly harness the reward systems of the human mind-body-soul complex while delivering no real benefits and degrading or entirely erasing users’ ability to choose real life. Redemptive AI, while benefiting from its sensitivity to relationships, will never present itself as a person, will not offer to substitute itself for persons or personal relationships, and will not purport to relieve its users of the burdens of genuinely caring for and being cared for by other persons. Instead it will facilitate more relationally healthy pathways for human life. (For example, many employers currently schedule contingent workers’ shifts in ways that are supremely indifferent to those workers’ family responsibilities; an AI “aware” of workers’ family commitments could be deployed to create far more relationally optimal work schedules while also matching or exceeding the economic efficiency of current solutions.)"

This is the one direction I can say, without reservation, I endorse and would promote. The more we turn A.I.'s power towards tools instead of proxies and machinations of human-like things, the better. The world is set to face, and indeed is already facing, a plague of bots pretending to be humans, and of humans counting on that pretence to be effective. 

In all honesty, this should be the first direction we take with A.I. Not the 4th.

There's a joke here from the trades as well. Something about safety being 4th.

Oh well. I'm sure we'll all be laughing in the end about this either way.

"5. Redemptive AI will restore trust in human institutions by protecting privacy and advancing transparency.

Too many systems today are opaque about their own operations, concealing their inner workings from the public, while relying on extraordinary levels of surveillance and data-gathering about persons. The emerging reality is one in which systems have no transparency at all while persons and their data are rendered “transparent” to corporations, advertisers, and nation-states. Without redemptive development, including technical breakthroughs, AI will exacerbate both of these trends, because as currently designed it is an inherently opaque system, capable of gathering and representing huge amounts of data about individuals, and making that data fully available only to very large-scale owners and operators. What is actually needed is a substantial reversal in which institutions and the systems they deploy become more transparent, while persons and their individual information become more protected. Redemptive AI will be designed to give more clarity, not less, about how institutions operate, while ensuring that individuals retain the dignity of being known through their own choices and disclosures, not through a constant and unchosen stream of surreptitiously collected and analyzed data. (Consider the likelihood of governments wishing to “pre-incriminate,” with the help of data analysis, those presumed likely to break the law. Redemptive AI, while assisting in ascertaining the truth about criminal behavior, will extend the protection against self-incrimination by only providing public justice systems with information about actual criminal behavior, not merely purported patterns in data. At the same time, redemptive AI that operates with high transparency may be able to dramatically reduce uncertainty about the evidence offered in criminal trials, preventing unjust convictions and increasing confidence in the public justice system.)"

This reads like a projection of the various robot laws given to us in science fiction literature. Indeed, redemptive A.I. would be the place where a kind of orthodoxy is established in the programming of these things, so that the humans they serve, and in a legal sense prosecute, are protected by a set of fair laws that, in reality, only the humans are subject to. 

I don't see how this would become the adopted state of A.I. making in any real sense. And it would handicap Christian-made A.I.s to be bound by rules that other A.I.s are not subject to.

Considerations of that ambiguous line will need to be made alongside this particular direction, given that we have billionaires who can get tech to Mars, alongside communist dictatorships. No binding legal action could be taken against either if an actor crafted a pre-incriminating A.I. for use against its citizens and then put it on an oil rig in international waters, in orbit, or on the Moon.

This direction, as such, only works on Earth and a few hundred miles off any given shore.

"6. Redemptive AI will benefit the global majority rather than enrich and entrench a narrow minority.

Current pathways to the most powerful AI systems are extraordinarily capital- and energy-intensive, lending themselves to concentration in the hands of a few resource-rich corporations located in a handful of countries. Depending on how AI services are delivered and priced, this does not necessarily need to mean that AI cannot benefit the majority of human beings — if it ultimately can be provided at very low marginal cost, it can have a very beneficial effect for low-income users. But without specific redemptive innovation, it is almost certain that the greatest benefits of AI will flow to the already wealthiest and most powerful corners of the world, not least because they are already most entrenched in the data economy (compare, for example, the amount of training data available in English to that in languages spoken only by small groups of people). Redemptive AI will differentially find ways to unlock value at “the bottom of the pyramid” — and will pursue innovations that accomplish that goal without ensnaring the world’s poor ever more deeply in a kind of datafication of their lives which disrupt human connection and largely only benefit the owners of the largest pools of data."

This direction will find its best incarnation in local A.I.s that are not tied to larger processing hubs like OpenAI's.

As such, we can hopefully predict and depend on A.I. behaving like other tech: becoming more ubiquitous and democratic as it refines itself. When we get to the point where anyone can own their own A.I., and corporate involvement is a matter of making products rather than skimming data or gatekeeping use, then we'll be there.

But as of right now, that's not the world we live in. That doesn't mean we have to stay here, though. 

"A Call to Repair and Redeem Through AI

At this very early stage in the history of AI, it is extremely tempting for venture builders and investors to adopt a gold-rush, land-grab mentality, racing to claim a stake in the technology by swiftly building infrastructure and applications that promise quick financial returns, assuming (if these questions are considered at all) that ethical reflection and protections can come incrementally, once capital is secured and profit is made."

This paragraph is eerily close to the response of several theologians and influential pastors who claimed that a response to the Covid-19 pandemic had to come first and that the theological issues could be sorted out after it had subsided. At the very least it seems Crouch (whom I am not accusing of doing such) is aware of the need for measured and principled orientation before action is pursued, which is encouraging. 

"But too many recent waves of technology — social media being a particularly vivid example — have followed this pattern, delivering some benefits but also consolidating power in unaccountable large-scale institutions, substituting thin forms of existence for true human flourishing, and extracting huge costs in physical, relational, spiritual, and social well-being. If it is true, as Yuval Harari, Tristan Harris, and Aza Raskin have suggested, that algorithmic social media was humanity's first large-scale encounter with "AI," the rise of far more powerful and flexible algorithms is hardly something to be treated as an ethically inconsequential opportunity for massive profits."

Agreed.

"We believe redemptive entrepreneurs, while certainly pursuing breakthroughs and moving at the speed of expertise, will build into the very foundation of their products a vision not just of leveraging what the existing system of technology has produced, but of repairing what it has damaged. Redemptive AI can contribute to the ultimate redemptive mission: to liberate human beings to live fully as what they truly are, incomparable, irreplaceable image bearers of the Creator who made all things in and for love."

Amen.


I'm looking forward to where this Thesis goes and to the kinds of Christian entrepreneurs who will interact with it and make use of it for the building of God's kingdom here on Earth. 

Andy, if you have any questions or clarifications on this response, or want to address anything I might have misrepresented or misunderstood, I am more than happy to talk about this.

Keep up the good work. 



Link to the original Thesis:

https://journal.praxislabs.org/a-redemptive-thesis-for-artificial-intelligence-ff7dafdd01b5


