ESN contributor and friend of the blog J. Nathan Matias and some colleagues are writing a series of articles on artificial intelligence to introduce Christian audiences to important topics and themes. They have an introductory essay that will link to additional articles as they appear in the weeks to come. Some of those are already available; for now I wanted to bring the series to your attention, and later this month we’ll discuss the specifics. I’m particularly keen on “Relating to Artificial Persons,” as I’m very interested in how our relationships with simulated human agents interact with and influence our relationships with actual humans.
I was particularly struck by this quote from A.I. pioneer Marvin Minsky in 1965: “If one thoroughly understands a machine or a program, he finds no urge to attribute ‘volition’ to it.” I don’t know what Minsky intended 50 years ago, but I can’t help thinking how aptly his comment applies even to human beings. The more we understand our biological and psychological processes, the more we think of them as mechanical or programmatic and the less we attribute ‘volition’ to ourselves. Indeed, it is not hard to imagine a day when computers are deemed to possess intelligence comparable to that of humans because we have convinced ourselves that our own intellects are entirely computational. Will we have rendered the concept of volition moot in the process? Or will we have discovered a more detailed way of describing what we have always understood as will in the first place? Relatedly, if we learn more about the processes by which God acts, are we lessening our urge to attribute volition to him? Are we eliminating the need to invoke God at all?
On a separate topic, Matias et al. introduce the challenge of developing unbiased decision algorithms when the data used to train those algorithms are the result of biased human activities. In many cases, we wind up with algorithms that recapitulate our biases, calling into question whether unbiased results are even possible. Perhaps an unbiased decision is something of an oxymoron, and we should instead consider what kind of bias we wish to institute.
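To make that concern concrete, here is a minimal sketch in Python. The scenario, the approval rates, and all the variable names are entirely hypothetical, invented just to illustrate the mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical records: two groups of equally qualified
# applicants, but past decision-makers approved group A far more
# often than group B. (All numbers here are made up.)
n = 10_000
group = rng.integers(0, 2, size=n)             # 0 = group A, 1 = group B
historical_rate = np.where(group == 0, 0.7, 0.4)
approved = rng.random(n) < historical_rate     # bias baked into the labels

# A simple "model": the maximum likelihood estimate of P(approve | group),
# which is just the observed approval frequency within each group.
learned = [approved[group == g].mean() for g in (0, 1)]
print(f"learned P(approve | A) = {learned[0]:.2f}")   # ~0.70
print(f"learned P(approve | B) = {learned[1]:.2f}")   # ~0.40

# Any decision rule built on these estimates reproduces the historical
# gap, even though the two groups were equally qualified by construction.
```

The unsettling part is that the model has done its job perfectly; it learned exactly the pattern present in the data. The bias comes from the history behind the labels, not from a bug in the algorithm.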
The discussion of probability, algorithms, and bias got me thinking about maximum likelihood methods and the consequences of employing them to develop our machine learning algorithms. By construction, choosing the most likely answer makes us wrong less often on average than other choices would, but it can still be wrong in individual cases. And the cases where it is wrong will tend to be those at the margins or in the minority. So what happens when those are people, not just abstract cases? What are the consequences for those people when we choose maximum likelihood approaches? Now, not all machine learning techniques involve maximum likelihood, and algorithms can be evaluated across a range of inputs and outputs. Still, as noted, biases do sometimes wind up in the end result.
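Here is one way to picture that worry, again as a toy sketch with invented numbers. Suppose 90% of cases have one outcome and, by assumption, our features cannot tell the cases apart. Predicting the most likely outcome every time is then hard to beat on average, yet every single error lands on the 10% minority:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 90% of cases have outcome 1, 10% have
# outcome 0, and (by assumption) our features carry no information
# to distinguish them.
true_outcome = (rng.random(100_000) < 0.9).astype(int)

# The maximum likelihood prediction is the single most probable
# outcome, so we predict 1 for everyone.
prediction = np.ones_like(true_outcome)

errors = prediction != true_outcome
print(f"overall accuracy: {1 - errors.mean():.2f}")  # ~0.90, hard to beat

# But the errors are not spread evenly; they all fall on the minority.
minority = true_outcome == 0
print(f"majority error rate: {errors[~minority].mean():.2f}")  # 0.00
print(f"minority error rate: {errors[minority].mean():.2f}")   # 1.00
```

Real systems do have informative features, of course, but the pattern in this caricature (errors concentrating where the data are thinnest or the model is least certain) is the shape of the concern.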
I’m not a machine learning expert; these questions are the sincere but perhaps naïve musings of a curious outsider. I look forward to learning more as I follow along with this series, and perhaps finding answers to my questions, or better questions to ask.
Andy has worn many hats in his life. He knows this is a dreadfully clichéd notion, but since it is also literally true he uses it anyway. Among his current metaphorical hats: husband of one wife, father of two teenagers, reader of science fiction and science fact, enthusiast of contemporary symphonic music, and chief science officer. Previous metaphorical hats include: comp bio postdoc, molecular biology grad student, InterVarsity chapter president (that one came with a literal hat), music store clerk, house painter, and mosquito trapper. Among his more unique literal hats: British bobby, captain’s hats (of varying levels of authenticity) of several specific vessels, a deerstalker from 221B Baker St, and a railroad engineer’s cap. His monthly Science in Review is drawn from his weekly Science Corner posts — Wednesdays, 8am (Eastern) on the Emerging Scholars Network Blog. His book Faith across the Multiverse is available from Hendrickson.