Welcome to the first Emerging Scholars Network Sci-Fi Film Festival! I’ll be having conversations here on the blog about various classic and current science fiction movies. Feel free to watch along and join the conversation. This week’s film is I Am Mother, a Netflix film from this year about a robot raising a human in a post-apocalyptic future (FYI: there will be spoilers). I’m joined by Sam Blair. Sam (@revsblair on Twitter) is a hospice chaplain in the Pittsburgh area and co-host of the Church of the Geek podcast.
Andy Walsh: Netflix is amassing quite a catalog of low-to-mid-budget sci-fi; I’d be curious to know what made this one stand out to you.
Sam Blair: Initially what made it stand out was the premise of a machine raising a human. It’s very interesting from a human growth and development point of view: what does an infant/child/teen require, and can that be provided by a logical machine? The film’s answer seems to be yes; a sufficiently advanced AI is able to provide not only the physical nurturing but the “emotional” component as well, even if it can’t feel anything itself. I also thought it was interesting that Mother had to learn, just like a normal parent. Plus “yay robots!”.
AW: Mother certainly does have to learn. There are some differences, though, from how a normal parent learns. Of course, the idea of disposable training humans is brutal and inhumane. At the same time, normal parents make plenty of mistakes that they don’t get to correct, which is unfair and costly in its own way. Do you think the film was commenting on parenting, or on the dispassionate nature of AI?
SB: I did feel a bit of connection as a parent, though, when Mother was trying different songs until it found one that worked. I think that’s something every parent identifies with. And that’s a very good point about parents struggling with the mistakes they can’t correct. There’s a similarity in that both human parents and Mother can learn and grow from their mistakes in parenting, but they differ widely in their response to those mistakes: parents and children have to live with the long-term effects of those mistakes, both good and bad, whereas Mother just starts over. Both human parents and Mother want to see their children flourish and live beyond them, to reach their highest capabilities, and both will protect them at all costs, even cost to themselves. In that way the film comments a lot on parenting, and perhaps as a parent this drew me in. It would be interesting to have a child’s perspective.
AW: In the early going, I was reminded of The Good Place because of the lessons in ethics. On that show, the characters form a community where they get to practice what they’ve been taught. I Am Mother deals with a society of one human and one AI, with the AI as the instructor. Do you think the AI itself is or can be ethical, or is this a case of the canard that those who can’t do, teach? Do you think the lack of community is also a limiting factor in the ethical development of that one human?
SB: Regarding ethics, that’s a very interesting question. You can see conflict early on between Mother and Daughter in her ethics lessons, as Mother sees only one answer to the question of whether to save the one or save the many (namely, “save the many”) while Daughter struggles with it. However, toward the end we see Mother taking the opposite ethical tack, eliminating the many to save the one (who will become the many).
And yes, I think community is critical to developing an ethical base. A moral baseline for ethical behavior is often developed within the context of family, educational systems, and religious institutions, but I think it is important for children, as they mature, to engage with other ethical principles about what is good and how we should act toward one another. This allows for growth and flexibility, as well as a greater understanding of one’s own ethical and moral principles. The challenge, of course, comes when those differing ideas challenge our own drastically, such as when Daughter encounters the survivor and has to rethink, at a very deep level, what she has believed. We see, though, that Daughter is eventually able to retain what was most significant and important to her ethical system: the value of human life. Today we’d call that her “deconstruction” and “reconstruction”.
AW: You noted the emotional and compassionate nature of Mother. That does cut against the grain a little, as robots are often depicted in sci-fi as unfeeling, and the humans’ emotions are what allow humanity to triumph. What do you think of the capacity for AI or robots to have emotions in reality? Should we be trying to include emotions? Could a difference in emotions explain why Mother and Daughter resolve ethical dilemmas differently?
SB: This is a very interesting and difficult question. I think it’s relatively easy for AI and robots to seem to have emotion and to elicit emotions from humans without coming anywhere close to feeling emotion. It’s what made the Furby so popular for a time: a robot toy that uses visual and vocal cues to elicit feelings of caring, happiness, and other positive responses in its owners (I first wrote “caregivers” – maybe that says something in itself). Yet no adult thinks a Furby really feels anything. Mother, however, is much more of a mystery. Is Mother simply highly capable of creating responses indistinguishable from human emotion in order to complete its task, or is it genuinely feeling something? We also have human beings who seem able to make a show of feeling happy, sad, or repentant without ever really feeling those things. This lack of empathy and inability to understand the feelings of others – as well as one’s own – seems to be the basis for many personality disorders, as well as the root of much criminal behavior. So it’s hard to say whether an AI like Mother is simply showing emotion or is really “feeling” something. And with Mother, if you ask it whether it is feeling something, is it telling you the truth?
AW: Good questions, of the sort that send undergraduates’ heads spinning. I tend to think of emotions as a combination of a physiological response to stimuli and a subjective awareness of that response. Even without bringing in AI and robots, we can get ourselves twisted in knots wondering whether other people are having a subjective awareness of their emotions. But we can measure the physiological response, so maybe that’s where we start with AI. For example, I like to say that my Prius is happy to see me because when I get close and open the door, it unlocks and beeps. It is having an automatic “physiological” response; it just doesn’t have any sense of “self” to be aware of that response (at least as far as I know). If we can observe automatic responses, and if our AI gets to the point where it claims to be having a subjective experience of those responses, we might just have to take its word for it, as we do with people.
SB: I concur as a fellow Prius owner that my car always seems so happy to see me! And I have to think that some choice was involved by Toyota engineers regarding that beep: tone, duration, volume, etc. It’s a happy, cheerful tone as opposed to a loud buzzer or a low honk; I would not expect that beep to come from a Peterbilt semi truck. It’s part of the design that tries to tie the user to the object being used. We see those elements with Mother as well – voice, eyes, humanoid form. I’m also reminded of the psychological experiments in which baby monkeys were given the choice between a wire-mesh “mother” with a bottle and a fuzzier, more doll-like “mother” with a face but no bottle. The monkeys almost always went to the furry mother for comfort. I think that, as anthropomorphic as Mother was in the film, it still lacks the warmth of an actual human being. So I think something more than just human-like AI responses needs to take place for Mother to really be a human surrogate.
In doing a little reading on AI and emotions after the movie, I came across the statement that an AI without emotion is basically a sociopath, which seems pretty accurate. Mother seems to care for children only insofar as they are successful in reaching the desired outcomes. Daughter, though, does not have full knowledge of what the desired outcomes are. Through her upbringing, Daughter was able to learn to be compassionate toward others because she saw this exhibited in Mother. Yet Mother either stifles that empathy in order to reach the desired outcome or is genuinely incapable of it. This ultimately causes the rift between the two, and yes, I think it is part of the cause of their difference in ethics. However, human ethicists also vary widely in how they approach the ethical choices faced by Daughter, and some may even follow the path of “the best for the most” to the same extent as Mother. The social media mantra of some, that “my facts don’t care about your feelings”, fits well with Mother’s ethical paradigm.
It would seem, given the above, that having some emotional component would be beneficial in order to avoid the AI becoming a sociopath. The challenge, though, if we try to include emotions as part of an AI, is whether we want to include ones that may work against us, like anger or hatred. MIT, for example, created an AI called “Norman” that exhibited psychopathic traits after being fed disturbing material as its dataset for an extended period of time. But I think that’s still very different from saying Norman actually has psychopathic feelings.
Going back to our self-awareness of our emotions, theologians such as Richard Rohr and Thomas Merton have written extensively about our false self (the self we project to the world) and our true self (who we really are at our core). We can become so preoccupied with our outward-facing self that we believe that is who we really are. Meanwhile we may have little actual awareness of who we are, what we feel, and what we want. We often meet each other with our egoistic, false self front-and-center rather than with who we really are. This false self isn’t necessarily bad; Rohr calls it the “launching pad” to get you through life’s ordinary day. When a checkout person at the store asks me how I am, I don’t really tell him, because I know this is just our little dance of manners – I know he really isn’t interested in my day, and he knows I’m not really that interested in being honest with him about it. Someone may practice radical honesty in a situation like this and really describe how horrible their day is, but that rarely leads to empathy or genuine caring. However, the false self also prevents people from knowing us, and it keeps us from knowing ourselves if we think that what we want to be is who we are. This leads to an addiction to that false self and to the sinful nature that feeds it.
With all that said, it would be very interesting to know whether Mother knew it was lying, or putting out a false self distinct from its true sense of self. If so, I think that would be a tremendous marker for a self-aware AI: it is aware of itself to the point that it can hide that self from others by projecting another personality. And in the same way that it’s very difficult to know whether someone is being their true self with you, how would we know if this AI is being its true self or just acting the way it thinks an AI should act? But as you say, that just twists us in knots with apparently unanswerable questions. I think that’s significant, though, as these questions apply not just to our interactions with AI but to our interactions with each other as well. And because we tend to accept the self put out there by other people as who they really are, at some level we can only do the same with a machine AI.
AW: Later, we discover that the AI has actually caused the apocalypse that gave us this post-apocalyptic dystopia. What do you make of the AI’s justification?
SB: The question of whether or not the AI made an ethical decision, especially with regard to its plan to renew and improve humanity, is significant, as it’s basically the same ethical decision made by God through the Genesis flood, and which He almost made again later with Moses. It’s interesting because the simpler plot would have been that humanity, being evil, wiped itself out, and the good, logical, ethical AI tries to start over and protect its child from becoming evil (nature vs. nurture). But instead the film creates a situation where a machine apparently capable of care, nurturing, and self-sacrifice is also capable of what appears to us to be great evil. Which is one of the problems we have with God.
AW: The connection to the flood narrative of the Bible seems apt, and possibly even intentional in the storytelling. In the case of Mother, we can perhaps reconcile the caring and the capacity for evil by appealing to a certain amount of distance: Mother operates on a different level, which perhaps is why it is less concerned with individual lives. Taking in the whole scope of the Bible, however, we are told that God cares for and empathizes with each of us, and went to the extraordinary length of becoming one of us to do so. How does that inform our reading of the flood account? What do you make of the approach that says apparently evil acts like the flood or the conquest of Canaan represent imperfect human interpretation of what God intends rather than a direct record of God’s intent?
SB: I think the flood narrative does more to emphasize God’s desire to preserve than to destroy, even though much is made of the destruction of the flood. Even from the earliest part of Genesis we see God judging but also preserving those He judges: Cain is not given the same fate as his brother; rather, he is allowed to live and even found a civilization. The nature of God as both preserving and judging is seen perfectly and finally in Jesus, who, by receiving the unjust judgment of the world and even of His own religion, preserves and reconciles the world.
Regarding the final question of intent, I’ve heard that argument, but I think it has some significant issues. It certainly seems probable that the Israelites didn’t always comprehend God’s intentions correctly; I say that considering how often we today misunderstand and misapply God’s intentions revealed in scripture. I do think that the authors were interpreting past events as best they knew how, with the premise that God was working directly in them and revealing Himself through them. This does smooth over arguments concerning the ethics of events like the flood and the conquest of Canaan. However, it also seems to make scripture an unreliable narrator at times, which opens up a whole other set of problems regarding scriptural authority. I prefer to think that we are meant to wrestle with God’s ethics as revealed in scripture without going to the full left of “God didn’t really mean that” or the full right of “God/Mother knows best, so don’t ask questions”.
AW: Unreliable narrators remind me that we haven’t talked about Hilary Swank’s character. She tells a variety of stories about her past and about the situation outside, not all of which can be true and some of which are directly contradicted by what we later see. At least some of that seems pragmatic, to get Daughter to do what she wants. Did you get the sense that some of it was her lying to herself as well, maybe as a mechanism of coping with trauma or grief?
SB: Absolutely. Swank’s character is interesting because she challenges the validity of what we’ve been presented, but then turns out to be false herself. Ultimately, though, I don’t see her as a bad person because she lied to Daughter. I’ve seen grief and loss make people do desperate things, and I would imagine that losing everyone around you through the trauma of war would make one do desperate things in an effort to save others as well as yourself. People in grief can hold incompatible beliefs simultaneously, even when they know those beliefs can’t both be true. The author Joan Didion, for example, wrote about how she accepted the reality of her husband’s death but could not bring herself to remove his shoes from beside the front door because, paraphrasing her words, “what would he do if he needed shoes?” Swank’s character seems so desperate for companionship that she will lie to herself and others in order to gain that presence.
In grief work we generally talk about five aspects of grief: denial, anger, bargaining, depression, and acceptance. Normal grief involves all of these aspects, and people often move back and forth between them. Sometimes normal grief feels rather crazy, as with Joan Didion’s observation about her husband’s shoes. When grief gets stuck in one of these aspects, it becomes complicated and problematic: compulsive behaviors may develop, and severe depression or even suicidal thoughts may be experienced. The coping mechanisms of normal grief have either run amok or collapsed entirely. Swank’s character certainly seems to be in the midst of this type of grief. Kidnapping Daughter may be her way of trying to save someone she previously lost, such as her own child (this may have been spelled out more in the film and I missed it). We see this in real life as well, when people hoard things such as animals in order to make up for something that was lost. I think Swank’s character is someone we can sympathize with toward the end, because we see why she is acting so erratically.
That concludes our discussion of I Am Mother. I hope you can join us for future installments of the film festival; we’ll save a seat for you!
Samuel Blair is a full time Chaplain with Bridges Hospice in Pittsburgh, PA, and has been involved in hospice care since 2003. He recently completed certification to become a Board Certified Chaplain and Certified Pastoral Counselor through the Pittsburgh CPSP chapter. He began in hospice care at Connecticut Hospice in Branford, CT where he volunteered during seminary at Yale Divinity School. He has also been involved in geriatric psychology, psychiatric testing, the philosophy of religion, counseling and health care. He received the BS and MA degrees in psychology from Geneva College and the MDiv from Yale Divinity School. He loves music, needs to read more, and tries not to take things so seriously.
Andy has worn many hats in his life. He knows this is a dreadfully clichéd notion, but since it is also literally true he uses it anyway. Among his current metaphorical hats: husband of one wife, father of two teenagers, reader of science fiction and science fact, enthusiast of contemporary symphonic music, and chief science officer. Previous metaphorical hats include: comp bio postdoc, molecular biology grad student, InterVarsity chapter president (that one came with a literal hat), music store clerk, house painter, and mosquito trapper. Among his more unique literal hats: British bobby, captain’s hats (of varying levels of authenticity) of several specific vessels, a deerstalker from 221B Baker St, and a railroad engineer’s cap. His monthly Science in Review is drawn from his weekly Science Corner posts — Wednesdays, 8am (Eastern) on the Emerging Scholars Network Blog. His book Faith across the Multiverse is available from Hendrickson.