As a researcher for a public health software company, I have experimented a little with predictive models of health-related outcomes. Mainly I have focused on population-level predictions (e.g. how many flu cases to expect this winter in a given county), but on one occasion I tried to predict which individual patients would eventually experience a drug overdose necessitating a visit to an emergency department. Statistically, the results were encouraging, but on further inspection I realized that my model was mainly predicting that patients with a prior history of drug overdoses or suicide attempts were the most likely to have a future drug overdose. There may be a place for such a model, but most likely it wouldn't tell physicians and public health officials anything they didn't already know, in general or for specific patients.
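To make that concrete, here is a minimal sketch of the kind of sanity check involved, written in Python against entirely synthetic data; the feature names, effect sizes, and sample size are invented for illustration and are not my actual model or data. The idea is to compare the full model's discrimination to a baseline that only sees the prior-history flag: if the two numbers are close, the extra features are adding little beyond what clinicians already know.

```python
# A minimal sketch, assuming scikit-learn and entirely synthetic data.
# The features, effect sizes, and sample size are invented for
# illustration; this is not the actual model or patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic patient features; prior_history dominates the outcome by design.
prior_history = rng.integers(0, 2, n)          # prior overdose/attempt flag
age = rng.normal(45, 15, n)
num_visits = rng.poisson(3, n)                 # prior ED visits
logits = -3 + 2.5 * prior_history + 0.01 * (age - 45) + 0.05 * num_visits
y = rng.random(n) < 1 / (1 + np.exp(-logits))  # future overdose outcome

X_full = np.column_stack([prior_history, age, num_visits])
X_base = prior_history.reshape(-1, 1)
Xf_tr, Xf_te, Xb_tr, Xb_te, y_tr, y_te = train_test_split(
    X_full, X_base, y, test_size=0.3, random_state=0)

# If the one-feature baseline scores nearly as well as the full model,
# the model is mostly restating the prior-history flag.
full = LogisticRegression().fit(Xf_tr, y_tr)
base = LogisticRegression().fit(Xb_tr, y_tr)
print("full-model AUC:   ", roc_auc_score(y_te, full.predict_proba(Xf_te)[:, 1]))
print("prior-history AUC:", roc_auc_score(y_te, base.predict_proba(Xb_te)[:, 1]))
```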
J. Nathan Matias and colleagues took up the possibility of using more sophisticated artificial intelligence (AI) to predict mental health outcomes for individuals as part of their series on AI and Christianity. It's an idea that made headlines recently, as Facebook expanded a program to identify potentially suicidal users. Matias et al. mention that Facebook initiative, which had been in testing with a small subset of mainly US users; the latest development is that it worked well enough to justify applying it more broadly.
As the AI and Christianity article points out, there are significant privacy concerns here. In order to identify the at-risk users, the Facebook AI is reading posts and watching videos. Or perhaps you prefer the less anthropomorphic 'statistically analyzing the contents of posts and videos.' On the one hand, using less personal language helps to emphasize that humans are not perusing the material, at least not at a first pass; it sounds like human reviewers do look at content flagged by the AI before first responders are notified. On the other hand, how else would you describe the act of consuming written language to decipher its meaning except as reading?
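To make the less anthropomorphic phrasing concrete, here is a deliberately crude sketch in Python. It is entirely my own invention (the word list, score, and threshold are all hypothetical) and bears no resemblance to whatever Facebook actually runs, but it shows the flag-then-review shape: a statistical score selects posts, and a human looks before anyone is contacted.

```python
# A toy sketch, entirely invented and nothing like Facebook's actual
# system: a statistical score over a post's words decides which posts
# are queued for human review before anyone is contacted.
WATCHLIST = {"hopeless", "goodbye", "burden", "alone"}  # hypothetical terms

def risk_score(post: str) -> float:
    """Fraction of a post's words that appear on the watchlist."""
    words = post.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in WATCHLIST for w in words) / len(words)

def triage(posts: list[str], threshold: float = 0.2) -> list[str]:
    """Return only the posts that score high enough for a human reviewer."""
    return [p for p in posts if risk_score(p) > threshold]

flagged = triage(["feeling hopeless and alone tonight",
                  "great hike with the kids today"])
print(flagged)  # only the first post is routed to a reviewer
```

Even in this caricature, the question stands: counting words against a list is plainly 'statistical analysis,' but extracting the meaning of a sentence well enough to judge risk starts to sound a lot like reading.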
I was a little unsettled recently when I started getting notifications from Google on my phone reminding me about library books and bills due. These were not calendar notifications from events or tasks I created; these were generated by Google based on what it read in my e-mail. On some level, I already knew that I should be assuming all of my electronic communication was being statistically analyzed at the very least by software agents of the companies mediating that communication, not to mention any third parties who might also have access. Still, it was a little unnerving to be so explicitly reminded of this reality.
Based on that experience, I can only imagine how it might feel to someone who is already in a tenuous mental state to be confronted with the reality that a combination of software tools and human employees of Facebook, strangers all, had viewed the sentiments one had shared and intervened in one's personal affairs. Sure, there are undoubtedly success stories where lives were saved and the individuals were ultimately grateful for the intervention. But what about those who considered the reaction intrusive? Is there a risk those folks will be more cautious about what they share, potentially preventing them from getting help in the future because no one knows they need it? And I don't just mean that Facebook won't be able to detect their need, but that friends and family may not either. After all, they are presumably the intended recipients of whatever is posted that catches the attention of the Facebook AI.
Considering the unintended consequences of Facebook's suicide prevention software brings me to a larger point about Facebook and social media platforms. As I touched on recently, social media may be detrimental to mental health. It occupies time that might otherwise be spent socializing with other people. It exposes us to a lot of anxiety- and fear-inducing material, at least some of which is intentionally engineered to elicit exactly those responses. In fact, the same technology that can be tuned to predict who might be considering suicide can also be employed to manipulate our attention and emotions. In at least some cases, Facebook and friends are simply the latest vectors for tactics already deployed via television, radio, and print media before them. At the same time, the reputation they cultivate as benign conduits potentially makes us more susceptible to manipulation because the original content sources are obscured. We don't know which sources to be skeptical of because we are scarcely aware there are different sources at all.
I think this reveals the central tension of social media platforms. They are regularly billed as tools to connect people in a way that makes geographic distance nearly irrelevant, giving us access to wider sources of information and allowing us to organize and pool our resources in new and more effective ways. At the same time, they also magnify the ability of a few individuals to influence others. That feature is less obvious, obscured by language about algorithms and AI. In a way, that language functions like the passive voice, hiding the subjects who actively brought about the results. We aren't turning over our mental health to AIs like Facebook's; we are turning our mental health over to the relatively small number of people writing those tools and monitoring their output. If that's what we want to do, and if it is effective, then it isn't necessarily a problem. I just think we should have clarity on what we are actually signing up for.
Again, this is not a new phenomenon. The printing press, radio, television, and other technologies also helped to consolidate influence, allowing one person to affect many far beyond the reach of their voice. That doesn't mean we don't still need to have conversations about who is wielding that influence and how we are letting them wield it. We should discuss how to encourage accountability for those with influence. At the same time, and since we are talking about AI and Christianity, I think there is still a place for the local church to provide complementary support and influence for those needs not being met by networks and software. With its roots in geographically organized communities, the church is in some ways the opposite of social media; that is not a liability but a virtue that gives it a distinct and still necessary mission.
Andy has worn many hats in his life. He knows this is a dreadfully clichéd notion, but since it is also literally true he uses it anyway. Among his current metaphorical hats: husband of one wife, father of two teenagers, reader of science fiction and science fact, enthusiast of contemporary symphonic music, and chief science officer. Previous metaphorical hats include: comp bio postdoc, molecular biology grad student, InterVarsity chapter president (that one came with a literal hat), music store clerk, house painter, and mosquito trapper. Among his more distinctive literal hats: a British bobby helmet, captain's hats (of varying levels of authenticity) of several specific vessels, a deerstalker from 221B Baker St, and a railroad engineer's cap. His monthly Science in Review is drawn from his weekly Science Corner posts, which run Wednesdays at 8am (Eastern) on the Emerging Scholars Network Blog. His book Faith across the Multiverse is available from Hendrickson.