Amy Webb, author of The Big Nine – How the Tech Titans & Their Thinking Machines Could Warp Humanity, describes herself as a futurist, a job I wasn’t entirely sure actually existed outside of science fiction. Sure, plenty of people reason about the future and some do so in rigorous and quantitative fashion, but often in very narrow and specialized areas–predicting stock markets or elections or planning for consumer trends. Futurism strikes me as needing more of a generalist, and Webb seems to fit the bill. She takes the kind of broad view necessary to convey just how all-pervasive AI has already become and its potential for even greater influence. At the same time, she provides adequate detail and specificity in multiple domains so that all readers have something concrete they can relate to. Actually, the book reads like a blend of science fact and fiction as Webb tells us where we’ve been and imagines where we might go. So maybe futurist is something of a science fiction job after all.
The opening chapters recap very briefly the history of artificial intelligence, then provide a more detailed look at the current landscape. The history goes all the way back to ancient Greece and early attempts to understand human intelligence, waves at Descartes, swings by 17th-century automata and early computing pioneers like Charles Babbage and Ada Lovelace, looks in on Claude Shannon and Alan Turing and John von Neumann, and winds up with DeepMind and AlphaGo.
From there, Webb lays out key features of where that history has brought us–a present moment that includes topics like Cambridge Analytica’s relationship with Facebook and the 2016 election, to give you a sense of how current it is. Several of those features are concerning to Webb. AI research is consolidated within the titular “Big Nine”: Google, Microsoft, Amazon, Facebook, IBM and Apple in the US, and Baidu, Alibaba and Tencent in China. The US government is too disconnected and uninterested, leaving the US players subject to market demands and the need to show quarter-over-quarter growth. The Chinese government, on the other hand, is heavily invested in AI, leveraging the technology for programs like the social credit score or facial recognition to identify and detain the Uighurs. Meanwhile, the world of AI research is relatively tight-knit and insular, resulting in biases in algorithms that can be by turns amusing and insidious. And the academies training tomorrow’s AI scientists tailor curricula around the technical skills preferred by those few companies, neglecting the kinds of ethics, philosophy and other humanities classes that would give students a much-needed but harder-to-quantify broader perspective on the implications of their work.
So far we are in the realm of fact. In the central section of The Big Nine, Webb describes three possible scenarios for the trajectory of the United States over the next 50 years. The narrative style these scenarios are presented in gives them a science fiction flavor, albeit one heavily informed by present realities and with an at-times unsettling degree of plausibility. All three involve similar extrapolations of developments in artificial intelligence; the differences are in how the US and other governments, consumers and academia prepare for and respond to those developments. Her predictions span consumer technology, healthcare, social programs, geopolitics, and other sectors. Some of the details may not ring entirely true; for example, there is a very brief mention of a robot made entirely of DNA that I think possibly misunderstands the relevant biochemistry. At the same time, much of what she describes about healthcare information technology made sense to me, and I work for a software company whose product interacts with healthcare systems. Webb is taking us into speculative territory, which is her job, and from what I can tell she is doing so in credible fashion.
Webb concludes with recommendations for how to realize the most optimistic of her scenarios. She is for multi-disciplinary steering bodies and against government regulation, which she thinks will always be outdated. I’m not as certain governments will be willing to supply more funding without commensurate increases in oversight. More relevant to this audience, she calls for an overhaul of the computer science and AI programs at major universities. Ethics should be integrated into every course rather than treated as a bolt-on elective, and more room should be made for the humanities in general. That makes sense to me, and I might go even further: the specific skills and tools change so rapidly, and are so idiosyncratic to each company, that some technical training should naturally happen on the job rather than at school. But universities and disciplines have their traditions and cultures. So I’ll put it to you:
- Do you think it is possible for a school like MIT or Carnegie Mellon to transform its computer science program to look more like a liberal arts degree?
- What steps would it take to accomplish such a transition?
About the author:
Andy has worn many hats in his life. He knows this is a dreadfully clichéd notion, but since it is also literally true he uses it anyway. Among his current metaphorical hats: husband of one wife, father of two teenagers, reader of science fiction and science fact, enthusiast of contemporary symphonic music, and chief science officer. Previous metaphorical hats include: comp bio postdoc, molecular biology grad student, InterVarsity chapter president (that one came with a literal hat), music store clerk, house painter, and mosquito trapper. Among his more distinctive literal hats: a British bobby's helmet, captain's hats (of varying levels of authenticity) from several specific vessels, a deerstalker from 221B Baker St, and a railroad engineer's cap. His monthly Science in Review is drawn from his weekly Science Corner posts -- Wednesdays, 8am (Eastern) on the Emerging Scholars Network Blog. His book Faith across the Multiverse is available from Hendrickson.