Art Courtesy of Anna Olszowka.
Editor’s note: In the spirit of this special issue, we traveled back in time and dove into YSM’s archives, seeking to track how our perception of scientific progress has changed over the last century. We found one YSM article, written by Yale physics major Henry Thwing in 1951 (reproduced here), and asked one of our members to write a response to it (pg. 35). In this side-by-side comparison, we examine how our vision of artificial intelligence (AI) technology, and its presentation in literary science fiction, has changed between the mid-twentieth century and the present.
Will we one day live in a world dominated by thinking machines? This is the central question of Thwing’s 1951 article in Vol. 25 No. 4 of the Yale Scientific, and one that is often posed today. But perhaps a more pertinent question is: do we already live in such a world? In his article, Thwing posits, “If a calculator could be made which would correlate data in ways other than those fed into it, then it would be a thinking machine much as those stipulated by science fiction.” By that definition, thinking machines are already deeply integrated into our society, from the Tinder and Hinge algorithms that shape whom we might fall in love with to the programs at financial firms that steer economic development across the globe.
What most captures our attention, and stokes our fear, about Thwing’s thinking machines is the possibility that they might one day think autonomously. Most people would argue that social media algorithms and similar technologies don’t “control” our society in the way that past science fiction writers described: they lack independence and act only on human command. But the possibility that machines could surpass this limit, an idea that in Thwing’s age was relegated to writers’ rooms and dinner party conversations, has become far less remote. With the advent of the language model-based chatbot ChatGPT, our society has had to grapple with AI as a force capable of changing our entire way of life.
Here at Yale, for instance, the Schmidt Program on Artificial Intelligence, Emerging Technologies, and National Power at the Jackson School of Global Affairs aims to “examine how AI has the potential to alter the fundamental building blocks of world order.” This is precisely what the AI-related science fiction Thwing describes set out to do; only now, the task has moved from the realm of speculation to that of academic inquiry and policy development. In a poll of 119 CEOs conducted by Yale School of Management professor Jeffrey Sonnenfeld, over forty percent said they believed AI has the potential to destroy humanity within five to ten years. These respondents may not be the most technically knowledgeable about AI, but their opinions will nonetheless govern how we integrate it into our everyday lives.
Despite recent advances, some believe that human-like, emotional AI will remain in the realm of fiction. In an opinion piece published in The Washington Post in April, Yale professor of computer science David Gelernter argues that software is fundamentally unable to experience consciousness; the concept of a conscious computer, he suggests, makes no more sense than that of a conscious toaster. In his view, AI will never “understand” the world as humans do; it can only draw surface-level connections from the data it receives.
Whether or not you agree with Gelernter, the fact remains that AI will play an increasingly significant role in our society. In his article, Thwing writes, “The seeds of scientific fiction today will yield a harvest of new scientific discovery tomorrow.” On this point, his prediction has been firmly validated. As for whether our technology will come to control us, I would argue that it always has, in the same way that we control it. Just as the advent of agriculture roughly twelve thousand years ago transformed human social structures from small hunter-gatherer communities into larger sedentary settlements, technological breakthroughs alter not only our ability to interact with the world but also our core values as a species. Our values, in turn, mold how we employ and develop future technology.
Thwing’s article is fundamentally about the value of science fiction, and thus any analysis of his work would be incomplete without asking what will become of science fiction in the world of AI. I would argue that it will remain where it has always been. Science fiction, and art in general, is not something we produce for the sole purpose of mass consumption. Some science fiction novels of the future may well be written by non-human authors, but this does not mean the genre as a whole will become an automated process that turns us into “robotic” consumers. An AI-generated story might spark an idea for a different story written by a human author, which might in turn be folded back into the AI’s training data.
Just as humans benefit from sharing their literary works with one another, the same may hold true between humans and AI. Beyond the specific details of how this relationship might work, one thing remains clear: humans will always dream about the world of tomorrow, and as long as we do, science fiction will have a place in our collective consciousness.