Doug Sackman
12/8/2025, 9:06:19 AM

Prompt: "Is the Big Research Paper Dead?" (For the Oct 1, 2025 edition of the University of Puget Sound faculty's 'Wednesday at 4' pedagogical sessions). Yes, we have clung, through all these years, to a research paper. In History 400, seniors write a major research paper, about 25 pages (though some are much longer), based on original research, interpretation, and analysis of primary sources (documents that come from the period and place they are writing about) and relevant historiography and interdisciplinary scholarship. Writing the paper is a journey. I'm teaching the class right now, and we are working through their historiographies; we will have sessions with small groups of 3 or 4, reading one article or book chapter related to each of their histories, critically examining them and exchanging ideas. These are the proverbial shoulders on which new works of history stand. If you ask me whether we should keep on doing what we've basically been doing with this capstone for decades, at least for the general goal if not the process of the class, my answer might depend a lot on where we are in the semester. I approach it less as an all-knowing sage than as a coach. While we have basic expectations for the paper and process, my goal for them is that they feel that—in the end—this is their fullest work of research, and their best writing to boot. But at so many points, I feel a kind of vicarious vertigo: we are going to fall off this thing and plunge into the abyss. Will this student get over the finish line? How can I help them? And I've adjusted the process over the years to get more of them moving along, in a sense together, even as there are wide differences among the projects, the students, and their approaches to the work.
But after the final drafts are in (the third draft of their full paper), I usually feel vicariously proud: this student made it through, and it is their best work and best writing, and they did their best speaking in our conference-style presentations at the end of the semester. It has really come up from where they were, and moreover, it is substantial. It is a legit work of history. And there are final projects where I read, and weep, in a good way: it's impressive or beautiful, an innovative contribution to the literature, and a wonder. Even without the looming presence and insistent pushing of generative AI, ask me at a certain point in the process about getting rid of the research paper, of the thesis, and I might say, yeah, let's throw it overboard, let's do something different, more accessible, and more like where history in the public—a popular genre for consumption, in podcasts, for example—is going. But in retrospect—which is what we do as historians, we look back in order to chart the future, for example by looking back at what the Luddites actually did and what motivated them—I still think that doing the research paper, the thesis, as part of our undergraduate major is a good foundation for our students' futures. And by extension, our futures. It's a journey for all of us, with struggle and adversity. There's plodding in the valley of doubt, and that's part of the special beauty of it. And there is a community of learning. It's not just a journey for a journey's sake. It is putting skills and hard work together, over the arc of a semester, into projects they care about, where they do the research, consult the experts, become experts themselves, argue their position, work over written and oral exposition, stand behind it, see it through to completion, and put something new—crafted by themselves, in community—into the world. And the papers go into an online archive of works spanning decades. That is a creative thing.
The product is a creation, but in creating it they are also creating themselves, as scholars, as historians. That is not something machines can do for them, though of course machines can and do help. But offloading the creative work, the heavy lifting, to a machine can be debilitating for the author rather than body-building; atrophying, rather than edifying and developing, the ambulatory practices and skills, the intellectual muscles and sinews, needed for the journey. On the first day of that class—before I ask them to define history and write about what it is good for—I start with an icebreaker, modified from one of the traditional camp icebreakers: if you could have a superpower, what would it be? I ask them: if you could have a historical superpower, what would it be? This inspires a variety of responses. Some mention a seance power, an ability to commune with the dead; others want to know what historical actors were really thinking when they did x, y, or z; some want language ability; others want to be able to magically format citations in Chicago style (the best style, by the way; I know those are fighting words, but you might want to let that go, else I might have to tell my citation style joke...). And some students go big, simply asking for time travel. But no one has ever said: "I'd like to just be able to push a button and be done with it." Now, like everyone, my history colleagues and I are dealing with the impact of the existence of generative AI, both what it opens up and what it endangers, adjusting teaching strategies, working on our rules of engagement, and more. Generative AI is here, there, and everywhere. It's butting in. I may say I don't use AI, but AI is using me. And it is being pushed. Some argue, for economic reasons, that it is part of a bubble—like tulips, the South Sea bubble, dot-com, or, for that matter, the mooch bubble—excuse me, the MOOC bubble.
There are several others on this campus more versed in the history of technology than I am, but I'll put that hat on for this. It's actually part of my origin story as an historian, for I wasn't a history major as an undergrad, but there was one history course that especially prepared me for this line of work: a history of technology in the US. I was thinking about that course, and its readings, last year when a student in our capstone course embarked on his thesis on the building of the Satsop nuclear power plant—you can see the relics of its never-completed cooling towers on your way out to the coast. They were part of a program, the Washington Public Power Supply System, whose acronym was fittingly pronounced "Whoops." Walter wrote a great paper, with labor, politics, and the environment all factoring in, and he went all the way to Pullman to do archival research for it. His project got me thinking back to a book by Langdon Winner that we read in that old course I took as an undergrad, called The Whale and the Reactor. Decades before the appearance of ChatGPT, Winner took a skeptical view of all the hype about AI and what it promised. Hype, he claimed, provides "much of the persuasive power of those who prematurely claim great advances in 'artificial intelligence' based on narrow but impressive demonstrations of computer performance." And here's the kicker: "But then children have always fantasized that their dolls were alive and talking." And in our era, that fantasy is now easy to maintain... a mass hallucination, you might call it. Winner also pointed out that the history of technology is often the history of people releasing "powerful changes into the world with cavalier disregard for consequences; that they begin to 'use' apparatus, technique, and organization with no attention to the ways in which these 'tools' unexpectedly rearrange their lives; that they willingly submit the governance of their affairs to the expertise of others.
It is here also that they begin to participate without second thought in megatechnical systems far beyond their comprehension or control; that they endlessly proliferate technological forms of life that isolate people from each other and cripple rather than enrich the human potential; that they stand idly by while vast technical systems reverse the reasonable relationship between means and ends." For my part, I do worry that the drive to put machine learning into everything we do in education will one day appear, in retrospect, as analogous to the decision to put lead into gasoline. Sure, it addressed the knock in engines, allowing us to speed faster down the highway encapsulated in quietude. There was an alternative, using ethanol to the same effect, but that could not be patented; the lead additive could, and so the profits could be privatized and the costs externalized—out the exhaust pipe. So this nation got in its cars and drove everywhere, polluting the environment, spewing lead out into the places we live. Mobility went along with toxicity. We moved faster, but thought slower. The most extreme version of the lesson from history here can be found in Caroline Fraser's new book Murderland, which traces and indicts Asarco's persistent plume, its spewing of lead out into our environment, creating conditions on the ground that aided and abetted the rise of serial-killer mentalities among us—thus, killing us. That is the most extreme of the object-lesson analogies from history. But we don't really know where this is all heading. The integration of machine learning into the whole of our lives is a massive experiment, the subjects of which are us and our students, and the cognitive effects, as the recent MIT study shows, are that artificial intelligence use quickly becomes a threat to our human, natural intelligence and its continued cultivation.
