AI and the Humanities
Episode 14 • 6th December 2023 • Meeting Street • Cogut Institute for the Humanities


Shownotes

At a time when headlines repeatedly underscore the dangers of artificial intelligence to human endeavors of all sorts, what role can the humanities play in assessing the uses and limitations of new AI tools such as ChatGPT? What do developments in AI teach us about academic inquiry and humanistic questions in particular?

In this episode of “Meeting Street,” Hollis Robbins, scholar of African American literature and Dean of Humanities at the University of Utah, joins host Amanda Anderson for a wide-ranging conversation on the institutional and disciplinary condition of the humanities at the present time. Through concrete examples, they explore the complex and fast-moving world of artificial intelligence, emphasizing the importance of human interaction, judgment, and expertise to scholarly practice and knowledge advancement.

Transcripts

Amanda Anderson:

From the Cogut Institute for the Humanities at Brown University, this is “Meeting Street.” I’m Amanda Anderson, the show’s host. Since the advent of ChatGPT, there’s been much talk of how new technologies in AI will affect universities and, in particular, the ways in which faculty teach. At the same time, concerns about artificial intelligence more generally have led to the development of university programs and centers focused on what is variously called “ethical” or “responsible AI.” Our show today will explore how the humanities can help to illuminate certain gaps in the current discussions around ChatGPT, and AI more generally.

Amanda Anderson:

With me to discuss these issues is Hollis Robbins, a literary scholar who is currently Dean of the Humanities at the University of Utah. Professor Robbins specializes in 19th-century American and African American literature and culture. She is the author most recently of “Forms of Contention: Influence in the African American Sonnet Tradition” (University of Georgia Press, 2020). She has collaborated with Henry Louis Gates on several important editorial projects on African American texts. She’s also written provocative reviews and opinion pieces for “The Chronicle of Higher Education,” “Inside Higher Ed,” and the “Los Angeles Review of Books,” among other venues. An astute commentator on academic culture, Hollis Robbins is also a gifted university administrator, and that combination makes her an especially appropriate guest for today’s topic. I’m delighted to have her here today to share perspectives on current questions around artificial intelligence and scholarly practice in the humanities. Hollis, welcome to “Meeting Street.”

Hollis Robbins:

I’m so glad to be here.

Amanda Anderson:

So let’s begin a little bit with your own personal story, because it’s an interesting one, in terms of the various twists and turns in your scholarship and your career. Talk a little bit about your history and how you ended up focusing on your chosen areas of interest.

Hollis Robbins:

Well, it was circuitous, I have to say. I mean, I was not expecting to be in academia. I wasn’t expecting to be a professor, let alone a dean. I’d been working in politics. I had been — after college, had a really great job as a secretary to the liquor advertiser of “The New Yorker” magazine. And then years later — it’s a circuitous route through the Kennedy School of Government at Harvard, through politics — I found myself opening “The New Yorker” one day and seeing Skip Gates, Henry Louis Gates, introduce a manuscript that he had discovered purportedly written by — it turns out, written by — an escaped enslaved woman named Hannah Crafts.

Hollis Robbins:

And I was reading the magazine, and he had some pullout quotes, and she had, in writing her own autobiography and fictionalized story, cribbed from, borrowed from Charles Dickens’ “Bleak House.” And I found this so interesting and also interesting that nobody at “The New Yorker” had caught it. I called up some old friends, and within 15 minutes was on the phone with Henry Louis Gates, with whom I’ve been collaborating for 20 years, and opened up a field of scholarship of how it was that African American authors came to use and signify upon Victorian texts. So that’s the weird way I got here.

Amanda Anderson:

Right. So just to clarify, so after you went to the Kennedy School, you decided to get a Ph.D. in literature, but you studied Victorian texts, correct?

Hollis Robbins:

Yes.

Amanda Anderson:

And then it was when you discovered the “New Yorker” text that you kind of became really fascinated by African American literature?

Hollis Robbins:

Right. I went into, or I began thinking about, Victorian novels only because of my work in politics and, at the Kennedy School of Government, studying organizational theory. This was my focus: how organizations work, how new bureaucracies work. I worked in politics, and so when I decided primarily on a whim to go get a Ph.D. in English, Victorian novels like Dickens or Eliot or Wilkie Collins or Trollope, novels that focused on bureaucracies, appealed to me. I didn’t know that would lead me to African American literature.

Amanda Anderson:

That is so interesting, and it may be a partial answer to what I’m going to ask you next. We’re at a moment when the humanities face forms of defunding in certain sectors of higher education and when articles appear regularly noting the decline in numbers of humanities majors, as well as the allegedly unpromising career prospects for those who do study in those fields. And I’m just curious to know how you see your role in this context. I mean, what do you say to your various constituencies on these questions?

Hollis Robbins:

It’s so funny. I mean, you have to ask, and I have to have a stock answer these days, right? It is the discourse, right? And my stock answer is, well, in fact, humanities are flourishing, if you think about humanities writ large. Just this past year, we hired two new faculty members in our English department who study video game narratives, which are dominating much of the narrative space as much as films and books and short stories and what have you. And this expansion of the English department to include new technologies is actually an indication that we’re flourishing. And we are flourishing. So if we think of humanities very broadly, we see that these new technologies, and new ways of studying these technologies within a humanities focus, both align with our past ways of approaching texts and the human condition and show that we pivot and align with, accommodate, focus on what’s new in technology.

Amanda Anderson:

That’s really interesting. I think what’s striking to me too about your response is that you’re really zeroing in on areas of study, which is to say the kinds of work that humanities scholars do as culture evolves, the ways in which their projects themselves evolve, but also are continuous with past projects. And implicit in your answer, I think, is that it’s imperative that the university protect basic research, because if all that one responds to is this notion that it’s not seen as potentially the correct major for certain career prospects, you’re going to endanger whole areas of cultural and historical knowledge. So I think that that’s crucial.

Hollis Robbins:

And I think ChatGPT and large language models in particular are an area that humanities language researchers can both speak to and show the limitations of. For example — I suppose you’ve used a little ChatGPT lately?

Amanda Anderson:

Sure.

Hollis Robbins:

And the thing is that predictive language models are boring, right? There is no slang. There can’t be slang. So, like, a word like “rizz” — have you come across the word “rizz”?

Amanda Anderson:

No. No.

Hollis Robbins:

So it’s a new slang for charisma. A student said to me the other day, “You’ve got a lot of rizz.” And I had no idea what they meant. And it’s words like “extra.” “She’s so extra.” But ChatGPT, large language models, will always be behind the times. By the time they catch up with a word like “rizz” or even “extra,” or be able to deploy it, it’ll be long past. And so that we who study language in the ways that languages move and embed themselves into our cultural productions will always find ourselves ahead of whatever it is that ChatGPT and large language models produce.

Amanda Anderson:

Right. I’m really glad that you turned the topic to the question of ChatGPT, because I want to segue to artificial intelligence and its impact on the humanities. Now, a lot of the debate, it strikes me, has focused on how we will need to adapt our teaching methods, but as your previous examples about slang demonstrate, there are some much larger questions at issue as well, as you point out in a piece on AI that you wrote for a forum in “The Chronicle of Higher Ed” in May of this year. And there you say, “At the time of this writing, AI writing is technically proficient, but culturally evacuated.” What do you mean by that?

Hollis Robbins:

It’s so interesting that even now things are changing just a few months later. The rollout of the facial and the spoken AI is so interesting. And if you listen to those voices — even though there are, I think, five voices, and one of them is sort of vaguely ethnic, but you can’t really tell — there’s no culture, there’s no slang, there’s no code, there’s no specifics having to do with a particular way of approaching language. Also, it’s consistent — the person will never use a Southern accent here and nonaccented verbiage there. Think back to Bill Clinton on the campaign trail using a Southern accent when it was appropriate, for example. Instead we’ll see only standardized speech, and it hearkens back, the way I look at it, to late 19th-century desires of the American education system to wipe out regional accents through elocution in high school: everybody had to recite certain passages of Shakespeare in iambic pentameter so that students would erase their own regional heritage, as it were.

Hollis Robbins:

So this seems like that. So when you’re using this culturally evacuated technology, the question is what’s going to happen outside that? And I think slang, regional accents, new languages — I mean, I’m also thinking 20 years ago, back in the 1990s, Henry Louis Gates testified, I think in Florida, with the 2 Live Crew case about whether something was offensive or not, saying, “This is Black language, this is signifying, and to call something offensive or what have you means you’re not understanding the particular language of the culture.” So I think those kinds of questions are going to be happening outside of ChatGPT, and I do wonder whether that will reduce the influence, increase the influence. Stand by.

Amanda Anderson:

Yeah, no, I think that’s absolutely crucial because as you were saying earlier, at least up till now, particularly when you asked ChatGPT to produce some sort of account of some complex, let’s just say, theoretical or historical question, it tends to produce, as you were saying, a highly neutral sounding, rather boring, and hedged account — though, I mean, “hedged” is already sort of a human stance. But there’s a sort of on-the-one-hand, on-the-other-hand tendency, right?

Amanda Anderson:

And so, myself, I’ve written a lot about styles of argument and about the ways in which personality infuses argument and that there are always ways in which certain kinds of assumptions about character or enactment are at play in certain arguments. And this sort of very anodyne, neutral mode is quite odd. And it’s going to jar, I think, as you’re rightly saying, not only with cultural specificity, but the fact that right now in our culture, identity is so important and claiming identity, but it’s complicated when it’s an artificial entity.

Hollis Robbins:

As the kids will say, this raises beige flags.

Amanda Anderson:

Exactly. That’s good. I love that. I love that.

Hollis Robbins:

It’s a true thing people are saying. I mean, it’s hilarious.

Amanda Anderson:

Yeah, that’s wonderful, wonderful. Okay, so to go a little further into this question and to also kind of draw on some of the work that you’ve recently done, in a recent review article for the “Los Angeles Review of Books,” you compared two very different, new biographies of the 18th-century African American poet Phillis Wheatley. One of the biographies is steeped in historical and material detail, and the other is framed in cultural idioms of the present day and more relatable. These divergences were, in your view, especially notable and concerning in the age of AI — and I should say, I mean, you favored the historical and material version — but say a little more about that because that’s a fascinating claim, and I think it’s actually related to what we were just talking about.

Hollis Robbins:

So I prefer Vince Carretta’s book, the historically steeped one. I mean, he’s the scholar of Phillis Wheatley’s history, biography, etc. And he lays out so many specificities about the captain of the ship called the Phillis that brought Phillis Wheatley to Boston, what people were wearing at the time, what the churches were like, who was saying and reading what, who her neighbors were. And David Waldstreicher takes a lot of that data. I mean, he doesn’t bring a lot of new historical research, but what he does is read these facts in a new way. So, as I say, to read Vince Carretta’s book is to go back to the 18th century; it brings the reader to Phillis, to understand what her life was at the time. And Waldstreicher does the opposite: he takes Wheatley out of the 18th century, brings Phillis to the current day. And I think the danger there is how we connect a life with the poetry and her output and a canon, how we come to understand who she is and what she wrote.

Hollis Robbins:

So, for example, he begins with a poem that she writes about a dangerous sea voyage in which two people almost died and suggests that when she’s writing about a sea voyage, she must be thinking about her own middle passage sea voyage from Africa to America. And yet when you look at the time and the place and what she was reading, she was reading Odysseus, she was reading the Bible with Jonah and Paul and their sea voyages. I mean, so much of the literature that she was reading was about dangerous sea voyages.

Hollis Robbins:

So for her to write about a dangerous sea voyage is to insert herself into a literary tradition. Now these days, I don’t know — have you been on many dangerous sea voyages? I have not. Right? So if I don’t know anything about this, it would be easy for me, and it also was easy for ChatGPT to say, yes, she had a dangerous sea voyage, so she must be writing about that. And that kind of facile connection I think precludes the kind of work that we actually do in reading texts and trying to understand how they emerge and what they mean and how they fit into a genealogy.

Amanda Anderson:

That’s really interesting, because that, in a way, also shows an instance in which ChatGPT is marked by the idioms and frameworks of the present day. It is unhistorical. It’s culturally specific, not in the ways that you were talking about earlier, that it fails to capture, like slang and different forms of speech, but rather it’s channeling a certain therapeutic culture based on assumptions about what writers are thinking of when they write.

Hollis Robbins:

That’s interesting, because what you’re suggesting is that it’s present but not quite present enough. And that’s really interesting, right? Because it’s not longitudinal. It doesn’t really read for influence. It flattens the present, and yet it’s not present enough. That’s really funny.

Amanda Anderson:

Yeah, yeah, yeah. So let’s talk a little bit about ethical AI. So, as I mentioned in the intro, there are many new centers dedicated to ethical AI in universities. And then there’s various organizations and corporations that have asserted their commitment to principles of ethical AI, which are often understood in terms of transparency, fairness, privacy, dedication to human rights, and careful monitoring of AI systems for discrimination, bias, disinformation, and exclusion.

Amanda Anderson:

So this seems to me a laudable effort — obviously it’s very, very important — but I guess one question I would have is do you think ethical AI captures the issues that are most salient to the humanities? Because sometimes you’ll hear humanities scholars say, “We’re crucial to AI, because we bring the ethics.” But if you actually look at the mission statements or the kinds of work that these centers are doing, I just wonder what’s your opinion about them? Does it capture, as far as you’re concerned, what’s important to the humanities?

Hollis Robbins:

I think the weak link, and one of the ways that we are approaching AI, is the request that humanities be central to all projects having to do with the ways that large language models have bias, the way that things like prompt engineering channel responses in a very narrow channel. And also one of our faculty members here in English is doing a project on artificial ignorance and artificial forgetting. Forgetting and ignorance and deciding not to know or not to focus on something is so central to literature, to the human condition.

Hollis Robbins:

“I’m not going to go there.” “We’re not going to think about this.” Or you can imagine the way GPS works — or a map works — when you get near Dick Cheney’s house, right? It all goes gray, right? So what happens when you ask AI, or ask a large language model, not to go someplace or to forget about a certain thing? We humans know what that means. We go around something. We evade. Or we fill in the gaps. But how does a large language model do that? How might AI do that differently? How do we think about our human mind, for which ignorance and forgetting, deliberate ignorance, is such a big part of who we are? So I think these central questions are going to be a key part of how we approach responsible and ethical AI at the University of Utah.

Amanda Anderson:

Again, what I’m so struck by is your pointing to certain, I would even say, kind of psychological tendencies amongst humans and thinking about how that might play out in an intentional or reflective way with AI. And I agree that what’s crucial is the critique element of ethical AI, which is to say exposing the power-laden dimensions of certain systems. But I do think — and this hearkens back to what we were talking about earlier — that certain things are lost to view. I mean, what about judgment or layered experience that results in forms of situated judgment? How is that sort of mindset or thought habit going to be captured or covered by AI? And I think it’s just utterly crucial in so many forms of work, both intellectual and practical.

Hollis Robbins:

We have a real focus here as well on ethics in our philosophy department, and this wonderful faculty member who teaches a class on wilderness ethics and tells a story, for example, of two people hiking, and one falls in a crevasse, and they’re tied together, and at what point do you decide — the light is falling, and you’re not going to get back to the camp unless you cut the rope right now, and you can’t hear, and you can’t speak, and you can’t communicate, and so what do you do? This was a real situation. She tells what happens afterwards, but you can’t put a question like this to AI. You can’t have it think through the way people think through bringing their ways of thinking, their methodologies, their histories, their emotions, their religion. And that kind of work and teaching will continue to occur in the classroom, despite all of this.

Amanda Anderson:

Right. And you mentioned religion, and I’ve been wanting to ask you — I really appreciate these wonderful examples you’re giving of work that’s being done at the University of Utah, and of course Utah is a distinctive state, given its Mormon history and population. And I’m just wondering, are there specific aspects of the Utah context that are relevant to the work that you’re trying to do to promote and sustain the humanities?

Hollis Robbins:

Well, interestingly, yes. I was invited to go visit the Church History Library a couple of months ago, a fantastic place, the entire archive history. And I had lunch afterwards with one of the scholars who’s a curator of the Joseph Smith papers. And I was saying, “What’s it like being a curator when you could find a piece of paper that might change something?” And I’m Jewish, and I said, “For us, it would be like finding a piece of paper that said ‘Lot’s wife didn’t turn around.’” And he said, “Yes!” He said, “In fact, this is what’s exciting, because we are a young religion. Things could change. We may have more nuances. We may find things that are new. And that’s really exciting. And it’s an opportunity to work with scholars on a new and growing and very influential religion.” So it’s been interesting to see how we might work together, for example.

Hollis Robbins:

When I was down in the basement of the Church History Library, down in cold storage — like, minus 40 degrees or something — he’s showing me some shelves, and there’s an entire shelf of Mormon games. So, like, Lego Brigham Young and some Trivial Pursuit and these really fantastic sort of Chutes and Ladders games. And I’m, like, “Wow, this is really interesting.” And I went back, and I told some of our video game narrative people. And so we may have a postdoc who works on religious games or the genealogy of games in particular settings — how do games support, augment, amplify, are needed for or are useful for particular religious groups? So it’s going to be fantastic.

Amanda Anderson:

Yeah, I think that you’re making a general point about the importance of the local, which in your case is lit up due to probably prior unfamiliarity with it, but that in other cases might be more directly and intentionally embraced. And I think that, in a sense, there’s much work in the public humanities that aims to do just this, but that’s fascinating that you’re seizing the opportunity.

Hollis Robbins:

Seizing it.

Amanda Anderson:

Yeah. Wonderful. Okay. I want to return to your own scholarship for a minute. And in particular, your recent book on the African American sonnet tradition, and kind of try to link it up to this prior discussion we’ve been having about AI. Is there any place for data science in the kind of recovery work that you do in the book? And perhaps first for the benefit of our listeners, tell us a little bit about the project, and then maybe you could address this question of whether data science can contribute to such a project.

Hollis Robbins:

So the book, “Forms of Contention: Influence in the African American Sonnet Tradition,” is about this long and long hidden tradition of African American sonnet writing. Phillis Wheatley in fact wrote the very first Black sonnet, or sonnet by an African American writer, in 1768. And in the 18th, 19th, and 20th century, there was a long tradition of African American poets who wrote sonnets. And why not? In part because sonnets are what poets do. And in the era of Black newspapers and journals, sonnets fit really well at the bottom of a column because they’re square. And so editors would say, “Give me more of those sonnets,” because they fit, right? But in the Black Arts era, Amiri Baraka — his great move in the Black Arts Movement to deracinate poetry from its “European roots” really promoted and launched a conception of Black poetry that got great purchase in universities, which decided to artificially ignore or forget this entire literary tradition.

Hollis Robbins:

And so when I started reading old Black newspapers, African American newspapers, from the 19th and 20th centuries, I kept seeing these poems. My book is about this hidden tradition and this really strong genealogy of influence for sonnet writers that you see today, for example, in writers like Natasha Trethewey and Terrence Hayes. So the interesting question I’ve been asking is, if all these newspapers get into the training dataset, what have you, would AI have noticed this long before I did? Would it have said, “Okay, I have read all sonnets written by humans, and in fact, there are many, many, many sonnets written by African American poets”? As a kind of pattern recognition functionality, big data as a sort of nonbiased viewer or reader may have seen this before I did. I was not influenced by Amiri Baraka, but I saw his influence. But there was a great barrier to entry to get to the point where you would know this history, and AI might’ve just seen it.

Amanda Anderson:

What you’re acknowledging there is that there’s bias in scholarship and the history of literary production too. We know this. We’ve been talking a lot about what AI can’t see, but I think that’s a really fascinating point that some sort of information gathering could light up things that we thought we knew or had no idea existed.

Hollis Robbins:

And I will go back to the Hannah Crafts question when I opened up The New Yorker, and here is Henry Louis Gates telling the story of this manuscript he found by this woman Hannah Crafts. And we know now she was a real person because of the excellent scholarship of Gregg Hecimovich, who was a Victorianist, or started as a Victorianist, and actually did the archival work to find her.

Hollis Robbins:

But at the time, in 2002, nobody who Skip was working with, none of the African American studies scholars, had thought to think or to imagine that she would be reading Dickens. So passages that are clear echoes of Dickens went unnoticed. And so there’s a way of thinking that if some sort of AI reader had picked up “The New Yorker” or picked up this manuscript 20, 30 years ago, it would have seen, “Oh, clearly she’s echoing Dickens here. She’s echoing ‘Jane Eyre’ there. She’s echoing Walter Scott there.” And I found them because I had been a Victorianist and saw what she was doing. Scholars who had been trained solely in African American texts didn’t see those echoes. So it’s an interesting question: what knowledgeable humans bring to the table when we’re constrained by discipline and don’t really think outside our discipline, and what pattern recognition software might assist us with.

Amanda Anderson:

Yeah. And just to clarify, when you say Skip, you are referring to Henry Louis Gates, Jr.?

Hollis Robbins:

Yes.

Amanda Anderson:

I don’t know if ChatGPT would know that. Okay. So it’s been really terrific talking to you about all of this. And just as a last question, I want to ask you what projects do you have in mind for the future, whether scholarly or institutional?

Hollis Robbins:

I’m working now on a book about Robert Hayden, who was the great poet who studied with Auden actually, and his work really flowered in the ’50s and ’60s. And he too was sort of cast out by Amiri Baraka as not being Black enough, as not focusing enough on the Civil Rights project or the Black Arts project, when his poetry is, of course, deeply embedded in his own life as an African American. So the ways that his work hasn’t been read for the influences of Auden or the way that Auden, who forged a close relationship with his student, might have been also influenced by Hayden. So this is the work that I’m working on now.

Amanda Anderson:

Oh, I love that. That’s wonderful. Well, it’s been a pleasure to talk to you, and I just want to really thank you for being on the show and for engaging these questions with me.

Hollis Robbins:

Thank you. It’s been delightful.

Gregory Kimbrell:

“Meeting Street” explores some of the most important and creative work being done in the humanities today, through conversations with scholars and thinkers who are extending the boundaries of their respective fields. The show is produced by the Cogut Institute for the Humanities at Brown University. Gregory Kimbrell is production manager, and Jake Sokolov-Gonzalez is sound editor. If you enjoyed this episode of “Meeting Street,” please follow and leave a review wherever you listen to your favorite podcasts.
