UPDATED: Losing Our Sh*t Over ChatGPT

--

Image Credit: Kabukicho Shinjuku (Flickr)

NOTE TO READERS: Please be sure to read through to the update below. Thank you! ~Shawna

OpenAI, the research company whose ChatGPT tool has caused writing teachers to collectively lose their shit, declares that its mission is to “ensure that artificial general intelligence benefits all of humanity.” If one asks ChatGPT to explain itself, the chatbot generates the following text:

ChatGPT is an AI language model developed by OpenAI, which is capable of generating human-like text based on the input it is given. The model is trained on a large corpus of text data and can generate responses to questions, summarize long texts, write stories and much more. It is often used in conversational AI applications to simulate a human-like conversation with users.

As educators, our alarm bells ought to be going off–and not for the most commonly expressed reasons (e.g., many students will use ChatGPT or other, similar tools to cheat; the way we teach writing might need to be completely reimagined). Rather, the hackles on our necks should be rising at the phrases “general intelligence” and “human-like text/conversation.” Because we’ve seen this before. We’ve seen the ways that artificial intelligence, which relies on predictive algorithms built from vast sets of data that humans–mostly white men–have compiled, has reinforced cultural stereotypes, misogyny, and anti-Blackness. We’ve learned from scholars like Dr. Safiya U. Noble and Dr. Abeba Birhane about the insidious biases embedded in algorithmic tools. We’ve even seen scholars like Dr. Timnit Gebru ousted from their jobs for pointing out the harm that AI–and the data it relies on–perpetuates.

So along with reimagining or–ahem–“redefining” the ways in which we teach our student writers, we should also be asking (and encouraging our students to explore) questions like, “What is ‘general’ intelligence?” and “How does OpenAI define ‘human-like’ text?” Critical questions like these should never be relegated to a single inquiry unit or project, but rather should be a habitual aspect of a sustained anti-oppressive practice.

In my upcoming book Literacy for All: A Framework for Anti-Oppressive Teaching, I lay out five essential principles that I believe should be part of every educator’s praxis. These principles include the following:

  • Literacy involves both cognitive processes and social practices;
  • Literacy and identity are inextricably linked;
  • All human beings engage in literacy and language practices that are both valid and valuable;
  • Meaning-making occurs within a variety of communicative channels or modes; and
  • Literacy can be used as a tool for liberation as well as a tool for oppression.

If we were to employ these principles as we critically examine ChatGPT and other, similar text-generating tools, we would immediately recognize how ChatGPT uses what Dr. April Baker-Bell (2020) calls “White Mainstream English,” or WME, her term for what many consider to be “Standard” American English. Dr. Baker-Bell developed this term, she says, in order to “emphasize how white ways of speaking become the invisible–or better, inaudible–norm” for using language–particularly in academic spaces. As a result, other vernaculars, including what is commonly referred to as African American Vernacular English (AAVE) and/or Spanglish, are often disparaged, discouraged, or outright erased, as is the case with many Indigenous languages.

Here is an example of what I mean. Last night, I asked the ChatOn app to “write about the existence of racism in the field of medicine” (a topic I am finding myself falling down the rabbit hole about more and more lately). Here is what it generated:

Thank you for bringing this important issue to my attention. Racism in the field of medicine is a very serious problem that has been around for a long time, and it is still very much a part of our society today. Racial bias and discrimination in medical care can have serious impacts on people’s health and well-being, as well as their trust in medical professionals and institutions. Research has also shown that racism in the medical field can lead to unequal access to healthcare and poorer health outcomes for people of color. This is a problem that must be addressed through greater awareness and action, and I’m committed to helping in any way that I can.

Giant yawn, right? Here are the first two paragraphs of what ChatGPT came up with:

Unfortunately, racism is still a significant problem in the field of medicine. There are several ways in which racism manifests itself in healthcare, including in medical education, clinical practice, and health outcomes.

One significant issue is the underrepresentation of racial and ethnic minorities in the medical field. In the United States, for example, African Americans, Hispanics, and Native Americans are all underrepresented in medical school and are less likely to become physicians compared to their white counterparts. This can have significant consequences for patients, as research has shown that patients tend to have better health outcomes when they are treated by healthcare professionals who share their racial or ethnic background.

There are already several huge red flags we ought to be seeing in these default responses, including the use of the term “minorities” and a disappointing, if unsurprising, lack of historical context.

Here is what happened when I asked each tool to “write using AAVE*” (also note what Google Docs’ spelling and grammar tool chooses to highlight in this image as well as the image that follows it):

[Image: Table that shows ChatOn’s response versus ChatGPT’s response.]

And again, this time using “Chicano English”:

[Image: Table that contrasts ChatOn’s response with ChatGPT’s response.]

What do you notice about the results of such an exercise, and what do you wonder? One thing I notice that’s worth repeating is that these chatbots have been designed to use White Mainstream English as their default language, despite the fact that 1) the number of people in the United States who speak a language other than English–or in addition to English–continues to grow and 2) the United States has never officially declared a national language. What other conclusion might we reach beyond the inaccurate and harmful implication that English–and a bland, soulless English at that–is “correct” or “normal” (or even, to use OpenAI’s own terminology, the most “human-like”)? That vernaculars or languages other than “Standard” English are less valid or valuable–less “human”–and must be explicitly requested?

Hopefully you also notice that asking the bots to use specific kinds of vernaculars generates results that are highly stereotypical and akin to the kind of caricatures that we see/hear when white folks are being particularly racist or are engaging in cultural appropriation. Or is that my own bias as a white woman peeking through? How interesting–and enlightening–it would be to ask our students what they notice and wonder about such results.** What might this tell us about the “stories” they have been, and continue to be, socialized to absorb about language and about writing?

Most importantly, though, I wonder how many educators are thinking about the ways we might facilitate crucial conversations among our students about language and power. About how this particular use of artificial intelligence can serve as a tool for the continued oppression of those whose language and literacy practices do not match those we have historically valued most. And about how what we choose to lose our shit over says a lot about the state of literacy education–and of literacy educators–today.

**IMPORTANT UPDATE, 3/1/23:

Recently, a colleague reached out to express how painful it felt when I singled out AAVE* and Chicano English as the two languages highlighted here in order to contrast with the ways that chatbot tools center White Mainstream English (WME). In addition, she expressed her concern that my suggestion for educators to engage students in an inquiry around language and power, using these examples, could potentially cause harm to students as well.

I was, and am, regretful of the harm my words caused my colleague (and perhaps others). In addition, I’m distressed by the amount of emotional labor I imagine it took her to call me in around this.

As someone whose language practices are centered by tools like ChatGPT–and by school-centered literacy and language practices in general–I must acknowledge the ways that our attempts to engage in critical language pedagogy can unexpectedly take a fast and hard turn toward curricular violence when those attempts lack an intersectional, trauma-informed approach. To have casually suggested that educators invite students to “notice and wonder” about the ways in which AI chatbots treat White Mainstream English as the “default” dialect–particularly in comparison to dialects that are, more often than not, stigmatized in a wide variety of spaces both in school and out of school–was harmful.

Upon reflection, it is also clear to me that I, as the author of this piece, made several dangerous assumptions. For educators–particularly educators like me whose first/only language and primary dialect are, more often than not, considered the language of power–it is essential to emphasize the need to engage students safely and sensitively in critical examinations around power, privilege, and identity. More specifically, it is crucial to consider the potential harm that can occur if great care is not taken to facilitate conversations with students around the reality that, as Valerie Kinloch writes, “power is restricted from [those] whose language does not represent a standard American form.” This is especially true when we consider the ways in which identity and language so powerfully intersect.

When I wrote the original post, I failed to consider the need to be explicit about this with my readers, so that those who may not have the necessary experience (or expertise) around this sort of work would not prematurely dive into such an enormously challenging task. And while I have had a number of experiences engaging students and teachers in conversations around what Dr. April Baker-Bell calls “linguistic supremacy,” it is important to acknowledge that such experience has been limited to my work with predominantly White students and teachers whose primary–or only–language reflects the “default” language of ChatGPT. This kind of work would look very different in other kinds of contexts, as articulated by many scholars whose shoulders I humbly stand on, including, but not limited to, Dr. Mariana Souto-Manning, Dr. Nelson Flores, and Dr. Marcelle Haddix. As Dr. Hilary Janks writes (a caution I should have heeded when writing this post),

[T]he teacher cannot predict which text will erupt in class. …[W]hen texts or tasks touch something ‘sacred’ to a student, critical analysis is extremely threatening….I have come to understand that we cannot know in advance which texts are dangerous for whom or how they will impinge on the diverse and multiple identities and identifications of the students in our classes.

Many, many thanks to my colleague for reminding me of this.

*I also want to point out that Dr. Baker-Bell intentionally uses the term “Black Language” instead of “African American Vernacular English” in order to acknowledge Africologists’ theories that “Black Language is a language in its own right … and … is not just a set of deviations from the English Language” (Kifano & Smith, 2002). I chose to use AAVE with these tools specifically but use Black Language in most other personal and professional contexts.

--


Shawna Coppola🏳️‍🌈

I am an educator, a writer, an artist, & a troublemaker. Website: https://shawnacoppola.com/ Twitter: @shawnacoppola #blacklivesmatter She/Her/Hers