UPDATED: Losing Our Sh*t Over ChatGPT
NOTE TO READERS: Please be sure to read through to the update below. Thank you! ~Shawna
OpenAI, the research company whose ChatGPT tool has caused writing teachers to collectively lose their shit, declares that its mission is to "ensure that artificial general intelligence benefits all of humanity." If one were to ask ChatGPT to explain itself, the chatbot generates the following text:
ChatGPT is an AI language model developed by OpenAI, which is capable of generating human-like text based on the input it is given. The model is trained on a large corpus of text data and can generate responses to questions, summarize long texts, write stories and much more. It is often used in conversational AI applications to simulate a human-like conversation with users.
As educators, our alarm bells ought to be going off, and not for the most commonly expressed reasons (e.g., many students will use ChatGPT or similar tools to cheat; the way we teach writing might need to be completely reimagined). Rather, the hackles on our necks should be standing up at the phrases "general intelligence" and "human-like text/conversation." Because we've seen this before. We've seen the ways that artificial intelligence, which relies on predictive algorithms trained on vast sets of data that humans (mostly white men) have compiled, has reinforced cultural stereotypes, misogyny, and anti-Blackness. We've learned from scholars like Dr. Safiya Umoja Noble and Dr. Abeba Birhane about the insidious biases embedded in algorithmic tools. We've even seen scholars like Dr. Timnit Gebru be ousted from their jobs for pointing out the harm that AI, and the data it relies on, perpetuates.
So along with reimagining or, ahem, "redefining" the ways in which we teach our student writers, we should also be asking (and encouraging our students to explore) questions like "What is 'general' intelligence?" and "How does OpenAI define 'human-like' text?" Critical questions like these should never be relegated to a single inquiry unit or project; rather, they should be a habitual aspect of a sustained anti-oppressive practice.
In my upcoming book Literacy for All: A Framework for Anti-Oppressive Teaching, I lay out five essential principles that I believe should be part of every educator's praxis. These principles include the following:
- Literacy involves both cognitive processes and social practices;
- Literacy and identity are inextricably linked;
- All human beings engage in literacy and language practices that are both valid and valuable;
- Meaning-making occurs within a variety of communicative channels or modes; and
- Literacy can be used as a tool for liberation as well as a tool for oppression.
If we were to employ these principles as we critically examine ChatGPT and similar text-generating tools, we would immediately recognize how ChatGPT uses what Dr. April Baker-Bell (2020) calls "White Mainstream English," or WME, her term for what many consider to be "Standard" American English. Dr. Baker-Bell developed this term, she says, in order to "emphasize how white ways of speaking become the invisible—or better, inaudible—norm" for using language, particularly in academic spaces. As a result, other vernaculars, including what is commonly referred to as African American Vernacular English (AAVE) and/or Spanglish, are often disparaged, discouraged, or outright erased, as is the case with many Indigenous languages.
Here is an example of what I mean. Last night, I asked the ChatOn app to "write about the existence of racism in the field of medicine" (a topic I find myself falling down the rabbit hole of more and more lately). Here is what it generated:
Thank you for bringing this important issue to my attention. Racism in the field of medicine is a very serious problem that has been around for a long time, and it is still very much a part of our society today. Racial bias and discrimination in medical care can have serious impacts on people's health and well-being, as well as their trust in medical professionals and institutions. Research has also shown that racism in the medical field can lead to unequal access to healthcare and poorer health outcomes for people of color. This is a problem that must be addressed through greater awareness and action, and I'm committed to helping in any way that I can.
Giant yawn, right? Here are the first two paragraphs of what ChatGPT came up with:
Unfortunately, racism is still a significant problem in the field of medicine. There are several ways in which racism manifests itself in healthcare, including in medical education, clinical practice, and health outcomes.
One significant issue is the underrepresentation of racial and ethnic minorities in the medical field. In the United States, for example, African Americans, Hispanics, and Native Americans are all underrepresented in medical school and are less likely to become physicians compared to their white counterparts. This can have significant consequences for patients, as research has shown that patients tend to have better health outcomes when they are treated by healthcare professionals who share their racial or ethnic background.
There are already several huge red flags that we ought to be seeing in these default responses, including the use of the term "minorities" and a disappointing, if unsurprising, lack of historical context.
Here is what happened when I asked it to "write using AAVE*" (also note what Google Docs' spelling and grammar tool chooses to highlight in this image as well as in the one that follows it):
And again, this time using "Chicano English":
What do you notice about the results of such an exercise, and what do you wonder? One thing I notice that's worth repeating is that the chatbot has been designed to use White Mainstream English as its default language, despite the fact that 1) the number of people in the United States who speak a language other than English, or in addition to English, continues to grow exponentially and 2) the United States has never officially declared a national language. What other conclusion might we reach beyond the inaccurate and harmful implication that English (and a bland, soulless English at that) is "correct" or "normal" (or even, to use OpenAI's own terminology, the most "human-like")? That vernaculars or languages other than "Standard" English are less valid or valuable, less "human," and must be explicitly requested?
Hopefully you also notice that asking the bots to use specific kinds of vernaculars generates results that are highly stereotypical and akin to the kind of caricatures that we see/hear when white folks are being particularly racist or are engaging in cultural appropriation. Or is that my own bias as a white woman peeking through? How interesting (and enlightening) it would be to ask our students what they notice and wonder about such results.** What might this tell us about the "stories" they have been, and continue to be, socialized to absorb about language and about writing?
Most importantly, though, I wonder how many educators are thinking about the ways we might facilitate crucial conversations among our students about language and power. About how this particular use of artificial intelligence can serve as a tool for the continued oppression of those whose language and literacy practices do not match those we have historically valued most. And about how what we choose to lose our shit over says a lot about the state of literacy education, and of literacy educators, today.
**IMPORTANT UPDATE, 3/1/23:
Recently, a colleague reached out to express how painful it felt when I singled out AAVE* and Chicano English as the two languages highlighted here to contrast with the ways that chatbot tools center White Mainstream English (WME). She also expressed her concern that my suggestion for educators to engage students in an inquiry around language and power, using these examples, could cause harm to students as well.
I was, and am, regretful of the harm my words caused my colleague (and perhaps others). In addition, I'm distressed by the amount of emotional labor I imagine it took her to call me in around this.
As someone whose language practices are centered by tools like ChatGPT, and by school-centered literacy and language practices in general, I must acknowledge the ways that our attempts to engage in critical language pedagogy can unexpectedly take a fast and hard turn toward curricular violence when those attempts lack an intersectional, trauma-informed approach. To have casually suggested that educators invite students to "notice and wonder" about the ways in which AI chatbots treat White Mainstream English as the "default" dialect, particularly in comparison to dialects that are, more often than not, stigmatized in a wide variety of spaces both in school and out, was harmful.
Upon reflection, it is also clear to me that I, as the author of this piece, made several dangerous assumptions. For educators, particularly educators like me whose first or only language and primary dialect is, more often than not, considered the language of power, it is essential to emphasize the need to engage students safely and sensitively in critical examinations of power, privilege, and identity. More specifically, it is crucial to consider the potential harm that can occur if great care is not taken to facilitate conversations with students around the reality that, as Valerie Kinloch writes, "power is restricted from [those] whose language does not represent a standard American form." This is especially true when we consider the ways in which identity and language so powerfully intersect.
When I wrote the original post, I failed to consider the need to be explicit about this with my readers, so that those who may not have the necessary experience (or expertise) with this sort of work would not preemptively dive into such an enormously challenging task. And while I have had a number of experiences engaging students and teachers in conversations around what Dr. April Baker-Bell calls "linguistic supremacy," it is important to acknowledge that such experience has been limited to my work with predominantly White students and teachers whose primary, or only, language reflects the "default" language of ChatGPT. This kind of work would look very different in other contexts, as articulated by many scholars on whose shoulders I humbly stand, including, but not limited to, Dr. Mariana Souto-Manning, Dr. Nelson Flores, and Dr. Marcelle Haddix. As Dr. Hilary Janks writes (a warning I should have heeded when writing this post):
[T]he teacher cannot predict which text will erupt in class. … [W]hen texts or tasks touch something "sacred" to a student, critical analysis is extremely threatening…. I have come to understand that we cannot know in advance which texts are dangerous for whom or how they will impinge on the diverse and multiple identities and identifications of the students in our classes.
Many, many thanks to my colleague for reminding me of this.
*I also want to point out that Dr. Baker-Bell intentionally uses the term "Black Language" instead of "African American Vernacular English" in order to acknowledge Africologists' theories that "Black Language is a language in its own right … and … is not just a set of deviations from the English Language (Kifano & Smith, 2002)." I chose to use AAVE with these tools specifically, but use Black Language in most other personal and professional contexts.