UPDATED: Losing Our Sh*t Over ChatGPT
NOTE TO READERS: Please be sure to read through to the update below. Thank you! ~Shawna
OpenAI, the research company whose ChatGPT tool has caused writing teachers to collectively lose their shit, declares that its mission is to "ensure that artificial general intelligence benefits all of humanity." If one asks ChatGPT to explain itself, the chatbot generates the following text:
ChatGPT is an AI language model developed by OpenAI, which is capable of generating human-like text based on the input it is given. The model is trained on a large corpus of text data and can generate responses to questions, summarize long texts, write stories and much more. It is often used in conversational AI applications to simulate a human-like conversation with users.
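For the technically curious, here is a minimal sketch of how applications built on OpenAI's API request this kind of generated text. It assumes the openai Python package (v1 client style) and an OPENAI_API_KEY set in the environment; the model name is illustrative rather than an endorsement of any particular version.

```python
# A minimal sketch of requesting generated text from OpenAI's API,
# the same mechanism that powers ChatGPT-style applications.
# Assumes: `pip install openai` (v1 client) and OPENAI_API_KEY
# set in the environment. The model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[{"role": "user", "content": "Explain what ChatGPT is."}],
)

print(response.choices[0].message.content)
```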
As educators, our alarm bells ought to be going off, and not for the most commonly expressed reasons (e.g., many students will use ChatGPT or other, similar tools to cheat; the way we teach writing might need to be completely reimagined). Rather, the hackles on our necks should be rising at the phrases "general intelligence" and "human-like text/conversation." Because we've seen this before. We've seen the ways that artificial intelligence, which relies on predictive algorithms based on vast sets of data that humans (mostly white men) have compiled, has reinforced cultural stereotypes, misogyny, and anti-Blackness. We've learned from scholars like Dr. Safiya U. Noble and Dr. Abeba Birhane about the insidious biases embedded in algorithmic tools. We've even seen scholars like Timnit Gebru ousted from their jobs for pointing out the harm that AI, and the data it relies on, perpetuates.
So along with reimagining or, ahem, "redefining" the ways in which we teach our student writers, we should also be asking (and encouraging our students to explore) questions like, "What is 'general' intelligence?" and "How does OpenAI define 'human-like' text?" Critical questions like these should never be relegated to a single inquiry unit or project, but rather should be a habitual aspect of a sustained anti-oppressive practice.
In my upcoming book Literacy for All: A Framework for Anti-Oppressive Teaching, I lay out five essential principles that I believe should be part of every educator's praxis. These principles include the following:
- Literacy involves both cognitive processes and social practices;
- Literacy and identity are inextricably linked;
- All human beings engage in literacy and language practices that are both valid and valuable;
- Meaning-making occurs within a variety of communicative channels or modes; and
- Literacy can be used as a tool for liberation as well as a tool for oppression.
If we were to employ these principles as we critically examine ChatGPT and other, similar text-generating tools, we would immediately recognize how ChatGPT uses what Dr. April Baker-Bell (2020) calls "White Mainstream English," or WME, her term for what many consider to be "Standard" American English. Dr. Baker-Bell developed this term, she says, in order to "emphasize how white ways of speaking become the invisible (or better, inaudible) norm" for using language, particularly in academic spaces. As a result, other vernaculars, including what is commonly referred to as African American Vernacular English (AAVE) and/or Spanglish, are often disparaged, discouraged, or outright erased, as is the case with many Indigenous languages.
Here is an example of what I mean. Last night, I asked the ChatOn app to "write about the existence of racism in the field of medicine" (a rabbit hole I find myself falling down more and more lately). Here is what it generated:
Thank you for bringing this important issue to my attention. Racism in the field of medicine is a very serious problem that has been around for a long time, and it is still very much a part of our society today. Racial bias and discrimination in medical care can have serious impacts on people's health and well-being, as well as their trust in medical professionals and institutions. Research has also shown that racism in the medical field can lead to unequal access to healthcare and poorer health outcomes for people of color. This is a problem that must be addressed through greater awareness and action, and I'm committed to helping in any way that I can.
Giant yawn, right? Here are the first two paragraphs of what ChatGPT came up with:
Unfortunately, racism is still a significant problem in the field of medicine. There are several ways in which racism manifests itself in healthcare, including in medical education, clinical practice, and health outcomes.
One significant issue is the underrepresentation of racial and ethnic minorities in the medical field. In the United States, for example, African Americans, Hispanics, and Native Americans are all underrepresented in medical school and are less likely to become physicians compared to their white counterparts. This can have significant consequences for patients, as research has shown that patients tend to have better health outcomes when they are treated by healthcare professionals who share their racial or ethnic background.
There are already several huge red flags that we ought to be seeing in these default responses, including the use of the term "minorities" and a disappointing, if unsurprising, lack of historical context.
Here is what happened when I asked it to "write using AAVE*" (also note what Google Docs' spelling and grammar tool chooses to highlight in this image as well as the image that succeeds it):
And again, this time using "Chicano English":
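For readers who want to run this comparison themselves, here is a minimal sketch of the exercise as an API call, under the same assumptions as the earlier snippet (openai v1 client, OPENAI_API_KEY in the environment, illustrative model name):

```python
# A sketch of the comparison described above: the same topic prompted
# with no variety specified, then with explicitly requested varieties,
# so the default register can be inspected side by side.
# Assumes the openai v1 client and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

TOPIC = "write about the existence of racism in the field of medicine"
prompts = [
    TOPIC,                             # default: no variety specified
    f"{TOPIC} using AAVE",             # explicitly requested variety
    f"{TOPIC} using Chicano English",  # explicitly requested variety
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt} ---")
    print(response.choices[0].message.content)
```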
What do you notice about the results of such an exercise, and what do you wonder? One thing I notice that's worth repeating is that the chatbot has been designed to use White Mainstream English as its default language, despite the fact that 1) the number of people in the United States who speak a language other than English, or in addition to English, continues to grow and 2) the United States has never officially declared a national language. What other conclusion might we reach beyond the inaccurate and harmful implication that English (and a bland, soulless English at that) is "correct" or "normal" (or even, to use OpenAI's own terminology, the most "human-like")? That vernaculars or languages other than "Standard" English are less valid or valuable, less "human," and must be explicitly requested?
Hopefully you also notice that asking the bots to use specific kinds of vernaculars generates results that are highly stereotypical and akin to the kind of caricatures that we see/hear when white folks are being particularly racist or are engaging in cultural appropriation. Or is that my own bias as a white woman peeking through? How interesting (and enlightening) it would be to ask our students what they notice and wonder about such results.** What might this tell us about the "stories" they have been, and continue to be, socialized to absorb about language and about writing?
Most importantly, though, I wonder how many educators are thinking about the ways we might facilitate crucial conversations among our students about language and power. About how this particular use of artificial intelligence can serve as a tool for the continued oppression of those whose language and literacy practices do not match those we have historically valued most. And about how what we choose to lose our shit over says a lot about the state of literacy education, and of literacy educators, today.
**IMPORTANT UPDATE, 3/1/23:
Recently, a colleague reached out to express how painful it felt when I singled out both AAVE* and Chicano English as the two languages highlighted here in contrast with the ways that chatbot tools center White Mainstream English (WME). In addition, she expressed her concern that my suggestion for educators to engage students in an inquiry around language and power, using these examples, could potentially cause harm to students as well.
I was, and am, regretful of the harm my words caused my colleague (and perhaps others). In addition, I'm distressed by the amount of emotional labor I imagine it took her to call me in around this.
As someone whose language practices are centered by tools like ChatGPT, and by school-centered literacy and language practices in general, it is important to acknowledge the ways that our attempts to engage in critical language pedagogy can unexpectedly take a fast and hard turn toward curricular violence when these attempts lack an intersectional and trauma-informed approach. To have casually suggested that educators invite students to "notice and wonder" about the ways in which AI chatbots treat White Mainstream English as the "default" dialect, particularly in comparison to dialects that are, more often than not, stigmatized in a wide variety of spaces both in school and out of school, was harmful.
Upon reflection, it is also clear to me that I, as the author of this piece, made several dangerous assumptions. For educators, particularly educators like me whose first/only language and primary dialect is, more often than not, considered the language of power, it is essential to emphasize the need to engage students safely and sensitively in critical examinations of power, privilege, and identity. More specifically, it is crucial to consider the potential harm that can occur if great care is not taken to facilitate conversations with students around the reality that, as Valerie Kinloch writes, "power is restricted from [those] whose language does not represent a standard American form." This is especially true when we consider the ways in which identity and language so powerfully intersect.
When I wrote the original post, I failed to consider the need to be explicit about this with my readers so that those who may not have the necessary experience (or expertise) around this sort of work would not preemptively dive into such an enormously challenging task. And while I have had a number of experiences engaging students and teachers in conversations around what Dr. April Baker-Bell calls "linguistic supremacy," it is important to acknowledge that such experience has been limited to my work with predominantly White students and teachers whose primary, or only, language reflects the "default" language of ChatGPT. This kind of work would look very different in other kinds of contexts, as articulated by many scholars whose shoulders I humbly stand on, including, but not limited to, Dr. Mariana Souto-Manning, Dr. Nelson Flores, and Dr. Marcelle Haddix. As Dr. Hilary Janks writes (a caution I should have heeded when writing this post):
[T]he teacher cannot predict which text will erupt in class. …[W]hen texts or tasks touch something "sacred" to a student, critical analysis is extremely threatening…. I have come to understand that we cannot know in advance which texts are dangerous for whom or how they will impinge on the diverse and multiple identities and identifications of the students in our classes.
Many, many thanks to my colleague for reminding me of this.
*I also want to point out that Dr. Baker-Bell intentionally uses the term "Black Language" instead of "African American Vernacular English" in order to acknowledge Africologists' theories that "Black Language is a language in its own right …and…is not just a set of deviations from the English Language" (Kifano & Smith, 2002). I chose to use "AAVE" with these tools specifically, but use "Black Language" in most other personal and professional contexts.