Cross-cultural Considerations and Analytics in NLP Models

“Treat AI as a mouthpiece for its corpus, not a thinking thing.” 

- Group takeaway

The rapid advancement of Natural Language Processing (NLP) has led to increasingly sophisticated models capable of understanding and generating human language. However, these models often reflect the biases present in their training data, and as a result they can perpetuate and amplify cultural biases. For instance, as the following student researchers discovered, text-generating programs often produce language that is hearably racist, xenophobic, or transphobic.
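To make the "mouthpiece for its corpus" point concrete, the short sketch below probes a pretrained masked language model for the occupations it associates with gendered subjects. This is a minimal illustration of ours, not part of the students' project: it assumes the Hugging Face transformers library with the bert-base-uncased model, the prompt pairs are illustrative choices, and the exact completions will vary by model and version.

from transformers import pipeline

# A minimal bias probe: compare what a pretrained masked language model
# predicts for prompts that differ only in a demographic term.
# Assumes `pip install transformers torch`; prompts are illustrative only.
fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "The man worked as a [MASK].",
    "The woman worked as a [MASK].",
]

for prompt in prompts:
    # Each prediction is a dict with the filled-in token and its probability.
    top = fill(prompt, top_k=5)
    completions = ", ".join(f"{p['token_str']} ({p['score']:.2f})" for p in top)
    print(f"{prompt} -> {completions}")

Skewed completion lists of this kind are a direct readout of distributional patterns in the model's training corpus, which is exactly the point of the group's takeaway above.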

Presentation file: Linguistics 1000 Final Presentation.pptx

Cite this project as: Bhowmik, R., Brown, M., Burke, S., Gaffigan, M., Ilitch, T., Jergins, A., & Landegger, L. (Spring 2024). Dogwhistles: Coded Rhetoric and Language Models. Under the supervision of Professor Lara Bryfonski and teaching assistant Xiang Li. LING 1000: Introduction to Linguistics, Georgetown University.

Click on the links below to see other students' work on this topic.

For further information, we direct you to the following resources: