LING 575 — Ethics in NLP:
Including Society in Discourse & Design
- Ryan Georgi
- Office Hours: Wednesdays 12:00-2:00.
- Office: GUG 418-D
As systems involving NLP technology become increasingly prevalent in people’s lives, it is more important than ever to consider the societal impacts, both short- and long-term, of our research in academia and of the systems we implement in industry. As much of the technology developed for machine learning and NLP becomes further democratized, it is also no longer only trained linguists who are implementing systems that rely upon NLP.
The goal of this course is to better understand the ethical considerations in the field of NLP, both in our own conduct and in how we communicate these issues inside and outside the research community. Additionally, since morality and ethics arise from societies, we will look at how to treat science communication as a bidirectional process: listening to the concerns of various stakeholders and using these external perspectives to inform our work.
We will start with foundations in ethics, then move to the current and growing research literature on ethics in NLP and allied fields, before considering specific NLP tasks, data sets, and training methodologies through the lens of the ethical considerations identified. Course projects are expected to take the form of a term paper that analyzes a particular NLP task or data set in terms of the concepts developed through the quarter and looks forward to how ethical best practices could be developed for that task or data set. In particular, I hope to find answers to the following guiding questions over the course of the term:
- What ethical considerations arise in the design and deployment of NLP technologies?
- Which of these are specific to NLP (as opposed to AI or technology more generally)?
- What best practices can/should NLP developers deploy in light of the ethical concerns identified?
- What is the best way to communicate effectively with different stakeholders, and what are our responsibilities as listeners?
Note: To request academic accommodations due to a disability, please contact Disability Resources for Students (DRS), 448 Schmitz, 206-543-8924 (V/TTY). If you have a letter from DRS indicating that you have a disability which requires academic accommodations, please present the letter to the instructor so we can discuss the accommodations you might need in this class.
Grading

| Assignment | Weight |
|---|---|
| KWLA Paper (~2 pages) | 15% |
| Weekly Reading Check-in + Discussion Participation | 15% |
| SciComm Assignment (1–2 pages) | 20% |
| Term Project (6–8 pages) | 50% |
Schedule of Topics and Assignments (subject to change)
| Date | Topic | Reading & Preparation | Due | Week |
|---|---|---|---|---|
| | Why are we here? What do we hope to accomplish? | Hovy and Spruit 2016, plus at least one other paper/article listed under Overviews/Calls to Action (or just the one, if you pick something particularly long) | | 1 – Intro |
| 1/16 | | | KWLA papers: K & W | |
| 1/17 | What is Ethics? Philosophical foundations | Two items from Philosophical Foundations, at least one of which comes from an author whose perspective varies greatly from your own life experience; be prepared to discuss the reading questions | | 2 – Philosophical Underpinnings |
| 1/24 | Value Sensitive Design | Read any two other papers from Value Sensitive Design and prepare the reading questions; in addition, apply them to an NLP project you are interested in | | 3 – VSD |
| 1/31 | Accountability: Institutional and Professional Incentivization | Read 1–2 papers from the Human Subjects/Professional Code of Conduct section and Hal Daumé III’s proposed ethics guidelines for the ML and NLP communities | | 4 – HSD/Professional Ethics |
| 2/7 | Science Communication | Read two papers from the Science Communication section of the bibliography and consider the W5 Reading Questions | | 5 – SciComm |
| 2/14 | Data Collection and Human Subjects: Social Media & Crowdsourcing | Read two papers (or book chapters) from the Social Media & Human Subjects section of the bibliography and fill out the W6 Reading Questions | Term Paper Proposal Due 2/18 | 6 – Social Media & Crowdsourcing |
| 2/21 | How Bias in NLP Data Arises; Treating Language Data as Ground Truth | Read two papers from the Bias — How It Emerges; Language Data as Ground Truth section of the bibliography and fill out the W6 Reading Q’s | SciComm Assignment Due 2/25 | 7 – Bias |
| 2/28 | Addressing Bias: Algorithmic Fairness | Read two papers from the Addressing Bias – Algorithmic Fairness section | | 8 – Algorithmic Fairness |
| 3/4 | Abusive Language | Read two papers from the Abusive Language section | Term Paper Outline Due | 9 – Abusive Language |
| 3/11 | Wrapup | Read two further papers of interest from the other topics/best practices | Term Paper Due 3/20 | 10 – Wrapup |